Sample records for depth estimation network

  1. Spatially continuous interpolation of water stage and water depths using the Everglades depth estimation network (EDEN)

    USGS Publications Warehouse

    Pearlstine, Leonard; Higer, Aaron; Palaseanu, Monica; Fujisaki, Ikuko; Mazzotti, Frank

    2007-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring, ground-elevation modeling, and water-surface modeling that provides scientists and managers with current (2000-present), online water-stage and water-depth information for the entire freshwater portion of the Greater Everglades. Continuous daily spatial interpolations of the EDEN network stage data are presented on a 400-square-meter grid spacing. EDEN offers a consistent and documented dataset that can be used by scientists and managers to (1) guide large-scale field operations, (2) integrate hydrologic and ecological responses, and (3) support biological and ecological assessments that measure ecosystem responses to the implementation of the Comprehensive Everglades Restoration Plan (CERP). The target users are biologists and ecologists examining trophic-level responses to hydrodynamic changes in the Everglades.

  2. Crack orientation and depth estimation in a low-pressure turbine disc using a phased array ultrasonic transducer and an artificial neural network.

    PubMed

    Yang, Xiaoxia; Chen, Shili; Jin, Shijiu; Chang, Wenshuang

    2013-09-13

    Stress corrosion cracks (SCC) in low-pressure steam turbine discs are serious hidden dangers to production safety in power plants, and knowing the orientation and depth of the initial cracks is essential for evaluating the crack growth rate, propagation direction, and working life of the turbine disc. In this paper, a method based on a phased array ultrasonic transducer and an artificial neural network (ANN) is proposed to estimate both the depth and orientation of initial cracks in turbine discs. Echo signals from cracks with different depths and orientations were collected by a phased array ultrasonic transducer, and the feature vectors were extracted by wavelet packet, fractal technology, and peak amplitude methods. The radial basis function (RBF) neural network was investigated and used in this application. The final results demonstrated that the method presented was efficient in crack estimation tasks.

  3. Crack Orientation and Depth Estimation in a Low-Pressure Turbine Disc Using a Phased Array Ultrasonic Transducer and an Artificial Neural Network

    PubMed Central

    Yang, Xiaoxia; Chen, Shili; Jin, Shijiu; Chang, Wenshuang

    2013-01-01

    Stress corrosion cracks (SCC) in low-pressure steam turbine discs are serious hidden dangers to production safety in power plants, and knowing the orientation and depth of the initial cracks is essential for evaluating the crack growth rate, propagation direction, and working life of the turbine disc. In this paper, a method based on a phased array ultrasonic transducer and an artificial neural network (ANN) is proposed to estimate both the depth and orientation of initial cracks in turbine discs. Echo signals from cracks with different depths and orientations were collected by a phased array ultrasonic transducer, and the feature vectors were extracted by wavelet packet, fractal technology, and peak amplitude methods. The radial basis function (RBF) neural network was investigated and used in this application. The final results demonstrated that the method presented was efficient in crack estimation tasks. PMID:24064602

  4. Estimation of missing water-level data for the Everglades Depth Estimation Network (EDEN), 2013 update

    USGS Publications Warehouse

    Petkewich, Matthew D.; Conrads, Paul

    2013-01-01

    The Everglades Depth Estimation Network is an integrated network of real-time water-level gaging stations, a ground-elevation model, and a water-surface elevation model designed to provide scientists, engineers, and water-resource managers with water-level and water-depth information (1991-2013) for the entire freshwater portion of the Greater Everglades. The U.S. Geological Survey Greater Everglades Priority Ecosystems Science provides support for the Everglades Depth Estimation Network in order for the Network to provide quality-assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. In a previous study, water-level estimation equations were developed to fill in missing data to increase the accuracy of the daily water-surface elevation model. During this study, those equations were updated because of the addition and removal of water-level gaging stations, the consistent use of water-level data relative to the North American Vertical Datum of 1988, and the availability of recent data (March 1, 2006, to September 30, 2011). Up to three linear regression equations were developed for each station by using three different input stations to minimize the occurrences of missing data for an input station. Of the 667 water-level estimation equations developed to fill missing data at 223 stations, more than 72 percent of the equations have coefficients of determination greater than 0.90, and 97 percent have coefficients of determination greater than 0.70.
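
    The backup-equation scheme described above lends itself to a compact sketch: fit one regression per candidate input station, rank them by R², and fill each missing day with the best-ranked equation whose input station reported. The following is a minimal, hypothetical Python/numpy illustration with invented names, not the EDEN production code:

    ```python
    import numpy as np

    def fit_backup_equations(target, candidate_inputs):
        """Fit one simple linear regression per candidate input station;
        return (slope, intercept, r2, input_series) sorted by descending R^2."""
        equations = []
        for x in candidate_inputs:
            ok = ~(np.isnan(x) | np.isnan(target))   # days where both stations reported
            slope, intercept = np.polyfit(x[ok], target[ok], 1)
            r2 = np.corrcoef(x[ok], target[ok])[0, 1] ** 2
            equations.append((slope, intercept, r2, x))
        return sorted(equations, key=lambda e: -e[2])

    def fill_missing(target, equations):
        """Fill each missing day with the best equation whose input reported."""
        filled = target.copy()
        for day in np.flatnonzero(np.isnan(target)):
            for slope, intercept, _, x in equations:
                if not np.isnan(x[day]):
                    filled[day] = slope * x[day] + intercept
                    break
        return filled
    ```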

  5. Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.

    PubMed

    Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui

    2017-01-01

    Underwater inherent optical properties (IOPs) are the fundamental clues to many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal IOP measuring methods, but these methods are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method using only a single underwater image with the help of a deep artificial neural network. The power of artificial neural networks has been proved in image processing and computer vision fields with deep learning technology. However, image-based IOP estimation is a quite different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation looks at the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single RGB image, even a noisy one. The imaging depth information is considered as an aided input to help our model make better decisions.
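
    A minimal PyTorch sketch of the depth-aided idea: an image branch encodes the RGB input, the scalar imaging depth is concatenated to the encoded features, and a small head regresses several IOPs at once. Layer sizes, patch size, and the number of IOP outputs are assumptions for illustration, not values from the paper:

    ```python
    import torch
    import torch.nn as nn

    class DepthAidedIOPNet(nn.Module):
        def __init__(self, n_iops=3):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())          # -> (B, 32)
            self.head = nn.Sequential(
                nn.Linear(32 + 1, 64), nn.ReLU(), nn.Linear(64, n_iops))

        def forward(self, rgb, depth):
            feat = self.encoder(rgb)                            # (B, 32)
            return self.head(torch.cat([feat, depth], dim=1))   # depth: (B, 1)

    model = DepthAidedIOPNet()
    iops = model(torch.rand(4, 3, 64, 64), torch.rand(4, 1))    # (4, n_iops)
    ```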

  6. The Everglades Depth Estimation Network (EDEN) for Support of Ecological and Biological Assessments

    USGS Publications Warehouse

    Telis, Pamela A.

    2006-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring, ground-elevation modeling, and water-surface modeling that provides scientists and managers with current (1999-present), online water-depth information for the entire freshwater portion of the Greater Everglades. Presented on a 400-square-meter grid spacing, EDEN offers a consistent and documented dataset that can be used by scientists and managers to (1) guide large-scale field operations, (2) integrate hydrologic and ecological responses, and (3) support biological and ecological assessments that measure ecosystem responses to the implementation of the Comprehensive Everglades Restoration Plan.

  7. User’s manual for the Automated Data Assurance and Management application developed for quality control of Everglades Depth Estimation Network water-level data

    USGS Publications Warehouse

    Petkewich, Matthew D.; Daamen, Ruby C.; Roehl, Edwin A.; Conrads, Paul

    2016-09-29

    The generation of Everglades Depth Estimation Network (EDEN) daily water-level and water-depth maps is dependent on high-quality real-time data from over 240 water-level stations. To increase the accuracy of the daily water-surface maps, the Automated Data Assurance and Management (ADAM) tool was created by the U.S. Geological Survey as part of Greater Everglades Priority Ecosystems Science. The ADAM tool provides accurate quality-assurance review of the real-time data from the EDEN network and allows estimation or replacement of missing or erroneous data. This user’s manual describes how to install and operate the ADAM software. The file structure and operation of the ADAM software are explained using examples.

  8. Estimation of Missing Water-Level Data for the Everglades Depth Estimation Network (EDEN)

    USGS Publications Warehouse

    Conrads, Paul; Petkewich, Matthew D.

    2009-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level gaging stations, ground-elevation models, and water-surface elevation models designed to provide scientists, engineers, and water-resource managers with current (2000-2009) water-depth information for the entire freshwater portion of the greater Everglades. The U.S. Geological Survey Greater Everglades Priority Ecosystems Science supports EDEN and its goal of providing quality-assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. To increase the accuracy of the daily water-surface elevation model, water-level estimation equations were developed to fill missing data. To minimize cases in which no estimate can be made because data are missing for an input station, a minimum of three linear regression equations were developed for each station using different input stations. Of the 726 water-level estimation equations developed to fill missing data at 239 stations, more than 60 percent of the equations have coefficients of determination greater than 0.90, and 92 percent have coefficients of determination greater than 0.70.

  9. Using machine learning to produce near surface soil moisture estimates from deeper in situ records at U.S. Climate Reference Network (USCRN) locations: Analysis and applications to AMSR-E satellite validation

    NASA Astrophysics Data System (ADS)

    Coopersmith, Evan J.; Cosh, Michael H.; Bell, Jesse E.; Boyles, Ryan

    2016-12-01

    Surface soil moisture is a critical parameter for understanding the energy flux at the land-atmosphere boundary. Weather modeling, climate prediction, and remote sensing validation are some of the applications for surface soil moisture information. The most common in situ measurements for these purposes come from sensors installed at depths of approximately 5 cm. There are, however, sensor technologies and network designs that do not provide an estimate at this depth. If soil moisture estimates at deeper depths could be extrapolated to the near surface, in situ networks providing estimates at other depths would see their value enhanced. Soil moisture sensors from the U.S. Climate Reference Network (USCRN) were used to generate models of 5-cm soil moisture, with 10-cm soil moisture measurements and antecedent precipitation as inputs, via machine learning techniques. Validation was conducted with the available in situ 5-cm resources. It was shown that a 5-cm estimate extrapolated from a 10-cm sensor and antecedent local precipitation produced a root-mean-square error (RMSE) of 0.0215 m³/m³. Next, these machine-learning-generated 5-cm estimates were compared to AMSR-E estimates at these locations. These results were then compared with the performance of the actual in situ readings against the AMSR-E data. The machine learning estimates at 5 cm produced an RMSE of approximately 0.03 m³/m³ when an optimized gain and offset were applied. This is necessary considering the performance of AMSR-E in locations characterized by high vegetation water content, which is present across North Carolina. Lastly, this extrapolation technique is applied to the ECONet in North Carolina, which provides a 10-cm depth measurement as its shallowest soil moisture estimate. A raw RMSE of 0.028 m³/m³ was achieved, and with a linear gain and offset applied at each ECONet site, an RMSE of 0.013 m³/m³ was possible.
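
    The abstract does not name the learner, so in the sketch below a random forest stands in for "machine learning techniques"; the data are synthetic, and the final gain/offset step mirrors the linear calibration described above. All names are illustrative:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    sm10 = rng.uniform(0.05, 0.45, 1000)        # 10 cm soil moisture, m³/m³
    precip = rng.exponential(2.0, 1000)         # antecedent precipitation, mm
    sm5 = sm10 + 0.02 * np.log1p(precip) + rng.normal(0, 0.01, 1000)  # synthetic "truth"

    X, y = np.column_stack([sm10, precip]), sm5
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:800], y[:800])
    pred = model.predict(X[800:])
    rmse = np.sqrt(np.mean((pred - y[800:]) ** 2))

    # Linear gain/offset calibration, as applied in the AMSR-E comparison:
    gain, offset = np.polyfit(pred, y[800:], 1)
    rmse_cal = np.sqrt(np.mean((gain * pred + offset - y[800:]) ** 2))
    print(f"raw RMSE {rmse:.4f}, calibrated RMSE {rmse_cal:.4f} m3/m3")
    ```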

  10. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the fields of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth-image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate but semi-sparse depth images. With the advent of deep learning, there have also been attempts to estimate depth using only monocular images. In this paper we combine the best of both worlds, focusing on a combination of monocular images and low-cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure, while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised-learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information, and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset, we are able to learn a correspondence between local RGB information and local depth while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and on our own recordings using a low-cost camera and LiDAR setup.
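
    A toy PyTorch sketch of the input representation: the RGB image is concatenated with a sparse depth channel (zeros where no LiDAR return exists) and a convolutional network regresses a dense depth map. The real network is much deeper; shapes and the fill fraction are illustrative assumptions:

    ```python
    import torch
    import torch.nn as nn

    net = nn.Sequential(
        nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1))                 # per-pixel depth estimate

    rgb = torch.rand(1, 3, 128, 416)                    # KITTI-like crop
    sparse = torch.zeros(1, 1, 128, 416)                # sparse LiDAR channel
    idx = torch.randint(0, 128 * 416, (2000,))          # ~4% of pixels carry a return
    sparse.view(-1)[idx] = torch.rand(2000) * 80.0      # depths up to 80 m
    dense = net(torch.cat([rgb, sparse], dim=1))        # (1, 1, 128, 416)
    ```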

  11. Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models

    USGS Publications Warehouse

    Plant, Nathaniel G.; Holland, K. Todd

    2011-01-01

    A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided by a shore-based observer or by remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R² = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when given uncertainty in the inputs (e.g., depth and tuning parameters). Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave-height dependence consistent with the results of previous studies, but the uncertainty estimates of the tuning parameters also explained previously reported variations in the model parameters.

  12. Real-time estimation of lesion depth and control of radiofrequency ablation within ex vivo animal tissues using a neural network.

    PubMed

    Wang, Yearnchee Curtis; Chan, Terence Chee-Hung; Sahakian, Alan Varteres

    2018-01-04

    Radiofrequency ablation (RFA), a method of inducing thermal ablation (cell death), is often used to destroy tumours or potentially cancerous tissue. Current techniques for RFA estimation (electrical impedance tomography, Nakagami ultrasound, etc.) require long compute times (≥ 2 s) and measurement devices other than the RFA device. This study aims to determine whether a neural network (NN) can estimate ablation lesion depth for control of bipolar RFA using complex electrical impedance (since tissue electrical conductivity varies as a function of tissue temperature) in real time, using only the RFA therapy device's electrodes. Three-dimensional cubic models composed of beef liver, pork loin, or pork belly represented target tissue. Temperature and complex electrical impedance from 72 data-generation ablations in pork loin and belly were used for training the NN (403 s on a Xeon processor). NN inputs were inquiry depth, starting complex impedance, and current complex impedance. Training-validation-test splits were 70%-0%-30% and 80%-10%-10% (overfit test). Once the NN-estimated lesion depth for a margin reached the target lesion depth, RFA was stopped for that margin of tissue. The NN trained to 93% accuracy, and an NN-integrated control ablated tissue to within 1.0 mm of the target lesion depth on average. Full 15-mm depth maps were calculated in 0.2 s on a single-core ARMv7 processor. The results show that an NN can make lesion-depth estimates in real time using fewer in situ devices than current techniques. With the NN-based technique, physicians could deliver quicker and more precise ablation therapy.
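
    The control loop reduces to: read the current complex impedance, ask the NN for the lesion-depth estimate, and stop delivering RF once the estimate reaches the target. A hedged sketch with synthetic training data; read_impedance and apply_rf are hypothetical placeholders, and the feature layout is an assumption:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Features: [inquiry_depth, Re(Z0), Im(Z0), Re(Z), Im(Z)]; synthetic stand-ins
    # for the training ablations described in the abstract.
    X_train = np.random.rand(500, 5)
    y_train = np.random.rand(500) * 15.0                  # lesion depth, mm
    nn_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X_train, y_train)

    def control_ablation(target_mm, z0, read_impedance, apply_rf, inquiry=10.0):
        while True:
            z = read_impedance()                          # current complex impedance
            est = nn_model.predict([[inquiry, z0.real, z0.imag, z.real, z.imag]])[0]
            if est >= target_mm:
                return est                                # stop RF delivery
            apply_rf()                                    # one more heating step
    ```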

  13. Everglades Depth Estimation Network (EDEN)—A decade of serving hydrologic information to scientists and resource managers

    USGS Publications Warehouse

    Patino, Eduardo; Conrads, Paul; Swain, Eric; Beerens, James M.

    2017-10-30

    The Everglades Depth Estimation Network (EDEN) provides scientists and resource managers with regional maps of daily water levels and depths in the freshwater part of the Greater Everglades landscape. The EDEN domain includes all or parts of five Water Conservation Areas, Big Cypress National Preserve, Pennsuco Wetlands, and Everglades National Park. Daily water-level maps are interpolated from water-level data at monitoring gages, and depth is estimated by using a digital elevation model of the land surface. Online datasets provide time series of daily water levels at gages and rainfall and evapotranspiration data (https://sofia.usgs.gov/eden/). These datasets are used by scientists and resource managers to guide large-scale field operations, describe hydrologic changes, and support biological and ecological assessments that measure ecosystem response to the implementation of the Comprehensive Everglades Restoration Plan. EDEN water-level data have been used in a variety of biological and ecological studies including (1) the health of American alligators as a function of water depth, (2) the variability of post-fire landscape dynamics in relation to water depth, (3) the habitat quality for wading birds with dynamic habitat selection, and (4) an evaluation of the habitat of the Cape Sable seaside sparrow.

  14. Event-Based Stereo Depth Estimation Using Belief Propagation.

    PubMed

    Xie, Zhen; Chen, Shengyong; Orchard, Garrick

    2017-01-01

    Compared to standard frame-based cameras, biologically-inspired event-based sensors capture visual information with low latency and minimal redundancy. These event-based sensors are also far less prone to motion blur than traditional cameras, and still operate effectively in high dynamic range scenes. However, classical frame-based algorithms are typically not suitable for these event-based data, and new processing algorithms are required. This paper focuses on the problem of depth estimation from a stereo pair of event-based sensors. A fully event-based stereo depth estimation algorithm that relies on message passing is proposed. The algorithm not only considers the properties of a single event but also uses a Markov Random Field (MRF) to consider the constraints between nearby events, such as disparity uniqueness and depth continuity. The method is tested on five different scenes and compared to other state-of-the-art event-based stereo matching methods. The results show that the method detects more stereo matches than other methods, with each match having a higher accuracy. The method can operate in an event-driven manner where depths are reported for individual events as they are received, or the network can be queried at any time to generate a sparse depth frame which represents the current state of the network.

  15. Hydrologic Record Extension of Water-Level Data in the Everglades Depth Estimation Network (EDEN) Using Artificial Neural Network Models, 2000-2006

    USGS Publications Warehouse

    Conrads, Paul; Roehl, Edwin A.

    2007-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level gaging stations, ground-elevation models, and water-surface models designed to provide scientists, engineers, and water-resource managers with current (2000-present) water-depth information for the entire freshwater portion of the greater Everglades. The U.S. Geological Survey Greater Everglades Priority Ecosystem Science supports EDEN and its goal of providing quality-assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. To increase the accuracy of the water-surface models, 25 real-time water-level gaging stations were added to the network of 253 established water-level gaging stations. To incorporate the data from the newly added stations into the 7-year EDEN database in the greater Everglades, the short-term water-level records (generally less than 1 year) needed to be simulated back in time (hindcasted) to be concurrent with data from the established gaging stations in the database. A three-step modeling approach using artificial neural network models was used to estimate the water levels at the new stations. The artificial neural network models used static variables that represent the gaging station location and percent vegetation in addition to dynamic variables that represent water-level data from the established EDEN gaging stations. The final step of the modeling approach was to simulate the computed error of the initial estimate to increase the accuracy of the final water-level estimate. The three-step modeling approach for estimating water levels at the new EDEN gaging stations produced satisfactory results. The coefficients of determination (R²) for 21 of the 25 estimates were greater than 0.95, and all of the estimates (25 of 25) were greater than 0.82. The model estimates showed good agreement with the measured data. For some new EDEN stations with limited measured data, the record extension (hindcasts) included periods beyond the range of the data used to train the artificial neural network models. The comparison of the hindcasts with long-term water-level data proximal to the new EDEN gaging stations indicated that the water-level estimates were reasonable. The percent model error (root mean square error divided by the range of the measured data) was less than 6 percent, and for the majority of stations (20 of 25), the percent model error was less than 1 percent.
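
    The "percent model error" quoted above is simply the RMSE normalized by the range of the measured data; a minimal helper makes the definition concrete:

    ```python
    import numpy as np

    def percent_model_error(measured, simulated):
        measured, simulated = np.asarray(measured), np.asarray(simulated)
        rmse = np.sqrt(np.mean((simulated - measured) ** 2))
        return 100.0 * rmse / (measured.max() - measured.min())

    print(percent_model_error([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.9]))  # ~4.4
    ```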

  16. RAPID DETERMINATION OF FOCAL DEPTH USING A GLOBAL NETWORK OF SMALL-APERTURE SEISMIC ARRAYS

    NASA Astrophysics Data System (ADS)

    Seats, K.; Koper, K.; Benz, H.

    2009-12-01

    The National Earthquake Information Center (NEIC) of the United States Geological Survey (USGS) operates 24 hours a day, 365 days a year, with the mission of locating and characterizing seismic events around the world. A key component of this task is quickly determining the focal depth of each seismic event, which has a first-order effect on estimates of ground shaking used in the impact assessment applications of emergency response activities. Current methods of depth estimation used at the NEIC include arrival time inversion both with and without depth phases, a Bayesian depth constraint based on historical seismicity (1973-present), and moment tensor inversion primarily using P- and S-wave waveforms. In this study, we explore the possibility of automated modeling of waveforms from vertical-component arrays of the International Monitoring System (IMS) to improve rapid depth estimation at NEIC. Because these arrays are small-aperture, they are effective at increasing signal-to-noise ratios for frequencies of 1 Hz and higher. Currently, NEIC receives continuous real-time data from 23 IMS arrays. Following work done by previous researchers, we developed a technique that acts as an array of arrays. For a given epicentral location we calculate fourth-root beams for each IMS array in the distance range of 30 to 95 degrees at the expected slowness vector of the first arrival. Because the IMS arrays are small-aperture, these beams highlight energy that has slowness similar to the first arrival, such as depth phases. The beams are rectified by taking the envelope and then automatically aligned on the largest peak within 5 seconds of the expected arrival time. The station beams are then combined into network beams assuming a range of depths varying from 10 km to 700 km in increments of 1 km. The network beams are computed assuming both pP and sP propagation, and a measure of beam power is output as a function of depth for both propagation models, as well as their sum. We validated this approach using several hundred seismic events in the magnitude range 4.5-6.5 mb that occurred in 2008 and 2009. In most cases, clear spikes in the network beam power existed at depths around those estimated by the NEIC using traditional location procedures. However, in most cases there was also a bimodality in the network beam power because of the ambiguity between assuming pP or sP propagation for later-arriving energy. There were only a handful of cases in which a seismic event generated both sP and pP phases large enough to resolve the ambiguity. We are currently working to include PKP arrivals in the network beams and experimenting with various tuning parameters to improve the efficiency of the algorithm. This promising approach will allow NEIC to significantly and systematically improve the quality of hypocentral locations reported in the PDE and provide NEIC with additional valuable information on seismic source parameters needed in emergency response applications.
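
    A heavily simplified numpy sketch of the array-of-arrays idea: fourth-root stack each array, envelope and align the station beams, then, for each trial depth, sample them at the predicted pP-P lag and sum into a network beam. The single-layer delay model 2·h·cosθ/v is a crude stand-in for a proper travel-time calculation, and all parameter values are illustrative:

    ```python
    import numpy as np

    def fourth_root_beam(traces):
        """traces: (n_sensors, n_samples), time-aligned on the first arrival."""
        rooted = np.sign(traces) * np.abs(traces) ** 0.25
        beam = rooted.mean(axis=0)
        return np.sign(beam) * beam ** 4                # undo the fourth root

    def network_beam_power(array_envelopes, dt, depths_km, v_p=6.5, cos_theta=0.9):
        """Sum each array's envelope at the pP-P lag predicted for each depth."""
        power = np.zeros(len(depths_km))
        for i, h in enumerate(depths_km):
            lag = int(round(2.0 * h * cos_theta / v_p / dt))
            power[i] = sum(env[lag] for env in array_envelopes if lag < env.size)
        return power
    ```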

  17. Volume of Valley Networks on Mars and Its Hydrologic Implications

    NASA Astrophysics Data System (ADS)

    Luo, W.; Cang, X.; Howard, A. D.; Heo, J.

    2015-12-01

    Valley networks on Mars are river-like features that offer the best evidence for water activity in its geologic past. Previous studies have extracted valley network lines automatically from digital elevation model (DEM) data and manually from remotely sensed images. The volume of material removed by valley networks is an important parameter that could help us infer the amount of water needed to carve the valleys. A progressive black top hat (PBTH) transformation algorithm has been adapted from image processing to extract valley volume and successfully applied to a simulated landform and Ma'adim Valles, Mars. However, the volume of valley network excavation on Mars has not been estimated on a global scale. In this study, the PBTH method was applied to the whole of Mars to estimate this important parameter. The process was automated with Python in ArcGIS. Polygons delineating the valley-associated depressions were generated by using a multi-flow direction growth method, which started with selected high-point seeds on a depth grid (essentially an inverted valley) created by PBTH transformation and grew outward following multi-flow directions on the depth grid. Two published versions of valley network lines were integrated to automatically select depression polygons that represent the valleys. Some crater depressions that are connected with valleys, and thus were selected in the previous step, were removed by using information from a crater database. Because of the large distortion associated with global datasets in projected maps, the volume of each cell within a valley was calculated as the depth of the cell multiplied by the spherical area of the cell. The volumes of all the valley cells were then summed to produce the estimate of global valley excavation volume. Our initial estimate was ~2.4×10¹⁴ m³. Assuming a sediment density of 2900 kg/m³, a porosity of 0.35, and a sediment load of 1.5 kg/m³, the global volume of water needed to carve the valleys was estimated to be ~7.1×10¹⁷ m³. Because of the coarse resolution of MOLA data, this is a conservative lower bound. Compared with the hypothesized northern ocean volume of 2.3×10¹⁶ m³ estimated by Carr and Head (2003), our estimate of water volume supports an active hydrologic cycle for early Mars. Further hydrologic analysis will improve the estimate accuracy.
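
    The per-cell bookkeeping described above is straightforward to reproduce: each cell's excavated volume is its valley depth times its spherical surface area, which shrinks toward the poles. A small numpy sketch with an illustrative 1-degree grid and a stand-in depth grid:

    ```python
    import numpy as np

    R_MARS = 3389.5e3                                   # mean radius of Mars, m

    def cell_areas(lat_deg, dlat_deg, dlon_deg):
        """Area of lat/lon cells on a sphere: R² · dlon · (sin(top) - sin(bottom))."""
        lat1 = np.radians(lat_deg - dlat_deg / 2.0)
        lat2 = np.radians(lat_deg + dlat_deg / 2.0)
        return R_MARS ** 2 * np.radians(dlon_deg) * (np.sin(lat2) - np.sin(lat1))

    depth = np.random.rand(180, 360)                    # stand-in valley-depth grid, m
    lats = np.arange(-89.5, 90.0, 1.0)                  # cell-center latitudes
    areas = cell_areas(lats, 1.0, 1.0)[:, None]         # broadcast across longitudes
    volume = np.sum(depth * areas)                      # total excavated volume, m³
    ```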

  18. Geometrical features assessment of liver's tumor with application of artificial neural network evolved by imperialist competitive algorithm.

    PubMed

    Keshavarz, M; Mojra, A

    2015-05-01

    Geometrical features of a cancerous tumor embedded in biological soft tissue, including tumor size and depth, are a necessity in the follow-up procedure and in making suitable therapeutic decisions. In this paper, a new socio-politically motivated global search strategy called the imperialist competitive algorithm (ICA) is implemented to train a feed-forward neural network (FFNN) to estimate the tumor's geometrical characteristics (FFNNICA). First, a viscoelastic model of liver tissue is constructed by using a series of in vitro uniaxial and relaxation test data. Then, 163 samples of the tissue including a tumor with different depths and diameters are generated by using Python programming to link ABAQUS and MATLAB. Next, the samples are divided into 123 samples as the training dataset and 40 samples as the testing dataset. Training inputs of the network are mechanical parameters extracted from palpation of the tissue through a developing noninvasive technology called artificial tactile sensing (ATS). Last, to evaluate the FFNNICA performance, outputs of the network, including tumor depth and diameter, are compared with desired values for both the training and testing datasets. Deviations of the outputs from the desired values are calculated by a regression analysis. Statistical analysis is also performed by measuring the root mean square error (RMSE) and efficiency (E). RMSEs in diameter and depth estimation are 0.50 mm and 1.49, respectively, for the testing dataset. The results affirm that the proposed optimization algorithm for training the neural network can be useful for characterizing soft tissue tumors accurately by employing an artificial palpation approach. Copyright © 2015 John Wiley & Sons, Ltd.

  19. Estimating nocturnal opaque ice cloud optical depth from MODIS multispectral infrared radiances using a neural network method

    NASA Astrophysics Data System (ADS)

    Minnis, Patrick; Hong, Gang; Sun-Mack, Szedung; Smith, William L.; Chen, Yan; Miller, Steven D.

    2016-05-01

    Retrieval of ice cloud properties using IR measurements has a distinct advantage over the visible and near-IR techniques by providing consistent monitoring regardless of solar illumination conditions. Historically, the IR bands at 3.7, 6.7, 11.0, and 12.0 µm have been used to infer ice cloud parameters by various methods, but the reliable retrieval of ice cloud optical depth τ is limited to nonopaque cirrus with τ < 8. The Ice Cloud Optical Depth from Infrared using a Neural network (ICODIN) method is developed in this paper by training Moderate Resolution Imaging Spectroradiometer (MODIS) radiances at 3.7, 6.7, 11.0, and 12.0 µm against CloudSat-estimated τ during the nighttime using 2 months of matched global data from 2007. An independent data set comprising observations from the same 2 months of 2008 was used to validate the ICODIN. One 4-channel and three 3-channel versions of the ICODIN were tested. The training and validation results show that IR channels can be used to estimate ice cloud τ up to 150 with correlations above 78% and 69% for all clouds and only opaque ice clouds, respectively. However, τ for the deepest clouds is still underestimated in many instances. The corresponding RMS differences relative to CloudSat are ~100% and ~72%. If the opaque clouds are properly identified with the IR methods, the RMS differences in the retrieved optical depths are ~62%. The 3.7 µm channel appears to be most sensitive to optical depth changes but is constrained by poor precision at low temperatures. A method for estimating total optical depth is explored for future estimation of cloud water path. Factors affecting the uncertainties and potential improvements are discussed. With improved techniques for discriminating between opaque and semitransparent ice clouds, the method can ultimately improve cloud property monitoring over the entire diurnal cycle.
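
    In outline, the training step is a regression from four IR brightness temperatures to optical depth; below is a hedged scikit-learn sketch on synthetic values (the paper trains against CloudSat τ, and its network architecture is not reproduced here). Fitting in log space reflects the wide dynamic range of τ:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    bt = rng.uniform(190.0, 300.0, (5000, 4))     # K at 3.7, 6.7, 11.0, 12.0 µm
    tau = np.exp(rng.uniform(np.log(0.5), np.log(150.0), 5000))   # synthetic τ

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    model.fit(bt, np.log(tau))                    # regress log τ on the four channels
    tau_hat = np.exp(model.predict(bt[:10]))      # back to optical depth
    ```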

  20. Improved mapping of National Atmospheric Deposition Program wet-deposition in complex terrain using PRISM-gridded data sets

    USGS Publications Warehouse

    Latysh, Natalie E.; Wetherbee, Gregory Alan

    2012-01-01

    High-elevation regions in the United States lack detailed atmospheric wet-deposition data. The National Atmospheric Deposition Program/National Trends Network (NADP/NTN) measures and reports precipitation amounts and chemical constituent concentration and deposition data for the United States on annual isopleth maps using inverse distance weighted (IDW) interpolation methods. This interpolation for unsampled areas does not account for topographic influences. Therefore, NADP/NTN isopleth maps lack detail and potentially underestimate wet deposition in high-elevation regions. The NADP/NTN wet-deposition maps may be improved using precipitation grids generated by other networks. The Parameter-elevation Regressions on Independent Slopes Model (PRISM) produces digital grids of precipitation estimates from many precipitation-monitoring networks and incorporates influences of topographical and geographical features. Because NADP/NTN ion concentrations do not vary with elevation as much as precipitation depths, PRISM is used with unadjusted NADP/NTN data in this paper to calculate ion wet deposition in complex terrain to yield more accurate and detailed isopleth deposition maps in complex terrain. PRISM precipitation estimates generally exceed NADP/NTN precipitation estimates for coastal and mountainous regions in the western United States. NADP/NTN precipitation estimates generally exceed PRISM precipitation estimates for leeward mountainous regions in Washington, Oregon, and Nevada, where abrupt changes in precipitation depths induced by topography are not depicted by IDW interpolation. PRISM-based deposition estimates for nitrate can exceed NADP/NTN estimates by more than 100% for mountainous regions in the western United States.
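
    The mapping improvement rests on a simple identity: wet deposition is concentration times precipitation depth, so substituting PRISM precipitation for IDW-interpolated depths changes the deposition surface wherever the two precipitation estimates differ. A unit-checked helper (kg/ha from mg/L and mm; the unit choice is an assumption, not taken from the paper):

    ```python
    import numpy as np

    def wet_deposition_kg_per_ha(conc_mg_per_L, precip_mm):
        # mg/L × mm = mg/m²; 1 mg/m² = 0.01 kg/ha
        return np.asarray(conc_mg_per_L) * np.asarray(precip_mm) * 0.01

    # Example: 0.8 mg/L nitrate in 1200 mm of annual precipitation -> 9.6 kg/ha
    print(wet_deposition_kg_per_ha(0.8, 1200.0))
    ```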

  21. Improved mapping of National Atmospheric Deposition Program wet-deposition in complex terrain using PRISM-gridded data sets.

    PubMed

    Latysh, Natalie E; Wetherbee, Gregory Alan

    2012-01-01

    High-elevation regions in the United States lack detailed atmospheric wet-deposition data. The National Atmospheric Deposition Program/National Trends Network (NADP/NTN) measures and reports precipitation amounts and chemical constituent concentration and deposition data for the United States on annual isopleth maps using inverse distance weighted (IDW) interpolation methods. This interpolation for unsampled areas does not account for topographic influences. Therefore, NADP/NTN isopleth maps lack detail and potentially underestimate wet deposition in high-elevation regions. The NADP/NTN wet-deposition maps may be improved using precipitation grids generated by other networks. The Parameter-elevation Regressions on Independent Slopes Model (PRISM) produces digital grids of precipitation estimates from many precipitation-monitoring networks and incorporates influences of topographical and geographical features. Because NADP/NTN ion concentrations do not vary with elevation as much as precipitation depths, PRISM is used with unadjusted NADP/NTN data in this paper to calculate ion wet deposition in complex terrain to yield more accurate and detailed isopleth deposition maps in complex terrain. PRISM precipitation estimates generally exceed NADP/NTN precipitation estimates for coastal and mountainous regions in the western United States. NADP/NTN precipitation estimates generally exceed PRISM precipitation estimates for leeward mountainous regions in Washington, Oregon, and Nevada, where abrupt changes in precipitation depths induced by topography are not depicted by IDW interpolation. PRISM-based deposition estimates for nitrate can exceed NADP/NTN estimates by more than 100% for mountainous regions in the western United States.

  22. Design and development of a wireless sensor network to monitor snow depth in multiple catchments in the American River basin, California: hardware selection and sensor placement techniques

    NASA Astrophysics Data System (ADS)

    Kerkez, B.; Rice, R.; Glaser, S. D.; Bales, R. C.; Saksa, P. C.

    2010-12-01

    A 100-node wireless sensor network (WSN) was designed to monitor snow depth in two watersheds, spanning 3 km², in the American River basin in the central Sierra Nevada of California. The network will be deployed as a prototype project that will become a core element of a larger water information system for the Sierra Nevada. The site conditions range from mid-elevation forested areas to sub-alpine terrain with light forest cover. Extreme temperature and humidity fluctuations, along with heavy rain and snowfall events, create particularly challenging conditions for wireless communications. We show how statistics gathered from a previously deployed 60-node WSN, located in the Southern Sierra Critical Zone Observatory, were used to inform the design. We adapted robust network hardware, manufactured by Dust Networks for highly demanding industrial monitoring, and added linear amplifiers to the radios to improve transmission distances. We also designed a custom data-logging board to interface the WSN hardware with snow-depth sensors. Due to the large distance between sensing locations, and the complexity of terrain, we analyzed network statistics to select the locations of repeater nodes, to create a redundant and reliable mesh. This optimized network topology will maximize transmission distances, while ensuring power-efficient network operations throughout harsh winter conditions. At least 30 of the 100 nodes will actively sense snow depth, while the remainder will act as sensor-ready repeaters in the mesh. Data from a previously conducted snow survey were used to create a Gaussian Process model of snow depth; variance estimates produced by this model were used to suggest near-optimal locations for snow-depth sensors to measure the variability across a 1 km² grid. We compare the locations selected by the sensor placement algorithm to those made through expert opinion, and offer explanations for differences resulting from each approach.
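
    The placement step can be sketched with scikit-learn: fit a Gaussian process to survey depths, evaluate its predictive standard deviation on a grid, and place sensors where the variance is highest. Coordinates, kernel length scale, and the survey itself are synthetic stand-ins, and the greedy argsort below ignores the interaction between placements that a true near-optimal algorithm would account for:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(2)
    xy = rng.uniform(0, 1000, (60, 2))            # survey points in a 1 km² area, m
    depth = 1.5 + 0.002 * xy[:, 1] + rng.normal(0, 0.1, 60)   # synthetic snow depth, m

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=200.0)).fit(xy, depth)
    gx, gy = np.meshgrid(np.linspace(0, 1000, 25), np.linspace(0, 1000, 25))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    _, sigma = gp.predict(grid, return_std=True)
    sensor_sites = grid[np.argsort(sigma)[::-1][:30]]   # the 30 sensing nodes
    ```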

  23. Initial Everglades Depth Estimation Network (EDEN) Digital Elevation Model Research and Development

    USGS Publications Warehouse

    Jones, John W.; Price, Susan D.

    2007-01-01

    The Everglades Depth Estimation Network (EDEN) offers a consistent and documented dataset that can be used to guide large-scale field operations, to integrate hydrologic and ecological responses, and to support biological and ecological assessments that measure ecosystem responses to the Comprehensive Everglades Restoration Plan (Telis, 2006). To produce historic and near-real-time maps of water depths, the EDEN requires a system-wide digital elevation model (DEM) of the ground surface. Accurate Everglades wetland ground-surface elevation data were non-existent before the U.S. Geological Survey (USGS) undertook the collection of highly accurate surface elevations at the regional scale. These form the foundation for EDEN DEM development. This development process is iterative as additional high-accuracy elevation data (HAED) are collected, water-surfacing algorithms improve, and additional ground-based ancillary data become available. Models are tested using withheld HAED and independently measured water-depth data, and by using DEM data in EDEN adaptive management applications. Here, the collection of HAED is briefly described before the approach to DEM development and the current EDEN DEM are detailed. Finally, future research directions for continued model development, testing, and refinement are provided.

  24. Everglades Depth Estimation Network (EDEN) Applications: Tools to View, Extract, Plot, and Manipulate EDEN Data

    USGS Publications Warehouse

    Telis, Pamela A.; Henkel, Heather

    2009-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated system of real-time water-level monitoring, ground-elevation data, and water-surface elevation modeling to provide scientists and water managers with current on-line water-depth information for the entire freshwater part of the greater Everglades. To assist users in applying the EDEN data to their particular needs, a series of five EDEN tools, or applications (EDENapps), were developed. Using EDEN's tools, scientists can view the EDEN datasets of daily water-level and ground elevations, compute and view daily water depth and hydroperiod surfaces, extract data for user-specified locations, plot transects of water level, and animate water-level transects over time. Also, users can retrieve data from the EDEN datasets for analysis and display in other analysis software programs. As scientists and managers attempt to restore the natural volume, timing, and distribution of sheetflow in the wetlands, such information is invaluable. Information analyzed and presented with these tools is used to advise policy makers, planners, and decision makers of the potential effects of water management and restoration scenarios on the natural resources of the Everglades.

  25. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    NASA Astrophysics Data System (ADS)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

    Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captured color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier, and this paper proposes an improved feature matching between successive video frames with the use of neural network methodology in order to reduce the computation time of feature matching. The features extracted are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned distance based on the Kinect technology that can be used by the robot in order to determine the path of navigation, along with obstacle detection applications.

  26. Estimating regional-scale permeability-depth relations in a fractured-rock terrain using groundwater-flow model calibration

    NASA Astrophysics Data System (ADS)

    Sanford, Ward E.

    2017-03-01

    The trend of decreasing permeability with depth was estimated in the fractured-rock terrain of the upper Potomac River basin in the eastern USA using model calibration on 200 water-level observations in wells and 12 base-flow observations in subwatersheds. Results indicate that permeability at the 1-10 km scale (for groundwater flowpaths) decreases by several orders of magnitude within the top 100 m of land surface. This depth range represents the transition from the weathered, fractured regolith into unweathered bedrock. This rate of decline is substantially greater than has been observed by previous investigators who have plotted in situ wellbore measurements versus depth. The difference is that regional water levels give information on kilometer-scale connectivity of the regolith and adjacent fracture networks, whereas in situ measurements give information on near-hole fractures and fracture networks. The approach taken was to calibrate model layer-to-layer ratios of hydraulic conductivity (LLKs) for each major rock type. Most rock types gave optimal LLK values of 40-60, where each layer was twice as thick as the one overlying it. Previous estimates of permeability with depth from deeper data showed less of a decline at <300 m than the regional modeling results. There was less certainty in the modeling results deeper than 200 m and for certain rock types where fewer water-level observations were available. The results have implications for improved understanding of watershed-scale groundwater flow and transport, such as for the timing of the migration of pollutants from the water table to streams.
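
    One minimal reading of the LLK parameterization, under the stated doubling of layer thickness: conductivity drops by a factor of LLK from each layer to the next. The numbers below are illustrative, not calibrated values from the study:

    ```python
    import numpy as np

    def layer_conductivities(k_surface, llk=50.0, top_thickness=5.0, n_layers=8):
        """Depth to layer tops, thicknesses, and K per layer under the LLK model."""
        thickness = top_thickness * 2.0 ** np.arange(n_layers)   # doubles downward
        k = k_surface / llk ** np.arange(n_layers)               # drops by LLK per layer
        depth_to_top = np.concatenate([[0.0], np.cumsum(thickness)[:-1]])
        return depth_to_top, thickness, k

    tops, dz, k = layer_conductivities(k_surface=1.0)
    # With these illustrative values, K falls by several orders of magnitude
    # within roughly the top 100 m, consistent with the trend described above.
    ```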

  27. Estimating regional-scale permeability–depth relations in a fractured-rock terrain using groundwater-flow model calibration

    USGS Publications Warehouse

    Sanford, Ward E.

    2017-01-01

    The trend of decreasing permeability with depth was estimated in the fractured-rock terrain of the upper Potomac River basin in the eastern USA using model calibration on 200 water-level observations in wells and 12 base-flow observations in subwatersheds. Results indicate that permeability at the 1–10 km scale (for groundwater flowpaths) decreases by several orders of magnitude within the top 100 m of land surface. This depth range represents the transition from the weathered, fractured regolith into unweathered bedrock. This rate of decline is substantially greater than has been observed by previous investigators who have plotted in situ wellbore measurements versus depth. The difference is that regional water levels give information on kilometer-scale connectivity of the regolith and adjacent fracture networks, whereas in situ measurements give information on near-hole fractures and fracture networks. The approach taken was to calibrate model layer-to-layer ratios of hydraulic conductivity (LLKs) for each major rock type. Most rock types gave optimal LLK values of 40–60, where each layer was twice as thick as the one overlying it. Previous estimates of permeability with depth from deeper data showed less of a decline at <300 m than the regional modeling results. There was less certainty in the modeling results deeper than 200 m and for certain rock types where fewer water-level observations were available. The results have implications for improved understanding of watershed-scale groundwater flow and transport, such as for the timing of the migration of pollutants from the water table to streams.

  28. Phase structure within a fracture network beneath a surface pond: Field experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glass Jr., Robert J.; Nicholl, M. J.

    2000-05-09

    The authors performed a simple experiment to elucidate phase structure within a pervasively fractured welded tuff. Dyed water was infiltrated from a surface pond over a 36-minute period while a geophysical array monitored the wetted region within vertical planes directly beneath. They then excavated the rock mass to a depth of approximately 5 m and mapped the fracture network and extent of dye staining in a series of horizontal pavements. Near the pond the network was fully stained. Below, the phase structure immediately expanded and, with depth, became fragmented and complicated, exhibiting evidence of preferential flow, fingers, irregular wetting patterns, and varied behavior at fracture intersections. Limited transient geophysical data suggested that strong vertical pathways form first, followed by increased horizontal expansion and connection within the network. These rapid pathways are also the first to drain. Estimates also suggest that the excavation captured from approximately 10% to 1% or less of the volume of rock interrogated by the infiltration slug, and thus the penetration depth could have been quite large.

  29. An efficient fully unsupervised video object segmentation scheme using an adaptive neural-network classifier architecture.

    PubMed

    Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S

    2003-01-01

    In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight-updating algorithm, providing minimal degradation of previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).

  30. RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information

    PubMed Central

    Di, Kaichang; Zhao, Qiang; Wan, Wenhui; Wang, Yexin; Gao, Yunjun

    2016-01-01

    In the study of SLAM problem using an RGB-D camera, depth information and visual information as two types of primary measurement data are rarely tightly coupled during refinement of camera pose estimation. In this paper, a new method of RGB-D camera SLAM is proposed based on extended bundle adjustment with integrated 2D and 3D information on the basis of a new projection model. First, the geometric relationship between the image plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method has a notably better performance than the traditional method, and the experimental results demonstrate the effectiveness of the proposed method in improving localization accuracy. PMID:27529256

  31. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo.

    PubMed

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-03-02

    Multi-spectral photometric stereo can recover pixel-wise surface normals from a single RGB image. The difficulty is that the intensity in each channel entangles illumination, albedo, and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using a deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data are expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict the initial normals of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.

  32. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo

    PubMed Central

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-01-01

    Multi-spectral photometric stereo can recover pixel-wise surface normals from a single RGB image. The difficulty is that the intensity in each channel entangles illumination, albedo, and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using a deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data are expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict the initial normals of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods. PMID:29498703

  33. Uncovering robust patterns of microRNA co-expression across cancers using Bayesian Relevance Networks

    PubMed Central

    Ramachandran, Parameswaran; Sánchez-Taltavull, Daniel; Perkins, Theodore J

    2017-01-01

    Co-expression networks have long been used as a tool for investigating the molecular circuitry governing biological systems. However, most algorithms for constructing co-expression networks were developed in the microarray era, before high-throughput sequencing—with its unique statistical properties—became the norm for expression measurement. Here we develop Bayesian Relevance Networks, an algorithm that uses Bayesian reasoning about expression levels to account for the differing levels of uncertainty in expression measurements between highly- and lowly-expressed entities, and between samples with different sequencing depths. It combines data from groups of samples (e.g., replicates) to estimate group expression levels and confidence ranges. It then computes uncertainty-moderated estimates of cross-group correlations between entities, and uses permutation testing to assess their statistical significance. Using large scale miRNA data from The Cancer Genome Atlas, we show that our Bayesian update of the classical Relevance Networks algorithm provides improved reproducibility in co-expression estimates and lower false discovery rates in the resulting co-expression networks. Software is available at www.perkinslab.ca. PMID:28817636

  34. Uncovering robust patterns of microRNA co-expression across cancers using Bayesian Relevance Networks.

    PubMed

    Ramachandran, Parameswaran; Sánchez-Taltavull, Daniel; Perkins, Theodore J

    2017-01-01

    Co-expression networks have long been used as a tool for investigating the molecular circuitry governing biological systems. However, most algorithms for constructing co-expression networks were developed in the microarray era, before high-throughput sequencing, with its unique statistical properties, became the norm for expression measurement. Here we develop Bayesian Relevance Networks, an algorithm that uses Bayesian reasoning about expression levels to account for the differing levels of uncertainty in expression measurements between highly- and lowly-expressed entities, and between samples with different sequencing depths. It combines data from groups of samples (e.g., replicates) to estimate group expression levels and confidence ranges. It then computes uncertainty-moderated estimates of cross-group correlations between entities, and uses permutation testing to assess their statistical significance. Using large scale miRNA data from The Cancer Genome Atlas, we show that our Bayesian update of the classical Relevance Networks algorithm provides improved reproducibility in co-expression estimates and lower false discovery rates in the resulting co-expression networks. Software is available at www.perkinslab.ca.
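
    A sketch of the permutation-testing step only; the Bayesian moderation of the expression estimates, which is the paper's main contribution, is omitted, and the data are synthetic. Shuffling each entity's profile independently breaks cross-entity correlation while preserving marginals:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    expr = rng.poisson(20, (50, 30)).astype(float)   # 50 entities × 30 sample groups

    def correlation_edges(expr, n_perm=200, q=0.99):
        obs = np.corrcoef(expr)                      # observed entity-entity correlations
        null_q = []
        for _ in range(n_perm):
            shuf = np.array([rng.permutation(row) for row in expr])
            c = np.corrcoef(shuf)
            null_q.append(np.quantile(np.abs(c[np.triu_indices_from(c, 1)]), q))
        thresh = np.median(null_q)                   # permutation-derived cutoff
        return np.abs(obs) >= thresh, thresh

    edges, thresh = correlation_edges(expr)
    ```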

  15. Fault model of the M7.1 intraslab earthquake on April 7 following the 2011 Great Tohoku earthquake (M9.0) estimated by the dense GPS network data

    NASA Astrophysics Data System (ADS)

    Miura, S.; Ohta, Y.; Ohzono, M.; Kita, S.; Iinuma, T.; Demachi, T.; Tachibana, K.; Nakayama, T.; Hirahara, S.; Suzuki, S.; Sato, T.; Uchida, N.; Hasegawa, A.; Umino, N.

    2011-12-01

    We propose a source fault model of the large intraslab earthquake with M7.1 deduced from a dense GPS network. The coseismic displacements obtained by GPS data analysis clearly show the spatial pattern specific to intraslab earthquakes, not only in the horizontal components but also in the vertical ones. A rectangular fault with uniform slip was estimated by a non-linear inversion approach. The results indicate that the simple rectangular fault model can explain the overall features of the observations. The amount of moment released is equivalent to Mw 7.17. The hypocenter depth of the main shock estimated by the Japan Meteorological Agency is slightly deeper than the neutral plane between the down-dip compression (DC) and down-dip extension (DE) stress zones of the double-planed seismic zone. This suggests that the depth of the neutral plane was deepened by the huge slip of the 2011 M9.0 Tohoku earthquake and that the rupture of the thrust M7.1 earthquake initiated at that depth, although more investigations are required to confirm this idea. The estimated fault plane has an angle of ~60 degrees from the surface of the subducting Pacific plate. This is consistent with the hypothesis that intraslab earthquakes represent reactivation of preexisting hydrated weak zones formed during the bending of oceanic plates around outer-rise regions.
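
    As a worked check on a magnitude of this order, the standard moment-magnitude relation Mw = (2/3)(log10 M0 - 9.1) can be evaluated with M0 = rigidity x fault area x slip (in N·m); the rigidity, fault dimensions, and slip below are assumed for illustration and are not the published model parameters.

      import math

      mu = 4.0e10                 # rigidity (Pa), assumed
      length, width = 40e3, 20e3  # fault dimensions (m), assumed
      slip = 1.5                  # uniform slip (m), assumed

      M0 = mu * length * width * slip            # scalar seismic moment (N*m)
      Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)  # standard Mw definition
      print(f"M0 = {M0:.2e} N*m -> Mw = {Mw:.2f}")  # ~7.0 for these toy values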

  16. Improving Focal Depth Estimates: Studies of Depth Phase Detection at Regional Distances

    NASA Astrophysics Data System (ADS)

    Stroujkova, A.; Reiter, D. T.; Shumway, R. H.

    2006-12-01

    The accurate estimation of the depth of small, regionally recorded events continues to be an important and difficult explosion monitoring research problem. Depth phases (free surface reflections) are the primary tool that seismologists use to constrain the depth of a seismic event. When depth phases from an event are detected, an accurate source depth is easily found by using the delay times of the depth phases relative to the P wave and a velocity profile near the source. Cepstral techniques, including cepstral F-statistics, represent a class of methods designed for depth-phase detection and identification; however, they offer only a moderate level of success at epicentral distances less than 15°. This is due to complexities in the Pn coda, which can lead to numerous false detections in addition to the true phase detection. Therefore, cepstral methods cannot be used independently to reliably identify depth phases. Other evidence, such as apparent velocities, amplitudes and frequency content, must be used to confirm whether the phase is truly a depth phase. In this study we used a variety of array methods to estimate apparent phase velocities and arrival azimuths, including beam-forming, semblance analysis, MUltiple SIgnal Classification (MUSIC) (e.g., Schmidt, 1979), and cross-correlation (e.g., Cansi, 1995; Tibuleac and Herrin, 1997). To facilitate the processing and comparison of results, we developed a MATLAB-based processing tool, which allows application of all of these techniques (i.e., augmented cepstral processing) in a single environment. The main objective of this research was to combine the results of three focal-depth estimation techniques and their associated standard errors into a statistically valid unified depth estimate. The three techniques include: 1. Direct focal depth estimate from the depth-phase arrival times picked via augmented cepstral processing. 2. Hypocenter location from direct and surface-reflected arrivals observed on sparse networks of regional stations using a Grid-search, Multiple-Event Location method (GMEL; Rodi and Toksöz, 2000; 2001). 3. Surface-wave dispersion inversion for event depth and focal mechanism (Herrmann and Ammon, 2002). To validate our approach and provide quality control for our solutions, we applied the techniques to moderate-sized events (mb between 4.5 and 6.0) with known focal mechanisms. We illustrate the techniques using events observed at regional distances from the KSAR (Wonju, South Korea) teleseismic array and other nearby broadband three-component stations. Our results indicate that the techniques can produce excellent agreement between the various depth estimates. In addition, combining the techniques into a "unified" estimate greatly reduced location errors and improved robustness of the solution, even if results from the individual methods yielded large standard errors.
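
    The basic cepstral idea behind depth-phase detection can be sketched as follows: an echo delayed by tau superimposes ripple on the log amplitude spectrum, which maps to a peak at quefrency tau in the cepstrum. The synthetic wavelet, delay, and search window are assumptions for illustration, not the augmented cepstral processing of the study.

      import numpy as np

      fs = 100.0                          # samples per second
      t = np.arange(0, 20.0, 1.0 / fs)
      wavelet = np.exp(-5 * t) * np.sin(2 * np.pi * 2.0 * t)  # toy P pulse
      tau = 3.0                           # pP-P delay (s), proxy for source depth
      trace = wavelet + 0.6 * np.roll(wavelet, int(tau * fs))  # P plus depth phase

      spectrum = np.fft.rfft(trace)
      cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))
      quefrency = np.arange(cepstrum.size) / fs
      lo, hi = int(1.0 * fs), int(5.0 * fs)    # search 1-5 s, assumed window
      peak = quefrency[lo + np.argmax(cepstrum[lo:hi])]
      print(f"estimated delay ~ {peak:.2f} s")  # ~3.0 s; depth follows from a velocity model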

  17. The emotional cost of distance: Geographic social network dispersion and post-traumatic stress among survivors of Hurricane Katrina.

    PubMed

    Morris, Katherine Ann; Deterding, Nicole M

    2016-09-01

    Social networks offer important emotional and instrumental support following natural disasters. However, displacement may geographically disperse network members, making it difficult to provide and receive support necessary for psychological recovery after trauma. We examine the association between distance to network members and post-traumatic stress using survey data, and identify potential mechanisms underlying this association using in-depth qualitative interviews. We use longitudinal, mixed-methods data from the Resilience in Survivors of Katrina (RISK) Project to capture the long-term effects of Hurricane Katrina on low-income mothers from New Orleans. Baseline surveys occurred approximately one year before the storm and follow-up surveys and in-depth interviews were conducted five years later. We use a sequential explanatory analytic design. With logistic regression, we estimate the association of geographic network dispersion with the likelihood of post-traumatic stress. With linear regressions, we estimate the association of network dispersion with the three post-traumatic stress sub-scales. Using maximal variation sampling, we use qualitative interview data to elaborate identified statistical associations. We find network dispersion is positively associated with the likelihood of post-traumatic stress, controlling for individual-level socio-demographic characteristics, exposure to hurricane-related trauma, perceived social support, and New Orleans residency. We identify two social-psychological mechanisms present in qualitative data: respondents with distant network members report a lack of deep belonging and a lack of mattering as they are unable to fulfill obligations to important distant ties. Results indicate the importance of physical proximity to emotionally-intimate network ties for long-term psychological recovery.

  18. Estimating Snow Water Equivalent over the American River in the Sierra Nevada Basin Using Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Welch, S. C.; Kerkez, B.; Glaser, S. D.; Bales, R. C.; Rice, R.

    2011-12-01

    We have designed a basin-scale (>2000 km2) instrument cluster, made up of 20 local-scale (1-km footprint) wireless sensor networks (WSNs), to measure patterns of snow depth and snow water equivalent (SWE) across the main snowmelt-producing area within the American River basin. Each of the 20 WSNs has on the order of 25 wireless nodes, with over 10 nodes actively sensing snow depth, and thus snow accumulation and melt. When combined with existing snow density measurements and full-basin satellite snowcover data, these measurements are designed to provide dense ground-truth snow properties for research and real-time SWE for water management. The design of this large-scale network is based on rigorous testing in previous, smaller-scale studies, permitting the development of methods to significantly and efficiently scale up network operations. Recent advances in WSN technology have resulted in a modularized strategy that permits rapid future network deployment. To select network and sensor locations, various sensor placement approaches were compared, including random placement, placement of WSNs in locations that have captured the historical basin mean, as well as a placement algorithm leveraging the covariance structure of the SWE distribution. We show that the optimal network locations do not form a uniform grid, but rather follow strategic patterns based on physiographic terrain parameters. Uncertainty estimates are also provided to assess the confidence in the placement approach. To ensure near-optimal coverage of the full basin, we validated each placement approach with a multi-year record of SWE derived from reconstruction of historical satellite measurements.

  19. A New Operational Snow Retrieval Algorithm Applied to Historical AMSR-E Brightness Temperatures

    NASA Technical Reports Server (NTRS)

    Tedesco, Marco; Jeyaratnam, Jeyavinoth

    2016-01-01

    Snow is a key element of the water and energy cycles, and knowledge of the spatio-temporal distribution of snow depth and snow water equivalent (SWE) is fundamental for hydrological and climatological applications. SWE and snow depth estimates can be obtained from spaceborne microwave brightness temperatures at global scale and high temporal resolution (daily). In this regard, the data recorded by the Advanced Microwave Scanning Radiometer-Earth Orbiting System (EOS) (AMSR-E) onboard the National Aeronautics and Space Administration's (NASA) AQUA spacecraft have been used to generate operational estimates of SWE and snow depth, complementing estimates generated with other microwave sensors flying on other platforms. In this study, we report the results concerning the development and assessment of a new operational algorithm applied to historical AMSR-E data. The proposed algorithm makes use of climatological data, electromagnetic modeling and artificial neural networks for estimating snow depth, as well as a spatio-temporal dynamic density scheme to convert snow depth to SWE. The outputs of the new algorithm are compared with those of the current AMSR-E operational algorithm as well as with in-situ measurements and other operational snow products, specifically the Canadian Meteorological Center (CMC) and GlobSnow datasets. Our results show that the proposed AMSR-E algorithm generally performs better than the operational one and addresses some major issues identified in the spatial distribution of snow depth fields associated with the evolution of effective grain size.

  20. Seismic Readings from the Deepest Borehole in the New Madrid Seismic Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woolery, Edward W; Wang, Zhenming; Sturchio, Neil C

    2006-03-01

    Since the 1980s, the research associated with the UK network has been primarily strong-motion seismology of engineering interest. Currently the University of Kentucky operates a strong-motion network of nine stations in the New Madrid Seismic Zone. A unique feature of the network is the inclusion of vertical strong-motion arrays, each with one or two downhole accelerometers. The deepest borehole array is 260 m below the surface at station VASA in Fulton County, Kentucky. A preliminary surface seismic refraction survey was conducted at the site before drilling the hole at VSAS (Woolery and Wang, 2002). The depth to the Paleozoic bedrock at the site was estimated to be approximately 595 m, and the depth to the first very stiff layer (i.e., the Porters Creek Clay) was found to be about 260 m. These depths and the stratigraphic interpretation correlated well with a proprietary seismic reflection line and the Ken-Ten Oil Exploration No. 1 Sanger hole (Schwalb, 1969), as well as our experience in the area (Street et al., 1995; Woolery et al., 1999).
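
    For a two-layer refraction survey of the kind described, the depth to the refractor follows from the crossover distance x_c as z = (x_c / 2) * sqrt((v2 - v1) / (v2 + v1)); the velocities and crossover distance below are illustrative assumptions, not the survey's values.

      import math

      v1, v2 = 1800.0, 2400.0   # layer velocities (m/s), assumed
      x_cross = 1200.0          # crossover distance (m), assumed
      z = (x_cross / 2.0) * math.sqrt((v2 - v1) / (v2 + v1))
      print(f"depth to refractor ~ {z:.0f} m")  # ~227 m for these toy values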

  1. Optimizing placements of ground-based snow sensors for areal snow cover estimation using a machine-learning algorithm and melt-season snow-LiDAR data

    NASA Astrophysics Data System (ADS)

    Oroza, C.; Zheng, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.

    2016-12-01

    We present a structured, analytical approach to optimize ground-sensor placements based on time-series remotely sensed (LiDAR) data and machine-learning algorithms. We focused on catchments within the Merced and Tuolumne river basins, covered by the JPL Airborne Snow Observatory LiDAR program. First, we used a Gaussian mixture model to identify representative sensor locations in the space of independent variables for each catchment. Multiple independent variables that govern the distribution of snow depth were used, including elevation, slope, and aspect. Second, we used a Gaussian process to estimate the areal distribution of snow depth from the initial set of measurements. This is a covariance-based model that also estimates the areal distribution of model uncertainty based on the independent variable weights and autocorrelation. The uncertainty raster was used to strategically add sensors to minimize model uncertainty. We assessed the temporal accuracy of the method using LiDAR-derived snow-depth rasters collected in water-year 2014. In each area, optimal sensor placements were determined using the first available snow raster for the year. The accuracy in the remaining LiDAR surveys was compared to 100 configurations of sensors selected at random. We found the accuracy of the model from the proposed placements to be higher and more consistent in each remaining survey than the average random configuration. We found that a relatively small number of sensors can be used to accurately reproduce the spatial patterns of snow depth across the basins, when placed using spatial snow data. Our approach also simplifies sensor placement. At present, field surveys are required to identify representative locations for such networks, a process that is labor intensive and provides limited guarantees on the networks' representation of catchment independent variables.
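
    A compact sketch of the two-stage placement described above, using scikit-learn: a Gaussian mixture proposes representative sites in the space of physiographic variables, and a Gaussian process then flags where predictive uncertainty remains highest. The synthetic features, snow-depth field, and component count are assumptions for illustration.

      import numpy as np
      from sklearn.mixture import GaussianMixture
      from sklearn.gaussian_process import GaussianProcessRegressor

      rng = np.random.default_rng(1)
      X = rng.uniform(size=(500, 3))   # elevation, slope, aspect (scaled), toy grid cells
      snow_depth = X @ np.array([2.0, -0.5, 0.3]) + 0.1 * rng.normal(size=500)

      gmm = GaussianMixture(n_components=10, random_state=0).fit(X)
      # nearest grid cell to each mixture mean = initial sensor location
      initial = [int(np.argmin(np.linalg.norm(X - m, axis=1))) for m in gmm.means_]

      gp = GaussianProcessRegressor().fit(X[initial], snow_depth[initial])
      mean, std = gp.predict(X, return_std=True)
      next_sensor = int(np.argmax(std))   # add a sensor where model uncertainty peaks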

  2. "You See Yourself Like in a Mirror": The Effects of Internet-Mediated Personal Networks on Body Image and Eating Disorders.

    PubMed

    Pallotti, Francesca; Tubaro, Paola; Casilli, Antonio A; Valente, Thomas W

    2018-09-01

    Body image issues associated with eating disorders involve attitudinal and perceptual components: individuals' dissatisfaction with body shape or weight, and inability to assess body size correctly. While prior research has mainly explored social pressures produced by the media, fashion, and advertising industries, this paper focuses on the effects of personal networks on body image, particularly in the context of internet communities. We use data collected from a sample of participants in websites on eating disorders, and map their personal networks. We specify and estimate a model for the joint distribution of attitudinal and perceptual components of body image as a function of network-related characteristics and attributional factors. Supported by information gathered through in-depth interviews, the empirical estimates provide evidence that personal networks can be conducive to positive body image development, and that the influence of personal networks varies significantly by body size. We situate our discussion in current debates about the effects of computer-mediated and face-to-face communication networks on eating disorders and related behaviors.

  3. Sensor Fusion to Estimate the Depth and Width of the Weld Bead in Real Time in GMAW Processes

    PubMed Central

    Sampaio, Renato Coral; Vargas, José A. R.

    2018-01-01

    The arc welding process is widely used in industry but its automatic control is limited by the difficulty in measuring the weld bead geometry and closing the control loop on the arc, which has adverse environmental conditions. To address this problem, this work proposes a system to capture the welding variables and send stimuli to the Gas Metal Arc Welding (GMAW) conventional process with a constant voltage power source, which allows weld bead geometry estimation with an open-loop control. Dynamic models of depth and width estimators of the weld bead are implemented based on the fusion of thermographic data, welding current and welding voltage in a multilayer perceptron neural network. The estimators were trained and validated off-line with data from a novel algorithm developed to extract the features of the infrared image, from a laser profilometer implemented to measure the bead dimensions, and from an image-processing algorithm that measures depth on a longitudinal cut of the weld bead. These estimators are optimized for embedded devices and real-time processing and were implemented on a Field-Programmable Gate Array (FPGA) device. Experiments to collect data, train and validate the estimators are presented and discussed. The results show that the proposed method is useful in industrial and research environments. PMID:29570698

  4. Sensor Fusion to Estimate the Depth and Width of the Weld Bead in Real Time in GMAW Processes.

    PubMed

    Bestard, Guillermo Alvarez; Sampaio, Renato Coral; Vargas, José A R; Alfaro, Sadek C Absi

    2018-03-23

    The arc welding process is widely used in industry but its automatic control is limited by the difficulty in measuring the weld bead geometry and closing the control loop on the arc, which has adverse environmental conditions. To address this problem, this work proposes a system to capture the welding variables and send stimuli to the Gas Metal Arc Welding (GMAW) conventional process with a constant voltage power source, which allows weld bead geometry estimation with an open-loop control. Dynamic models of depth and width estimators of the weld bead are implemented based on the fusion of thermographic data, welding current and welding voltage in a multilayer perceptron neural network. The estimators were trained and validated off-line with data from a novel algorithm developed to extract the features of the infrared image, from a laser profilometer implemented to measure the bead dimensions, and from an image-processing algorithm that measures depth on a longitudinal cut of the weld bead. These estimators are optimized for embedded devices and real-time processing and were implemented on a Field-Programmable Gate Array (FPGA) device. Experiments to collect data, train and validate the estimators are presented and discussed. The results show that the proposed method is useful in industrial and research environments.
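
    A hedged sketch of such a fusion estimator: a multilayer perceptron mapping thermographic features plus welding current and voltage to bead depth and width, here with scikit-learn and synthetic data; the feature set, network size, and targets are placeholders rather than the published FPGA implementation.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(2)
      # columns: thermal feature 1, thermal feature 2, current (A), voltage (V)
      X = rng.normal(size=(200, 4))
      y = np.column_stack([X @ np.array([0.5, 0.2, 0.8, 0.1]),   # depth (toy target)
                           X @ np.array([0.1, 0.6, 0.3, 0.4])])  # width (toy target)

      model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
      model.fit(X, y)
      depth_width = model.predict(X[:1])  # inference step on a new fused sample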

  5. LFNet: A Novel Bidirectional Recurrent Convolutional Neural Network for Light-Field Image Super-Resolution.

    PubMed

    Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu

    2018-09-01

    The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same network structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for human visual perception. Furthermore, the proposed method can enhance the performance of light-field applications such as depth estimation.

  6. Late Noachian fluvial erosion on Mars: Cumulative water volumes required to carve the valley networks and grain size of bed-sediment

    NASA Astrophysics Data System (ADS)

    Rosenberg, Eliott N.; Head, James W., III

    2015-11-01

    Our goal is to quantify the cumulative water volume that was required to carve the Late Noachian valley networks on Mars. We employ an improved methodology in which fluid/sediment flux ratios are based on empirical data, not assumed. We use a large quantity of data from terrestrial rivers to assess the variability of actual fluid/sediment flux ratios. We find the flow depth by using an empirical relationship to estimate the fluid flux from the estimated channel width, and then using estimated grain sizes (theoretical sediment grain size predictions and comparison with observations by the Curiosity rover) to find the flow depth to which the resulting fluid flux corresponds. Assuming that the valley networks contained alluvial bed rivers, we find, from their current slopes and widths, that the onset of suspended transport occurs near the sand-gravel boundary. Thus, any bed sediment must have been fine gravel or coarser, whereas fine sediment would be carried downstream. Subsequent to the cessation of fluvial activity, aeolian processes have partially redistributed fine-grain particles in the valleys, often forming dunes. It seems likely that the dominant bed sediment size was near the threshold for suspension, and assuming that this was the case could make our final results underestimates, which is the same tendency that our other assumptions have. Making this assumption, we find a global equivalent layer (GEL) of 3-100 m of water to be the most probable cumulative volume that passed through the valley networks. This value is similar to the ∼34 m water GEL currently on the surface and in the near-surface in the form of ice. Note that the amount of water required to carve the valley networks could represent the same water recycled through a surface valley network hydrological system many times in separate or continuous precipitation/runoff/collection/evaporation/precipitation cycles.

  7. Using inferential sensors for quality control of Everglades Depth Estimation Network water-level data

    USGS Publications Warehouse

    Petkewich, Matthew D.; Daamen, Ruby C.; Roehl, Edwin A.; Conrads, Paul

    2016-09-29

    The Everglades Depth Estimation Network (EDEN), with over 240 real-time gaging stations, provides hydrologic data for freshwater and tidal areas of the Everglades. These data are used to generate daily water-level and water-depth maps of the Everglades that are used to assess biotic responses to hydrologic change resulting from the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. The generation of EDEN daily water-level and water-depth maps is dependent on high quality real-time data from water-level stations. Real-time data are automatically checked for outliers by assigning minimum and maximum thresholds for each station. Small errors in the real-time data, such as gradual drift of malfunctioning pressure transducers, are more difficult to immediately identify with visual inspection of time-series plots and may only be identified during on-site inspections of the stations. Correcting these small errors in the data often is time consuming and water-level data may not be finalized for several months. To provide daily water-level and water-depth maps on a near real-time basis, EDEN needed an automated process to identify errors in water-level data and to provide estimates for missing or erroneous water-level data. The Automated Data Assurance and Management (ADAM) software uses inferential sensor technology often used in industrial applications. Rather than installing a redundant sensor to measure a process, such as an additional water-level station, inferential sensors, or virtual sensors, were developed for each station that make accurate estimates of the process measured by the hard sensor (water-level gaging station). The inferential sensors in the ADAM software are empirical models that use inputs from one or more proximal stations. The advantage of ADAM is that it provides a redundant signal to the sensor in the field without the environmental threats associated with field conditions at stations (flood or hurricane, for example). In the event that a station does malfunction, ADAM provides an accurate estimate for the period of missing data. The ADAM software also is used in the quality assurance and quality control of the data. The virtual signals are compared to the real-time data, and if the difference between the two signals exceeds a certain tolerance, corrective action to the data and (or) the gaging station can be taken. The ADAM software is automated so that, each morning, the real-time EDEN data are compared to the inferential sensor signals and digital reports highlighting potential erroneous real-time data are generated for appropriate support personnel. The development and application of inferential sensors is easily transferable to other real-time hydrologic monitoring networks.
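
    A minimal sketch of an inferential (virtual) sensor of the kind described: an empirical model estimates one station's water level from proximal stations and flags departures beyond a tolerance. The linear model form, synthetic stages, and 5-cm tolerance are assumptions, not ADAM's actual configuration.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(6)
      neighbors = rng.normal(2.0, 0.3, size=(1000, 3))  # stages at 3 proximal gages (m)
      target = neighbors @ np.array([0.5, 0.3, 0.2]) + 0.01 * rng.normal(size=1000)

      virtual = LinearRegression().fit(neighbors[:800], target[:800])  # train on history
      estimate = virtual.predict(neighbors[800:])                      # redundant signal
      flag = np.abs(estimate - target[800:]) > 0.05  # tolerance (m), assumed; review flagged days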

  8. PSF estimation for defocus blurred image based on quantum back-propagation neural network

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Zhang, Yan; Shao, Xiao-guang; Liu, Ying-hui; Ni, Guoqiang

    2010-11-01

    Images obtained by an aberration-free system exhibit defocus blur due to motion in depth and/or zooming. The precondition for restoring the degraded image is to estimate the point spread function (PSF) of the imaging system as precisely as possible. However, it is difficult to identify the analytic model of the PSF precisely due to the complexity of the degradation process. Inspired by the similarity between the quantum process and the imaging process in the probability and statistics fields, a reformed multilayer quantum neural network (QNN) is proposed to estimate the PSF of the defocus-blurred image. Different from the conventional artificial neural network (ANN), an improved quantum neuron model is used in the hidden layer, which introduces a 2-bit controlled-NOT quantum gate to control the output and adopts 2 texture and edge features as the input vectors. The supervised back-propagation learning rule is adopted to train the network on training sets from historical images. Test results show that this method achieves high precision and strong generalization ability.

  9. Spatial Representativeness Error in the Ground-Level Observation Networks for Black Carbon Radiation Absorption

    NASA Astrophysics Data System (ADS)

    Wang, Rong; Andrews, Elisabeth; Balkanski, Yves; Boucher, Olivier; Myhre, Gunnar; Samset, Bjørn Hallvard; Schulz, Michael; Schuster, Gregory L.; Valari, Myrto; Tao, Shu

    2018-02-01

    There is high uncertainty in the direct radiative forcing of black carbon (BC), an aerosol that strongly absorbs solar radiation. The observation-constrained estimate, which is several times larger than the bottom-up estimate, is influenced by the spatial representativeness error due to the mesoscale inhomogeneity of the aerosol fields and the relatively low resolution of global chemistry-transport models. Here we evaluated the spatial representativeness error for two widely used observational networks (AErosol RObotic NETwork and Global Atmosphere Watch) by downscaling the geospatial grid in a global model of BC aerosol absorption optical depth to 0.1° × 0.1°. Comparing the models at a spatial resolution of 2° × 2° with BC aerosol absorption at AErosol RObotic NETwork sites (which are commonly located near emission hot spots) tends to cause a global spatial representativeness error of 30%, as a positive bias for the current top-down estimate of global BC direct radiative forcing. By contrast, the global spatial representativeness error will be 7% for the Global Atmosphere Watch network, because the sites are located in such a way that there are almost an equal number of sites with positive or negative representativeness error.

  10. Node Depth Adjustment Based Target Tracking in UWSNs Using Improved Harmony Search.

    PubMed

    Liu, Meiqin; Zhang, Duo; Zhang, Senlin; Zhang, Qunfei

    2017-12-04

    Underwater wireless sensor networks (UWSNs) can provide a promising solution to underwater target tracking. Due to limited computation and bandwidth resources, only a small subset of nodes is selected to track the target at each interval. How to improve tracking accuracy with a small number of nodes is a key problem. In recent years, a node depth adjustment system has been developed and applied to issues of network deployment and routing protocol. As far as we know, all existing tracking schemes keep underwater nodes static or moving with the water flow, and node depth adjustment has not yet been utilized for underwater target tracking. This paper studies a node depth adjustment method for target tracking in UWSNs. Firstly, since the Fisher Information Matrix (FIM) can quantify the estimation accuracy, its relation to node depth is derived as a metric. Secondly, we formulate node depth adjustment as an optimization problem that determines the moving depth of each activated node under the constraint of its moving range; the value of the FIM is used as the objective function, which is minimized over the moving distances of the nodes. Thirdly, to efficiently solve the optimization problem, an improved Harmony Search (HS) algorithm is proposed, in which the generating probability is modified to improve searching speed and accuracy. Finally, simulation results are presented to verify the performance of our scheme.

  11. Node Depth Adjustment Based Target Tracking in UWSNs Using Improved Harmony Search

    PubMed Central

    Zhang, Senlin; Zhang, Qunfei

    2017-01-01

    Underwater wireless sensor networks (UWSNs) can provide a promising solution to underwater target tracking. Due to limited computation and bandwidth resources, only a small subset of nodes is selected to track the target at each interval. How to improve tracking accuracy with a small number of nodes is a key problem. In recent years, a node depth adjustment system has been developed and applied to issues of network deployment and routing protocol. As far as we know, all existing tracking schemes keep underwater nodes static or moving with the water flow, and node depth adjustment has not yet been utilized for underwater target tracking. This paper studies a node depth adjustment method for target tracking in UWSNs. Firstly, since the Fisher Information Matrix (FIM) can quantify the estimation accuracy, its relation to node depth is derived as a metric. Secondly, we formulate node depth adjustment as an optimization problem that determines the moving depth of each activated node under the constraint of its moving range; the value of the FIM is used as the objective function, which is minimized over the moving distances of the nodes. Thirdly, to efficiently solve the optimization problem, an improved Harmony Search (HS) algorithm is proposed, in which the generating probability is modified to improve searching speed and accuracy. Finally, simulation results are presented to verify the performance of our scheme. PMID:29207541
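
    An illustrative harmony-search loop for choosing a node's new depth within its moving range; the scalar objective below is a stand-in surrogate to be minimized, not the paper's FIM-derived criterion, and all parameters are assumed values.

      import numpy as np

      rng = np.random.default_rng(3)

      def objective(depth):          # assumed surrogate accuracy metric to minimize
          return (depth - 42.0) ** 2

      lo, hi = 20.0, 60.0            # allowed moving range (m), assumed
      hm = rng.uniform(lo, hi, size=10)      # harmony memory of candidate depths
      hmcr, par, bw = 0.9, 0.3, 1.0  # memory-consideration, pitch-adjust, bandwidth

      for _ in range(500):
          if rng.random() < hmcr:
              new = rng.choice(hm)                   # reuse a remembered depth
              if rng.random() < par:
                  new = np.clip(new + bw * rng.uniform(-1, 1), lo, hi)  # pitch adjust
          else:
              new = rng.uniform(lo, hi)              # random exploration
          worst = np.argmax([objective(d) for d in hm])
          if objective(new) < objective(hm[worst]):  # replace worst harmony
              hm[worst] = new

      best = hm[np.argmin([objective(d) for d in hm])]  # converges near 42.0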

  12. Domain-averaged snow depth over complex terrain from flat field measurements

    NASA Astrophysics Data System (ADS)

    Helbig, Nora; van Herwijnen, Alec

    2017-04-01

    Snow depth is an important parameter for a variety of coarse-scale models and applications, such as hydrological forecasting. Since high-resolution snow cover models are computationally expensive, simplified snow models are often used. Ground-measured snow depths at single stations provide an opportunity for snow depth data assimilation to improve coarse-scale model forecasts. Snow depth is, however, commonly recorded at so-called flat fields, often in large measurement networks. While these ground measurement networks provide a wealth of information, various studies have questioned the representativeness of such flat field snow depth measurements for the surrounding topography. We developed two parameterizations to compute domain-averaged snow depth for coarse model grid cells over complex topography using easy-to-derive topographic parameters. To derive the two parameterizations we performed a scale-dependent analysis for domain sizes ranging from 50 m to 3 km using highly resolved snow depth maps at the peak of winter from two distinct climatic regions in Switzerland and in the Spanish Pyrenees. The first, simpler parameterization uses a commonly applied linear lapse rate. For the second parameterization, we first removed the obvious elevation gradient in mean snow depth, which revealed an additional correlation with the subgrid sky view factor. We evaluated the domain-averaged snow depth derived with both parameterizations from nearby flat field measurements against the domain-averaged highly resolved snow depth. This revealed an overall improved performance for the parameterization combining a power-law elevation trend scaled with the subgrid parameterized sky view factor. We therefore suggest the parameterization could be used to assimilate flat field snow depths into coarse-scale snow model frameworks in order to improve coarse-scale snow depth estimates over complex topography.

  13. Neural-Network Approach to Hyperspectral Data Analysis for Volcanic Ash Clouds Monitoring

    NASA Astrophysics Data System (ADS)

    Piscini, Alessandro; Ventress, Lucy; Carboni, Elisa; Grainger, Roy Gordon; Del Frate, Fabio

    2015-11-01

    In this study three artificial neural networks (ANN) were implemented in order to emulate a retrieval model and to estimate the ash aerosol optical depth (AOD), particle effective radius (reff) and cloud height of a volcanic eruption using hyperspectral remotely sensed data. The ANNs were trained using a selection of Infrared Atmospheric Sounding Interferometer (IASI) channels in the thermal infrared (TIR) as inputs, and the corresponding ash parameters obtained from the Oxford retrievals as target outputs. The retrieval is demonstrated for the eruption of the Eyjafjallajökull volcano (Iceland) that occurred in 2010. The validation yielded root mean square error (RMSE) values between neural network outputs and targets lower than the standard deviation (STD) of the corresponding target outputs, demonstrating the feasibility of estimating volcanic ash parameters using an ANN approach and its value for near-real-time monitoring activities, owing to its fast application. A high accuracy was achieved for reff and cloud height estimation, while a decrease in accuracy was obtained when applying the NN approach to AOD estimation, in particular for values not well characterized during the NN training phase.

  14. Relations that affect the probability and prediction of nitrate concentration in private wells in the glacial aquifer system in the United States

    USGS Publications Warehouse

    Warner, Kelly L.; Arnold, Terri L.

    2010-01-01

    Nitrate in private wells in the glacial aquifer system is a concern for an estimated 17 million people using private wells because of the proximity of many private wells to nitrogen sources. Yet, less than 5 percent of private wells sampled in this study contained nitrate in concentrations that exceeded the U.S. Environmental Protection Agency (USEPA) Maximum Contaminant Level (MCL) of 10 mg/L (milligrams per liter) as N (nitrogen). However, this small group with nitrate concentrations above the USEPA MCL includes some of the highest nitrate concentrations detected in groundwater from private wells (77 mg/L). Median nitrate concentration measured in groundwater from private wells in the glacial aquifer system (0.11 mg/L as N) is lower than that in water from other unconsolidated aquifers and is not strongly related to surface sources of nitrate. Background concentration of nitrate is less than 1 mg/L as N. Although overall nitrate concentration in private wells was low relative to the MCL, concentrations were highly variable over short distances and at various depths below land surface. Groundwater from wells in the glacial aquifer system at all depths was a mixture of old and young water. Oxidation and reduction potential changes with depth and groundwater age were important influences on nitrate concentrations in private wells. A series of 10 logistic regression models was developed to estimate the probability of nitrate concentration above various thresholds. The threshold concentration (1 to 10 mg/L) affected the number of variables in the model. Fewer explanatory variables are needed to predict nitrate at higher threshold concentrations. The variables that were identified as significant predictors for nitrate concentration above 4 mg/L as N included well characteristics such as open-interval diameter, open-interval length, and depth to top of open interval. Environmental variables in the models were mean percent silt in soil, soil type, and mean depth to saturated soil. The 10-year mean (1992-2001) application rate of nitrogen fertilizer applied to farms was included as the potential source variable. A linear regression model also was developed to predict mean nitrate concentrations in well networks. The model is based on network averages because nitrate concentrations are highly variable over short distances. Using values for each of the predictor variables averaged by network (network mean value) from the logistic regression models, the linear regression model developed in this study predicted the mean nitrate concentration in well networks with a 95 percent confidence in predictions.
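
    A sketch of a threshold-exceedance model of this general form, using scikit-learn logistic regression; the predictors, coefficients, and synthetic data are illustrative assumptions rather than the study's fitted models.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(4)
      # columns: depth to top of open interval, percent silt, fertilizer rate (scaled)
      X = rng.normal(size=(300, 3))
      logit = -1.0 - 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.9 * X[:, 2]   # toy generating model
      exceeds = rng.random(300) < 1 / (1 + np.exp(-logit))           # nitrate > threshold?

      model = LogisticRegression().fit(X, exceeds)
      p_exceed = model.predict_proba(X[:1])[:, 1]  # probability nitrate exceeds the threshold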

  15. Improved UUV Positioning Using Acoustic Communications and a Potential for Real-Time Networking and Collaboration

    DTIC Science & Technology

    2017-06-01

    [Only table-of-contents fragments are available for this record: a chapter on acoustic wave travel-time estimation, and tables listing the average horizontal distance from the UUV to the reference points and the average UUV depth when a travel-time measurement is taken.]

  16. Body Weight Estimation for Dose-Finding and Health Monitoring of Lying, Standing and Walking Patients Based on RGB-D Data

    PubMed Central

    May, Stefan

    2018-01-01

    This paper describes the estimation of the body weight of a person in front of an RGB-D camera. A survey of different methods for body weight estimation based on depth sensors is given. First, an estimation of people standing in front of a camera is presented. Second, an approach based on a stream of depth images is used to obtain the body weight of a person walking towards a sensor. The algorithm first extracts features from a point cloud and forwards them to an artificial neural network (ANN) to obtain an estimation of body weight. Besides the algorithm for the estimation, this paper further presents an open-access dataset based on measurements from a trauma room in a hospital as well as data from visitors of a public event. In total, the dataset contains 439 measurements. The article illustrates the efficiency of the approach with experiments with persons lying down in a hospital, standing persons, and walking persons. Applicable scenarios for the presented algorithm are body weight-related dosing of emergency patients. PMID:29695098

  17. Body Weight Estimation for Dose-Finding and Health Monitoring of Lying, Standing and Walking Patients Based on RGB-D Data.

    PubMed

    Pfitzner, Christian; May, Stefan; Nüchter, Andreas

    2018-04-24

    This paper describes the estimation of the body weight of a person in front of an RGB-D camera. A survey of different methods for body weight estimation based on depth sensors is given. First, an estimation of people standing in front of a camera is presented. Second, an approach based on a stream of depth images is used to obtain the body weight of a person walking towards a sensor. The algorithm first extracts features from a point cloud and forwards them to an artificial neural network (ANN) to obtain an estimation of body weight. Besides the algorithm for the estimation, this paper further presents an open-access dataset based on measurements from a trauma room in a hospital as well as data from visitors of a public event. In total, the dataset contains 439 measurements. The article illustrates the efficiency of the approach with experiments with persons lying down in a hospital, standing persons, and walking persons. Applicable scenarios for the presented algorithm are body weight-related dosing of emergency patients.

  18. Parameter estimation of brain tumors using intraoperative thermal imaging based on artificial tactile sensing in conjunction with artificial neural network

    NASA Astrophysics Data System (ADS)

    Sadeghi-Goughari, M.; Mojra, A.; Sadeghi, S.

    2016-02-01

    Intraoperative Thermal Imaging (ITI) is a new minimally invasive diagnosis technique that can potentially locate the margins of a brain tumor in order to achieve maximum tumor resection with least morbidity. This study introduces a new approach to ITI based on artificial tactile sensing (ATS) technology in conjunction with artificial neural networks (ANN), and the feasibility and applicability of this method in the diagnosis and localization of brain tumors are investigated. In order to analyze the validity and reliability of the proposed method, two simulations were performed. (i) An in vitro experimental setup was designed and fabricated using a resistance heater embedded in an agar tissue phantom in order to simulate heat generation by a tumor in the brain tissue; and (ii) a case report of a patient with parafalcine meningioma was presented to simulate ITI in the neurosurgical procedure. In the case report, both brain and tumor geometries were constructed from MRI data, and the tumor's temperature and depth of location were estimated. For the experimental tests, a novel assisted-surgery robot was developed to palpate the tissue phantom surface to measure temperature variations, and an ANN was trained to estimate the simulated tumor's power and depth. The results affirm that ITI-based ATS is a non-invasive method which can be useful to detect, localize and characterize brain tumors.

  19. Relocation of Groningen seismicity using refracted waves

    NASA Astrophysics Data System (ADS)

    Ruigrok, E.; Trampert, J.; Paulssen, H.; Dost, B.

    2015-12-01

    The Groningen gas field is a giant natural gas accumulation in the northeast of the Netherlands. The gas is in a reservoir at a depth of about 3 km. The naturally fractured, gas-filled sandstone extends roughly 45 by 25 km laterally and 140 m vertically. Decades of production have led to significant compaction of the sandstone. The (differential) compaction is thought to have reactivated existing faults and to be the main driver of induced seismicity. Precise earthquake location is difficult due to a complicated subsurface, which is the likely reason the current hypocentre estimates do not clearly correlate with the well-known fault network. The seismic velocity model down to reservoir depth is quite well known from extensive seismic surveys and borehole data. Most earthquake detections to date, however, were made with a sparse pre-2015 seismic network. For shallow seismicity (<5 km depth), horizontal source-receiver distances tend to be much larger than vertical distances. Consequently, preferred source-receiver travel paths are refractions over high-velocity layers below the reservoir. However, the seismic velocities of the layers below the reservoir are poorly known. We estimated an effective velocity model of the main refracting layer below the reservoir and used it to relocate past seismicity. We took advantage of vertical-borehole recordings to estimate precise P-wave (refraction) onset times and used a tomographic approach to find the laterally varying velocity field of the refracting layer. This refracting layer is then added to the known velocity model, and the combined model is used to relocate the past seismicity. From the resulting relocations we assess which of the faults are being reactivated.

  20. Assessing artificial neural networks and statistical methods for infilling missing soil moisture records

    NASA Astrophysics Data System (ADS)

    Dumedah, Gift; Walker, Jeffrey P.; Chik, Li

    2014-07-01

    Soil moisture information is critically important for water management operations including flood forecasting, drought monitoring, and groundwater recharge estimation. While an accurate and continuous record of soil moisture is required for these applications, the available soil moisture data, in practice, are typically fraught with missing values. There is a wide range of methods available for infilling hydrologic variables, but a thorough inter-comparison between statistical methods and artificial neural networks has not been made. This study examines 5 statistical methods, including monthly averages, the weighted Pearson correlation coefficient, a method based on the temporal stability of soil moisture, a weighted merging of the three methods, and a method based on the concept of rough sets. Additionally, 9 artificial neural networks are examined, broadly categorized into feedforward, dynamic, and radial basis networks. These 14 infilling methods were used to estimate missing soil moisture records and subsequently validated against known values for 13 soil moisture monitoring stations at three different soil layer depths in the Yanco region in southeast Australia. The evaluation results show that the top three highest performing methods are the nonlinear autoregressive neural network, the rough sets method, and monthly replacement. A high estimation accuracy (root mean square error (RMSE) of about 0.03 m3/m3) was found for the nonlinear autoregressive network, due to its regression-based dynamic network, which allows feedback connections through discrete-time estimation. A comparably high accuracy (0.05 m3/m3 RMSE) in the rough sets procedure illustrates the important role of the temporal persistence of soil moisture, with the capability to account for different soil moisture conditions.

  1. Using Bayesian Network as a tool for coastal storm flood impact prediction at Varna Bay (Bulgaria, Western Black Sea)

    NASA Astrophysics Data System (ADS)

    Valchev, Nikolay; Eftimova, Petya; Andreeva, Nataliya; Prodanov, Bogdan

    2017-04-01

    The coastal zone is among the fastest evolving areas worldwide. The ever-increasing population inhabiting coastal settlements develops often conflicting economic and societal activities. The existing imbalance between the expansion of these activities, on one hand, and the potential to accommodate them in a sustainable manner, on the other, becomes a critical problem. Concurrently, coasts are affected by various hydro-meteorological phenomena such as storm surges, heavy seas, strong winds and flash floods, whose intensities and occurrence frequencies are likely to increase due to climate change. This calls for tools capable of quickly predicting the impact of those phenomena on the coast and providing solutions in terms of disaster risk reduction measures. One such tool is the Bayesian network. This paper describes the set-up of such a network for Varna Bay (Bulgaria, Western Black Sea). It relates near-shore storm conditions to their onshore flood potential and ultimately to the relevant impact, expressed as relative damage to the coastal and man-made environment. The methodology for the set-up and training of the Bayesian network was developed within the RISC-KIT project (Resilience-Increasing Strategies for Coasts - toolKIT). The proposed BN reflects the interaction between boundary conditions, receptors, hazard, and consequences. Storm boundary conditions (maximum significant wave height and peak surge level) were determined on the basis of their historical and projected occurrence. The only hazard considered in this study is flooding, characterized by maximum inundation depth. The BN was trained with synthetic events created by combining the estimated boundary conditions. Flood impact was modeled with the process-based morphodynamical model XBeach. Restaurants, sport and leisure facilities, administrative buildings, and car parks were introduced in the network as receptors. Consequences (impact) are estimated in terms of the relative damage caused by a given inundation depth. National depth-damage (susceptibility) curves were used to define the percentage of damage, ranked as low, moderate, high and very high. Besides the previously described components, the BN also includes two hazard-influencing disaster risk reduction (DRR) measures: a re-enforced embankment of the Varna Port wall and beach nourishment. As a result of the training process, the network is able to evaluate spatially varying hazards and damages for specific storm conditions. Moreover, it is able to predict where on the site the highest impact would occur and to quantify the mitigation capacity of the proposed DRR measures. For example, it is estimated that the storm impact would be considerably reduced under present conditions, but vulnerability would still be high in a climate change perspective.

  2. Energy consumption analysis for various memristive networks under different learning strategies

    NASA Astrophysics Data System (ADS)

    Deng, Lei; Wang, Dong; Zhang, Ziyang; Tang, Pei; Li, Guoqi; Pei, Jing

    2016-02-01

    Recently, various memristive systems have emerged to emulate the efficient computing paradigm of the brain cortex; however, how to make them energy efficient remains unclear, especially from an overall perspective. Here, a systematic and bottom-up energy consumption analysis is demonstrated, covering the memristor device level and the network learning level. We propose an energy-estimating methodology for the modulation of memristive synapses, which is simulated in three typical neural networks with different synaptic structures and learning strategies for both offline and online learning. These results provide in-depth insight for creating energy-efficient brain-inspired neuromorphic devices in the future.

  3. Representativeness of the ground observational sites and up-scaling of the point soil moisture measurements

    NASA Astrophysics Data System (ADS)

    Chen, Jinlei; Wen, Jun; Tian, Hui

    2016-02-01

    Soil moisture plays an increasingly important role in the cycle of energy-water exchange, climate change, and hydrologic processes. It is usually measured at point sites, but regional soil moisture is essential for validating remote sensing products and numerical modeling results. In the study reported in this paper, the minimal number of required sites (NRS) for establishing a research observational network and the representative single sites for regional soil moisture estimation are discussed using soil moisture data derived from the "Maqu soil moisture observational network" (101°40′-102°40′E, 33°30′-35°45′N), which is supported by the Chinese Academy of Sciences. Furthermore, the best up-scaling method suitable for this network has been studied by evaluating four commonly used up-scaling methods. The results showed that (1) under a given accuracy requirement R ⩾ 0.99, RMSD ⩽ 0.02 m3/m3, the NRS at both 5 and 10 cm depth is 10. (2) The representativeness of the sites has been validated by time stability analysis (TSA), time sliding correlation analysis (TSCA) and optimal combination of sites (OCS). NST01 is the most representative site at 5 cm depth for the first two methods; NST07 and NST02 are the most representative sites at 10 cm depth. The optimum combination sites at 5 cm depth are NST01, NST02, and NST07; NST05, NST08, and NST13 are the best group at 10 cm depth. (3) Linear fitting, compared with the other three methods, is the best up-scaling method for all types of representative sites obtained above, and linear regression equations between a single site and regional soil moisture are established hereafter. The "single site" obtained by OCS has the greatest up-scaling effect, and TSCA takes second place. (4) The linear fitting equations show good practicability in estimating the variation of regional soil moisture from July 3, 2013 to July 3, 2014, a period in which a large number of observed soil moisture values were lost.
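
    The time stability analysis used to rank representative sites can be sketched via the mean relative difference (MRD) of each site from the network spatial mean; the synthetic data and the combined ranking rule below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(5)
      theta = 0.25 + 0.05 * rng.random((365, 20))   # daily SWC at 20 sites (m3/m3), toy data

      net_mean = theta.mean(axis=1, keepdims=True)  # network spatial mean per day
      rel_diff = (theta - net_mean) / net_mean
      mrd = rel_diff.mean(axis=0)                   # persistent bias of each site
      sd_rd = rel_diff.std(axis=0)                  # temporal stability of that bias
      best_site = int(np.argmin(np.abs(mrd) + sd_rd))  # most representative site (assumed rule)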

  4. Spatial Representativeness Error in the Ground‐Level Observation Networks for Black Carbon Radiation Absorption

    PubMed Central

    Andrews, Elisabeth; Balkanski, Yves; Boucher, Olivier; Myhre, Gunnar; Samset, Bjørn Hallvard; Schulz, Michael; Schuster, Gregory L.; Valari, Myrto; Tao, Shu

    2018-01-01

    There is high uncertainty in the direct radiative forcing of black carbon (BC), an aerosol that strongly absorbs solar radiation. The observation‐constrained estimate, which is several times larger than the bottom‐up estimate, is influenced by the spatial representativeness error due to the mesoscale inhomogeneity of the aerosol fields and the relatively low resolution of global chemistry‐transport models. Here we evaluated the spatial representativeness error for two widely used observational networks (AErosol RObotic NETwork and Global Atmosphere Watch) by downscaling the geospatial grid in a global model of BC aerosol absorption optical depth to 0.1° × 0.1°. Comparing the models at a spatial resolution of 2° × 2° with BC aerosol absorption at AErosol RObotic NETwork sites (which are commonly located near emission hot spots) tends to cause a global spatial representativeness error of 30%, as a positive bias for the current top‐down estimate of global BC direct radiative forcing. By contrast, the global spatial representativeness error will be 7% for the Global Atmosphere Watch network, because the sites are located in such a way that there are almost an equal number of sites with positive or negative representativeness error. PMID:29937603

  5. Verification of the WFAS Lightning Efficiency Map

    Treesearch

    Paul Sopko; Don Latham; Isaac Grenfell

    2007-01-01

    A Lightning Ignition Efficiency map was added to the suite of daily maps offered by the Wildland Fire Assessment System (WFAS) in 1999. This map computes a lightning probability of ignition (POI) based on the estimated fuel type, fuel depth, and 100-hour fuel moisture interpolated from the Remote Automated Weather Station (RAWS) network. An attempt to verify the...

  6. Estimation of water surface elevations for the Everglades, Florida

    USGS Publications Warehouse

    Palaseanu, Monica; Pearlstine, Leonard

    2008-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring gages and modeling methods that provides scientists and managers with current (2000–present) online water surface and water depth information for the freshwater domain of the Greater Everglades. This integrated system presents data on a 400-m square grid to assist in (1) large-scale field operations; (2) integration of hydrologic and ecologic responses; (3) supporting biological and ecological assessment of the implementation of the Comprehensive Everglades Restoration Plan (CERP); and (4) assessing trophic-level responses to hydrodynamic changes in the Everglades. This paper investigates the radial basis function multiquadric method of interpolation to obtain a continuous freshwater surface across the entire Everglades using radio-transmitted data from a network of water-level gages managed by the US Geological Survey (USGS), the South Florida Water Management District (SFWMD), and the Everglades National Park (ENP). Since the hydrological connection is interrupted by canals and levees across the study area, boundary conditions were simulated by linearly interpolating along those features and integrating the results together with the data from marsh stations to obtain a continuous water surface through multiquadric interpolation. The absolute cross-validation errors greater than 5 cm correlate well with the local outliers and the minimum distance between the closest stations within a 2000-m radius, but seem to be independent of vegetation or season.
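
    Multiquadric radial-basis-function interpolation of gage stages onto a grid can be sketched with SciPy's RBFInterpolator; the coordinates, stages, and shape parameter below are made-up sample values, and the boundary-condition handling along canals and levees described above is omitted.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      gage_xy = np.array([[0.0, 0.0], [1.0, 0.2], [0.3, 0.9], [0.8, 0.7]])  # gage coords (km)
      stage = np.array([2.10, 2.35, 2.22, 2.41])                            # water levels (m)

      rbf = RBFInterpolator(gage_xy, stage, kernel="multiquadric", epsilon=1.0)
      gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))    # output grid
      surface = rbf(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)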

  7. Investigating local controls on temporal stability of soil water content using sensor network data and an inverse modeling approach

    NASA Astrophysics Data System (ADS)

    Qu, W.; Bogena, H. R.; Huisman, J. A.; Martinez, G.; Pachepsky, Y. A.; Vereecken, H.

    2013-12-01

    Soil water content is a key variable in the soil, vegetation and atmosphere continuum with high spatial and temporal variability. Temporal stability of soil water content (SWC) has been observed in multiple monitoring studies, and quantifying the controls on soil moisture variability and temporal stability is of substantial interest. The objective of this work was to assess the effect of soil hydraulic parameters on temporal stability. Inverse modeling based on long SWC time series observed with an in-situ sensor network was used to estimate the van Genuchten-Mualem (VGM) soil hydraulic parameters in a small grassland catchment located in western Germany. For the inverse modeling, the shuffled complex evolution (SCE) optimization algorithm was coupled with the HYDRUS-1D code. We considered two cases: without and with prior information about the correlation between VGM parameters. The temporal stability of observed SWC was well pronounced at all observation depths. Both the spatial variability of SWC and the robustness of temporal stability increased with depth. Calibrated models both with and without prior information provided reasonable correspondence between simulated and measured time series of SWC. Furthermore, we found a linear relationship between the mean relative difference (MRD) of SWC and the saturated SWC (θs). Also, the logarithm of saturated hydraulic conductivity (Ks), the VGM parameter n, and the logarithm of α were strongly correlated with the MRD of saturation degree for the prior-information case, but no correlation was found for the non-prior-information case except at the 50-cm depth. Based on these results, we propose that establishing relationships between temporal stability and spatial variability of soil properties is a promising research avenue for better understanding the controls on soil moisture variability. [Figure: correlation between MRD of soil water content (or saturation degree) and inversely estimated soil hydraulic parameters (log10(Ks), log10(α), n, and θs) at 5-cm, 20-cm and 50-cm depths; solid circles denote parameters estimated with prior information, open circles without.]
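
    The temporal-stability statistics referenced above (mean relative difference and its temporal standard deviation) are straightforward to compute; a minimal sketch on synthetic sensor-network data:

    ```python
    import numpy as np

    # Minimal sketch of temporal-stability analysis via mean relative difference (MRD):
    # swc has shape (n_times, n_locations); synthetic data stand in for network SWC.
    rng = np.random.default_rng(1)
    swc = 0.30 + 0.05 * rng.standard_normal((365, 50))

    spatial_mean = swc.mean(axis=1, keepdims=True)    # network mean at each time
    rel_diff = (swc - spatial_mean) / spatial_mean    # relative difference per location/time

    mrd = rel_diff.mean(axis=0)    # mean relative difference per location
    sdrd = rel_diff.std(axis=0)    # its temporal standard deviation (stability measure)

    # Locations with MRD near 0 and small SDRD are "temporally stable" and
    # representative of the catchment mean.
    order = np.argsort(np.abs(mrd))
    print("most representative sensor:", order[0])
    ```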

  8. Development of inferential sensors for real-time quality control of water-level data for the Everglades Depth Estimation Network

    USGS Publications Warehouse

    Daamen, Ruby C.; Edwin A. Roehl, Jr.; Conrads, Paul

    2010-01-01

    An "inferential sensor" is a technology often used in industrial applications. Rather than installing a redundant sensor to measure a process, such as an additional water-level gage, an inferential, or virtual, sensor is developed that estimates the process measured by the physical sensor. The advantage of an inferential sensor is that it provides a signal redundant with the field sensor but without exposure to environmental threats. In the event that a gage does malfunction, the inferential sensor provides an estimate for the period of missing data. The inferential sensor also can be used in the quality assurance and quality control of the data. Inferential sensors for gages in the EDEN network are currently (2010) under development. The inferential sensors will be automated so that the real-time EDEN data are continuously compared to the inferential sensor signal, and digital reports on the status of the real-time data are sent periodically to the appropriate support personnel. The development and application of inferential sensors is easily transferable to other real-time hydrologic monitoring networks.
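
    As an illustration of the idea only (the abstract does not specify the EDEN estimation technique), an inferential sensor can be as simple as a regression that predicts one gage from its neighbors and flags departures:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Minimal sketch of an inferential (virtual) sensor: estimate one gage's
    # water level from neighboring gages (synthetic data throughout).
    rng = np.random.default_rng(2)
    neighbors = rng.normal(1.5, 0.2, size=(1000, 3))          # three nearby gages
    target = neighbors @ [0.5, 0.3, 0.2] + rng.normal(0, 0.01, 1000)

    model = LinearRegression().fit(neighbors[:800], target[:800])
    estimate = model.predict(neighbors[800:])                  # redundant "virtual" signal

    # Flag periods where the physical gage departs from the inferential estimate
    residual = target[800:] - estimate
    flags = np.abs(residual) > 3 * residual.std()
    print("suspect records:", flags.sum())
    ```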

  9. Seismic Source Scaling and Characteristics of Six North Korean Underground Nuclear Explosions

    NASA Astrophysics Data System (ADS)

    Park, J.; Stump, B. W.; Che, I. Y.; Hayward, C.

    2017-12-01

    We estimate the range of yields and source depths for the six North Korean underground nuclear explosions in 2006, 2009, 2013, 2016 (January and September), and 2017, based on regional seismic observations in South Korea and China. Seismic data used in this study are from three seismo-acoustic stations, BRDAR, CHNAR, and KSGAR, cooperatively operated by SMU and KIGAM; the KSRS seismic array operated by the Comprehensive Nuclear-Test-Ban Treaty Organization; and MDJ, a station in the Global Seismographic Network. We calculate spectral ratios for event pairs using seismograms from the six explosions observed along the same paths and at the same receivers. These relative seismic source scaling spectra for Pn, Pg, Sn, and surface-wave windows provide the basis for a grid-search source solution that estimates source yield and depth for each event using both the modified Mueller and Murphy (1971; MM71) and Denny and Johnson (1991; DJ91) source models. The grid search identifies the best-fit empirical spectral ratios under each source model by minimizing a goodness-of-fit (GOF) measure in the frequency range of 0.5-15 Hz. For all cases, the DJ91 model produces higher ratios of depth and yield than MM71. These initial results include significant trade-offs between depth and yield in all cases. In order to better account for the effect of source depth, a modified grid search was implemented that includes the propagation effects for different source depths by incorporating reflectivity Green's functions in the grid-search procedure. This revision reduces the trade-offs between depth and yield and produces better model fits at frequencies as high as 15 Hz, with GOF values smaller than those obtained when the depth effects on the Green's functions were ignored. The depth and yield estimates for all six explosions using this new procedure will be presented.

  10. Fast Moment Magnitude Determination from P-wave Trains for Bucharest Rapid Early Warning System (BREWS)

    NASA Astrophysics Data System (ADS)

    Lizurek, Grzegorz; Marmureanu, Alexandru; Wiszniowski, Jan

    2017-03-01

    Bucharest, with a population of approximately 2 million people, has suffered damage from earthquakes in the Vrancea seismic zone, which is located about 170 km from Bucharest at a depth of 80-200 km. Consequently, an earthquake early warning system (Bucharest Rapid earthquake Early Warning System, or BREWS) was constructed to provide some warning of impending shaking from large earthquakes in the Vrancea zone. In order to provide quick estimates of magnitude, seismic moment is first determined from P-waves and a moment magnitude is then derived from the moment. However, this magnitude may not be consistent with previous estimates of magnitude from the Romanian Seismic Network. This paper introduces the algorithm using P-wave spectral levels and compares its results with catalog estimates. The testing procedure used waveforms from about 90 events with catalog magnitudes from 3.5 to 5.4. Corrections to the P-wave-determined magnitudes, based on the dominant mechanism of intermediate-depth events, were tested for the November 22, 2014, M5.6 and October 17 M6 events. The corrections worked well, but revealed an overestimation of the average magnitude of about 0.2 magnitude units for shallow events (H < 60 km). The P-wave spectral approach allows relatively fast estimates of magnitude for use in BREWS. The average correction, which assumes the most common focal mechanism for the radiation-pattern coefficient, may lead to an overestimation of about 0.2 magnitude units for shallow events; for intermediate-depth events of about M6, the resulting Mw is underestimated by about 0.1-0.2. We conclude that our P-wave spectral approach is sufficiently robust for the needs of BREWS for both shallow and intermediate-depth events.
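
    The conversion from a P-wave spectral level to a moment magnitude rests on standard point-source relations; a rough sketch, with illustrative constants that are not the BREWS values:

    ```python
    import numpy as np

    # Minimal sketch of moment and moment magnitude from a P-wave spectral plateau,
    # assuming a Brune-type point source; all constants below are illustrative.
    rho = 2700.0      # density, kg/m^3
    vp = 6500.0       # P-wave speed, m/s
    R = 170e3         # hypocentral distance, m
    F = 2.0           # free-surface amplification (assumed)
    Rp = 0.52         # average P radiation-pattern coefficient (assumed)
    omega0 = 1.0e-6   # low-frequency displacement spectral level, m*s (hypothetical)

    M0 = 4 * np.pi * rho * vp**3 * R * omega0 / (F * Rp)   # seismic moment, N*m
    Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)                # Hanks & Kanamori (1979), IASPEI form
    print(f"M0 = {M0:.3e} N*m, Mw = {Mw:.2f}")
    ```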

  11. Structural and Network-based Methods for Knowledge-Based Systems

    DTIC Science & Technology

    2011-12-01

    depth) provide important information about knowledge gaps in the KB. For example, if SuccessEstimate (causes-EventEvent, Typhoid-Fever, 1, 3) is...equal to 0, it points toward a lack of biological knowledge about Typhoid-Fever in our KB. Similar information can also be obtained from the...position of the consequent. Therefore, if Q does not contain Typhoid-Fever, then obtaining

  12. A Neural Network Approach to Infer Optical Depth of Thick Ice Clouds at Night

    NASA Technical Reports Server (NTRS)

    Minnis, P.; Hong, G.; Sun-Mack, S.; Chen, Yan; Smith, W. L., Jr.

    2016-01-01

    One of the roadblocks to continuously monitoring cloud properties is the tendency of clouds to become optically black at cloud optical depths (COD) of 6 or greater. This constraint dramatically reduces the quantitative information content at night. A recent study found that because of their diffuse nature, ice clouds remain optically gray, to some extent, up to COD of 100 at certain wavelengths. Taking advantage of this weak dependency and the availability of COD retrievals from CloudSat, an artificial neural network algorithm was developed to estimate COD values up to 70 from common satellite imager infrared channels. The method was trained using matched 2007 CloudSat and Aqua MODIS data and is tested using similar data from 2008. The results show a significant improvement over the use of default values at night with high correlation. This paper summarizes the results and suggests paths for future improvement.

  13. Cone penetrometer testing and discrete-depth ground water sampling techniques: A cost-effective method of site characterization in a multiple-aquifer setting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemo, D.A.; Pierce, Y.G.; Gallinatti, J.D.

    Cone penetrometer testing (CPT), combined with discrete-depth ground water sampling methods, can significantly reduce the time and expense required to characterize large sites that have multiple aquifers. Results from the screening site characterization can then be used to design and install a cost-effective monitoring well network. At a site in northern California, it was necessary to characterize the stratigraphy and the distribution of volatile organic compounds (VOCs). To expedite characterization, a five-week field screening program was implemented that consisted of a shallow ground water survey, CPT soundings and pore-pressure measurements, and discrete-depth ground water sampling. Based on continuous lithologic information provided by the CPT soundings, four predominantly coarse-grained, water yielding stratigraphic packages were identified. Seventy-nine discrete-depth ground water samples were collected using either shallow ground water survey techniques, the BAT Enviroprobe, or the QED HydroPunch I, depending on subsurface conditions. Using results from these efforts, a 20-well monitoring network was designed and installed to monitor critical points within each stratigraphic package. Good correlation was found for hydraulic head and chemical results between discrete-depth screening data and monitoring well data. Understanding the vertical VOC distribution and concentrations produced substantial time and cost savings by minimizing the number of permanent monitoring wells and reducing the number of costly conductor casings that had to be installed. Additionally, significant long-term cost savings will result from reduced sampling costs, because fewer wells comprise the monitoring network. The authors estimate these savings to be 50% for site characterization costs, 65% for site characterization time, and 60% for long-term monitoring costs.

  14. End-to-end deep neural network for optical inversion in quantitative photoacoustic imaging.

    PubMed

    Cai, Chuangjian; Deng, Kexin; Ma, Cheng; Luo, Jianwen

    2018-06-15

    An end-to-end deep neural network, ResU-net, is developed for quantitative photoacoustic imaging. A residual learning framework is used to facilitate optimization and to gain better accuracy from considerably increased network depth. The contracting and expanding paths enable ResU-net to extract comprehensive context information from multispectral initial pressure images and, subsequently, to infer a quantitative image of chromophore concentration or oxygen saturation (sO2). According to our numerical experiments, the estimates of sO2 and indocyanine green concentration are accurate and robust against variations in both optical property and object geometry. An extremely short reconstruction time of 22 ms is achieved.

  15. Spatial properties of snow cover in the Upper Merced River Basin: implications for a distributed snow measurement network

    NASA Astrophysics Data System (ADS)

    Bouffon, T.; Rice, R.; Bales, R.

    2006-12-01

    The spatial distributions of snow water equivalent (SWE) and snow depth within 1, 4, and 16 km2 grid elements around two automated snow pillows in a forested and an open-forested region of the Upper Merced River Basin (2,800 km2) of Yosemite National Park were characterized using field observations and analyzed using binary regression trees. Snow surveys occurred at the forested site during the accumulation and ablation seasons, while at the open-forest site a survey was performed only during the accumulation season. An average of 130 snow depth and 7 snow density measurements were made on each survey within the 4 km2 grid. Snow depth was distributed using binary regression trees and geostatistical methods using physiographic parameters (e.g. elevation, slope, vegetation, aspect). Results in the forest region indicate that the snow pillow overestimated average SWE within the 1, 4, and 16 km2 areas by 34 percent during ablation. During accumulation the snow pillow provided a good estimate of the modeled mean SWE grid value, although it is suspected that the snow pillow was underestimating SWE. At the open-forest site, during accumulation, the snow pillow reading was 28 percent greater than the mean of the modeled grid element. In addition, the binary regression trees indicate that the independent variables of vegetation, slope, and aspect are the most influential parameters for snow depth distribution. The binary regression tree and multivariate linear regression models explain about 60 percent of the initial variance for snow depth and 80 percent for density, respectively. This short-term study provides motivation and direction for the installation of a distributed snow measurement network to fill the information gap in basin-wide SWE and snow depth measurements. Guided by these results, a distributed snow measurement network was installed in fall 2006 at Gin Flat in the Upper Merced River Basin with the specific objective of measuring accumulation and ablation across topographic variables, with the aim of providing guidance for future larger-scale observation network designs.
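
    As a sketch of the distribution step, a binary regression tree can be fit to survey depths with physiographic predictors (synthetic data; the column meanings are assumptions, not the study's variables):

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    # Minimal sketch of distributing snow depth with a binary regression tree
    # from physiographic predictors (synthetic survey data).
    rng = np.random.default_rng(3)
    n = 130  # roughly one survey's worth of depth measurements

    X = np.column_stack([
        rng.uniform(1500, 2500, n),   # elevation, m
        rng.uniform(0, 35, n),        # slope, degrees
        rng.uniform(0, 360, n),       # aspect, degrees
        rng.integers(0, 2, n),        # vegetation class (0=open, 1=forest)
    ])
    depth = 0.002 * X[:, 0] - 0.01 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(0, 0.3, n)

    tree = DecisionTreeRegressor(max_depth=4).fit(X, depth)
    print("variance explained (R^2):", round(tree.score(X, depth), 2))
    ```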

  16. Estimation of the intrinsic absorption and scattering attenuation in Northeastern Venezuela (Southeastern Caribbean) using coda waves

    USGS Publications Warehouse

    Ugalde, A.; Pujades, L.G.; Canas, J.A.; Villasenor, A.

    1998-01-01

    Northeastern Venezuela has been studied in terms of coda wave attenuation using seismograms from local earthquakes recorded by a temporary short-period seismic network. The studied area has been separated into two subregions in order to investigate lateral variations in the attenuation parameters. Coda Q⁻¹ (Qc⁻¹) has been obtained using the single-scattering theory. The contributions of intrinsic absorption (Qi⁻¹) and scattering (Qs⁻¹) to total attenuation (Qt⁻¹) have been estimated by means of a multiple lapse time window method, based on the hypothesis of multiple isotropic scattering with a uniform distribution of scatterers. Results show significant spatial variations of attenuation: the estimates for intermediate-depth events and for shallow events present major differences. This fact may be related to different tectonic characteristics due to the presence of the Lesser Antilles subduction zone, because the intermediate-depth seismic zone may be coincident with the southern continuation of the subducting slab under the arc.

  17. Bio-Optics Based Sensation Imaging for Breast Tumor Detection Using Tissue Characterization

    PubMed Central

    Lee, Jong-Ha; Kim, Yoon Nyun; Park, Hee-Jun

    2015-01-01

    A tissue inclusion parameter estimation method is proposed to measure stiffness as well as geometric parameters. The estimation is performed based on tactile data obtained at the surface of the tissue using an optical tactile sensation imaging system (TSIS). A forward algorithm is designed to comprehensively predict the tactile data based on the mechanical properties of the tissue inclusion using finite element modeling (FEM). This forward information is used to develop an inversion algorithm that extracts the size, depth, and Young's modulus of a tissue inclusion from the tactile data. We utilize an artificial neural network (ANN) for the inversion algorithm. The proposed estimation method was validated on a realistic tissue phantom with stiff inclusions. The experimental results showed that the proposed estimation method can measure the size, depth, and Young's modulus of a tissue inclusion with 0.58%, 3.82%, and 2.51% relative errors, respectively. These results show that the proposed method has the potential to become a useful screening and diagnostic method for breast cancer. PMID:25785306

  18. Evaluation for relationship among source parameters of underground nuclear tests in Northern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Kim, G.; Che, I. Y.

    2017-12-01

    We evaluated the relationships among source parameters of the underground nuclear tests in the northern Korean Peninsula using regional seismic data. Dense global and regional seismic networks were incorporated to measure locations and origin times precisely. Location analyses show that the distances among the events are tiny on a regional scale; these tiny location differences validate a linear model assumption. We estimated source spectral ratios, with path effects excluded, from spectral ratios of the observed seismograms, and we estimated an empirical relationship among depths of burial and yields based on theoretical source models.

  19. Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling

    NASA Astrophysics Data System (ADS)

    Saksena, S.; Dey, S.; Merwade, V.

    2016-12-01

    Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs must be complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large-scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Similarly, the impact of incorporating river bathymetry is more significant in the 2D model than in the 1D model.

  20. HB06: Field Validation of Real-Time Predictions of Surfzone Waves and Currents

    NASA Astrophysics Data System (ADS)

    Guza, R. T.; O'Reilly, W. C.; Feddersen, F.

    2006-12-01

    California shorelines can be contaminated by the discharge of polluted streams and rivers onto the beach face or into the surf zone. Management decisions (for example, beach closures) can be assisted by accurate characterization of the waves and currents that transport and mix these pollutants. A real-time, operational waves and alongshore current model, developed for a 5 km alongshore reach at Huntington Beach (http://cdip.ucsd.edu/hb06/), will be tested for a month during Fall 2006 as part of the HB06 field experiment. The model has two components: prediction of incident waves immediately seaward of the surf zone, and the transformation of breaking waves across the surf zone. The California Safe Boating Network Model (O'Reilly et al., California World Ocean Conference, 2006) is used to estimate incident wave properties. This regional wave model accounts for blocking and refraction by offshore islands and shoals, and variation of the shoreline orientation. At Huntington Beach, the network model uses four buoys exposed to the deep ocean to estimate swell, and four nearby buoys to estimate locally generated seas. The model predictions will be compared with directional wave buoy observations in 22 m depth, 1 km from the shore. The computationally fast model for surfzone waves and breaking-wave driven alongshore currents, appropriate for random waves on beaches with simple bathymetry, is based on concepts developed and tested by Ed Thornton and his colleagues over the last 30 years. Modeled alongshore currents at Huntington Beach, with incident waves predicted by the Network model, will be compared with waves and currents observed during HB06 along a transect extending from 4 m depth to the shoreline. Support from the California Coastal Conservancy, NOAA, and ONR is gratefully acknowledged.

  1. Global teleseismic earthquake relocation with improved travel times and procedures for depth determination

    USGS Publications Warehouse

    Engdahl, E. Robert; van der Hilst, R. D.; Buland, Raymond P.

    1998-01-01

    We relocate nearly 100,000 events that occurred during the period 1964 to 1995 and are well-constrained teleseismically by arrival-time data reported to the International Seismological Centre (ISC) and to the U.S. Geological Survey's National Earthquake Information Center (NEIC). Hypocenter determination is significantly improved by using, in addition to regional and teleseismic P and S phases, the arrival times of PKiKP, PKPdf, and the teleseismic depth phases pP, pwP, and sP in the relocation procedure. A global probability model developed for later-arriving phases is used to independently identify the depth phases. The relocations are compared to hypocenters reported in the ISC and NEIC catalogs and by other sources. Differences in our epicenters with respect to ISC and NEIC estimates are generally small and regionally systematic due to the combined effects of the observing station network and plate geometry regionally, differences in upper mantle travel times between the reference earth models used, and the use of later-arriving phases. Focal depths are improved substantially over most other independent estimates, demonstrating (for example) how regional structures such as downgoing slabs can severely bias depth estimation when only regional and teleseismic P arrivals are used to determine the hypocenter. The new database, which is complete to about Mw 5.2 and includes all events for which moment-tensor solutions are available, has immediate application to high-resolution definition of Wadati-Benioff Zones (WBZs) worldwide, regional and global tomographic imaging, and other studies of earth structure.

  2. Depth optimal sorting networks resistant to k passive faults

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piotrow, M.

    In this paper, we study the problem of constructing a sorting network that is tolerant to faults and whose running time (i.e. depth) is as small as possible. We consider the scenario of worst-case comparator faults and follow the model of passive comparator failure proposed by Yao and Yao, in which a faulty comparator outputs directly its inputs without comparison. Our main result is the first construction of an N-input, k-fault-tolerant sorting network that is of an asymptotically optimal depth Θ(log N + k). That improves over the recent result of Leighton and Ma, whose network is of depth O(log N + k log log N / log k). Actually, we present a fault-tolerant correction network that can be added after any N-input sorting network to correct its output in the presence of at most k faulty comparators. Since the depth of the network is O(log N + k) and the constants hidden behind the "O" notation are not big, the construction can be of practical use. Developing the techniques necessary to show the main result, we construct a fault-tolerant network for the insertion problem. As a by-product, we get an N-input, O(log N)-depth INSERT-network that is tolerant to random faults, thereby answering a question posed by Ma in his PhD thesis. The results are based on a new notion of constant delay comparator networks, that is, networks in which each register is used (compared) only in a period of time of a constant length. Copies of such networks can be put one after another with only a constant increase in depth per copy.
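
    The passive-fault model is easy to simulate: a faulty comparator simply passes its inputs through. A small sketch on a (deliberately naive) bubble sorting network illustrates why a correction network is needed:

    ```python
    # Minimal sketch of the passive-fault comparator model (Yao & Yao): a faulty
    # comparator outputs its inputs unchanged. We test a simple bubble-sort network.
    def bubble_network(n):
        """Comparator list for an n-input bubble sorting network."""
        return [(i, i + 1) for rnd in range(n - 1) for i in range(n - 1 - rnd)]

    def run(network, values, faulty=frozenset()):
        v = list(values)
        for idx, (i, j) in enumerate(network):
            if idx not in faulty and v[i] > v[j]:   # a faulty comparator does nothing
                v[i], v[j] = v[j], v[i]
        return v

    net = bubble_network(6)
    inp = [5, 3, 0, 4, 1, 2]
    print(run(net, inp))                     # fault-free: sorted output

    # With k=1 passive fault, some outputs are no longer sorted, which is what a
    # k-fault-tolerant correction network appended to the output would repair.
    bad = [f for f in range(len(net)) if run(net, inp, {f}) != sorted(inp)]
    print(f"{len(bad)} of {len(net)} single faults break this input")
    ```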

  3. Parameterization of clear-sky surface irradiance and its implications for estimation of aerosol direct radiative effect and aerosol optical depth

    PubMed Central

    Xia, Xiangao

    2015-01-01

    Aerosols impact clear-sky surface irradiance through the effects of scattering and absorption. Linear or nonlinear relationships between aerosol optical depth (τa) and clear-sky surface irradiance have been established to describe the aerosol direct radiative effect on that irradiance (ADRE). However, considerable uncertainties remain associated with ADRE due to incorrect estimation of the aerosol-free irradiance (the clear-sky surface irradiance in the absence of aerosols). Based on data from the Aerosol Robotic Network, the effects of τa, water vapor content (w) and the cosine of the solar zenith angle (μ) on clear-sky surface irradiance are thoroughly considered, leading to an effective parameterization of the irradiance as a nonlinear function of these three quantities. The parameterization is proven able to estimate the irradiance with a mean bias error of 0.32 W m−2, which is one order of magnitude smaller than that derived using earlier linear or nonlinear functions. Applications of this new parameterization to estimate τa from irradiance measurements, or vice versa, show that the root-mean-square errors were 0.08 and 10.0 W m−2, respectively. Therefore, this study establishes a straightforward method to derive clear-sky surface irradiance from τa, or to estimate τa from irradiance measurements if water vapor measurements are available. PMID:26395310
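
    As an illustration of the fitting procedure only (the functional form below is an assumption, not the paper's parameterization), such a three-variable nonlinear relationship can be calibrated with a least-squares fit:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Minimal sketch of fitting clear-sky surface irradiance as a nonlinear function
    # of aerosol optical depth (tau), water vapor (w) and cos(solar zenith) (mu).
    def irradiance(X, a, b, c, d):
        tau, w, mu = X
        return a * mu * np.exp(-(b * tau + c * w**0.5) / mu) + d

    rng = np.random.default_rng(4)
    tau = rng.uniform(0.02, 0.8, 500)
    w = rng.uniform(0.5, 5.0, 500)
    mu = rng.uniform(0.2, 1.0, 500)
    obs = irradiance((tau, w, mu), 1100, 0.9, 0.12, 5) + rng.normal(0, 5, 500)

    params, _ = curve_fit(irradiance, (tau, w, mu), obs, p0=[1000, 1, 0.1, 0])
    print("fitted parameters:", np.round(params, 3))
    ```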

  4. A Depth-Adjustment Deployment Algorithm Based on Two-Dimensional Convex Hull and Spanning Tree for Underwater Wireless Sensor Networks.

    PubMed

    Jiang, Peng; Liu, Shuai; Liu, Jun; Wu, Feng; Zhang, Le

    2016-07-14

    Most existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) consider only how to optimize network coverage and connectivity rate. These studies do not address full network connectivity, although optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the proposed algorithm uses the geometric characteristics of a 2D convex hull and empty circle to find the optimal location of a sleep node and activate it, minimizing the network coverage overlaps in the 2D plane, and then increases the coverage rate until the first-layer coverage threshold is reached. Second, the sink node acts as the root node of all active nodes on the 2D convex hull, gradually forming a small spanning tree. Finally, a depth-adjustment strategy based on a time marker is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm maintains full network connectivity with a high network coverage rate, as well as improved network average node degree, thus increasing network reliability.

  5. A Depth-Adjustment Deployment Algorithm Based on Two-Dimensional Convex Hull and Spanning Tree for Underwater Wireless Sensor Networks

    PubMed Central

    Jiang, Peng; Liu, Shuai; Liu, Jun; Wu, Feng; Zhang, Le

    2016-01-01

    Most existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) consider only how to optimize network coverage and connectivity rate. These studies do not address full network connectivity, although optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the proposed algorithm uses the geometric characteristics of a 2D convex hull and empty circle to find the optimal location of a sleep node and activate it, minimizing the network coverage overlaps in the 2D plane, and then increases the coverage rate until the first-layer coverage threshold is reached. Second, the sink node acts as the root node of all active nodes on the 2D convex hull, gradually forming a small spanning tree. Finally, a depth-adjustment strategy based on a time marker is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm maintains full network connectivity with a high network coverage rate, as well as improved network average node degree, thus increasing network reliability. PMID:27428970

  6. Overview of MPLNET Version 3 Cloud Detection

    NASA Technical Reports Server (NTRS)

    Lewis, Jasper R.; Campbell, James; Welton, Ellsworth J.; Stewart, Sebastian A.; Haftings, Phillip

    2016-01-01

    The National Aeronautics and Space Administration Micro Pulse Lidar Network, version 3, cloud detection algorithm is described, and differences relative to the previous version are highlighted. Clouds are identified from normalized level 1 signal profiles using two complementary methods. The first method considers vertical signal derivatives for detecting low-level clouds. The second method, which detects high-level clouds like cirrus, is based on signal uncertainties necessitated by the relatively low signal-to-noise ratio exhibited in the upper troposphere by eye-safe network instruments, especially during daytime. Furthermore, a multitemporal averaging scheme is used to improve cloud detection under conditions of a weak signal-to-noise ratio. Diurnal and seasonal cycles of cloud occurrence frequency based on one year of measurements at the Goddard Space Flight Center (Greenbelt, Maryland) site are compared for the new and previous versions. The largest differences, and perceived improvement, in detection occur for high clouds (above 5 km MSL), which increase in occurrence by over 5%. There is also an increase in the detection of multilayered cloud profiles from 9% to 19%. Macrophysical properties and estimates of cloud optical depth are presented for a transparent cirrus dataset. However, the limit to which the cirrus cloud optical depth could be reliably estimated occurs between 0.5 and 0.8. A comparison using collocated CALIPSO measurements at the Goddard Space Flight Center and Singapore Micro Pulse Lidar Network (MPLNET) sites indicates improvements in cloud occurrence frequencies and layer heights.

  7. The Snowtweets Project: communicating snow depth measurements from specialists and non-specialists via mobile communication technologies and social networks

    NASA Astrophysics Data System (ADS)

    King, J. M.; Cabrera, A. R.; Kelly, R. E.

    2009-12-01

    With the global decline of in situ snow measurements for hydrometeorological applications, there is an evolving need to find alternative ways to collect localized measurements of snow. The Snowtweets Project is an experiment aimed at providing a way for people interested in making snow measurements to quickly broadcast their measurements to the internet. The goal of the project is to encourage specialists and non-specialists alike to share simple snow depth measurements through widely available social networking sites. We are currently using the rapidly growing microblogging site Twitter (www.twitter.com) as the broadcasting vehicle for collecting the snow depth measurements. Using 140 characters or fewer, users "tweet" their snow depth from their location through the Twitter website. This can be done from a computer or smartphone with internet access, or through SMS messaging. The project has developed a Snowtweets web application that interrogates Twitter by parsing the 140-character string to obtain a geographic position and snow depth. GeoRSS and KML feeds are available to visualize the tweets in Google Earth, or they can be viewed in our own visualiser, Snowbird. The emphasis is on achieving wide coverage to increase the number of microblogs. Furthermore, after some quality-control filters, the project is able to combine the broadcast snow depths with traditional and objective satellite remote sensing-based observations or hydrologic model estimates. Our site, snowcore.uwaterloo.ca, was launched in July 2009 and is ready for the 2009-2010 northern hemisphere winter. We invite comments from experienced community-participation projects to help improve our product.

  8. Anelastic attenuation structure of the southern Aegean subduction area

    NASA Astrophysics Data System (ADS)

    Ventouzi, Chrisanthi; Papazachos, Constantinos; Papaioannou, Christos; Hatzidimitriou, Panagiotis

    2014-05-01

    The study of the anelastic attenuation structure plays a very important role in seismic wave propagation and provides not only valuable constraints on the Earth's interior (temperature, relative viscosity, slab dehydration and melt transport) but also significant information for the simulation of strong ground motions. In order to investigate the attenuation structure of the broader southern Aegean subduction area, acceleration spectra of intermediate-depth earthquakes were computed from data provided by two local networks that operated in the area. More specifically, we employed data from approximately 400 intermediate-depth earthquakes recorded by the EGELADOS seismic monitoring project, which consisted of 65 land stations and 24 OBS recorders and operated during 2005-2007, as well as data from the earlier CYCNET local network, which operated during 2002-2005. A frequency-independent path attenuation operator t* was computed for both P and S arrivals for each waveform, using amplitude spectra generated from the recordings of the aforementioned networks. Initially, estimated P and S traveltimes were examined and modeled as a function of epicentral distance for different groups of focal depths, using data from the CYCNET network, in order to obtain the expected arrival information when original arrival times were not available. Two approaches to assessing the spectral decay were adopted for t* determination. In the first, automated approach, t* was calculated from the slope of the acceleration spectrum, assuming an ω² source model for frequencies above the corner frequency fc. Estimation of t* was performed in the frequency band of 0.2 to 25 Hz, using only spectra with a signal-to-noise ratio larger than 3 over a frequency range of at least 4 Hz for P-waves and 1 Hz for S-waves, respectively. In the second approach, the linearly decaying part of the spectrum over which t* was calculated was selected manually, after visual inspection by the user for optimal spectral fitting. The observed t* data from both approaches were examined against hypocentral distance. In general, no significant linear trend revealing a dependence of t* on distance could be observed in the original data, clearly a result of the significant spatial and depth variations of the anelastic attenuation structure that are superimposed on the distance effect. To investigate this issue further, a spatial analysis of t* values for different hypocentral-depth groups was performed. The obtained results show that along-arc stations exhibit very low values of t*, while back-arc stations present much larger values. The observed along-arc/back-arc differences in t* become more significant as the depth of the earthquakes increases, indicating the effect of the high-attenuation (low-Q) mantle wedge beneath the volcanic arc. For a more detailed view of the spatial variations of the whole-path attenuation operator, we performed preliminary spatial interpolation of t* values for different hypocentral depth ranges. For shallower hypocentral depths, low values of t* appear to be sparsely observed mainly in the back-arc area, but as hypocentral depths increase, a much larger area with higher attenuation is identified along the volcanic arc.
This work has been partly supported by the 3D-SEGMENTS project #1337 funded by EC European Social Fund and the Operational Programme "Education and Lifelong Learning" of the ARISTEIA-I call of the Greek Secretariat of Research and Technology.
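
    The automated t* estimate described above reduces to a straight-line fit in log-amplitude versus frequency; a minimal sketch on a synthetic spectrum:

    ```python
    import numpy as np

    # Minimal sketch of the automated t* estimate: above the corner frequency an
    # omega-squared acceleration spectrum is flat, so ln A(f) = const - pi * f * t*,
    # and t* follows from the slope of a straight-line fit (synthetic spectrum here).
    rng = np.random.default_rng(5)
    f = np.linspace(2.0, 25.0, 200)          # Hz, band above the corner frequency
    t_star_true = 0.05                       # s
    amp = 1e-3 * np.exp(-np.pi * f * t_star_true) * np.exp(rng.normal(0, 0.05, f.size))

    slope, intercept = np.polyfit(f, np.log(amp), 1)
    t_star = -slope / np.pi
    print(f"recovered t* = {t_star:.4f} s")
    ```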

  9. Smoke over haze: Comparative analysis of satellite, surface radiometer, and airborne in situ measurements of aerosol optical properties and radiative forcing over the eastern United States

    NASA Astrophysics Data System (ADS)

    Vant-Hull, Brian; Li, Zhanqing; Taubman, Brett F.; Levy, Robert; Marufu, Lackson; Chang, Fu-Lung; Doddridge, Bruce G.; Dickerson, Russell R.

    2005-05-01

    In July 2002 Canadian forest fires produced a major smoke episode that blanketed the east coast of the United States. Properties of the smoke aerosol were measured in situ from aircraft, complementing operational Aerosol Robotic Network (AERONET) and Moderate Resolution Imaging Spectroradiometer (MODIS) remotely sensed aerosol retrievals. This study compares single scattering albedo and phase function derived from the in situ measurements and AERONET retrievals in order to evaluate their consistency for application to satellite retrievals of optical depth and radiative forcing. These optical properties were combined with MODIS reflectance observations to calculate optical depth. The use of AERONET optical properties yielded optical depths 2-16% lower than those directly measured by AERONET. The use of in situ-derived optical properties resulted in optical depths 22-43% higher than AERONET measurements. These higher optical depths are attributed primarily to the higher absorption measured in situ, which is roughly twice that retrieved by AERONET. The resulting satellite-retrieved optical depths were in turn used to calculate integrated radiative forcing at both the surface and top of atmosphere. Comparisons to surface (Surface Radiation Budget Network (SURFRAD) and ISIS) and satellite (Clouds and the Earth's Radiant Energy System, CERES) broadband radiometer measurements demonstrate that the use of optical properties derived from the aircraft measurements provided a better broadband forcing estimate (21% error) than those derived from AERONET (33% error). Thus AERONET-derived optical properties produced better fits to optical depth measurements, while in situ properties resulted in better fits to forcing measurements. These apparent inconsistencies underline the significant challenges facing the aerosol community in achieving column closure between narrowband and broadband measurements and calculations.

  10. Mapping the spatial distribution and activity of (226)Ra at legacy sites through Machine Learning interpretation of gamma-ray spectrometry data.

    PubMed

    Varley, Adam; Tyler, Andrew; Smith, Leslie; Dale, Paul; Davies, Mike

    2016-03-01

    Radium ((226)Ra) contamination derived from military, industrial, and pharmaceutical products can be found at a number of historical sites across the world, posing a risk to human health. The analysis of spectral data derived using gamma-ray spectrometry can offer a powerful tool to rapidly estimate and map the activity, depth, and lateral distribution of (226)Ra contamination covering an extensive area. Subsequently, reliable risk assessments can be developed for individual sites in a fraction of the timeframe compared to traditional labour-intensive sampling techniques, for example soil coring. However, local heterogeneity of the natural background, statistical counting uncertainty, and non-linear source response are confounding problems associated with gamma-ray spectral analysis. This is particularly challenging when attempting to deal with enhanced concentrations of a naturally occurring radionuclide such as (226)Ra. As a result, conventional surveys tend to attribute the highest activities to the largest total signal received by a detector (gross counts): an assumption that tends to neglect higher activities at depth. To overcome these limitations, a methodology was developed making use of Monte Carlo simulations, Principal Component Analysis and Machine Learning-based algorithms to derive depth and activity estimates for (226)Ra contamination. The approach was applied to spectra taken using two gamma-ray detectors (Lanthanum Bromide and Sodium Iodide), with the aim of identifying an optimised combination of detector and spectral processing routine. It was confirmed that, through a combination of Neural Networks and Lanthanum Bromide, the most accurate depth and activity estimates could be found. The advantage of the method was demonstrated by mapping depth and activity estimates at a case study site in Scotland. There the method identified significantly higher activity (>3 Bq g(-1)) occurring at depth (>0.4 m) that conventional gross counting algorithms failed to identify. It was concluded that the method could easily be employed to identify areas of high activity potentially occurring at depth, prior to intrusive investigation using conventional sampling techniques. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  11. Insights into mountain precipitation and snowpack from a basin-scale wireless-sensor network

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Glaser, S.; Bales, R.; Conklin, M.; Rice, R.; Marks, D.

    2017-08-01

    A spatially distributed wireless-sensor network, installed across the 2154 km2 portion of the 5311 km2 American River basin above 1500 m elevation, provided spatial measurements of temperature, relative humidity, and snow depth in the Sierra Nevada, California. The network consisted of 10 sensor clusters, each with 10 measurement nodes, distributed to capture the variability in topography and vegetation cover. The sensor network captured significant spatial heterogeneity in rain versus snow precipitation for water-year 2014, variability that was not apparent in the more limited operational data. Using daily dew-point temperature to track temporal elevational changes in the rain-snow transition, the amount of snow accumulation at each node was used to estimate the fraction of rain versus snow. This resulted in an underestimate of total precipitation below the 0°C dew-point elevation, which averaged 1730 m across 10 precipitation events, indicating that measuring snow alone does not capture total precipitation. We suggest blending lower-elevation rain-gauge data with higher-elevation sensor-node data for each event to estimate total precipitation. Blended estimates were on average 15-30% higher than those using either set of measurements alone. Using data from the current operational snow-pillow sites gives even lower estimates of basin-wide precipitation. Given the increasing importance of liquid precipitation in a warming climate, a strategy that blends distributed measurements of both liquid and solid precipitation will provide more accurate basin-wide precipitation estimates, plus spatial and temporal patterns of snow accumulation and melt in a basin.

  12. Monitoring microearthquakes with the San Andreas fault observatory at depth

    USGS Publications Warehouse

    Oye, V.; Ellsworth, W.L.

    2007-01-01

    In 2005, the San Andreas Fault Observatory at Depth (SAFOD) was drilled through the San Andreas Fault zone at a depth of about 3.1 km. The borehole has subsequently been instrumented with high-frequency geophones in order to better constrain locations and source processes of nearby microearthquakes that will be targeted in the upcoming phase of SAFOD. The microseismic monitoring software MIMO, developed by NORSAR, has been installed at SAFOD to provide near-real-time locations and magnitude estimates using the high-sampling-rate (4000 Hz) waveform data. To improve the detection and location accuracy, we incorporate data from the nearby, shallow borehole (~250 m) seismometers of the High Resolution Seismic Network (HRSN). The event association algorithm of the MIMO software incorporates HRSN detections provided by the USGS real-time Earthworm software. The concept of the new event association is based on generalized beamforming, primarily used in array seismology. The method requires the pre-computation of theoretical travel times in a 3D grid of potential microearthquake locations to the seismometers of the current station network. By minimizing the differences between theoretical and observed detection times, an event is associated and the location accuracy is significantly improved.
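
    The association concept, precomputed travel times on a grid of trial locations matched against observed detection times, can be sketched in a few lines of numpy (a homogeneous velocity and made-up station geometry are assumed):

    ```python
    import numpy as np

    # Minimal sketch of the grid-search idea: precompute theoretical travel times on
    # a 3D grid of trial locations and pick the node minimizing the misfit between
    # theoretical and observed detection times.
    v = 5.5  # km/s, assumed uniform P velocity
    stations = np.array([[0, 0, 0.25], [4, 1, 0.25], [1, 5, 0.25], [6, 6, 0.0]])  # km

    gx, gy, gz = np.meshgrid(np.linspace(0, 8, 40), np.linspace(0, 8, 40),
                             np.linspace(0, 6, 30), indexing="ij")
    grid = np.stack([gx, gy, gz], axis=-1)                           # (40,40,30,3)

    tt = np.linalg.norm(grid[..., None, :] - stations, axis=-1) / v  # travel times

    true_src, t0 = np.array([3.0, 4.0, 3.1]), 10.0
    obs = t0 + np.linalg.norm(true_src - stations, axis=-1) / v

    # Remove the unknown origin time by demeaning, then minimize the RMS residual
    resid = (tt - tt.mean(axis=-1, keepdims=True)) - (obs - obs.mean())
    best = np.unravel_index(np.argmin((resid**2).sum(axis=-1)), gx.shape)
    print("best-fit node (km):", grid[best])
    ```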

  13. Satellite rainfall monitoring over Africa using multi-spectral MSG data in an artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Chadwick, Robin; Grimes, David

    2010-05-01

    Rainfall monitoring over Africa is crucial for a variety of humanitarian and agricultural purposes, and satellites have been used for some time to provide real-time rainfall estimates over the region. Several recent applications of satellite rainfall estimates, such as flash-flood warning systems and crop-yield models, require accurate rainfall totals at daily timescales or below. Multi-spectral Meteosat Second Generation (MSG) data provide information on cloud properties such as optical depth and cloud particle size and phase. These parameters are all relevant to the probability of rainfall occurring from a cloud and the likely intensity of that rainfall, so the use of MSG data should lead to improved satellite rainfall estimates. An artificial neural network (ANN) using multi-spectral inputs from MSG has been trained to provide daily rainfall estimates over Ethiopia, using daily rain-gauge data for calibration. Although ANN methods have previously been applied to the problem of producing rainfall estimates from multi-spectral satellite data, in general precipitation radar data have been used for calibration. The advantage of using rain-gauge data is that gauges are far more widespread over Africa than radar networks, so this method can be easily transferred and if necessary re-calibrated in different climatological regions of the continent. The ANN estimates have been validated against independent Ethiopian gauge data at a variety of time and space scales. The ANN shows an improvement in accuracy at daily timescale when compared to rainfall estimates from the TAMSAT algorithm, which uses only single channel MSG data.

  14. Balanced bilinguals favor lexical processing in their opaque language and conversion system in their shallow language.

    PubMed

    Buetler, Karin A; de León Rodríguez, Diego; Laganaro, Marina; Müri, René; Nyffeler, Thomas; Spierer, Lucas; Annoni, Jean-Marie

    2015-11-01

    Referred to as orthographic depth, the degree of consistency of grapheme/phoneme correspondences varies across languages, from high in shallow orthographies to low in deep orthographies. The present study investigates the impact of orthographic depth on the reading route by analyzing evoked potentials to words in a deep (French) and a shallow (German) language presented to highly proficient bilinguals. ERP analyses of German and French words revealed significant topographic modulations 240-280 ms post-stimulus onset, indicative of distinct brain networks engaged in reading over this time window. Source estimations revealed that these effects stemmed from modulations of left insular, inferior frontal, and dorsolateral regions (German>French) previously associated with phonological processing. Our results show that reading in a shallow language was associated with a stronger engagement of phonological pathways than reading in a deep language. Thus, the lexical pathways favored in word reading are reinforced by phonological networks more strongly in the shallow than in the deep orthography. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Hand pose estimation in depth image using CNN and random forest

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Cao, Zhiguo; Xiao, Yang; Fang, Zhiwen

    2018-03-01

    Thanks to the availability of low-cost depth cameras like Microsoft Kinect, 3D hand pose estimation has attracted special research attention in recent years. Due to the large variations in hand viewpoint and the high dimensionality of hand motion, 3D hand pose estimation is still challenging. In this paper we propose a two-stage framework combining a CNN with a Random Forest to boost the performance of hand pose estimation. First, we use a standard Convolutional Neural Network (CNN) to regress the hand joints' locations. Second, a Random Forest refines the joints from the first stage. In the second stage, we propose a pyramid feature which merges the information flow of the CNN. Specifically, we obtain the rough joints' locations from the first stage, then rotate the convolutional feature maps (and image). After this, for each joint, we map its location to each feature map (and the image), crop features around that location on each feature map (and the image), and finally pass the extracted features to the Random Forest for refinement. Experimentally, we evaluate our proposed method on the ICVL dataset and obtain a mean error of about 11 mm; our method also runs in real time on a desktop.

  16. Accessibility assessment of Houston's roadway network during Harvey through integration of observed flood impacts and hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Gidaris, I.; Gori, A.; Panakkal, P.; Padgett, J.; Bedient, P. B.

    2017-12-01

    The record-breaking rainfall produced over the Houston region by Hurricane Harvey resulted in catastrophic and unprecedented impacts on the region's infrastructure. Notably, Houston's transportation network was crippled, with almost every major highway flooded during the five-day event. Entire neighborhoods and subdivisions were inundated, rendering them completely inaccessible to rescue crews and emergency services. Harvey has tragically highlighted the vulnerability of major thoroughfares, as well as neighborhood roads, to severe inundation during extreme precipitation events. Furthermore, it has emphasized the need for detailed accessibility characterization of road networks under extreme event scenarios in order to determine which areas of the city are most vulnerable. This analysis assesses and tracks the accessibility of Houston's major highways during Harvey's evolution by utilizing road flood/closure data from the Texas DOT. In the absence of flood/closure data for local roads, a hybrid approach is adopted that utilizes a physics-based hydrologic model to produce high-resolution inundation estimates for selected urban watersheds in the Houston area. In particular, hydrologic output in the form of inundation depths is used to estimate the operability of local roads. Ultimately, integration of hydrologic-based estimation of road conditions with observed data from DOT supports a network accessibility analysis of selected urban neighborhoods. This accessibility analysis can identify operable routes for emergency response (rescue crews, medical services, etc.) during the storm event.

  17. Creating a water depth map from Earth Observation-derived flood extent and topography data

    NASA Astrophysics Data System (ADS)

    Matgen, Patrick; Giustarini, Laura; Chini, Marco; Hostache, Renaud; Pelich, Ramona; Schlaffer, Stefan

    2017-04-01

    Enhanced methods for monitoring temporal and spatial variations of water depth in rivers and floodplains are very important in operational water management. Currently, variations of water elevation can be estimated indirectly at the land-water interface using sequences of satellite EO imagery in combination with topographic data. In recent years, high-resolution digital elevation models (DEMs) and satellite EO data have become more readily available at the global scale. This study introduces an approach for efficiently converting remote sensing-derived flood extent maps into water depth maps using a floodplain's topography. For this we make the assumption of uniform flow, that is, the depth of flow with respect to the drainage network is considered to be the same at every section of the floodplain. In other words, the depth of water above the nearest drainage is expected to be constant for a given river reach. To determine this value we first need the Height Above Nearest Drainage (HAND) raster, obtained by using the area of interest's DEM as source topography and a shapefile of the river network. The HAND model normalizes the topography with respect to the drainage network. Next, the HAND raster is thresholded in order to generate a binary mask that optimally fits, over the entire region of study, the flood extent map obtained from SAR or any other remote sensing product, including aerial photographs. The optimal threshold value corresponds to the height of the water line above the nearest drainage, termed HANDWATER, and is considered constant for a given subreach. Once HANDWATER has been optimized, a water depth map can be generated by subtracting the value of the HAND raster at each location from this parameter value. These developments enable large-scale and near-real-time applications and only require readily available EO data, a DEM, and the river network as input data. The method relies on a hierarchical split-based approach that subdivides a drainage network into segments of variable length with evidence of uniform flow. The method has been tested with remote sensing data and DEM data that differ in terms of spatial resolution and accuracy. A comprehensive evaluation of the obtained water depth maps against hydrodynamic modelling results and in situ water level recordings was carried out on a reach of the River Severn in the United Kingdom. First results show that the root mean squared difference is 10 cm when using high-resolution, high-precision data sets (i.e. aerial photographs of flood extent and a LiDAR-derived DEM) and about 50 cm when using as inputs moderate-resolution SAR imagery from ENVISAT and an SRTM-derived DEM.
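
    A minimal sketch of the core HAND thresholding step, with a synthetic HAND raster and flood mask standing in for the EO and DEM inputs (the hierarchical split-based segmentation is omitted):

    ```python
    import numpy as np

    # Minimal sketch of converting a flood-extent mask into water depths via HAND:
    # find the HAND threshold (HANDWATER) whose flooded mask best matches the
    # observed extent, then depth = HANDWATER - HAND inside the flooded area.
    rng = np.random.default_rng(6)
    hand = np.abs(rng.normal(2.0, 1.5, size=(200, 200)))   # synthetic HAND raster, m
    observed = hand < 1.2                                   # stand-in for an EO flood map

    def fit_score(threshold):
        predicted = hand < threshold
        return np.logical_and(predicted, observed).sum() / np.logical_or(predicted, observed).sum()

    thresholds = np.arange(0.1, 5.0, 0.05)
    handwater = thresholds[np.argmax([fit_score(t) for t in thresholds])]

    depth = np.where(hand < handwater, handwater - hand, 0.0)
    print(f"HANDWATER = {handwater:.2f} m, max depth = {depth.max():.2f} m")
    ```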

  18. Detection of Coal Fires: A Case Study Conducted on Indian Coal Seams Using Neural Network and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Singh, B. B.

    2016-12-01

    India produces the majority of its electricity from coal, but a huge quantity of coal burns every day due to coal fires, which also pose a threat to the environment as severe pollutants. In the present study we demonstrate a Neural Network based approach with an integrated Particle Swarm Optimization (PSO) inversion technique. A Self Potential (SP) data set is used for the early detection of coal fires. The study was conducted over the East Basuria colliery, Jharia Coal Field, Jharkhand, India. The causative source was modelled as an inclined sheet-like anomaly and synthetic data were generated. The Neural Network scheme consists of an input layer, hidden layers and an output layer; the input layer corresponds to the SP data and the output layer is the estimated depth of the coal fire. A synthetic dataset was modelled with known parameters associated with the causative body, such as depth, conductivity, inclination angle and half-width, and gave a very low misfit error of 0.0032%; the method was therefore found accurate in predicting the depth of the source body. The technique was applied to the real data set and the model was trained until a very good coefficient of determination (R²) of 0.98 was obtained. The depth of the source body was found to be 12.34 m with a misfit error of 0.242%. The inversion results were compared with the lithologs obtained from a nearby well, which correspond to the L3 coal seam. The depth of the coal fire matched the half-width of the anomaly, which suggests that the fire is widely spread. The inclination angle of the anomaly was 135.51°, which points to the development of geometrically complex fracture planes. These fractures may develop due to anisotropic weakness of the ground, which acts as a passage for air; as a result, coal fires spread along these fracture planes. The results obtained from the Neural Network were compared with the PSO inversion results and found to be in complete agreement. PSO is already a well-established technique for modelling SP anomalies. Therefore, for successful control and mitigation, SP surveys coupled with Neural Network and PSO techniques prove to be a novel and economical approach alongside other existing geophysical techniques. Keywords: PSO, Coal fire, Self-Potential, Inversion, Neural Network
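
    As a sketch of the PSO inversion component, the classic inclined-sheet SP expression can be inverted with a basic particle swarm (swarm settings, noise levels, and parameter bounds are illustrative assumptions, not the study's configuration):

    ```python
    import numpy as np

    # Minimal sketch of PSO inversion of an SP anomaly for an inclined sheet
    # (standard 2D sheet formula; parameters K, x0, h, a, alpha).
    def sp_sheet(x, K, x0, h, a, alpha):
        num = (x - x0 - a * np.cos(alpha))**2 + (h - a * np.sin(alpha))**2
        den = (x - x0 + a * np.cos(alpha))**2 + (h + a * np.sin(alpha))**2
        return K * np.log(num / den)

    rng = np.random.default_rng(7)
    x = np.linspace(-60, 60, 121)
    obs = sp_sheet(x, -40.0, 2.0, 12.3, 8.0, np.deg2rad(135.5)) + rng.normal(0, 0.5, x.size)

    lo = np.array([-100, -20, 1, 1, 0.0]); hi = np.array([100, 20, 30, 20, np.pi])
    pos = rng.uniform(lo, hi, (40, 5)); vel = np.zeros_like(pos)
    pbest = pos.copy()
    misfit = lambda p: np.sqrt(np.mean((sp_sheet(x, *p) - obs)**2))
    pcost = np.array([misfit(p) for p in pos])

    for _ in range(300):
        g = pbest[np.argmin(pcost)]                      # global best particle
        vel = 0.7 * vel + 1.5 * rng.random((40, 5)) * (pbest - pos) \
                        + 1.5 * rng.random((40, 5)) * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        cost = np.array([misfit(p) for p in pos])
        better = cost < pcost
        pbest[better], pcost[better] = pos[better], cost[better]

    print("estimated depth h =", round(pbest[np.argmin(pcost)][2], 2), "m")
    ```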

  19. Artificial neural network (ANN)-based prediction of depth filter loading capacity for filter sizing.

    PubMed

    Agarwal, Harshit; Rathore, Anurag S; Hadpe, Sandeep Ramesh; Alva, Solomon J

    2016-11-01

    This article presents an application of artificial neural network (ANN) modelling towards prediction of depth filter loading capacity for clarification of a monoclonal antibody (mAb) product during commercial manufacturing. The effect of operating parameters on filter loading capacity was evaluated based on the analysis of the change in differential pressure (DP) as a function of time. The proposed ANN model uses inlet stream properties (feed turbidity, feed cell count, feed cell viability), flux, and time to predict the corresponding DP. The ANN contained a single hidden layer with ten neurons and one output layer, and employed a sigmoidal activation function. This network was trained with 174 training points, 37 validation points, and 37 test points. Further, a pressure cut-off of 1.1 bar was used for sizing the filter area required under each operating condition. The modelling results showed that there was excellent agreement between the predicted and experimental data, with a regression coefficient (R2) of 0.98. The developed ANN model was used for performing variable depth filter sizing for different clarification lots. Monte Carlo simulation was performed to estimate the cost savings of using different filter areas for different clarification lots rather than using the same filter area. A 10% saving in cost of goods was obtained for this operation. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1436-1443, 2016. © 2016 American Institute of Chemical Engineers.
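
    A minimal scikit-learn analogue of the network described above (one sigmoidal hidden layer of ten neurons, five inputs, one DP output) is sketched below on synthetic data; the feature ranges and the toy DP relation are invented for illustration, and the published model was trained on real clarification runs.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 248  # 174 train + 37 validation + 37 test points, as in the abstract
    X = np.column_stack([
        rng.uniform(5, 50, n),     # feed turbidity (NTU)           - assumed range
        rng.uniform(1, 20, n),     # feed cell count (1e6 cells/mL) - assumed range
        rng.uniform(50, 100, n),   # feed cell viability (%)        - assumed range
        rng.uniform(30, 120, n),   # flux (L/m2/h)                  - assumed range
        rng.uniform(0, 240, n),    # time (min)                     - assumed range
    ])
    # Toy DP response rising with throughput and turbidity (illustrative only).
    dp = 0.1 + 0.004*X[:, 4]*X[:, 3]/60 * (X[:, 0]/25) + rng.normal(0, 0.05, n)

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                     max_iter=5000, random_state=0),
    )
    model.fit(X[:174], dp[:174])
    print("R^2 on held-out points:", round(model.score(X[211:], dp[211:]), 3))
    # Filter sizing: for a given condition, the time at which predicted DP
    # crosses the 1.1 bar cut-off sets the loading capacity (and thus area).
    ```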

  20. Integrating Depth and Image Sequences for Planetary Rover Mapping Using RGB-D Sensor

    NASA Astrophysics Data System (ADS)

    Peng, M.; Wan, W.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Zhao, Q.; Teng, B.; Mao, X.

    2018-04-01

    An RGB-D camera allows the capture of depth and color information at high data rates, and this makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data and visual texture data is established based on the imaging principle of the RGB-D camera; then, an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, as the interior and exterior orientation elements of the RGB image sequence are available, dense matching is completed with the CMPMVS tool. Finally, according to the registration parameters after ICP, the 3D scene from the RGB images can be registered well to the 3D scene from the depth images, and the fused point cloud can be obtained. An experiment was performed in an outdoor field to simulate the lunar surface. The experimental results demonstrated the feasibility of the proposed method.

  1. Annual Hanford Seismic Report for Fiscal Year 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohay, Alan C.; Sweeney, Mark D.; Hartshorn, Donald C.

    2009-12-31

    The Hanford Seismic Assessment Program (HSAP) provides an uninterrupted collection of high-quality raw and processed seismic data from the Hanford Seismic Network for the U.S. Department of Energy and its contractors. The HSAP is responsible for locating and identifying sources of seismic activity and monitoring changes in the historical pattern of seismic activity at the Hanford Site. The data are compiled, archived, and published for use by the Hanford Site for waste management, natural phenomena hazards assessments, and engineering design and construction. In addition, the HSAP works with the Hanford Site Emergency Services Organization to provide assistance in the event of a significant earthquake on the Hanford Site. The Hanford Seismic Network and the Eastern Washington Regional Network consist of 44 individual sensor sites and 15 radio relay sites maintained by the Hanford Seismic Assessment Team. During FY 2009, the Hanford Seismic Network recorded nearly 3000 triggers on the seismometer system, which included over 1700 seismic events in the southeast Washington area and an additional 370 regional and teleseismic events. There were 1648 events determined to be local earthquakes relevant to the Hanford Site. Nearly all of these earthquakes were detected in the vicinity of Wooded Island, located about eight miles north of Richland just west of the Columbia River. Recording of the Wooded Island events began in January with over 250 events per month through June 2009. The frequency of events decreased starting in July 2009 to approximately 10-15 events per month through September 2009. Most of the events were considered minor (coda-length magnitude [Mc] less than 1.0), with 47 events in the 2.0-3.0 range. The estimated depths of the Wooded Island events are shallow (averaging less than 1.0 km deep) with a maximum depth estimated at 2.3 km. This places the Wooded Island events within the Columbia River Basalt Group (CRBG). The highest-magnitude event (3.0 Mc) occurred on May 13, 2009 within the Wooded Island swarm at a depth of 1.8 km. With regard to the depth distribution, 1613 earthquakes were located at shallow depths (less than 4 km, most likely in the Columbia River basalts), 18 earthquakes were located at intermediate depths (between 4 and 9 km, most likely in the pre-basalt sediments), and 17 earthquakes were located at depths greater than 9 km, within the crystalline basement. Geographically, 1630 earthquakes were located in swarm areas and 18 earthquakes were classified as random events. The low magnitude of the Wooded Island events has made them undetectable to all but local area residents. However, some Hanford employees working within a few miles of the area of highest activity and individuals living in homes directly across the Columbia River from the swarm center have reported feeling many of the larger magnitude events. The Hanford Strong Motion Accelerometer (SMA) network was triggered numerous times by the Wooded Island swarm events. The maximum acceleration value recorded by the SMA network was approximately 3 times lower than the reportable action level for Hanford facilities (2% g) and no action was required. The swarming is likely due to pressure that has built up, cracking the brittle basalt layers within the CRBG. Similar earthquake “swarms” have been recorded near this same location in 1970, 1975 and 1988. Prior to the 1970s, swarming may have occurred, but equipment was not in place to record those events. Quakes of this limited magnitude do not pose a risk to Hanford cleanup efforts or waste storage facilities. Since swarms of the past did not intensify in magnitude, seismologists do not expect that these events will increase in intensity. However, Pacific Northwest National Laboratory (PNNL) will continue to monitor the activity.

  2. The Everglades Depth Estimation Network (EDEN) surface-water model, version 2

    USGS Publications Warehouse

    Telis, Pamela A.; Xie, Zhixiao; Liu, Zhongwei; Li, Yingru; Conrads, Paul

    2015-01-01

    Three applications of the EDEN-modeled water surfaces and other EDEN datasets are presented in the report to show how scientists and resource managers are using EDEN datasets to analyze biological and ecological responses to hydrologic changes in the Everglades. The biological responses of two important Everglades species, alligators and wading birds, to changes in hydrology are described. The effects of hydrology on fire dynamics in the Everglades are also discussed.

  3. Application of RBFN network and GM (1, 1) for groundwater level simulation

    NASA Astrophysics Data System (ADS)

    Li, Zijun; Yang, Qingchun; Wang, Luchen; Martín, Jordi Delgado

    2017-10-01

    Groundwater is a prominent resource of drinking and domestic water in the world. In this context, a feasible water resources management plan necessitates acceptable predictions of groundwater table depth fluctuations, which can help ensure the sustainable use of a watershed's aquifers for urban and rural water supply. Due to the difficulties of identifying non-linear model structure and estimating the associated parameters, in this study radial basis function neural network (RBFNN) and GM (1, 1) models are used for the prediction of monthly groundwater level fluctuations in the city of Longyan, Fujian Province (South China). The monthly groundwater level data monitored from January 2003 to December 2011 are used in both models. The error criteria are estimated using the coefficient of determination (R2), mean absolute error (MAE) and root mean squared error (RMSE). The results show that both models can forecast the groundwater level with fairly high accuracy, but the RBFNN model can be a promising tool to simulate and forecast groundwater levels since it has a relatively smaller RMSE and MAE.
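
    A bare-bones RBF network of the kind used here can be written in a few lines of NumPy: Gaussian basis functions around chosen centres and a linear least-squares solve for the output weights. The lag structure, centre selection and synthetic series below are assumptions for illustration, not the paper's configuration (and the GM(1,1) grey model is not shown).

    ```python
    import numpy as np

    class RBFN:
        """Gaussian radial-basis-function network; linear output weights are
        fit by least squares (a minimal sketch)."""
        def __init__(self, centers, sigma):
            self.centers, self.sigma = centers, sigma
        def _phi(self, X):
            d2 = ((X[:, None, :] - self.centers[None, :, :])**2).sum(-1)
            return np.exp(-d2 / (2 * self.sigma**2))
        def fit(self, X, y):
            Phi = np.column_stack([self._phi(X), np.ones(len(X))])  # bias term
            self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
            return self
        def predict(self, X):
            return np.column_stack([self._phi(X), np.ones(len(X))]) @ self.w

    # Synthetic monthly levels, Jan 2003 - Dec 2011 (108 months), with a
    # seasonal cycle; predict h[t] from the three previous months.
    rng = np.random.default_rng(0)
    t = np.arange(108)
    h = 10 + 1.5*np.sin(2*np.pi*t/12) + rng.normal(0, 0.2, t.size)
    lags = 3
    X = np.column_stack([h[i:len(h) - lags + i] for i in range(lags)])
    y = h[lags:]
    model = RBFN(centers=X[:84:12], sigma=1.0).fit(X[:84], y[:84])
    rmse = np.sqrt(np.mean((model.predict(X[84:]) - y[84:])**2))
    print(f"hold-out RMSE: {rmse:.3f} m")
    ```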

  4. Mapping snow depth return levels: smooth spatial modeling versus station interpolation

    NASA Astrophysics Data System (ADS)

    Blanchet, J.; Lehning, M.

    2010-12-01

    For adequate risk management in mountainous countries, hazard maps for extreme snow events are needed. This requires the computation of spatial estimates of return levels. In this article we use recent developments in extreme value theory and compare two main approaches for mapping snow depth return levels from in situ measurements. The first one is based on the spatial interpolation of pointwise extremal distributions (the so-called Generalized Extreme Value distribution, GEV henceforth) computed at station locations. The second one is new and based on the direct estimation of a spatially smooth GEV distribution with the joint use of all stations. We compare and validate the different approaches for modeling annual maximum snow depth measured at 100 sites in Switzerland during winters 1965-1966 to 2007-2008. The results show a better performance of the smooth GEV distribution fitting, in particular where the station network is sparser. Smooth return level maps can be computed from the fitted model without any further interpolation. Their regional variability can be revealed by removing the altitudinal dependent covariates in the model. We show how return levels and their regional variability are linked to the main climatological patterns of Switzerland.
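
    The station-wise baseline in this comparison amounts to fitting a GEV to each station's annual maxima and reading off return levels; a sketch with SciPy on synthetic maxima is shown below (note that SciPy's shape parameter c is the negative of the usual GEV xi). The smooth-modelling alternative ties the GEV parameters to spatial covariates such as altitude, which is not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    # 43 synthetic winters of annual maximum snow depth (cm) at one station.
    annual_max = genextreme.rvs(c=-0.1, loc=120, scale=30, size=43,
                                random_state=np.random.default_rng(0))

    c, loc, scale = genextreme.fit(annual_max)        # pointwise GEV fit
    T = 50                                            # return period (years)
    z_T = genextreme.ppf(1 - 1/T, c, loc=loc, scale=scale)
    print(f"estimated {T}-year snow depth return level: {z_T:.0f} cm")
    ```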

  5. Crustal structure of north Peru from analysis of teleseismic receiver functions

    NASA Astrophysics Data System (ADS)

    Condori, Cristobal; França, George S.; Tavera, Hernando J.; Albuquerque, Diogo F.; Bishop, Brandon T.; Beck, Susan L.

    2017-07-01

    In this study, we present results from teleseismic receiver functions in order to investigate the crustal thickness and Vp/Vs ratio beneath northern Peru. A total of 981 receiver functions were analyzed, from data recorded by 28 broadband seismic stations of the Peruvian permanent seismic network, the regional temporary SisNort network, and one CTBTO station. The Moho depth and average crustal Vp/Vs ratio were determined at each station using the H-k stacking technique to identify the arrival times of the primary P-to-S conversion and crustal reverberations (PpPms, PpSs + PsPms). The results show that the Moho depth correlates well with the surface topography and varies significantly from west to east, showing a shallow depth of around 25 km near the coast, a maximum depth of 55-60 km beneath the Andean Cordillera, and a depth of 35-40 km further to the east in the Amazonian Basin. The bulk crustal Vp/Vs ratio ranges between 1.60 and 1.88 with a mean of 1.75. Higher values between 1.75 and 1.88 are found beneath the Eastern and Western Cordilleras, consistent with a mafic composition in the lower crust. In contrast, values from 1.60 to 1.75 on the outer flanks of the Eastern and Western Cordilleras indicate a felsic composition. We find a positive relationship between crustal thickness, Vp/Vs ratio, the Bouguer anomaly, and topography. These results are consistent with previous studies in other parts of Peru (central and southern regions) and provide the first crustal thickness estimates for the high cordillera in northern Peru.
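
    The H-k stacking step can be summarized compactly: for each trial crustal thickness H and Vp/Vs ratio k, sum the receiver-function amplitudes at the predicted Ps, PpPms and PpSms+PsPms times and keep the (H, k) pair that maximizes the weighted stack (Zhu and Kanamori, 2000). The sketch below demonstrates this on spike-like synthetic receiver functions; the weights and average Vp are illustrative choices, not the values used in the study.

    ```python
    import numpy as np

    def hk_stack(rf, t, p, H_grid, k_grid, vp=6.3, w=(0.6, 0.3, 0.1)):
        """rf: (n_rf, n_t) radial receiver functions, t: time axis (s),
        p: ray parameters (s/km), vp: mean crustal P velocity (km/s)."""
        stack = np.zeros((len(H_grid), len(k_grid)))
        for i, H in enumerate(H_grid):
            for j, k in enumerate(k_grid):
                for rfun, pp in zip(rf, p):
                    qa = np.sqrt(1/vp**2 - pp**2)        # vertical P slowness
                    qb = np.sqrt((k/vp)**2 - pp**2)      # vertical S slowness
                    times = [H*(qb - qa),                # Ps conversion
                             H*(qb + qa),                # PpPms multiple
                             2*H*qb]                     # PpSms + PsPms (negative)
                    amp = np.interp(times, t, rfun)
                    stack[i, j] += w[0]*amp[0] + w[1]*amp[1] - w[2]*amp[2]
        return stack

    # Synthetic test: Gaussian pulses at the arrival times for H=55 km, k=1.75.
    t = np.linspace(0, 35, 1751)
    p = np.array([0.05, 0.06, 0.07])
    rf = np.zeros((3, t.size))
    for r_, pp in zip(rf, p):
        qa, qb = np.sqrt(1/6.3**2 - pp**2), np.sqrt((1.75/6.3)**2 - pp**2)
        for arrival, sign in [(55*(qb - qa), 1), (55*(qb + qa), 1), (2*55*qb, -1)]:
            r_ += sign * np.exp(-0.5*((t - arrival)/0.3)**2)

    H_grid = np.arange(20.0, 65.0, 0.5)
    k_grid = np.arange(1.60, 1.90, 0.01)
    s = hk_stack(rf, t, p, H_grid, k_grid)
    iH, ik = np.unravel_index(np.argmax(s), s.shape)
    print(f"Moho depth H = {H_grid[iH]:.1f} km, Vp/Vs = {k_grid[ik]:.2f}")
    ```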

  6. EDOVE: Energy and Depth Variance-Based Opportunistic Void Avoidance Scheme for Underwater Acoustic Sensor Networks

    PubMed Central

    Eun, Yongsoon

    2017-01-01

    Underwater Acoustic Sensor Network (UASN) comes with intrinsic constraints because it is deployed in the aquatic environment and uses the acoustic signals to communicate. The examples of those constraints are long propagation delay, very limited bandwidth, high energy cost for transmission, very high signal attenuation, costly deployment and battery replacement, and so forth. Therefore, the routing schemes for UASN must take into account those characteristics to achieve energy fairness, avoid energy holes, and improve the network lifetime. The depth based forwarding schemes in literature use node’s depth information to forward data towards the sink. They minimize the data packet duplication by employing the holding time strategy. However, to avoid void holes in the network, they use two hop node proximity information. In this paper, we propose the Energy and Depth variance-based Opportunistic Void avoidance (EDOVE) scheme to gain energy balancing and void avoidance in the network. EDOVE considers not only the depth parameter, but also the normalized residual energy of the one-hop nodes and the normalized depth variance of the second hop neighbors. Hence, it avoids the void regions as well as balances the network energy and increases the network lifetime. The simulation results show that the EDOVE gains more than 15% packet delivery ratio, propagates 50% less copies of data packet, consumes less energy, and has more lifetime than the state of the art forwarding schemes. PMID:28954395

  7. EDOVE: Energy and Depth Variance-Based Opportunistic Void Avoidance Scheme for Underwater Acoustic Sensor Networks.

    PubMed

    Bouk, Safdar Hussain; Ahmed, Syed Hassan; Park, Kyung-Joon; Eun, Yongsoon

    2017-09-26

    Underwater Acoustic Sensor Network (UASN) comes with intrinsic constraints because it is deployed in the aquatic environment and uses the acoustic signals to communicate. The examples of those constraints are long propagation delay, very limited bandwidth, high energy cost for transmission, very high signal attenuation, costly deployment and battery replacement, and so forth. Therefore, the routing schemes for UASN must take into account those characteristics to achieve energy fairness, avoid energy holes, and improve the network lifetime. The depth based forwarding schemes in literature use node's depth information to forward data towards the sink. They minimize the data packet duplication by employing the holding time strategy. However, to avoid void holes in the network, they use two hop node proximity information. In this paper, we propose the Energy and Depth variance-based Opportunistic Void avoidance (EDOVE) scheme to gain energy balancing and void avoidance in the network. EDOVE considers not only the depth parameter, but also the normalized residual energy of the one-hop nodes and the normalized depth variance of the second hop neighbors. Hence, it avoids the void regions as well as balances the network energy and increases the network lifetime. The simulation results show that the EDOVE gains more than 15 % packet delivery ratio, propagates 50 % less copies of data packet, consumes less energy, and has more lifetime than the state of the art forwarding schemes.
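
    A toy rendering of the candidate-scoring idea is given below: each potential relay is scored from its depth advance, its normalized residual energy, and the depth variance of its second-hop neighbours, and higher-scoring relays are assigned shorter holding times so they transmit first and suppress duplicates. The weighting and the holding-time formula are invented stand-ins; the paper defines its own expressions.

    ```python
    import numpy as np

    def forwarding_score(sender_depth, cand_depth, residual_energy, e_max,
                         second_hop_depths):
        """Illustrative EDOVE-style score: favour relays that move the packet
        toward the surface, hold high normalized residual energy, and whose
        second-hop neighbourhood shows high normalized depth variance
        (diverse onward routes). The paper's exact weighting differs."""
        advance = max(sender_depth - cand_depth, 0.0)  # metres toward the sink
        energy = residual_energy / e_max               # normalized to 0..1
        var = np.var(second_hop_depths)
        return advance * (0.5*energy + 0.5*var/(var + 1.0))

    def holding_time(score, score_max, t_max=2.0):
        """Better candidates wait less, forward first, and their transmission,
        once overheard, suppresses duplicate copies at the other candidates."""
        return t_max * (1.0 - score/score_max)

    # Two candidate relays for a sender at 500 m depth, battery capacity 100 J.
    cands = [
        dict(depth=420.0, energy=80.0, second_hop=[380.0, 300.0, 410.0]),
        dict(depth=430.0, energy=95.0, second_hop=[415.0, 412.0, 418.0]),
    ]
    scores = [forwarding_score(500.0, c["depth"], c["energy"], 100.0,
                               c["second_hop"]) for c in cands]
    for c, s in zip(cands, scores):
        print(f"relay at {c['depth']:.0f} m: score {s:.1f}, "
              f"holding time {holding_time(s, max(scores)):.2f} s")
    ```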

  8. Estimating the Depth of Stratigraphic Units from Marine Seismic Profiles Using Nonstationary Geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chihi, Hayet; Galli, Alain; Ravenne, Christian

    2000-03-15

    The object of this study is to build a three-dimensional (3D) geometric model of the stratigraphic units of the margin of the Rhone River on the basis of geophysical investigations by a network of seismic profiles at sea. The geometry of these units is described by depth charts of each surface identified by seismic profiling, which is done by geostatistics. The modeling starts with a statistical analysis by which we determine the parameters that enable us to calculate the variograms of the identified surfaces. After having determined the statistical parameters, we calculate the variograms of the variable Depth. By analyzing the behavior of the variogram we then can deduce whether the situation is stationary and whether the variable has an anisotropic behavior. We tried the following two nonstationary methods to obtain our estimates: (a) the method of universal kriging if the underlying variogram was directly accessible; (b) the method of increments if the underlying variogram was not directly accessible. After having modeled the variograms of the increments and of the variable itself, we calculated the surfaces by kriging the variable Depth on a small-mesh estimation grid. The two methods then are compared and their respective advantages and disadvantages are discussed, as well as their fields of application. These methods are capable of being used widely in earth sciences for automatic mapping of geometric surfaces or for variables such as a piezometric surface or a concentration, which are not 'stationary,' that is, essentially, possess a gradient or a tendency to develop systematically in space.
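
    The diagnostic at the heart of this workflow, the experimental variogram of the variable Depth, is easy to sketch; on the synthetic nonstationary surface below it keeps growing with lag instead of reaching a sill, which is the signature of the drift that universal kriging or the method of increments must handle. Names and values are illustrative.

    ```python
    import numpy as np

    def empirical_variogram(coords, values, lags, tol):
        """gamma(h) = 0.5 * mean[(Z(x) - Z(x+h))^2] over point pairs whose
        separation falls within tol of each lag h."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        dz2 = (values[:, None] - values[None, :])**2
        gamma = []
        for h in lags:
            pairs = np.triu(np.abs(d - h) <= tol, k=1)
            gamma.append(0.5 * dz2[pairs].mean() if pairs.any() else np.nan)
        return np.array(gamma)

    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 10_000, size=(200, 2))   # picked points (m)
    drift = 0.02 * coords[:, 0]                      # systematic dip of the unit
    depth = 500 + drift + rng.normal(0, 5, 200)      # depth of one surface (m)

    lags = np.arange(500, 5000, 500)
    print(np.round(empirical_variogram(coords, depth, lags, tol=250), 1))
    # grows steadily with lag instead of levelling off: a drift is present,
    # hence the nonstationary treatment described above
    ```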

  9. Temperature and electrical conductivity of the lunar interior from magnetic transient measurements in the geomagnetic tail

    NASA Technical Reports Server (NTRS)

    Dyal, P.; Parkin, C. W.; Daily, W. D.

    1974-01-01

    Magnetometers were deployed at four Apollo sites on the moon to measure remanent and induced lunar magnetic fields. Measurements from this network of instruments were used to calculate the electrical conductivity, temperature, magnetic permeability, and iron abundance of the lunar interior. Global lunar fields due to eddy currents, induced in the lunar interior by magnetic transients in the geomagnetic tail field, were analyzed to calculate an electrical conductivity profile for the moon: the conductivity increases rapidly with depth from 10^-9 mhos/meter at the lunar surface to 10^-4 mhos/meter at 200 km depth, then less rapidly to 2 x 10^-2 mhos/meter at 1000 km depth. A temperature profile is calculated from conductivity: temperature rises rapidly with depth to 1100 K at 200 km depth, then less rapidly to 1800 K at 1000 km depth. Velocities and thicknesses of the earth's magnetopause and bow shock are estimated from simultaneous magnetometer measurements. Average speeds are determined to be about 50 km/sec for the magnetopause and 70 km/sec for the bow shock, although there are large variations in the measurements for any particular boundary crossing.

  10. Quality control of the RMS US flood model

    NASA Astrophysics Data System (ADS)

    Jankowfsky, Sonja; Hilberts, Arno; Mortgat, Chris; Li, Shuangcai; Rafique, Farhat; Rajesh, Edida; Xu, Na; Mei, Yi; Tillmanns, Stephan; Yang, Yang; Tian, Ye; Mathur, Prince; Kulkarni, Anand; Kumaresh, Bharadwaj Anna; Chaudhuri, Chiranjib; Saini, Vishal

    2016-04-01

    The RMS US flood model predicts flood risk in the US at a 30 m resolution for different return periods. The model is designed for the insurance industry to estimate the cost of flood risk for a given location. Different statistical, hydrological and hydraulic models are combined to develop the flood maps for different return periods. A rainfall-runoff and routing model, calibrated with observed discharge data, is run with 10,000 years of stochastically simulated precipitation to create time series of discharge and surface runoff. The 100-, 250- and 500-year events are extracted from these time series as forcing for a two-dimensional pluvial and fluvial inundation model. The coupling of all the different models, which are run over the large area of the US, implies a certain amount of uncertainty. Therefore, special attention is paid to the final quality control of the flood maps. First, a thorough quality analysis of the Digital Terrain Model (DTM) and the river network was done, as the final quality of the flood maps depends heavily on the DTM quality. Second, the simulated 100-year discharge in the major river network (600,000 km) is compared to the 100-year discharge derived using extreme value distributions at all USGS gauges with more than 20 years of peak values (around 11,000 gauges). Third, for each gauge the modelled flood depth is compared to the depth derived from the USGS rating curves. Fourth, the modelled flood depth is compared to the base flood elevation given in the FEMA flood maps. Fifth, the flood extent is compared to the FEMA flood extent. Then, for historic events, we compare flood extents and flood depths at given locations. Finally, all the data and spatial layers are uploaded to a geoserver to facilitate the manual investigation of outliers. The feedback from the quality control is used to improve the model and estimate its uncertainty.

  11. Conceptual Design of the Everglades Depth Estimation Network (EDEN) Grid

    USGS Publications Warehouse

    Jones, John W.; Price, Susan D.

    2007-01-01

    INTRODUCTION The Everglades Depth Estimation Network (EDEN) offers a consistent and documented dataset that can be used to guide large-scale field operations, to integrate hydrologic and ecological responses, and to support biological and ecological assessments that measure ecosystem responses to the Comprehensive Everglades Restoration Plan (Telis, 2006). Ground elevation data for the greater Everglades and the digital ground elevation models derived from them form the foundation for all EDEN water depth and associated ecologic/hydrologic modeling (Jones, 2004; Jones and Price, 2007). To use EDEN water depth and duration information most effectively, it is important to be able to view and manipulate information on elevation data quality and other land cover and habitat characteristics across the Everglades region. These requirements led to the development of the geographic data layer described in this techniques and methods report. Extensive experience in GIS data development, distribution, and analysis informed the design of the geographic data layer used to index elevation and other surface characteristics for the Greater Everglades region. To allow for simplicity of design and use, the EDEN area was broken into a large number of equal-sized rectangles ('cells') that in total are referred to here as the 'grid'. Some characteristics of this grid, such as the size of its cells, its origin, the area of Florida it is designed to represent, and individual grid cell identifiers, could not be changed once the grid database was developed. Therefore, these characteristics were selected to design as robust a grid as possible and to ensure the grid's long-term utility. It is desirable to include all pertinent information known about elevation and elevation data collection as grid attributes. Also, it is very important to allow for efficient grid post-processing, sub-setting, analysis, and distribution. This document details the conceptual design of the EDEN grid spatial parameters and cell attribute-table content.

  12. Assessing the monitoring performance using a synthetic microseismic catalogue for hydraulic fracturing

    NASA Astrophysics Data System (ADS)

    Ángel López Comino, José; Kriegerowski, Marius; Cesca, Simone; Dahm, Torsten; Mirek, Janusz; Lasocki, Stanislaw

    2016-04-01

    Hydraulic fracturing is considered among the human operations which could induce or trigger seismicity or microseismic activity. The influence of hydraulic fracturing operations is typically expected in terms of weak-magnitude events. However, the sensitivity of the rock mass to trigger seismicity varies significantly between sites and cannot easily be predicted prior to operations. In order to assess the sensitivity of microseismicity to hydraulic fracturing operations, we perform seismic monitoring at a shale gas exploration/exploitation site in the central-western part of the Peribaltic syneclise in Pomerania (Poland). The monitoring will be continued before, during and after the termination of hydraulic fracturing operations. The fracking operations are planned for April 2016 at a depth of 4000 m. A specific network setup has been installed since summer 2015, including a distributed network of broadband stations and three small-scale arrays. The network covers a region of 60 km2. The aperture of the small-scale arrays is between 450 and 950 m. So far no fracturing operations have been performed, but seismic data can already be used to assess the seismic noise and background microseismicity, and to investigate and assess the detection performance of our monitoring setup. Here we adopt a recently developed tool to generate a synthetic catalogue and waveform dataset, which realistically account for the expected microseismicity. Synthetic waveforms are generated for a local crustal model, considering a realistic distribution of hypocenters, magnitudes, moment tensors, and source durations. Noise-free synthetic seismograms are superposed on real noise traces to reproduce true monitoring conditions at the different station locations. We estimate the detection probability for different magnitudes, source-receiver distances, and noise conditions. This information is used to estimate the magnitude of completeness at the depth of the hydraulic fracturing horizontal wells. Our technique is useful to evaluate the efficiency of the seismic network and to validate detection and location algorithms, taking into account the signal-to-noise ratio. The same dataset may be used at a later time to assess the performance of other seismological analyses, such as hypocentral location, magnitude estimation and source parameter inversion. This work is funded by the EU H2020 SHEER project.

  13. Site response in the eastern United States: A comparison of Vs30 measurements with estimates from horizontal:vertical spectral ratios

    USGS Publications Warehouse

    McNamara, Daniel E.; Stephenson, William J.; Odum, Jackson K.; Williams, Robert; Gee, Lind

    2014-01-01

    Earthquake damage is often increased due to local ground-motion amplification caused by soft soils, thick basin sediments, topographic effects, and liquefaction. A critical factor contributing to the assessment of seismic hazard is detailed information on local site response. In order to address and quantify the site response at seismograph stations in the eastern United States, we investigate the regional spatial variation of horizontal:vertical spectral ratios (HVSR) using ambient noise recorded at permanent regional and national network stations as well as temporary seismic stations deployed in order to record aftershocks of the 2011 Mineral, Virginia, earthquake. We compare the HVSR peak frequency to surface measurements of the shear-wave seismic velocity to 30 m depth (Vs30) at 21 seismograph stations in the eastern United States and find that HVSR peak frequency increases with increasing Vs30. We use this relationship to estimate the National Earthquake Hazards Reduction Program soil class at 218 ANSS (Advanced National Seismic System), GSN (Global Seismographic Network), and RSN (Regional Seismograph Networks) locations in the eastern United States, and suggest that this seismic station–based HVSR proxy could potentially be used to calibrate other site response characterization methods commonly used to estimate shaking hazard.
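
    Computing an HVSR curve from three-component ambient noise is a short exercise with SciPy's Welch estimator: take the ratio of the mean horizontal power spectrum to the vertical one and locate the peak. The synthetic records below plant a horizontal resonance near 2 Hz so the peak is visible; real processing (windowing, smoothing, transient rejection) is more involved.

    ```python
    import numpy as np
    from scipy.signal import welch

    def hvsr(north, east, vertical, fs, nperseg=4096):
        """HVSR from ambient noise: sqrt of mean horizontal PSD over vertical PSD."""
        f, pnn = welch(north, fs=fs, nperseg=nperseg)
        _, pee = welch(east, fs=fs, nperseg=nperseg)
        _, pzz = welch(vertical, fs=fs, nperseg=nperseg)
        return f, np.sqrt(0.5 * (pnn + pee) / pzz)

    # One hour of synthetic noise at 100 samples/s, horizontals amplified at 2 Hz.
    fs, n = 100.0, 360_000
    rng = np.random.default_rng(0)
    resonance = np.sin(2*np.pi*2.0*np.arange(n)/fs)
    north = rng.normal(size=n) + 3*resonance
    east = rng.normal(size=n) + 3*resonance
    vert = rng.normal(size=n)

    f, hv = hvsr(north, east, vert, fs)
    band = (f > 0.2) & (f < 20.0)
    f_peak = f[band][np.argmax(hv[band])]
    print(f"HVSR peak frequency: {f_peak:.2f} Hz")
    # stiffer (higher Vs30) sites push the resonance, and this peak, upward,
    # which is the relationship the study exploits as a site-response proxy
    ```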

  14. Improved depth estimation with the light field camera

    NASA Astrophysics Data System (ADS)

    Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is available in a single capture. Lytro, Inc. also provides a depth estimate from a single-shot capture with its light field cameras, such as the Lytro Illum. This Lytro depth estimate contains much correct depth information and can be used for higher-quality estimation. In this paper, we present a novel, simple and principled algorithm that computes dense depth estimation by combining defocus, correspondence and Lytro depth estimates. We analyze 2D epipolar images (EPIs) to get defocus and correspondence depth maps. Defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from EPIs. The Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high-quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction, as well as light field displays.
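
    The two EPI cues can be prototyped in a few lines: shear the epipolar image by each candidate disparity, then score the sharpness of the angular mean (defocus cue) and the angular variance (correspondence cue). The sketch below does this for a 1D scene; it is a conceptual stand-in, and the combination with the Lytro-provided depth map is only noted in comments.

    ```python
    import numpy as np

    def epi_depth_cues(epi, disparities):
        """epi: (n_views, n_x) epipolar image. For each candidate disparity,
        shear the EPI, then score defocus (gradient of the angular mean) and
        correspondence (angular variance; lower is better)."""
        n_u, n_x = epi.shape
        u = np.arange(n_u) - n_u // 2
        xs = np.arange(n_x)
        defocus = np.zeros((len(disparities), n_x))
        corresp = np.zeros((len(disparities), n_x))
        for k, d in enumerate(disparities):
            sheared = np.stack([np.interp(xs + ui*d, xs, epi[i])
                                for i, ui in enumerate(u)])
            defocus[k] = np.abs(np.gradient(sheared.mean(axis=0)))
            corresp[k] = sheared.var(axis=0)
        return defocus, corresp

    # Synthetic EPI: a step edge at x=100 seen from 9 views with disparity 0.5.
    xs = np.arange(200)
    scene = (xs > 100).astype(float)
    views = np.arange(9) - 4
    epi = np.stack([np.interp(xs - ui*0.5, xs, scene) for ui in views])

    cands = np.linspace(0, 1, 11)
    defc, corr = epi_depth_cues(epi, cands)
    print("disparity at the edge:", cands[np.argmin(corr[:, 100])])
    # a full pipeline would fuse both cues (and the vendor depth map) with a
    # regularizer before converting disparity to metric depth
    ```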

  15. Air Pollution Measurements by Citizen Scientists and NASA Satellites: Data Integration and Analysis

    NASA Astrophysics Data System (ADS)

    Gupta, P.; Maibach, J.; Levy, R. C.; Doraiswamy, P.; Pikelnaya, O.; Feenstra, B.; Polidori, A.

    2017-12-01

    PM2.5, or fine particulate matter, is a category of air pollutant consisting of solid particles with effective aerodynamic diameter of less than 2.5 microns. These particles are hazardous to human health, as their small size allows them to penetrate deep into the lungs. Since the late 1990s, the US Environmental Protection Agency has been monitoring PM2.5 using a network of ground-level sensors. Due to cost and space restrictions, the EPA monitoring network remains spatially sparse. That is, while the network spans the extent of the US, the distance between sensors is large enough that significant spatial variation in PM concentration can go undetected. To increase the spatial resolution of monitoring, previous studies have used satellite data to estimate ground-level PM concentrations. From imagery, one can create a measure of haziness due to aerosols, called aerosol optical depth (AOD), which then can be used to estimate PM concentrations using statistical and physical modeling. Additionally, previous research has identified a number of meteorological variables, such as relative humidity and mixing height, which aid in estimating PM concentrations from AOD. Although the high spatial resolution of satellite data is valuable alone for forecasting air quality, higher-resolution ground-level data are needed to effectively study the relationship between PM2.5 concentrations and AOD. To this end, we discuss a citizen-science PM monitoring network deployed in California. Using low-cost PM sensors, this network achieves higher spatial resolution. We additionally discuss a software pipeline for integrating the resulting PM measurements with satellite data, as well as initial data analysis.

  16. Effective Vehicle-Based Kangaroo Detection for Collision Warning Systems Using Region-Based Convolutional Networks.

    PubMed

    Saleh, Khaled; Hossny, Mohammed; Nahavandi, Saeid

    2018-06-12

    Traffic collisions between kangaroos and motorists are on the rise on Australian roads. According to a recent report, it was estimated that more than 20,000 kangaroo-vehicle collisions occurred during the year 2015 alone in Australia. In this work, we propose a vehicle-based framework for kangaroo detection in urban and highway traffic environments that could be used for collision warning systems. Our proposed framework is based on region-based convolutional neural networks (RCNN). Given the scarcity of labeled data of kangaroos in traffic environments, we utilized our state-of-the-art data generation pipeline to generate 17,000 synthetic depth images of traffic scenes with kangaroo instances annotated in them. We trained our proposed RCNN-based framework on a subset of the generated synthetic depth image dataset. The proposed framework achieved an average precision (AP) score of 92% over all the synthetic depth image test datasets. We compared our proposed framework against other baseline approaches and outperformed them by more than 37% in AP score over all the test datasets. Additionally, we evaluated the generalization performance of the proposed framework on real live data and achieved resilient detection accuracy without any further fine-tuning of our RCNN-based framework.

  17. Citizen-Enabled Aerosol Measurements for Satellites (CEAMS): A Network for High-Resolution Measurements of PM2.5 and Aerosol Optical Depth

    NASA Astrophysics Data System (ADS)

    Pierce, J. R.; Volckens, J.; Ford, B.; Jathar, S.; Long, M.; Quinn, C.; Van Zyl, L.; Wendt, E.

    2017-12-01

    Atmospheric particulate matter with diameter smaller than 2.5 μm (PM2.5) is a pollutant that contributes to the development of human disease. Satellite-derived estimates of surface-level PM2.5 concentrations have the potential to contribute greatly to our understanding of how particulate matter affects health globally. However, these satellite-derived PM2.5 estimates are often uncertain due to a lack of information about the ratio of surface PM2.5 to aerosol optical depth (AOD), which is the primary aerosol retrieval made by satellite instruments. While modelling and statistical analyses have improved estimates of PM2.5:AOD, large uncertainties remain in situations of high PM2.5 exposure (such as urban areas and wildfire-smoke plumes) where the health impacts of PM2.5 may be the greatest. Surface monitoring networks for coincident PM2.5 and AOD measurements are extremely rare, even in North America. To provide constraints on the PM2.5:AOD relationship, we have developed a relatively low-cost (<$1000) monitor for citizen use that provides sun-photometer AOD measurements and filter-based PM2.5 measurements. The instrument is solar-powered, lightweight (<1 kg), and operated wirelessly via smartphone application (iOS and Android). Sun photometry is performed across 4 discrete wavelengths that match those reported by the Aerosol Robotic Network (AERONET). Aerosol concentration is reported using both time-integrated filter mass (analyzed in an academic laboratory and reported as a 24-48 hr average) and a continuous PM sensor within the instrument. Citizen scientists use the device to report daily AOD and PM2.5 measurements made in their backyards to a central server for data display and download. In this presentation, we provide an overview of (1) AOD and PM2.5 measurement calibration; (2) citizen recruiting and training efforts; and (3) results from our pilot citizen-science measurement campaign.

  18. Total Volcanic Stratospheric Aerosol Optical Depths and Implications for Global Climate Change

    NASA Technical Reports Server (NTRS)

    Ridley, D. A.; Solomon, S.; Barnes, J. E.; Burlakov, V. D.; Deshler, T.; Dolgii, S. I.; Herber, A. B.; Nagai, T.; Neely, R. R., III; Nevzorov, A. V.

    2014-01-01

    Understanding the cooling effect of recent volcanoes is of particular interest in the context of the post-2000 slowing of the rate of global warming. Satellite observations of aerosol optical depth above 15 km have demonstrated that small-magnitude volcanic eruptions substantially perturb incoming solar radiation. Here we use lidar, Aerosol Robotic Network, and balloon-borne observations to provide evidence that currently available satellite databases neglect substantial amounts of volcanic aerosol between the tropopause and 15 km at middle to high latitudes and therefore underestimate total radiative forcing resulting from the recent eruptions. Incorporating these estimates into a simple climate model, we determine the global volcanic aerosol forcing since 2000 to be 0.19 +/- 0.09 W/sq m. This translates into an estimated global cooling of 0.05 to 0.12 °C. We conclude that recent volcanic events are responsible for more post-2000 cooling than is implied by satellite databases that neglect volcanic aerosol effects below 15 km.

  19. Distribution of Attenuation Factor Beneath the Japanese Islands

    NASA Astrophysics Data System (ADS)

    Fujihara, S.; Hashimoto, M.

    2001-12-01

    In this research, we tried to estimate the distribution of the attenuation factor of seismic waves, which is closely related to the above-mentioned inelastic parameters. The velocity records of events from the Freesia network and the J-array network were used. The events were selected based on the following criteria: (a) events with JMA magnitudes from 3.8 to 5.0 and hypocentral distance from 20 km to 200 km, (b) events with JMA magnitudes from 5.1 to 6.8 and hypocentral distance from 200 km to 10 degrees, (c) depth of all events greater than 30 km with S/N ratio greater than 2. After correcting for the instrument response, P-wave spectra were estimated. Following Boatwright (1991), the observed spectra were modeled by theoretical spectra assuming the relation Aij(f) = Si(f) Pij(f) Cj(f), where Aij(f), Si(f), Pij(f), and Cj(f) are the observed spectrum, source spectrum, propagation effect, and site effect, respectively. Brune's model (1970) was assumed for the source spectrum. Frequency dependence of the attenuation factor was not assumed. The global standard velocity model (AK135) is used for ray tracing. Ellipticity corrections and station elevation corrections are also applied. The block sizes are 50 km by 50 km laterally and increase vertically. As a result of the analysis, the attenuation structure beneath the Japanese Islands down to a depth of 180 km was reconstructed with relatively good resolution. Low Q is clearly seen in central Hokkaido, western Hokkaido, the Tohoku region, the Hida region, the Izu region, and southern Kyushu. The relatively sharp decrease in Q associated with the asthenosphere can be seen below a depth of 70 km.

  20. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Condition Random Field Model.

    PubMed

    Liu, Dan; Liu, Xuejun; Wu, Yiguang

    2018-04-24

    This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN is first used to automatically learn a hierarchical feature representation of the image. To capture more local details in the image, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and obtains satisfactory results.
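
    Because the pairwise CRF here is continuous with quadratic potentials, its MAP estimate can be computed exactly; the sketch below refines a noisy unary depth map (a stand-in for the CNN output) with image-driven pairwise weights using Gauss-Seidel sweeps. The energy form and weights are generic assumptions, not the paper's learned potentials.

    ```python
    import numpy as np

    def crf_refine(unary, wh, wv, lam=5.0, sweeps=200):
        """Minimize E(d) = sum_i (d_i - z_i)^2 + lam*sum_{i~j} w_ij (d_i - d_j)^2
        on a pixel grid. The energy is quadratic, so coordinate-wise updates
        (Gauss-Seidel) converge to the global minimum, i.e. the MAP estimate."""
        d = unary.copy()
        H, W = d.shape
        for _ in range(sweeps):
            for i in range(H):
                for j in range(W):
                    num, den = unary[i, j], 1.0
                    if j + 1 < W:
                        num += lam*wh[i, j]*d[i, j+1]; den += lam*wh[i, j]
                    if j > 0:
                        num += lam*wh[i, j-1]*d[i, j-1]; den += lam*wh[i, j-1]
                    if i + 1 < H:
                        num += lam*wv[i, j]*d[i+1, j]; den += lam*wv[i, j]
                    if i > 0:
                        num += lam*wv[i-1, j]*d[i-1, j]; den += lam*wv[i-1, j]
                    d[i, j] = num / den
        return d

    rng = np.random.default_rng(0)
    gt = np.fromfunction(lambda i, j: 1.0 + (j >= 16)*1.5, (32, 32))  # two planes
    unary = gt + rng.normal(0, 0.3, gt.shape)          # noisy 'CNN' prediction
    # pairwise weights: small across image edges (here derived from gt itself)
    wh = np.exp(-10*np.abs(np.diff(gt, axis=1)))
    wv = np.exp(-10*np.abs(np.diff(gt, axis=0)))
    refined = crf_refine(unary, wh, wv)
    print("RMSE before: %.3f  after: %.3f" %
          (np.sqrt(((unary - gt)**2).mean()), np.sqrt(((refined - gt)**2).mean())))
    ```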

  1. Methods and Systems for Characterization of an Anomaly Using Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M. (Inventor)

    2013-01-01

    A method for characterizing an anomaly in a material comprises (a) extracting contrast data; (b) measuring a contrast evolution; (c) filtering the contrast evolution; (d) measuring a peak amplitude of the contrast evolution; (e) determining a diameter and a depth of the anomaly; and (f) repeating the step of determining the diameter and the depth of the anomaly until the change in the estimate of the depth is less than a set value. The step of determining the diameter and the depth of the anomaly comprises estimating the depth using a diameter constant C_D equal to one for the first iteration of determining the diameter and the depth; estimating the diameter; and comparing the estimate of the depth of the anomaly after each iteration of estimating to the prior estimate of the depth to calculate the change in the estimate of the depth of the anomaly.

  2. Remote sensing of low visibility over Otopeni airport

    NASA Astrophysics Data System (ADS)

    Buzdugan, Livius; Urlea, Denisa; Bugeac, Paul; Stefan, Sabina

    2018-04-01

    The paper is focused on the study of atmospheric conditions determining low vertical visibility over Henri Coanda airport. A network of ceilometers and a Sodar were used to detect fog and low level cloud layers. In our study, vertical visibility from ceilometers and acoustic reflectivity from Sodar for November 2016 were used to estimate fog depth and top of fog layers, respectively. The correlation between fog and low cloud occurrence and the wind direction and speed is also investigated.

  3. A deep learning approach for pose estimation from volumetric OCT data.

    PubMed

    Gessert, Nils; Schlüter, Matthias; Schlaefer, Alexander

    2018-05-01

    Tracking the pose of instruments is a central problem in image-guided surgery. For microscopic scenarios, optical coherence tomography (OCT) is increasingly used as an imaging modality. OCT is suitable for accurate pose estimation due to its micrometer range resolution and volumetric field of view. However, OCT image processing is challenging due to speckle noise and reflection artifacts in addition to the images' 3D nature. We address pose estimation from OCT volume data with a new deep learning-based tracking framework. For this purpose, we design a new 3D convolutional neural network (CNN) architecture to directly predict the 6D pose of a small marker geometry from OCT volumes. We use a hexapod robot to automatically acquire labeled data points which we use to train 3D CNN architectures for multi-output regression. We use this setup to provide an in-depth analysis on deep learning-based pose estimation from volumes. Specifically, we demonstrate that exploiting volume information for pose estimation yields higher accuracy than relying on 2D representations with depth information. Supporting this observation, we provide quantitative and qualitative results that 3D CNNs effectively exploit the depth structure of marker objects. Regarding the deep learning aspect, we present efficient design principles for 3D CNNs, making use of insights from the 2D deep learning community. In particular, we present Inception3D as a new architecture which performs best for our application. We show that our deep learning approach reaches errors at our ground-truth label's resolution. We achieve a mean average error of 14.89 ± 9.3 µm and 0.096 ± 0.072° for position and orientation learning, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. The influence of the depth of k-core layers on the robustness of interdependent networks against cascading failures

    NASA Astrophysics Data System (ADS)

    Dong, Zhengcheng; Fang, Yanjun; Tian, Meng; Kong, Zhengmin

    The hierarchical structure known as the k-core is common in various complex networks, and an actual network always has successive layers from the 1-core layer (the peripheral layer) to the km-core layer (the core layer). The nodes within the core layer have been proved to be the most influential spreaders, but there has been little work on how the depth of the k-core layers (the value of km) affects robustness against cascading failures, especially in interdependent networks. First, following preferential attachment, a novel method is proposed to generate a scale-free network with successive k-core layers (a KCBA network), and the KCBA network is validated as more realistic than the traditional BA network. Then, with KCBA interdependent networks, the effect of the depth of the k-core layers is investigated. Considering the load-based model, the loss of capacity on nodes is adopted to quantify robustness instead of the number of functional nodes remaining in the end. We conduct two attacking strategies, i.e. the RO-attack (randomly remove only one node) and the RF-attack (randomly remove a fraction of nodes). Results show that the robustness of KCBA networks not only depends on the depth of the k-core layers, but is also slightly influenced by the initial load. Under RO-attack, networks with fewer k-core layers are more robust when the initial load is small. Under RF-attack, robustness improves with small km, but the improvement weakens as the initial load increases. In a word, the lower the depth is, the more robust the networks will be.
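
    For readers unfamiliar with the k-core hierarchy, the snippet below decomposes a standard BA graph into its successive shells with networkx; km is simply the largest core number. The paper's KCBA generator, which controls km during growth, is its own construction and is not reproduced here.

    ```python
    import networkx as nx

    G = nx.barabasi_albert_graph(1000, 3, seed=0)
    core_number = nx.core_number(G)            # k-core index of every node
    km = max(core_number.values())             # depth of the k-core hierarchy
    print(f"depth of k-core layers: km = {km}")
    for k in range(1, km + 1):
        shell = [n for n, c in core_number.items() if c == k]
        print(f"  {k}-core layer: {len(shell)} nodes")
    ```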

  5. Estimation of slip distribution using an inverse method based on spectral decomposition of Green's function utilizing Global Positioning System (GPS) data

    NASA Astrophysics Data System (ADS)

    Jin, Honglin; Kato, Teruyuki; Hori, Muneo

    2007-07-01

    An inverse method based on the spectral decomposition of the Green's function was employed for estimating a slip distribution. We conducted numerical simulations along the Philippine Sea plate (PH) boundary in southwest Japan using this method to examine how to determine the essential parameters, namely the number of deformation function modes and their coefficients. Japanese GPS Earth Observation Network (GEONET) Global Positioning System (GPS) data were used for the three years 1997-1999 to estimate the interseismic back slip distribution in this region. The estimated maximum back slip rate is about 7 cm/yr, which is consistent with the Philippine Sea plate convergence rate. Areas of strong coupling are confined between depths of 10 and 30 km, and three areas of strong coupling were delineated. These results are consistent with other studies that have estimated locations of coupling distribution.

  6. Location Performance and Detection Threshold of the Spanish National Seismic Network

    NASA Astrophysics Data System (ADS)

    D'Alessandro, Antonino; Badal, José; D'Anna, Giuseppe; Papanastassiou, Dimitris; Baskoutas, Ioannis; Özel, Nurcan M.

    2013-11-01

    Spain is a low-to-moderate seismicity area with relatively low seismic hazard. However, several strong shallow earthquakes have shaken the country, causing casualties and extensive damage. Regional seismicity is monitored and surveyed by means of the Spanish National Seismic Network, maintenance and control of which are entrusted to the Instituto Geográfico Nacional. This array currently comprises 120 seismic stations distributed throughout Spanish territory (mainland and islands). Basically, we are interested in checking the noise conditions, reliability, and seismic detection capability of the Spanish network by analyzing the background noise level affecting the array stations, errors in hypocentral location, and the detection threshold, which provides knowledge about network performance. It also enables testing of the suitability of the velocity model used in the routine process of earthquake location. To perform this study we use a method that relies on P and S wave travel times, which are computed by simulation of seismic rays from virtual seismic sources placed at the nodes of a regular grid covering the study area. Given the characteristics of the seismicity of Spain, we drew maps for ML magnitudes 2.0, 2.5, and 3.0, at a focal depth of 10 km and a 95% confidence level. The results relate to the number of stations involved in the hypocentral location process, how these stations are distributed spatially, and the uncertainties of focal data (errors in origin time, longitude, latitude, and depth). To assess the extent to which the principal seismogenic areas are well monitored by the network, we estimated the average error in the location of a seismic source from the semiaxes of the ellipsoid of confidence by calculating the radius of the equivalent sphere. Finally, the detection threshold was determined as the magnitude of the smallest seismic event detected by at least four stations. The northwest of the peninsula, the Pyrenees (especially the westernmost segment), the Betic Cordillera, and Tenerife Island are the best-monitored zones. Origin time and focal depth are far from being constrained by regional events. The two Iberian areas with moderate seismicity and the highest seismic hazard, the Pyrenees and the Betic Cordillera, together with the northwestern quadrant of the peninsula, are the areas wherein the focus of an earthquake is determined with an approximate error of 3 km. For ML 2.5 and ML 3.0 this error is common for almost the whole peninsula and the Canary Islands. In general, errors in epicenter latitude and longitude are small for near-surface earthquakes, increasing gradually as the depth increases, but remaining close to 5 km even at a depth of 60 km. The hypocentral depth seems to be well constrained to a depth of 40 km beneath the zones with the highest density of stations, with an error of less than 5 km. The ML magnitude detection threshold of the network is approximately 2.0 for most of Spain and still less, almost 1.0, for the western sector of the Pyrenean region and the Canary Islands.

  7. Channel-parameter estimation for satellite-to-submarine continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Xie, Cailang; Huang, Peng; Li, Jiawei; Zhang, Ling; Huang, Duan; Zeng, Guihua

    2018-05-01

    This paper deals with channel-parameter estimation for continuous-variable quantum key distribution (CV-QKD) over a satellite-to-submarine link. In particular, we focus on the channel transmittances and the excess noise, which are affected by atmospheric turbulence, surface roughness, the zenith angle of the satellite, wind speed, submarine depth, etc. The estimation method is based on the proposed algorithms and is applied to low-Earth orbits using the Monte Carlo approach. For light at 550 nm with a repetition frequency of 1 MHz, the effects of the estimated parameters on the performance of the CV-QKD system are assessed by simulation, comparing the secret key bit rate in the daytime and at night. Our results show the feasibility of satellite-to-submarine CV-QKD, providing an unconditionally secure approach to achieve global networks for underwater communications.

  8. 3D printed biomimetic vascular phantoms for assessment of hyperspectral imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Jianting; Ghassemi, Pejhman; Melchiorri, Anthony; Ramella-Roman, Jessica; Mathews, Scott A.; Coburn, James; Sorg, Brian; Chen, Yu; Pfefer, Joshua

    2015-03-01

    The emerging technique of three-dimensional (3D) printing provides a revolutionary way to fabricate objects with biologically realistic geometries. Previously we have performed optical and morphological characterization of basic 3D printed tissue-simulating phantoms and found them suitable for use in evaluating biophotonic imaging systems. In this study we assess the potential for printing phantoms with irregular, image-defined vascular networks that can be used to provide clinically-relevant insights into device performance. A previously acquired fundus camera image of the human retina was segmented, embedded into a 3D matrix, edited to incorporate the tubular shape of vessels and converted into a digital format suitable for printing. A polymer with biologically realistic optical properties was identified by spectrophotometer measurements of several commercially available samples. Phantoms were printed with the retinal vascular network reproduced as ~1.0 mm diameter channels at a range of depths up to ~3 mm. The morphology of the printed vessels was verified by volumetric imaging with μ-CT. Channels were filled with hemoglobin solutions at controlled oxygenation levels, and the phantoms were imaged by a near-infrared hyperspectral reflectance imaging system. The effect of vessel depth on hemoglobin saturation estimates was studied. Additionally, a phantom incorporating the vascular network at two depths was printed and filled with hemoglobin solution at two different saturation levels. Overall, results indicated that 3D printed phantoms are useful for assessing biophotonic system performance and have the potential to form the basis of clinically-relevant standardized test methods for assessment of medical imaging modalities.

  9. A fusion network for semantic segmentation using RGB-D data

    NASA Astrophysics Data System (ADS)

    Yuan, Jiahui; Zhang, Kun; Xia, Yifan; Qi, Lin; Dong, Junyu

    2018-04-01

    Semantic scene parsing is of considerable importance in many intelligent fields, including perceptual robotics. For the past few years, pixel-wise prediction tasks like semantic segmentation with RGB images have been extensively studied and have reached remarkable parsing levels, thanks to convolutional neural networks (CNNs) and large scene datasets. With the development of stereo cameras and RGB-D sensors, it is expected that additional depth information will help improve accuracy. In this paper, we propose a semantic segmentation framework incorporating RGB and complementary depth information. Motivated by the success of fully convolutional networks (FCN) in semantic segmentation, we design a fully convolutional network consisting of two branches which extract features from both RGB and depth data simultaneously and fuse them as the network goes deeper, as sketched below. Instead of aggregating multiple models, our goal is to utilize RGB data and depth data more effectively in a single model. We evaluate our approach on the NYU-Depth V2 dataset, which consists of 1449 cluttered indoor scenes, and achieve competitive results with the state-of-the-art methods.
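
    A toy two-branch fusion FCN in PyTorch makes the architecture concrete: separate RGB and depth encoders whose features are concatenated deeper in the network before a dense classification head. Channel widths, fusion depth and class count (40 for NYU-Depth V2's common label set) are illustrative guesses, not the paper's exact design.

    ```python
    import torch
    import torch.nn as nn

    def block(cin, cout):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                             nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
                             nn.MaxPool2d(2))

    class FuseNetSketch(nn.Module):
        def __init__(self, n_classes=40):
            super().__init__()
            self.rgb1, self.rgb2 = block(3, 32), block(32, 64)
            self.dep1, self.dep2 = block(1, 32), block(32, 64)
            self.head = nn.Sequential(
                nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, n_classes, 1))
        def forward(self, rgb, depth):
            r = self.rgb2(self.rgb1(rgb))          # RGB branch features
            d = self.dep2(self.dep1(depth))        # depth branch features
            fused = torch.cat([r, d], dim=1)       # fuse as the network goes deeper
            logits = self.head(fused)              # per-pixel class scores
            # upsample back to input resolution for dense prediction
            return nn.functional.interpolate(logits, scale_factor=4,
                                             mode="bilinear", align_corners=False)

    net = FuseNetSketch()
    out = net(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64))
    print(out.shape)   # torch.Size([1, 40, 64, 64])
    ```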

  10. Flood forecasting within urban drainage systems using NARX neural network.

    PubMed

    Abou Rjeily, Yves; Abbas, Oras; Sadek, Marwan; Shahrour, Isam; Hage Chehade, Fadi

    2017-11-01

    Urbanization and climate change increase runoff volumes, and consequently the surcharge of urban drainage systems (UDS). In addition, age and structural failures of these utilities limit their capacities, generating hydraulic operation shortages and leading to flooding events. The large increase in floods within urban areas requires rapid action from UDS operators. Proactivity in taking the appropriate actions is a key element of efficient management and flood mitigation. Therefore, this work focuses on developing a flooding forecast system (FFS) able to alert UDS managers in advance of possible flooding. For a forecasted storm event, a quick estimation of the water depth variation within critical manholes allows a reliable evaluation of the flood risk. The Nonlinear Auto Regressive with eXogenous inputs (NARX) neural network was chosen to develop the FFS because its recurrent calculation structure is capable of relating water depth variation in manholes to rainfall intensities. The campus of the University of Lille is used as an experimental site to test and evaluate the FFS proposed in this paper.
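
    In open-loop (series-parallel) form, a NARX model is just a feedforward regressor on lagged depths and rainfall, which makes a compact sketch possible with scikit-learn; the lag orders, toy reservoir dynamics and network size below are assumptions, not the study's calibrated setup.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    T = 2000
    rain = np.clip(rng.gamma(0.3, 4.0, T) - 1.0, 0.0, None)  # mm/h, mostly dry
    h = np.zeros(T)                                          # manhole depth (m)
    for t in range(1, T):                                    # toy system response
        h[t] = 0.9*h[t-1] + 0.05*rain[t]

    def narx_design(y, x, ny=3, nx=3):
        """Rows [y[t-1..t-ny], x[t..t-nx+1]] -> target y[t] (open loop)."""
        rows, targets = [], []
        for t in range(max(ny, nx), len(y)):
            rows.append(np.r_[y[t-ny:t][::-1], x[t-nx+1:t+1][::-1]])
            targets.append(y[t])
        return np.array(rows), np.array(targets)

    X, y = narx_design(h, rain)
    split = int(0.8*len(y))
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                       random_state=0).fit(X[:split], y[:split])
    print("R^2 on unseen storms:", round(net.score(X[split:], y[split:]), 3))
    # For multi-step flood forecasts the network runs closed-loop, feeding its
    # own depth predictions back in place of measured values.
    ```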

  11. Multilinear approach to the precipitation-lightning relationship: a case study of summer local electrical storms in the northern part of Spain during 2002-2009 period

    NASA Astrophysics Data System (ADS)

    Herrero, I.; Ezcurra, A.; Areitio, J.; Diaz-Argandoña, J.; Ibarra-Berastegi, G.; Saenz, J.

    2013-11-01

    Storms developed under local instability conditions are studied in the Spanish Basque region with the aim of establishing precipitation-lightning relationships. Such situations may, in some cases, produce flash floods. The data used correspond to daily rain depth (mm) and the number of cloud-to-ground (CG) flashes in the area. Rain and lightning are found to be weakly correlated on a daily basis, a fact that seems related to the existence of opposite gradients in their geographical distributions. Rain anomalies, defined as the difference between observed rain depth and rain depth estimated from CG flashes, are analysed by the principal component analysis (PCA) method. Results show a first EOF explaining 50% of the variability that linearly relates the rain anomalies observed each day and confirms their spatial structure. Based on those results, a multilinear expression has been developed to estimate the rain accumulated daily in the network from the CG flashes registered in the area. Moreover, accumulated and maximum rain values are found to be strongly correlated, making the multilinear expression a useful tool to estimate maximum precipitation during these kinds of storms.

  12. Technique for estimating depth of floods in Tennessee

    USGS Publications Warehouse

    Gamble, C.R.

    1983-01-01

    Estimates of flood depths are needed for design of roadways across flood plains and for other types of construction along streams. Equations for estimating flood depths in Tennessee were derived using data for 150 gaging stations. The equations are based on drainage basin size and can be used to estimate depths of the 10-year and 100-year floods for four hydrologic areas. A method also was developed for estimating depth of floods having recurrence intervals between 10 and 100 years. Standard errors range from 22 to 30 percent for the 10-year depth equations and from 23 to 30 percent for the 100-year depth equations. (USGS)

  13. Structure of the Korean Peninsula from Waveform Travel-Time Analysis

    DTIC Science & Technology

    2008-09-01

    Bondár’s criteria (Bondár et al., 2004) to the database of 230 KMA events with depth locations requiring that each potential GT5 event is located...hypocenter database. They are well located within the dense network of KMA stations as required by Bondár’s criteria. Estimation of 3-D Moho...However, not all of these phase picks can be utilized during the velocity inversion, as the implemented ray tracing is based on the eikonal solver

  14. Optimal Estimation of Glider’s Underwater Trajectory with Depth-Dependent Correction Using the Navy Coastal Ocean Model with Application to Antisubmarine Warfare

    DTIC Science & Technology

    2014-09-01

    deployed simultaneously. For example, a fleet of gliders would be able to act as an intelligence network by gathering underwater target information...and to verify our novel method, a glider’s real underwater trajectory information must be obtained by using additional sensors like ADCP or DVL (see...lacks inexpensive and efficient localization sensors during its subsurface mission. Therefore, knowing its precise underwater position is a

  15. Second Quarter Hanford Seismic Report for Fiscal Year 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohay, Alan C.; Sweeney, Mark D.; Hartshorn, Donald C.

    2009-07-31

    The Hanford Seismic Assessment Program (HSAP) provides an uninterrupted collection of high-quality raw and processed seismic data from the Hanford Seismic Network for the U.S. Department of Energy and its contractors. The HSAP is responsible for locating and identifying sources of seismic activity and monitoring changes in the historical pattern of seismic activity at the Hanford Site. The data are compiled, archived, and published for use by the Hanford Site for waste management, natural phenomena hazards assessments, and engineering design and construction. In addition, the HSAP works with the Hanford Site Emergency Services Organization to provide assistance in the event of a significant earthquake on the Hanford Site. The Hanford Seismic Network and the Eastern Washington Regional Network consist of 44 individual sensor sites and 15 radio relay sites maintained by the Hanford Seismic Assessment Team. The Hanford Seismic Network recorded over 800 local earthquakes during the second quarter of FY 2009. Nearly all of these earthquakes were detected in the vicinity of Wooded Island, located about eight miles north of Richland just west of the Columbia River. Most of the events were considered minor (magnitude (Mc) less than 1.0), with 19 events in the 2.0-2.9 range. The estimated depths of the Wooded Island events are shallow (averaging less than 1.0 km deep), with a maximum depth estimated at 1.9 km. This places the Wooded Island events within the Columbia River Basalt Group (CRBG). The low magnitude and the shallowness of the Wooded Island events have made them undetectable to most area residents. However, some Hanford employees working within a few miles of the area of highest activity, and individuals living in homes directly across the Columbia River from the swarm center, have reported feeling some movement. The Hanford SMA network was triggered numerous times by the Wooded Island swarm events. The maximum acceleration values recorded by the SMA network were approximately 2-3 times lower than the reportable action level for Hanford facilities (2% g), and no action was required. The swarming is likely due to pressures that have built up, cracking the brittle basalt layers within the Columbia River Basalt Group. Similar earthquake “swarms” have been recorded near this same location in 1970, 1975 and 1988. Prior to the 1970s, swarming may have occurred, but equipment was not in place to record those events. Quakes of this limited magnitude do not pose a risk to Hanford cleanup efforts or waste storage facilities. Since swarms of the past did not intensify in magnitude, seismologists do not expect these events to increase in intensity. However, PNNL will continue to monitor the activity. Outside of the Wooded Island swarm, four earthquakes were recorded. Three were classified as minor and one event registered 2.3 Mc. One earthquake was located at intermediate depth (between 4 and 9 km, most likely in the pre-basalt sediments) and three at depths greater than 9 km, within the basement. Geographically, two earthquakes were located in known swarm areas and two were classified as random events.

  16. Efficient wetland surface water detection and monitoring via Landsat: Comparison with in situ data from the Everglades Depth Estimation Network

    USGS Publications Warehouse

    Jones, John W.

    2015-01-01

    The U.S. Geological Survey is developing new Landsat science products. One, named Dynamic Surface Water Extent (DSWE), is focused on the representation of ground surface inundation as detected in cloud-/shadow-/snow-free pixels for scenes collected over the U.S. and its territories. Characterization of DSWE uncertainty to facilitate its appropriate use in science and resource management is a primary objective. A unique evaluation dataset developed from data made publicly available through the Everglades Depth Estimation Network (EDEN) was used to evaluate one candidate DSWE algorithm that is relatively simple, requires no scene-based calibration data, and is intended to detect inundation in the presence of marshland vegetation. A conceptual model of expected algorithm performance in vegetated wetland environments was postulated, tested and revised. Agreement scores were calculated at the level of scenes and vegetation communities, vegetation index classes, water depths, and individual EDEN gage sites for a variety of temporal aggregations. Landsat Archive cloud cover attribution errors were documented. Cloud cover had some effect on model performance. Error rates increased with vegetation cover. Relatively low error rates for locations of little/no vegetation were unexpectedly dominated by omission errors due to variable substrates and mixed pixel effects. Examination of discrepancies between satellite and in situ modeled inundation demonstrated the utility of such comparisons for EDEN database improvement. Importantly, there appears to be no trend or bias in candidate algorithm performance as a function of time or general hydrologic conditions, a key finding for long-term monitoring. The developed database and knowledge gained from this analysis will be used for improved evaluation of candidate DSWE algorithms as well as other measurements made on Everglades surface inundation, surface water heights and vegetation using radar, lidar and hyperspectral instruments. Although no other sites have such an extensive in situ network or long-term records, the broader applicability of this and other candidate DSWE algorithms is being evaluated in other wetlands using this work as a guide. Continued interaction among DSWE producers and potential users will help determine whether the measured accuracies are adequate for practical utility in resource management.

  17. Estimation of grazing-induced erosion through remote-sensing technologies in the Autonomous Province of Trento, Northern Italy

    NASA Astrophysics Data System (ADS)

    Torresani, Loris; Prosdocimi, Massimo; Masin, Roberta; Penasa, Mauro; Tarolli, Paolo

    2017-04-01

    Grassland and pasturelands cover a vast portion of the Earth's surface and are vital for biodiversity richness, environmental protection and feed resources for livestock. Overgrazing is considered one of the major causes of soil degradation worldwide, mainly in pasturelands grazed by domestic animals. Therefore, an in-depth investigation is needed to better quantify the effects of overgrazing in terms of soil loss. In this regard, this work aims to estimate the volume of eroded material caused by mismanagement of grazing areas in the whole Autonomous Province of Trento (Northern Italy). To achieve this goal, the first step dealt with the analysis of the entire provincial area by means of freely available aerial images, which allowed the identification and accurate mapping of every eroded area caused by grazing animals. The terrestrial digital photogrammetric technique known as Structure from Motion (SfM) was then applied to obtain high-resolution Digital Surface Models (DSMs) of two representative eroded areas. Given the pre-event surface conditions, DSMs of difference (DoDs) were computed to estimate the erosion volume and the average depth of erosion for both areas. The average depths obtained from the DoDs were compared with and validated against measurements taken in the field. A large number of depth measurements from different sites were then collected to obtain a reference value for the whole province. This value was used as the reference depth for calculating the eroded volume in the whole province. In the final stage, the Connectivity Index (CI) was adopted to analyse the existing connection between the eroded areas and the channel network. This work highlighted that SfM can be a solid technique for the low-cost and fast quantification of soil eroded by grazing. It can also be used as a strategic instrument for improving grazing management at large scales, with the goal of reducing the risk of pastureland degradation.
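    The DoD computation itself reduces to a few array operations. A minimal sketch, assuming two co-registered DSMs on a common grid; the cell size, error threshold, and surfaces are invented:

    ```python
    import numpy as np

    cell_size = 0.05                      # assumed SfM ground resolution, m
    rng = np.random.default_rng(2)
    dsm_pre = rng.normal(100.0, 0.02, (400, 400))
    dsm_post = dsm_pre - np.clip(rng.normal(0.03, 0.02, (400, 400)), 0, None)

    dod = dsm_post - dsm_pre              # negative where material was lost
    eroded = dod < -0.01                  # ignore change below an error threshold
    volume = -dod[eroded].sum() * cell_size**2
    mean_depth = -dod[eroded].mean()
    print(f"eroded volume: {volume:.2f} m^3, average erosion depth: {mean_depth:.3f} m")
    ```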

  18. Sensor networks for optimal target localization with bearings-only measurements in constrained three-dimensional scenarios.

    PubMed

    Moreno-Salinas, David; Pascoal, Antonio; Aranda, Joaquin

    2013-08-12

    In this paper, we address the problem of determining the optimal geometric configuration of an acoustic sensor network that will maximize the angle-related information available for underwater target positioning. In the set-up adopted, a set of autonomous vehicles carries a network of acoustic units that measure the elevation and azimuth angles between a target and each of the receivers on board the vehicles. It is assumed that the angle measurements are corrupted by white Gaussian noise, the variance of which is distance-dependent. Using tools from estimation theory, the problem is converted into that of minimizing, by proper choice of the sensor positions, the trace of the inverse of the Fisher Information Matrix (also called the Cramer-Rao Bound matrix) to determine the sensor configuration that yields the minimum possible covariance of any unbiased target estimator. It is shown that the optimal configuration of the sensors depends explicitly on the intensity of the measurement noise, the constraints imposed on the sensor configuration, the target depth and the probabilistic distribution that defines the prior uncertainty in the target position. Simulation examples illustrate the key results derived.
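    A simplified planar analogue of this design problem can be sketched in a few lines. Assumptions not in the paper: azimuth-only bearings in 2-D, sensors constrained to a circle around the target, noise standard deviation proportional to range, and a Nelder-Mead solver as a stand-in for the paper's analysis:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    RADIUS, SIGMA0 = 100.0, 0.01          # constraint radius (m), noise std per metre

    def crb_trace(angles):
        """Trace of the inverse FIM for bearing-only sensors at given angles."""
        fim = np.zeros((2, 2))
        for a in angles:
            p = RADIUS * np.array([np.cos(a), np.sin(a)])   # sensor position
            delta = -p                                      # target at the origin
            d = np.linalg.norm(delta)
            grad = np.array([-delta[1], delta[0]]) / d**2   # d(bearing)/d(target)
            fim += np.outer(grad, grad) / (SIGMA0 * d) ** 2  # range-dependent noise
        return np.trace(np.linalg.inv(fim))                # Cramer-Rao bound trace

    res = minimize(crb_trace, x0=np.array([0.1, 0.5, 1.0]), method="Nelder-Mead")
    print("optimized sensor bearings (deg):", np.degrees(res.x) % 360)
    ```

    With equal ranges the optimum spreads the sensors angularly around the target, which matches the intuition that the CRB shrinks as the bearing lines of sight become less parallel.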

  19. Fluvial valleys in the heavily cratered terrains of Mars: Evidence for paleoclimatic change?

    NASA Technical Reports Server (NTRS)

    Gulick, V. C.; Baker, V. R.

    1993-01-01

    Whether the formation of the Martian valley networks provides unequivocal evidence for drastically different climatic conditions remains debatable. Recent theoretical climate modeling precludes the existence of a temperate climate early in Mars' geological history. An alternative hypothesis suggests that Mars had a globally higher heat flow early in its geological history, bringing water tables to within 350 m of the surface. While a globally higher heat flow would initiate ground water circulation at depth, the valley networks probably required water tables to be even closer to the surface. Additionally, it was previously reported that the clustered distribution of the valley networks within terrain types, particularly in the heavily cratered highlands, suggests regional hydrological processes were important. The case for localized hydrothermal systems is summarized and estimates of both erosion volumes and of the implied water volumes for several Martian valley systems are presented.

  20. Aerosol profiling using the ceilometer network of the German Meteorological Service

    NASA Astrophysics Data System (ADS)

    Flentje, H.; Heese, B.; Reichardt, J.; Thomas, W.

    2010-08-01

    The German Meteorological Service (DWD) operates about 52 lidar ceilometers within its synoptic observation network covering Germany. These affordable low-power lidar systems provide spatially and temporally highly resolved aerosol backscatter profiles, which can operationally provide quasi 3-D distributions of particle backscatter intensity. Originally designed for cloud height detection, recent significant improvements allow following the development of the boundary layer and detecting denser particle plumes in the free troposphere, such as volcanic ash, Saharan dust or fire smoke. Thus the network constitutes a powerful aerosol plume alerting and tracking system. If auxiliary aerosol information is available, the particle backscatter coefficient, the extinction coefficient and even particle mass concentrations may be estimated, though with large uncertainties. Therefore, large synergistic benefit is achieved if the ceilometers are linked to existing lidar networks like EARLINET or integrated into WMO's envisioned Global Aerosol Lidar Observation Network (GALION). To this end, we demonstrate the potential and limitations of ceilometer networks by means of three representative aerosol episodes over Europe, namely Saharan dust, Mediterranean fire smoke and, in more detail, the Icelandic Eyjafjoll volcano eruption from mid April 2010 onwards. The DWD (Jenoptik CHM15k) lidar ceilometer network tracked the Eyjafjoll ash layers over Germany and roughly estimated peak extinction coefficients and mass concentrations on 17 April of 4-6 (±2) × 10⁻⁴ m⁻¹ and 500-750 (±300) μg m⁻³, respectively, based on co-located aerosol optical depth, nephelometer (scattering coefficient) and particle mass concentration measurements. Though large, the uncertainties are small enough for the network to serve, for example, as an aviation advisory tool, indicating whether the legal flight-ban threshold of presently 2 mg/m³ is about to be exceeded.

  1. Nitrate removal in deep sediments of a nitrogen-rich river network: A test of a conceptual model

    USGS Publications Warehouse

    Stelzer, Robert S.; Bartsch, Lynn

    2012-01-01

    Many estimates of nitrogen removal in streams and watersheds do not include or account for nitrate removal in deep sediments, particularly in gaining streams. We developed and tested a conceptual model for nitrate removal in deep sediments in a nitrogen-rich river network. The model predicts that oxic, nitrate-rich groundwater will become depleted in nitrate as groundwater upwelling through sediments encounters a zone that contains buried particulate organic carbon, which promotes redox conditions favorable for nitrate removal. We tested the model at eight sites in upwelling reaches of lotic ecosystems in the Waupaca River Watershed that varied by three orders of magnitude in groundwater nitrate concentration. We measured denitrification potential in sediment core sections to 30 cm and developed vertical nitrate profiles to a depth of about 1 m with peepers and piezometer nests. Denitrification potential was higher, on average, in shallower core sections. However, core sections deeper than 5 cm accounted for 70%, on average, of the depth-integrated denitrification potential. Denitrification potential increased linearly with groundwater nitrate concentration up to 2 mg NO3-N/L but the relationship broke down at higher concentrations (> 5 mg NO3-N/L), a pattern that suggests nitrate saturation. At most sites groundwater nitrate declined from high concentrations at depth to much lower concentrations prior to discharge into the surface water. The profiles suggested that nitrate removal occurred at sediment depths between 20 and 40 cm. Dissolved oxygen concentrations were much higher in deep sediments than in pore water at 5 cm sediment depth at most locations. The substantial denitrification potential in deep sediments coupled with the declines in nitrate and dissolved oxygen concentrations in upwelling groundwater suggest that our conceptual model for nitrate removal in deep sediments is applicable to this river network. Our results suggest that nitrate removal rates can be high in deep sediments of upwelling stream reaches, which may have implications for efforts to understand and quantify nitrogen transport and removal at larger scales.

  2. Observational needs for estimating Alaskan soil carbon stocks under current and future climate

    DOE PAGES

    Vitharana, U. W. A.; Mishra, U.; Jastrow, J. D.; ...

    2017-01-24

    Representing land surface spatial heterogeneity when designing observation networks is a critical scientific challenge. Here we present a geospatial approach that utilizes the multivariate spatial heterogeneity of soil-forming factors—namely, climate, topography, land cover types, and surficial geology—to identify observation sites to improve soil organic carbon (SOC) stock estimates across the State of Alaska, USA. Standard deviations in existing SOC samples indicated that 657, 870, and 906 randomly distributed pedons would be required to quantify the average SOC stocks for 0–1 m, 0–2 m, and whole-profile depths, respectively, at a confidence interval of 5 kg C m⁻². Using the spatial correlation range of existing SOC samples, we identified that 309, 446, and 484 new observation sites are needed to estimate current SOC stocks to 1 m, 2 m, and whole-profile depths, respectively. We also investigated whether the identified sites might change under future climate by using eight decadal (2020–2099) projections of precipitation, temperature, and length of growing season for three representative concentration pathway (RCP 4.5, 6.0, and 8.5) scenarios of the Intergovernmental Panel on Climate Change. These analyses determined that 12 to 41 additional sites (906 + 12 to 41, depending upon the emission scenario) would be needed to capture the impact of future climate on Alaskan whole-profile SOC stocks by 2100. The identified observation sites represent spatially distributed locations across Alaska that capture the multivariate heterogeneity of soil-forming factors under current and future climatic conditions. This information is needed for designing monitoring networks and benchmarking of Earth system model results.

  4. Estimation of the geothermal potential of the Caldara di Manziana site in the Mts Sabatini Volcanic District (Central Italy) by integrating geochemical data and 3D-GIS modelling.

    NASA Astrophysics Data System (ADS)

    Ranaldi, Massimo; Lelli, Matteo; Tarchini, Luca; Carapezza, Maria Luisa; Patera, Antonio

    2016-04-01

    High-enthalpy geothermal fields of Central Italy are hosted in deeply fractured carbonate reservoirs occurring in thermally anomalous and seismically active zones. The Mts. Sabatini volcanic district, located north of Rome, has interesting deep temperatures (T), but it is characterized by low to very low seismicity and low permeability in the reservoir rocks (mostly because of hydrothermal self-sealing processes). Low PCO2 facilitates the complete sealing of the reservoir fractures, preventing hot fluids from rising and determining a low CO2 flux at the surface. Conversely, a high CO2 flux generally reflects a high pressure of CO2, suggesting that an active geothermal reservoir is present at depth. In the Mts. Sabatini district, the Caldara di Manziana (CM) is the only zone characterized by a very high CO2 flux (188 tons/day from a surface of 0.15 km², considering both the diffuse and viscous CO2 emission). This suggests the likely presence of an actively degassing geothermal reservoir at depth. The emitted gas is dominated by CO2 (>97 vol.%). Triangular irregular networks (TINs) have been used to represent the morphology of the bottom of the surficial volcanic deposits, the thickness of the impervious formation and the top of the geothermal reservoir. The TINs, integrated with T-gradient and deep well data, allowed estimation of the depth and temperature of the top of the geothermal reservoir, at ~1000 m below the surface and ~130°C, respectively. These estimates are fairly in agreement with those obtained by gas chemistry (818

  5. Recurrent Neural Network Applications for Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    Protopapas, Pavlos

    2017-06-01

    The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize to irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations using the error estimates from astronomical light curves. In addition, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to set hyperparameters correctly for a stable and performant solution: we circumvent the difficulty of manual tuning by optimizing ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning process.
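    A compact sketch of the ESN tuning idea (a toy reservoir on a synthetic series; the reservoir size, search ranges, washout length, and use of scikit-optimize's gp_minimize are assumptions, not the talk's setup):

    ```python
    import numpy as np
    from skopt import gp_minimize   # Bayesian optimization with GP priors

    rng = np.random.default_rng(3)
    series = np.sin(np.linspace(0, 60, 600)) + 0.1 * rng.normal(size=600)

    n_res = 100
    w_in = rng.normal(0, 0.5, n_res)
    w0 = rng.normal(0, 1.0, (n_res, n_res))
    w0 /= np.max(np.abs(np.linalg.eigvals(w0)))        # unit spectral radius

    def esn_error(params):
        """One-step-ahead MSE of a leaky ESN for a (spectral radius, leak) pair."""
        rho, leak = params
        w = rho * w0                                   # rescale spectral radius
        x, states = np.zeros(n_res), []
        for u in series[:-1]:
            x = (1 - leak) * x + leak * np.tanh(w_in * u + w @ x)
            states.append(x.copy())
        S, y = np.array(states[100:]), series[101:]    # drop 100-step washout
        w_out = np.linalg.lstsq(S, y, rcond=None)[0]   # linear readout
        return float(np.mean((S @ w_out - y) ** 2))

    res = gp_minimize(esn_error, [(0.1, 1.4), (0.1, 1.0)], n_calls=25, random_state=0)
    print("best (spectral radius, leak rate):", res.x)
    ```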

  6. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, which enhances finding stereo correspondences. In contrast to monocular visual odometry approaches, the scale of the scene can be observed thanks to the calibration of the individual depth maps. Furthermore, the light-field information promises better tracking capabilities than the monocular case. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to depth estimated from a single light-field image.
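    The Kalman-like update amounts to inverse-variance fusion of per-micro-image depth hypotheses. A minimal sketch with invented numbers:

    ```python
    # Each micro-image yields a virtual depth hypothesis with a variance; fusing
    # them Kalman-style shrinks the uncertainty of the probabilistic depth map.
    def fuse(depth_a, var_a, depth_b, var_b):
        """Fuse two depth estimates of one pixel (Kalman measurement update)."""
        k = var_a / (var_a + var_b)      # gain: trust the lower-variance estimate
        return depth_a + k * (depth_b - depth_a), (1 - k) * var_a

    depth, var = 2.10, 0.30              # initial virtual depth estimate (invented)
    for z, v in [(2.25, 0.20), (1.95, 0.40), (2.18, 0.10)]:  # other micro-images
        depth, var = fuse(depth, var, z, v)
    print(f"fused virtual depth {depth:.3f} with variance {var:.4f}")
    ```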

  7. Regional correlations of VS30 averaged over depths less than and greater than 30 meters

    USGS Publications Warehouse

    Boore, David M.; Thompson, Eric M.; Cadet, Héloïse

    2011-01-01

    Using velocity profiles from sites in Japan, California, Turkey, and Europe, we find that the time-averaged shear-wave velocity to 30 m (VS30), used as a proxy for site amplification in recent ground-motion prediction equations (GMPEs) and building codes, is strongly correlated with average velocities to depths less than 30 m (VSz, with z being the averaging depth). The correlations for sites in Japan (corresponding to the KiK-net network) show that VS30 is systematically larger for a given VSz than for profiles from the other regions. The difference largely results from the placement of the KiK-net stations on rock and rocklike sites, whereas stations in the other regions are generally placed in urban areas underlain by sediments. Using the KiK-net velocity profiles, we provide equations relating VS30 to VSz for z ranging from 5 to 29 m in 1-m increments. These equations (and those for California velocity profiles given in Boore, 2004b) can be used to estimate VS30 from VSz for sites where velocity profiles do not extend to 30 m. The scatter of the residuals decreases with depth, but even for an averaging depth of 5 m, a variation in log VS30 of ±1 standard deviation maps into less than a 20% uncertainty in ground motions given by recent GMPEs at short periods. The sensitivity of the ground motions to VS30 uncertainty is considerably larger at long periods (but is less than a factor of 1.2 for averaging depths greater than about 20 m). We also find that VS30 is correlated with VSz for z as great as 400 m for sites of the KiK-net network, providing some justification for using VS30 as a site-response variable for predicting ground motions at periods for which the wavelengths far exceed 30 m.
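    The form of such a relation can be illustrated with a toy log-log regression (synthetic profiles, not the KiK-net data; the 20-m averaging depth and the scatter model are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    vs20 = rng.uniform(150, 900, 200)              # time-averaged VS to 20 m (m/s)
    vs30 = vs20 * rng.normal(1.08, 0.04, 200)      # assumed gentle increase with depth

    # Fit log VS30 = a + b * log VSz, the shape used for extrapolating short profiles.
    b, a = np.polyfit(np.log10(vs20), np.log10(vs30), 1)
    vs30_hat = 10 ** (a + b * np.log10(vs20))
    resid = np.log10(vs30) - np.log10(vs30_hat)
    print(f"log-log fit: a={a:.3f}, b={b:.3f}, residual std={resid.std():.4f}")
    ```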

  8. Attenuation of coda waves in the Aswan Reservoir area, Egypt

    NASA Astrophysics Data System (ADS)

    Mohamed, H. H.; Mukhopadhyay, S.; Sharma, J.

    2010-09-01

    Coda attenuation characteristics of the Aswan Reservoir area of Egypt were analyzed using data recorded by a local earthquake network operated around the reservoir. 330 waveforms obtained from 28 earthquakes recorded by a network of 13 stations were used for this analysis. The magnitudes of these earthquakes varied between 1.4 and 2.5, and their maximum epicentral distance and focal depth were 45 km and 16 km, respectively. The single back-scattering method was used for estimation of coda Q (Qc). The Q0 values (Qc at 1 Hz) vary between 54 and 100, and the frequency dependence parameter "n" varies between 1 and 1.2 for lapse times between 15 s and 60 s. Coda Q (Qc) and related parameters at similar lapse times are similar to those observed for Koyna, India, where reservoir-induced seismicity is also observed. For both regions these parameters are also similar to those observed for tectonically active regions of the world, although Aswan is located in a moderately active region and Koyna in a tectonically stable region. However, Qc does not increase uniformly with increasing lapse time, as is observed for several parts of the world. Converting lapse time to depth/distance, it is observed that Qc becomes lower or remains almost constant at around 70 to 90 km and 120 km depth/distance. This indicates the presence of more attenuative material at those depth levels or distances compared with their immediate surroundings. It is proposed that this variation indicates the presence of fluid-filled fractures and/or partial melts at some depths/distances from the area of study. The Qc values are higher than those obtained by other workers for the Gulf of Suez and Al Dabbab region of Egypt, at distances greater than 300 km from the study area. The turbidity decreases with depth in the study area.
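    Under the single back-scattering model, the coda envelope decays as A(f, t) = S(f) t⁻¹ exp(−π f t / Qc), so Qc follows from a straight-line fit of ln(A·t) against lapse time. A sketch on a synthetic envelope (the frequency, window, and noise level are invented):

    ```python
    import numpy as np

    f, Qc_true = 1.5, 80.0                       # centre frequency (Hz), true coda Q
    t = np.linspace(15, 60, 200)                 # lapse-time window (s)
    rng = np.random.default_rng(5)
    A = 5.0 / t * np.exp(-np.pi * f * t / Qc_true) * rng.lognormal(0, 0.05, t.size)

    slope, _ = np.polyfit(t, np.log(A * t), 1)   # ln(A*t) = const - pi*f*t/Qc
    print("estimated Qc:", -np.pi * f / slope)
    ```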

  9. Time functions of deep earthquakes from broadband and short-period stacks

    USGS Publications Warehouse

    Houston, H.; Benz, H.M.; Vidale, J.E.

    1998-01-01

    To constrain dynamic source properties of deep earthquakes, we have systematically constructed broadband time functions of deep earthquakes by stacking and scaling teleseismic P waves from U.S. National Seismic Network, TERRAscope, and Berkeley Digital Seismic Network broadband stations. We examined 42 earthquakes with depths from 100 to 660 km that occurred between July 1, 1992 and July 31, 1995. To directly compare time functions, or to group them by size, depth, or region, it is essential to scale them to remove the effect of moment, which varies by more than 3 orders of magnitude for these events. For each event we also computed short-period stacks of P waves recorded by west coast regional arrays. The comparison of broadband with short-period stacks yields a considerable advantage, enabling more reliable measurement of event duration. A more accurate estimate of the duration better constrains the scaling procedure to remove the effect of moment, producing scaled time functions with both correct timing and amplitude. We find only subtle differences in the broadband time-function shape with moment, indicating successful scaling and minimal effects of attenuation at the periods considered here. The average shape of the envelopes of the short-period stacks is very similar to the average broadband time function. The main variations seen with depth are (1) a mild decrease in duration with increasing depth, (2) greater asymmetry in the time functions of intermediate events compared to deep ones, and (3) unexpected complexity and late moment release for events between 350 and 550 km, with seven of the eight events in that depth interval displaying markedly more complicated time functions with more moment release late in the rupture than most events above or below. The first two results are broadly consistent with our previous studies, while the third is reported here for the first time. The greater complexity between 350 and 550 km suggests greater heterogeneity in the failure process in that depth range. Copyright 1998 by the American Geophysical Union.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohay, Alan C.; Clayton, Ray E.; Sweeney, Mark D.

    The Hanford Seismic Assessment Program (HSAP) provides an uninterrupted collection of high-quality raw and processed seismic data from the Hanford Seismic Network for the U.S. Department of Energy and its contractors. The HSAP is responsible for locating and identifying sources of seismic activity and monitoring changes in the historical pattern of seismic activity at the Hanford Site. The data are compiled, archived, and published for use by the Hanford Site for waste management, natural phenomena hazards assessments, and engineering design and construction. In addition, the HSAP works with the Hanford Site Emergency Services Organization to provide assistance in the event of a significant earthquake on the Hanford Site. The Hanford Seismic Network and the Eastern Washington Regional Network consist of 44 individual sensor sites and 15 radio relay sites maintained by the Hanford Seismic Assessment Team. During FY 2010, the Hanford Seismic Network recorded 873 triggers on the seismometer system, which included 259 seismic events in the southeast Washington area and an additional 324 regional and teleseismic events. There were 210 events determined to be local earthquakes relevant to the Hanford Site. One hundred and fifty-five earthquakes were detected in the vicinity of Wooded Island, located about eight miles north of Richland just west of the Columbia River. The Wooded Island events recorded this fiscal year were a continuation of the swarm events observed during fiscal year 2009 and reported in previous quarterly and annual reports (Rohay et al. 2009a, 2009b, 2009c, 2010a, 2010b, and 2010c). Most events were considered minor (coda-length magnitude [Mc] less than 1.0) with the largest event recorded on February 4, 2010 (3.0Mc). The estimated depths of the Wooded Island events are shallow (averaging approximately 1.5 km deep) placing the swarm within the Columbia River Basalt Group. Based upon the last two quarters (Q3 and Q4) data, activity at the Wooded Island area swarm has largely subsided. Pacific Northwest National Laboratory will continue to monitor for activity at this location. The highest-magnitude events (3.0Mc) were recorded on February 4, 2010 within the Wooded Island swarm (depth 2.4 km) and May 8, 2010 on or near the Saddle Mountain anticline (depth 3.0 km). This latter event is not considered unusual in that earthquakes have been previously recorded at this location, for example, in October 2006 (Rohay et al. 2007). With regard to the depth distribution, 173 earthquakes were located at shallow depths (less than 4 km, most likely in the Columbia River basalts), 18 earthquakes were located at intermediate depths (between 4 and 9 km, most likely in the pre-basalt sediments), and 19 earthquakes were located at depths greater than 9 km, within the crystalline basement. Geographically, 178 earthquakes were located in known swarm areas, 4 earthquakes occurred on or near a geologic structure (Saddle Mountain anticline), and 28 earthquakes were classified as random events. The Hanford Strong Motion Accelerometer (SMA) network was triggered several times by the Wooded Island swarm events and the events located on or near the Saddle Mountain anticline. The maximum acceleration value recorded by the SMA network during fiscal year 2010 occurred February 4, 2010 (Wooded Island swarm event), approximately 2 times lower than the reportable action level for Hanford facilities (2% g) with no action required.

  11. Depth-Duration Frequency of Precipitation for Oklahoma

    USGS Publications Warehouse

    Tortorelli, Robert L.; Rea, Alan; Asquith, William H.

    1999-01-01

    A regional frequency analysis was conducted to estimate the depth-duration frequency of precipitation for 12 durations in Oklahoma (15, 30, and 60 minutes; 1, 2, 3, 6, 12, and 24 hours; and 1, 3, and 7 days). Seven selected frequencies, expressed as recurrence intervals, were investigated (2, 5, 10, 25, 50, 100, and 500 years). L-moment statistics were used to summarize depth-duration data and to determine the appropriate statistical distributions. Three different rain-gage networks provided the data (15-minute, 1-hour, and 1-day). The 60-minute and 1-hour durations, and the 24-hour and 1-day durations, were analyzed separately. Data were used from rain-gage stations with at least 10 years of record and within Oklahoma or about 50 kilometers into bordering states. Precipitation annual maxima (depths) were determined from the data for 110 15-minute, 141 hourly, and 413 daily stations. The L-moment statistics for depths for all durations were calculated for each station using unbiased L-moment estimators for the mean, L-scale, L-coefficient of variation, L-skew, and L-kurtosis. The relation between L-skew and L-kurtosis (the L-moment ratio diagram) and goodness-of-fit measures were used to select the frequency distributions. The three-parameter generalized logistic distribution was selected to model the frequencies of 15-, 30-, and 60-minute annual maxima, and the three-parameter generalized extreme-value distribution was selected to model the frequencies of 1-hour to 7-day annual maxima. The mean for each station and duration was corrected for the bias associated with fixed-interval recording of precipitation amounts. The L-scale and spatially averaged L-skew statistics were used to compute the location, scale, and shape parameters of the selected distribution for each station and duration. The three parameters were used to calculate the depth-duration-frequency relations for each station. The precipitation depths for selected frequencies were contoured from weighted depth surfaces to produce maps from which the precipitation depth-duration-frequency curve for selected storm durations can be determined for any site in Oklahoma.
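    The unbiased sample L-moment estimators named above have a compact probability-weighted-moment form. A sketch applied to synthetic annual maxima (the Gumbel parameters are invented; this is not the study's code):

    ```python
    import numpy as np

    def l_moments(sample):
        """Unbiased sample L-moments via probability-weighted moments."""
        x = np.sort(sample)
        n = len(x)
        i = np.arange(1, n + 1)
        b0 = x.mean()
        b1 = np.sum((i - 1) / (n - 1) * x) / n
        b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
        b3 = np.sum((i - 1) * (i - 2) * (i - 3) /
                    ((n - 1) * (n - 2) * (n - 3)) * x) / n
        l1, l2 = b0, 2 * b1 - b0
        l3, l4 = 6 * b2 - 6 * b1 + b0, 20 * b3 - 30 * b2 + 12 * b1 - b0
        # mean, L-scale, L-CV, L-skew, L-kurtosis
        return l1, l2, l2 / l1, l3 / l2, l4 / l2

    maxima = np.random.default_rng(6).gumbel(50.0, 12.0, 80)  # synthetic maxima, mm
    print(dict(zip(["l1", "l2", "L-CV", "L-skew", "L-kurt"], l_moments(maxima))))
    ```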

  12. Deep convolutional neural network processing of aerial stereo imagery to monitor vulnerable zones near power lines

    NASA Astrophysics Data System (ADS)

    Qayyum, Abdul; Saad, Naufal M.; Kamel, Nidal; Malik, Aamir Saeed

    2018-01-01

    The monitoring of vegetation near high-voltage transmission power lines and poles is tedious. Blackouts present a huge challenge to power distribution companies and often occur due to tree growth in hilly and rural areas. Numerous methods of monitoring hazardous overgrowth exist, but they are expensive and time-consuming. Accurate estimation of tree and vegetation heights near power poles can prevent the disruption of power transmission in vulnerable zones. This paper presents a cost-effective approach based on a convolutional neural network (CNN) algorithm to compute the heights (depth maps) of objects proximal to power poles and transmission lines. The proposed CNN extracts and classifies features by feeding convolutional and pooling outputs into fully connected layers that capture prominent features from stereo image patches. Unmanned aerial vehicle or satellite stereo image datasets can thus provide a feasible and cost-effective means of identifying threat levels based on height and distance estimates of hazardous vegetation and other objects. Results were compared with existing disparity map estimation techniques, such as graph cut, dynamic programming, belief propagation, and area-based methods. The proposed method achieved an accuracy rate of 90%.
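    For contrast with the CNN, the area-based baseline the paper compares against can be sketched as plain sum-of-squared-differences block matching (the window size, disparity range, and synthetic shift are invented):

    ```python
    import numpy as np

    def block_match(left, right, max_disp=16, half=3):
        """SSD block matching: per-pixel disparity over a fixed search range."""
        h, w = left.shape
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                patch = left[y - half:y + half + 1, x - half:x + half + 1]
                costs = [np.sum((patch - right[y - half:y + half + 1,
                                              x - d - half:x - d + half + 1]) ** 2)
                         for d in range(max_disp)]
                disp[y, x] = int(np.argmin(costs))   # lowest-cost disparity wins
        return disp

    rng = np.random.default_rng(7)
    right_img = rng.random((40, 60))
    left_img = np.roll(right_img, 5, axis=1)         # synthetic 5-pixel shift
    d = block_match(left_img, right_img)
    print("median disparity:", np.median(d[10:-10, 25:-10]))  # expect 5
    ```

    Given the disparity, height follows from triangulation with the known baseline and focal length, which is how such maps feed distance and threat-level estimates.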

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawase, Kazumasa; Uehara, Yasushi; Teramoto, Akinobu

    Silicon dioxide (SiO{sub 2}) films formed by chemical vapor deposition (CVD) were treated with oxygen radical oxidation using Ar/O{sub 2} plasma excited by microwave. The mass density depth profiles, carrier trap densities, and current-voltage characteristics of the radical-oxidized CVD-SiO{sub 2} films were investigated. The mass density depth profiles were estimated with x ray reflectivity measurement using synchrotron radiation of SPring-8. The carrier trap densities were estimated with x ray photoelectron spectroscopy time-dependent measurement. The mass densities of the radical-oxidized CVD-SiO{sub 2} films were increased near the SiO{sub 2} surface. The densities of the carrier trap centers in these films were decreased. The leakage currents of the metal-oxide-semiconductor capacitors fabricated by using these films were reduced. It is probable that the insulation properties of the CVD-SiO{sub 2} film are improved by the increase in the mass density and the decrease in the carrier trap density caused by the restoration of the Si-O network with the radical oxidation.

  14. A teleseismic analysis of the New Brunswick earthquake of January 9, 1982.

    USGS Publications Warehouse

    Choy, G.L.; Boatwright, J.; Dewey, J.W.; Sipkin, S.A.

    1983-01-01

    The analysis of the New Brunswick earthquake of January 9, 1982, has important implications for the evaluation of seismic hazards in eastern North America. Although moderate in size (mb 5.7), it was well recorded teleseismically. Source characteristics of this earthquake have been determined from analysis of data that were digitally recorded by the Global Digital Seismography Network. From broadband displacement and velocity records of P waves, we have obtained a dynamic description of the rupture process as well as conventional static properties of the source. The depth of the hypocenter is estimated to be 9 km from depth phases. The focal mechanism determined from the broadband data corresponds to predominantly thrust faulting. From the variation in the waveforms, the direction of slip is inferred to be updip on a west-dipping, NNE-striking fault plane. The steep dip of the inferred fault plane suggests that the earthquake occurred on a preexisting fault that was at one time a normal fault. From an inversion of body wave pulse durations, the estimated rupture length is 5.5 km. -from Authors

  15. Infiltration and solute transport experiments in unsaturated sand and gravel, Cape Cod, Massachusetts: Experimental design and overview of results

    USGS Publications Warehouse

    Rudolph, David L.; Kachanoski , R. Gary; Celia, Michael A.; LeBlanc, Denis R.; Stevens, Jonathon H.

    1996-01-01

    A series of infiltration and tracer experiments was conducted in unsaturated sand and gravel deposits on Cape Cod, Massachusetts. A network of 112 porous cup lysimeters and 168 time domain reflectometry (TDR) probes was deployed at depths from 0.25 to 2.0 m below ground surface along the centerline of a 2-m by 10-m test plot. The test plot was irrigated at rates ranging from 7.9 to 37.0 cm h−1 through a sprinkler system. Transient and steady state water content distributions were monitored with the TDR probes and spatial properties of water content distributions were determined from the TDR data. The spatial variance of the water content tended to increase as the average water content increased. In addition, estimated horizontal correlation length scales for water content were significantly smaller than those estimated by previous investigators for saturated hydraulic conductivity. Under steady state flow conditions at each irrigation rate, a sodium chloride solution was released as a tracer at ground surface and tracked with both the lysimeter and TDR networks. Transect-averaged breakthrough curves at each monitoring depth were constructed both from solute concentrations measured in the water samples and flux concentrations inferred from the TDR measurements. Transport properties, including apparent solute velocities, dispersion coefficients, and total mass balances, were determined independently from both sets of breakthrough curves. The dispersion coefficients tended to increase with depth, reaching a constant value with the lysimeter data and appearing to increase continually with the TDR data. The variations with depth of the solute transport parameters, along with observations of water and solute mass balance and spatial distributions of water content, provide evidence of significant three-dimensional flow during the irrigation experiments. The TDR methods are shown to efficiently provide dense spatial and temporal data sets for both flow and solute transport in unsaturated sediments with minimal sediment and flow field disturbance. Combined implementation of lysimeters and TDR probes can enhance data interpretation particularly when three-dimensional flow conditions are anticipated.

  16. Source mechanism of the 2006 M5.1 Wen'an Earthquake determined from a joint inversion of local and teleseismic broadband waveform data

    NASA Astrophysics Data System (ADS)

    Huang, J.; Ni, S.; Niu, F.; Fu, R.

    2007-12-01

    On July 4th, 2006, a magnitude 5.1 earthquake occurred at Wen'an, ~100 km south of Beijing, and was felt in the Beijing metropolitan area. To better understand the regional tectonics, we have inverted local and teleseismic broadband waveform data to determine the focal mechanism of this earthquake. We selected waveform data of 9 stations from the recently installed Beijing metropolitan digital Seismic Network (BSN). These stations are located within 600 km and cover a good azimuthal range around the earthquake. To better fit the lower-amplitude P waveform, we employed two different weights for the P wave and surface wave arrivals, respectively. A grid search method was employed to find the strike, dip and slip of the earthquake that best fit the P and surface waveforms recorded on all three components (the tangential component of the P-wave arrivals was not used). Synthetic waveforms were computed with an F-K method. Two crustal velocity models were used in the synthetic calculation to reflect a rapid east-west transition in crustal structure observed by seismic and geological studies in the study area. The 3D grid search results in reasonable constraints on the fault geometry and the slip vector, with a less well determined focal depth. We therefore combined teleseismic waveform data from 8 stations of the Global Seismic Network in a joint inversion. Clearly identifiable depth phases (pP, sP) recorded at the teleseismic stations provided a better constraint on the resulting source depth. Results from the joint inversion indicate that the Wen'an earthquake is mainly a right-lateral strike-slip event (slip -150°) that occurred on a near-vertical (dip 80°), NNE-trending (strike 210°) fault. The estimated focal depth is ~14-15 km, and the moment magnitude is 5.1. The estimated fault geometry agrees well with the aftershock distribution and is consistent with the major fault systems in the area, which were developed under a NNE-SSW oriented compressional stress field. Key words: waveform modeling, source mechanism, grid search method, cut and paste method, aftershock distribution

  17. Morphological analysis of pore size and connectivity in a thick mixed-culture biofilm.

    PubMed

    Rosenthal, Alex F; Griffin, James S; Wagner, Michael; Packman, Aaron I; Balogun, Oluwaseyi; Wells, George F

    2018-05-19

    Morphological parameters are commonly used to predict transport and metabolic kinetics in biofilms. Yet, quantification of biofilm morphology remains challenging due to imaging technology limitations and lack of robust analytical approaches. We present a novel set of imaging and image analysis techniques to estimate internal porosity, pore size distributions, and pore network connectivity to a depth of 1 mm at a resolution of 10 µm in a biofilm exhibiting both heterotrophic and nitrifying activity. Optical coherence tomography (OCT) scans revealed an extensive pore network with diameters as large as 110 µm directly connected to the biofilm surface and surrounding fluid. Thin section fluorescence in situ hybridization microscopy revealed ammonia oxidizing bacteria (AOB) distributed through the entire thickness of the biofilm. AOB were particularly concentrated in the biofilm around internal pores. Areal porosity values estimated from OCT scans were consistently lower than those estimated from multiphoton laser scanning microscopy, though the two imaging modalities showed a statistically significant correlation (r = 0.49, p < 0.0001). Estimates of areal porosity were moderately sensitive to grey level threshold selection, though several automated thresholding algorithms yielded similar values to those obtained by manual thresholding performed by a panel of environmental engineering researchers (±25% relative error). These findings advance our ability to quantitatively describe the geometry of biofilm internal pore networks at length scales relevant to engineered biofilm reactors and suggest that internal pore structures provide crucial habitat for nitrifier growth.
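    The threshold-sensitivity test can be illustrated with a toy comparison of an automated Otsu threshold against a manually chosen grey level (a synthetic image with invented intensity distributions; scikit-image is assumed available):

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu

    rng = np.random.default_rng(9)
    img = np.where(rng.random((256, 256)) < 0.25,
                   rng.normal(60, 10, (256, 256)),     # darker void (pore) pixels
                   rng.normal(140, 15, (256, 256)))    # brighter biomass pixels

    otsu = threshold_otsu(img)
    for name, thr in [("otsu", otsu), ("manual", 100.0)]:
        porosity = float((img < thr).mean())           # void-pixel fraction
        print(f"{name} threshold {thr:.1f} -> areal porosity {porosity:.3f}")
    ```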

  18. The Atmospheric Mercury Network: measurement and initial examination of an ongoing atmospheric mercury record across North America

    NASA Astrophysics Data System (ADS)

    Gay, D. A.; Schmeltz, D.; Prestbo, E.; Olson, M.; Sharac, T.; Tordon, R.

    2013-04-01

    The National Atmospheric Deposition Program (NADP) developed and operates a collaborative network of atmospheric mercury monitoring sites based in North America - the Atmospheric Mercury Network (AMNet). The justification for the network was growing interest and demand from many scientists and policy makers for a robust database of measurements to improve model development, assess policies and programs, and improve estimates of mercury dry deposition. Many different agencies and groups support the network, including federal, state, tribal, and international governments, academic institutions, and private companies. AMNet has added two high elevation sites outside of continental North America in Hawaii and Taiwan because of new partnerships forged within NADP. Network sites measure concentrations of atmospheric mercury fractions using automated, continuous mercury speciation systems. The procedures that NADP developed for field operations, data management, and quality assurance ensure that the network makes scientifically valid and consistent measurements. AMNet reports concentrations of hourly gaseous elemental mercury (GEM), two-hour gaseous oxidized mercury (GOM), and two-hour particulate-bound mercury less than 2.5 microns in size (PBM2.5). As of January 2012, over 450 000 valid observations are available from 30 stations. The AMNet also collects ancillary meteorological data and information on land-use and vegetation, when available. We present atmospheric mercury data comparisons by time (3 yr) at 22 unique site locations. Highlighted are contrasting values for site locations across the network: urban versus rural, coastal versus high-elevation and the range of maximum observations. The data presented should catalyze the formation of many scientific questions that may be answered through further in-depth analysis and modeling studies of the AMNet database. All data and methods are publicly available through an online database on the NADP website (http://nadp.isws.illinois.edu/amn/). Future network directions are to foster new network partnerships and continue to collect, quality assure, and post data, including dry deposition estimates, for each fraction.

  19. The Atmospheric Mercury Network: measurement and initial examination of an ongoing atmospheric mercury record across North America

    NASA Astrophysics Data System (ADS)

    Gay, D. A.; Schmeltz, D.; Prestbo, E.; Olson, M.; Sharac, T.; Tordon, R.

    2013-11-01

    The National Atmospheric Deposition Program (NADP) developed and operates a collaborative network of atmospheric-mercury-monitoring sites based in North America - the Atmospheric Mercury Network (AMNet). The justification for the network was growing interest and demand from many scientists and policy makers for a robust database of measurements to improve model development, assess policies and programs, and improve estimates of mercury dry deposition. Many different agencies and groups support the network, including federal, state, tribal, and international governments, academic institutions, and private companies. AMNet has added two high-elevation sites outside of continental North America in Hawaii and Taiwan because of new partnerships forged within NADP. Network sites measure concentrations of atmospheric mercury fractions using automated, continuous mercury speciation systems. The procedures that NADP developed for field operations, data management, and quality assurance ensure that the network makes scientifically valid and consistent measurements. AMNet reports concentrations of hourly gaseous elemental mercury (GEM), two-hour gaseous oxidized mercury (GOM), and two-hour particulate-bound mercury less than 2.5 microns in size (PBM2.5). As of January 2012, over 450 000 valid observations are available from 30 stations. AMNet also collects ancillary meteorological data and information on land use and vegetation, when available. We present atmospheric mercury data comparisons by time (3 yr) at 21 individual sites and instruments. Highlighted are contrasting values for site locations across the network: urban versus rural, coastal versus high elevation and the range of maximum observations. The data presented should catalyze the formation of many scientific questions that may be answered through further in-depth analysis and modeling studies of the AMNet database. All data and methods are publicly available through an online database on the NADP website (http://nadp.sws.uiuc.edu/amn/). Future network directions are to foster new network partnerships and continue to collect, quality assure, and post data, including dry deposition estimates, for each fraction.

  20. Contemporary horizontal crustal movement estimation for northwestern Vietnam inferred from repeated GPS measurements

    NASA Astrophysics Data System (ADS)

    Duong, Nguyen Anh; Sagiya, Takeshi; Kimata, Fumiaki; To, Tran Dinh; Hai, Vy Quoc; Cong, Duong Chi; Binh, Nguyen Xuan; Xuyen, Nguyen Dinh

    2013-12-01

    We present a horizontal velocity field determined from a GPS network with 22 sites surveyed from 2001 to 2012 in northwestern Vietnam. The velocity is accurately estimated at each site by fitting a linear trend to each coordinate time series, after accounting for coseismic displacements caused by the 2004 Sumatra and the 2011 Tohoku earthquakes using static fault models. Considering the coseismic effects of the earthquakes, the motion of northwestern Vietnam is 34.3 ± 0.7 mm/yr at an azimuth of N108° ± 0.7°E in ITRF2008. This motion is close to, but slightly different from, that of the South China block. The area is in a transition zone between this block, the Sundaland block, and the Baoshan sub-block. At the local scale, a detailed estimation of the crustal deformation across major fault zones is geodetically revealed for the first time. We identify a locking depth of 15.3 ± 9.8 km with an accumulating left-lateral slip rate of 1.8 ± 0.3 mm/yr for the Dien Bien Phu fault, and a shallow locking depth with a right-lateral slip rate of 1.0 ± 0.6 mm/yr for the Son La and Da River faults.
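    The velocity estimation reduces to least squares on a design matrix holding a linear trend plus Heaviside step terms at the earthquake epochs, so the coseismic offsets do not bias the secular rate. A sketch on a synthetic east-component series (the offset sizes and noise level are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    t = np.arange(2001.0, 2012.0, 1 / 365.25)          # decimal years
    quakes = [2004.98, 2011.19]                        # 2004 Sumatra, 2011 Tohoku
    east = (34.3 * (t - t[0]) + 2.0 * (t > quakes[0]) + 1.2 * (t > quakes[1])
            + rng.normal(0, 2.0, t.size))              # mm, invented offsets

    # Columns: intercept, secular trend, one step per earthquake epoch.
    G = np.column_stack([np.ones_like(t), t - t[0]] +
                        [(t > tq).astype(float) for tq in quakes])
    m, *_ = np.linalg.lstsq(G, east, rcond=None)
    print(f"velocity: {m[1]:.2f} mm/yr, coseismic offsets: {m[2]:.2f}, {m[3]:.2f} mm")
    ```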

  1. The 2011 Tohoku-oki Earthquake related to a large velocity gradient within the Pacific plate

    NASA Astrophysics Data System (ADS)

    Matsubara, Makoto; Obara, Kazushige

    2015-04-01

    We conduct seismic tomography using arrival-time data picked by the high-sensitivity seismograph network (Hi-net) operated by the National Research Institute for Earth Science and Disaster Prevention (NIED). We used earthquakes off the coast, outside the seismic network, around the source region of the 2011 Tohoku-oki Earthquake, with centroid depths estimated from moment tensor inversion by NIED F-net (broadband seismograph network), as well as earthquakes within the seismic network located by Hi-net. The target region, 20-48N and 120-148E, covers the Japanese Islands from Hokkaido to Okinawa. A total of 4,622,346 manually picked P-wave and 3,062,846 S-wave arrival times for 100,733 earthquakes recorded at 1,212 stations from October 2000 to August 2009 are available for use in the tomographic method. In the final iteration, we estimate the P-wave slowness at 458,234 nodes and the S-wave slowness at 347,037 nodes. The inversion reduces the root mean square of the P-wave traveltime residual from 0.455 s to 0.187 s and that of the S-wave data from 0.692 s to 0.228 s after eight iterations (Matsubara and Obara, 2011). Centroid depths are determined using a Green's function approach (Okada et al., 2004), as in NIED F-net. For events distant from the seismic network, the centroid depth is more reliable than the depth determined by NIED Hi-net, since there are no stations above the hypocenter. We determine the upper boundary of the Pacific plate based on the velocity structure and the earthquake hypocentral distribution. The upper boundary of the low-velocity (low-V) oceanic crust corresponds to the plate boundary where thrust earthquakes are expected to occur. Where we do not observe low-V oceanic crust, we determine the upper boundary of the upper layer of the double seismic zone within the high-V Pacific plate. We assume a depth of 7 km at the Japan Trench. We can investigate the velocity structure within the Pacific plate to about 10 km beneath the plate boundary, since the rays from hypocenters around the coseismic region of the Tohoku-oki earthquake take off downward and pass through the Pacific plate. The landward low-V zone with a large anomaly corresponds to the western edge of the coseismic slip zone of the 2011 Tohoku-oki earthquake. The initial break point (hypocenter) is associated with the edge of a slightly low-V and low-Vp/Vs zone corresponding to the boundary between the low- and high-V zones. The trenchward low-V and low-Vp/Vs zone extending southwestward from the hypocenter may indicate the existence of a subducted seamount. The high-V zone and low-Vp/Vs zone might have accumulated strain and produced the huge coseismic slip zone of the 2011 Tohoku earthquake. The low-V and low-Vp/Vs zone is a slight fluctuation within the high-V zone and might have acted as the initial break point of the 2011 Tohoku earthquake. References: Matsubara, M. and K. Obara (2011), The 2011 Off the Pacific Coast of Tohoku earthquake related to a strong velocity gradient with the Pacific plate, Earth Planets Space, 63, 663-667. Okada, Y., K. Kasahara, S. Hori, K. Obara, S. Sekiguchi, H. Fujiwara, and A. Yamamoto (2004), Recent progress of seismic observation networks in Japan - Hi-net, F-net, K-NET and KiK-net, Research News, Earth Planets Space, 56, xv-xxviii.
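
    The inversion step summarized above (travel-time residuals inverted for slowness perturbations) can be illustrated with a toy damped least-squares problem; the sparse matrix below is random, standing in for a real ray-length matrix:

```python
# Illustrative sketch of one linearized tomography iteration: travel-time
# residuals d are inverted for slowness perturbations m via d = G m, where
# G holds ray lengths through each node/cell. Toy numbers, not the Hi-net data.
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

n_rays, n_nodes = 2000, 500
G = sprandom(n_rays, n_nodes, density=0.02, random_state=1)  # ray-length matrix
m_true = np.random.default_rng(1).normal(0, 0.01, n_nodes)   # slowness anomaly
d = G @ m_true + np.random.default_rng(2).normal(0, 0.05, n_rays)

m_est = lsqr(G, d, damp=0.1)[0]              # damped least squares
rms = lambda r: np.sqrt(np.mean(r ** 2))
print(f"RMS residual: {rms(d):.3f} s -> {rms(d - G @ m_est):.3f} s")
```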

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohay, Alan C.; Sweeney, Mark D.; Hartshorn, Donald C.

    The Hanford Seismic Assessment Program (HSAP) provides an uninterrupted collection of high-quality raw and processed seismic data from the Hanford Seismic Network for the U.S. Department of Energy and its contractors. The HSAP is responsible for locating and identifying sources of seismic activity and monitoring changes in the historical pattern of seismic activity at the Hanford Site. The data are compiled, archived, and published for use by the Hanford Site for waste management, natural phenomena hazards assessments, and engineering design and construction. In addition, the HSAP works with the Hanford Site Emergency Services Organization to provide assistance in the event of a significant earthquake on the Hanford Site. The Hanford Seismic Network and the Eastern Washington Regional Network consist of 44 individual sensor sites and 15 radio relay sites maintained by the Hanford Seismic Assessment Team. The Hanford Seismic Network recorded 23 local earthquakes during the third quarter of FY 2010. Sixteen earthquakes were located at shallow depths (less than 4 km), five at intermediate depths (between 4 and 9 km), most likely in the pre-basalt sediments, and two at depths greater than 9 km, within the basement. Geographically, twelve earthquakes were located in known swarm areas, three occurred near a geologic structure (the Saddle Mountain anticline), and eight were classified as random events. The highest-magnitude event (3.0 Mc) was recorded on May 8, 2010 at a depth of 3.0 km, with an epicenter near the Saddle Mountain anticline. Later in the quarter (May 24 and June 28), two additional earthquakes were recorded at nearly the same location. These events are not considered unusual in that earthquakes have been previously recorded at this location, for example in October 2006 (Rohay et al., 2007). Six earthquakes were detected in the vicinity of Wooded Island, located about eight miles north of Richland just west of the Columbia River. The Wooded Island events recorded this quarter were a continuation of the swarm events observed during the 2009 and 2010 fiscal years and reported in previous quarterly and annual reports (Rohay et al., 2009a, 2009b, 2009c, 2010a, and 2010b). All events were considered minor (coda-length magnitude [Mc] less than 1.0), with a maximum depth estimated at 1.7 km. Based upon this quarter's activity, it is likely that the Wooded Island swarm has subsided. Pacific Northwest National Laboratory (PNNL) will continue to monitor for activity at this location.

  3. Spatial analysis of storm depths from an Arizona raingage network

    NASA Technical Reports Server (NTRS)

    Fennessey, N. M.; Eagleson, P. S.; Qinliang, W.; Rodriguez-Iturbe, I.

    1986-01-01

    Eight years of summer rainstorm observations from a dense network of 93 raingages operated by the U.S. Department of Agriculture, Agricultural Research Service, in the 150 km² Walnut Gulch experimental catchment near Tucson, Arizona, are analyzed. Storms are defined by the total depths collected at each raingage during the noon-to-noon period for which depth was recorded at any of the gages. For each of the resulting 428 storm days, the gage depths are interpolated onto a dense grid and the resulting random field is analyzed to obtain moments, isohyetal plots, the spatial correlation function, the variance function, and the spatial distribution of storm depth.
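
    A hedged sketch of the storm-day workflow described (synthetic gage data; griddata stands in for whatever interpolator the authors used):

```python
# A minimal sketch, with made-up gage data, of the kind of analysis described:
# interpolate point depths onto a dense grid and compute field moments.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
xy = rng.uniform(0, 12, (93, 2))             # 93 gage locations (km)
depth = rng.gamma(2.0, 5.0, 93)              # storm-day depths (mm)

gx, gy = np.mgrid[0:12:200j, 0:12:200j]      # dense interpolation grid
field = griddata(xy, depth, (gx, gy), method="linear")

print("mean depth %.1f mm, variance %.1f mm^2"
      % (np.nanmean(field), np.nanvar(field)))
```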

  4. On the sample complexity of learning for networks of spiking neurons with nonlinear synaptic interactions.

    PubMed

    Schmitt, Michael

    2004-09-01

    We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly more important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks is quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.
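
    In notation (our paraphrase, with W denoting the number of network parameters; see the paper for the exact statements), the bounds read:

```latex
% Paraphrase of the bounds described above; W = number of network parameters.
% Constant-depth feedforward networks of spiking neurons ("almost linear"):
\operatorname{Pdim} = O(W \log W)
% Networks of unconstrained depth ("quadratic"):
\operatorname{Pdim} = \Theta(W^{2})
```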

  5. INTELLIGENT COMPUTING SYSTEM FOR RESERVOIR ANALYSIS AND RISK ASSESSMENT OF THE RED RIVER FORMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenneth D. Luff

    2002-06-30

    Integrated software has been written that comprises the tool kit for the Intelligent Computing System (ICS). Luff Exploration Company is applying these tools for analysis of carbonate reservoirs in the southern Williston Basin. The integrated software programs are designed to be used by a small team consisting of an engineer, a geologist, and a geophysicist. The software tools are flexible and robust, allowing application in many environments for hydrocarbon reservoirs. Keystone elements of the software tools include clustering and neural-network techniques. The tools are used to transform seismic attribute data to reservoir characteristics such as storage (phi-h), probable oil-water contacts, structural depths, and structural growth history. When these reservoir characteristics are combined with neural-network or fuzzy-logic solvers, they can provide a more complete description of the reservoir. This leads to better estimates of hydrocarbons in place, areal limits, and potential for infill or step-out drilling. These tools were developed and tested using seismic, geologic, and well data from the Red River Play in Bowman County, North Dakota and Harding County, South Dakota. The geologic setting for the Red River Formation is a shallow-shelf carbonate at depths from 8,000 to 10,000 ft.
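
    A sketch of the clustering-plus-neural-network idea named as a keystone element (synthetic attributes; phi-h used as the example target; not Luff Exploration's code):

```python
# A hedged sketch of the two keystone elements named above: cluster seismic
# attributes, then regress a reservoir property (here a phi-h proxy) on them
# with a small neural network. All data below are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
attrs = rng.normal(size=(300, 6))            # 6 seismic attributes per trace
phi_h = attrs @ rng.normal(size=6) + rng.normal(0, 0.1, 300)  # storage proxy

facies = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(attrs)
X = np.column_stack([attrs, facies])         # attributes + cluster label

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X, phi_h)
print("R^2 on training data:", round(model.score(X, phi_h), 3))
```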

  6. INTELLIGENT COMPUTING SYSTEM FOR RESERVOIR ANALYSIS AND RISK ASSESSMENT OF THE RED RIVER FORMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenneth D. Luff

    2002-09-30

    Integrated software has been written that comprises the tool kit for the Intelligent Computing System (ICS). Luff Exploration Company is applying these tools for analysis of carbonate reservoirs in the southern Williston Basin. The integrated software programs are designed to be used by a small team consisting of an engineer, a geologist, and a geophysicist. The software tools are flexible and robust, allowing application in many environments for hydrocarbon reservoirs. Keystone elements of the software tools include clustering and neural-network techniques. The tools are used to transform seismic attribute data to reservoir characteristics such as storage (phi-h), probable oil-water contacts, structural depths, and structural growth history. When these reservoir characteristics are combined with neural-network or fuzzy-logic solvers, they can provide a more complete description of the reservoir. This leads to better estimates of hydrocarbons in place, areal limits, and potential for infill or step-out drilling. These tools were developed and tested using seismic, geologic, and well data from the Red River Play in Bowman County, North Dakota and Harding County, South Dakota. The geologic setting for the Red River Formation is a shallow-shelf carbonate at depths from 8,000 to 10,000 ft.

  7. Accessibility, searchability, transparency and engagement of soil carbon data: The International Soil Carbon Network

    NASA Astrophysics Data System (ADS)

    Harden, Jennifer W.; Hugelius, Gustaf; Koven, Charlie; Sulman, Ben; O'Donnell, Jon; He, Yujie

    2016-04-01

    Soils are capacitors for carbon and water entering and exiting through land-atmosphere exchange. Capturing the spatiotemporal variations in soil C exchange through monitoring and modeling is difficult, in part because data are reported unevenly across spatial, temporal, and management scales, and in part because the unit of measure generally involves destructive harvest or non-recurrent measurements. In order to improve our fundamental basis for understanding soil C exchange, a multi-user, open-source, searchable database and network of scientists has been formed. The International Soil Carbon Network (ISCN) is a self-chartered, member-based and member-owned network of scientists dedicated to soil carbon science. Attributes of the ISCN include: 1) targeted ISCN Action Groups, teams of motivated researchers that propose and pursue specific soil C research questions with the aim of synthesizing seminal articles regarding soil C fate; 2) datasets contributed to date by institutions and individuals to a comprehensive, searchable, open-access database that currently includes over 70,000 geolocated profiles reporting soil C and other soil properties; and 3) derivative products resulting from the database, including depth-attenuation attributes for C concentration and storage, C storage maps, and model-based assessments of emission/sequestration for future climate scenarios. Several examples illustrate the power of such a database and its engagement with the science community. First, a simplified, data-constrained global ecosystem model estimated a global sensitivity of permafrost soil carbon to climate change (γ sensitivity) of -14 to -19 Pg C °C⁻¹ of warming on a 100-year time scale. Second, using mathematical characterizations of depth profiles of organic carbon storage, C at the soil surface reflects net primary production (NPP) and its allotment as moss or litter, while e-folding depths are correlated with rooting depth. Third, storage of deep C is highly correlated with the bulk density and porosity of the rock/sediment matrix; thus C storage is most stable at depth, yet is susceptible to changes in tillage, rooting depths, and erosion/sedimentation. Fourth, current ESMs likely overestimate the turnover time of soil organic carbon and subsequently overestimate soil carbon sequestration, so datasets combined with other soil properties will help constrain ESM predictions. Last, analysis of soil horizon and carbon data showed that soils with a history of tillage had significantly lower carbon concentrations in both near-surface and deep layers, and that the effect persisted even in reforested areas. In addition to the opportunities for empirical science using a large database, the database has great promise for the evaluation of biogeochemical and earth system models. The preservation of individual soil core measurements avoids issues with spatial averaging while facilitating the evaluation of advanced model processes such as depth distributions of soil carbon, land use impacts, and spatial heterogeneity.
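
    The depth-attenuation characterization mentioned among the derivative products can be illustrated by fitting an exponential profile and reading off the e-folding depth (profile values invented for illustration):

```python
# Sketch of the depth-attenuation characterization mentioned above: fit
# C(z) = C0 * exp(-z / z_e) to a synthetic soil-carbon profile and report
# the e-folding depth z_e.
import numpy as np
from scipy.optimize import curve_fit

z = np.array([5, 15, 30, 50, 80, 120])            # depths of layer midpoints (cm)
c = np.array([42.0, 30.1, 18.2, 9.7, 4.1, 1.6])   # organic C (mg/g), invented

decay = lambda z, c0, ze: c0 * np.exp(-z / ze)
(c0, ze), _ = curve_fit(decay, z, c, p0=(40.0, 30.0))
print(f"surface C = {c0:.1f} mg/g, e-folding depth = {ze:.1f} cm")
```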

  8. A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector.

    PubMed

    Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A

    2018-05-18

    This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method.
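
    The attenuation principle behind such a model can be sketched as follows; mu_eff is a hypothetical effective attenuation coefficient for sand, not a value from the paper:

```python
# A minimal sketch of the attenuation idea behind the model: if counts decay
# roughly exponentially with burial depth, depth follows from the count-rate
# ratio. mu_eff is a placeholder value, not taken from the paper.
import numpy as np

mu_eff = 0.12                                # hypothetical effective mu (1/cm)
c0 = 100.0                                   # surface (unburied) rate (cps)
c = 14.0                                     # observed rate (cps)

depth_cm = np.log(c0 / c) / mu_eff
print(f"estimated burial depth: {depth_cm:.1f} cm")
```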

  9. Fault zone characteristics and basin complexity in the southern Salton Trough, California

    USGS Publications Warehouse

    Persaud, Patricia; Ma, Yiran; Stock, Joann M.; Hole, John A.; Fuis, Gary S.; Han, Liang

    2016-01-01

    Ongoing oblique slip at the Pacific–North America plate boundary in the Salton Trough produced the Imperial Valley (California, USA), a seismically active area with deformation distributed across a complex network of exposed and buried faults. To better understand the shallow crustal structure in this region and the connectivity of faults and seismicity lineaments, we used data primarily from the Salton Seismic Imaging Project to construct a three-dimensional P-wave velocity model down to 8 km depth and a velocity profile to 15 km depth, both at 1 km grid spacing. A VP = 5.65–5.85 km/s layer of possibly metamorphosed sediments within, and crystalline basement outside, the valley is locally as thick as 5 km, but is thickest and deepest in fault zones and near seismicity lineaments, suggesting a causative relationship between the low velocities and faulting. Both seismicity lineaments and surface faults control the structural architecture of the western part of the larger wedge-shaped basin, where two deep subbasins are located. We estimate basement depths, and show that high velocities at shallow depths and possible basement highs characterize the geothermal areas.

  10. Advantages of measuring the Q Stokes parameter in addition to the total radiance I in the detection of absorbing aerosols

    NASA Astrophysics Data System (ADS)

    Stamnes, Snorre; Fan, Yongzhen; Chen, Nan; Li, Wei; Tanikawa, Tomonori; Lin, Zhenyi; Liu, Xu; Burton, Sharon; Omar, Ali; Stamnes, Jakob J.; Cairns, Brian; Stamnes, Knut

    2018-05-01

    A simple but novel study was conducted to investigate whether an imager-type spectroradiometer instrument like MODIS, currently flying on board the Aqua and Terra satellites, or MERIS, which flew on board Envisat, could detect absorbing aerosols if it could measure the Q Stokes parameter in addition to the total radiance I, that is, if it could also measure the linear polarization of the light. Accurate radiative transfer calculations were used to train a fast neural network forward model, which together with a simple statistical optimal estimation scheme was used to retrieve three aerosol parameters: aerosol optical depth at 869 nm, the optical depth fraction of fine-mode (absorbing) aerosols at 869 nm, and aerosol vertical location. The aerosols were assumed to be bimodal, each mode with a lognormal size distribution, located either between 0 and 2 km or between 2 and 4 km in the Earth's atmosphere. From simulated data with 3% random Gaussian measurement noise added for each Stokes parameter, it was found that by itself the total radiance I at the nine MODIS VIS channels was generally insufficient to accurately retrieve all three aerosol parameters (~15% to 37% successful), but that together with the Q Stokes component it was possible to retrieve aerosol optical depth at 869 nm to ±0.03, single-scattering albedo at 869 nm to ±0.04, and vertical location in ~65% of the cases. This proof-of-concept retrieval algorithm uses neural networks to overcome the computational burden of using vector radiative transfer to accurately simulate top-of-atmosphere (TOA) total and polarized radiances, enabling optimal estimation techniques to exploit information from multiple channels. Such an algorithm could therefore, in concept, be readily implemented for operational retrieval of aerosol and ocean products from moderate-resolution or hyperspectral spectroradiometers.
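
    A compressed sketch of the retrieval pipeline described, with a toy analytic function standing in for the vector radiative transfer model:

```python
# Proof-of-concept sketch in the spirit described: train a neural network as a
# fast forward model (aerosol parameters -> simulated I and Q), then retrieve
# parameters by minimizing misfit to a noisy observation. toy_rt is a stand-in
# analytic function, not vector radiative transfer.
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

def toy_rt(p):                               # placeholder for vector RT
    aod, fmf, zloc = p
    i = 0.1 + 0.5 * aod + 0.1 * fmf * zloc
    q = 0.02 + 0.2 * aod - 0.15 * fmf + 0.05 * zloc
    return np.array([i, q])

rng = np.random.default_rng(5)
P = rng.uniform([0, 0, 0], [1, 1, 1], (5000, 3))      # training parameters
Y = np.array([toy_rt(p) for p in P])
fwd = MLPRegressor((32, 32), max_iter=500, random_state=0).fit(P, Y)

obs = toy_rt([0.3, 0.6, 0.5]) * (1 + rng.normal(0, 0.03, 2))  # 3% noise
cost = lambda p: np.sum((fwd.predict(p.reshape(1, -1))[0] - obs) ** 2)
res = minimize(cost, x0=[0.5, 0.5, 0.5], bounds=[(0, 1)] * 3)
print("retrieved (aod, fine-mode fraction, height):", res.x.round(2))
```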

  11. Depth-estimation-enabled compound eyes

    NASA Astrophysics Data System (ADS)

    Lee, Woong-Bi; Lee, Heung-No

    2018-04-01

    Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.
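
    The underlying geometry is a parallax relation: for small angles, the angular disparity between two overlapping viewpoints scales as baseline over distance. A simplified stand-in for the COMPU-EYE estimator (made-up numbers), which in the paper operates on overlapping ommatidial receptive fields:

```python
# Generic small-angle parallax relation used by disparity-based depth methods:
# angular disparity ~ baseline / distance. A simplified illustration, not the
# COMPU-EYE algorithm itself.
import numpy as np

baseline_mm = 2.0                            # separation of two ommatidia
disparity_deg = np.array([0.5, 0.2, 0.1])    # measured angular disparities

distance_mm = baseline_mm / np.deg2rad(disparity_deg)
print(distance_mm.round(1))                  # farther objects -> smaller disparity
```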

  12. Potential-scour assessments and estimates of scour depth using different techniques at selected bridge sites in Missouri

    USGS Publications Warehouse

    Huizinga, Richard J.; Rydlund, Jr., Paul H.

    2004-01-01

    The evaluation of scour at bridges throughout the state of Missouri has been ongoing since 1991 in a cooperative effort by the U.S. Geological Survey and the Missouri Department of Transportation. A variety of assessment methods have been used to identify bridges susceptible to scour and to estimate scour depths. A potential-scour assessment (Level 1) was used at 3,082 bridges to identify bridges that might be susceptible to scour. A rapid estimation method (Level 1+) was used to estimate contraction, pier, and abutment scour depths at 1,396 bridge sites to identify bridges that might be scour critical. A detailed hydraulic assessment (Level 2) was used to compute contraction, pier, and abutment scour depths at 398 bridges to determine which bridges are scour critical and would require further monitoring or application of scour countermeasures. The rapid estimation method (Level 1+) was designed to be a conservative estimator of scour depths relative to depths computed by a detailed hydraulic assessment (Level 2). Detailed hydraulic assessments were performed at 316 bridges that had also received a rapid estimation assessment, providing a broad database for comparing the two scour assessment methods. The scour depths computed by each of the two methods were compared for bridges that had similar discharges. For Missouri, the rapid estimation method (Level 1+) did not provide a reasonable conservative estimate of the detailed hydraulic assessment (Level 2) scour depths for contraction scour, but the discrepancy was the result of using different values for variables common to both assessment methods. The rapid estimation method (Level 1+) was a reasonable conservative estimator of the detailed hydraulic assessment (Level 2) scour depths for pier scour if the pier width is used for piers without footing exposure and the footing width is used for piers with footing exposure. Detailed hydraulic assessment (Level 2) scour depths were conservatively estimated by the rapid estimation method (Level 1+) for abutment scour, but there was substantial variability in the estimates and several substantial underestimations.

  13. Fusion of Kinect depth data with trifocal disparity estimation for near real-time high quality depth maps generation

    NASA Astrophysics Data System (ADS)

    Boisson, Guillaume; Kerbiriou, Paul; Drazic, Valter; Bureller, Olivier; Sabater, Neus; Schubert, Arno

    2014-03-01

    Generating depth maps along with video streams is valuable for cinema and television production. Thanks to improvements in depth acquisition systems, the challenge of fusing depth sensing and disparity estimation is widely investigated in computer vision. This paper presents a new framework for generating depth maps from a rig made of a professional camera with two satellite cameras and a Kinect device. A new disparity-based calibration method is proposed so that registered Kinect depth samples become perfectly consistent with disparities estimated between rectified views. Also, a new hierarchical fusion approach is proposed for combining, on the fly, depth sensing and disparity estimation in order to circumvent their respective weaknesses. Depth is determined by minimizing a global energy criterion that takes into account the matching reliability and the consistency with the Kinect input. The depth maps thus generated are relevant both in uniform and textured areas, without holes due to occlusions or structured-light shadows. Our GPU implementation reaches 20 fps for generating quarter-pel accurate HD720p depth maps along with the main view, which is close to real-time performance for video applications. The estimated depth is of high quality and suitable for 3D reconstruction or virtual view synthesis.
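
    A per-pixel sketch of the fusion criterion described, with synthetic costs and an invented weight lambda:

```python
# Per-pixel sketch of the fusion criterion described: choose the depth
# hypothesis minimizing a stereo matching cost plus a consistency term with
# the registered Kinect sample. Costs are synthetic; lam is a tuning weight,
# not a value from the paper.
import numpy as np

depth_hyp = np.linspace(0.5, 5.0, 64)        # candidate depths (m)
match_cost = np.random.default_rng(6).uniform(0, 1, 64)   # stand-in stereo cost
match_cost[40] = 0.05                        # pretend disparity prefers ~3.3 m
d_kinect, lam = 3.2, 0.5

energy = match_cost + lam * (depth_hyp - d_kinect) ** 2
print("fused depth: %.2f m" % depth_hyp[np.argmin(energy)])
```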

  14. Development of a robust analytical framework for assessing landbird trends, dynamics and relationships with environmental covariates in the North Coast and Cascades Network

    USGS Publications Warehouse

    Ray, Chris; Saracco, James; Jenkins, Kurt J.; Huff, Mark; Happe, Patricia J.; Ransom, Jason I.

    2017-01-01

    During 2015-2016, we completed development of a new analytical framework for landbird population monitoring data from the National Park Service (NPS) North Coast and Cascades Inventory and Monitoring Network (NCCN). This new tool for analysis combines several recent advances in modeling population status and trends using point-count data and is designed to supersede the approach previously slated for analysis of trends in the NCCN and other networks, including the Sierra Nevada Network (SIEN). Advances supported by the new model-based approach include 1) the use of combined data on distance and time of detection to estimate detection probability without assuming perfect detection at zero distance, 2) seamless accommodation of variation in sampling effort and missing data, and 3) straightforward estimation of the effects of downscaled climate and other local habitat characteristics on spatial and temporal trends in landbird populations. No changes in the current field protocol are necessary to facilitate the new analyses. We applied several versions of the new model to data from each of 39 species recorded in the three mountain parks of the NCCN, estimating trends and climate relationships for each species during 2005-2014. Our methods and results are also reported in a manuscript in revision for the journal Ecosphere (hereafter, Ray et al.). Here, we summarize the methods and results outlined in depth by Ray et al., discuss benefits of the new analytical framework, and provide recommendations for its application to synthetic analyses of long-term data from the NCCN and SIEN. All code necessary for implementing the new analyses is provided within the Appendices to this report, in the form of fully annotated scripts written in the open-access programming languages R and JAGS.

  15. The P wavespeed structure in the mantle to 800 km depth below the Philippines region: geodynamic implications

    NASA Astrophysics Data System (ADS)

    Wright, C.

    2009-03-01

    P waves from earthquakes south of Taiwan, recorded by the BATS seismic array and the CWB seismic network, were used to define the P wavespeed structure between depths of 100 and 800 km below the Philippines region. The presence of a low-wavespeed zone in the upper mantle is inferred, although the details are unclear. Wavespeeds in the uppermost mantle are low, as expected for seismic energy propagating within an oceanic plate. The estimated depths of the 410- and 660-km discontinuities are 325 and 676 km, respectively. The unusually shallow depth of the upper discontinuity below and to the east of Luzon is inferred by clearly resolving the travel-time branch produced by refraction through the transition zone. A possible explanation for the northern part of the region covered is that seismic energy reaches its maximum depth within or close to the cool, subducted oceanic South China Sea slab, where subduction has been slow and relatively recent. Further south, however, the presence of a broken remnant of the South China Sea slab, formed during a period of shallower subduction, is suggested at depths below 300 km to explain the broad extent of the elevated 410-km discontinuity. The 660-km discontinuity is slightly deeper than usual, implying that low temperatures persist to lower-mantle depths. The wavespeed gradients within the transition zone between depths of 450 and 610 km are higher than those predicted by both the pyrolite and piclogite models of the mantle, possibly due to the presence of water in the transition zone.

  16. Mass balances of dissolved gases at river network scales across biomes.

    NASA Astrophysics Data System (ADS)

    Wollheim, W. M.; Stewart, R. J.; Sheehan, K.

    2016-12-01

    Estimating aquatic metabolism and gas fluxes at broad spatial scales is needed to evaluate the role of aquatic ecosystems in continental carbon cycles. We applied a river network model, FrAMES, to quantify the mass balances of dissolved oxygen at river network scales across five river networks in different biomes. The model accounts for hydrology; spatially varying re-aeration rates due to flow, slope, and water temperature; gas inputs via terrestrial runoff; variation in light due to canopy cover and water depth; benthic gross primary production; and benthic respiration. The model was parameterized using existing groundwater information and empirical relationships for GPP, R, and re-aeration, and was tested using dissolved oxygen patterns measured throughout river networks. We found that during summers, internal aquatic production dominates the river network mass balance of Kings Creek, Konza Prairie, KS (16.3 km²), whereas terrestrial inputs and aeration dominate the network mass balance at Coweeta Creek, Coweeta Forest, NC (15.7 km²). At network scales, both river networks are net heterotrophic, with Coweeta more so than Kings Creek (P:R 0.6 vs. 0.7, respectively). The Kings Creek river network showed higher network-scale GPP and R than Coweeta, despite having a lower drainage density, because its streams are on average wider, so cumulative benthic surface areas are similar. Our findings suggest that the role of aquatic systems in watershed carbon balances will depend on interactions of drainage density, channel hydraulics, terrestrial vegetation, and biological activity.
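
    A toy single-reach version of the oxygen balance terms the model accounts for (all rates invented; FrAMES itself resolves these across whole networks):

```python
# Toy reach-scale version of the oxygen balance FrAMES accounts for: benthic
# GPP, respiration R, and re-aeration toward saturation, stepped forward in
# time. Rates and constants are invented for illustration.
import numpy as np

dt = 0.1                                     # time step (h)
hours = np.arange(0, 24, dt)                 # one day
o2_sat, k = 9.0, 0.3                         # saturation (mg/L), re-aeration (1/h)
gpp = lambda t: 0.8 * np.clip(np.sin((t - 6) / 12 * np.pi), 0, None)  # mg/L/h
resp = 0.35                                  # respiration (mg/L/h)

o2 = np.empty(hours.size)
o2[0] = 8.0
for i in range(1, hours.size):
    t = hours[i - 1]
    do2 = gpp(t) - resp + k * (o2_sat - o2[i - 1])
    o2[i] = o2[i - 1] + dt * do2             # explicit Euler step
print("daily O2 range: %.2f-%.2f mg/L" % (o2.min(), o2.max()))
```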

  17. Recommendations for a wind profiling network to support Space Shuttle launches

    NASA Technical Reports Server (NTRS)

    Zamora, R. J.

    1992-01-01

    The feasibility is examined of a network of clear-air radar wind profilers to forecast wind conditions before Space Shuttle launches during winter. Currently, winds are measured only in the vicinity of the shuttle launch site, and wind loads on the launch vehicle are estimated using these measurements. Wind conditions upstream of the Cape are not monitored. Since large changes in the wind shear profile can be associated with weather systems moving over the Cape, it may be possible to improve wind forecasts over the launch site if wind measurements are made upstream. A radar wind profiling system is in use at the Space Shuttle launch site. This system can monitor the wind profile continuously. The existing profiler could be combined with a number of radars located upstream of the launch site; thus, continuous wind measurements would be available upstream and at the Cape. NASA-Marshall representatives have set the requirements for the radar wind profiling network. The minimum vertical resolution of the network must be set so that wind shears over depths greater than or equal to 1 km will be detected. The network should allow scientists and engineers to predict the wind profile over the Cape 6 hours before a Space Shuttle launch.

  18. Reconstruction of sub-surface archaeological remains from magnetic data using neural computing.

    NASA Astrophysics Data System (ADS)

    Bescoby, D. J.; Cawley, G. C.; Chroston, P. N.

    2003-04-01

    The remains of a former Roman colonial settlement, once part of the classical city of Butrint in southern Albania have been the subject of a high resolution magnetic survey using a caesium-vapour magnetometer. The survey revealed the surviving remains of an extensive planned settlement and a number of outlying buildings, today buried beneath over 0.5 m of alluvial deposits. The aim of the current research is to derive a sub-surface model from the magnetic survey measurements, allowing an enhanced archaeological interpretation of the data. Neural computing techniques are used to perform the non-linear mapping between magnetic data and corresponding sub-surface model parameters. The adoption of neural computing paradigms potentially holds several advantages over other modelling techniques, allowing fast solutions for complex data, while having a high tolerance to noise. A multi-layer perceptron network with a feed-forward architecture is trained to estimate the shape and burial depth of wall foundations using a series of representative models as training data. Parameters used to forward model the training data sets are derived from a number of trial trench excavations targeted over features identified by the magnetic survey. The training of the network was optimized by first applying it to synthetic test data of known source parameters. Pre-processing of the network input data, including the use of a rotationally invariant transform, enhanced network performance and the efficiency of the training data. The approach provides good results when applied to real magnetic data, accurately predicting the depths and layout of wall foundations within the former settlement, verified by subsequent excavation. The resulting sub-surface model is derived from the averaged outputs of a ‘committee’ of five networks, trained with individualized training sets. Fuzzy logic inference has also been used to combine individual network outputs through correlation with data from a second geophysical technique, allowing the integration of data from a separate series of measurements.
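
    A minimal sketch of the train-on-forward-models strategy described, with a crude line-source response standing in for the real magnetic forward model:

```python
# Hedged sketch of the inversion strategy described: forward-model anomaly
# profiles for sources at known depths (a crude line-source approximation
# stands in for the real forward model), then train an MLP to map anomaly
# profiles back to burial depth.
import numpy as np
from sklearn.neural_network import MLPRegressor

x = np.linspace(-5, 5, 41)                   # profile coordinates (m)

def anomaly(depth, moment=1.0):              # simplified line-source response
    return moment * depth / (x ** 2 + depth ** 2) ** 1.5

rng = np.random.default_rng(7)
depths = rng.uniform(0.3, 2.0, 2000)
X = np.array([anomaly(d) + rng.normal(0, 0.005, x.size) for d in depths])

net = MLPRegressor((32,), max_iter=1000, random_state=0).fit(X, depths)
test = anomaly(0.8)
print("true 0.80 m, predicted %.2f m" % net.predict(test.reshape(1, -1))[0])
```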

  19. American River Hydrologic Observatory

    NASA Astrophysics Data System (ADS)

    Glaser, S. D.; Bales, R. C.; Conklin, M. H.

    2016-12-01

    We have set up fourteen large wireless sensor networks to measure hydrologic parameters over physiographically representative regions of the snow-dominated portion of the river basin; this is perhaps the largest wireless sensor network in the world. Each network covers about a 1 km² area and consists of about 45 elements. We measure snow depth, temperature, humidity, soil moisture and temperature, and solar radiation in real time at ten locations per site, as opposed to the traditional once-a-month snow course. As part of the multi-PI SSCZO, we have installed a 62-node wireless sensor network measuring the same variables in real time; this network has been operating for approximately six years. We are now installing four large wireless sensor networks of the same design in the East Branch of the North Fork of the Feather River, CA. The presentation will discuss the planning and operation of the networks as well as some unique results. It will also present information about the networking hardware designed for these installations, which has resulted in a start-up, Metronome Systems.

  20. Estimating wetland methane emissions from the northern high latitudes from 1990 to 2009 using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Zhu, Xudong; Zhuang, Qianlai; Qin, Zhangcai; Glagolev, Mikhail; Song, Lulu

    2013-04-01

    Methane (CH4) emissions from wetland ecosystems in the northern high latitudes provide a potentially positive feedback to global climate warming. Large uncertainties remain in estimating wetland CH4 emissions at regional scales. Here we develop a statistical model of CH4 emissions using an artificial neural network (ANN) approach and field observations of CH4 fluxes. Six explanatory variables (air temperature, precipitation, water table depth, soil organic carbon, soil total porosity, and soil pH) are included in the development of the ANN models, which are then extrapolated to the northern high latitudes to estimate monthly CH4 emissions from 1990 to 2009. We estimate that the annual wetland CH4 source from the northern high latitudes (north of 45°N) is 48.7 Tg CH4 yr⁻¹ (1 Tg = 10¹² g), with an uncertainty range of 44.0-53.7 Tg CH4 yr⁻¹. The estimated wetland CH4 emissions show a large spatial variability over the northern high latitudes, due to variations in hydrology, climate, and soil conditions. Significant interannual and seasonal variations of wetland CH4 emissions exist over the past two decades, and the emissions in this period are most sensitive to variations in water table position. To improve future assessments of wetland CH4 dynamics in this region, research priorities should be directed to better characterizing the hydrological processes of wetlands, including the temporal dynamics of water table position and the spatial dynamics of wetland areas.
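
    A hedged sketch of the ANN regression with the six drivers listed, using an ensemble of networks to bracket the estimate (all data synthetic):

```python
# Sketch of the ANN approach described: regress CH4 flux on the six drivers
# and use an ensemble of differently-initialized networks to bracket the
# estimate, loosely mirroring the reported uncertainty range. Synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
# columns: air T, precipitation, water-table depth, SOC, porosity, pH
X = rng.normal(size=(400, 6))
flux = 2.0 + 1.5 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(0, 0.3, 400)

ensemble = [MLPRegressor((16,), max_iter=2000, random_state=s).fit(X, flux)
            for s in range(5)]
site = rng.normal(size=(1, 6))
est = np.array([m.predict(site)[0] for m in ensemble])
print("CH4 flux estimate %.2f (range %.2f-%.2f)" %
      (est.mean(), est.min(), est.max()))
```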

  1. Neural Models: An Option to Estimate Seismic Parameters of Accelerograms

    NASA Astrophysics Data System (ADS)

    Alcántara, L.; García, S.; Ovando-Shelley, E.; Macías, M. A.

    2014-12-01

    Seismic instrumentation for recording strong earthquakes in Mexico goes back to the 1960s, due to the activities carried out by the Institute of Engineering at Universidad Nacional Autónoma de México. However, it was after the large earthquake of September 19, 1985 (M=8.1) that the project of seismic instrumentation assumed great importance. Currently, strong ground motion networks have been installed for monitoring seismic activity mainly along the Mexican subduction zone and in Mexico City. Nevertheless, there are other major regions and cities that can be affected by strong earthquakes and have not yet begun a seismic instrumentation program, or where such a program is still in development. Because of this situation, some relevant earthquakes (e.g. Huajuapan de León, Oct 24, 1980, M=7.1; Tehuacán, Jun 15, 1999, M=7; and Puerto Escondido, Sep 30, 1999, M=7.5) were not properly recorded in cities such as Puebla and Oaxaca, which were damaged during those earthquakes. Fortunately, the good maintenance work carried out on the seismic network has permitted the recording of an important number of small events in those cities. In this research we therefore present a methodology based on the use of neural networks to estimate significant duration and, in some cases, the response spectra for those seismic events. The neural model developed predicts significant duration in terms of magnitude, epicentral distance, focal depth, and soil characterization. Additionally, for response spectra we used a vector of spectral accelerations. For training the model we selected a set of accelerogram records obtained from the small events recorded by the strong motion instruments installed in the cities of Puebla and Oaxaca. The final results show that neural networks, as a soft computing tool using a multi-layer feed-forward architecture, provide good estimations of the target parameters and also have a good predictive capacity for strong ground motion duration and response spectra.

  2. Depth interval estimates from motion parallax and binocular disparity beyond interaction space.

    PubMed

    Gillam, Barbara; Palmisano, Stephen A; Govan, Donovan G

    2011-01-01

    Static and dynamic observers provided binocular and monocular estimates of the depths between real objects lying well beyond interaction space. On each trial, pairs of LEDs were presented inside a dark railway tunnel. The nearest LED was always 40 m from the observer, with the depth separation between LED pairs ranging from 0 up to 248 m. Dynamic binocular viewing was found to produce the greatest (ie most veridical) estimates of depth magnitude, followed next by static binocular viewing, and then by dynamic monocular viewing. (No significant depth was seen with static monocular viewing.) We found evidence that both binocular and monocular dynamic estimates of depth were scaled for the observation distance when the ground plane and walls of the tunnel were visible up to the nearest LED. We conclude that both motion parallax and stereopsis provide useful long-distance depth information and that motion-parallax information can enhance the degree of stereoscopic depth seen.

  3. A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector

    PubMed Central

    2018-01-01

    This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method. PMID:29783644

  4. Estimating Marine Aerosol Particle Volume and Number from Maritime Aerosol Network Data

    NASA Technical Reports Server (NTRS)

    Sayer, A. M.; Smirnov, A.; Hsu, N. C.; Munchak, L. A.; Holben, B. N.

    2012-01-01

    As well as spectral aerosol optical depth (AOD), aerosol composition and concentration (number, volume, or mass) are of interest for a variety of applications. However, remote sensing of these quantities is more difficult than for AOD, as it is more sensitive to assumptions relating to aerosol composition. This study uses spectral AOD measured on Maritime Aerosol Network (MAN) cruises, with the additional constraint of a microphysical model for unpolluted maritime aerosol based on analysis of Aerosol Robotic Network (AERONET) inversions, to estimate these quantities over open ocean. When the MAN data are subset to those likely to be comprised of maritime aerosol, number and volume concentrations obtained are physically reasonable. Attempts to estimate surface concentration from columnar abundance, however, are shown to be limited by uncertainties in vertical distribution. Columnar AOD at 550 nm and aerosol number for unpolluted maritime cases are also compared with Moderate Resolution Imaging Spectroradiometer (MODIS) data, for both the present Collection 5.1 and forthcoming Collection 6. MODIS provides a best-fitting retrieval solution, as well as the average for several different solutions, with different aerosol microphysical models. The average solution MODIS dataset agrees more closely with MAN than the best solution dataset. Terra tends to retrieve lower aerosol number than MAN, and Aqua higher, linked with differences in the aerosol models commonly chosen. Collection 6 AOD is likely to agree more closely with MAN over open ocean than Collection 5.1. In situations where spectral AOD is measured accurately, and aerosol microphysical properties are reasonably well-constrained, estimates of aerosol number and volume using MAN or similar data would provide for a greater variety of potential comparisons with aerosol properties derived from satellite or chemistry transport model data.

  5. Environmental Public Health Surveillance for Exposure to Respiratory Health Hazards: A Joint NASA/CDC Project to Use Remote Sensing Data for Estimating Airborne Particulate Matter Over the Atlanta, Georgia Metropolitan Area

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Al-Hamdan, Mohammad; Estes, Maurice; Crosson, William

    2007-01-01

    As part of the National Environmental Public Health Tracking Network (EPHTN), the National Center for Environmental Health (NCEH) at the Centers for Disease Control and Prevention (CDC) is leading a project called Health and Environment Linked for Information Exchange (HELiX-Atlanta). The goal of developing the National Environmental Public Health Tracking Network is to improve the health of communities. Currently, few systems exist at the state or national level to concurrently track many of the exposures and health effects that might be associated with environmental hazards. An additional challenge is estimating exposure to environmental hazards such as particulate matter whose aerodynamic diameter is less than or equal to 2.5 micrometers (PM2.5). HELiX-Atlanta's goal is to examine the feasibility of building an integrated electronic health and environmental data network in five counties of metropolitan Atlanta, GA. NASA Marshall Space Flight Center (NASA/MSFC) is collaborating with CDC to combine NASA earth science satellite observations related to air quality with environmental monitoring data to model surface estimates of PM2.5 concentrations that can be linked with clinic visits for asthma. While use of the Air Quality System (AQS) PM2.5 data alone could meet HELiX-Atlanta specifications, there are only five AQS sites in the Atlanta area, so the spatial coverage is not ideal. We are using NASA Moderate Resolution Imaging Spectroradiometer (MODIS) satellite Aerosol Optical Depth (AOD) data to estimate daily ground-level PM2.5 at 10 km resolution over the metropolitan Atlanta area, supplementing the AQS ground observations and filling their spatial and temporal gaps.

  6. Social networks and mental health in post-conflict Mitrovica, Kosova.

    PubMed

    Nakayama, Risa; Koyanagi, Ai; Stickley, Andrew; Kondo, Tetsuo; Gilmour, Stuart; Arenliu, Aliriza; Shibuya, Kenji

    2014-11-17

    This study investigated the relation between social networks and mental health in the post-conflict municipality of Mitrovica, Kosovo. Using a three-stage stratified sampling method, 1239 respondents aged 16 years or above were recruited in the Greater Mitrovica region. Social network depth was measured by the frequency of contacts with friends, relatives, and strangers. Depression and anxiety were measured using the Hospital Anxiety and Depression Scale (HADS). Multivariate logistic regression was used to examine the association between social network depth and mental health. The analytical sample consisted of 993 respondents. The prevalences of depression (54.3%) and anxiety (64.4%) were extremely high. In the multiple regression analysis, a lower depth of social network (contact with friends) was associated with higher levels of both depression and anxiety. This study has shown that only one variety of social network, contact with friends, was important for mental health outcomes in a population living in an area heavily affected by conflict. This suggests that the relation between social networks and mental health may be complex, in that the effects of different forms of social network on mental health are not uniform and may depend on how social networks are operationalised and the particular context in which the relationship is examined.

  7. Validation of new satellite aerosol optical depth retrieval algorithm using Raman lidar observations at radiative transfer laboratory in Warsaw

    NASA Astrophysics Data System (ADS)

    Zawadzka, Olga; Stachlewska, Iwona S.; Markowicz, Krzysztof M.; Nemuc, Anca; Stebel, Kerstin

    2018-04-01

    During an exceptionally warm September 2016, unique, stable weather conditions over Poland allowed extensive testing of a new algorithm developed to improve the Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI) aerosol optical depth (AOD) retrieval. The development was conducted in the frame of the ESA-ESRIN SAMIRA project. The new AOD algorithm aims to provide aerosol optical depth maps over the territory of Poland with a high temporal resolution of 15 minutes. It was tested on a data set obtained between 11 and 16 September 2016, during which a day of relatively clean atmospheric background, related to an Arctic air-mass inflow, was surrounded by several days with a well-increased aerosol load of different origins. On the clean reference day, the AOD forecast available online via the Copernicus Atmosphere Monitoring Service (CAMS) was used to estimate surface reflectance. The obtained AOD maps were validated against AODs available within the Poland-AOD and AERONET networks, and against AOD values obtained from the PollyXT-UW lidar of the University of Warsaw (UW).

  8. SPARTAN: a global network to evaluate and enhance satellite-based estimates of ground-level particulate matter for global health applications

    NASA Astrophysics Data System (ADS)

    Snider, G.; Weagle, C. L.; Martin, R. V.; van Donkelaar, A.; Conrad, K.; Cunningham, D.; Gordon, C.; Zwicker, M.; Akoshile, C.; Artaxo, P.; Anh, N. X.; Brook, J.; Dong, J.; Garland, R. M.; Greenwald, R.; Griffith, D.; He, K.; Holben, B. N.; Kahn, R.; Koren, I.; Lagrosas, N.; Lestari, P.; Ma, Z.; Vanderlei Martins, J.; Quel, E. J.; Rudich, Y.; Salam, A.; Tripathi, S. N.; Yu, C.; Zhang, Q.; Zhang, Y.; Brauer, M.; Cohen, A.; Gibson, M. D.; Liu, Y.

    2015-01-01

    Ground-based observations have insufficient spatial coverage to assess long-term human exposure to fine particulate matter (PM2.5) at the global scale. Satellite remote sensing offers a promising approach to provide information on both short- and long-term exposure to PM2.5 at local-to-global scales, but there are limitations and outstanding questions about the accuracy and precision with which ground-level aerosol mass concentrations can be inferred from satellite remote sensing alone. A key source of uncertainty is the global distribution of the relationship between annual average PM2.5 and discontinuous satellite observations of columnar aerosol optical depth (AOD). We have initiated a global network of ground-level monitoring stations designed to evaluate and enhance satellite remote sensing estimates for application in health-effects research and risk assessment. This Surface PARTiculate mAtter Network (SPARTAN) includes a global federation of ground-level monitors of hourly PM2.5 situated primarily in highly populated regions and collocated with existing ground-based sun photometers that measure AOD. The instruments, a three-wavelength nephelometer and impaction filter sampler for both PM2.5 and PM10, are highly autonomous. Hourly PM2.5 concentrations are inferred from the combination of weighed filters and nephelometer data. Data from existing networks were used to develop and evaluate network sampling characteristics. SPARTAN filters are analyzed for mass, black carbon, water-soluble ions, and metals. These measurements provide, in a variety of regions around the world, the key data required to evaluate and enhance satellite-based PM2.5 estimates used for assessing the health effects of aerosols. Mean PM2.5 concentrations across sites vary by more than 1 order of magnitude. Our initial measurements indicate that the ratio of AOD to ground-level PM2.5 is driven temporally and spatially by the vertical profile in aerosol scattering. Spatially this ratio is also strongly influenced by the mass scattering efficiency.

  9. Estimating soil temperature using neighboring station data via multi-nonlinear regression and artificial neural network models.

    PubMed

    Bilgili, Mehmet; Sahin, Besir; Sangun, Levent

    2013-01-01

    The aim of this study is to estimate the soil temperatures of a target station using only the soil temperatures of neighboring stations without any consideration of the other variables or parameters related to soil properties. For this aim, the soil temperatures were measured at depths of 5, 10, 20, 50, and 100 cm below the earth surface at eight measuring stations in Turkey. Firstly, the multiple nonlinear regression analysis was performed with the "Enter" method to determine the relationship between the values of target station and neighboring stations. Then, the stepwise regression analysis was applied to determine the best independent variables. Finally, an artificial neural network (ANN) model was developed to estimate the soil temperature of a target station. According to the derived results for the training data set, the mean absolute percentage error and correlation coefficient ranged from 1.45% to 3.11% and from 0.9979 to 0.9986, respectively, while corresponding ranges of 1.685-3.65% and 0.9988-0.9991, respectively, were obtained based on the testing data set. The obtained results show that the developed ANN model provides a simple and accurate prediction to determine the soil temperature. In addition, the missing data at the target station could be determined within a high degree of accuracy.
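
    The multiple nonlinear regression step can be sketched with a quadratic fit of the target station on one neighbor (synthetic daily series; the study used several neighboring stations and depths):

```python
# Minimal sketch of the regression part of the approach: predict the target
# station's soil temperature from a neighboring station with a low-order
# polynomial (multiple nonlinear regression). Series are synthetic.
import numpy as np

rng = np.random.default_rng(9)
neighbor = 15 + 10 * np.sin(np.linspace(0, 2 * np.pi, 365)) \
           + rng.normal(0, 0.5, 365)                 # neighbor station (deg C)
target = 0.9 * neighbor + 0.002 * neighbor ** 2 + 1.5 \
         + rng.normal(0, 0.5, 365)                   # target station (deg C)

coef = np.polyfit(neighbor, target, deg=2)           # quadratic fit
pred = np.polyval(coef, neighbor)
mape = np.mean(np.abs((target - pred) / target)) * 100
print("MAPE: %.2f%%" % mape)
```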

  10. Exploring Thermal Shear Runaway as a triggering process for Intermediate-Depth Earthquakes: Overview of the Northern Chilean seismic nest.

    NASA Astrophysics Data System (ADS)

    Derode, B.; Riquelme, S.; Ruiz, J. A.; Leyton, F.; Campos, J. A.; Delouis, B.

    2014-12-01

    Intermediate-depth earthquakes of high moment magnitude (Mw ≥ 8) in Chile have had a relatively greater impact in terms of damage, injuries, and deaths than thrust-type earthquakes of similar magnitude (e.g. 1939, 1950, 1965, 1997, 2003, and 2005). Some of them have been studied in detail, showing a paucity of aftershocks, down-dip tensional focal mechanisms, high stress drop, and subhorizontal rupture. At present, their physical mechanism remains unclear, because ambient temperatures and pressures are expected to lead to ductile, rather than brittle, deformation. We examine source characteristics of more than 100 intraslab intermediate-depth earthquakes using local and regional waveform data obtained from broadband and accelerometer stations of the IPOC network in northern Chile. With this high-quality database, we estimated the total radiated energy from the energy flux carried by P and S waves, integrating this flux in time and space, and evaluated the seismic moment directly from both spectral amplitudes and near-field waveform inversion. We estimated the three parameters Ea, τa, and M0 because their estimates entail no model dependence. Interestingly, the seismic nest studied, using near-field re-location and only data from stations close to the source (D < 250 km), appears not to be homogeneous in depth, displaying unusual seismic gaps along the Wadati-Benioff zone. Moreover, as confirmed by other studies of intermediate-depth earthquakes in subduction zones, very high stress drops (>> 10 MPa) and low radiation efficiencies were found in this seismic nest. These unusual parameter values can be interpreted as the expression of the loss of a large fraction of the emitted energy to heating processes during rupture. Although it remains difficult to draw firm conclusions about the processes of seismic nucleation, we present results that seem to support thermal weakening of the fault zones and the existence of thermal stress processes, such as thermal shear runaway, as a preferred mechanism for triggering intermediate-depth earthquakes. Although this study is not exhaustive, the data presented here point to the need for new systematic near-field studies to obtain robust conclusions and constrain more accurately the physics of the rupture mechanisms of these intermediate-depth seismic events.
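
    The radiated-energy estimate rests on integrating the squared ground-velocity flux; a minimal far-field sketch (placeholder values, no radiation-pattern or attenuation corrections):

```python
# Sketch of the energy-flux estimate described: radiated energy scales with
# the integral of squared ground velocity,
#   E ~ 4*pi*rho*c*r^2 * integral(u_dot^2 dt),
# a standard far-field approximation. All values are placeholders, and no
# radiation-pattern or attenuation corrections are applied.
import numpy as np

dt = 0.01                                    # sample interval (s)
t = np.arange(0, 20, dt)
u_dot = 1e-4 * np.exp(-t / 3) * np.sin(2 * np.pi * 2 * t)  # toy S velocity (m/s)

rho, c, r = 3300.0, 4500.0, 150e3            # density, S speed, distance (SI)
flux = rho * c * np.trapz(u_dot ** 2, dx=dt) # energy flux at receiver (J/m^2)
energy = 4 * np.pi * r ** 2 * flux           # spherical spreading only
print(f"radiated energy ~ {energy:.2e} J")
```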

  11. Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Gul, M. Shahzeb Khan; Gunturk, Bahadir K.

    2018-05-01

    Light field imaging extends the traditional photography by capturing both spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capture light field. A major drawback of MLA based light field cameras is low spatial resolution, which is due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning based light field enhancement approach. Both spatial and angular resolution of captured light field is enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
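
    A minimal SRCNN-style sketch of the learning-based enhancement described (our simplification in PyTorch, not the authors' architecture):

```python
# Minimal SRCNN-style sketch of learning-based light-field enhancement (a
# simplification, not the authors' network): a small CNN maps upsampled
# sub-aperture views to residual-corrected higher-resolution views.
import torch
import torch.nn as nn

class TinySR(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 9, padding=4), nn.ReLU(),
            nn.Conv2d(32, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 1, 5, padding=2))

    def forward(self, x):                    # predict residual detail
        return x + self.body(x)

net = TinySR()
views = torch.rand(4, 1, 64, 64)             # upsampled light-field views
print(net(views).shape)                      # torch.Size([4, 1, 64, 64])
```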

  12. Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks.

    PubMed

    Gul, M Shahzeb Khan; Gunturk, Bahadir K

    2018-05-01

    Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA-based light field cameras is low spatial resolution, due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.

  13. Super-resolution photon-efficient imaging by nanometric double-helix point spread function localization of emitters (SPINDLE)

    PubMed Central

    Grover, Ginni; DeLuca, Keith; Quirin, Sean; DeLuca, Jennifer; Piestun, Rafael

    2012-01-01

    Super-resolution imaging with photo-activatable or photo-switchable probes is a promising tool in biological applications to reveal previously unresolved intra-cellular details with visible light. This field benefits from developments in the areas of molecular probes, optical systems, and computational post-processing of the data. The joint design of optics and reconstruction processes using double-helix point spread functions (DH-PSF) provides high resolution three-dimensional (3D) imaging over a long depth-of-field. We demonstrate for the first time a method integrating a Fisher information efficient DH-PSF design, a surface relief optical phase mask, and an optimal 3D localization estimator. 3D super-resolution imaging using photo-switchable dyes reveals the 3D microtubule network in mammalian cells with localization precision approaching the information theoretical limit over a depth of 1.2 µm. PMID:23187521

  14. Upper mantle Q and thermal structure beneath Tanzania, East Africa from teleseismic P wave spectra

    NASA Astrophysics Data System (ADS)

    Venkataraman, Anupama; Nyblade, Andrew A.; Ritsema, Jeroen

    2004-08-01

    We measure P wave spectral amplitude ratios from deep-focus earthquakes recorded at broadband seismic stations of the Tanzania network to estimate regional variation of sublithospheric mantle attenuation beneath the Tanzania craton and the eastern branch of the East African Rift. One-dimensional profiles of QP adequately explain the systematic variation of P wave attenuation in the sublithospheric upper mantle: QP ~ 175 beneath the cratonic lithosphere, while it is ~ 80 beneath the rifted lithosphere. By combining the QP values and a model of P wave velocity perturbations, we estimate that the temperature beneath the rifted lithosphere (100-400 km depth) is 140-280 K higher than ambient mantle temperatures, consistent with the observation that the 410 km discontinuity in this region is depressed by 30-40 km.
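
    The attenuation measurement behind such QP estimates is the spectral-ratio method: the log ratio of two spectra decays linearly with frequency with slope -πΔt*, where t* = ∫dt/Q along the ray. A minimal sketch (ours, assuming a common source spectrum for both records):

    ```python
    import numpy as np

    def delta_tstar(freqs, spec1, spec2):
        """Differential attenuation t1* - t2* from the slope of the
        log spectral ratio: ln(A1/A2) = const - pi * f * (t1* - t2*)."""
        slope, _intercept = np.polyfit(freqs, np.log(spec1 / spec2), 1)
        return -slope / np.pi
    ```

    Given Δt* and ray-path travel times through an assumed velocity model, a 1-D QP profile like the one above can then be fit.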

  15. Use of gene-expression programming to estimate Manning’s roughness coefficient for high gradient streams

    USGS Publications Warehouse

    Azamathulla, H. Md.; Jarrett, Robert D.

    2013-01-01

    Manning’s roughness coefficient (n) has been widely used in the estimation of flood discharges or depths of flow in natural channels. The selection of appropriate Manning’s n values is therefore of paramount importance for hydraulic engineers and hydrologists and requires considerable experience, although extensive guidelines are available. Generally, the largest source of error in post-flood estimates (termed indirect measurements) is the estimate of Manning’s n, particularly when there has been minimal field verification of flow resistance. This emphasizes the need for improved methods of estimating n values. The objective of this study was to develop a soft-computing model for estimating Manning’s n using 75 discharge measurements on 21 high-gradient streams in Colorado, USA. The data are from high-gradient (S > 0.002 m/m), cobble- and boulder-bed streams for within-bank flows. This study presents Gene-Expression Programming (GEP), an extension of Genetic Programming (GP), as an improved approach to estimating Manning’s roughness coefficient for high-gradient streams. GEP is a search technique that automatically simplifies genetic programs during an evolutionary process to obtain the most robust computer program (e.g., simplifying mathematical expressions, decision trees, polynomial constructs, and logical expressions). Field measurements collected by Jarrett (J Hydraulic Eng ASCE 110:1519–1539, 1984) were used to train the GEP network and evolve programs, and the developed network and evolved programs were validated using observations not involved in training. The GEP and ANN-RBF (artificial neural network-radial basis function) models were found to be substantially more effective (R2 for testing/validation of 0.745 and 0.65, respectively) than Jarrett’s (1984) equation (R2 for testing/validation of 0.58) in predicting Manning’s n.
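
    For context, the quantity being predicted comes from Manning's equation, V = (1/n) R^(2/3) S^(1/2) in SI units, so a measured velocity, hydraulic radius, and slope back-calculate n directly (a standard relation, shown here as a small helper, not the GEP model itself):

    ```python
    def mannings_n(velocity, hyd_radius, slope):
        """Back-calculate Manning's n from a discharge measurement.
        velocity in m/s, hydraulic radius in m, slope in m/m (SI form)."""
        return hyd_radius ** (2.0 / 3.0) * slope ** 0.5 / velocity
    ```

    The GEP and ANN-RBF models in the study learn to predict this n from channel characteristics instead of requiring a measured velocity.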

  16. Genetic attack on neural cryptography.

    PubMed

    Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido

    2006-03-01

    Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.
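
    The networks in question are tree parity machines; a minimal sketch of the standard construction (our illustration, not the authors' simulation code) makes the roles of the synaptic depth L and the Hebbian rule concrete:

    ```python
    import numpy as np

    class TreeParityMachine:
        """K hidden units, N inputs each, integer weights in [-L, L]."""
        def __init__(self, K=3, N=100, L=3, seed=None):
            self.K, self.N, self.L = K, N, L
            self.w = np.random.default_rng(seed).integers(-L, L + 1, (K, N))

        def output(self, x):
            """x: array of shape (K, N) with entries +/-1."""
            self.sigma = np.sign(np.einsum('kn,kn->k', self.w, x))
            self.sigma[self.sigma == 0] = -1  # break ties
            return int(np.prod(self.sigma))

        def hebbian_update(self, x, tau):
            """Update only hidden units agreeing with the common output."""
            for k in range(self.K):
                if self.sigma[k] == tau:
                    self.w[k] += self.sigma[k] * x[k]
            np.clip(self.w, -self.L, self.L, out=self.w)  # bounded depth
    ```

    Synchronization time grows only polynomially in L while the attacker's cost grows exponentially, which is the asymmetry the genetic attack tries, and ultimately fails, to overcome.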

  17. Current-Sensitive Path Planning for an Underactuated Free-Floating Ocean Sensorweb

    NASA Technical Reports Server (NTRS)

    Dahl, Kristen P.; Thompson, David R.; McLaren, David; Chao, Yi; Chien, Steve

    2011-01-01

    This work investigates multi-agent path planning in strong, dynamic currents using thousands of highly under-actuated vehicles. We address the specific task of path planning for a global network of ocean-observing floats. These submersibles are typified by the Argo global network consisting of over 3000 sensor platforms. They can control their buoyancy to float at depth for data collection or rise to the surface for satellite communications. Currently, floats drift at a constant depth regardless of the local currents. However, accurate current forecasts have become available which present the possibility of intentionally controlling floats' motion by dynamically commanding them to linger at different depths. This project explores the use of these current predictions to direct float networks to some desired final formation or position. It presents multiple algorithms for such path optimization and demonstrates their advantage over the standard approach of constant-depth drifting.

  18. Stereoscopic depth increases intersubject correlations of brain networks.

    PubMed

    Gaebler, Michael; Biessmann, Felix; Lamke, Jan-Peter; Müller, Klaus-Robert; Walter, Henrik; Hetzer, Stefan

    2014-10-15

    Three-dimensional movies presented via stereoscopic displays have become more popular in recent years, aiming at a more engaging viewing experience. However, the neurocognitive processes associated with the perception of stereoscopic depth in complex and dynamic visual stimuli remain understudied. Here, we investigate the influence of stereoscopic depth on both neurophysiology and subjective experience. Using multivariate statistical learning methods, we compare the brain activity of subjects freely watching the same movies in 2D and in 3D. Subjective reports indicate that 3D movies are more strongly experienced than 2D movies. On the neural level, we observe significantly higher intersubject correlations of cortical networks when subjects watch 3D movies relative to the same movies in 2D. We demonstrate that increases in intersubject correlations of brain networks can serve as a neurophysiological marker for stereoscopic depth and for the strength of the viewing experience.
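
    The core measurement, leave-one-out intersubject correlation, is easy to state: correlate each subject's network time course with the average of everyone else's. A minimal sketch (ours; the paper's multivariate pipeline is more involved):

    ```python
    import numpy as np

    def intersubject_correlation(ts):
        """Leave-one-out ISC.  ts: (n_subjects, n_timepoints) array of one
        network's activity; returns one correlation per subject."""
        n = ts.shape[0]
        return np.array([
            np.corrcoef(ts[i], ts[np.arange(n) != i].mean(axis=0))[0, 1]
            for i in range(n)
        ])
    ```

    A higher mean ISC for the 3D condition than for the 2D condition is the neurophysiological marker described above.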

  19. Genetic attack on neural cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka

    2006-03-15

    Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.

  20. Genetic attack on neural cryptography

    NASA Astrophysics Data System (ADS)

    Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido

    2006-03-01

    Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.

  1. Regional correlations of VS30 and velocities averaged over depths less than and greater than 30 meters

    USGS Publications Warehouse

    Boore, D.M.; Thompson, E.M.; Cadet, H.

    2011-01-01

    Using velocity profiles from sites in Japan, California, Turkey, and Europe, we find that the time-averaged shear-wave velocity to 30 m (VS30), used as a proxy for site amplification in recent ground-motion prediction equations (GMPEs) and building codes, is strongly correlated with average velocities to depths less than 30 m (VSz, with z being the averaging depth). The correlations for sites in Japan (corresponding to the KiK-net network) show that VS30 is systematically larger for a given VSz than for profiles from the other regions. The difference largely results from the placement of the KiK-net station locations on rock and rocklike sites, whereas stations in the other regions are generally placed in urban areas underlain by sediments. Using the KiK-net velocity profiles, we provide equations relating VS30 to VSz for z ranging from 5 to 29 m in 1-m increments. These equations (and those for California velocity profiles given in Boore, 2004b) can be used to estimate VS30 from VSz for sites in which velocity profiles do not extend to 30 m. The scatter of the residuals decreases with depth, but, even for an averaging depth of 5 m, a variation in log VS30 of 1 standard deviation maps into less than a 20% uncertainty in ground motions given by recent GMPEs at short periods. The sensitivity of the ground motions to VS30 uncertainty is considerably larger at long periods (but is less than a factor of 1.2 for averaging depths greater than about 20 m). We also find that VS30 is correlated with VSz for z as great as 400 m for sites of the KiK-net network, providing some justification for using VS30 as a site-response variable for predicting ground motions at periods for which the wavelengths far exceed 30 m.
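
    The underlying quantity is the time-averaged velocity VSz = z / Σ(h_i/v_i), i.e., depth divided by the vertical shear-wave travel time through the layers above z; a small helper makes the extrapolation idea concrete (our sketch, not the published regression coefficients):

    ```python
    def vs_avg(thicknesses, velocities, z):
        """Time-averaged shear-wave velocity to depth z (m):
        VSz = z / sum_i(h_i / v_i) over the layers down to z."""
        t, depth = 0.0, 0.0
        for h, v in zip(thicknesses, velocities):
            step = min(h, z - depth)
            t += step / v
            depth += step
            if depth >= z:
                break
        return z / t
    ```

    The published equations then relate VS30 to such VSz values, so a profile reaching only, say, 14 m still yields a usable VS30 estimate with a quantified uncertainty.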

  2. Algorithms and uncertainties for the determination of multispectral irradiance components and aerosol optical depth from a shipborne rotating shadowband radiometer

    NASA Astrophysics Data System (ADS)

    Witthuhn, Jonas; Deneke, Hartwig; Macke, Andreas; Bernhard, Germar

    2017-03-01

    The 19-channel rotating shadowband radiometer GUVis-3511 built by Biospherical Instruments provides automated shipborne measurements of the direct, diffuse and global spectral irradiance components without a requirement for platform stabilization. Several direct sun products, including spectral direct beam transmittance, aerosol optical depth, Ångström exponent and precipitable water, can be derived from these observations. The individual steps of the data analysis are described, and the different sources of uncertainty are discussed. The total uncertainty of the observed direct beam transmittances is estimated to be about 4 % for most channels within a 95 % confidence interval for shipborne operation. The calibration is identified as the dominating contribution to the total uncertainty. A comparison of direct beam transmittances with those obtained from a Cimel sunphotometer at a land site and a manually operated Microtops II sunphotometer on a ship is presented. Measurements deviate by less than 3 and 4 % on land and on ship, respectively, for most channels and in agreement with our previous uncertainty estimate. These numbers demonstrate that the instrument is well suited for shipborne operation, and the applied methods for motion correction work accurately. Based on spectral direct beam transmittance, aerosol optical depth can be retrieved with an uncertainty of 0.02 for all channels within a 95 % confidence interval. The different methods to account for Rayleigh scattering and gas absorption in our scheme and in the Aerosol Robotic Network processing for Cimel sunphotometers lead to minor deviations. Relying on the cross calibration of the 940 nm water vapor channel with the Cimel sunphotometer, the column amount of precipitable water can be estimated with an uncertainty of ±0.034 cm.
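
    The direct-sun retrieval chain rests on the Beer-Lambert law, T = exp(-m·τ_total); inverting it and subtracting the molecular contributions leaves the aerosol optical depth. A minimal sketch (ours; the paper's scheme adds channel-specific gas corrections and full uncertainty propagation):

    ```python
    import numpy as np

    def aerosol_optical_depth(transmittance, airmass, tau_rayleigh, tau_gas=0.0):
        """Invert Beer-Lambert: tau_total = -ln(T)/m, then remove the
        Rayleigh and trace-gas optical depths."""
        tau_total = -np.log(transmittance) / airmass
        return tau_total - tau_rayleigh - tau_gas
    ```

    The stated 0.02 AOD uncertainty essentially follows from propagating the 4 % transmittance uncertainty through this relation.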

  3. Estimating moisture transport over oceans using space-based observations

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Wenqing, Tang

    2005-01-01

    The moisture transport integrated over the depth of the atmosphere (Θ) is estimated over oceans using satellite data. The transport is the product of the precipitable water and an equivalent velocity (ue), which, by definition, is the depth-averaged wind velocity weighted by humidity. An artificial neural network is employed to construct a relation between the surface wind velocity measured by the spaceborne scatterometer and coincident ue derived using humidity and wind profiles measured by rawinsondes and produced by reanalysis of operational numerical weather prediction (NWP). On the basis of this relation, Θ fields are produced over global tropical and subtropical oceans (40°N-40°S) at 0.25° latitude-longitude and twice-daily resolution from August 1999 to December 2003, using surface wind vectors from QuikSCAT and precipitable water from the Tropical Rainfall Measuring Mission. The derived ue was found to capture the major temporal variability when compared with radiosonde measurements. The average error over global oceans, when compared with NWP data, was comparable with the instrument accuracy specification of space-based scatterometers. The global distribution exhibits the known characteristics of, and reveals more detailed variability than, previous data.

  4. Learning spatially coherent properties of the visual world in connectionist networks

    NASA Astrophysics Data System (ADS)

    Becker, Suzanna; Hinton, Geoffrey E.

    1991-10-01

    In the unsupervised learning paradigm, a network of neuron-like units is presented with an ensemble of input patterns from a structured environment, such as the visual world, and learns to represent the regularities in that input. The major goal in developing unsupervised learning algorithms is to find objective functions that characterize the quality of the network's representation without explicitly specifying the desired outputs of any of the units. The objective functions considered cause a unit to become tuned to spatially coherent features of visual images (such as texture, depth, shading, and surface orientation) by learning to predict the outputs of other units that have spatially adjacent receptive fields. Simulations show that using an information-theoretic algorithm called IMAX, a network can be trained to represent depth by observing random dot stereograms of surfaces with continuously varying disparities. Once a layer of depth-tuned units has developed, subsequent layers are trained to perform surface interpolation of curved surfaces, by learning to predict the depth of one image region based on depth measurements in surrounding regions. An extension of the basic model allows a population of competing neurons to learn a distributed code for disparity, which naturally gives rise to a representation of discontinuities.

  5. Summary of the Georgia Agricultural Water Conservation and Metering Program and evaluation of methods used to collect and analyze irrigation data in the middle and lower Chattahoochee and Flint River basins, 2004-2010

    USGS Publications Warehouse

    Torak, Lynn J.; Painter, Jaime A.

    2011-01-01

    Since receiving jurisdiction from the State Legislature in June 2003 to implement the Georgia Agricultural Water Conservation and Metering Program, the Georgia Soil and Water Conservation Commission (Commission) by year-end 2010 installed more than 10,000 annually read water meters and nearly 200 daily reporting, satellite-transmitted, telemetry sites on irrigation systems located primarily in southern Georgia. More than 3,000 annually reported meters and 50 telemetry sites were installed during 2010 alone. The Commission monitored rates and volumes of agricultural irrigation supplied by groundwater, surface-water, and well-to-pond sources to inform water managers on the patterns and amounts of such water use and to determine effective and efficient resource utilization. Summary analyses of 4 complete years of irrigation data collected from annually read water meters in the middle and lower Chattahoochee and Flint River basins during 2007-2010 indicated that groundwater-supplied fields received slightly more irrigation depth per acre than surface-water-supplied fields. Year 2007 yielded the largest disparity between irrigation depth supplied by groundwater and surface-water sources as farmers responded to severe-to-exceptional drought conditions with increased irrigation. Groundwater sources (wells and well-to-pond systems) outnumbered surface-water sources by a factor of five; each groundwater source applied a third more irrigation volume than surface water; and, total irrigation volume from groundwater exceeded that of surface water by a factor of 6.7. Metered irrigation volume indicated a pattern of low-to-high water use from northwest to southeast that could point to relations between agricultural water use, water-resource potential and availability, soil type, and crop patterns. Normalizing metered irrigation-volume data by factoring out irrigated acres allowed irrigation water use to be expressed as an irrigation depth and nearly eliminated the disparity between volumes of applied irrigation derived from groundwater and surface water. Analysis of per-acre irrigation depths provided a commonality for comparing irrigation practices across the entire range of field sizes in southern Georgia and indicated underreporting of irrigated acres for some systems. Well-to-pond systems supplied irrigation at depths similar to groundwater and can be combined with groundwater irrigation data for subsequent analyses. Average irrigation depths during 2010 indicated an increase from average irrigation depths during 2008 and 2009, most likely the result of relatively dry conditions during 2010 compared to conditions in 2008 and 2009. Geostatistical models facilitated estimation of irrigation water use for unmetered systems and demonstrated usefulness in redesigning the telemetry network. Geospatial analysis evaluated the ability of the telemetry network to represent annually reported water-meter data and presented an objective, unbiased method for revising the network.

  6. Detailed interpretation of aeromagnetic data from the Patagonia Mountains area, southeastern Arizona

    USGS Publications Warehouse

    Bultman, Mark W.

    2015-01-01

    Euler deconvolution depth estimates derived from aeromagnetic data with a structural index of 0 show that mapped faults on the northern margin of the Patagonia Mountains generally agree with the depth estimates in the new geologic model. The deconvolution depth estimates also show that the concealed Patagonia Fault southwest of the Patagonia Mountains is more complex than recent geologic mapping represents. Additionally, Euler deconvolution depth estimates with a structural index of 2 locate many potential intrusive bodies that might be associated with known and unknown mineralization.
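
    Euler deconvolution solves the homogeneity equation (x−x0)∂T/∂x + (y−y0)∂T/∂y + (z−z0)∂T/∂z = N(B−T) for the source position (x0, y0, z0) and background field B in a sliding data window; the structural index N encodes the source type (0 for contacts and faults, 2 for pipe-like intrusions, matching the two uses above). A minimal least-squares sketch for one window (our illustration, not the report's software):

    ```python
    import numpy as np

    def euler_window(x, y, z, T, Tx, Ty, Tz, N):
        """Solve Euler's equation in one data window.
        x, y, z: observation coordinates; T: anomaly values;
        Tx, Ty, Tz: measured or computed field gradients; N: structural index."""
        A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
        b = x * Tx + y * Ty + z * Tz + N * T
        (x0, y0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
        return x0, y0, z0, B
    ```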

  7. Bayesian depth estimation from monocular natural images.

    PubMed

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2017-05-01

    Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.

  8. Depth inpainting by tensor voting.

    PubMed

    Kulkarni, Mandar; Rajagopalan, Ambasamudram N

    2013-06-01

    Depth maps captured by range scanning devices or by using optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of depth estimates among candidate values using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data.
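
    The plane-model step for simple holes is easy to illustrate: fit z = ax + by + c to the valid depths around a hole and evaluate it inside (a sketch under our own assumptions; the tensor-voting machinery that scores and refines such planes is omitted):

    ```python
    import numpy as np

    def fill_hole_with_plane(depth, mask, pad=3):
        """Fill a missing region of a depth map with a least-squares plane.
        depth: 2D float array; mask: True where depth is missing."""
        ys, xs = np.nonzero(mask)
        y0, y1 = max(ys.min() - pad, 0), ys.max() + pad + 1
        x0, x1 = max(xs.min() - pad, 0), xs.max() + pad + 1
        sub, hole = depth[y0:y1, x0:x1], mask[y0:y1, x0:x1]
        yy, xx = np.mgrid[y0:y1, x0:x1]
        A = np.column_stack([xx[~hole], yy[~hole], np.ones((~hole).sum())])
        (a, b, c), *_ = np.linalg.lstsq(A, sub[~hole], rcond=None)
        sub[hole] = a * xx[hole] + b * yy[hole] + c  # plane-equation fill
        return depth
    ```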

  9. Sierra Nevada snowpack and runoff prediction integrating basin-wide wireless-sensor network data

    NASA Astrophysics Data System (ADS)

    Yoon, Y.; Conklin, M. H.; Bales, R. C.; Zhang, Z.; Zheng, Z.; Glaser, S. D.

    2016-12-01

    We focus on characterizing snowpack and estimating runoff from snowmelt in the high-elevation area (>2100 m) of the Sierra Nevada at daily (for use in, e.g., flood and hydropower forecasting), seasonal (supply prediction), and decadal (long-term planning) time scales. Here, basin-wide wireless-sensor network data (ARHO, http://glaser.berkeley.edu/wsn/) are integrated into the USGS Precipitation-Runoff Modeling System (PRMS), and a case study of the American River basin is presented. In the American River basin, over 140 wireless sensors have been deployed at 14 sites chosen to span gradients of elevation, slope, aspect, and vegetation density; since 2013 these have provided spatially distributed snow depth, temperature, solar radiation, and soil moisture. An 800 m daily gridded dataset (PRISM) is used as the climate input for the PRMS. Model parameters are obtained from various sources (e.g., NLCD 2011, SSURGO, and NED) with a regionalization method and GIS analysis. We use a stepwise framework for model calibration to improve model performance and the locality of estimates; for this, the entire basin is divided into 12 subbasins that include full natural flow measurements. The study period is 1982-2014, which contains three major storm events and the recent severe drought. Simulated snow depth and snow water equivalent (SWE) are initially compared with the water year 2014 ARHO observations. The overall results show reasonable agreement, with a Nash-Sutcliffe efficiency coefficient (NS) of 0.7, ranging from 0.3 to 0.86. However, the results indicate a tendency to underestimate SWE in the high-elevation area compared with ARHO observations, which is caused by the underestimated PRISM precipitation data; precipitation in gauge-sparse regions (e.g., high-elevation areas) generally cannot be well represented in gridded datasets. Streamflow estimates at the basin outlet have an NS of 0.93, a percent bias of 7.8%, and a normalized root mean square error of 3.6% at the monthly time scale.
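
    The skill score quoted here, the Nash-Sutcliffe efficiency, compares squared model errors with the variance of the observations (NS = 1 is a perfect fit; NS = 0 is no better than the observed mean):

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    ```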

  10. Source Parameters of the 8 October, 2005 Mw7.6 Kashmir Earthquake

    NASA Astrophysics Data System (ADS)

    Mandal, Prantik; Chadha, R. K.; Kumar, N.; Raju, I. P.; Satyamurty, C.

    2007-12-01

    During the last six years, the National Geophysical Research Institute, Hyderabad has established a semi-permanent seismological network of 5 broadband seismographs and 10 accelerographs in the Kachchh seismic zone, Gujarat, with the prime objective to monitor the continued aftershock activity of the 2001 Mw7.7 Bhuj mainshock. The reliable and accurate broadband data for the Mw 7.6 (8 Oct., 2005) Kashmir earthquake and its aftershocks from this network, as well as from the Hyderabad Geoscope station, enabled us to estimate the group velocity dispersion characteristics and the one-dimensional regional shear-velocity structure of peninsular India. Firstly, we measure Rayleigh- and Love-wave group velocity dispersion curves in the range of 8 to 35 sec and invert these curves to estimate the crustal and upper mantle structure below the western part of peninsular India. Our best model suggests a two-layered crust: The upper crust is 13.8-km thick with a shear velocity (Vs) of 3.2 km/s; the corresponding values for the lower crust are 24.9 km and 3.7 km/sec. The shear velocity for the upper mantle is found to be 4.65 km/sec. Based on this structure, we perform a moment tensor (MT) inversion of the bandpass (0.05-0.02 Hz) filtered seismograms of the Kashmir earthquake. The best fit is obtained for a source located at a depth of 30 km, with a seismic moment, Mo, of 1.6 × 10^27 dyne-cm, and a focal mechanism with strike 19.5°, dip 42°, and rake 167°. The long-period magnitude (MA ~ Mw) of this earthquake is estimated to be 7.31. An analysis of well-developed sPn and sSn regional crustal phases from the bandpassed (0.02-0.25 Hz) seismograms of this earthquake at four stations in Kachchh suggests a focal depth of 30.8 km.

  11. Salient object detection based on multi-scale contrast.

    PubMed

    Wang, Hai; Dai, Lei; Cai, Yingfeng; Sun, Xiaoqiang; Chen, Long

    2018-05-01

    Due to the development of deep learning networks, salient object detection based on deep networks used to extract features has made a great breakthrough compared to traditional methods. At present, salient object detection mainly relies on very deep convolutional networks for feature extraction. In deep learning networks, however, a dramatic increase of network depth may instead cause more training errors. In this paper, we use a residual network to increase network depth and simultaneously mitigate the errors caused by the depth increase. Inspired by image simplification, we use color and texture features to obtain simplified images at multiple scales by means of region assimilation on the basis of super-pixels, in order to reduce the complexity of images and to improve the accuracy of salient-target detection. We refine the features at the pixel level by a multi-scale feature-correction method to avoid the feature errors introduced when the image is simplified at the above-mentioned region level. The final fully connected layer not only integrates features of multiple scales and levels but also works as the classifier of salient targets. The experimental results show that the proposed model achieves better results than other salient object detection models based on the original deep learning networks.
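
    The residual-learning idea referenced above is a standard identity-shortcut block: the network learns a correction to its input rather than a full mapping, which is what mitigates the degradation of very deep models. A minimal PyTorch sketch (ours, not the paper's exact block):

    ```python
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Two 3x3 convolutions plus an identity shortcut."""
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            y = self.conv2(self.relu(self.conv1(x)))
            return self.relu(x + y)  # shortcut keeps gradients flowing
    ```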

  12. Technique for estimating depth of 100-year floods in Tennessee

    USGS Publications Warehouse

    Gamble, Charles R.; Lewis, James G.

    1977-01-01

    Preface: A method is presented for estimating the depth of the 100-year flood in four hydrologic areas in Tennessee. Depths at 151 gaging stations on streams that were not significantly affected by man-made changes were related to basin characteristics by multiple-regression techniques. Equations derived from the analysis can be used to estimate the depth of the 100-year flood if the size of the drainage basin is known.

  13. Combining binary decision tree and geostatistical methods to estimate snow distribution in a mountain watershed

    USGS Publications Warehouse

    Balk, Benjamin; Elder, Kelly

    2000-01-01

    We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9‐km2 Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large‐scale variations in snow depth, while the small‐scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54–65% of the observed variance in the depth measurements. The tree‐based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree‐based modeled depths to produce a combined depth model. The combined depth estimates explained 60–85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow‐covered area was determined from high‐resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
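
    The two-stage estimator is simple to sketch: a tree captures the large-scale terrain dependence and a spatial interpolator handles the residuals. Below, scikit-learn's Gaussian-process regressor stands in for ordinary kriging (to which it is closely related); names and hyperparameters are our own assumptions:

    ```python
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def tree_plus_kriging(X_terrain, xy, depth, X_terrain_new, xy_new):
        """X_terrain: (n, p) terrain variables (radiation, elevation, ...);
        xy: (n, 2) survey coordinates; depth: (n,) measured snow depths."""
        tree = DecisionTreeRegressor(max_depth=5).fit(X_terrain, depth)
        resid = depth - tree.predict(X_terrain)             # small-scale part
        gp = GaussianProcessRegressor(RBF(length_scale=200.0)).fit(xy, resid)
        return tree.predict(X_terrain_new) + gp.predict(xy_new)
    ```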

  14. Optical depth measurements by shadow-band radiometers and their uncertainties.

    PubMed

    Alexandrov, Mikhail D; Kiedron, Peter; Michalsky, Joseph J; Hodges, Gary; Flynn, Connor J; Lacis, Andrew A

    2007-11-20

    Shadow-band radiometers in general, and especially the Multi-Filter Rotating Shadow-band Radiometer (MFRSR), are widely used for atmospheric optical depth measurements. The major programs running MFRSR networks in the United States include the Department of Energy Atmospheric Radiation Measurement (ARM) Program, U.S. Department of Agriculture UV-B Monitoring and Research Program, National Oceanic and Atmospheric Administration Surface Radiation (SURFRAD) Network, and NASA Solar Irradiance Research Network (SIRN). We discuss a number of technical issues specific to shadow-band radiometers and their impact on the optical depth measurements. These problems include instrument tilt and misalignment, as well as some data processing artifacts. Techniques for data evaluation and automatic detection of some of these problems are described.

  15. Estimation of the Cloud condensation nuclei concentration (CCN) and aerosol optical depth (AOD) relation in the Arctic region

    NASA Astrophysics Data System (ADS)

    Jung, C. H.; Yoon, Y. J.; Ahn, S. H.; Kang, H. J.; Gim, Y. T.; Lee, B. Y.

    2017-12-01

    Information on the spatial and temporal variations of cloud condensation nuclei (CCN) concentrations is important in estimating aerosol indirect effects. CCN concentrations are generally difficult to estimate using remote sensing methods, and although many CCN measurement datasets exist, extensive measurements of CCN are not feasible because of the complex nature of the operation and the high cost, especially in the Arctic region. Thus, there have been many attempts to estimate CCN concentrations from more easily obtainable parameters such as aerosol optical depth (AOD), which has the advantage of being readily observed from space by several remote sensing sensors. For example, a correlation between AOD and the number concentration of CCN was derived by comparing AERONET network results with CCN measurements (Andreae, 2009). In this study, a parameterization of CCN concentration as a function of AOD at 500 nm is derived for the Arctic region. CCN data were collected during the period 2007-2013 at the Zeppelin observatory (78.91° N, 11.89° E, 474 m a.s.l.). AERONET and MODIS AOD data are compared with the ground-based CCN measurements, and the relations between AOD and CCN are parameterized; seasonal characteristics as well as long-term trends are also considered. The measurements show that CCN concentrations remain high during spring because of aerosol transport from the mid-latitudes, known as Arctic Haze, while the lowest CCN number densities were observed during Arctic autumn and early winter, when long-range aerosol transport into the Arctic is not effective and new particle formation ceases. The results show that the AOD-CCN relation takes different parameters depending on the seasonal aerosol and CCN characteristics, which can be attributed to seasonal differences in physico-chemical aerosol properties, including size distribution and composition. Reference: Andreae, M. O. (2009) Correlation between cloud condensation nuclei concentration and aerosol optical thickness in remote and polluted regions, Atmos. Chem. Phys., 9, 543-556.
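
    The Andreae (2009) relation referenced above is a power law, CCN = a·AOD^b, which fits as a straight line in log-log space; a minimal sketch of such a fit (our illustration, to be applied per season given the seasonally varying parameters described above):

    ```python
    import numpy as np

    def fit_ccn_aod(aod, ccn):
        """Fit CCN = a * AOD**b by linear regression of log(CCN)
        on log(AOD); returns (a, b)."""
        b, log_a = np.polyfit(np.log(aod), np.log(ccn), 1)
        return np.exp(log_a), b
    ```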

  16. Strong Motion Network of Medellín and Aburrá Valley: technical advances, seismicity records and micro-earthquake monitoring

    NASA Astrophysics Data System (ADS)

    Posada, G.; Trujillo, J. C., Sr.; Hoyos, C.; Monsalve, G.

    2017-12-01

    The tectonic setting of Colombia is determined by the interaction of the Nazca, Caribbean, and South American plates, together with the Panama-Choco block collision, which makes it a seismically active region. Regional seismic monitoring is carried out by the National Seismological Network of Colombia and the Accelerometer National Network of Colombia; both networks calculate locations, magnitudes, depths, accelerations, and other seismic parameters. The Medellín-Aburra Valley is located in the northern segment of the Central Cordillera of Colombia and, according to the Colombian technical seismic norm (NSR-10), is a region of intermediate hazard because of its proximity to the seismic sources of the Valley. Seismic monitoring in the Aburra Valley began in 1996 with an accelerometer network of 38 instruments. Currently, the network consists of 26 stations and is run by the Early Warning System of Medellin and Aburra Valley (SIATA). Technical advances have allowed real-time communication for the past year, currently at 10 stations; post-earthquake data are processed operationally in near-real time, quickly yielding locations, accelerations, response spectra, and Fourier analyses, and this information is displayed on the SIATA web site. The strong-motion database comprises 280 earthquakes and is the basis for the estimation of seismic hazard and risk for the region. A basic statistical analysis of the main information was carried out, including the total recorded events per station, natural frequency, maximum accelerations, depths, and magnitudes, which allowed us to identify the main seismic sources and some seismic site parameters. Toward more complete seismic monitoring, and in order to identify seismic sources beneath the Valley, we are in the process of installing 10 low-cost shake seismometers for micro-earthquake monitoring. There is no historical record of earthquakes with a magnitude greater than 3.5 beneath the Aburra Valley, and the neotectonic evidence is limited, so it is expected that this network will help characterize the seismic hazard.

  17. 3D depth-to-basement and density contrast estimates using gravity and borehole data

    NASA Astrophysics Data System (ADS)

    Barbosa, V. C.; Martins, C. M.; Silva, J. B.

    2009-05-01

    We present a gravity inversion method for simultaneously estimating the 3D basement relief of a sedimentary basin and the parameters defining the parabolic decay of the density contrast with depth in the sedimentary pack, assuming prior knowledge of the basement depth at a few points. The sedimentary pack is approximated by a grid of 3D vertical prisms juxtaposed in both horizontal directions, x and y, of a right-handed coordinate system. The prisms' thicknesses represent the depths to basement and are the parameters to be estimated from the gravity data. To produce stable depth-to-basement estimates we impose smoothness on the basement depths through minimization of the spatial derivatives of the parameters in the x and y directions. To estimate the parameters defining the parabolic decay of the density contrast with depth, we map a functional containing the prior information about the basement depths at a few points. We apply our method to synthetic data from a simulated complex 3D basement relief with two sedimentary sections having distinct parabolic laws describing the variation of density contrast with depth. Our method retrieves the true parameters of the parabolic law of density-contrast decay with depth and produces good estimates of the basement relief if the number and distribution of boreholes are sufficient. We also applied our method to real gravity data from the onshore and part of the shallow offshore Almada Basin, on Brazil's northeastern coast. The estimated 3D basement of the Almada Basin shows geologic structures that cannot easily be inferred just from inspection of the gravity anomaly. The estimated relief presents steep borders evidencing the presence of gravity faults, and we note the existence of three terraces separating two local subbasins. These geologic features are consistent with Almada's geodynamic origin (the Mesozoic breakup of Gondwana and the opening of the South Atlantic Ocean) and are important in understanding the basin evolution and in detecting structural oil traps.

  18. Quantitative subsurface analysis using frequency modulated thermal wave imaging

    NASA Astrophysics Data System (ADS)

    Subhani, S. K.; Suresh, B.; Ghali, V. S.

    2018-01-01

    Quantitative estimation of the depth of subsurface anomalies with enhanced depth resolution is a challenging task in thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the resulting thermal response with a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier-transform-based methods used for post-processing unscramble the frequencies with limited frequency resolution and thus yield a finite depth resolution. The spectral zooming provided by the chirp z-transform gives enhanced frequency resolution, which further improves the depth resolution so that the finest subsurface features can be explored axially. Quantitative depth analysis with this augmented depth resolution is proposed to provide a closer estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first, unique solution for quantitative depth estimation in frequency modulated thermal wave imaging.
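
    The spectral zooming step can be sketched with SciPy's chirp z-transform (scipy.signal.czt, available in SciPy 1.8+); the parameters below configure it to evaluate the spectrum densely within a narrow band, which is the source of the enhanced frequency (and hence depth) resolution (a sketch, not the authors' processing chain):

    ```python
    import numpy as np
    from scipy.signal import czt

    def zoom_spectrum(x, fs, f1, f2, m=512):
        """Evaluate the spectrum of x at m points between f1 and f2 Hz.
        fs: sampling rate (Hz).  Returns (frequencies, complex spectrum)."""
        w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))  # spacing on the arc
        a = np.exp(2j * np.pi * f1 / fs)                # starting point
        freqs = f1 + (f2 - f1) * np.arange(m) / m
        return freqs, czt(x, m=m, w=w, a=a)
    ```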

  19. Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed

    USGS Publications Warehouse

    Balk, B.; Elder, K.; Baron, Jill S.

    1998-01-01

    Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak-accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km2), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate the SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates that have minimum variances. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and a complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.

  20. Quantifying the accuracy of snow water equivalent estimates using broadband radar signal phase

    NASA Astrophysics Data System (ADS)

    Deeb, E. J.; Marshall, H. P.; Lamie, N. J.; Arcone, S. A.

    2014-12-01

    Radar wave velocity in dry snow depends solely on density. Consequently, ground-based pulsed systems can be used to accurately measure snow depth and snow water equivalent (SWE) using signal travel-time, along with manual depth-probing for signal velocity calibration. Travel-time measurements require a large bandwidth pulse not possible in airborne/space-borne platforms. In addition, radar backscatter from snow cover is sensitive to grain size and to a lesser extent roughness of layers at current/proposed satellite-based frequencies (~ 8 - 18 GHz), complicating inversion for SWE. Therefore, accurate retrievals of SWE still require local calibration due to this sensitivity to microstructure and layering. Conversely, satellite radar interferometry, which senses the difference in signal phase between acquisitions, has shown a potential relationship with SWE at lower frequencies (~ 1 - 5 GHz) because the phase of the snow-refracted signal is sensitive to depth and dielectric properties of the snowpack, as opposed to its microstructure and stratigraphy. We have constructed a lab-based, experimental test bed to quantify the change in radar phase over a wide range of frequencies for varying depths of dry quartz sand, a material dielectrically similar to dry snow. We use a laboratory grade Vector Network Analyzer (0.01 - 25.6 GHz) and a pair of antennae mounted on a trolley over the test bed to measure amplitude and phase repeatedly/accurately at many frequencies. Using ground-based LiDAR instrumentation, we collect a coordinated high-resolution digital surface model (DSM) of the test bed and subsequent depth surfaces with which to compare the radar record of changes in phase. Our plans to transition this methodology to a field deployment during winter 2014-2015 using precision pan/tilt instrumentation will also be presented, as well as applications to airborne and space-borne platforms toward the estimation of SWE at high spatial resolution (on the order of meters) over large regions (> 100 square kilometers).

  1. Crustal velocity structure and earthquake processes of Garhwal-Kumaun Himalaya: Constraints from regional waveform inversion and array beam modeling

    NASA Astrophysics Data System (ADS)

    Negi, Sanjay S.; Paul, Ajay; Cesca, Simone; Kamal; Kriegerowski, Marius; Mahesh, P.; Gupta, Sandeep

    2017-08-01

    In order to understand present-day earthquake kinematics at the Indian plate boundary, we analyse seismic broadband data recorded between 2007 and 2015 by the regional network in the Garhwal-Kumaun region, northwest Himalaya. We first estimate a local 1-D velocity model for the computation of reliable Green's functions, based on 2837 P-wave and 2680 S-wave arrivals from 251 well-located earthquakes. The resulting 1-D crustal structure yields a 4-layer velocity model down to depths of 20 km; a fifth homogeneous layer extends down to 46 km, constraining the Moho using the travel-time-distance curve method. We then employ a multistep moment tensor (MT) inversion algorithm to infer seismic moment tensors of 11 moderate earthquakes with Mw in the range 4.0-5.0. The method provides fast MT inversion for future monitoring of local seismicity, since the Green's function database has been prepared. To further support the moment tensor solutions, we additionally model P-phase beams at seismic arrays at teleseismic distances. The MT inversion results reveal dominant thrust-fault kinematics persisting along the Himalayan belt: shallow low- and high-angle thrust faulting is the dominating mechanism in the Garhwal-Kumaun Himalaya. The centroid depths of these moderate earthquakes are shallow, between 1 and 12 km, and the beam modeling results confirm hypocentral depth estimates between 1 and 7 km. The updated seismicity, constrained source mechanisms, and depth results indicate a typical setting of duplexes above the mid-crustal ramp, where slip is confirmed along out-of-sequence thrusting. The involvement of the Tons thrust sheet in out-of-sequence thrusting indicates the Tons thrust to be the principal active thrust at shallow depth in the Himalayan region. Our results thus support the critical taper wedge theory, and we infer that the microseismicity cluster results from intense activity within the Lesser Himalayan Duplex (LHD) system.

  2. Brittle fracture damage around the Alpine Fault, New Zealand

    NASA Astrophysics Data System (ADS)

    Williams, J. N.; Toy, V.; Smith, S. A. F.; Boulton, C. J.; Massiot, C.; Mcnamara, D. D.

    2017-12-01

    We use field and drill-core samples to characterize macro- to micro-scale brittle fracture networks within the hanging-wall of New Zealand's Alpine Fault, an active plate-boundary fault that is approaching the end of its seismic cycle. Fracture density in the hanging-wall is roughly constant for distances of up to 500 m from the principal slip zone gouges (PSZs). Fractures >160 m from the PSZs are typically open and parallel to the regional mylonitic foliation or host rock schistosity, and likely formed as unloading joints during rapid exhumation of the hanging-wall at shallow depths. Fractures within c. 160 m of the PSZs are broadly oriented shear-fractures filled with gouge or cataclasite, and are interpreted to constitute the hanging-wall damage zone of the Alpine Fault. This is comparable to the 60-200 m wide "geophysical damage zone" estimated from low seismic wave velocities surrounding the Alpine Fault. Veins are pervasive within the c. 20 m-thick hanging-wall cataclasites and are most commonly filled by calcite, chlorite, muscovite and K-feldspar. Notably, there is a set of intragranular clast-hosted veins, as well as a younger set of veins that cross-cut both clasts and cataclasite matrix. The intragranular veins formed prior to cataclasis or during synchronous cataclasis and calcite-silicate mineralisation. Broad estimates for the depth of vein formation indicate that the cataclasites formed a c. 20 m wide actively deforming zone at depths of c. 4-8 km. Conversely, the cross-cutting veins are interpreted to represent off-fault damage within relatively indurated cataclasites following slip localization onto the <10 cm wide smectite-bearing PSZ gouges at depths of <4 km. Our observations therefore highlight a strong depth-dependence of the width of the actively deforming zone within the brittle seismogenic crust around the Alpine Fault.

  3. An Automated Method of MFRSR Calibration for Aerosol Optical Depth Analysis with Application to an Asian Dust Outbreak over the United States.

    NASA Astrophysics Data System (ADS)

    Augustine, John A.; Cornwall, Christopher R.; Hodges, Gary B.; Long, Charles N.; Medina, Carlos I.; Deluisi, John J.

    2003-02-01

    Over the past decade, networks of Multifilter Rotating Shadowband Radiometers (MFRSR) and automated sun photometers have been established in the United States to monitor aerosol properties. The MFRSR alternately measures diffuse and global irradiance in six narrow spectral bands and a broadband channel of the solar spectrum, from which the direct normal component for each may be inferred. Its 500-nm channel mimics sun photometer measurements and thus is a source of aerosol optical depth information. Automatic data reduction methods are needed because of the high volume of data produced by the MFRSR. In addition, these instruments are often not calibrated for absolute irradiance and must be periodically calibrated for optical depth analysis using the Langley method. This process involves extrapolation to the signal the MFRSR would measure at the top of the atmosphere (I0). Here, an automated clear-sky identification algorithm is used to screen MFRSR 500-nm measurements for suitable calibration data. The clear-sky MFRSR measurements are subsequently used to construct a set of calibration Langley plots from which a mean I0 is computed. This calibration I0 may be subsequently applied to any MFRSR 500-nm measurement within the calibration period to retrieve aerosol optical depth. This method is tested on a 2-month MFRSR dataset from the Table Mountain NOAA Surface Radiation Budget Network (SURFRAD) station near Boulder, Colorado. The resultant I0 is applied to two Asian dust-related high air pollution episodes that occurred within the calibration period on 13 and 17 April 2001. Computed aerosol optical depths for 17 April range from approximately 0.30 to 0.40, and those for 13 April vary from background levels to >0.30. Errors in these retrievals were estimated to range from ±0.01 to ±0.05, depending on the solar zenith angle. The calculations are compared with independent MFRSR-based aerosol optical depth retrievals at the Pawnee National Grasslands, 85 km to the northeast of Table Mountain, and to sun-photometer-derived aerosol optical depths at the National Renewable Energy Laboratory in Golden, Colorado, 50 km to the south. Both the Table Mountain and Golden stations are situated within a few kilometers of the Front Range of the Rocky Mountains, whereas the Pawnee station is on the eastern plains of Colorado. Time series of aerosol optical depth from Pawnee and Table Mountain stations compare well for 13 April when, according to the Naval Aerosol Analysis and Prediction System, an upper-level Asian dust plume enveloped most of Colorado. Aerosol optical depths at the Golden station for that event are generally greater than those at Table Mountain and Pawnee, possibly because of the proximity of Golden to Denver's urban aerosol plume. The dust over Colorado was primarily surface based on 17 April. On that day, aerosol optical depths at Table Mountain and Golden are similar but are 2 times the magnitude of those at Pawnee. This difference is attributed to meteorological conditions that favored air stagnation in the planetary boundary layer along the Front Range, and a west-to-east gradient in aerosol concentration. The magnitude and timing of the aerosol optical depth measurements at Table Mountain for these events are found to be consistent with independent measurements made at NASA Aerosol Robotic Network (AERONET) stations at Missoula, Montana, and at Bondville, Illinois.
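
    The Langley method at the heart of this calibration is a straight-line fit: ln V = ln I0 − m·τ, so regressing the log signal against airmass m over a clear, stable period gives ln I0 as the intercept (a minimal sketch of one Langley plot, ours, without the clear-sky screening that is the paper's contribution):

    ```python
    import numpy as np

    def langley_fit(airmass, signal):
        """Fit ln(signal) = ln(I0) - tau * airmass.
        Returns (I0, tau): the extrapolated top-of-atmosphere signal and
        the mean optical depth during the calibration period."""
        slope, intercept = np.polyfit(airmass, np.log(signal), 1)
        return np.exp(intercept), -slope
    ```

    With a calibrated I0, any later measurement V at airmass m yields τ = ln(I0/V)/m, which is how the dust-event optical depths above are retrieved.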

  4. Size matters: Perceived depth magnitude varies with stimulus height.

    PubMed

    Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S

    2016-06-01

    Both the upper and lower disparity limits for stereopsis vary with the size of the targets. Recently, Tsirlin, Wilcox, and Allison (2012) suggested that perceived depth magnitude from stereopsis might also depend on the vertical extent of a stimulus. To test this hypothesis we compared apparent depth in small discs to depth in long bars with equivalent width and disparity. We used three estimation techniques: a virtual ruler, a touch-sensor (for haptic estimates) and a disparity probe. We found that depth estimates were significantly larger for the bar stimuli than for the disc stimuli for all methods of estimation and different configurations. In a second experiment, we measured perceived depth as a function of the height of the bar and the radius of the disc. Perceived depth increased with increasing bar height and disc radius suggesting that disparity is integrated along the vertical edges. We discuss size-disparity correlation and inter-neural excitatory connections as potential mechanisms that could account for these results.

  5. Uncertainty in cloud optical depth estimates made from satellite radiance measurements

    NASA Technical Reports Server (NTRS)

    Pincus, Robert; Szczodrak, Malgorzata; Gu, Jiujing; Austin, Philip

    1995-01-01

    The uncertainty in optical depths retrieved from satellite measurements of visible-wavelength radiance at the top of the atmosphere is quantified. Techniques for the estimation of optical depth from measurements of radiance are briefly reviewed, and it is noted that these estimates are always more uncertain at greater optical depths and larger solar zenith angles. The lack of radiometric calibration for visible-wavelength imagers on operational satellites dominates the uncertainty in retrievals of optical depth. This is true both for single-pixel retrievals and for statistics calculated from a population of individual retrievals. For individual estimates or small samples, sensor discretization can also be significant, but the sensitivity of the retrieval to the specification of the model atmosphere is less important. The relative uncertainty in calibration affects the accuracy with which optical depth distributions measured by different sensors may be quantitatively compared, while the absolute calibration uncertainty, acting through the nonlinear mapping of radiance to optical depth, limits the degree to which distributions measured by the same sensor may be distinguished.

  6. Joint application of local earthquake tomography and Curie depth point analysis gives evidence of magma presence below the geothermal field of Central Greece.

    NASA Astrophysics Data System (ADS)

    Karastathis, Vassilios; Papoulia, Joanna; di Fiore, Boris; Makris, Jannis; Tsambas, Anestis; Stampolidis, Alexandros; Papadopoulos, Gerassimos

    2010-05-01

    Along the coast of the North Evian Gulf, Central Greece, there are significant geothermal sites and thermal springs, such as Aedipsos, Yaltra, Lichades, Ilia, Kamena Vourla, and Thermopylae, as well as volcanoes of Quaternary-Pleistocene age such as Lichades and Vromolimni. Because the deep origin of these local volcanoes and geothermal fields, and their relation to those of the wider region, has not yet been clarified in detail, we investigated the deep structure by conducting a 3D local earthquake tomography study combined with Curie depth analysis of aeromagnetic data. A seismographic network of 23 portable land stations and 7 OBS was deployed in the area of the North Evian Gulf to record the microseismic activity for a 4-month period. Two thousand events were located, with ML ranging from 0.7 to 4.5. To build the 3D seismic velocity structure of the investigation area, we performed traveltime inversion with the SIMULPS14 algorithm on the 540 best-located events. The code performs simultaneous inversion for the model parameters Vp and Vp/Vs and for the hypocenter locations. To select a reliable 1D starting model for the tomographic inversion, the seismic arrivals were first inverted with the VELEST algorithm (minimum 1D velocity model). The values of the damping factor were chosen with the aid of the trade-off curve between model variance and data variance. Six horizontal slices of the 3D P-wave velocity model, and the corresponding slices of the Poisson ratio, were constructed. We also set a reliability limit on the sections, based on a comparison between graphical representations of the diagonal elements of the resolution matrix (RDE) and the recovery of "checkerboard" models. To estimate the Curie point depth we followed the centroid procedure: the filtered residual magnetic dataset of the area was subdivided into five square subregions, named C1 to C5, each 90x90 km2 and overlapping by 70%. In each subregion the radially averaged power spectrum was computed. The slope of the longest-wavelength part of the spectrum yields the centroid depth, zo, of the deepest layer of magnetic sources, while the slope of the second-longest-wavelength spectral segment yields the depth to the top of the same layer, zt. Using the formula zb = 2zo - zt, the Curie depth was estimated for each subregion and assigned to its centre. The estimated depths are between 7 and 8.1 km below sea level. The results showed the existence of a low-seismic-velocity volume with a high Poisson ratio at depths greater than 8 km. Since the Curie depth analysis places the thermal demagnetization of the material at the top of this volume, we were led to consider that this volume is related to the presence of a magma chamber. Below the sites of the Quaternary volcanoes of Lichades, Vromolimni, and Ag. Ioannis there is a local increase of seismic velocity above the low-velocity anomaly, attributed to a crystallized magma volume beneath the volcanoes. The coincidence of the spatial distribution of surface geothermal sites and volcanoes with the deep low-velocity anomaly strengthens the interpretation of magma presence at this anomaly. The seismic slices at 4 km depth show that the supply of the thermal springs at the surface is related to the main fault zones of the area.
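
    The centroid procedure used here comes down to two spectral-slope fits plus the closing formula zb = 2zo - zt. A minimal sketch, assuming the standard centroid-method spectral decay forms (ln P ~ -2k*zt for the top, ln(P/k^2) ~ -2k*zo for the centroid) and synthetic depths chosen to land in the reported 7-8.1 km range:

    ```python
    import numpy as np

    def layer_depth(k, power, centroid=False):
        """Depth from the slope of a radially averaged power-spectrum segment
        (k in rad/km): ln P(k) decays as -2*k*zt for the depth to the top,
        and ln(P(k)/k^2) as -2*k*zo for the centroid depth."""
        y = np.log(power / k**2) if centroid else np.log(power)
        slope = np.polyfit(k, y, 1)[0]
        return -slope / 2.0

    # Synthetic spectral segments for one subregion (all depths are assumptions)
    k_long = np.linspace(0.02, 0.06, 10)     # longest-wavelength segment
    k_mid = np.linspace(0.10, 0.25, 10)      # second-longest segment
    zo = layer_depth(k_long, k_long**2 * np.exp(-2 * 5.0 * k_long), centroid=True)
    zt = layer_depth(k_mid, np.exp(-2 * 2.2 * k_mid))
    zb = 2 * zo - zt    # Curie depth: 2*5.0 - 2.2 = 7.8 km
    ```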

  7. How Choice of Depth Horizon Influences the Estimated Spatial Patterns and Global Magnitude of Ocean Carbon Export Flux

    NASA Astrophysics Data System (ADS)

    Palevsky, Hilary I.; Doney, Scott C.

    2018-05-01

    Estimated rates and efficiency of ocean carbon export flux are sensitive to differences in the depth horizons used to define export, which often vary across methodological approaches. We evaluate sinking particulate organic carbon (POC) flux rates and efficiency (e-ratios) in a global earth system model, using a range of commonly used depth horizons: the seasonal mixed layer depth, the particle compensation depth, the base of the euphotic zone, a fixed depth horizon of 100 m, and the maximum annual mixed layer depth. Within this single dynamically consistent model framework, global POC flux rates vary by 30% and global e-ratios by 21% across different depth horizon choices. Zonal variability in POC flux and e-ratio also depends on the export depth horizon due to pronounced influence of deep winter mixing in subpolar regions. Efforts to reconcile conflicting estimates of export need to account for these systematic discrepancies created by differing depth horizon choices.

  8. WEPP and ANN models for simulating soil loss and runoff in a semi-arid Mediterranean region.

    PubMed

    Albaradeyia, Issa; Hani, Azzedine; Shahrour, Isam

    2011-09-01

    This paper presents the use of both the Water Erosion Prediction Project (WEPP) model and an artificial neural network (ANN) for the prediction of runoff and soil loss in the central mountainous highlands of the Palestinian territories. Analyses show that soil erosion is highly dependent on both rainfall depth and rainfall event duration, rather than on rainfall intensity as mostly reported in the literature. The results obtained from the WEPP model for soil loss and runoff disagree with the field data: WEPP underestimates both runoff and soil loss. Analyses conducted with the ANN agree well with the observations. In addition, the global network models developed using the data from all land use types give relatively unbiased estimates of both runoff and soil loss. The study shows that the ANN model could be used as a management tool for predicting runoff and soil loss.

  9. Global potential for wind-generated electricity

    PubMed Central

    Lu, Xi; McElroy, Michael B.; Kiviluoma, Juha

    2009-01-01

    The potential of wind power as a global source of electricity is assessed by using winds derived through assimilation of data from a variety of meteorological sources. The analysis indicates that a network of land-based 2.5-megawatt (MW) turbines restricted to nonforested, ice-free, nonurban areas operating at as little as 20% of their rated capacity could supply >40 times current worldwide consumption of electricity, >5 times total global use of energy in all forms. Resources in the contiguous United States, specifically in the central plain states, could accommodate as much as 16 times total current demand for electricity in the United States. Estimates are given also for quantities of electricity that could be obtained by using a network of 3.6-MW turbines deployed in ocean waters with depths <200 m within 50 nautical miles (92.6 km) of closest coastlines. PMID:19549865

  10. Updates to Enhanced Geothermal System Resource Potential Estimate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustine, Chad

    The deep EGS electricity generation resource potential estimate maintained by the National Renewable Energy Laboratory was updated using the most recent temperature-at-depth maps available from the Southern Methodist University Geothermal Laboratory. The previous study dates back to 2011 and was developed using the original temperature-at-depth maps showcased in the 2006 MIT Future of Geothermal Energy report. The methodology used to update the deep EGS resource potential is the same as in the previous study and is summarized in the paper. The updated deep EGS resource potential estimate was calculated for depths between 3 and 7 km and is binned in 25°C increments. The updated deep EGS electricity generation resource potential estimate is 4,349 GWe. A comparison of the estimates from the previous and updated studies shows a net increase of 117 GWe in the 3-7 km depth range, due mainly to increases in the underlying temperature-at-depth estimates from the updated maps.

  11. Update to Enhanced Geothermal System Resource Potential Estimate: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustine, Chad

    2016-10-01

    The deep EGS electricity generation resource potential estimate maintained by the National Renewable Energy Laboratory was updated using the most recent temperature-at-depth maps available from the Southern Methodist University Geothermal Laboratory. The previous study dates back to 2011 and was developed using the original temperature-at-depth maps showcased in the 2006 MIT Future of Geothermal Energy report. The methodology used to update the deep EGS resource potential is the same as in the previous study and is summarized in the paper. The updated deep EGS resource potential estimate was calculated for depths between 3 and 7 km and is binned in 25°C increments. The updated deep EGS electricity generation resource potential estimate is 4,349 GWe. A comparison of the estimates from the previous and updated studies shows a net increase of 117 GWe in the 3-7 km depth range, due mainly to increases in the underlying temperature-at-depth estimates from the updated maps.

  12. Calculating depths to shallow magnetic sources using aeromagnetic data from the Tucson Basin

    USGS Publications Warehouse

    Casto, Daniel W.

    2001-01-01

    Using gridded high-resolution aeromagnetic data, the performance of several automated 3-D depth-to-source methods was evaluated over shallow control sources based on how close their depth estimates came to the actual depths to the tops of the sources. For all three control sources, only the simple analytic signal method, the local wavenumber method applied to the vertical integral of the magnetic field, and the horizontal gradient method applied to the pseudo-gravity field provided median depth estimates that were close (-11% to +14% error) to the actual depths. Careful attention to data processing was required in order to calculate a sufficient number of depth estimates and to reduce the occurrence of false depth estimates. For example, to eliminate sampling bias, high-frequency noise and interference from deeper sources, it was necessary to filter the data before calculating derivative grids and subsequent depth estimates. To obtain smooth spatial derivative grids using finite differences, the data had to be gridded at intervals less than one percent of the anomaly wavelength. Before finding peak values in the derived signal grids, it was necessary to remove calculation noise by applying a low-pass filter in the grid-line directions and to re-grid at an interval that enabled the search window to encompass only the peaks of interest. Using the methods that worked best over the control sources, depth estimates over geologic sites of interest suggested the possible occurrence of volcanics nearly 170 meters beneath a city landfill. Also, a throw of around 2 kilometers was determined for a detachment fault that has a displacement of roughly 6 kilometers.
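
    As an illustration of the simple analytic signal method named above, the sketch below computes the analytic-signal amplitude along a profile and fits the textbook 2D-contact shape, whose width parameter is the depth to the source top. The contact model and all numbers are generic assumptions, not this survey's data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.signal import hilbert

    def analytic_signal_amplitude(total_field, dx):
        """|A| = sqrt((dT/dx)^2 + (dT/dz)^2); for 2D sources the vertical
        derivative is the Hilbert transform of the horizontal derivative."""
        dTdx = np.gradient(total_field, dx)
        dTdz = np.imag(hilbert(dTdx))
        return np.hypot(dTdx, dTdz)

    def contact_model(x, alpha, x0, h):
        """Analytic-signal shape over a 2D magnetic contact at x0, depth h."""
        return alpha / np.sqrt((x - x0) ** 2 + h ** 2)

    # Fit the bell-shaped amplitude around one peak to estimate source depth
    x = np.arange(-500.0, 500.0, 10.0)            # metres along the profile
    amp = contact_model(x, 5.0e4, 0.0, 170.0)     # synthetic peak, 170-m source
    (alpha, x0, h), _ = curve_fit(contact_model, x, amp, p0=(1.0e4, 50.0, 100.0))
    ```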

  13. Second Quarter Hanford Seismic Report for Fiscal Year 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohay, Alan C.; Sweeney, Mark D.; Hartshorn, Donald C.

    2010-06-30

    The Hanford Seismic Assessment Program (HSAP) provides an uninterrupted collection of high-quality raw and processed seismic data from the Hanford Seismic Network for the U.S. Department of Energy and its contractors. The HSAP is responsible for locating and identifying sources of seismic activity and monitoring changes in the historical pattern of seismic activity at the Hanford Site. The data are compiled, archived, and published for use by the Hanford Site for waste management, natural phenomena hazards assessments, and engineering design and construction. In addition, the HSAP works with the Hanford Site Emergency Services Organization to provide assistance in the event of a significant earthquake on the Hanford Site. The Hanford Seismic Network and the Eastern Washington Regional Network consist of 44 individual sensor sites and 15 radio relay sites maintained by the Hanford Seismic Assessment Team. The Hanford Seismic Network recorded 90 local earthquakes during the second quarter of FY 2010. Eighty-one of these earthquakes were detected in the vicinity of Wooded Island, located about eight miles north of Richland just west of the Columbia River. The Wooded Island events recorded this quarter were a continuation of the swarm events observed during the 2009 and 2010 fiscal years and reported in previous quarterly and annual reports (Rohay et al. 2009a, 2009b, 2009c, and 2010). Most of the events were considered minor (coda-length magnitude [Mc] less than 1.0), with only one event in the 2.0-3.0 range; the maximum magnitude event (3.0 Mc) occurred February 4, 2010 at a depth of 2.4 km. The average depth of the Wooded Island events during the quarter was 1.6 km, with a maximum depth estimated at 3.5 km. This placed the Wooded Island events within the Columbia River Basalt Group (CRBG). The low magnitude of the Wooded Island events has made them undetectable to all but local area residents. The Hanford Strong Motion Accelerometer (SMA) network was triggered several times by these events, and the SMA recordings are discussed in section 6.0. During the last year, some Hanford employees working within a few miles of the swarm area and individuals living directly across the Columbia River from the swarm center have reported feeling many of the larger magnitude events. Similar earthquake swarms were recorded near this same location in 1970, 1975, and 1988, but without SMA readings or satellite imagery. Prior to the 1970s, earthquake swarms may have occurred at this location or elsewhere in the Columbia Basin, but equipment was not in place to record those events. The Wooded Island swarm, due to its location and the limited magnitude of the events, does not appear to pose any significant risk to Hanford waste storage facilities. Since swarms of the past did not intensify in magnitude, seismologists do not expect that these events will persist or increase in intensity; however, Pacific Northwest National Laboratory (PNNL) will continue to monitor the activity. Outside of the Wooded Island swarm, nine earthquakes were recorded: seven minor events plus two events with magnitude less than 2.0 Mc. Two earthquakes were located at shallow depths (less than 4 km), three at intermediate depths (between 4 and 9 km), most likely in the pre-basalt sediments, and four at depths greater than 9 km, within the basement. Geographically, six earthquakes were located in known swarm areas and three were classified as random events.

  14. Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries (Open Access)

    DTIC Science & Technology

    2014-09-05

    Raza et al. present a method for depth extraction from videos using geometric context and occlusion boundaries (arXiv:1510.07317v1 [cs.CV], 25 Oct 2015). The recoverable excerpt describes temporal segmentation using the method proposed by Grundmann et al. and triangulation to estimate depth maps.

  15. Multi-decadal analysis of root-zone soil moisture applying the exponential filter across CONUS

    NASA Astrophysics Data System (ADS)

    Tobin, Kenneth J.; Torres, Roberto; Crow, Wade T.; Bennett, Marvin E.

    2017-09-01

    This study applied the exponential filter to produce an estimate of root-zone soil moisture (RZSM). Four types of microwave-based surface satellite soil moisture products were used. The core remotely sensed data for this study came from NASA's long-lasting AMSR-E mission. Additionally, three other products were obtained from the European Space Agency Climate Change Initiative (CCI); these datasets were blended based on all available satellite observations (CCI-active, CCI-passive, and CCI-combined). All of these products have 0.25° resolution and daily sampling. We applied the filter to produce a soil water index (SWI) that others have successfully used to estimate RZSM. The only unknown in this approach was the characteristic time of soil moisture variation (T). We examined five different eras (1997-2002; 2002-2005; 2005-2008; 2008-2011; 2011-2014) that represented periods with different satellite data sensors. SWI values were compared with in situ soil moisture data from the International Soil Moisture Network at depths ranging from 20 to 25 cm. Selected networks included the US Department of Energy Atmospheric Radiation Measurement (ARM) program (25 cm), Soil Climate Analysis Network (SCAN; 20.32 cm), SNOwpack TELemetry (SNOTEL; 20.32 cm), and the US Climate Reference Network (USCRN; 20 cm). We selected in situ stations that had reasonable completeness. These datasets were used to filter out periods with freezing temperatures and rainfall using data from the Parameter elevation Regression on Independent Slopes Model (PRISM). Additionally, we only examined sites where surface and root-zone soil moisture had a reasonably high lagged r value (r > 0.5). The unknown T value was constrained based on two approaches: optimization of root mean square error (RMSE) and calculation based on the normalized difference vegetation index (NDVI) value. Both approaches yielded comparable results, although, as to be expected, the optimization approach generally outperformed the NDVI-based estimates. The best results were noted at stations that had an absolute bias within 10%. SWI estimates were more impacted by the in situ network than by the surface satellite product used to drive the exponential filter. The average Nash-Sutcliffe coefficients (NS) for ARM ranged from -0.1 to 0.3 and were similar to the results obtained from the USCRN network (0.2-0.3). NS values from the SCAN and SNOTEL networks were slightly higher (0.1-0.5). These results indicated that this approach had some skill in providing an estimate of RZSM. In terms of RMSE (in volumetric soil moisture), ARM values actually outperformed those from other networks (0.02-0.04). SCAN and USCRN average RMSE values ranged from 0.04 to 0.06, and SNOTEL average RMSE values were higher (0.05-0.07). These values were close to 0.04, which is the baseline accuracy designated for many satellite soil moisture missions.
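
    The exponential filter itself is compact. A minimal sketch of the commonly used recursive formulation (the Albergel-style gain update is an assumption here, since the record does not spell out the exact variant), with the characteristic time T as the single free parameter:

    ```python
    import numpy as np

    def exponential_filter(t, ssm, T):
        """Recursive exponential filter turning surface soil moisture into a
        root-zone soil water index. t: observation times (days); ssm: surface
        soil moisture; T: characteristic time (days)."""
        swi = np.empty_like(ssm)
        swi[0], gain = ssm[0], 1.0
        for n in range(1, len(ssm)):
            gain = gain / (gain + np.exp(-(t[n] - t[n - 1]) / T))
            swi[n] = swi[n - 1] + gain * (ssm[n] - swi[n - 1])
        return swi

    # T is then constrained by minimizing RMSE against 20-25 cm in situ probes
    # or from NDVI; the series below is hypothetical.
    t = np.arange(0.0, 365.0)                        # daily satellite sampling
    ssm = 0.25 + 0.05 * np.sin(2 * np.pi * t / 60.0)
    rzsm_estimate = exponential_filter(t, ssm, T=15.0)
    ```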

  16. The Effect of Finite Thickness Extent on Estimating Depth to Basement from Aeromagnetic Data

    NASA Astrophysics Data System (ADS)

    Blakely, R. J.; Salem, A.; Green, C. M.; Fairhead, D.; Ravat, D.

    2014-12-01

    Depth-to-basement estimation methods using various components of the spectral content of magnetic anomalies are in common use by geophysicists; examples are the Tilt-Depth and SPI methods. These methods use simple models having the base of the magnetic body at infinity. Recent publications have shown that this 'infinite depth' assumption causes underestimation of the depth to the top of sources, especially in areas where the bottom of the magnetic layer is shallow, as would occur in high heat-flow regions. This error has been demonstrated both in model studies and using real data with seismic or well control. To overcome the limitation of infinite depth, this contribution presents the mathematics for a finite-depth contact body in the Tilt-Depth and SPI methods and applies it to the central Red Sea, where the Curie isotherm and Moho are shallow. The difference in depth estimation between the infinite and finite contacts in such a case is significant and can exceed 200%.
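
    For contrast with the finite-extent mathematics the authors derive, the classical infinite-extent Tilt-Depth rule is easy to state in code: over a vertical contact the tilt angle passes through ±45° at horizontal distances equal to the source depth. A sketch under those textbook assumptions (profile data, reduced to the pole, tilt increasing monotonically across the contact):

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def tilt_angle(total_field, dx):
        """Tilt = arctan(vertical derivative / |horizontal derivative|); the
        vertical derivative is approximated by the Hilbert transform (2D)."""
        dTdx = np.gradient(total_field, dx)
        dTdz = np.imag(hilbert(dTdx))
        return np.arctan2(dTdz, np.abs(dTdx))

    def tilt_depth(x, tilt):
        """Infinite-extent rule: depth = half the distance between the -45
        and +45 degree tilt crossings (tilt must increase with x here)."""
        x_minus = np.interp(-np.pi / 4.0, tilt, x)
        x_plus = np.interp(np.pi / 4.0, tilt, x)
        return 0.5 * (x_plus - x_minus)

    x = np.arange(-1000.0, 1000.0, 10.0)
    synthetic_tilt = np.arctan2(x, 170.0)   # contact at x=0 buried at 170 m
    depth = tilt_depth(x, synthetic_tilt)   # ~170 m
    ```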

  17. Flat Surface Damage Detection System (FSDDS)

    NASA Technical Reports Server (NTRS)

    Williams, Martha; Lewis, Mark; Gibson, Tracy; Lane, John; Medelius, Pedro; Snyder, Sarah; Ciarlariello, Dan; Parks, Steve; Carrejo, Danny; Rojdev, Kristina

    2013-01-01

    The Flat Surface Damage Detection System (FSDDS) is a sensory system capable of detecting impact damage to surfaces utilizing a novel sensor system. It provides the ability to monitor the integrity of an inflatable habitat during in situ system health monitoring. The system consists of three main custom-designed subsystems: the multi-layer sensing panel, the embedded monitoring system, and the graphical user interface (GUI). The GUI LabVIEW software uses a custom damage detection algorithm to determine the damage location based on the sequence of broken sensing lines; it estimates the damage size and maximum depth and plots the damage location on a graph. The system was successfully demonstrated as a stand-alone technology during the 2011 D-RATS field test. A software modification also allowed for communication with the HDU avionics crew display, which was demonstrated remotely (KSC to JSC) during 2012 integration testing. The integrated FSDDS system and stand-alone multi-panel systems were demonstrated remotely and at JSC during Mission Operations Testing using the Space Network Research Federation (SNRF) network in 2012, with FSDDS multi-panel integration with JSC and the SNRF network continuing in FY13. The technology can allow for integration with other complementary damage detection systems.

  18. Preliminary 3D depth migration of a network of 2D seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frary, R.; Louie, J.; Pullammanappallil, S.

    Roxanna Frary, John N. Louie, Sathish Pullammanappallil, Amy Eisses, 2011, Preliminary 3D depth migration of a network of 2D seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect: presented at American Geophysical Union Fall Meeting, San Francisco, Dec. 5-9, abstract T13G-07.

  19. Aerosol optical depth (AOD) and Angstrom exponent of aerosols observed by the Chinese Sun Hazemeter Network from August 2004 to September 2005

    Treesearch

    Jinyuan Xin; Yuesi Wang; Zhanqing Li; Pucai Wang; Wei Min Hao; Bryce L. Nordgren; Shigong Wang; Guangren Lui; Lili Wang; Tianxue Wen; Yang Sun; Bo Hu

    2007-01-01

    To reduce uncertainties in the quantitative assessment of aerosol effects on regional climate and environmental changes, extensive measurements of aerosol optical properties were made with handheld Sun photometers in the Chinese Sun Hazemeter Network (CSHNET) starting in August 2004. Regional characteristics of the aerosol optical depth (AOD) at 500 nm and Angstrom...

  20. Accurate estimation of human body orientation from RGB-D sensors.

    PubMed

    Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao

    2013-10-01

    Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes, and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues independently, we propose to utilize a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of both static and motion cues. To verify the proposed method, we built an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.

  1. Remote sensing of atmospheric optical depth using a smartphone sun photometer.

    PubMed

    Cao, Tingting; Thompson, Jonathan E

    2014-01-01

    In recent years, smartphones have been explored for making a variety of mobile measurements. Smartphones feature many advanced sensors, such as cameras, GPS capability, and accelerometers, within a handheld device that is portable, inexpensive, and consistently located with an end user. In this work, a smartphone was used as a sun photometer for the remote sensing of atmospheric optical depth. The top-of-the-atmosphere (TOA) irradiance was estimated through the construction of Langley plots on days when the sky was cloudless and clear. Changes in optical depth were monitored on a different day when clouds intermittently blocked the sun. The device demonstrated a measurement precision of 1.2% relative standard deviation for replicate photograph measurements (38 trials, 134 data points). However, when the accuracy of the method was assessed using optical filters of known transmittance, a more substantial uncertainty was apparent in the data: roughly 95% of replicate smartphone-measured transmittances are expected to lie within ±11.6% of the true transmittance value. This uncertainty in transmission corresponds to an optical depth of approximately ±0.12-0.13, suggesting the smartphone sun photometer would be useful only in polluted areas that experience significant optical depths. The device can be used as a tool in the classroom to present how aerosols and gases affect atmospheric transmission. If improvements in measurement precision can be achieved, future work may allow monitoring networks to be developed in which citizen scientists submit acquired data from a variety of locations.
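
    The quoted ±0.12-0.13 optical-depth uncertainty follows directly from propagating the ±11.6% transmittance uncertainty through the Beer-Lambert law; the airmass of 1 below is an illustrative choice:

    ```python
    import numpy as np

    # Beer-Lambert: T = I/I0 = exp(-tau*m), so tau = -ln(T)/m.
    # Propagating a relative transmittance error dT/T gives dtau = (dT/T)/m.
    def optical_depth(transmittance, airmass):
        return -np.log(transmittance) / airmass

    def od_uncertainty(rel_transmittance_err, airmass):
        return rel_transmittance_err / airmass

    print(od_uncertainty(0.116, 1.0))   # ~0.12, as quoted above
    ```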

  2. Ship localization in Santa Barbara Channel using machine learning classifiers.

    PubMed

    Niu, Haiqiang; Ozanich, Emma; Gerstoft, Peter

    2017-11-01

    Machine learning classifiers are shown to outperform conventional matched field processing for a deep water (600 m depth) ocean acoustic-based ship range estimation problem in the Santa Barbara Channel Experiment when limited environmental information is known. Recordings of three different ships of opportunity on a vertical array were used as training and test data for the feed-forward neural network and support vector machine classifiers, demonstrating the feasibility of machine learning methods to locate unseen sources. The classifiers perform well up to 10 km range whereas the conventional matched field processing fails at about 4 km range without accurate environmental information.

  3. Node Deployment Algorithm Based on Connected Tree for Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Wang, Xingmin; Jiang, Lurong

    2015-01-01

    Designing an efficient deployment method to guarantee optimal monitoring quality is one of the key topics in underwater sensor networks. At present, a realistic approach to deployment involves adjusting the depths of nodes in the water. One of the typical algorithms used in such a process is the self-deployment depth adjustment algorithm (SDDA). This algorithm mainly focuses on maximizing network coverage by constantly adjusting node depths to reduce coverage overlaps between neighboring nodes, and thus achieves good performance. However, the connectivity performance of SDDA is not guaranteed. In this paper, we propose a depth adjustment algorithm based on a connected tree (CTDA). In CTDA, the sink node is used as the first root node to start building a connected tree, so that the network can be organized as a forest to maintain network connectivity. Coverage overlaps between parent and child nodes are then reduced within each sub-tree to optimize coverage. A hierarchical strategy is used to adjust the distance between parent and child nodes to reduce node movement, and a silent mode is adopted to reduce communication cost. Simulations show that, compared with SDDA, CTDA can achieve high connectivity for various communication ranges and numbers of nodes, and can realize coverage as high as that of SDDA for various sensing ranges and numbers of nodes, but with less energy consumption. Simulations under sparse environments show that the connectivity and energy consumption of CTDA are considerably better than those of SDDA. Meanwhile, the connectivity and coverage of CTDA are close to those of depth adjustment algorithms based on a connected dominating set (CDA), an algorithm similar to CTDA; however, the energy consumption of CTDA is less than that of CDA, particularly in sparse underwater environments. PMID:26184209

  4. Anomalies of rupture velocity in deep earthquakes

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Yagi, Y.

    2010-12-01

    Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes remains at a low level until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in compilations of seismic source models of deep earthquakes, the source parameters for individual events are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9Vs, where Vs is the shear wave velocity, a considerably wider range than for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive detailed and stable seismic source images from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarded parameters. We applied the back projection method to teleseismic P-waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for this set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6Vs except in the depth range of 530 to 600 km. This is consistent with the depth variation of deep seismicity: it peaks between about 530 and 600 km, where the fast-rupture earthquakes (greater than 0.7Vs) are observed. Similarly, aftershock productivity is particularly low from 300 to 550 km depth and increases markedly at depths greater than 550 km [e.g., Persh and Houston, 2004]. We propose that large fracture surface energy (Gc) values for deep earthquakes generally prevent the acceleration of dynamic rupture propagation and the generation of earthquakes between 300 and 700 km depth, whereas small Gc values in the exceptional depth range promote dynamic rupture propagation and explain the seismicity peak near 600 km.
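
    The back projection method referenced here is, at its core, delay-and-sum beamforming of teleseismic P waveforms onto a grid of candidate source cells. A much-simplified sketch (grid handling, travel-time computation, and trace alignment are all assumptions of this toy version):

    ```python
    import numpy as np

    def back_project(traces, dt, travel_times):
        """Shift-and-stack back projection. traces: (n_sta, n_samp) array;
        travel_times: (n_sta, n_cells) predicted P times in seconds. Bright
        cells image the rupture; np.roll wraparound is ignored in this toy."""
        power = np.zeros(travel_times.shape[1])
        for c in range(travel_times.shape[1]):
            shifts = np.round(travel_times[:, c] / dt).astype(int)
            stack = sum(np.roll(tr, -s) for tr, s in zip(traces, shifts))
            power[c] = np.sum(stack ** 2)
        return power
    ```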

  5. Stereoscopic perception of real depths at large distances.

    PubMed

    Palmisano, Stephen; Gillam, Barbara; Govan, Donovan G; Allison, Robert S; Harris, Julie M

    2010-06-01

    There has been no direct examination of stereoscopic depth perception at very large observation distances and depths. We measured perceptions of depth magnitude at distances where it is frequently reported, without evidence, that stereopsis is non-functional. We adapted methods pioneered at distances up to 9 m by R. S. Allison, B. J. Gillam, and E. Vecellio (2009) for use in a 381-m-long railway tunnel. Pairs of light-emitting diode (LED) targets were presented either in complete darkness or with the environment lit as far as the nearest LED (the observation distance). We found that binocular, but not monocular, estimates of the depth between pairs of LEDs increased with their physical depths up to the maximum depth separation tested (248 m). Binocular estimates of depth were much larger with a lit foreground than in darkness and increased as the observation distance increased from 20 to 40 m, indicating that binocular disparity can be scaled for much larger distances than previously realized. Since these observation distances were well beyond the range of vertical disparity and oculomotor cues, this scaling must rely on perspective cues. We also ran control experiments at smaller distances, which showed that estimates of depth and distance correlate poorly and that our metric estimation method gives similar results to a comparison method under the same conditions.

  6. Convolution neural networks for real-time needle detection and localization in 2D ultrasound.

    PubMed

    Mwikirize, Cosmas; Nosher, John L; Hacihaliloglu, Ilker

    2018-05-01

    We propose a framework for automatic and accurate detection of steeply inserted needles in 2D ultrasound data using convolution neural networks. We demonstrate its application in needle trajectory estimation and tip localization. Our approach consists of a unified network, comprising a fully convolutional network (FCN) and a fast region-based convolutional neural network (R-CNN). The FCN proposes candidate regions, which are then fed to a fast R-CNN for finer needle detection. We leverage a transfer learning paradigm, where the network weights are initialized by training with non-medical images, and fine-tuned with ex vivo ultrasound scans collected during insertion of a 17G epidural needle into freshly excised porcine and bovine tissue at depth settings up to 9 cm and [Formula: see text]-[Formula: see text] insertion angles. Needle detection results are used to accurately estimate needle trajectory from intensity invariant needle features and perform needle tip localization from an intensity search along the needle trajectory. Our needle detection model was trained and validated on 2500 ex vivo ultrasound scans. The detection system has a frame rate of 25 fps on a GPU and achieves 99.6% precision, 99.78% recall rate and an [Formula: see text] score of 0.99. Validation for needle localization was performed on 400 scans collected using a different imaging platform, over a bovine/porcine lumbosacral spine phantom. Shaft localization error of [Formula: see text], tip localization error of [Formula: see text] mm, and a total processing time of 0.58 s were achieved. The proposed method is fully automatic and provides robust needle localization results in challenging scanning conditions. The accurate and robust results coupled with real-time detection and sub-second total processing make the proposed method promising in applications for needle detection and localization during challenging minimally invasive ultrasound-guided procedures.
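
    The final tip-localization step, an intensity search along the detected trajectory, can be sketched independently of the CNN stages. The walk below is a hypothetical stand-in; the entry point, direction, and threshold are assumed to come from the detector:

    ```python
    import numpy as np

    def tip_along_trajectory(image, p0, direction, step=1.0, thresh=0.5):
        """Walk along the needle trajectory from point p0 (row, col) and
        return the last position whose normalized intensity exceeds thresh."""
        direction = np.asarray(direction, float)
        direction /= np.linalg.norm(direction)
        pos, tip = np.asarray(p0, float), None
        rows, cols = image.shape
        while 0 <= pos[0] < rows and 0 <= pos[1] < cols:
            if image[int(pos[0]), int(pos[1])] > thresh:
                tip = pos.copy()
            pos += step * direction
        return tip
    ```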

  7. Sedimentary basins reconnaissance using the magnetic Tilt-Depth method

    USGS Publications Warehouse

    Salem, A.; Williams, S.; Samson, E.; Fairhead, D.; Ravat, D.; Blakely, R.J.

    2010-01-01

    We compute the depth to the top of magnetic basement using the Tilt-Depth method from the best available magnetic anomaly grids covering the continental USA and Australia. For the USA, the Tilt-Depth estimates were compared with sediment thicknesses based on drilling data and show a correlation of 0.86 between the datasets. If random data were used, the correlation drops to virtually zero. There is little to no lateral offset of the depth of basinal features, although there is a tendency for the Tilt-Depth results to be slightly shallower than the drill depths. We also applied the Tilt-Depth method to a local-scale, relatively high-resolution aeromagnetic survey over the Olympic Peninsula of Washington State. The Tilt-Depth method successfully identified a variety of important tectonic elements known from geological mapping. Of particular interest, the Tilt-Depth method illuminated deep (3 km) contacts within the non-magnetic sedimentary core of the Olympic Mountains, where magnetic anomalies are subdued and low in amplitude. For Australia, the Tilt-Depth estimates also give a good correlation with known areas of shallow basement and sedimentary basins. Our estimates of basement depth are not restricted to regional analysis but work equally well at the basin scale, with depth estimates agreeing well with drill hole and seismic data. We focus on the eastern Officer Basin as an example of a basin-scale study and find a good level of agreement with previously derived basin models. However, our study potentially reveals depocentres not previously mapped due to the sparse distribution of well data. This example thus shows the potential additional advantage of the method in geological interpretation. The success of this study suggests that the Tilt-Depth method is useful in estimating the depth to crystalline basement when appropriate-quality aeromagnetic anomaly data are used (i.e., line spacing on the order of, or less than, the expected depth to basement). The method is especially valuable as a reconnaissance tool in regions where drillhole or seismic information is either scarce, lacking, or ambiguous.

  8. Broadband records of earthquakes in deep gold mines and a comparison with results from SAFOD, California

    USGS Publications Warehouse

    McGarr, Arthur F.; Boettcher, M.; Fletcher, Jon Peter B.; Sell, Russell; Johnston, Malcolm J.; Durrheim, R.; Spottiswoode, S.; Milev, A.

    2009-01-01

    For one week during September 2007, we deployed a temporary network of field recorders and accelerometers at four sites within two deep, seismically active mines. The ground-motion data, recorded at 200 samples/sec, are well suited to determining source and ground-motion parameters for the mining-induced earthquakes within and adjacent to our network. Four earthquakes with magnitudes close to 2 were recorded with high signal/noise at all four sites. Analysis of seismic moments and peak velocities, in conjunction with the results of laboratory stick-slip friction experiments, was used to estimate source processes that are key to understanding source physics and to assessing underground seismic hazard. The maximum displacements on the rupture surfaces can be estimated from the parameter Rv, where v is the peak ground velocity at a given recording site and R is the hypocentral distance. For each earthquake, the maximum slip and seismic moment can be combined with results from laboratory friction experiments to estimate the maximum slip rate within the rupture zone. Analysis of the four M 2 earthquakes recorded during our deployment, and of one of special interest recorded by the in-mine seismic network in 2004, revealed maximum slips ranging from 4 to 27 mm and maximum slip rates from 1.1 to 6.3 m/sec. Applying the same analyses to an M 2.1 earthquake within a cluster of repeating earthquakes near the San Andreas Fault Observatory at Depth site, California, yielded similar results for maximum slip and slip rate: 14 mm and 4.0 m/sec.

  9. ANN Surface Roughness Optimization of AZ61 Magnesium Alloy Finish Turning: Minimum Machining Times at Prime Machining Costs

    PubMed Central

    Erdakov, Ivan Nikolaevich; Taha, Mohamed Adel; Soliman, Mahmoud Sayed; El Rayes, Magdy Mostafa

    2018-01-01

    Magnesium alloys are widely used in aerospace vehicles and modern cars, due to their rapid machinability at high cutting speeds. A novel Edgeworth–Pareto optimization of an artificial neural network (ANN) is presented in this paper for surface roughness (Ra) prediction of one component in computer numerical control (CNC) turning over minimal machining time (Tm) and at prime machining costs (C). An ANN is built in the Matlab programming environment, based on a 4-12-3 multi-layer perceptron (MLP), to predict Ra, Tm, and C, in relation to cutting speed, vc, depth of cut, ap, and feed per revolution, fr. For the first time, a profile of an AZ61 alloy workpiece after finish turning is constructed using an ANN for the range of experimental values vc, ap, and fr. The global minimum length of a three-dimensional estimation vector was defined with the following coordinates: Ra = 0.087 μm, Tm = 0.358 min/cm3, C = $8.2973. Likewise, the corresponding finish-turning parameters were also estimated: cutting speed vc = 250 m/min, cutting depth ap = 1.0 mm, and feed per revolution fr = 0.08 mm/rev. The ANN model achieved a reliable prediction accuracy of ±1.35% for surface roughness. PMID:29772670
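
    A network of the reported 4-12-3 shape is straightforward to reproduce with standard tooling. The sketch below uses scikit-learn with one 12-neuron hidden layer and synthetic trends standing in for the experimental data; note that a 4-12-3 topology implies a fourth input the abstract does not name, so only the three listed parameters are used here:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical training set: inputs are cutting speed vc (m/min), depth of
    # cut ap (mm), feed fr (mm/rev); outputs are Ra (um), Tm (min/cm3), C ($).
    rng = np.random.default_rng(0)
    X = rng.uniform([100.0, 0.2, 0.04], [250.0, 1.0, 0.12], size=(60, 3))
    y = np.column_stack([
        0.05 + 0.5 * X[:, 2] + 0.02 * X[:, 1],   # synthetic Ra trend
        80.0 / (X[:, 0] * X[:, 1] * X[:, 2]),    # synthetic Tm trend
        5.0 + 400.0 / X[:, 0],                   # synthetic C trend
    ])
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000,
                                       random_state=0))
    model.fit(X, y)
    ra, tm, c = model.predict([[250.0, 1.0, 0.08]])[0]
    ```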

  10. Towards Guided Underwater Survey Using Light Visual Odometry

    NASA Astrophysics Data System (ADS)

    Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.

    2017-02-01

    A light distributed visual odometry method adapted to an embedded hardware platform is proposed, with the aim of guiding underwater surveys in real time. We rely on an image stream captured using a portable stereo rig attached to the embedded system. Captured images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit to compute the odometry. Relying on a standard ego-motion estimation approach, we speed up point matching between image quadruplets using a low-level point-matching scheme based on the fast Harris operator and template matching that is invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate an a priori rough depth belief, following the law of light divergence over distance. The rough depth is used to limit the point-correspondence search zone, as it depends linearly on disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in computation time with respect to other approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer vision techniques.

  11. Induced dynamic nonlinear ground response at Garner Valley, California

    USGS Publications Warehouse

    Lawrence, Z.; Bodin, P.; Langston, C.A.; Pearce, F.; Gomberg, J.; Johnson, P.A.; Menq, F.-Y.; Brackman, T.

    2008-01-01

    We present results from a prototype experiment in which we actively induce, observe, and quantify in situ nonlinear sediment response in the near surface. This experiment was part of a suite of experiments conducted during August 2004 in Garner Valley, California, using a large mobile shaker truck from the Network for Earthquake Engineering Simulation (NEES) facility. We deployed a dense accelerometer array within meters of the mobile shaker truck to replicate a controlled, laboratory-style soil dynamics experiment in order to observe wave-amplitude-dependent sediment properties. Ground motion exceeding 1g acceleration was produced near the shaker truck. The wave field was dominated by Rayleigh surface waves, and ground motions were strong enough to produce observable nonlinear changes in wave velocity. We found that as the force load of the shaker increased, the Rayleigh-wave phase velocity decreased by as much as approximately 30% at the highest frequencies used (up to 30 Hz). Phase velocity dispersion curves were inverted for S-wave velocity as a function of depth using a simple isotropic elastic model to estimate the depth dependence of changes to the velocity structure. The greatest change in velocity occurred nearest the surface, within the upper 4 m. These estimated S-wave velocity values were used with estimates of surface strain for comparison with laboratory-based shear modulus reduction measurements from the same site. Our results suggest that it may be possible to characterize nonlinear soil properties in situ using a noninvasive field technique.

  12. Comparison of artificial intelligence techniques for prediction of soil temperatures in Turkey

    NASA Astrophysics Data System (ADS)

    Citakoglu, Hatice

    2017-10-01

    Soil temperature is a meteorological variable that directly affects the formation and development of plants of all kinds. Soil temperatures are usually estimated with various models, including artificial neural networks (ANNs), the adaptive neuro-fuzzy inference system (ANFIS), and multiple linear regression (MLR) models. Soil temperatures, along with other climate data, are recorded by the Turkish State Meteorological Service (MGM) at specific locations all over Turkey, commonly at depths of 5, 10, 20, 50, and 100 cm below the soil surface. In this study, monthly soil temperature data measured at 261 stations in Turkey having records of at least 20 years were used to develop the models. Different input combinations were tested in the ANN and ANFIS models to estimate soil temperatures, and the best combination of significant explanatory variables turned out to be monthly minimum and maximum air temperatures, calendar month number, depth of soil, and monthly precipitation. Three standard error terms (mean absolute error (MAE, °C), root mean squared error (RMSE, °C), and the coefficient of determination (R2)) were employed to check the reliability of the test results obtained through the ANN, ANFIS, and MLR models. ANFIS (RMSE 1.99; MAE 1.09; R2 0.98) is found to outperform both ANN and MLR (RMSE 5.80, 8.89; MAE 1.89, 2.36; R2 0.93, 0.91) in estimating soil temperature in Turkey.
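
    Since conventions for these error terms vary, a minimal sketch of the definitions assumed here:

    ```python
    import numpy as np

    def mae(obs, sim):
        return np.mean(np.abs(sim - obs))

    def rmse(obs, sim):
        return np.sqrt(np.mean((sim - obs) ** 2))

    def r2(obs, sim):
        """Coefficient of determination: 1 - SS_res / SS_tot."""
        ss_res = np.sum((obs - sim) ** 2)
        ss_tot = np.sum((obs - np.mean(obs)) ** 2)
        return 1.0 - ss_res / ss_tot
    ```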

  13. Towards a first ground-based validation of aerosol optical depths from Sentinel-2 over the complex topography of the Alps

    NASA Astrophysics Data System (ADS)

    Marinelli, Valerio; Cremonese, Edoardo; Diémoz, Henri; Siani, Anna Maria

    2017-04-01

    The European Space Agency (ESA) is devoting considerable effort to putting into operation a new generation of advanced Earth-observation satellites, the Sentinel constellation. In particular, Sentinel-2 hosts an instrumental payload mainly consisting of a MultiSpectral Instrument (MSI) imaging sensor, capable of acquiring high-resolution imagery of Earth-surface and atmospheric reflectance in selected spectral bands, hence providing measurements complementary to ground-based radiometric stations. The latter can provide reference data for validating the estimates from spaceborne instruments such as Sentinel-2A (operating since October 2015), whose aerosol optical thickness (AOT) values can be obtained by correcting SWIR (2190 nm) reflectance with an improved dense dark vegetation (DDV) algorithm. In the northwestern European Alps (Saint-Christophe, 45.74°N, 7.36°E), a Prede POM-02 sun/sky aerosol photometer has been operated for several years within the EuroSkyRad network by the Environmental Protection Agency of Aosta Valley (ARPA Valle d'Aosta), gathering direct sun and diffuse sky radiances for retrieving columnar aerosol optical properties. This aerosol optical depth (AOD) dataset represents an optimal ground truth for the corresponding Sentinel-2 estimates obtained with the Sen2Cor processor in the challenging environment of the Alps (complex topography, snow-covered surfaces). We show the deviations between the two measurement series and propose some corrections to enhance the overall accuracy of the satellite estimates.

  14. A Spectrally Selective Attenuation Mechanism-Based Kpar Algorithm for Biomass Heating Effect Simulation in the Open Ocean

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Zhang, Xiangguang; Xing, Xiaogang; Ishizaka, Joji; Yu, Zhifeng

    2017-12-01

    Quantifying the diffuse attenuation coefficient of photosynthetically available radiation (Kpar) can improve our knowledge of euphotic depth (Zeu) and biomass heating effects in the upper layers of oceans. An algorithm to semianalytically derive Kpar from remote sensing reflectance (Rrs) is developed for the global open oceans. This algorithm comprises two parts: (1) a neural network model for deriving the diffuse attenuation coefficient (Kd) that accounts for the residual error in satellite Rrs, and (2) a three-band, depth-dependent Kpar algorithm (TDKA) describing the spectrally selective attenuation of underwater solar radiation in the open oceans. The algorithm is evaluated with both in situ PAR profile data and satellite images, and the results show that it produces acceptable PAR profile estimations while clearly removing the impact of satellite residual errors on Kpar estimations. Furthermore, the performance of the TDKA algorithm is evaluated by its applicability to Zeu derivation and to simulation of the mean temperature within the mixed layer depth (TML), and the results show that it can significantly decrease the uncertainty in both compared with the classical chlorophyll-a concentration-based Kpar algorithm. Finally, the TDKA algorithm is applied to simulating biomass heating effects in the Sargasso Sea near Bermuda; with the new Kpar data it is found that biomass heating effects can lead to a maximum positive temperature difference of 3.4°C in the upper layers but a maximum negative difference of 0.67°C in the deep layers.
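
    The link between Kpar and the euphotic depth Zeu is the conventional 1% light level: PAR(z) = PAR(0) * exp(-integral of Kpar dz), so Zeu is where the accumulated optical depth reaches ln(100). A small sketch with a hypothetical Kpar profile (for constant Kpar this reduces to Zeu = ln(100)/Kpar):

    ```python
    import numpy as np

    def euphotic_depth(z, kpar_profile):
        """Depth where PAR falls to 1% of its surface value, given a
        depth-dependent Kpar profile (1/m) on the grid z (m)."""
        optical_depth = np.cumsum(kpar_profile * np.gradient(z))
        idx = np.searchsorted(optical_depth, np.log(100.0))
        return z[min(idx, len(z) - 1)]

    z = np.linspace(0.0, 200.0, 2001)
    kpar = 0.04 + 0.02 * np.exp(-z / 30.0)   # hypothetical open-ocean profile
    zeu = euphotic_depth(z, kpar)            # ~100 m for this profile
    ```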

  15. Dissipative Intraplate Faulting During the 2016 Mw 6.2 Tottori, Japan Earthquake

    NASA Astrophysics Data System (ADS)

    Ross, Zachary E.; Kanamori, Hiroo; Hauksson, Egill; Aso, Naofumi

    2018-02-01

    The 2016 Mw 6.2 Tottori earthquake occurred on 21 October 2016 and produced thousands of aftershocks. Here we analyze high-resolution-relocated seismicity together with source properties of the mainshock to better understand the rupture process and energy budget. We use a matched-filter algorithm to detect and precisely locate >10,000 previously unidentified aftershocks, which delineate a network of sharp subparallel lineations exhibiting significant branching and segmentation. Seismicity below 8 km depth forms highly localized fault structures subparallel to the mainshock strike. Shallow seismicity near the main rupture plane forms more diffuse clusters and lineations that often are at a high angle (in map view) to the mainshock strike. An empirical Green's function technique is used to derive apparent source time functions for the mainshock, which show a large amplitude pulse 2-4 s long. We invert the apparent source time functions for a slip distribution and observe a 16 km2 patch with average slip 3.2 m. 93% of the seismic moment is below 8 km depth, which is approximately the depth below which the seismicity becomes very localized. These observations suggest that the mainshock rupture area was entirely within the lower half of the seismogenic zone. The radiated seismic energy is estimated to be 5.7 × 1013 J, while the static stress drop is estimated to be 18-27 MPa. These values yield a radiation efficiency of 5-7%, which indicates that the Tottori mainshock was extremely dissipative. We conclude that this inefficiency in energy radiation is likely a product of the immature intraplate environment and the underlying geometric complexity.

  16. Neural network approach to the inverse problem of the crack-depth determination from ultrasonic backscattering data

    NASA Astrophysics Data System (ADS)

    Takadoya, M.; Notake, M.; Kitahara, M.; Achenbach, J. D.; Guo, Q. C.; Peterson, M. L.

    A neural network approach has been developed to determine the depth of a surface-breaking crack in a steel plate from ultrasonic backscattering data. The network is trained by the use of a feedforward three-layered network together with a back-propagation algorithm for error correction. Synthetic data are employed for network training. The signal used for crack insonification is a mode-converted 45° transverse wave. The plate with a surface-breaking crack is immersed in water, and the crack is insonified from the opposite, uncracked side of the plate. A numerical analysis of the backscattered field is carried out based on elastic wave theory by use of the boundary element method; this analysis provides the synthetic data for training the network. The training data have been calculated for cracks at specific increments of crack depth, and the trained network is then tested on experimental data, which differ from the training data.
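
    A feedforward three-layer network with back-propagation, as described, fits in a few lines of numpy. This is a generic sketch; the layer sizes, learning rate, and synthetic feature-to-depth mapping are illustrative assumptions, not the paper's setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(200, 8))           # stand-in waveform feature vectors
    d = X @ rng.uniform(size=8) / 8.0        # stand-in normalized crack depths
    W1, b1 = rng.normal(0.0, 0.5, (6, 8)), np.zeros(6)
    W2, b2 = rng.normal(0.0, 0.5, 6), 0.0
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

    for epoch in range(500):
        for x, target in zip(X, d):
            h = sigmoid(W1 @ x + b1)             # hidden-layer activations
            y = sigmoid(W2 @ h + b2)             # predicted normalized depth
            d_out = (y - target) * y * (1 - y)   # output delta (squared error)
            d_hid = d_out * W2 * h * (1 - h)     # back-propagated hidden deltas
            W2 -= 0.5 * d_out * h;  b2 -= 0.5 * d_out
            W1 -= 0.5 * np.outer(d_hid, x);  b1 -= 0.5 * d_hid
    ```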

  17. Modeling the ratio of photosynthetically active radiation to broadband global solar radiation using ground and satellite-based data in the tropics

    NASA Astrophysics Data System (ADS)

    Janjai, S.; Wattan, R.; Sripradit, A.

    2015-12-01

    Data from four stations in Thailand are used to model the ratio of photosynthetically active radiation (PAR) to broadband global solar radiation. The model expresses the ratio of PAR to broadband global solar radiation as a function of cloud index, aerosol optical depth, precipitable water, total ozone column, and solar zenith angle. Data from the MTSAT-1R and OMI/AURA satellites are used to estimate the cloud index and total ozone column, respectively, at each of the four stations, while aerosol optical depth and precipitable water are retrieved from Aerosol Robotic Network (AERONET) sunphotometer measurements, also available at each station. When tested against hourly measurements, the model exhibits a coefficient of determination (R2) equal to or better than 0.96, a root mean square difference (RMSD) in the range of 7.3-7.9%, and a mean bias difference (MBD) of -4.5% to 3.5%. The model compares favorably with other existing models.

  18. Soil specific re-calibration of water content sensors for a field-scale sensor network

    NASA Astrophysics Data System (ADS)

    Gasch, Caley K.; Brown, David J.; Anderson, Todd; Brooks, Erin S.; Yourek, Matt A.

    2015-04-01

    Obtaining accurate soil moisture data from a sensor network requires sensor calibration. Soil moisture sensors are factory calibrated, but multiple site specific factors may contribute to sensor inaccuracies. Thus, sensors should be calibrated for the specific soil type and conditions in which they will be installed. Lab calibration of a large number of sensors prior to installation in a heterogeneous setting may not be feasible, and it may not reflect the actual performance of the installed sensor. We investigated a multi-step approach to retroactively re-calibrate sensor water content data from the dielectric permittivity readings obtained by sensors in the field. We used water content data collected since 2009 from a sensor network installed at 42 locations and 5 depths (210 sensors total) within the 37-ha Cook Agronomy Farm with highly variable soils located in the Palouse region of the Northwest United States. First, volumetric water content was calculated from sensor dielectric readings using three equations: (1) a factory calibration using the Topp equation; (2) a custom calibration obtained empirically from an instrumented soil in the field; and (3) a hybrid equation that combines the Topp and custom equations. Second, we used soil physical properties (particle size and bulk density) and pedotransfer functions to estimate water content at saturation, field capacity, and wilting point for each installation location and depth. We also extracted the same reference points from the sensor readings, when available. Using these reference points, we re-scaled the sensor readings, such that water content was restricted to the range of values that we would expect given the physical properties of the soil. The re-calibration accuracy was assessed with volumetric water content measurements obtained from field-sampled cores taken on multiple dates. In general, the re-calibration was most accurate when all three reference points (saturation, field capacity, and wilting point) were represented in the sensor readings. We anticipate that obtaining water retention curves for field soils will improve the re-calibration accuracy by providing more precise estimates of saturation, field capacity, and wilting point. This approach may serve as an alternative method for sensor calibration in lieu of or to complement pre-installation calibration.
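
    One plausible reading of the re-scaling step is a piecewise-linear map that pins the sensor's own wilting-point, field-capacity, and saturation readings to the pedotransfer-derived values. A sketch under that assumption, with all numbers hypothetical:

    ```python
    import numpy as np

    def rescale_vwc(sensor_vwc, sensor_refs, soil_refs):
        """Piecewise-linear re-scaling of sensor volumetric water content using
        matched reference points (wilting point, field capacity, saturation);
        values outside the reference range are clamped to the endpoints."""
        return np.interp(sensor_vwc, sensor_refs, soil_refs)

    # One sensor at one depth (m3/m3): reference points as seen by the sensor
    # versus those estimated from particle size and bulk density.
    sensor_refs = [0.08, 0.24, 0.38]
    soil_refs = [0.12, 0.30, 0.45]
    corrected = rescale_vwc(np.array([0.10, 0.20, 0.33]), sensor_refs, soil_refs)
    ```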

  19. Retrieval of aerosol optical depth from surface solar radiation measurements using machine learning algorithms, non-linear regression and a radiative transfer-based look-up table

    NASA Astrophysics Data System (ADS)

    Huttunen, Jani; Kokkola, Harri; Mielonen, Tero; Esa Juhani Mononen, Mika; Lipponen, Antti; Reunanen, Juha; Vilhelm Lindfors, Anders; Mikkonen, Santtu; Erkki Juhani Lehtinen, Kari; Kouremeti, Natalia; Bais, Alkiviadis; Niska, Harri; Arola, Antti

    2016-07-01

    In order to have a good estimate of the current forcing by anthropogenic aerosols, knowledge of past aerosol levels is needed. Aerosol optical depth (AOD) is a good measure of aerosol loading. However, dedicated measurements of AOD are only available from the 1990s onward. One option to lengthen the AOD time series beyond the 1990s is to retrieve AOD from surface solar radiation (SSR) measurements taken with pyranometers. In this work, we have evaluated several inversion methods designed for this task. We compared a look-up table method based on radiative transfer modelling, a non-linear regression method and four machine learning methods (Gaussian process, neural network, random forest and support vector machine) with AOD observations carried out with a sun photometer at an Aerosol Robotic Network (AERONET) site in Thessaloniki, Greece. Our results show that most of the machine learning methods produce AOD estimates comparable to the look-up table and non-linear regression methods. All of the applied methods produced AOD values that corresponded well to the AERONET observations, with the lowest correlation coefficient value being 0.87 for the random forest method. While many of the methods tended to slightly overestimate low AODs and underestimate high AODs, the neural network and support vector machine showed overall better correspondence for the whole AOD range. The differences in producing both ends of the AOD range seem to be caused by differences in aerosol composition. High AODs were in most cases those with high water vapour content, which might affect the aerosol single scattering albedo (SSA) through uptake of water into aerosols. Our study indicates that machine learning methods benefit from the fact that they do not constrain the aerosol SSA in the retrieval, whereas the LUT method assumes a constant value for it. This would also mean that machine learning methods could have potential in reproducing AOD from SSR even if the SSA changed during the observation period.
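
    As a sketch of the machine-learning branch of this comparison, the snippet below trains a random forest to map SSR-derived predictors to AOD; the predictor set and the synthetic SSR-AOD relation are assumptions for illustration, not the study's feature set.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    # Hypothetical predictors: SSR (W/m2), solar zenith angle (deg), water vapour.
    ssr = rng.uniform(200, 1000, n)
    sza = rng.uniform(10, 80, n)
    wv = rng.uniform(0.5, 4.0, n)
    # Synthetic target: AOD decreasing with SSR, plus noise (illustrative only).
    aod = np.clip(1.2 - ssr / 1000 + 0.05 * wv
                  + 0.05 * rng.standard_normal(n), 0.02, None)

    X = np.column_stack([ssr, sza, wv])
    X_tr, X_te, y_tr, y_te = train_test_split(X, aod, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("correlation:", np.corrcoef(model.predict(X_te), y_te)[0, 1])
    ```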

  20. Crustal strain near the Big Bend of the San Andreas Fault: Analysis of the Los Padres-Tehachapi Trilateration Networks, California

    NASA Astrophysics Data System (ADS)

    Eberhart-Phillips, Donna; Lisowski, Michael; Zoback, Mark D.

    1990-02-01

    In the region of the Los Padres-Tehachapi geodetic network, the San Andreas fault (SAF) changes its orientation by over 30° from N40°W, close to that predicted by plate motion for a transform boundary, to N73°W. The strain orientation near the SAF is consistent with right-lateral shear along the fault, with maximum shear rate of 0.38±0.01 μrad/yr at N63°W. In contrast, away from the SAF the strain orientations on both sides of the fault are consistent with the plate motion direction, with maximum shear rate of 0.19±0.01 μrad/yr at N44°W. The strain rate does not drop off rapidly away from the fault, and thus the area is fit by either a broad shear zone below the SAF or a single fault with a relatively deep locking depth. The fit to the line length data is poor for locking depth d less than 25 km. For d of 25 km, a buried slip rate of 30 ± 6 mm/yr is estimated. We also estimated buried slip for models that included the Garlock and Big Pine faults, in addition to the SAF. Slip rates on other faults are poorly constrained by the Los Padres-Tehachapi network. The best-fitting Garlock fault model had a computed left-lateral slip rate of 11±2 mm/yr below 10 km. Buried left-lateral slip of 15±6 mm/yr on the Big Pine fault, within the Western Transverse Ranges, provides significant reduction in line length residuals; however, deformation there may be more complicated than a single vertical fault. A subhorizontal detachment on the southern side of the SAF cannot be well constrained by these data. We investigated the location of the SAF and found that a vertical fault below the surface trace fits the data much better than either a dipping fault or a fault zone located south of the surface trace.
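
    The deep-locking interpretation can be illustrated with the classic screw-dislocation model of interseismic strain (Savage and Burford, 1973), a simpler stand-in for the line-length inversion actually used here; the numbers below echo the abstract's 30 mm/yr and 25 km values.

    ```python
    import numpy as np

    def interseismic_velocity(x_km, slip_mm_yr, locking_depth_km):
        """Fault-parallel surface velocity for a screw dislocation slipping
        steadily below a locked layer (Savage & Burford, 1973):
            v(x) = (s / pi) * arctan(x / d)."""
        return (slip_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

    x = np.linspace(-100, 100, 9)     # distance from the fault trace, km
    # Illustrative values from the abstract: 30 mm/yr of slip below 25 km.
    print(np.round(interseismic_velocity(x, 30.0, 25.0), 1))
    ```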

  1. Variations in collagen fibrils network structure and surface dehydration of acid demineralized intertubular dentin: effect of dentin depth and air-exposure time.

    PubMed

    Fawzy, Amr S

    2010-01-01

    The aim was to characterize the variations in the structure and surface dehydration of the acid-demineralized intertubular dentin collagen network with variations in dentin depth and air-exposure time (3, 6, 9 and 12 min), and to study the effect of these variations on the tensile bond strength (TBS) to dentin. Phosphoric acid-demineralized superficial and deep dentin specimens were prepared. The structure of the dentin collagen network was characterized by AFM. The surface dehydration was characterized by probing the nano-scale adhesion force (F(ad)) between the AFM tip and the intertubular dentin surface as a new experimental approach. The TBS to dentin was evaluated using an alcohol-based dentin self-priming adhesive. AFM images revealed a demineralized open collagen network structure in both superficial and deep dentin at 3 and 6 min of air-exposure. However, at 9 min, superficial dentin showed a more collapsed network structure compared to deep dentin, which partially preserved the open network structure. A totally collapsed structure was found at 12 min for both superficial and deep dentin. The value of F(ad) decreased with increasing air-exposure time and increased with dentin depth at the same air-exposure time. The TBS was higher for superficial dentin at 3 and 6 min; however, no difference was found at 9 and 12 min. The ability of the demineralized dentin collagen network to resist air-dehydration and to preserve the integrity of the open network structure with increasing air-exposure time increases with dentin depth. Although superficial dentin achieves higher bond strength values, the difference in bond strength decreases with increasing air-exposure time. The AFM-probed F(ad) was shown to be a sensitive approach for characterizing surface dehydration; however, further research is recommended regarding the validity of this approach.

  2. A Family of Algorithms for Computing Consensus about Node State from Network Data

    PubMed Central

    Brush, Eleanor R.; Krakauer, David C.; Flack, Jessica C.

    2013-01-01

    Biological and social networks are composed of heterogeneous nodes that contribute differentially to network structure and function. A number of algorithms have been developed to measure this variation. These algorithms have proven useful for applications that require assigning scores to individual nodes–from ranking websites to determining critical species in ecosystems–yet the mechanistic basis for why they produce good rankings remains poorly understood. We show that a unifying property of these algorithms is that they quantify consensus in the network about a node's state or capacity to perform a function. The algorithms capture consensus either by taking into account the number of a target node's direct connections and, when the edges are weighted, the uniformity of its weighted in-degree distribution (breadth), or by measuring net flow into a target node (depth). Using data from communication, social, and biological networks we find that how an algorithm measures consensus–through breadth or depth–impacts its ability to correctly score nodes. We also observe variation in sensitivity to source biases in interaction/adjacency matrices: errors arising from systematic error at the node level or direct manipulation of network connectivity by nodes. Our results indicate that the breadth algorithms, which are derived from information theory, correctly score nodes (assessed using independent data) and are robust to errors. However, in cases where nodes “form opinions” about other nodes using indirect information, like reputation, depth algorithms, like Eigenvector Centrality, are required. One caveat is that Eigenvector Centrality is not robust to error unless the network is transitive or assortative. In these cases the network structure allows the depth algorithms to effectively capture breadth as well as depth. Finally, we discuss the algorithms' cognitive and computational demands. This is an important consideration in systems in which individuals use the collective opinions of others to make decisions. PMID:23874167
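
    A minimal sketch of the depth-type algorithm named above, Eigenvector Centrality, computed by power iteration on a hypothetical weighted interaction matrix:

    ```python
    import numpy as np

    def eigenvector_centrality(A, tol=1e-10, max_iter=1000):
        """Power iteration on a weighted adjacency matrix A, where A[i, j]
        is the weight of the edge from i to j; scores flow along edges, so
        a node inherits standing from the nodes pointing to it (a 'depth'
        measure in the paper's terminology)."""
        n = A.shape[0]
        x = np.ones(n) / n
        for _ in range(max_iter):
            x_new = A.T @ x
            x_new /= np.linalg.norm(x_new)
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        return x_new

    # Toy 4-node interaction matrix (hypothetical weights).
    A = np.array([[0, 2, 1, 0],
                  [1, 0, 3, 1],
                  [0, 1, 0, 2],
                  [1, 0, 1, 0]], dtype=float)
    print(eigenvector_centrality(A))
    ```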

  3. Shallow Crustal Structure in the Northern Salton Trough, California: Insights from a Detailed 3-D Velocity Model

    NASA Astrophysics Data System (ADS)

    Ajala, R.; Persaud, P.; Stock, J. M.; Fuis, G. S.; Hole, J. A.; Goldman, M.; Scheirer, D. S.

    2017-12-01

    The Coachella Valley is the northern extent of the Gulf of California-Salton Trough. It contains the southernmost segment of the San Andreas Fault (SAF) for which a magnitude 7.8 earthquake rupture was modeled to help produce earthquake planning scenarios. However, discrepancies in ground motion and travel-time estimates from the current Southern California Earthquake Center (SCEC) velocity model of the Salton Trough highlight inaccuracies in its shallow velocity structure. An improved 3-D velocity model that better defines the shallow basin structure and enables the more accurate location of earthquakes and identification of faults is therefore essential for seismic hazard studies in this area. We used recordings of 126 explosive shots from the 2011 Salton Seismic Imaging Project (SSIP) to SSIP receivers and Southern California Seismic Network (SCSN) stations. A set of 48,105 P-wave travel time picks constituted the highest-quality input to a 3-D tomographic velocity inversion. To improve the ray coverage, we added network-determined first arrivals at SCSN stations from 39,998 recently relocated local earthquakes, selected to a maximum focal depth of 10 km, to develop a detailed 3-D P-wave velocity model for the Coachella Valley with 1-km grid spacing. Our velocity model shows good resolution (~50 rays/cubic km) down to a minimum depth of 7 km. Depth slices from the velocity model reveal several interesting features. At shallow depths (~3 km), we observe an elongated trough of low velocity, attributed to sediments, located subparallel to and a few km SW of the SAF, and a general velocity structure that mimics the surface geology of the area. The persistence of the low-velocity sediments to 5-km depth just north of the Salton Sea suggests that the underlying basement surface, shallower to the NW, dips SE, consistent with interpretation from gravity studies (Langenheim et al., 2005). On the western side of the Coachella Valley, we detect depth-restricted regions of higher velocities (~6.4-6.6 km/s) that may represent basement rocks from the Eastern Peninsular Ranges that extend beneath this area. Our results will contribute to the SCEC Community Modeling Environment (CME) for use in future ground shaking calculations and in producing more accurate seismic hazard maps for the Coachella Valley.
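
    The inversion at the core of such a study can be caricatured in a few lines: straight rays and a damped least-squares solve on a toy grid, standing in for the 3-D ray tracing and the tens of thousands of picks used here.

    ```python
    import numpy as np

    # Toy straight-ray travel-time tomography on a tiny grid (the real study
    # uses 3-D ray tracing and far larger pick sets).
    rng = np.random.default_rng(0)
    n_cells, n_rays = 25, 200
    L = rng.uniform(0, 1, (n_rays, n_cells))   # ray path length per cell, km
    s_true = np.full(n_cells, 1 / 6.0)         # background slowness (6 km/s)
    s_true[:5] = 1 / 3.0                       # a slow (sedimentary) row
    t = L @ s_true + 0.01 * rng.standard_normal(n_rays)   # noisy picks

    # Damped (Tikhonov) least squares for the slowness model.
    lam = 0.1
    s_est = np.linalg.solve(L.T @ L + lam * np.eye(n_cells), L.T @ t)
    print(np.round(1 / s_est[:10], 1))         # velocities, km/s (slow cells first)
    ```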

  4. Rock-Eval analysis of French forest soils: the influence of depth, soil and vegetation types on SOC thermal stability and bulk chemistry

    NASA Astrophysics Data System (ADS)

    Soucemarianadin, Laure; Cécillon, Lauric; Baudin, François; Cecchini, Sébastien; Chenu, Claire; Mériguet, Jacques; Nicolas, Manuel; Savignac, Florence; Barré, Pierre

    2017-04-01

    Soil organic matter (SOM) is the largest terrestrial carbon pool and SOM degradation has multiple consequences on key ecosystem properties like nutrients cycling, soil emissions of greenhouse gases or carbon sequestration potential. With the strong feedbacks between SOM and climate change, it becomes particularly urgent to develop reliable routine methodologies capable of indicating the turnover time of soil organic carbon (SOC) stocks. Thermal analyses have been used to characterize SOM and among them, Rock-Eval 6 (RE6) analysis of soil has shown promising results in the determination of in-situ SOC biogeochemical stability. This technique combines a phase of pyrolysis followed by a phase of oxidation to provide information on both the SOC bulk chemistry and thermal stability. We analyzed with RE6 a set of 495 soil samples from 102 permanent forest sites of the French national network for the long-term monitoring of forest ecosystems ("RENECOFOR" network). Along with covering pedoclimatic variability at a national level, these samples include a range of 5 depths down to 1 meter (0-10 cm, 10-20 cm, 20-40 cm, 40-80 cm and 80-100 cm). Using RE6 parameters that were previously shown to be correlated to short-term (hydrogen index, HI; T50 CH pyrolysis) or long-term (T50 CO2 oxidation and HI) SOC persistence, and that characterize SOM bulk chemical composition (oxygen index, OI and HI), we tested the influence of depth (n = 5), soil class (n = 6) and vegetation type (n = 3; deciduous, coniferous-fir, coniferous-pine) on SOM thermal stability and bulk chemistry. Results showed that depth was the dominant discriminating factor, affecting significantly all RE6 parameters. With depth, we observed a decrease of the thermally labile SOC pool and an increase of the thermally stable SOC pool, along with an oxidation and a depletion of hydrogen-rich moieties of the SOC. Soil class and vegetation type had contrasting effects on the RE6 parameters, but both significantly affected T50 CO2 oxidation with, for instance, entic Podzols and dystric Cambisols containing relatively more thermally stable SOC in the deepest layer than hypereutric/calcaric Cambisols. Moreover, soils in deciduous plots contained a higher proportion of thermally stable SOC than soils in coniferous plots. This study shows that RE6 analysis constitutes a fast and cost-effective way to qualitatively estimate SOM turnover and to discuss its ecosystem drivers. It offers promising prospects towards a quantitative estimation of SOC turnover and the development of RE6-based indicators related to the size of the different SOC kinetic pools.

  5. Magma intrusion near Volcan Tancitaro: Evidence from seismic analysis

    DOE PAGES

    Pinzon, Juan I.; Nunez-Cornu, Francisco J.; Rowe, Charlotte Anne

    2016-11-17

    Between May and June 2006, an earthquake swarm occurred near Volcan Tancítaro in Mexico, which was recorded by a temporary seismic deployment known as the MARS network. We located ~1000 events from this seismic swarm. Previous earthquake swarms in the area were reported in the years 1997, 1999 and 2000. We relocate and analyze the evolution and properties of the 2006 earthquake swarm, employing a waveform cross-correlation-based phase repicking technique. Hypocenters from 911 events were located and divided into eighteen families having a correlation coefficient at or above 0.75. 90% of the earthquakes provide at least sixteen phase picks. We used the single-event location code Hypo71 and the P-wave velocity model used by the Jalisco Seismic and Accelerometer Network to improve hypocenters based on the correlation-adjusted phase arrival times. We relocated 121 earthquakes, which show clearly two clusters, between 9–10 km and 3–4 km depth respectively. The average location error estimates are <1 km epicentrally, and <2 km in depth, for the largest event in each cluster. Depths of seismicity migrate upward from 16 to 3.5 km and exhibit a NE-SW trend. The swarm first migrated toward Paricutin Volcano but by mid-June began propagating back toward Volcán Tancítaro. In addition to its persistence, noteworthy aspects of this swarm include a quasi-exponential increase in the rate of activity within the first 15 days; a b-value of 1.47; a jug-shaped hypocenter distribution; a shoaling rate of ~5 km/month within the deeper cluster, and a composite focal mechanism solution indicating largely reverse faulting. As a result, these features of the swarm suggest a magmatic source elevating the crustal strain beneath Volcan Tancítaro.

  6. Magma intrusion near Volcan Tancitaro: Evidence from seismic analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinzon, Juan I.; Nunez-Cornu, Francisco J.; Rowe, Charlotte Anne

    Between May and June 2006, an earthquake swarm occurred near Volcan Tancítaro in Mexico, which was recorded by a temporary seismic deployment known as the MARS network. We located ~1000 events from this seismic swarm. Previous earthquake swarms in the area were reported in the years 1997, 1999 and 2000. We relocate and analyze the evolution and properties of the 2006 earthquake swarm, employing a waveform cross-correlation-based phase repicking technique. Hypocenters from 911 events were located and divided into eighteen families having a correlation coefficient at or above 0.75. 90% of the earthquakes provide at least sixteen phase picks. We used the single-event location code Hypo71 and the P-wave velocity model used by the Jalisco Seismic and Accelerometer Network to improve hypocenters based on the correlation-adjusted phase arrival times. We relocated 121 earthquakes, which show clearly two clusters, between 9–10 km and 3–4 km depth respectively. The average location error estimates are <1 km epicentrally, and <2 km in depth, for the largest event in each cluster. Depths of seismicity migrate upward from 16 to 3.5 km and exhibit a NE-SW trend. The swarm first migrated toward Paricutin Volcano but by mid-June began propagating back toward Volcán Tancítaro. In addition to its persistence, noteworthy aspects of this swarm include a quasi-exponential increase in the rate of activity within the first 15 days; a b-value of 1.47; a jug-shaped hypocenter distribution; a shoaling rate of ~5 km/month within the deeper cluster, and a composite focal mechanism solution indicating largely reverse faulting. As a result, these features of the swarm suggest a magmatic source elevating the crustal strain beneath Volcan Tancítaro.

  7. Magma intrusion near Volcan Tancítaro: Evidence from seismic analysis

    NASA Astrophysics Data System (ADS)

    Pinzón, Juan I.; Núñez-Cornú, Francisco J.; Rowe, Charlotte A.

    2017-01-01

    Between May and June 2006, an earthquake swarm occurred near Volcan Tancítaro in Mexico, which was recorded by a temporary seismic deployment known as the MARS network. We located ∼1000 events from this seismic swarm. Previous earthquake swarms in the area were reported in the years 1997, 1999 and 2000. We relocate and analyze the evolution and properties of the 2006 earthquake swarm, employing a waveform cross-correlation-based phase repicking technique. Hypocenters from 911 events were located and divided into eighteen families having a correlation coefficient at or above 0.75. 90% of the earthquakes provide at least sixteen phase picks. We used the single-event location code Hypo71 and the P-wave velocity model used by the Jalisco Seismic and Accelerometer Network to improve hypocenters based on the correlation-adjusted phase arrival times. We relocated 121 earthquakes, which show clearly two clusters, between 9-10 km and 3-4 km depth respectively. The average location error estimates are <1 km epicentrally, and <2 km in depth, for the largest event in each cluster. Depths of seismicity migrate upward from 16 to 3.5 km and exhibit a NE-SW trend. The swarm first migrated toward Paricutin Volcano but by mid-June began propagating back toward Volcán Tancítaro. In addition to its persistence, noteworthy aspects of this swarm include a quasi-exponential increase in the rate of activity within the first 15 days; a b-value of 1.47; a jug-shaped hypocenter distribution; a shoaling rate of ∼5 km/month within the deeper cluster, and a composite focal mechanism solution indicating largely reverse faulting. These features of the swarm suggest a magmatic source elevating the crustal strain beneath Volcan Tancítaro.

  8. Spectrally based bathymetric mapping of a dynamic, sand‐bedded channel: Niobrara River, Nebraska, USA

    USGS Publications Warehouse

    Dilbone, Elizabeth; Legleiter, Carl; Alexander, Jason S.; McElroy, Brandon

    2018-01-01

    Methods for spectrally based mapping of river bathymetry have been developed and tested in clear‐flowing, gravel‐bed channels, with limited application to turbid, sand‐bed rivers. This study used hyperspectral images and field surveys from the dynamic, sandy Niobrara River to evaluate three depth retrieval methods. The first regression‐based approach, optimal band ratio analysis (OBRA), paired in situ depth measurements with image pixel values to estimate depth. The second approach used ground‐based field spectra to calibrate an OBRA relationship. The third technique, image‐to‐depth quantile transformation (IDQT), estimated depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image‐derived variable. OBRA yielded the lowest depth retrieval mean error (0.005 m) and highest observed versus predicted R2 (0.817). Although misalignment between field and image data did not compromise the performance of OBRA in this study, poor georeferencing could limit regression‐based approaches such as OBRA in dynamic, sand‐bedded rivers. Field spectroscopy‐based depth maps exhibited a mean error with a slight shallow bias (0.068 m) but provided reliable estimates for most of the study reach. IDQT had a strong deep bias but provided informative relative depth maps. Overprediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the depth CDF. Although each of the techniques we tested demonstrated potential to provide accurate depth estimates in sand‐bed rivers, each method also was subject to certain constraints and limitations.
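
    Of the three methods, IDQT is the easiest to sketch, since it needs no pixel-to-measurement matching; the snippet below performs a quantile mapping from a hypothetical image-derived variable to a field depth sample.

    ```python
    import numpy as np

    def idqt(image_var, depth_sample):
        """Image-to-depth quantile transformation: assign each pixel the
        depth whose quantile in the depth CDF matches the pixel's quantile
        in the image-variable CDF (no pixel-to-point matching required)."""
        ranks = np.argsort(np.argsort(image_var)) / (len(image_var) - 1)
        return np.quantile(depth_sample, ranks)

    # Hypothetical band-ratio values for 10 pixels and a field depth sample.
    x = np.array([0.12, 0.30, 0.22, 0.55, 0.41, 0.18, 0.60, 0.35, 0.27, 0.48])
    depths = np.array([0.1, 0.2, 0.25, 0.3, 0.4, 0.5, 0.6, 0.8, 0.9, 1.2])
    print(np.round(idqt(x, depths), 2))
    ```

    As the abstract notes, the mapping is only as good as the depth CDF it is fed, which is why a biased field sample produces a biased map.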

  9. Mantle Attenuation Estimated from Regional and Teleseismic P-waves of Deep Earthquakes and Surface Explosions

    NASA Astrophysics Data System (ADS)

    Ichinose, G.; Woods, M.; Dwyer, J.

    2014-03-01

    We estimated the network-averaged mantle attenuation t*(total) of 0.5 s beneath the North Korea test site (NKTS) by use of P-wave spectra and normalized spectral stacks from the 25 May 2009 declared nuclear test (mb 4.5; IDC). This value was checked using P-waves from seven deep (580-600 km) earthquakes (4.8 < Mw < 5.5) in the Jilin-Heilongjiang, China region that borders Russia and North Korea. These earthquakes are 200-300 km from the NKTS, within 200 km of the Global Seismic Network seismic station in Mudanjiang, China (MDJ) and the International Monitoring System primary arrays at Ussuriysk, Russia (USRK) and Wonju, Republic of Korea (KSRS). With the deep earthquakes, we split the t*(total) ray path into two segments: t*(u), which represents the attenuation of the up-going ray from the deep hypocenters to the local-regional receivers, and t*(d), which represents the attenuation along the down-going ray to teleseismic receivers. The sum of t*(u) and t*(d) should be equal to t*(total), because they share coincident ray paths. We estimated the upper-mantle attenuation t*(u) of 0.1 s at stations MDJ, USRK, and KSRS from individual and stacked normalized P-wave spectra. We then estimated the average lower-mantle attenuation t*(d) of 0.4 s using stacked teleseismic P-wave spectra. We finally estimated a network average t*(total) of 0.5 s from the stacked teleseismic P-wave spectra from the 2009 nuclear test, which confirms the equality with the sum of t*(u) and t*(d). We included constraints on seismic moment, depth, and radiation pattern by using results from a moment tensor analysis and corner frequencies from modeling of P-wave spectra recorded at local distances. We also avoided finite-faulting effects by excluding earthquakes with complex source time functions. We assumed ω2 source models for earthquakes and explosions. The mantle attenuation beneath the NKTS is clearly different when compared with the network-averaged t* of 0.75 s for the western US and is similar to values of approximately 0.5 s for the Semipalatinsk test site within the 0.5-2 Hz range.
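
    A sketch of how t* can be pulled from a P-wave spectrum once an omega-squared source model is assumed; the corner frequency, spectral level and frequency band below are illustrative.

    ```python
    import numpy as np

    def tstar_from_spectrum(freq, spec, omega0, fc):
        """Estimate t* by removing an omega-squared source model from an
        observed P-wave amplitude spectrum and fitting the residual decay:
            ln[A(f) * (1 + (f/fc)^2) / omega0] = -pi * t* * f."""
        resid = np.log(spec * (1 + (freq / fc) ** 2) / omega0)
        slope = np.polyfit(freq, resid, 1)[0]
        return -slope / np.pi

    # Synthetic spectrum with t* = 0.5 s over the 0.5-2 Hz band used above.
    f = np.linspace(0.5, 2.0, 50)
    spec = 1.0 / (1 + (f / 1.2) ** 2) * np.exp(-np.pi * 0.5 * f)
    print(round(tstar_from_spectrum(f, spec, omega0=1.0, fc=1.2), 3))  # ~0.5
    ```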

  10. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    PubMed

    Durstewitz, Daniel

    2017-06-01

    The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable recovery of relevant aspects of the nonlinear dynamics underlying observed neuronal time series, and directly link these to computational properties.
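
    The latent dynamics at the heart of the model are compact enough to sketch. Below is one PLRNN state update, with a ReLU nonlinearity and Gaussian process noise; the dimensions and parameter values are arbitrary stand-ins, and the EM-based estimation itself is not shown.

    ```python
    import numpy as np

    def plrnn_step(z, A, W, h, rng=None, noise=0.0):
        """One latent-state update of a piecewise-linear RNN:
            z_t = A z_{t-1} + W phi(z_{t-1}) + h + eps,
        with phi the elementwise ReLU that makes the map piecewise linear."""
        eps = noise * rng.standard_normal(z.shape) if rng is not None else 0.0
        return A @ z + W @ np.maximum(z, 0.0) + h + eps

    rng = np.random.default_rng(1)
    d = 5
    A = 0.8 * np.eye(d)                       # stable diagonal dynamics
    W = 0.1 * rng.standard_normal((d, d))     # piecewise-linear coupling
    h = 0.05 * np.ones(d)                     # constant input
    z = rng.standard_normal(d)
    for _ in range(100):                      # simulate a short trajectory
        z = plrnn_step(z, A, W, h, rng, noise=0.01)
    print(np.round(z, 3))
    ```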

  11. Mapping snow depth within a tundra ecosystem using multiscale observations and Bayesian methods

    DOE PAGES

    Wainwright, Haruko M.; Liljedahl, Anna K.; Dafflon, Baptiste; ...

    2017-04-03

    This paper compares and integrates different strategies to characterize the variability of end-of-winter snow depth and its relationship to topography in ice-wedge polygon tundra of Arctic Alaska. Snow depth was measured using in situ snow depth probes and estimated using ground-penetrating radar (GPR) surveys and the photogrammetric detection and ranging (phodar) technique with an unmanned aerial system (UAS). We found that GPR data provided high-precision estimates of snow depth (RMSE=2.9cm), with a spatial sampling of 10cm along transects. Phodar-based approaches provided snow depth estimates in a less laborious manner compared to GPR and probing, while yielding a high precision (RMSE=6.0cm) and a fine spatial sampling (4cm×4cm). We then investigated the spatial variability of snow depth and its correlation to micro- and macrotopography using the snow-free lidar digital elevation map (DEM) and the wavelet approach. We found that the end-of-winter snow depth was highly variable over short (several meter) distances, and the variability was correlated with microtopography. Microtopographic lows (i.e., troughs and centers of low-centered polygons) were filled in with snow, which resulted in a smooth and even snow surface following macrotopography. We developed and implemented a Bayesian approach to integrate the snow-free lidar DEM and multiscale measurements (probe and GPR) as well as the topographic correlation for estimating snow depth over the landscape. Our approach led to high-precision estimates of snow depth (RMSE=6.0cm), at 0.5m resolution and over the lidar domain (750m×700m).
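
    The spirit of the Bayesian integration step can be shown with a scalar Gaussian update that merges a topography-based prior with probe and GPR point measurements; all numbers are hypothetical (the measurement sigmas loosely echo the RMSE values above).

    ```python
    import numpy as np

    def bayes_combine(mu_prior, var_prior, obs, var_obs):
        """Gaussian product: combine a DEM/topography-based prior on snow
        depth with point measurements (probe, GPR) of differing precision."""
        prec = 1.0 / var_prior + np.sum(1.0 / var_obs)
        mean = (mu_prior / var_prior + np.sum(obs / var_obs)) / prec
        return mean, 1.0 / prec

    # Hypothetical cell: microtopography suggests ~45 cm (sigma 10 cm); a
    # probe (sigma 2 cm) and a GPR pass (sigma 3 cm) both say ~50 cm.
    mean, var = bayes_combine(45.0, 10.0 ** 2,
                              np.array([50.0, 49.0]),
                              np.array([2.0 ** 2, 3.0 ** 2]))
    print(round(mean, 1), round(np.sqrt(var), 1))  # pulled toward the data
    ```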

  12. Improving the Curie depth estimation through optimizing the spectral block dimensions of the aeromagnetic data in the Sabalan geothermal field

    NASA Astrophysics Data System (ADS)

    Akbar, Somaieh; Fathianpour, Nader

    2016-12-01

    The Curie point depth is of great importance in characterizing geothermal resources. In this study, the Curie iso-depth map was produced using the well-known method of dividing the aeromagnetic dataset into overlapping blocks and analyzing the power spectral density of each block separately. Determining the optimum block dimension is vital in improving the resolution and accuracy of estimating Curie point depth. To investigate the relation between the optimal block size and power spectral density, forward magnetic modeling was implemented on an artificial prismatic body with specified characteristics. The top, centroid, and bottom depths of the body were estimated by the spectral analysis method for different block dimensions. The result showed that the optimal block size can be taken as the smallest block size whose corresponding power spectrum exhibits an absolute maximum at small wavenumbers. The Curie depth map of the Sabalan geothermal field and its surrounding areas, in northwestern Iran, was produced using a grid of 37 blocks with dimensions ranging from 10 × 10 to 50 × 50 km2 and at least 50% overlap between adjacent blocks. The Curie point depth was estimated in the range of 5 to 21 km. The promising areas with Curie point depths less than 8.5 km are located around Mount Sabalan, encompassing more than 90% of the known geothermal resources in the study area. Moreover, the Curie point depth estimated by the improved spectral analysis is in good agreement with the depth calculated from the thermal gradient data measured in one of the exploratory wells in the region.
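
    A sketch of the spectral (centroid) method on a synthetic radially averaged amplitude spectrum; the band limits and slab depths below are assumptions for illustration.

    ```python
    import numpy as np

    def curie_depths(k, amp):
        """Centroid method on a radially averaged amplitude spectrum A(k),
        k in rad/km: top depth Zt from the high-wavenumber slope of ln A(k),
        centroid depth Z0 from the low-wavenumber slope of ln[A(k)/k], and
        Curie (bottom) depth Zb = 2*Z0 - Zt."""
        lo, hi = k < 0.05, k > 0.5          # illustrative band limits
        zt = -np.polyfit(k[hi], np.log(amp[hi]), 1)[0]
        z0 = -np.polyfit(k[lo], np.log(amp[lo] / k[lo]), 1)[0]
        return zt, z0, 2 * z0 - zt

    # Synthetic slab with top at 2 km and bottom at 14 km (centroid 8 km).
    k = np.linspace(0.01, 1.0, 200)
    amp = np.exp(-2.0 * k) - np.exp(-14.0 * k)
    print([round(z, 1) for z in curie_depths(k, amp)])  # close to (2, 8, 14) km
    ```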

  13. Mapping snow depth within a tundra ecosystem using multiscale observations and Bayesian methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wainwright, Haruko M.; Liljedahl, Anna K.; Dafflon, Baptiste

    This paper compares and integrates different strategies to characterize the variability of end-of-winter snow depth and its relationship to topography in ice-wedge polygon tundra of Arctic Alaska. Snow depth was measured using in situ snow depth probes and estimated using ground-penetrating radar (GPR) surveys and the photogrammetric detection and ranging (phodar) technique with an unmanned aerial system (UAS). We found that GPR data provided high-precision estimates of snow depth (RMSE=2.9cm), with a spatial sampling of 10cm along transects. Phodar-based approaches provided snow depth estimates in a less laborious manner compared to GPR and probing, while yielding a high precision (RMSE=6.0cm) and a fine spatial sampling (4cm×4cm). We then investigated the spatial variability of snow depth and its correlation to micro- and macrotopography using the snow-free lidar digital elevation map (DEM) and the wavelet approach. We found that the end-of-winter snow depth was highly variable over short (several meter) distances, and the variability was correlated with microtopography. Microtopographic lows (i.e., troughs and centers of low-centered polygons) were filled in with snow, which resulted in a smooth and even snow surface following macrotopography. We developed and implemented a Bayesian approach to integrate the snow-free lidar DEM and multiscale measurements (probe and GPR) as well as the topographic correlation for estimating snow depth over the landscape. Our approach led to high-precision estimates of snow depth (RMSE=6.0cm), at 0.5m resolution and over the lidar domain (750m×700m).

  14. Extreme precipitation depths for Texas, excluding the Trans-Pecos region

    USGS Publications Warehouse

    Lanning-Rush, Jennifer; Asquith, William H.; Slade, Raymond M.

    1998-01-01

    Storm durations of 1, 2, 3, 4, 5, and 6 days were investigated for this report. The extreme precipitation depth for a particular area is estimated from an “extreme precipitation curve” (an upper limit or envelope curve developed from graphs of extreme precipitation depths for each climatic region). The extreme precipitation curves were determined using precipitation depth-duration information from a subset (24 “extreme” storms) of 213 “notable” storms documented throughout Texas. The extreme precipitation curves can be used to estimate extreme precipitation depth for a particular area. The extreme precipitation depth represents a limiting depth, which can provide useful comparative information for more quantitative analyses.

  15. Inverting near-surface models from virtual-source gathers (SM Division Outstanding ECS Award Lecture)

    NASA Astrophysics Data System (ADS)

    Ruigrok, Elmer; Vossen, Caron; Paulssen, Hanneke

    2017-04-01

    The Groningen gas field is a massive natural gas accumulation in the north-east of the Netherlands. Decades of production have led to significant compaction of the reservoir rock. The (differential) compaction is thought to have reactivated existing faults and to be the main driver of induced seismicity. The potential damage at the surface is largely affected by the state of the near surface: thin and soft sedimentary layers can lead to large amplifications. By measuring the wavefield at different depth levels, near-surface properties can be estimated directly from the recordings. Seismicity in the Groningen area is monitored primarily with a network of vertical arrays. In the 1990s, a network of 8 boreholes was deployed; since 2015, this network has been expanded with 70 new boreholes. Each new borehole consists of an accelerometer at the surface and four downhole geophones with a vertical spacing of 50 m. We apply seismic interferometry to local seismicity, for each borehole individually. In doing so, we obtain the responses as if there were virtual sources at the lowest geophones and receivers at the other depth levels. From the retrieved direct waves and reflections, we invert for P- & S-velocity and Q models. We discuss different implementations of seismic interferometry and the subsequent inversion. The inverted near-surface properties are used to improve both the source location and the hazard assessment.
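
    The interferometric step can be sketched with a one-dimensional cross-correlation: the deep geophone becomes a virtual source and the inter-level traveltime appears at the correlation lag. The sampling and arrival times below are arbitrary.

    ```python
    import numpy as np

    def virtual_source(rec_deep, rec_shallow):
        """Interferometry-by-correlation sketch: cross-correlating the deep
        geophone record with a shallower one turns the deep sensor into a
        virtual source (the response up to a source-spectrum factor)."""
        n = len(rec_deep)
        X = np.fft.rfft(rec_shallow) * np.conj(np.fft.rfft(rec_deep))
        return np.fft.irfft(X, n)

    # Synthetic example: an upgoing wave hits the deep sensor at sample 40
    # and the shallow sensor 25 samples later.
    trace = np.zeros(256)
    deep, shallow = trace.copy(), trace.copy()
    deep[40], shallow[65] = 1.0, 0.8
    cc = virtual_source(deep, shallow)
    print(np.argmax(cc))   # 25 -> the inter-level traveltime at the peak lag
    ```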

  16. Using computational modeling of river flow with remotely sensed data to infer channel bathymetry

    USGS Publications Warehouse

    Nelson, Jonathan M.; McDonald, Richard R.; Kinzel, Paul J.; Shimizu, Y.

    2012-01-01

    As part of an ongoing investigation into the use of computational river flow and morphodynamic models for the purpose of correcting and extending remotely sensed river datasets, a simple method for inferring channel bathymetry is developed and discussed. The method is based on an inversion of the equations expressing conservation of mass and momentum to develop equations that can be solved for depth given known values of vertically averaged velocity and water-surface elevation. The ultimate goal of this work is to combine imperfect remotely sensed data on river planform, water-surface elevation and water-surface velocity in order to estimate depth and other physical parameters of river channels. In this paper, the technique is examined using synthetic data sets that are developed directly from the application of forward two- and three-dimensional flow models. These data sets are constrained to satisfy conservation of mass and momentum, unlike typical remotely sensed field data sets. This provides a better understanding of the process and also allows assessment of how simple inaccuracies in remotely sensed estimates might propagate into depth estimates. The technique is applied to three simple cases: first, depth is extracted from a synthetic dataset of vertically averaged velocity and water-surface elevation; second, depth is extracted from the same data set but with a normally distributed random error added to the water-surface elevation; third, depth is extracted from a synthetic data set for the same river reach using computed water-surface velocities (in place of depth-integrated values) and water-surface elevations. In each case, the extracted depths are compared to the actual measured depths used to construct the synthetic data sets (with two- and three-dimensional flow models). Even very small errors in water-surface elevation and velocity degrade the depth estimates, and the lost accuracy cannot be recovered. Errors in depth estimates associated with assuming water-surface velocities equal to depth-integrated velocities are substantial but can be reduced with simple corrections.
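
    The mass-conservation half of the inversion reduces, in one dimension, to solving the continuity equation for depth; the discharge, widths and velocities below are hypothetical.

    ```python
    import numpy as np

    # Minimal 1-D illustration of the mass-conservation part of the method:
    # for steady flow, discharge Q = w * h * u is constant downstream, so a
    # known Q and remotely sensed width w(x) and velocity u(x) give depth.
    Q = 120.0                                  # discharge, m^3/s (assumed known)
    w = np.array([40.0, 35.0, 30.0, 45.0])     # channel width, m
    u = np.array([0.8, 1.0, 1.3, 0.7])         # vertically averaged velocity, m/s
    h = Q / (w * u)                            # depth from continuity
    print(np.round(h, 2))
    ```

    The full method also inverts the momentum equation, which is what makes it sensitive to small errors in water-surface elevation, as noted above.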

  17. Quantifying gully erosion contribution from morphodynamic analysis of historical aerial photographs in a large catchment SW Spain

    NASA Astrophysics Data System (ADS)

    Hayas, Antonio; Giráldez, Juan V.; Laguna, Ana; Peña, Peña; Vanwalleghem, Tom

    2015-04-01

    Gully erosion is widely recognized as an important erosion process and source of sediment, especially in Mediterranean basins. Recent advances in monitoring techniques, such as ground-based LiDAR, drone-borne cameras or photoreconstruction, allow quantifying gully erosion rates with unprecedented accuracy. However, many studies only focus on gully growth during a short period. In agricultural areas, farmers frequently erase gullies artificially. Over longer time scales, this results in an important dynamic of gully growth and infilling. Also, given the significant temporal variability of precipitation, land use and the gully erosion processes themselves, gully growth is non-linear over time. This study therefore aims at analyzing gully morphodynamics over a long time scale (1957-2011) in a large catchment in order to quantify gully erosion processes and their contribution to overall sediment dynamics. The 20 km2 study area is located in SW Spain. The extension of the gully network was digitized by photographic interpretation based on aerial photographs from 1957, 1981, 1985, 1999, 2002, 2005, 2007, 2009 and 2011. Gully width was measured at representative control points for each of these years. During this period, the dominant land use changed considerably from herbaceous crops to olive orchards. A field campaign was conducted in 2014 to measure current gully width and depth. Total gully volume and its uncertainty were determined by Monte Carlo-based simulations of gully cross-sectional area for unmeasured sections. The extension of the gully network both increased and decreased during the study period: gully density was 1.93 km km-2 in 1957, with a minimum of 1.37 km km-2 in 1981 and a maximum of 5.40 km km-2 in 2011. Gully widths estimated at selected points from the orthophotos ranged between 0.9 m and 59.2 m and showed a good lognormal fit. The field campaign resulted in a collection of cross-section measurements with gully widths between 1.87 and 28.5 m and depths from 0.55 m to 5.02 m. A gully width-depth relation was established according to a logarithmic expression with an overall r2 of 0.82. As no historical information on gully depth was available, this relation was assumed to be constant over time. Monte Carlo simulation was then used to generate width and depth values for the different gully segments, based on different lognormal distributions fitted to the estimated gully widths from 1957-2011 and on the width-depth regression. The calculated mean gully volume between 1957 and 2011 varied between 145,000 m3 and 2,454,000 m3. The contribution of gully erosion to the overall sediment budget was found to be relatively stable between 1957-2008, with a mean value of 11.2 ton ha-1 year-1, while the period 2008-2011, which included winters with frequent rainy days, yielded a mean value of 604 ton ha-1 year-1. Uncertainty estimates by Monte Carlo place the estimated contribution of gully erosion for this last period between 523 and 694 ton ha-1 year-1. The relation between gully erosion rates and driving factors such as land use change and rainfall was analysed in order to explain this variation. The high gully erosion rates of the period 2008-2011 could be linked to extreme rainfall events. This study has determined gully erosion rates with a high temporal resolution over several decades. The results show that gully erosion rates are highly variable, and that a simple interpolation between the start and end dates would therefore strongly underestimate the gully contribution during certain years, for example between 2005-2011. Overall, gully erosion is shown to be an important process of sediment generation in Mediterranean basins.
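
    A sketch of the Monte Carlo volume estimate under stated assumptions: lognormal widths, a logarithmic width-depth relation, and a triangular cross-section. All coefficients below are illustrative, not the fitted ones.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # Hypothetical lognormal width distribution and width-depth relation
    # d = a + b * ln(w) (illustrative coefficients, not the study's fit).
    a, b = -0.5, 1.4
    seg_len = rng.uniform(20, 200, 500)        # unmeasured segment lengths, m

    volumes = []
    for _ in range(1000):                      # Monte Carlo over cross-sections
        w = rng.lognormal(mean=1.6, sigma=0.6, size=seg_len.size)
        d = np.clip(a + b * np.log(w), 0.3, None)
        volumes.append(np.sum(0.5 * w * d * seg_len))   # triangular section
    volumes = np.array(volumes)
    print(np.percentile(volumes, [5, 50, 95]))  # uncertainty band on volume
    ```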

  18. Estimating snow depth of alpine snowpack via airborne multifrequency passive microwave radiance observations: Colorado, USA

    NASA Astrophysics Data System (ADS)

    Kim, R. S.; Durand, M. T.; Li, D.; Baldo, E.; Margulis, S. A.; Dumont, M.; Morin, S.

    2017-12-01

    This paper presents a newly proposed snow depth retrieval approach for deep mountain snowpacks using airborne multifrequency passive microwave (PM) radiance observations. In contrast to previous snow depth estimation using satellite PM radiance assimilation, the proposed method uses a single-flight observation and deploys snow hydrologic models. This is promising because satellite-based retrieval methods have difficulty estimating snow depth in such terrain due to their coarse resolution and computational cost. The approach combines a particle filter using combinations of multiple PM frequencies with a multi-layer snow physical model (i.e., Crocus) to resolve melt-refreeze crusts. The method was applied over the NASA Cold Land Processes Experiment (CLPX) area in Colorado during 2002 and 2003. Results showed a significant improvement over the prior snow depth estimates and the capability to reduce the prior snow depth biases. When applying our snow depth retrieval algorithm using a combination of four PM frequencies (10.7, 18.7, 37.0 and 89.0 GHz), the RMSE values were reduced by 48% at the snow depth transect sites where forest density was less than 5%, despite deep snow conditions. The method displayed sensitivity to different combinations of frequencies, model stratigraphy (i.e., the number of layers in the snow physical model) and estimation methods (particle filter and Kalman filter). The prior RMSE values at the forest-covered areas were reduced by 37-42% even in the presence of forest cover.
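
    One particle-filter analysis step is easy to sketch. The linear brightness-temperature model and all numbers below are hypothetical; the real scheme uses Crocus-simulated multifrequency brightness temperatures.

    ```python
    import numpy as np

    def pf_update(particles, tb_obs, tb_model, sigma):
        """One particle-filter analysis step: weight prior snow-depth
        particles by the misfit between observed and modelled brightness
        temperatures, then resample. tb_model maps depth to predicted Tb."""
        w = np.exp(-0.5 * ((tb_obs - tb_model(particles)) / sigma) ** 2)
        w /= w.sum()
        rng = np.random.default_rng(0)
        idx = rng.choice(len(particles), len(particles), p=w)
        return particles[idx]

    # Hypothetical linear Tb-depth sensitivity at one frequency; truth 1.5 m.
    tb_model = lambda d: 260.0 - 20.0 * d
    prior = np.random.default_rng(1).normal(2.2, 0.6, 5000)   # biased prior
    post = pf_update(prior, tb_obs=tb_model(1.5), tb_model=tb_model, sigma=2.0)
    print(round(prior.mean(), 2), "->", round(post.mean(), 2))  # bias reduced
    ```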

  19. Target-depth estimation in active sonar: Cramer-Rao bounds for a bilinear sound-speed profile.

    PubMed

    Mours, Alexis; Ioana, Cornel; Mars, Jérôme I; Josso, Nicolas F; Doisy, Yves

    2016-09-01

    This paper develops a localization method to estimate the depth of a target in the context of active sonar at long ranges. The target depth is tactical information for both strategy and classification purposes. The Cramer-Rao lower bounds for the target position, in range and depth, are derived for a bilinear sound-speed profile. The influence of sonar parameters on the standard deviations of the target range and depth is studied. A localization method based on ray back-propagation with a probabilistic approach is then investigated. Monte Carlo simulations applied to a summer Mediterranean sound-speed profile are performed to evaluate the efficiency of the estimator. The method is finally validated on experimental tank data.
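
    The bound itself has a compact generic form. A sketch for Gaussian measurement noise, with a made-up Jacobian standing in for the derivatives of the bilinear-profile ray observables with respect to range and depth:

    ```python
    import numpy as np

    def crlb(jac, sigma):
        """Cramer-Rao lower bound for an unbiased estimator with Gaussian
        measurement noise: cov >= inv(J^T R^-1 J), with J the Jacobian of
        the measurements with respect to (range, depth)."""
        R_inv = np.diag(1.0 / sigma ** 2)
        return np.linalg.inv(jac.T @ R_inv @ jac)

    # Hypothetical Jacobian: two ray arrival angles and one travel time,
    # differentiated with respect to target range and depth.
    J = np.array([[0.002, 0.010],
                  [0.001, -0.008],
                  [0.600, 0.050]])
    sigma = np.array([0.001, 0.001, 0.01])    # measurement std devs
    bound = crlb(J, sigma)
    print(np.sqrt(np.diag(bound)))            # std-dev bounds on range, depth
    ```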

  20. S-wave velocity structure in the Nankai accretionary prism derived from Rayleigh admittance

    NASA Astrophysics Data System (ADS)

    Tonegawa, Takashi; Araki, Eiichiro; Kimura, Toshinori; Nakamura, Takeshi; Nakano, Masaru; Suzuki, Kensuke

    2017-04-01

    Two cabled seafloor networks with 22 and 29 stations (DONET 1 and 2: Dense Oceanfloor Network system for Earthquakes and Tsunamis) have been constructed on the accretionary prism at the Nankai subduction zone of Japan since March 2010. The observation periods of DONET 1 and 2 exceed 5 years and 10 months, respectively. Each station contains broadband seismometers and absolute and differential pressure gauges. In this study, using Rayleigh waves of microseisms and earthquakes, we calculate the Rayleigh admittance (Ruan et al., 2014, JGR) at the seafloor for each station, i.e., an amplitude transfer function from pressure to displacement, particularly for the frequencies of 0.1-0.2 Hz (ambient noise) and 0.04-0.1 Hz (earthquake signal), and estimate the S-wave velocity (Vs) structure beneath stations in DONET 1 and 2. We calculated the displacement seismogram by removing the instrument response from the velocity seismogram for each station. The pressure record observed at the differential pressure gauge was used in this study because of the high resolution of the pressure observation. In addition to Rayleigh waves of microseisms, we collected waveforms of Rayleigh waves for earthquakes with an epicentral distance of 15-90°, M>5.0, and focal depth shallower than 50 km. In the frequency domain, we smoothed the transfer function of displacement/pressure with a Parzen window of ±0.01 Hz. In order to determine one-dimensional Vs profiles, we performed a nonlinear inversion technique, i.e., simulated annealing. As a result, Vs profiles obtained at stations near the land show a simple Vs structure, i.e., Vs increasing with depth. However, some profiles located at the toe of the accretionary prism have a low-velocity zone (LVZ) at a depth of 5-7 km within the accretionary sediment. The velocity reduction is approximately 5-20%. Park et al. (2010) reported such a large reduction in P-wave velocity in the region of DONET 1 (eastern network, southeast of the Kii Peninsula), but our result shows the LVZ in the regions of both DONET 1 and 2 (western network, southwest of the Kii Peninsula). Similar features could also be obtained using Rayleigh waves of earthquake signals only. This indicates lateral variation of the Vs structure at the toe of the Nankai accretionary prism.
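
    Computing the admittance amounts to a smoothed cross-spectral ratio. A sketch on synthetic colocated records, with Welch averaging standing in for the ±0.01 Hz Parzen smoothing; the admittance level is a made-up constant.

    ```python
    import numpy as np
    from scipy.signal import csd, welch

    def rayleigh_admittance(disp, pres, fs, nperseg=2048):
        """Amplitude transfer function from pressure to displacement,
        |H(f)| = |S_dp(f)| / S_pp(f), estimated with Welch averaging."""
        f, S_dp = csd(disp, pres, fs=fs, nperseg=nperseg)
        _, S_pp = welch(pres, fs=fs, nperseg=nperseg)
        return f, np.abs(S_dp) / S_pp

    # Synthetic colocated records: pressure noise and a scaled, noisy
    # displacement response (admittance ~2e-9 m/Pa, hypothetical).
    rng = np.random.default_rng(0)
    pres = rng.standard_normal(200_000)
    disp = 2e-9 * pres + 1e-10 * rng.standard_normal(pres.size)
    f, adm = rayleigh_admittance(disp, pres, fs=20.0)
    band = (f >= 0.1) & (f <= 0.2)            # microseism band used above
    print(adm[band].mean())                   # ~2e-9 m/Pa
    ```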

  1. Stress Drop Estimates from Induced Seismic Events in the Fort Worth Basin, Texas

    NASA Astrophysics Data System (ADS)

    Jeong, S. J.; Stump, B. W.; DeShon, H. R.

    2017-12-01

    Since the beginning of Barnett shale oil and gas production in the Fort Worth Basin, there have been earthquake sequences, including multiple magnitude 3.0+ events near the DFW International Airport, Azle, Irving-Dallas, and throughout Johnson County (Cleburne and Venus). These shallow depth earthquakes (2 to 8 km) have not exceeded magnitude 4.0 and have been widely felt; the close proximity of these earthquakes to a large population center motivates an assessment of the kinematics of the events in order to provide more accurate ground motion predictions. Previous studies have estimated average stress drops for the DFW airport and Cleburne earthquakes at 10 and 43 bars, respectively. Here, we calculate stress drops for Azle, Irving-Dallas and Venus earthquakes using seismic data from local (≤25 km) and regional (>25 km) seismic networks. Events with magnitudes above 2.5 are chosen to ensure adequate signal-to-noise. Stress drops are estimated by fitting the Brune earthquake model to the observed source spectrum with correction for propagation path effects and a local site effect using a high-frequency decay parameter, κ, estimated from acceleration spectrum. We find that regional average stress drops are similar to those estimated using local data, supporting the appropriateness of the propagation path and site corrections. The average stress drop estimates are 72 bars, which range from 7 to 240 bars. The results are consistent with global averages of 10 to 100 bars for intra-plate earthquakes and compatible with stress drops of DFW airport and Cleburne earthquakes. The stress drops show a slight breakdown in self-similarity with increasing moment magnitude. The breakdown of similarity for these events requires further study because of the limited magnitude range of the data. These results suggest that strong motions and seismic hazard from an injection-induced earthquake can be expected to be similar to those for tectonic events taking into account the shallow depth of induced earthquakes.
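
    A sketch of the spectral fitting chain: fit a Brune spectrum with a kappa site term, convert corner frequency to source radius, and form the stress drop. The moment, shear velocity and all spectral parameters below are assumed values.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def brune_spec(f, omega0, fc, kappa):
        """Brune omega-squared source spectrum with a kappa site term."""
        return omega0 / (1 + (f / fc) ** 2) * np.exp(-np.pi * kappa * f)

    # Synthetic displacement spectrum for fc = 4 Hz, kappa = 0.04 s.
    rng = np.random.default_rng(0)
    f = np.linspace(0.5, 40, 200)
    spec = brune_spec(f, 1e-3, 4.0, 0.04) * (1 + 0.02 * rng.standard_normal(f.size))
    (omega0, fc, kappa), _ = curve_fit(brune_spec, f, spec, p0=[1e-3, 2.0, 0.02])

    beta = 3500.0                              # shear velocity, m/s (assumed)
    M0 = 1e14                                  # seismic moment, N*m (assumed)
    r = 2.34 * beta / (2 * np.pi * fc)         # Brune source radius, m
    stress_drop = 7.0 / 16.0 * M0 / r ** 3     # Pa
    print(f"fc = {fc:.2f} Hz, stress drop = {stress_drop / 1e5:.0f} bars")
    ```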

  2. Estimating hourly PM1 concentrations from Himawari-8 aerosol optical depth in China.

    PubMed

    Zang, Lin; Mao, Feiyue; Guo, Jianping; Gong, Wei; Wang, Wei; Pan, Zengxin

    2018-06-11

    Particulate matter with diameter less than 1 μm (PM1) has been found to be closely associated with air quality, climate change, and even adverse human health effects. However, a large gap in our knowledge concerning the large-scale distribution and variability of PM1 remains, which is expected to be bridged with advanced remote-sensing techniques. In this study, a hybrid model called principal component analysis-general regression neural network (PCA-GRNN) is developed to estimate hourly PM1 concentrations from Himawari-8 aerosol optical depth in combination with coincident ground-based PM1 measurements in China. Results indicate that the hourly estimated PM1 concentrations from satellite agree well with the measured values at the national scale, with R2 of 0.65, root-mean-square error (RMSE) of 22.0 μg/m3 and mean absolute error (MAE) of 13.8 μg/m3. On daily and monthly time scales, R2 increases to 0.70 and 0.81, respectively. Spatially, highly polluted regions of PM1 are largely located in the North China Plain and Northeast China, in accordance with the distribution of industrialisation and urbanisation. In terms of diurnal variability, PM1 concentration tends to peak during daytime rush hours. PM1 exhibits distinct seasonality, with winter having the largest concentration (31.5±3.5 μg/m3), largely due to peak combustion emissions. We further attempt to estimate PM2.5 and PM10 with the proposed method and find that the accuracies of the proposed model for PM1 and PM2.5 estimation are significantly higher than that for PM10. Our findings suggest that geostationary data are promising for estimating fine particle concentrations on large spatial scales.
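
    A GRNN is essentially Gaussian-kernel regression, so the PCA-GRNN idea can be sketched in a few lines; the predictors and the linear pseudo-PM1 relation below are stand-ins for the real Himawari-8 inputs.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def grnn_predict(X_train, y_train, X_new, sigma=0.5):
        """General regression neural network = Gaussian-kernel weighted mean
        of training targets (Nadaraya-Watson), applied here after PCA."""
        d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        return (w @ y_train) / w.sum(axis=1)

    rng = np.random.default_rng(0)
    # Hypothetical predictors: satellite AOD plus meteorological covariates.
    X = rng.standard_normal((500, 6))
    y = 30 + 15 * X[:, 0] + 3 * rng.standard_normal(500)   # pseudo PM1, ug/m3

    pca = PCA(n_components=3).fit(X[:400])     # the PCA step of PCA-GRNN
    Z_tr, Z_te = pca.transform(X[:400]), pca.transform(X[400:405])
    print(grnn_predict(Z_tr, y[:400], Z_te))
    ```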

  3. Analysis of Acoustic Depth Sounder Signals with Artificial Neural Networks

    DTIC Science & Technology

    1991-04-01

    battery pack, processor, and mode switches and (2) a stainless steel shaft 1 meter long and 27 millimeters in diameter, containing 8 milliCurie of...returned signal which is not used in conventional depth sounders due to lack of real-time tools for interpreting the information. The shape and...develop some software tools for conducting the research. Commercial programs for neural network implementation were available, but were "black box" in

  4. Estimation of optimal nasotracheal tube depth in adult patients.

    PubMed

    Ji, Sung-Mi

    2017-12-01

    The aim of this study was to estimate the optimal depth of nasotracheal tube placement. We enrolled 110 patients scheduled to undergo oral and maxillofacial surgery, requiring nasotracheal intubation. After intubation, the depth of tube insertion was measured. The neck circumference and distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch were measured. To estimate optimal tube depth, correlation and regression analyses were performed using clinical and anthropometric parameters. The mean tube depth was 28.9 ± 1.3 cm in men (n = 62), and 26.6 ± 1.5 cm in women (n = 48). Tube depth significantly correlated with height (r = 0.735, P < 0.001). Distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch correlated with depth of the endotracheal tube (r = 0.363, r = 0.362, and r = 0.546, P < 0.05). The tube depth also correlated with the sum of these distances (r = 0.646, P < 0.001). We devised the following formula for estimating tube depth: 19.856 + 0.267 × sum of the three distances (R2 = 0.432, P < 0.001). The optimal tube depth for nasotracheally intubated adult patients correlated with height and the sum of the distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch. The proposed equation would be a useful guide to determine optimal nasotracheal tube placement.
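
    The proposed equation translates directly into code; the patient measurements in the example are hypothetical.

    ```python
    def nasotracheal_depth_cm(nares_tragus, tragus_mandible, mandible_sternal):
        """Regression from the study: depth = 19.856 + 0.267 * (sum of the
        three facial/neck distances, in cm); R2 = 0.432."""
        return 19.856 + 0.267 * (nares_tragus + tragus_mandible + mandible_sternal)

    # Hypothetical patient measurements (cm).
    print(round(nasotracheal_depth_cm(11.0, 8.5, 14.0), 1))
    ```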

  5. Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.

    PubMed

    Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît

    2011-01-01

    Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms providing an estimation of the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach is characterized by a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes high-resolution surface shape description of large macromolecules practical. Experimental results show that compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
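
    The surface-based idea can be sketched as a multi-source Dijkstra restricted to the mesh graph; the toy adjacency list below stands in for an SES triangulation.

    ```python
    import heapq

    def surface_travel_depth(adj, hull_vertices):
        """Multi-source Dijkstra over the surface-mesh graph: distances are
        seeded at the vertices touching the convex hull and propagate along
        mesh edges only, never through the interior volume."""
        dist = {v: float("inf") for v in adj}
        heap = [(0.0, v) for v in hull_vertices]
        for _, v in heap:
            dist[v] = 0.0
        heapq.heapify(heap)
        while heap:
            d, v = heapq.heappop(heap)
            if d > dist[v]:
                continue
            for u, w in adj[v]:
                if d + w < dist[u]:
                    dist[u] = d + w
                    heapq.heappush(heap, (d + w, u))
        return dist

    # Toy mesh: vertex 0 lies on the hull; 3 sits at the bottom of a pocket.
    adj = {0: [(1, 1.0)],
           1: [(0, 1.0), (2, 1.0)],
           2: [(1, 1.0), (3, 0.5)],
           3: [(2, 0.5)]}
    print(surface_travel_depth(adj, [0]))   # travel depth of vertex 3 = 2.5
    ```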

  6. Optimization of a large-scale microseismic monitoring network in northern Switzerland

    NASA Astrophysics Data System (ADS)

    Kraft, Toni; Mignan, Arnaud; Giardini, Domenico

    2013-10-01

    We have developed a network optimization method for regional-scale microseismic monitoring networks and applied it to optimize the densification of the existing seismic network in northeastern Switzerland. The new network will build the backbone of a 10-yr study on the neotectonic activity of this area that will help to better constrain the seismic hazard imposed on nuclear power plants and waste repository sites. This task defined the requirements regarding location precision (0.5 km in epicentre and 2 km in source depth) and detection capability [magnitude of completeness Mc = 1.0 (ML)]. The goal of the optimization was to find the geometry and size of the network that met these requirements. Existing stations in Switzerland, Germany and Austria were considered in the optimization procedure. We based the optimization on the simulated annealing approach proposed by Hardt & Scherbaum, which aims to minimize the volume of the error ellipsoid of the linearized earthquake location problem (D-criterion). We have extended their algorithm to: calculate traveltimes of seismic body waves using a finite difference ray tracer and the 3-D velocity model of Switzerland, calculate seismic body-wave amplitudes at arbitrary stations assuming the Brune source model and using scaling and attenuation relations recently derived for Switzerland, and estimate the noise level at arbitrary locations within Switzerland using a first-order ambient seismic noise model based on 14 land-use classes defined by the EU-project CORINE and open GIS data. We calculated optimized geometries for networks with 10-35 added stations and tested the stability of the optimization result by repeated runs with changing initial conditions. Further, we estimated the attainable magnitude of completeness (Mc) for the different sized optimal networks using the Bayesian Magnitude of Completeness (BMC) method introduced by Mignan et al. The algorithm developed in this study is also applicable to smaller optimization problems, for example, small local monitoring networks. Possible applications are volcano monitoring, the surveillance of induced seismicity associated with geotechnical operations and many more. Our algorithm is especially useful to optimize networks in populated areas with heterogeneous noise conditions and if complex velocity structures or existing stations have to be considered.
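
    A sketch of the annealing loop on a deliberately simplified problem: epicentral (2-D) geometry, unit-vector Jacobians and a plain D-criterion, without the ray tracing, amplitude and noise modelling described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    cand = rng.uniform(-50, 50, (40, 2))        # candidate station sites, km
    src = np.array([0.0, 0.0])                  # a representative hypocenter

    def d_criterion(idx):
        """Smaller is better: volume of the linearized location-error
        ellipsoid ~ det of the inverse Fisher matrix from direction rows."""
        vec = cand[idx] - src
        J = vec / np.linalg.norm(vec, axis=1, keepdims=True)
        return 1.0 / np.linalg.det(J.T @ J)

    # Simulated annealing over which 8 of the 40 candidates to occupy.
    sel = rng.choice(len(cand), 8, replace=False)
    cost, T = d_criterion(sel), 1.0
    for step in range(3000):
        trial = sel.copy()
        trial[rng.integers(8)] = rng.integers(len(cand))   # swap one station
        if len(set(trial)) < 8:
            continue                                       # no duplicates
        c = d_criterion(trial)
        if c < cost or rng.random() < np.exp((cost - c) / T):
            sel, cost = trial, c                           # accept move
        T *= 0.999                                         # cooling schedule
    print(sorted(sel), cost)
    ```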

  7. The depth estimation of 3D face from single 2D picture based on manifold learning constraints

    NASA Astrophysics Data System (ADS)

    Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia

    2018-04-01

    The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; reconstructing the 3D face depth information from the selected optimal subset greatly reduces the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance between the 83 feature points of the image to be estimated and the pre-reduction feature information of each cluster center is calculated, and the category of the image is chosen as the one at minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimation while greatly reducing the computational complexity. Compared with the traditional traversal search estimation method, the error rate of the proposed method is reduced by 0.49, and the number of searches decreases with the change of the category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
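
    The subset-selection pipeline (t-SNE embedding, K-means clustering, nearest pre-reduction cluster center) can be sketched with standard scikit-learn calls; the array shapes and cluster count below are illustrative, and the final per-subset depth estimator (the method of Kong D) is not reproduced:

    ```python
    import numpy as np
    from sklearn.manifold import TSNE
    from sklearn.cluster import KMeans

    # X: (n_models, 249) key-feature vectors from the training 3D face database
    # (placeholder data for illustration).
    X = np.random.rand(500, 249)

    emb = TSNE(n_components=2, random_state=0).fit_transform(X)   # 249-D -> 2-D
    km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(emb)

    # Cluster centers mapped back to the original feature space by averaging
    # the pre-reduction feature vectors of each cluster's members.
    centers_249d = np.array([X[km.labels_ == c].mean(axis=0) for c in range(8)])

    def optimal_subset(query_249d):
        """Pick the training subset whose pre-reduction center is nearest to
        the query's feature vector; depth estimation then searches only it."""
        c = np.argmin(np.linalg.norm(centers_249d - query_249d, axis=1))
        return np.where(km.labels_ == c)[0]
    ```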

  8. Real-time source deformation modeling through GNSS permanent stations at Merapi volcano (Indonesia)

    NASA Astrophysics Data System (ADS)

    Beauducel, F.; Nurnaning, A.; Iguchi, M.; Fahmi, A. A.; Nandaka, M. A.; Sumarti, S.; Subandriyo, S.; Metaxian, J. P.

    2014-12-01

    Mt. Merapi (Java, Indonesia) is one of the most active and dangerous volcanoes in the world. A first GPS repetition network was set up in 1993 and measured periodically, allowing detection of a deep magma reservoir, quantification of the magma flux in the conduit, and identification of shallow discontinuities around the former crater (Beauducel and Cornet, 1999; Beauducel et al., 2000, 2006). After the 2010 centennial eruption, when this network was almost completely destroyed, Indonesian and Japanese teams installed a new continuous GPS network for monitoring purposes (Iguchi et al., 2011), consisting of 3 stations located on the volcano flanks, plus a reference station at the Yogyakarta Observatory (BPPTKG). In the framework of the DOMERAPI project (2013-2016) we have completed this network with 5 additional stations located in the summit area and around the volcano. The new stations are 1-Hz-sampling GNSS (GPS + GLONASS) receivers with near-real-time data streaming to the Observatory. An automatic processing chain, based on the GIPSY software, has been developed and included in the WEBOBS system (Beauducel et al., 2010); it computes precise daily moving solutions every hour and produces time series and velocity vectors over different time scales (2 months, 1 and 5 years). A real-time source modeling estimation has also been implemented. It uses the depth-varying point source solution (Mogi, 1958; Williams and Wadge, 1998) in a systematic inverse-problem model exploration that displays the source location, volume variation and a 3-D probability map. The operational system should be able to better detect and estimate the location and volume variations of possible magma sources, and to follow magma transfer towards the surface. This should help monitoring and contribute to decision making during future unrest or eruptions.
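
    A compact sketch of the forward model behind such an inversion, under the standard Mogi point-source formulas for a homogeneous elastic half-space (the grid-search misfit and its parameters are our illustration, not the WEBOBS implementation):

    ```python
    import numpy as np

    def mogi_surface(dx, dy, depth, dvol, nu=0.25):
        """Surface displacements of a Mogi (1958) point pressure source in an
        elastic half-space. dx, dy: station offsets from the source (m);
        depth: source depth (m); dvol: volume change (m^3); nu: Poisson ratio.
        Returns (ux, uy, uz) in meters."""
        r3 = (dx**2 + dy**2 + depth**2) ** 1.5
        k = (1.0 - nu) * dvol / np.pi
        return k * dx / r3, k * dy / r3, k * depth / r3

    def chi2_misfit(obs_uxyz, dx, dy, depth, dvol, sigma=0.005):
        """Misfit of one (depth, volume-change) trial against GNSS offsets;
        scanning this over a grid yields the kind of probability map the
        text describes (sigma and the grid are illustrative)."""
        ux, uy, uz = mogi_surface(dx, dy, depth, dvol)
        res = np.concatenate([obs_uxyz[0] - ux, obs_uxyz[1] - uy, obs_uxyz[2] - uz])
        return np.sum((res / sigma) ** 2)

    # Synthetic check: 4 stations, a 5 km deep source inflating by 2e6 m^3.
    dx = np.array([1000.0, -2000.0, 3000.0, 0.0])
    dy = np.array([2000.0, 1000.0, -1500.0, 4000.0])
    truth = mogi_surface(dx, dy, 5000.0, 2e6)
    print(chi2_misfit(truth, dx, dy, 5000.0, 2e6))   # ~0 at the true model
    ```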

  9. Receiver Function Study of the Crustal Structure Beneath the Northern Andes (colombia)

    NASA Astrophysics Data System (ADS)

    Poveda, E.; Monsalve, G.; Vargas-Jimenez, C. A.

    2013-05-01

    We have investigated crustal thickness beneath the Northern Andes with the teleseismic receiver function technique. We used teleseismic data recorded by an array of 18 broadband stations deployed by the Colombian Seismological Network and operated by the Colombian Geological Survey. We used the primary P-to-S conversion and crustal reverberations to estimate crustal thickness and the average Vp/Vs ratio; using Wadati diagrams, we also calculated the mean crustal Vp/Vs ratio around stations to further constrain the crustal thickness estimation. In northern Colombia, near the Caribbean coast, the estimated crustal thickness ranges from 25 to 30 km; in the Middle Magdalena Valley, crustal thickness is around 40 km; beneath the northern Central Cordillera, the Moho depth is nearly 40 km; at the Ecuador-Colombia border, beneath the western flank of the Andes, the estimated thickness is about 46 km. Receiver functions at a station on the craton in southeastern Colombia, near the foothills of the Eastern Cordillera, clearly indicate the presence of the Moho discontinuity at a depth near 36 km. The greatest values of crustal thickness occur beneath a plateau (Altiplano Cundiboyacense) on the Eastern Cordillera, near the location of Bogota, with values around 58 km. Receiver functions in the volcanic areas of the southwestern Colombian Andes do not show a systematic signal from the Moho, indicating abrupt changes in Moho geometry. Signals at stations on the Eastern Cordillera near Bogota reveal a highly complex crustal structure, with a combination of sedimentary layers up to 9 km thick, dipping interfaces, low-velocity layers, anisotropy and/or lateral heterogeneity that still remain to be evaluated. This complexity reflects the location of these stations in a region of a highly deformed fold-and-thrust belt.
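
    For a single station, the mapping from a measured Ps-P delay to crustal thickness follows the standard relation used in receiver-function studies; a minimal sketch with illustrative values (not those of the study):

    ```python
    import math

    def moho_depth(t_ps, vp, vp_vs, p):
        """Crustal thickness from the P-to-S (Ps) conversion delay time.
        t_ps: Ps-P delay (s); vp: average crustal P velocity (km/s);
        vp_vs: Vp/Vs ratio; p: ray parameter (s/km). Standard relation:
        H = t_Ps / (sqrt(Vs^-2 - p^2) - sqrt(Vp^-2 - p^2))."""
        vs = vp / vp_vs
        return t_ps / (math.sqrt(vs**-2 - p**2) - math.sqrt(vp**-2 - p**2))

    # Illustrative values: a 5 s Ps delay with Vp = 6.3 km/s, Vp/Vs = 1.75
    # and p = 0.06 s/km gives roughly 40 km.
    print(round(moho_depth(5.0, 6.3, 1.75, 0.06), 1))
    ```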

  10. Evaluation of MODIS aerosol optical depth for semi-arid environments in complex terrain

    NASA Astrophysics Data System (ADS)

    Holmes, H.; Loria Salazar, S. M.; Panorska, A. K.; Arnott, W. P.; Barnard, J.

    2015-12-01

    The use of satellite remote sensing to estimate spatially resolved ground-level air pollutant concentrations is increasing due to advancements in remote sensing technology and the limited number of surface observations. Satellite retrievals provide global, spatiotemporal air quality information and are used to track plumes, estimate human exposures, model emissions, and determine sources (i.e., natural versus anthropogenic) in regulatory applications. Ground-level PM2.5 concentrations can be estimated using columnar aerosol optical depth (AOD) from MODIS, where the satellite retrieval serves as a spatial surrogate to simulate surface PM2.5 gradients. The spatial statistical models and MODIS AOD retrieval algorithms have been evaluated for the dark, vegetated eastern US, while the semi-arid western US remains an understudied region with added complexity due to heterogeneous emissions, smoke from wildfires, and complex terrain. The objective of this work is to evaluate the uncertainty of MODIS AOD retrievals by comparison with columnar AOD and surface PM2.5 measurements from the AERONET and EPA networks. Data are analyzed from multiple stations in California and Nevada over three years during which four major wildfires occurred. Results indicate that MODIS retrievals fail to estimate column-integrated aerosol pollution in the summer months. This is further investigated by quantifying the statistical relationships between MODIS AOD, AERONET AOD, and surface PM2.5 concentrations. Data analysis indicates that the distribution of MODIS AOD is significantly (p<0.05) different from that of AERONET AOD. Further, using the results of the distributional and association analyses, the impacts of MODIS AOD uncertainties on the spatial gradients are evaluated. Additionally, the relationships between these uncertainties and physical parameters in the retrieval algorithm (e.g., surface reflectance, Ångström extinction exponent) are discussed.
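
    The reported distributional difference suggests a two-sample test of MODIS against AERONET AOD; the abstract does not name the exact test, so the Kolmogorov-Smirnov version below, with synthetic stand-in data, is only one plausible reading:

    ```python
    import numpy as np
    from scipy import stats

    # Placeholder co-located daily AOD samples; in practice these come from
    # MODIS retrievals and AERONET ground observations at matched times.
    modis_aod = np.random.lognormal(mean=-1.8, sigma=0.7, size=400)
    aeronet_aod = np.random.lognormal(mean=-2.0, sigma=0.6, size=400)

    # Two-sample Kolmogorov-Smirnov test: do the AOD distributions differ?
    stat, p_value = stats.ks_2samp(modis_aod, aeronet_aod)
    print(f"KS statistic = {stat:.3f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Distributions differ significantly at the 5% level.")
    ```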

  11. GFZ Wireless Seismic Array (GFZ-WISE), a Wireless Mesh Network of Seismic Sensors: New Perspectives for Seismic Noise Array Investigations and Site Monitoring

    PubMed Central

    Picozzi, Matteo; Milkereit, Claus; Parolai, Stefano; Jaeckel, Karl-Heinz; Veit, Ingo; Fischer, Joachim; Zschau, Jochen

    2010-01-01

    Over the last few years, the analysis of seismic noise recorded by two-dimensional arrays has been confirmed to be capable of deriving the subsoil shear-wave velocity structure down to several hundred meters depth. In fact, using just a few minutes of seismic noise recordings and combining this with the well-known horizontal-to-vertical method, it has also been shown that it is possible to investigate the average one-dimensional velocity structure below an array of stations in urban areas with sufficient resolution to depths that would be prohibitive with active-source array surveys, while in addition reducing the number of boreholes required to be drilled for site-effect analysis. However, the high cost of standard seismological instrumentation limits the number of sensors generally available for two-dimensional array measurements (i.e., of the order of 10), limiting the resolution of the estimated shear-wave velocity profiles. Therefore, new themes in site-effect estimation research with two-dimensional arrays involve the development and application of low-cost instrumentation, which potentially allows the performance of dense-array measurements, and the development of dedicated signal-analysis procedures for rapid and robust estimation of shear-wave velocity profiles. In this work, we present novel low-cost wireless instrumentation for dense two-dimensional ambient seismic noise array measurements that allows the real-time analysis of the surface wavefield and the rapid estimation of the local shear-wave velocity structure for site response studies. We first introduce the general philosophy of the new system, as well as the hardware and software that form the novel instrument, which we have tested in laboratory and field studies. PMID:22319298

  12. Global distribution of plant-extractable water capacity of soil

    USGS Publications Warehouse

    Dunne, K.A.; Willmott, C.J.

    1996-01-01

    Plant-extractable water capacity of soil is the amount of water that can be extracted from the soil to fulfill evapotranspiration demands. It is often assumed to be spatially invariant in large-scale computations of the soil-water balance. Empirical evidence, however, suggests that this assumption is incorrect. In this paper, we estimate the global distribution of the plant-extractable water capacity of soil. A representative soil profile, characterized by horizon (layer) particle size data and thickness, was created for each soil unit mapped by FAO (Food and Agriculture Organization of the United Nations)/Unesco. Soil organic matter was estimated empirically from climate data. Plant rooting depths and ground coverages were obtained from a vegetation characteristic data set. At each 0.5° × 0.5° grid cell where vegetation is present, unit available water capacity (cm water per cm soil) was estimated from the sand, clay, and organic content of each profile horizon, and integrated over horizon thickness. Summation of the integrated values over the lesser of profile depth and root depth produced an estimate of the plant-extractable water capacity of soil. The global average of the estimated plant-extractable water capacities of soil is 8.6 cm (Greenland, Antarctica and bare soil areas excluded). Estimates are less than 5, 10 and 15 cm over approximately 30, 60, and 89 percent of the area, respectively. Estimates reflect the combined effects of soil texture, soil organic content, and plant root depth or profile depth. The most influential and uncertain parameter is the depth over which the plant-extractable water capacity of soil is computed, which is usually limited by root depth. Soil texture exerts a lesser, but still substantial, influence. Organic content, except where concentrations are very high, has relatively little effect.
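
    The core computation is a depth-limited integration of unit available water capacity over the soil horizons; a minimal sketch with an invented three-horizon profile:

    ```python
    def plant_extractable_capacity(horizons, root_depth_cm):
        """Integrate unit available water capacity (cm water per cm soil) over
        horizon thicknesses, down to the lesser of profile depth and root depth.
        horizons: list of (thickness_cm, unit_awc), with unit_awc derived from
        texture and organic content as the paper describes."""
        total, depth = 0.0, 0.0
        for thickness, unit_awc in horizons:
            if depth >= root_depth_cm:
                break
            usable = min(thickness, root_depth_cm - depth)
            total += unit_awc * usable
            depth += thickness
        return total

    # Illustrative profile: three horizons, roots to 80 cm -> capacity in cm.
    profile = [(20, 0.12), (40, 0.10), (60, 0.08)]
    print(plant_extractable_capacity(profile, 80.0))   # 2.4 + 4.0 + 1.6 = 8.0
    ```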

  13. Regional waveform calibration in the Pamir-Hindu Kush region

    NASA Astrophysics Data System (ADS)

    Zhu, Lupei; Helmberger, Donald V.; Saikia, Chandan K.; Woods, Bradley B.

    1997-10-01

    Twelve moderate-magnitude earthquakes (mb 4-5.5) in the Pamir-Hindu Kush region are investigated to determine their focal mechanisms and to relocate them using their regional waveform records at two broadband arrays, the Kyrgyzstan Regional Network (KNET), and the 1992 Pakistan Himalayas seismic experiment array (PAKH) in northern Pakistan. We use the "cut-and-paste" source estimation technique to invert the whole broadband waveforms for mechanisms and depths, assuming a one-dimensional velocity model developed for the adjacent Tibetan plateau. For several large events the source mechanisms obtained agree with those available from the Harvard centroid moment tensor (CMT) solutions. An advantage of using regional broadband waveforms is that focal depths can be better constrained either from amplitude ratios of Pnl to surface waves for crustal events or from time separation between the direct P and the shear-coupled P wave (sPn + sPmP) for mantle events. All the crustal events are relocated at shallower depths compared with their International Seismological Centre bulletin or Harvard CMT depths. After the focal depths are established, the events are then relocated horizontally using their first-arrival times. Only minor offsets in epicentral location are found for all mantle events and the bigger crustal events, while rather large offsets (up to 30 km) occur for the smaller crustal events. We also tested the performance of waveform inversion using only two broadband stations, one from the KNET array in the north of the region and one from the PAKH array in the south. We found that this geometry is adequate for determining focal depths and mechanisms of moderate size earthquakes in the Pamir-Hindu Kush region.

  14. Bathymetric surveys of Morse and Geist Reservoirs in central Indiana made with acoustic Doppler current profiler and global positioning system technology, 1996

    USGS Publications Warehouse

    Wilson, J.T.; Morlock, S.E.; Baker, N.T.

    1997-01-01

    Acoustic Doppler current profiler, global positioning system, and geographic information system technology were used to map the bathymetry of Morse and Geist Reservoirs, two artificial lakes used for public water supply in central Indiana. The project was a pilot study to evaluate the use of the technologies for bathymetric surveys. Bathymetric surveys were last conducted in 1978 on Morse Reservoir and in 1980 on Geist Reservoir; those surveys were done with conventional methods using networks of fathometer transects. The 1996 bathymetric surveys produced updated estimates of reservoir volumes that will serve as base-line data for future estimates of storage capacity and sedimentation rates. An acoustic Doppler current profiler and global positioning system receiver were used to collect water-depth and position data from April 1996 through October 1996. All water-depth and position data were imported to a geographic information system to create a data base. The geographic information system then was used to generate water-depth contour maps and to compute the volumes for each reservoir. The computed volume of Morse Reservoir was 22,820 acre-feet (7.44 billion gallons), with a surface area of 1,484 acres. The computed volume of Geist Reservoir was 19,280 acre-feet (6.29 billion gallons), with a surface area of 1,848 acres. The computed 1996 reservoir volumes are less than the design volumes and indicate that sedimentation has occurred in both reservoirs. Cross sections were constructed from the computer-generated surfaces for 1996 and compared to the fathometer profiles from the 1978 and 1980 surveys; analysis of these cross sections also indicates that some sedimentation has occurred in both reservoirs. The acoustic Doppler current profiler, global positioning system, and geographic information system technologies described in this report produced bathymetric maps and volume estimates more efficiently and with comparable or greater resolution than conventional bathymetry methods.
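
    Once the depth grid exists in the GIS, volume and surface area reduce to sums over wet cells; a minimal sketch (the grid layout and units are assumptions, not the report's software):

    ```python
    import numpy as np

    def volume_and_area(depth_grid_m, cell_size_m):
        """Reservoir volume and surface area from a gridded bathymetric surface
        (NaN marks cells outside the reservoir)."""
        cell_area = cell_size_m ** 2
        volume_m3 = np.nansum(depth_grid_m) * cell_area      # sum(depth * area)
        area_m2 = np.count_nonzero(~np.isnan(depth_grid_m)) * cell_area
        return volume_m3, area_m2

    # Conversions used to compare with the report's units:
    M3_PER_ACRE_FT = 1233.48
    M2_PER_ACRE = 4046.86
    ```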

  15. A coupled geomorphic and ecological model of tidal marsh evolution.

    PubMed

    Kirwan, Matthew L; Murray, A Brad

    2007-04-10

    The evolution of tidal marsh platforms and interwoven channel networks cannot be addressed without treating the two-way interactions that link biological and physical processes. We have developed a 3D model of tidal marsh accretion and channel network development that couples physical sediment transport processes with vegetation biomass productivity. Tidal flow tends to cause erosion, whereas vegetation biomass, a function of bed surface depth below high tide, influences the rate of sediment deposition and slope-driven transport processes such as creek bank slumping. With a steady, moderate rise in sea level, the model builds a marsh platform and channel network with accretion rates everywhere equal to the rate of sea-level rise, meaning water depths and biological productivity remain temporally constant. An increase in the rate of sea-level rise, or a reduction in sediment supply, causes marsh-surface depths, biomass productivity, and deposition rates to increase while simultaneously causing the channel network to expand. Vegetation on the marsh platform can promote a metastable equilibrium where the platform maintains elevation relative to a rapidly rising sea level, although disturbance to vegetation could cause irreversible loss of marsh habitat.
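
    The platform feedback described here can be caricatured in a few lines: depth below high tide grows with sea-level rise and shrinks with depth-dependent, vegetation-enhanced deposition. The parabolic biomass curve and the rate constants below are toy choices for illustration, not the calibrated 3D model:

    ```python
    import numpy as np

    def biomass(depth, d_max=1.0, b_peak=1.0):
        """Parabolic biomass response: zero at the surface and at d_max,
        peaking at intermediate depth below high tide (a common toy form)."""
        return np.clip(4 * b_peak * depth * (d_max - depth) / d_max**2, 0, None)

    def run(slr_m_yr, years=300, dt=0.1, depth0=0.3,
            min_dep=0.001, veg_dep=0.004):
        """dD/dt = SLR - deposition(D): depth grows with sea level and shrinks
        as mineral plus vegetation-enhanced deposition builds the platform."""
        d = depth0
        for _ in range(int(years / dt)):
            deposition = min_dep + veg_dep * biomass(d)   # m/yr
            d = max(d + (slr_m_yr - deposition) * dt, 0.0)
        return d

    print(run(0.003))   # moderate SLR: depth equilibrates (deposition = SLR)
    print(run(0.008))   # SLR exceeds peak deposition: the platform drowns
    ```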

  16. Accelerations from the September 5, 2012 (Mw=7.6) Nicoya, Costa Rica Earthquake

    NASA Astrophysics Data System (ADS)

    Simila, G. W.; Quintero, R.; Burgoa, B.; Mohammadebrahim, E.; Segura, J.

    2013-05-01

    Since 1984, the Seismic Network of the Volcanological and Seismological Observatory of Costa Rica, Universidad Nacional (OVSICORI-UNA) has been recording and registering the seismicity in Costa Rica. Before September 2012, the earthquakes registered by this seismic network in northwestern Costa Rica were moderate to small, except the Cóbano earthquake of March 25, 1990, 13:23, Mw 7.3, lat. 9.648, long. 84.913, depth 20 km; a subduction quake at the entrance of the Gulf of Nicoya and generated peak intensities in the range of MM = VIII near the epicentral area and VI-VII in the Central Valley of Costa Rica. Six years before the installation of the seismic network, OVSICORI-UNA registered two subduction earthquakes in northwestern Costa Rica, specifically on August 23, 1978, at 00:38:32 and 00:50:29 with magnitudes Mw 7.0 (HRVD), Ms 7.0 (ISC) and depths of 58 and 69 km, respectively (EHB Bulletin). On September 5, 2012, at 14:42:02.8 UTC, the seismic network OVSICORI-UNA registered another large subduction earthquake in Nicoya peninsula, northwestern Costa Rica, located 29 km south of Samara, with a depth of 21 km and magnitude Mw 7.6, lat. 9.6392, long. 85.6167. This earthquake was caused by the subduction of the Cocos plate under the Caribbean plate in northwestern Costa Rica. This earthquake was felt throughout the country and also in much of Nicaragua. The instrumental intensity map for the Nicoya earthquake indicates that the earthquake was felt with an intensity of VII-VIII in the Puntarenas and Nicoya Peninsulas, in an area between Liberia, Cañas, Puntarenas, Cabo Blanco, Carrillo, Garza, Sardinal, and Tamarindo in Guanacaste; Nicoya city being the place where the maximum reported intensity of VIII is most notable. An intensity of VIII indicates that damage estimates are moderate to severe, and intensity VII indicates that damage estimates are moderate. According to the National Emergency Commission of Costa Rica, 371 affected communities were reported; most reports were of damage to homes, bridges, roads, aqueducts, schools and public buildings. There were 12 structures reported with damages in hospitals and health care sites, specifically in Hojancha, Nandayure, Nicoya, Santa Cruz and Puntarenas. There are no reports of deaths from the earthquake and only 78 injured and a total of 1474 people mobilized, which includes hospital evacuations and preventive transfers. 223 schools were reported with various damages, mostly in Santa Cruz, Nicoya, Carrillo, Nandayure, and Puntarenas. A total of 98 homes were reported with severe damage, 805 with moderate damage and 87 with minor damage. In general, the buildings in the Nicoya Peninsula (where the highest intensities were reported) endured the Nicoya Earthquake. Also, acceleration data from OVSICORI, UCR, LIS-UCR, and the Seismic Strong Motion Array Project (SSMAP) show a range of accelerations (500 - 10 cm/sec2) for distance range of 30-250 km, respectively.

  17. Modeling Gas and Gas Hydrate Accumulation in Marine Sediments Using a K-Nearest Neighbor Machine-Learning Technique

    NASA Astrophysics Data System (ADS)

    Wood, W. T.; Runyan, T. E.; Palmsten, M.; Dale, J.; Crawford, C.

    2016-12-01

    Natural gas (primarily methane) and gas hydrate accumulations require certain bio-geochemical as well as physical conditions, some of which are poorly sampled and/or poorly understood. We exploit recent advances in the prediction of seafloor porosity and heat flux via machine learning techniques (e.g., random forests and Bayesian networks) to predict the occurrence of gas and subsequently gas hydrate in marine sediments. The prediction (in effect, a guided interpolation) of the key parameters in this study uses a K-nearest-neighbor (KNN) technique. KNN requires only minimal pre-processing of the data and predictors, and requires minimal run-time input, so the results are almost entirely data-driven. Specifically, we use new estimates of sedimentation rate and sediment type, along with recently derived compaction modeling, to estimate profiles of porosity and age. We combined the compaction with seafloor heat flux to estimate temperature with depth and geologic age, which, with estimates of organic carbon and models of methanogenesis, yield limits on the production of methane. Results include geospatial predictions of gas (and gas hydrate) accumulations, with quantitative estimates of uncertainty. The Generic Earth Modeling System (GEMS) we have developed to derive the machine learning estimates is modular and easily updated with new algorithms or data.
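
    A distance-weighted KNN interpolation of the kind described needs very little code; the predictor set, target variable, and the neighbor-spread uncertainty proxy below are placeholders rather than the GEMS configuration:

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    # Placeholder predictor grids: each row is a seafloor site described by,
    # e.g., sedimentation rate, sediment type, heat flux, water depth.
    X_known = np.random.rand(1000, 4)       # sites with measurements
    y_known = np.random.rand(1000)          # e.g., total organic carbon (wt%)
    X_target = np.random.rand(5000, 4)      # sites to interpolate

    # Distance-weighted KNN: guided interpolation needing almost no tuning.
    knn = KNeighborsRegressor(n_neighbors=10, weights="distance")
    knn.fit(X_known, y_known)
    y_pred = knn.predict(X_target)

    # A simple uncertainty proxy: spread of the neighbors' values at each site.
    dist, idx = knn.kneighbors(X_target)
    y_std = y_known[idx].std(axis=1)
    ```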

  18. Surface Soil Moisture Estimates Across China Based on Multi-satellite Observations and A Soil Moisture Model

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Yang, Tao; Ye, Jinyin; Li, Zhijia; Yu, Zhongbo

    2017-04-01

    Soil moisture is a key variable that regulates exchanges of water and energy between land surface and atmosphere. Soil moisture retrievals based on microwave satellite remote sensing have made it possible to estimate global surface (up to about 10 cm in depth) soil moisture routinely. Although there are many satellites operating, including NASA's Soil Moisture Active Passive mission (SMAP), ESA's Soil Moisture and Ocean Salinity mission (SMOS), JAXA's Advanced Microwave Scanning Radiometer 2 mission (AMSR2), and China's Fengyun (FY) missions, key differences exist between different satellite-based soil moisture products. In this study, we applied a single-channel soil moisture retrieval model forced by multiple sources of satellite brightness temperature observations to estimate consistent daily surface soil moisture across China at a spatial resolution of 25 km. By utilizing observations from multiple satellites, we are able to estimate daily soil moisture across the whole domain of China. We further developed a daily soil moisture accounting model and applied it to downscale the 25-km satellite-based soil moisture to 5 km. Comparison with observations from a dense observation network implemented in Anhui Province, China, shows that our estimated soil moisture agrees reasonably well with the observations (RMSE < 0.1 and r > 0.8).

  19. Using Elman recurrent neural networks with the conjugate gradient algorithm in determining the amount of anesthetic medicine to be applied.

    PubMed

    Güntürkün, Rüştü

    2010-08-01

    In this study, Elman recurrent neural networks trained with the conjugate gradient algorithm have been used to determine the depth of anesthesia in the continuation stage of anesthesia and to estimate the amount of medicine to be applied at that moment. Feed-forward neural networks are also used for comparison, and the conjugate gradient algorithm is compared with back propagation (BP) for training the networks. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. EEG data have been recorded with a Nihon Kohden 9200 brand 22-channel EEG device; the international 8-channel bipolar 10-20 montage system (8 TB-b system) has been used in assembling the recording electrodes, and the data have been sampled once every 2 milliseconds. The network has been designed so as to have 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The inputs are the power spectral density (PSD) values of 10-second EEG segments in the 1-50 Hz frequency range, together with the ratio of the total PSD power of the current segment in that range to the total PSD power of an EEG segment recorded prior to anesthesia.
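
    One of the described input features, the ratio of current to pre-anesthesia band power in the 1-50 Hz range, can be computed from Welch PSD estimates; a sketch with synthetic signals (the windowing choices are ours, not the paper's):

    ```python
    import numpy as np
    from scipy.signal import welch

    FS = 500   # 2 ms sampling interval -> 500 Hz

    def band_power(x, fs=FS, lo=1.0, hi=50.0):
        """Total Welch-PSD power of a segment in the lo-hi band; since the
        ratio below uses the same frequency grid, the bin width cancels."""
        f, pxx = welch(x, fs=fs, nperseg=fs * 2)
        return pxx[(f >= lo) & (f <= hi)].sum()

    def power_ratio(segment, baseline):
        """Ratio of current 10 s segment band power to a pre-anesthesia
        baseline segment: one candidate network input described in the text."""
        return band_power(segment) / band_power(baseline)

    rng = np.random.default_rng(1)
    print(power_ratio(rng.standard_normal(10 * FS), rng.standard_normal(10 * FS)))
    ```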

  20. Utility of MODIS Aerosol Optical Depth for Estimating PM2.5 Exposure in Environmental Public Health Surveillance

    NASA Technical Reports Server (NTRS)

    Al-Hamdan, Mohammad; Crosson, William; Limaye, Ashutosh; Rickman, Doug; Quattrochi, Dale; Estes, Maury; Adeniyi, Kafayat; Qualters, Judith; Niskar, Amanda Sue

    2006-01-01

    As part of the National Environmental Public Health Tracking Network (EPHTN), the National Center for Environmental Health (NCEH) at the Centers for Disease Control and Prevention (CDC) is leading a project called Health and Environment Linked for Information Exchange (HELIX-Atlanta). The goal of developing the National Environmental Public Health Tracking Network is to improve the health of communities. Currently, few systems exist at the state or national level to concurrently track many of the exposures and health effects that might be associated with environmental hazards. An additional challenge is estimating exposure to environmental hazards such as particulate matter whose aerodynamic diameter is less than or equal to 2.5 micrometers (PM(2.5)). HELIX-Atlanta's goal is to examine the feasibility of building an integrated electronic health and environmental data network in five counties of Metropolitan Atlanta, GA (Clayton, Cobb, DeKalb, Fulton, and Gwinnett counties). Under HELIX-Atlanta, pilot projects are being conducted to develop methods to characterize exposure; link health and environmental data; analyze the relationship between health and environmental factors; and communicate findings. NASA Marshall Space Flight Center (NASA/MSFC) is collaborating with CDC to combine NASA earth science satellite observations related to air quality with environmental monitoring data to model surface estimates of PM(2.5) concentrations that can be linked with clinic visits for asthma. From 1999-2000 there were over 9,400 hospitalizations per year in Georgia with asthma as the primary diagnosis. The majority of these hospitalizations occurred in medical facilities in the five most populous Metro-Atlanta counties. Hospital charges resulting from asthma in Georgia are approximately $59 million annually. There is evidence in the research literature that asthmatic persons are at increased risk of developing asthma exacerbations with exposure to environmental factors, including PM(2.5). Thus, HELIX-Atlanta is focusing on methods for characterizing population exposure to PM(2.5) for the Atlanta metropolitan area that could be used in ongoing surveillance. While use of the Air Quality System (AQS) PM(2.5) data alone could meet HELIX-Atlanta specifications, there are only five AQS sites in the Atlanta area, so the spatial coverage is not ideal. Also, the AQS ground observations are made at time intervals ranging from one hour to six days, leaving some temporal gaps. NASA Moderate Resolution Imaging Spectroradiometer (MODIS) satellite Aerosol Optical Depth (AOD) data have the potential for estimating daily ground-level PM(2.5) at 10 km resolution over the metropolitan Atlanta area, supplementing the AQS ground observations and filling their spatial and temporal gaps.

  1. A COMPARISON OF AEROSOL OPTICAL DEPTH SIMULATED USING CMAQ WITH SATELLITE ESTIMATES

    EPA Science Inventory

    Satellite data provide new opportunities to study the regional distribution of particulate matter. The aerosol optical depth (AOD), a derived estimate from the satellite-measured irradiance, can be compared against a model-derived estimate to provide an evaluation of the columnar ...

  2. The capability of professional- and lay-rescuers to estimate the chest compression-depth target: a short, randomized experiment.

    PubMed

    van Tulder, Raphael; Laggner, Roberta; Kienbacher, Calvin; Schmid, Bernhard; Zajicek, Andreas; Haidvogel, Jochen; Sebald, Dieter; Laggner, Anton N; Herkner, Harald; Sterz, Fritz; Eisenburger, Philip

    2015-04-01

    In CPR, sufficient compression depth is essential. The American Heart Association ("at least 5 cm", AHA-R) and the European Resuscitation Council ("at least 5 cm, but not to exceed 6 cm", ERC-R) recommendations differ, and both are rarely achieved in practice. This study aims to investigate the effects of differing target-depth instructions on the compression depth performance of professional and lay rescuers. 110 professional rescuers and 110 lay rescuers were randomized (1:1, 4 groups) to estimate the AHA-R or ERC-R distance on a paper sheet (along a given horizontal axis) using a pencil and to perform chest compressions according to AHA-R or ERC-R on a manikin. Distance estimation and compression depth were the outcome variables. Professional rescuers estimated the distance correctly according to AHA-R in 19/55 (34.5%) and to ERC-R in 20/55 (36.4%) cases (p=0.84). Professional rescuers achieved correct compression depth according to AHA-R in 39/55 (70.9%) and to ERC-R in 36/55 (65.4%) cases (p=0.97). Lay rescuers estimated the distance correctly according to AHA-R in 18/55 (32.7%) and to ERC-R in 20/55 (36.4%) cases (p=0.59). Lay rescuers yielded correct compression depth according to AHA-R in 39/55 (70.9%) and to ERC-R in 26/55 (47.3%) cases (p=0.02). Professional and lay rescuers thus have severe difficulties in correctly estimating distance on a sheet of paper. Professional rescuers are able to meet the AHA-R and ERC-R targets equally well, whereas for lay rescuers the AHA-R was associated with significantly higher success rates. The inability to estimate distance could explain the failure to perform chest compressions appropriately. For teaching lay rescuers, the AHA-R, with no upper limit of compression depth, might be preferable.

  3. Spatial prediction of ground subsidence susceptibility using an artificial neural network.

    PubMed

    Lee, Saro; Park, Inhye; Choi, Jong-Kuk

    2012-02-01

    Ground subsidence in abandoned underground coal mine areas can result in loss of life and property. We analyzed ground subsidence susceptibility (GSS) around abandoned coal mines in Jeong-am, Gangwon-do, South Korea, using artificial neural network (ANN) and geographic information system approaches. Spatial data of subsidence area, topography, and geology, as well as various ground-engineering data, were collected and used to create a raster database of relevant factors for a GSS map. Eight major factors causing ground subsidence were extracted from the existing ground subsidence area: slope, depth of coal mine, distance from pit, groundwater depth, rock-mass rating, distance from fault, geology, and land use. Areas of ground subsidence were randomly divided into a training set to analyze GSS using the ANN and a test set to validate the predicted GSS map. Weights of each factor's relative importance were determined by the back-propagation training algorithm and applied to the input factors. The GSS was then calculated using the weights, and GSS maps were created. The process was repeated ten times with different training data sets to check the stability of the analysis model. The map was validated using area-under-the-curve analysis with the ground subsidence areas that had not been used to train the model. The validation showed prediction accuracies between 94.84 and 95.98%, representing overall satisfactory agreement. Among the input factors, "distance from fault" had the highest average weight (1.5477), indicating that this factor was most important. The generated maps can be used to estimate hazards to people, property, and existing infrastructure, such as the transportation network, and as part of land-use and infrastructure planning.

  4. A Preliminary Theory of Dark Network Resilience

    ERIC Educational Resources Information Center

    Bakker, Rene M.; Raab, Jorg; Milward, H. Brinton

    2012-01-01

    A crucial contemporary policy question for governments across the globe is how to cope with international crime and terrorist networks. Many such "dark" networks--that is, networks that operate covertly and illegally--display a remarkable level of resilience when faced with shocks and attacks. Based on an in-depth study of three cases…

  5. Networking Course Syllabus in Accredited Library and Information Science Programs: A Comparative Analysis Study

    ERIC Educational Resources Information Center

    Abouserie, Hossam Eldin Mohamed Refaat

    2009-01-01

    The study investigated networking courses offered in accredited Library and Information Science schools in the United States in 2009. The study analyzed and compared network syllabi according to Course Syllabus Evaluation Rubric to obtain in-depth understanding of basic features and characteristics of networking courses taught. The study embraced…

  6. How and What Do Academics Learn through Their Personal Networks

    ERIC Educational Resources Information Center

    Pataraia, Nino; Margaryan, Anoush; Falconer, Isobel; Littlejohn, Allison

    2015-01-01

    This paper investigates the role of personal networks in academics' learning in relation to teaching. Drawing on in-depth interviews with 11 academics, this study examines, first, how and what academics learn through their personal networks; second, the perceived value of networks in relation to academics' professional development; and, third,…

  7. Depth-dependence of time-lapse seismic velocity change detected by a joint interferometric analysis of vertical array data

    NASA Astrophysics Data System (ADS)

    Sawazaki, K.; Saito, T.; Ueno, T.; Shiomi, K.

    2015-12-01

    In this study, utilizing the depth-sensitivity of interferometric waveforms recorded by co-located Hi-net and KiK-net sensors, we separate the responsible depth ranges of the seismic velocity change associated with the M6.3 earthquake that occurred on November 22, 2014, in central Japan. The Hi-net station N.MKGH is located about 20 km northeast of the epicenter, with the seismometer installed at 150 m depth. At the same site, KiK-net has two strong-motion seismometers installed at depths of 0 and 150 m. To estimate the average velocity change around the N.MKGH station, we apply the stretching technique to the auto-correlation function (ACF) of ambient noise recorded by the Hi-net sensor. To evaluate the sensitivity of the Hi-net ACF to velocity changes above and below the 150 m depth, we perform a numerical wave propagation simulation using a 2-D FDM. To obtain the velocity change above the 150 m depth, we measure the response waveform from 150 m depth to the surface by computing the deconvolution function (DCF) of earthquake records obtained by the two KiK-net vertical-array sensors. The background annual velocity variation is subtracted from the detected velocity change. From the KiK-net DCF records, the velocity reduction ratio above the 150 m depth is estimated to be 4.2% and 3.1% in the periods of 1-7 days and 7 days - 4 months after the mainshock, respectively. From the Hi-net ACF records, the velocity reduction ratio is estimated to be 2.2% and 1.8% in the same time periods, respectively. This difference in the estimated velocity reduction ratio is attributed to the depth-dependence of the velocity change. Using the depth sensitivity obtained from the numerical simulation, we estimate the velocity reduction ratio below the 150 m depth to be lower than 1.0% for both time periods. Thus the significant velocity reduction and recovery are observed above the 150 m depth only, likely caused by the strong ground motion of the mainshock and subsequent healing in the shallow ground.
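
    The stretching technique itself is a small grid search: resample the reference trace by candidate stretch factors and keep the one maximizing correlation with the current trace. A minimal sketch; the sign convention for dv/v is the commonly used one and should be checked against the study's definition:

    ```python
    import numpy as np

    def stretching_dv_v(ref, cur, dt, eps_max=0.05, n_eps=201):
        """Grid-search the stretching factor that best maps the reference ACF
        onto the current ACF; under the common convention dv/v = -epsilon
        (a velocity drop stretches the coda toward later times)."""
        t = np.arange(len(ref)) * dt
        best_eps, best_cc = 0.0, -1.0
        for eps in np.linspace(-eps_max, eps_max, n_eps):
            stretched = np.interp(t * (1.0 + eps), t, ref)   # resample reference
            cc = np.corrcoef(stretched, cur)[0, 1]
            if cc > best_cc:
                best_eps, best_cc = eps, cc
        return -best_eps, best_cc   # (dv/v, correlation at the optimum)
    ```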

  8. Comparison of Climatological Planetary Boundary Layer Depth Estimates Using the GEOS-5 AGCM

    NASA Technical Reports Server (NTRS)

    Mcgrath-Spangler, Erica Lynn; Molod, Andrea M.

    2014-01-01

    Planetary boundary layer (PBL) processes, including those influencing the PBL depth, control many aspects of weather and climate, and accurate models of these processes are important for forecasting future changes. However, evaluation of model estimates of PBL depth is difficult because no consensus on a PBL depth definition currently exists, and the various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observing System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to produce PBL depth climatologies and are evaluated and compared here. All seven methods evaluate the same atmosphere, so all differences are related solely to the definition chosen. These methods depend on the scalar diffusivity, bulk and local Richardson numbers, and the diagnosed horizontal turbulent kinetic energy (TKE). Results are aggregated by climate class in order to allow broad generalizations. The various PBL depth estimates are similar at midday, with some exceptions. One method based on horizontal TKE produces deeper PBL depths in winter, associated with winter storms. In warm, moist conditions, the method based on a bulk Richardson number gives results that are shallower than those given by the methods based on the scalar diffusivity. The impact of turbulence driven by radiative cooling at cloud top is most significant during the evening transition and over several ocean regions, and methods sensitive to this cooling produce deeper PBL depths where it is most active. Additionally, Richardson number-based methods collapse better at night than methods that depend on the scalar diffusivity. This feature potentially affects tracer transport.
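
    As one concrete example of the definitions compared here, a bulk-Richardson-number PBL height can be computed from a single model column; the critical value of 0.25 and the interpolation are conventional choices, not necessarily those used in GEOS-5:

    ```python
    import numpy as np

    def pbl_height_bulk_ri(z, theta_v, u, v, ri_crit=0.25, g=9.81):
        """PBL depth as the lowest level where the bulk Richardson number,
        computed from the surface, first exceeds a critical value.
        z: heights above ground (m); theta_v: virtual potential temperature (K);
        u, v: wind components (m/s). Arrays ordered bottom-up."""
        wind2 = np.maximum(u**2 + v**2, 1e-6)          # avoid division by zero
        ri_b = g * (theta_v - theta_v[0]) * (z - z[0]) / (theta_v[0] * wind2)
        above = np.nonzero(ri_b > ri_crit)[0]
        if len(above) == 0:
            return z[-1]                                # no crossing in column
        k = above[0]
        if k == 0:
            return z[0]
        # linear interpolation between the bracketing levels
        f = (ri_crit - ri_b[k - 1]) / (ri_b[k] - ri_b[k - 1])
        return z[k - 1] + f * (z[k] - z[k - 1])
    ```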

  9. Topology reduction in deep convolutional feature extraction networks

    NASA Astrophysics Data System (ADS)

    Wiatowski, Thomas; Grohs, Philipp; Bölcskei, Helmut

    2017-08-01

    Deep convolutional neural networks (CNNs) used in practice employ potentially hundreds of layers and tens of thousands of nodes. Such network sizes entail significant computational complexity due to the large number of convolutions that need to be carried out; in addition, a large number of parameters needs to be learned and stored. Very deep and wide CNNs may therefore not be well suited to applications operating under severe resource constraints, as is the case, e.g., in low-power embedded and mobile platforms. This paper aims at understanding the impact of CNN topology, specifically depth and width, on the network's feature extraction capabilities. We address this question for the class of scattering networks that employ either Weyl-Heisenberg filters or wavelets, the modulus non-linearity, and no pooling. The exponential feature-map energy decay results in Wiatowski et al., 2017, are generalized to O(a^-N), where an arbitrary decay factor a > 1 can be realized through suitable choice of the Weyl-Heisenberg prototype function or the mother wavelet. We then show how networks of fixed (possibly small) depth N can be designed to guarantee that ((1 - ɛ) · 100)% of the input signal's energy is contained in the feature vector. Based on the notion of operationally significant nodes, we characterize, partly rigorously and partly heuristically, the topology-reducing effects of (effectively) band-limited input signals, band-limited filters, and feature map symmetries. Finally, for networks based on Weyl-Heisenberg filters, we determine the prototype function bandwidth that minimizes, for fixed network depth N, the average number of operationally significant nodes per layer.
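
    The depth guarantee in the abstract can be made concrete with a one-line bound. A minimal worked version, assuming the residual feature-map energy after N layers is bounded by a^-N with unit constant (the constant is our simplification):

    ```latex
    % Depth needed for the feature vector to retain a (1 - \epsilon) fraction
    % of the input energy, assuming the residual energy after N layers is
    % bounded by a^{-N} (unit constant taken for illustration):
    \[
      a^{-N} \le \epsilon
      \quad\Longleftrightarrow\quad
      N \ge \log_a\frac{1}{\epsilon} = \frac{\ln(1/\epsilon)}{\ln a}.
    \]
    % Example: a = 2 and \epsilon = 0.01 give N >= log2(100), about 6.64,
    % so a depth of N = 7 layers suffices.
    ```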

  10. Salient regions detection using convolutional neural networks and color volume

    NASA Astrophysics Data System (ADS)

    Liu, Guang-Hai; Hou, Yingkun

    2018-03-01

    Convolutional neural networks are an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on the LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues, and depth cues and color volume are then integrated into saliency detection following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on the MSRA1000 and ECSSD datasets.

  11. Vein networks in hydrothermal systems provide constraints for the monitoring of active volcanoes.

    PubMed

    Cucci, Luigi; Di Luccio, Francesca; Esposito, Alessandra; Ventura, Guido

    2017-03-10

    Vein networks affect the hydrothermal systems of many volcanoes, and variations in their arrangement may precede hydrothermal and volcanic eruptions. However, the long-term evolution of vein networks is often unknown because data are lacking. We analyze two gypsum-filled vein networks affecting the hydrothermal field of the active Lipari volcanic island (Italy) to reconstruct the dynamics of the hydrothermal processes. The older network (E1) consists of sub-vertical, N-S striking veins; the younger network (E2) consists of veins without a preferred strike and dip. E2 veins have larger aperture/length ratios, fracture density, dilatancy, and finite extension than E1. The fluid overpressure of E2 is larger than that of the E1 veins, whereas the hydraulic conductance is lower. The larger number of fracture intersections in E2 slows down fluid movement and favors fluid interference effects and pressurization. The depths of the E1 and E2 hydrothermal sources are 0.8 km and 4.6 km, respectively. The decrease in fluid flux, the deepening of the hydrothermal source, and the pressurization increase in E2 are likely associated with a magma reservoir. The decrease of fluid discharge in hydrothermal fields may reflect pressurization at depth, potentially preceding hydrothermal explosions. This has significant implications for the long-term monitoring strategy of volcanoes.

  12. Using protein-protein interactions for refining gene networks estimated from microarray data by Bayesian networks.

    PubMed

    Nariai, N; Kim, S; Imoto, S; Miyano, S

    2004-01-01

    We propose a statistical method to estimate gene networks from DNA microarray data and protein-protein interactions. Because physical interactions between proteins or multiprotein complexes are likely to regulate biological processes, using only mRNA expression data is not sufficient for estimating a gene network accurately. Our method adds knowledge about protein-protein interactions to the estimation method of gene networks under a Bayesian statistical framework. In the estimated gene network, a protein complex is modeled as a virtual node based on principal component analysis. We show the effectiveness of the proposed method through the analysis of Saccharomyces cerevisiae cell cycle data. The proposed method improves the accuracy of the estimated gene networks, and successfully identifies some biological facts.

  13. A COMPARISON OF AEROSOL OPTICAL DEPTH SIMULATED USING CMAQ WITH SATELLITE ESTIMATES

    EPA Science Inventory

    Satellite data provide new opportunities to study the regional distribution of particulate matter.

    The aerosol optical depth (AOD), a derived estimate from the satellite-measured radiance, can be compared against model estimates to provide an evaluation of the columnar ae...

  14. Estimation of infiltration and hydraulic resistance in furrow irrigation, with infiltration dependent on flow depth

    USDA-ARS?s Scientific Manuscript database

    The estimation of parameters of a flow-depth dependent furrow infiltration model and of hydraulic resistance, using irrigation evaluation data, was investigated. The estimated infiltration parameters are the saturated hydraulic conductivity and the macropore volume per unit area. Infiltration throu...

  15. Toward tsunami early warning system in Indonesia by using rapid rupture durations estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madlazim

    2012-06-20

    Indonesia has had the Indonesian Tsunami Early Warning System (Ina-TEWS) since 2008. Ina-TEWS uses automatic processing of hypocenter, Mwp, Mw (mB) and Mj: if an earthquake occurs in the ocean with depth < 70 km and magnitude > 7, Ina-TEWS announces an early warning that the earthquake may generate a tsunami. However, these announcements are still not accurate. The purpose of this research is to estimate the rupture duration of large Indonesian earthquakes that occurred in the Indian Ocean, Java, Timor Sea, Banda Sea, Arafura Sea and Pacific Ocean. We analyzed at least 330 vertical seismograms recorded by the IRIS-DMC network using a direct procedure for rapid assessment of earthquake tsunami potential, based on simple measures on P-wave vertical velocity seismograms, in particular the high-frequency apparent rupture duration, Tdur. Tdur can be related to the critical parameters rupture length (L), depth (z), and shear modulus (μ), while it may also be related to width (W), slip (D), z or μ. Our analysis shows that rupture duration has a stronger influence on tsunami generation than Mw and depth, and gives more information on tsunami impact, Mo/μ, depth and size than Mw and other currently used discriminants. The longer the rupture duration, the shallower the earthquake source. A rupture duration greater than 50 s implies a depth less than 50 km, Mw greater than 7, and a longer rupture length, because Tdur is proportional to L and hence to Mo/μ; rupture-duration information thus constrains all four parameters. We also suggest that tsunami potential is not directly related to the faulting type of the source, and that the events with rupture durations greater than 50 s generated tsunamis. With real-time seismogram data available, the rapid calculation of the rupture-duration discriminant can be completed within 4-5 min after an earthquake occurs and thus can aid effective, accurate and reliable tsunami early warning for the Indonesia region.
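
    The resulting discriminant is deliberately simple; a sketch of the decision rule, with the 50 s duration threshold taken from the text and the remaining cutoffs as illustrative companions:

    ```python
    def tsunami_flag(t_dur_s, depth_km, mw):
        """Simple discriminant in the spirit of the text: a long apparent
        rupture duration (> 50 s) from a shallow, large event signals tsunami
        potential. Thresholds follow the abstract; combining them this way
        is our illustration."""
        return t_dur_s > 50.0 and depth_km < 50.0 and mw > 7.0

    print(tsunami_flag(95.0, 20.0, 7.8))   # True: long, shallow, large
    print(tsunami_flag(30.0, 60.0, 7.2))   # False: short and deep
    ```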

  16. Spectral analysis of aeromagnetic profiles for depth estimation: principles, software, and practical application

    USGS Publications Warehouse

    Sadek, H.S.; Rashad, S.M.; Blank, H.R.

    1984-01-01

    If proper account is taken of the constraints of the method, it is capable of providing depth estimates to within an accuracy of about 10 percent under suitable circumstances. The estimates are unaffected by source magnetization and are relatively insensitive to assumptions as to source shape or distribution. The validity of the method is demonstrated by analyses of synthetic profiles and profiles recorded over Harrat Rahat, Saudi Arabia, and Diyur, Egypt, where source depths have been proved by drilling.
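
    The underlying rule of spectral depth estimation can be shown in a few lines: fit the slope of the log power spectrum over a chosen wavenumber band and halve it (with k in rad/km; other wavenumber conventions change the factor). A synthetic check, not the paper's software:

    ```python
    import numpy as np

    def source_depth_from_spectrum(k_rad_per_km, power):
        """Classic spectral depth rule: for sources near depth h the log power
        spectrum is roughly linear, ln P(k) = c - 2 h k (k in rad/km), so the
        depth is minus half the slope. Fit over a hand-picked wavenumber band."""
        slope, _ = np.polyfit(k_rad_per_km, np.log(power), 1)
        return -slope / 2.0

    # Synthetic check: a 2 km source depth should be recovered.
    k = np.linspace(0.1, 1.0, 50)                 # rad/km
    p = 5.0 * np.exp(-2 * 2.0 * k)                # ln P = ln 5 - 4k
    print(source_depth_from_spectrum(k, p))       # ~2.0
    ```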

  17. Estimation of global snow cover using passive microwave data

    NASA Astrophysics Data System (ADS)

    Chang, Alfred T. C.; Kelly, Richard E.; Foster, James L.; Hall, Dorothy K.

    2003-04-01

    This paper describes an approach to estimate global snow cover using satellite passive microwave data. Snow cover is detected using the high-frequency scattering signal from natural microwave radiation, which is observed by passive microwave instruments. Developed for the retrieval of global snow depth and snow water equivalent using the Advanced Microwave Scanning Radiometer EOS (AMSR-E), the algorithm uses passive microwave radiation along with a microwave emission model and a snow grain growth model to estimate snow depth. The microwave emission model is based on the Dense Media Radiative Transfer (DMRT) model, which uses the quasi-crystalline approach and sticky particle theory to predict the brightness temperature of a single-layered snowpack. The grain growth model is a generic single-layer model based on an empirical approach to predict snow grain size evolution with time. Gridded to the 25 km EASE-Grid projection, a daily record of Special Sensor Microwave Imager (SSM/I) snow depth estimates was generated for December 2000 to March 2001. The estimates are tested using ground measurements from two continental-scale river catchments (the Nelson River and the Ob River in Russia). This regional-scale testing of the algorithm shows that the average daily snow depth retrieval standard error between estimated and measured snow depths ranges from 0 cm to 40 cm relative to point observations. Bias characteristics are different for each basin. A fraction of the error is related to uncertainties about the grain growth initialization states and about grain size changes through the winter season, which directly affect the parameterization of the snow depth estimation in the DMRT model. Also, the algorithm does not include a correction for forest cover, and this effect is clearly observed in the retrieval. Finally, error is also related to scale differences between in situ ground measurements and area-integrated satellite estimates. With AMSR-E data, improvements to snow depth and water equivalent estimates are expected, since AMSR-E has twice the spatial resolution of the SSM/I and is able to better characterize the subnivean snow environment from an expanded range of microwave frequencies.
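
    For contrast with the DMRT/grain-growth retrieval described above, the heritage spectral-difference algorithm (Chang et al., 1987) that it refines fits in one line; the fixed 1.59 cm/K coefficient assumes an unchanging grain radius, which is exactly the limitation the grain growth model addresses:

    ```python
    def snow_depth_chang(tb19h_k, tb37h_k, coeff_cm_per_k=1.59):
        """Heritage spectral-difference retrieval: SD (cm) ~ 1.59 * (Tb19H - Tb37H).
        This is the classic fixed-grain-size algorithm, not the DMRT-based
        one described in the abstract."""
        return max(coeff_cm_per_k * (tb19h_k - tb37h_k), 0.0)

    print(snow_depth_chang(245.0, 225.0))   # a 20 K difference -> ~32 cm
    ```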

  18. Representing distributed cognition in complex systems: how a submarine returns to periscope depth.

    PubMed

    Stanton, Neville A

    2014-01-01

    This paper presents the Event Analysis of Systemic Teamwork (EAST) method as a means of modelling distributed cognition in systems. The method comprises three network models (i.e. task, social and information) and their combination. This method was applied to the interactions between the sound room and control room in a submarine, following the activities of returning the submarine to periscope depth. This paper demonstrates three main developments in EAST. First, building the network models directly, without reference to the intervening methods. Second, the application of analysis metrics to all three networks. Third, the combination of the aforementioned networks in different ways to gain a broader understanding of the distributed cognition. Analyses have shown that EAST can be used to gain both qualitative and quantitative insights into distributed cognition. Future research should focus on the analyses of network resilience and modelling alternative versions of a system.

  19. Using wireless sensor networks to improve understanding of rain-on-snow events across the Sierra Nevada

    NASA Astrophysics Data System (ADS)

    Maurer, T.; Avanzi, F.; Oroza, C.; Malek, S. A.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.

    2017-12-01

    We use data gathered from Wireless Sensor Networks (WSNs) between 2008 and 2017 to investigate the temporal/spatial patterns of rain-on-snow events in three river basins of California's Sierra Nevada. Rain-on-snow transitions occur across a broad elevation range (several hundred meters), both between storms and within a given storm, creating an opportunity to use spatially and temporally dense data to forecast and study them. WSNs collect snow depth; meteorological data; and soil moisture and temperature data across relatively dense sensor clusters. Ten to twelve measurement nodes per cluster are placed across 1-km2 areas in locations representative of snow patterns at larger scales. Combining precipitation and snow data from snow-pillow and climate stations with an estimation of dew-point temperature from WSNs, we determine the frequency, timing, and geographic extent of rain-on-snow events. We compare these results to WSN data to evaluate the impact of rain-on-snow events on snowpack energy balance, density, and depth as well as on soil moisture. Rain-on-snow events are compared to dry warm-weather days to identify the relative importance of rain and radiation as the primary energy input to the snowpack for snowmelt generation. An intercomparison of rain-on-snow events for the WSNs in the Feather, American, and Kings River basins captures the behavior across a 2° latitudinal range of the Sierra Nevada. Rain-on-snow events are potentially a more important streamflow generation mechanism in the lower-elevation Feather River basin. Snowmelt response to rain-on-snow events changes throughout the wet season, with later events resulting in more melt due to snow isothermal conditions, coarser grain size, and more-homogeneous snow stratigraphy. Regardless of snowmelt response, rain-on-snow events tend to result in decreasing snow depth and a corresponding increase in snow density. Our results demonstrate that strategically placed WSNs can provide the necessary data at high temporal resolution to investigate how hydrologic responses evolve in both space and time, data not available from operational networks.
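
    A rain-on-snow detector of the kind described needs a dew-point estimate and a snowpack test; the Magnus approximation and the thresholds below are standard but illustrative choices, since the abstract does not specify them:

    ```python
    import math

    def dew_point_c(temp_c, rh_pct, a=17.625, b=243.04):
        """Magnus-formula dew point (one common approximation; the network's
        own estimation procedure is not detailed in the abstract)."""
        g = math.log(rh_pct / 100.0) + a * temp_c / (b + temp_c)
        return b * g / (a - g)

    def is_rain_on_snow(temp_c, rh_pct, precip_mm, snow_depth_m,
                        td_thresh_c=0.5, min_precip_mm=1.0):
        """Flag an interval as rain-on-snow when precipitation falls as rain
        (dew point above a threshold) onto an existing snowpack; thresholds
        are illustrative."""
        return (precip_mm >= min_precip_mm and snow_depth_m > 0.0
                and dew_point_c(temp_c, rh_pct) > td_thresh_c)

    print(is_rain_on_snow(3.0, 90.0, 6.0, 0.8))   # True: warm, wet storm on snow
    ```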

  20. Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases

    PubMed Central

    Zhang, Hongpo

    2018-01-01

    Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and predictions from shallow neural networks have large variance. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined independently, and unsupervised training and supervised optimization are combined. This ensures the accuracy of model prediction while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and the variance of prediction accuracy was 5.78 and 4.46, respectively. PMID:29854369
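
    The "depth determined by reconstruction error" idea can be sketched by greedily stacking RBMs until the layer-wise reconstruction error stops improving; the stopping rule and hyperparameters below are our simplification of the paper's procedure:

    ```python
    import numpy as np
    from scipy.special import expit
    from sklearn.neural_network import BernoulliRBM

    def recon_error(rbm, X):
        """Mean-field reconstruction error of one RBM layer."""
        h = expit(X @ rbm.components_.T + rbm.intercept_hidden_)
        v = expit(h @ rbm.components_ + rbm.intercept_visible_)
        return np.mean((X - v) ** 2)

    def grow_dbn(X, max_layers=5, n_hidden=64, tol=1e-3, seed=0):
        """Add RBM layers while each new layer still reduces its input's
        reconstruction error by more than tol; a simple reading of the
        depth-selection criterion described in the abstract."""
        layers, data, prev = [], X, np.inf
        for _ in range(max_layers):
            rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                               n_iter=20, random_state=seed).fit(data)
            err = recon_error(rbm, data)
            if layers and prev - err <= tol:
                break                      # no meaningful improvement: stop
            layers.append(rbm)
            data, prev = rbm.transform(data), err
        return layers

    X = np.random.rand(200, 30)            # placeholder for scaled features
    print(len(grow_dbn(X)), "layers kept")
    ```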

  1. Monocular Depth Perception and Robotic Grasping of Novel Objects

    DTIC Science & Technology

    2009-06-01

    resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Reinforcement learning ... learning still make sense in these settings? Since many of the cues that are useful for estimating depth can be re-created in synthetic images, we...supervised learning approach to this problem, and use a Markov Random Field (MRF) to model the scene depth as a function of the image features. We show

  2. Epidural Catheter Placement in Morbidly Obese Parturients with the Use of an Epidural Depth Equation prior to Ultrasound Visualization

    PubMed Central

    Singh, Sukhdip; Wirth, Keith M.; Phelps, Amy L.; Badve, Manasi H.; Shah, Tanmay H.; Vallejo, Manuel C.

    2013-01-01

    Background. Previously, Balki determined that the Pearson correlation coefficient with the use of ultrasound (US) was 0.85 in morbidly obese parturients. We aimed to determine whether the use of the epidural depth equation (EDE) in conjunction with US can provide better clinical correlation in estimating the distance from the skin to the epidural space in morbidly obese parturients. Methods. One hundred sixty morbidly obese (≥40 kg/m2) parturients requesting labor epidural analgesia were enrolled. Before epidural catheter placement, the EDE was used to estimate the depth to the epidural space. This estimate was used to help visualize the epidural space in the transverse and midline longitudinal US views and to measure the depth to the epidural space. The measured epidural depth was made available to the resident trainee before needle insertion. The actual needle depth (ND) to the epidural space was recorded. Results. Pearson's correlation coefficients comparing actual needle depth (ND) versus US-estimated depth to the epidural space in the longitudinal median and transverse planes were 0.905 (95% CI: 0.873 to 0.929) and 0.899 (95% CI: 0.865 to 0.925), respectively. Conclusion. Use of the epidural depth equation (EDE) in conjunction with the longitudinal and transverse US views results in better clinical correlation than the use of US alone. PMID:23983645

  3. Deep learning-based depth estimation from a synthetic endoscopy image training set

    NASA Astrophysics Data System (ADS)

    Mahmood, Faisal; Durr, Nicholas J.

    2018-03-01

    Colorectal cancer is the fourth leading cause of cancer deaths worldwide. The detection and removal of premalignant lesions through an endoscopic colonoscopy is the most effective way to reduce colorectal cancer mortality. Unfortunately, conventional colonoscopy has an almost 25% polyp miss rate, in part due to the lack of depth information and contrast of the surface of the colon. Estimating depth using conventional hardware and software methods is challenging in endoscopy due to limited endoscope size and deformable mucosa. In this work, we use a joint deep learning and graphical model-based framework for depth estimation from endoscopy images. Since depth is an inherently continuous property of an object, it can easily be posed as a continuous graphical learning problem. Unlike previous approaches, this method does not require hand-crafted features. Large amounts of augmented data are required to train such a framework. Since there is limited availability of colonoscopy images with ground-truth depth maps and colon texture is highly patient-specific, we generated training images using a synthetic, texture-free colon phantom to train our models. Initial results show that our system can estimate depths for phantom test data with a relative error of 0.164. The resulting depth maps could prove valuable for 3D reconstruction and automated Computer Aided Detection (CAD) to assist in identifying lesions.
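
    The relative-error figure quoted above (0.164 on phantom data) is a simple metric to reproduce; a minimal sketch with synthetic depth maps follows.

```python
# Sketch: mean relative depth error between predicted and ground-truth
# depth maps; both maps here are synthetic placeholders.
import numpy as np

def mean_relative_error(d_pred, d_true, eps=1e-8):
    return float(np.mean(np.abs(d_pred - d_true) / (d_true + eps)))

rng = np.random.default_rng(1)
d_true = rng.uniform(5, 60, size=(64, 64))       # ground-truth depth, mm
d_pred = d_true * (1 + rng.normal(0, 0.15, d_true.shape))
print(f"relative error: {mean_relative_error(d_pred, d_true):.3f}")
```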

  4. Precipitation-chemistry measurements from the California Acid Deposition Monitoring Program, 1985-1990

    USGS Publications Warehouse

    Blanchard, Charles L.; Tonnessen, Kathy A.

    1993-01-01

    The configuration of the California Acid Deposition Monitoring Program (CADMP) precipitation network is described and quality assurance results summarized. Comparison of CADMP and the National Acid Deposition Program/National Trends Network (NADP/NTN) data at four parallel sites indicated that mean depth-weighted differences were less than 3 μeq ℓ⁻¹ for all ions, being statistically significant for ammonium, sulfate and hydrogen ion. These apparently small differences were 15-30% of the mean concentrations of ammonium, sulfate and hydrogen ion. Mean depth-weighted concentrations and mass deposition rates for the period 1985-1990 are summarized; the latter were highest either where concentrations or precipitation depths were relatively high.
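
    The depth-weighted means used in such comparisons weight each sample's concentration by its precipitation depth; a minimal sketch with invented values follows.

```python
# Sketch: precipitation-depth-weighted mean concentration, the statistic
# used above to compare CADMP and NADP/NTN records; sample values invented.
import numpy as np

def depth_weighted_mean(conc, depth):
    """Weight each event's concentration by its precipitation depth."""
    conc, depth = np.asarray(conc), np.asarray(depth)
    return float(np.sum(conc * depth) / np.sum(depth))

conc_ueq_per_l = [12.0, 8.5, 20.1, 5.3]     # e.g. sulfate, ueq/L per event
depth_mm = [4.2, 18.0, 2.5, 30.7]           # event precipitation depth
print(f"{depth_weighted_mean(conc_ueq_per_l, depth_mm):.2f} ueq/L")
```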

  5. The World Optical Depth Research and Calibration Center (WORCC) quality assurance and quality control of GAW-PFR AOD measurements

    NASA Astrophysics Data System (ADS)

    Kazadzis, Stelios; Kouremeti, Natalia; Nyeki, Stephan; Gröbner, Julian; Wehrli, Christoph

    2018-02-01

    The World Optical Depth Research Calibration Center (WORCC) is a section within the World Radiation Center at Physikalisches-Meteorologisches Observatorium (PMOD/WRC), Davos, Switzerland, established after the recommendations of the World Meteorological Organization for calibration of aerosol optical depth (AOD)-related Sun photometers. WORCC is mandated to develop new methods for instrument calibration, to initiate homogenization activities among different AOD networks and to run a network (GAW-PFR) of Sun photometers. In this work we describe the calibration hierarchy and methods used under WORCC and the basic procedures, tests and processing techniques in order to ensure the quality assurance and quality control of the AOD-retrieved data.
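
    WORCC's calibration hierarchy is not reproduced here, but the Beer-Lambert retrieval underlying PFR-type Sun photometry is compact. In the sketch below, V0 is assumed to come from a Langley calibration, the Rayleigh and trace-gas optical depths are passed in rather than computed, and all numbers are illustrative.

```python
# Sketch: Sun-photometer AOD from the Beer-Lambert law,
# tau_total = ln(V0/V)/m, minus the non-aerosol contributions.
import numpy as np

def aod(v, v0, airmass, tau_rayleigh, tau_gas=0.0):
    """Aerosol optical depth from a direct-sun signal v and the
    extraterrestrial (Langley-calibrated) signal v0."""
    tau_total = np.log(v0 / v) / airmass
    return tau_total - tau_rayleigh - tau_gas

# Illustrative numbers near 500 nm (tau_rayleigh ~ 0.14 at sea level).
print(f"AOD = {aod(v=0.71, v0=1.00, airmass=2.0, tau_rayleigh=0.143):.3f}")
```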

  6. A regional-scale network for geoid monitoring and satellite gravimetry validation

    NASA Astrophysics Data System (ADS)

    Winester, D.; Pool, D.; Kennedy, J.

    2010-12-01

    In the past two decades, improved measurements of acceleration due to gravity have allowed for accurate detection of temporal gravity change. Terrestrial absolute gravimeters (for example, Micro-g LaCoste FG5 or A-10) can sense changes of gravity induced by elevation or mass changes, including local effects that may bias regional studies. Satellite instrumentation (e.g., GRACE) can detect large-scale mass changes on a regular basis. However, the Nyquist wave number for satellite observations is often much too small for the size of regional studies, and satellites are also limited by their deployment lifetimes. Both techniques are used to (in)validate change models generated from other geophysical observations, including water storage (underground and glacial), geoid definition, isostatic adjustments, and tectonic (magmatic and faulting) activity. The gap between terrestrial and satellite gravity observations (and between satellite missions) might be bridged by developing a terrestrial network of sites using various observation techniques that define a representative sample of a given regional study area. This information could then be statistically extrapolated to the extent of the region. The Southern High Plains Aquifer is such a region, since it has widespread, relatively uniform geology, has relatively flat topography, and is well monitored for groundwater levels and soil moisture. Each site would have extensive instrumentation for monitoring, at a minimum, gravity (periodic and continuous) using absolute and tidal gravimeters, soil moisture, precipitation, depths to water in wells, evapotranspiration, air pressure, and land surface position (GPS). Where possible, the network would build upon existing data-collection infrastructure. Preferably, the region would also have seismic tomography or crustal seismic reflection observations to characterize Moho-depth mass changes and have regional Bouguer anomaly mapping. In addition to information on local hydrology and geology, data collection would allow for characterization of local seasonal corrections, earth tides, atmospheric loading, and episodic slip. No test network has yet been funded, but cost and manpower can be estimated. Such a network would rely on co-operation between various federal, state, local, and university groups.

  7. BERG2 Micro-computer Estimation of Freeze and Thaw Depths and Thaw Consolidation (PDF file)

    DOT National Transportation Integrated Search

    1989-06-01

    The BERG2 microcomputer program uses a methodology similar to the Modified Berggren method (Aldrich and Paynter, 1953) to estimate the freeze and thaw depths in layered soil systems. The program also provides an estimate of the thaw consolidation in ic...
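
    BERG2 itself is not shown in this record, but Berggren-style methods refine a Stefan-type closed form with a correction factor λ. The sketch below is that underlying closed form under simplifying assumptions (single homogeneous layer); the soil values and λ are illustrative, not BERG2's.

```python
# Sketch: Stefan-type freeze-depth estimate, X = lam * sqrt(2 k F / L).
import math

def freeze_depth_m(k, freezing_index_degC_days, latent_heat_J_m3, lam=0.8):
    """k: frozen-soil thermal conductivity, W/(m K)
    freezing_index_degC_days: surface freezing index, degC-days
    latent_heat_J_m3: volumetric latent heat of the soil water, J/m^3
    lam: Berggren-style correction factor (illustrative)."""
    F = freezing_index_degC_days * 86400.0   # degC-days -> degC-seconds
    return lam * math.sqrt(2.0 * k * F / latent_heat_J_m3)

# Silty soil, ~20% moisture: k ~ 1.5 W/(m K), L ~ 1.0e8 J/m^3 (invented)
print(f"freeze depth ~ {freeze_depth_m(1.5, 800.0, 1.0e8):.2f} m")
```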

  8. Neural network modeling and an uncertainty analysis in Bayesian framework: A case study from the KTB borehole site

    NASA Astrophysics Data System (ADS)

    Maiti, Saumen; Tiwari, Ram Krishna

    2010-10-01

    A new probabilistic approach based on the concept of Bayesian neural network (BNN) learning theory is proposed for decoding litho-facies boundaries from well-log data. We show how a multi-layer-perceptron neural network model can be employed in a Bayesian framework to classify changes in litho-log successions. The method is then applied to the German Continental Deep Drilling Program (KTB) well-log data for classification and uncertainty estimation in the litho-facies boundaries. In this framework, the a posteriori distribution of network parameters is estimated via the principle of Bayesian probabilistic theory, and an objective function is minimized following the scaled conjugate gradient optimization scheme. For the model development, we impose a suitable criterion, which provides probabilistic information by emulating different combinations of synthetic data. Uncertainty in the relationship between the data and the model space is appropriately taken care of by assuming a Gaussian a priori distribution of network parameters (e.g., synaptic weights and biases). Prior to applying the new method to the real KTB data, we tested the proposed method on synthetic examples to examine the sensitivity of neural network hyperparameters in prediction. Within this framework, we examine the stability and efficiency of this new probabilistic approach using different kinds of synthetic data assorted with different levels of correlated noise. Our data analysis suggests that the designed network topology based on the Bayesian paradigm is steady up to nearly 40% correlated noise; however, adding more noise (~50% or more) degrades the results. We perform uncertainty analyses on training, validation, and test data sets with and without intrinsic noise by making the Gaussian approximation of the a posteriori distribution about the peak model. We present a standard deviation error-map at the network output corresponding to the three types of litho-facies present over the entire litho-section of the KTB. Comparisons of the maximum a posteriori geological sections constructed here, based on the maximum a posteriori probability distribution, with the available geological information and the existing geophysical findings suggest that the BNN results reveal some additional finer details in the KTB borehole data at certain depths, which appear to be of some geological significance. We also demonstrate that the proposed BNN approach is superior to the conventional artificial neural network in terms of both avoiding "over-fitting" and aiding uncertainty estimation, which are vital for meaningful interpretation of geophysical records. Our analyses demonstrate that the BNN-based approach renders a robust means for the classification of complex changes in the litho-facies successions and thus could provide a useful guide for understanding the crustal inhomogeneity and the structural discontinuity in many other tectonically complex regions.

  9. An Experimental Seismic Data and Parameter Exchange System for Interim NEAMTWS

    NASA Astrophysics Data System (ADS)

    Hanka, W.; Hoffmann, T.; Weber, B.; Heinloo, A.; Hoffmann, M.; Müller-Wrana, T.; Saul, J.

    2009-04-01

    In 2008, GFZ Potsdam started to operate its global earthquake monitoring system as an experimental seismic background data centre for the interim NEAMTWS (NE Atlantic and Mediterranean Tsunami Warning System). The SeisComP3 (SC3) software, developed within the GITEWS (German Indian Ocean Tsunami Early Warning System) project, was extended to test the export and import of individual processing results within a cluster of SC3 systems. The initiated NEAMTWS SC3 cluster presently consists of the 24/7 seismic services at IMP, IGN, LDG/EMSC and KOERI, whereas INGV and NOA are still pending. The GFZ virtual real-time seismic network (GEOFON Extended Virtual Network - GEVN) was substantially extended by many stations from Western European countries, optimizing the station distribution for NEAMTWS purposes. To augment the public seismic network (VEBSN - Virtual European Broadband Seismic Network), some attached centres provided additional private stations for NEAMTWS usage. In parallel to the data collection by Internet, the GFZ VSAT hub for the secured data collection of the EuroMED GEOFON and NEAMTWS backbone network stations became operational and the first data links were established. In 2008 the experimental system could already prove its performance, since a number of relevant earthquakes happened in the NEAMTWS area. The results are very promising in terms of speed, as the automatic alerts (reliable solutions based on a minimum of 25 stations and disseminated by emails and SMS) were issued within 2 1/2 to 4 minutes for Greece and 5 minutes for Iceland. They are also promising in terms of accuracy, since epicenter coordinates, depth and magnitude estimates were sufficiently accurate from the very beginning, usually do not differ substantially from the final solutions, and provide a good starting point for the operations of the interim NEAMTWS. However, although an automatic seismic system is a good first step, 24/7 manned RTWCs are mandatory for regular manual verification of the automatic seismic results and the estimation of the tsunami potential for a given event.

  10. Alternatives to Full-Depth Patching on Resurfacing Projects

    DOT National Transportation Integrated Search

    1993-09-01

    The vast majority of Illinois' non-interstate network is constructed of jointed Portland cement concrete (PCC). Typically, Illinois' first significant rehabilitation efforts for jointed PCC pavements are in the form of full-depth bituminous concrete ...

  11. Real-time hydraulic interval state estimation for water transport networks: a case study

    NASA Astrophysics Data System (ADS)

    Vrachimis, Stelios G.; Eliades, Demetrios G.; Polycarpou, Marios M.

    2018-03-01

    Hydraulic state estimation in water distribution networks is the task of estimating water flows and pressures in the pipes and nodes of the network based on some sensor measurements. This requires a model of the network as well as knowledge of demand outflow and tank water levels. Due to modeling and measurement uncertainty, standard state estimation may result in inaccurate hydraulic estimates without any measure of the estimation error. This paper describes a methodology for generating hydraulic state bounding estimates based on interval bounds on the parametric and measurement uncertainties. The estimation error bounds provided by this method can be applied to determine the existence of unaccounted-for water in water distribution networks. As a case study, the method is applied to a modified transport network in Cyprus, using actual data in real time.
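
    The paper's full estimator is not reproduced here, but the bounding idea can be illustrated with interval arithmetic on a single node's mass balance; all intervals below are invented.

```python
# Sketch: interval propagation through a node mass balance. Bounds on
# uncertain demands and tank filling bound the feasible pipe inflow;
# a metered inflow outside that interval flags unaccounted-for water.

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

# Demand intervals at three nodes, L/s (model/measurement uncertainty).
demands = [(4.0, 5.0), (9.5, 11.0), (2.0, 2.6)]
tank_fill = (1.0, 1.4)                       # tank level sensor tolerance

inflow = tank_fill
for d in demands:
    inflow = interval_add(inflow, d)
print(f"feasible pipe inflow: [{inflow[0]:.1f}, {inflow[1]:.1f}] L/s")
```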

  12. The importance of atmospheric correction for airborne hyperspectral remote sensing of shallow waters: application to depth estimation

    NASA Astrophysics Data System (ADS)

    Castillo-López, Elena; Dominguez, Jose Antonio; Pereda, Raúl; de Luis, Julio Manuel; Pérez, Ruben; Piña, Felipe

    2017-10-01

    Accurate determination of water depth is indispensable in multiple aspects of civil engineering (dock construction, dikes, submarine outfalls, trench control, etc.). Different accuracies are required depending on the application, and these determine the type of atmospheric correction most appropriate for depth estimation. Accuracy in bathymetric information is highly dependent on the atmospheric correction made to the imagery. The reduction of effects such as glint and cross-track illumination in homogeneous shallow-water areas improves the results of the depth estimations. The aim of this work is to assess the best atmospheric correction method for the estimation of depth in shallow waters, considering that reflectance values cannot be greater than 1.5% because otherwise the background would not be seen. This paper addresses the use of hyperspectral imagery for quantitative bathymetric mapping and explores one of the most common problems when attempting to extract depth information in conditions of variable water types and bottom reflectances. The current work assesses the accuracy of some classical bathymetric algorithms (Polcyn-Lyzenga, Philpot, Benny-Dawson, Hamilton, principal component analysis) when four different atmospheric correction methods are applied and water depth is derived. No atmospheric correction is valid for all types of coastal waters, but in heterogeneous shallow water the 6S atmospheric correction model offers good results.
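
    Of the algorithms listed, the Polcyn-Lyzenga family fits a log-linear model between depth and deep-water-corrected reflectance. The sketch below fits such a model on synthetic two-band data; the coefficients, bands and noise level are invented.

```python
# Sketch: Lyzenga-style log-linear bathymetry,
# z ~ a0 + sum_i a_i * ln(R_i - R_inf_i), fitted at known-depth points.
import numpy as np

def fit_lyzenga(R, R_inf, z):
    X = np.log(R - R_inf)                    # (n_points, n_bands)
    A = np.column_stack([np.ones(len(z)), X])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef

rng = np.random.default_rng(2)
z = rng.uniform(1, 10, 50)                           # calibration depths, m
R_inf = np.array([0.02, 0.01])                       # deep-water signal
R = R_inf + np.exp(-0.3 * z[:, None]) * np.array([0.9, 1.1]) \
    + rng.normal(0, 0.001, (50, 2))                  # synthetic reflectance
coef = fit_lyzenga(R, R_inf, z)
z_hat = np.column_stack([np.ones(50), np.log(R - R_inf)]) @ coef
print(f"RMSE: {np.sqrt(np.mean((z_hat - z) ** 2)):.2f} m")
```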

  13. Criteria of Effectiveness for Network Delivery of Citizens Information through Libraries. Final Report.

    ERIC Educational Resources Information Center

    Chen, Ching-chih; Hernon, Peter

    This two-part publication reports on a study of consumer information delivery by library and non-library networks, which involved an extensive literature review, a telephone survey of 620 library networks, the development of an assessment model for the effectiveness of network information delivery, the development of an in-depth guide for…

  14. Channel morphometry, sediment transport, and implications for tectonic activity and surficial ages of Titan basins

    USGS Publications Warehouse

    Cartwright, R.; Clayton, J.A.; Kirk, R.L.

    2011-01-01

    Fluvial features on Titan and drainage basins on Earth are remarkably similar despite differences in gravity and surface composition. We determined network bifurcation (Rb) ratios for five Titan and three terrestrial analog basins. Tectonically-modified Earth basins have Rb values greater than the expected range (3.0-5.0) for dendritic networks; comparisons with Rb values determined for Titan basins, in conjunction with similarities in network patterns, suggest that portions of Titan's north polar region are modified by tectonic forces. Sufficient elevation data existed to calculate bed slope and potential fluvial sediment transport rates in at least one Titan basin, indicating that 75 mm water ice grains (observed at the Huygens landing site) should be readily entrained given sufficient flow depths of liquid hydrocarbons. Volumetric sediment transport estimates suggest that ~6700-10,000 Titan years (~2.0-3.0 × 10⁵ Earth years) are required to erode this basin to its minimum relief (assuming constant 1 m and 1.5 m flows); these lowering rates increase to ~27,000-41,000 Titan years (~8.0-12.0 × 10⁵ Earth years) when flows in the north polar region are restricted to summer months.

  15. Updated operational protocols for the U.S. Geological Survey Precipitation Chemistry Quality Assurance Project in support of the National Atmospheric Deposition Program

    USGS Publications Warehouse

    Wetherbee, Gregory A.; Martin, RoseAnn

    2017-02-06

    The U.S. Geological Survey Branch of Quality Systems operates the Precipitation Chemistry Quality Assurance Project (PCQA) for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) and National Atmospheric Deposition Program/Mercury Deposition Network (NADP/MDN). Since 1978, various programs have been implemented by the PCQA to estimate data variability and bias contributed by changing protocols, equipment, and sample submission schemes within NADP networks. These programs independently measure the field and laboratory components which contribute to the overall variability of NADP wet-deposition chemistry and precipitation depth measurements. The PCQA evaluates the quality of analyte-specific chemical analyses from the two currently (2016) contracted NADP laboratories, the Central Analytical Laboratory and the Mercury Analytical Laboratory, by comparing laboratory performance among participating national and international laboratories. Sample contamination and stability are evaluated for NTN and MDN by using externally field-processed blank samples provided by the Branch of Quality Systems. A colocated sampler program evaluates the overall variability of NTN measurements and bias between dissimilar precipitation gages and sample collectors. This report documents historical PCQA operations and general procedures for each of the external quality-assurance programs from 2007 to 2016.

  16. One-dimensional modelling of the interactions between heavy rainfall-runoff in an urban area and flooding flows from sewer networks and rivers.

    PubMed

    Kouyi, G Lipeme; Fraisse, D; Rivière, N; Guinot, V; Chocat, B

    2009-01-01

    Many investigations have been carried out in order to develop models which allow the linking of the complex physical processes involved in urban flooding. The modelling of the interactions between overland flows on streets and flooding flows from rivers and sewer networks is one of the main objectives of recent and current research programs in hydraulics and urban hydrology. This paper outlines the original one-dimensional linking of heavy rainfall-runoff in urban areas and flooding flows from rivers and sewer networks under the RIVES project framework (Estimation of Scenario and Risks of Urban Floods). The first part of the paper highlights the capacity of Canoe software to simulate the street flows. In the second part, we show the original method of connection which enables the modelling of interactions between processes in urban flooding. Comparisons between simulated results and the results of Despotovic et al. or Gomez & Mur show a good agreement for the calibrated one-dimensional connection model. The connection operates like a manhole, with the orifice/weir coefficients used as calibration parameters. The influence of flooding flows from rivers was taken into account as a variable water-depth boundary condition.
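
    The orifice/weir behaviour of such a connection can be sketched with the standard discharge laws; Cd and Cw play the role of the calibration parameters mentioned above, and all values are invented.

```python
# Sketch: manhole-type exchange flow as an orifice (submerged) or a weir
# (free-surface spill over the rim).
import math

G = 9.81

def orifice_flow(cd, area_m2, head_m):
    """Q = Cd * A * sqrt(2 g dh)."""
    return cd * area_m2 * math.sqrt(2.0 * G * max(head_m, 0.0))

def weir_flow(cw, crest_len_m, head_m):
    """Q = Cw * L * h^(3/2)."""
    return cw * crest_len_m * max(head_m, 0.0) ** 1.5

dh = 0.35                                    # street level minus sewer head, m
print(f"orifice: {orifice_flow(0.6, 0.28, dh):.3f} m3/s")
print(f"weir:    {weir_flow(1.7, 1.9, dh):.3f} m3/s")
```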

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawase, Kazumasa, E-mail: Kawase.Kazumasa@ak.MitsubishiElectric.co.jp; Motoya, Tsukasa; Uehara, Yasushi

    Silicon dioxide (SiO2) films formed by chemical vapor deposition (CVD) have been treated with Ar plasma excited by microwave. The changes of the mass densities, carrier trap densities, and thicknesses of the CVD-SiO2 films with the Ar plasma treatments were investigated. The mass density depth profiles were estimated with X-Ray Reflectivity (XRR) analysis using synchrotron radiation. The densities of carrier trap centers due to defects of the Si-O bond network were estimated with X-ray Photoelectron Spectroscopy (XPS) time-dependent measurement. The changes of the thicknesses due to the oxidation of Si substrates were estimated with the XRR and XPS. The mass densities of the CVD-SiO2 films are increased by the Ar plasma treatments. The carrier trap densities of the films are decreased by the treatments. The thicknesses of the films are not changed by the treatments. It has been clarified that the mass densification and defect restoration in the CVD-SiO2 films are caused by the Ar plasma treatments without the oxidation of the Si substrates.

  18. Slip rates and spatially variable creep on faults of the northern San Andreas system inferred through Bayesian inversion of Global Positioning System data

    USGS Publications Warehouse

    Murray, Jessica R.; Minson, Sarah E.; Svarc, Jerry L.

    2014-01-01

    Fault creep, depending on its rate and spatial extent, is thought to reduce earthquake hazard by releasing tectonic strain aseismically. We use Bayesian inversion and a newly expanded GPS data set to infer the deep slip rates below assigned locking depths on the San Andreas, Maacama, and Bartlett Springs Faults of Northern California and, for the latter two, the spatially variable interseismic creep rate above the locking depth. We estimate deep slip rates of 21.5 ± 0.5, 13.1 ± 0.8, and 7.5 ± 0.7 mm/yr below 16 km, 9 km, and 13 km on the San Andreas, Maacama, and Bartlett Springs Faults, respectively. We infer that on average the Bartlett Springs fault creeps from the Earth's surface to 13 km depth, and below 5 km the creep rate approaches the deep slip rate. This implies that microseismicity may extend below the locking depth; however, we cannot rule out the presence of locked patches in the seismogenic zone that could generate moderate earthquakes. Our estimated Maacama creep rate, while comparable to the inferred deep slip rate at the Earth's surface, decreases with depth, implying a slip deficit exists. The Maacama deep slip rate estimate, 13.1 mm/yr, exceeds long-term geologic slip rate estimates, perhaps due to distributed off-fault strain or the presence of multiple active fault strands. While our creep rate estimates are relatively insensitive to choice of model locking depth, insufficient independent information regarding locking depths is a source of epistemic uncertainty that impacts deep slip rate estimates.

  19. Consistent Steering System using SCTP for Bluetooth Scatternet Sensor Network

    NASA Astrophysics Data System (ADS)

    Dhaya, R.; Sadasivam, V.; Kanthavel, R.

    2012-12-01

    Wireless communication is the best way to convey information from source to destination with flexibility and mobility, and Bluetooth is the wireless technology suitable for short distances. A wireless sensor network (WSN), meanwhile, consists of spatially distributed autonomous sensors that cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollutants. Using the Bluetooth piconet technique in sensor nodes creates limitations in network depth and placement. The introduction of the scatternet removes these network restrictions, but at the cost of reliability in data transmission. As the depth of the network increases, routing becomes more difficult. No authors have so far focused on the reliability factors of scatternet sensor network routing. This paper illustrates the proposed system architecture and routing mechanism to increase reliability. Another objective is to use a reliable transport protocol that uses the multi-homing concept and supports multiple streams to prevent head-of-line blocking. The results show that the scatternet sensor network has lower packet loss than the existing system, even in a congested environment, making it suitable for surveillance applications.

  1. Stratospheric aerosol optical depths, 1850-1990

    NASA Technical Reports Server (NTRS)

    Sato, Makiko; Hansen, James E.; Mccormick, M. Patrick; Pollack, James B.

    1993-01-01

    A global stratospheric aerosol database employed for climate simulations is described. For the period 1883-1990, aerosol optical depths are estimated from optical extinction data, whose quality increases with time over that period. For the period 1850-1882, aerosol optical depths are more crudely estimated from volcanological evidence for the volume of ejecta from major known volcanoes. The data set is available over the Internet.

  2. Southern California Beaches during the El Niño Winter of 2009/2010

    NASA Astrophysics Data System (ADS)

    Doria, A.; Guza, R. T.; Yates, M. L.; O'Reilly, W.

    2010-12-01

    Storms during the El Niño winter of 2009/2010 produced prolonged periods of energetic waves and severely eroded southern California beaches. Sand elevations were measured at several beaches over alongshore spans of a few km, for up to 5 years, on cross-shore transects extending from the back beach to about 8 meters depth, and spaced every 100 meters alongshore. Wave conditions were estimated using the CDIP network of directional wave buoys. At the Torrey Pines Outer Buoy, the median significant wave height for January 2010 was the largest for any month in the past 10-year record. Anomalous changes in beach sand level, characterized as the excess volume displaced relative to average-winter profiles, were extreme in both the amount of shoreline erosion and the amount of offshore accretion. Anomalous shoreline erosion volumes were almost twice as large as those of the second-most severe winter, with vertical deviations as large as -2.3 m. Anomalous offshore accretion, in depths between 4-8 m and as large as 1.5 m vertical, was also exceptional. Beach widths, based on the cross-shore location of the Mean Sea Level (MSL) contour, were narrower than measured in previous winters. The accuracy of shoreline (MSL) location, predicted using an existing shoreline change equilibrium model driven with the estimated waves, will be assessed. Beach recovery, based on ongoing surveys, will also be discussed.

  3. Modifying Bagnold's Sediment Transport Equation for Use in Watershed-Scale Channel Incision Models

    NASA Astrophysics Data System (ADS)

    Lammers, R. W.; Bledsoe, B. P.

    2016-12-01

    Destabilized stream channels may evolve through a sequence of stages, initiated by bed incision and followed by bank erosion and widening. Channel incision can be modeled using Exner-type mass balance equations, but model accuracy is limited by the accuracy and applicability of the selected sediment transport equation. Additionally, many sediment transport relationships require significant data inputs, limiting their usefulness in data-poor environments. Bagnold's empirical relationship for bedload transport is attractive because it is based on stream power, a relatively straightforward parameter to estimate using remote sensing data. However, the equation is also dependent on flow depth, which is more difficult to measure or estimate for entire drainage networks. We recast Bagnold's original sediment transport equation using specific discharge in place of flow depth. Using a large dataset of sediment transport rates from the literature, we show that this approach yields similar predictive accuracy to other stream power based relationships. We also explore the applicability of various critical stream power equations, including Bagnold's original, and support previous conclusions that these critical values can be predicted well based solely on sediment grain size. In addition, we propagate error in these sediment transport equations through channel incision modeling to compare the errors associated with our equation to alternative formulations. This new version of Bagnold's bedload transport equation has utility for channel incision modeling at larger spatial scales using widely available remote sensing data.
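
    A sketch of the stream-power quantities involved follows; the excess-power bedload law below is a generic placeholder (its coefficient and exponent are invented), not the recast Bagnold equation the authors derive.

```python
# Sketch: specific stream power from remote-sensing-friendly inputs and a
# generic excess-power bedload relation, qb ~ k (omega - omega_c)^(3/2).
RHO, G = 1000.0, 9.81

def specific_stream_power(Q, S, width):
    """omega = rho g Q S / w, in W per m^2 of bed."""
    return RHO * G * Q * S / width

def bedload_excess_power(omega, omega_c, k=1e-4):
    """Zero below the critical power; placeholder k and exponent."""
    return k * max(omega - omega_c, 0.0) ** 1.5

omega = specific_stream_power(Q=45.0, S=0.002, width=18.0)   # ~49 W/m^2
qb = bedload_excess_power(omega, omega_c=12.0)
print(f"omega = {omega:.1f} W/m^2, qb = {qb:.4f} (arbitrary units)")
```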

  4. Improved heuristics for early melanoma detection using multimode hyperspectral dermoscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas B.; Booth, Nicholas; Farkas, Daniel L.

    2017-02-01

    Purpose: To determine the performance of a multimode dermoscopy system (SkinSpect) designed to quantify and 3-D map in vivo melanin and hemoglobin concentrations in skin, along with its melanoma scoring system, and to compare its accuracy with SIAscopy and histopathology. Methods: A multimode imaging dermoscope is presented that combines polarization, fluorescence and hyperspectral imaging to accurately map the distribution of skin melanin, collagen and hemoglobin in pigmented lesions. We combine two depth-sensitive techniques, polarization and hyperspectral imaging, to determine the spatial distribution of melanin and hemoglobin oxygenation in a skin lesion. By quantifying melanin absorption in pigmented areas, we can also more accurately estimate the fluorescence emission distribution, mainly from skin collagen. Results and discussion: We compared in vivo features of melanocytic lesions (N = 10) extracted by the non-invasive SkinSpect and SIMSYS-MoleMate SIAscope, and correlated them with pathology reports. Melanin distribution at different depths, as well as the hemodynamics we detected, including abnormal vascularity, will be discussed. We will adapt SkinSpect scoring with ABCDE (asymmetry, border, color, diameter, evolution) and the seven-point dermatologic checklist estimated by the dermatologist, including: (1) atypical pigment network, (2) blue-whitish veil, (3) atypical vascular pattern, (4) irregular streaks, (5) irregular pigmentation, (6) irregular dots and globules, (7) regression structures. Conclusion: Distinctive, diagnostic features seen by SkinSpect in melanoma vs. normal pigmented lesions will be compared with SIAscopy and with results from histopathology.

  5. ANN Surface Roughness Optimization of AZ61 Magnesium Alloy Finish Turning: Minimum Machining Times at Prime Machining Costs.

    PubMed

    Abbas, Adel Taha; Pimenov, Danil Yurievich; Erdakov, Ivan Nikolaevich; Taha, Mohamed Adel; Soliman, Mahmoud Sayed; El Rayes, Magdy Mostafa

    2018-05-16

    Magnesium alloys are widely used in aerospace vehicles and modern cars, due to their rapid machinability at high cutting speeds. A novel Edgeworth-Pareto optimization of an artificial neural network (ANN) is presented in this paper for surface roughness (Ra) prediction of one component in computer numerical control (CNC) turning over minimal machining time (Tm) and at prime machining costs (C). An ANN is built in the Matlab programming environment, based on a 4-12-3 multi-layer perceptron (MLP), to predict Ra, Tm, and C in relation to cutting speed, vc, depth of cut, ap, and feed per revolution, fr. For the first time, a profile of an AZ61 alloy workpiece after finish turning is constructed using an ANN for the range of experimental values of vc, ap, and fr. The global minimum length of a three-dimensional estimation vector was defined with the following coordinates: Ra = 0.087 μm, Tm = 0.358 min/cm³, C = $8.2973. Likewise, the corresponding finish-turning parameters were also estimated: cutting speed vc = 250 m/min, cutting depth ap = 1.0 mm, and feed per revolution fr = 0.08 mm/rev. The ANN model achieved a reliable prediction accuracy of ±1.35% for surface roughness.

  6. Moderate Imaging Resolution Spectroradiometer (MODIS) Aerosol Optical Depth Retrieval for Aerosol Radiative Forcing

    NASA Astrophysics Data System (ADS)

    Asmat, A.; Jalal, K. A.; Ahmad, N.

    2018-02-01

    The present study uses the Aerosol Optical Depth (AOD) retrieved from Moderate Imaging Resolution Spectroradiometer (MODIS) data for the period from January 2011 until December 2015 over an urban area in Kuching, Sarawak. The results show that the minimum AOD value retrieved from MODIS is -0.06 and the maximum value is 6.0. High aerosol loading with high AOD values was observed during dry seasons, and low AOD was monitored during wet seasons. A multi-plane regression technique was used to retrieve AOD from MODIS (AODMODIS), and different statistical parameters based on relative absolute error are proposed for accuracy assessment in spatial and temporal averaging approaches. The AODMODIS was then compared with AOD derived from the Aerosol Robotic Network (AERONET) Sunphotometer (AODAERONET), and the results show a high correlation coefficient between AODMODIS and AODAERONET (R² = 0.93). AODMODIS was used as an input parameter to the Santa Barbara Discrete Ordinate Radiative Transfer (SBDART) model to estimate urban radiative forcing at Kuching. The observed hourly averaged urban radiative forcing is -0.12 Wm⁻² at the top of atmosphere (TOA), -2.13 Wm⁻² at the surface, and 2.00 Wm⁻² in the atmosphere. There is a moderate relationship between urban radiative forcing calculated using SBDART and AERONET: 0.75 at the surface, 0.65 at TOA, and 0.56 in the atmosphere. Overall, variation in AOD tends to cause large bias in the estimated urban radiative forcing.

  7. Near-Surface Shear Wave Velocity Versus Depth Profiles, VS30, and NEHRP Classifications for 27 Sites in Puerto Rico

    USGS Publications Warehouse

    Odum, Jack K.; Williams, Robert A.; Stephenson, William J.; Worley, David M.; von Hillebrandt-Andrade, Christa; Asencio, Eugenio; Irizarry, Harold; Cameron, Antonio

    2007-01-01

    In 2004 and 2005 the Puerto Rico Seismic Network (PRSN), Puerto Rico Strong Motion Program (PRSMP) and the Geology Department at the University of Puerto Rico-Mayaguez (UPRM) collaborated with the U.S. Geological Survey to study near-surface shear-wave (Vs) and compressional-wave (Vp) velocities in and around major urban areas of Puerto Rico. Using noninvasive seismic refraction-reflection profiling techniques, we acquired velocities at 27 locations. Surveyed sites were predominantly selected on the premise that they were generally representative of near-surface materials associated with the primary geologic units located within the urbanized areas of Puerto Rico. Geologic units surveyed included Cretaceous intrusive and volcaniclastic bedrock, Tertiary sedimentary and volcanic units, and Quaternary unconsolidated eolian, fluvial, beach, and lagoon deposits. From the data we developed Vs and Vp velocity-versus-depth columns, calculated the average Vs to 30-m depth (VS30), and derived NEHRP (National Earthquake Hazards Reduction Program) site classifications for all sites except one where results did not reach 30-m depth. The distribution of estimated NEHRP classes is as follows: three class 'E' (VS30 below 180 m/s), nine class 'D' (VS30 between 180 and 360 m/s), ten class 'C' (VS30 between 360 and 760 m/s), and four class 'B' (VS30 greater than 760 m/s). Results are being used to calibrate site response at seismograph stations and in the development of regional and local shakemap models for Puerto Rico.
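
    VS30 is the time-averaged shear-wave velocity over the top 30 m. The sketch below uses the NEHRP class boundaries quoted in the abstract; the layer model itself is invented.

```python
# Sketch: VS30 = 30 / sum(h_i / v_i) over the uppermost 30 m, plus the
# NEHRP class boundaries given above (B > 760, C 360-760, D 180-360, E < 180).
def vs30(thicknesses_m, velocities_ms):
    covered, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_ms):
        use = min(h, 30.0 - covered)         # clip the last layer at 30 m
        travel_time += use / v
        covered += use
        if covered >= 30.0:
            break
    return 30.0 / travel_time

def nehrp_class(v):
    if v > 760: return "B"
    if v > 360: return "C"
    if v >= 180: return "D"
    return "E"

v = vs30([5.0, 12.0, 20.0], [220.0, 400.0, 900.0])
print(f"VS30 = {v:.0f} m/s, NEHRP class {nehrp_class(v)}")
```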

  8. Rapid automated classification of anesthetic depth levels using GPU based parallelization of neural networks.

    PubMed

    Peker, Musa; Şen, Baha; Gürüler, Hüseyin

    2015-02-01

    The effect of anesthesia on the patient is referred to as the depth of anesthesia. Rapid classification of the appropriate depth level of anesthesia is a matter of great importance in surgical operations. Similarly, accelerating classification algorithms is important for the rapid solution of problems in the field of biomedical signal processing. However, numerous time-consuming mathematical operations are required during the training and testing stages of classification algorithms, especially in neural networks. In this study, to accelerate the process, a parallel programming and computing platform (Nvidia CUDA), which facilitates dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU), was utilized. The system was employed to detect the anesthetic depth level on a related electroencephalogram (EEG) data set. This dataset is rather complex and large. Moreover, achieving more anesthetic levels with rapid response is critical in anesthesia. The proposed parallelization method yielded highly accurate classification results in a faster time.

  9. A new, improved and fully automatic method for teleseismic depth estimation of moderate earthquakes (4.5 < M < 5.5): application to the Guerrero subduction zone (Mexico)

    NASA Astrophysics Data System (ADS)

    Letort, Jean; Guilbert, Jocelyn; Cotton, Fabrice; Bondár, István; Cano, Yoann; Vergoz, Julien

    2015-06-01

    The depth of an earthquake is difficult to estimate because of the trade-off between depth and origin time estimations, and because it can be biased by lateral Earth heterogeneities. To face this challenge, we have developed a new, blind and fully automatic teleseismic depth analysis. The results of this new method do not depend on epistemic uncertainties due to depth-phase picking and identification. The method consists of a modification of the cepstral analysis from Letort et al. and Bonner et al., which aims to detect surface-reflected (pP, sP) waves in a signal at teleseismic distances (30°-90°) through the study of the spectral holes in the shape of the signal spectrum. The ability of our automatic method to improve depth estimations is shown by relocation of the recent moderate seismicity of the Guerrero subduction area (Mexico). We have therefore estimated the depth of 152 events using teleseismic data from the IRIS stations and arrays. One advantage of this method is that it can be applied to single stations (from IRIS) as well as to classical arrays. In the Guerrero area, our new cepstral analysis efficiently clusters event locations and provides an improved view of the geometry of the subduction. Moreover, we have also validated our method through relocation of the same events using the new International Seismological Centre (ISC)-locator algorithm, as well as by comparing our cepstral depths with the available Harvard-Centroid Moment Tensor (CMT) solutions and the three available ground truth (GT5) events (where lateral localization is assumed to be well constrained with uncertainty <5 km) for this area. These comparisons indicate an overestimation of focal depths in the ISC catalogue for deeper parts of the subduction, and they show a systematic bias between the estimated cepstral depths and the ISC-locator depths. Using information from the CMT catalogue relating to the predominant focal mechanism for this area, this bias can be explained as a misidentification of sP phases as pP phases, which further demonstrates the value of this new automatic cepstral analysis, as it is less sensitive to phase identification.
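
    The cepstral idea can be sketched on a synthetic trace: an echo delayed by t0 (here standing in for pP) produces a peak in the real cepstrum at quefrency t0. The depth conversion below assumes a near-vertical ray, h ≈ v·t0/2, a simplification the actual method replaces with proper takeoff angles; the trace, velocities and delay are all invented.

```python
# Sketch: locating an echo delay with the real cepstrum,
# cepstrum = IFFT(log|FFT(x)|).
import numpy as np

fs = 20.0                                    # samples per second
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(3)
t0 = 8.6                                     # synthetic pP - P delay, s
trace = np.exp(-((t - 5) ** 2) / 0.1)        # direct P pulse at 5 s
trace -= 0.7 * np.exp(-((t - 5 - t0) ** 2) / 0.1)   # surface reflection
trace += rng.normal(0, 0.01, t.size)

spec = np.abs(np.fft.rfft(trace)) + 1e-12
cep = np.fft.irfft(np.log(spec))
q = np.arange(cep.size) / fs                 # quefrency axis, s
band = (q > 2) & (q < 20)                    # search window for the echo
t_est = q[band][np.argmax(np.abs(cep[band]))]
print(f"pP-P delay ~ {t_est:.2f} s -> depth ~ {6.5 * t_est / 2:.0f} km "
      f"(assumed crustal vP ~ 6.5 km/s)")
```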

  10. Estimated and measured bridge scour at selected sites in North Dakota, 1990-97

    USGS Publications Warehouse

    Williams-Sether, Tara

    1999-01-01

    A Level 2 bridge scour method was used to estimate scour depths at 36 selected bridge sites located on the primary road system throughout North Dakota. Of the 36 bridge sites analyzed, the North Dakota Department of Transportation rated 15 as scour critical. Flood and scour data were collected at 19 of the 36 selected bridge sites during 1990-97. Data collected were sufficient to estimate pier scour but not contraction or abutment scour. Estimated pier scour depths ranged from -10.6 to -1.2 feet, and measured bed-elevation changes at piers ranged from -2.31 to +2.37 feet. Comparisons between the estimated pier scour depths and the measured bed-elevation changes indicate that the pier scour equations overestimate scour at bridges in North Dakota. A Level 1.5 bridge scour method also was used to estimate scour depths at 495 bridge sites located on the secondary road system throughout North Dakota. The North Dakota Department of Transportation determined that 26 of the 495 bridge sites analyzed were potentially scour critical.

  11. The volume and mean depth of Earth's lakes

    NASA Astrophysics Data System (ADS)

    Cael, B. B.; Heathcote, A. J.; Seekell, D. A.

    2017-01-01

    Global lake volume estimates are scarce, highly variable, and poorly documented. We developed a rigorous method for estimating global lake depth and volume based on the Hurst coefficient of Earth's surface, which provides a mechanistic connection between lake area and volume. Volume-area scaling based on the Hurst coefficient is accurate and consistent when applied to lake data sets spanning diverse regions. We applied these relationships to a global lake area census to estimate global lake volume and depth. The volume of Earth's lakes is 199,000 km³ (95% confidence interval 196,000-202,000 km³). This volume is in the range of historical estimates (166,000-280,000 km³), but the overall mean depth of 41.8 m (95% CI 41.2-42.4 m) is significantly lower than previous estimates (62-151 m). These results highlight and constrain the relative scarcity of lake waters in the hydrosphere and have implications for the role of lakes in global biogeochemical cycles.

  12. Radiance Assimilation Shows Promise for Snowpack Characterization: A 1-D Case Study

    NASA Technical Reports Server (NTRS)

    Durand, Michael; Kim, Edward; Margulis, Steve

    2008-01-01

    We demonstrate an ensemble-based radiometric data assimilation (DA) methodology for estimating snow depth and snow grain size using ground-based passive microwave (PM) observations at 18.7 and 36.5 GHz collected during the NASA CLPX-1, March 2003, Colorado, USA. A land surface model was used to develop a prior estimate of the snowpack states, and a radiative transfer model was used to relate the modeled states to the observations. Snow depth bias was -53.3 cm prior to the assimilation, and -7.3 cm after the assimilation. Snow depth estimated by a non-DA-based retrieval algorithm using the same PM data had a bias of -18.3 cm. The sensitivity of the assimilation scheme to the grain size uncertainty was evaluated; over the range of grain size uncertainty tested, the posterior snow depth estimate bias ranges from -2.99 cm to -9.85 cm, which is uniformly better than both the prior and retrieval estimates. This study demonstrates the potential applicability of radiometric DA at larger scales.
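
    The ensemble update at the heart of such schemes can be sketched with a stochastic (perturbed-observation) ensemble Kalman filter. The linear "observation operator" below is a toy stand-in for the radiative transfer model, and every number is invented.

```python
# Sketch: perturbed-observation ensemble Kalman update of snow depth and
# grain size from a single brightness-temperature-like observation.
import numpy as np

rng = np.random.default_rng(4)
n_ens = 100
depth = rng.normal(60.0, 20.0, n_ens)            # prior snow depth, cm
grain = rng.normal(0.8, 0.2, n_ens)              # prior grain size, mm

def h(depth, grain):
    """Toy linear proxy for the radiative transfer model (invented)."""
    return 40.0 - 0.25 * depth - 10.0 * (grain - 0.8)

y_obs, r_std = 20.0, 2.0                         # observation and its noise
X = np.vstack([depth, grain])                    # state ensemble, shape (2, n)
Y = h(depth, grain)                              # predicted observations
C_xy = (X - X.mean(1, keepdims=True)) @ (Y - Y.mean()) / (n_ens - 1)
c_yy = np.var(Y, ddof=1)
K = C_xy / (c_yy + r_std**2)                     # Kalman gain, shape (2,)
y_pert = y_obs + rng.normal(0.0, r_std, n_ens)   # perturbed observations
X_post = X + np.outer(K, y_pert - Y)
print(f"posterior depth: {X_post[0].mean():.1f} +/- {X_post[0].std():.1f} cm")
```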

  13. Estimate of Cosmic Muon Background for Shallow Underground Neutrino Detectors

    NASA Astrophysics Data System (ADS)

    Casimiro, E.; Simão, F. R. A.; Anjos, J. C.

    One of the severe limitations in detecting neutrino signals from nuclear reactors is that the copious cosmic ray background imposes the use of a time veto upon the passage of the muons to reduce the number of fake signals due to muon-induced spallation neutrons. For this reason neutrino detectors are usually located underground, with a large overburden. However, there are practical limitations that prevent locating the detectors at large depths underground. In order to decide the depth underground at which the Neutrino Angra Detector (currently in preparation) should be installed, an estimate of the cosmogenic background in the detector as a function of the depth is required. We report here a simple analytical estimation of the muon rates in the detector volume for different plausible depths, assuming a simple flat overburden geometry. We extend the calculation to the case of the San Onofre neutrino detector and to the case of the Double Chooz neutrino detector, where other estimates or measurements have been performed. Our estimated rates are consistent with these.

  14. Subduction of lower continental crust beneath the Pamir imaged by receiver functions from the seismological TIPAGE network

    NASA Astrophysics Data System (ADS)

    Schneider, F. M.; Yuan, X.; Schurr, B.; Mechie, J.; Sippl, C.; Kufner, S.; Haberland, C. A.; Minaev, V.; Oimahmadov, I.; Gadoev, M.; Abdybachaev, U.; Orunbaev, S.

    2013-12-01

    As the northwestern promontory of the Tibetan Plateau, the Pamir forms an outstanding part of the India-Asia convergence zone. The Pamir plateau has an average elevation of more than 4000 m, surrounded by peaks exceeding 7000 m at its northern, eastern and southern borders. The Pamir is thought to consist of the same collage of continental terranes as Tibet. However, in this region the Indian-Asian continental collision presents an extreme situation since, compared to Tibet, in the Pamir a similar amount of north-south convergence has been accommodated within a much smaller distance. The Pamir hosts a zone of intermediate-depth earthquakes, the seismic imprint of Earth's most spectacular active intra-continental subduction zone. We present receiver function (RF) images from the TIPAGE seismic profile giving evidence that the intermediate-depth seismicity is situated within a subducted layer of lower continental crust: we observe a southerly dipping, 10-15 km thick low-velocity zone (LVZ) that starts from the base of the crust and extends to a depth of more than 150 km, enveloping the intermediate-depth earthquakes that have been located with high precision from our local network records. In a second northwest-to-southeast cross section we observe that towards the western Pamir the dip direction of the LVZ bends to the southeast, following the geometry of the intermediate-depth seismic zone. Our observations imply that the complete arcuate intermediate-depth seismic zone beneath the Pamir traces a slab of subducting Eurasian continental lower crust. These observations provide important implications for the geodynamics of continental collision: first, they show that under extreme conditions lower crust can be brought to mantle depths despite its buoyancy, a fact that is also attested by the exhumation of ultra-high-pressure metamorphic rocks. Recent results from teleseismic tomography show a signal of Asian mantle lithosphere down to 600 km depth, implying that a great amount of mantle lithosphere is involved in the subduction, which possibly transmits pull forces to the lower crust to overcome its buoyancy. Secondly, the observation that earthquakes occur within the subducted crust implies that, similar to oceanic subduction, metamorphic processes within the lower continental crust can cause or enable earthquakes at depths where the high pressure and temperature conditions would normally not allow brittle failure of rocks. For imaging of the dipping LVZ, cross sections of Q- and T-component RFs are generated using a migration technique that accounts for the inclination of the conversion layers. Furthermore, we present a Moho map of the Pamir, showing crustal thickness in most places of the Pamir ranging between 65 km and 75 km, while the greatest Moho depths of around 80 km are observed at the upper end of the LVZ. The surrounding areas, namely the Tajik Depression and the Ferghana and Tarim Basins, show Moho depths of around 40 to 45 km, giving an estimate of the pre-collisional crustal thickness of the former basin area that was overthrust by the Pamir.

  15. Estimation of River Bathymetry from ATI-SAR Data

    NASA Astrophysics Data System (ADS)

    Almeida, T. G.; Walker, D. T.; Farquharson, G.

    2013-12-01

    A framework for estimation of river bathymetry from surface velocity observation data is presented using variational inverse modeling applied to the 2D depth-averaged, shallow-water equations (SWEs) including bottom friction. We start with a cost function defined by the error between observed and estimated surface velocities, and introduce the SWEs as a constraint on the velocity field. The constrained minimization problem is converted to an unconstrained minimization through the use of Lagrange multipliers, and an adjoint SWE model is developed. The adjoint model solution is used to calculate the gradient of the cost function with respect to river bathymetry. The gradient is used in a descent algorithm to determine the bathymetry that yields a surface velocity field that is a best-fit to the observational data. In applying the algorithm, the 2D depth-averaged flow is computed assuming a known, constant discharge rate and a known, uniform bottom-friction coefficient; a correlation relating surface velocity and depth-averaged velocity is also used. Observation data was collected using a dual beam squinted along-track-interferometric, synthetic-aperture radar (ATI-SAR) system, which provides two independent components of the surface velocity, oriented roughly 30 degrees fore and aft of broadside, offering high-resolution bank-to-bank velocity vector coverage of the river. Data and bathymetry estimation results are presented for two rivers, the Snohomish River near Everett, WA and the upper Sacramento River, north of Colusa, CA. The algorithm results are compared to available measured bathymetry data, with favorable results. General trends show that the water-depth estimates are most accurate in shallow regions, and performance is sensitive to the accuracy of the specified discharge rate and bottom friction coefficient. The results also indicate that, for a given reach, the estimated water depth reaches a maximum that is smaller than the true depth; this apparent maximum depth scales with the true river depth and discharge rate, so that the deepest parts of the river show the largest bathymetry errors.

  16. Studying local earthquakes in the area Baltic-Bothnia Megashear using the data of the POLENET/LAPNET temporary array

    NASA Astrophysics Data System (ADS)

    Usoltseva, Olga; Kozlovskaya, Elena

    2016-07-01

    Earthquakes in areas within continental plates are still not completely understood, and progress on understanding intraplate seismicity is slow due to a short history of instrumental seismology and sparse regional seismic networks in seismically non-active areas. However, knowledge about the position and depth of seismogenic structures in such areas is necessary in order to estimate seismic hazard for such critical facilities as nuclear power plants and nuclear waste deposits. In the present paper we address the problem of seismicity in the intraplate area of northern Fennoscandia using the information on local events recorded by the POLENET/LAPNET (Polar Earth Observing Network) temporary seismic array during the International Polar Year 2007-2009. We relocate the seismic events using the program HYPOELLIPS (a computer program for determining local earthquake hypocentral parameters) and a grid-search method. We use the first arrivals of P waves of local events in order to calculate a 3-D tomographic P wave velocity model of the uppermost crust (down to 20 km) for a selected region inside the study area and show that the velocity heterogeneities in the upper crust correlate well with known tectonic units. We compare the position of the velocity heterogeneities with the seismogenic structures delineated by epicentres of relocated events and demonstrate that these structures generally do not correlate with the crustal units formed as a result of crustal evolution in the Archaean and Palaeoproterozoic. On the contrary, they correlate well with the postglacial faults located in the area of the Baltic-Bothnia Megashear (BBMS). Hypocentres of local events have depths down to 30 km. We also obtain the focal mechanism of a selected event with good data quality. The focal mechanism is of oblique type with strike-slip prevailing. Our results demonstrate that the Baltic-Bothnia Megashear is an important large-scale, reactivated tectonic structure that has to be taken into account when estimating seismic hazard in northern Fennoscandia.

  17. Strong Ground Motion Analysis and Afterslip Modeling of Earthquakes near Mendocino Triple Junction

    NASA Astrophysics Data System (ADS)

    Gong, J.; McGuire, J. J.

    2017-12-01

    The Mendocino Triple Junction (MTJ) is one of the most seismically active regions in North America in response to the ongoing motions between the North America, Pacific and Gorda plates. Earthquakes near the MTJ occur on multiple types of faults due to the interacting boundaries between the three plates and the strong internal deformation within them. Understanding the stress levels that drive earthquake rupture on the various types of faults and estimating the locking state of the subduction interface are especially important for earthquake hazard assessment. However, due to a lack of direct offshore seismic and geodetic records, only a few earthquakes' rupture processes have been well studied and the locking state of the subducted slab is not well constrained. In this study we first use the second moment inversion method to study the rupture process of the January 28, 2015 Mw 5.7 strike-slip earthquake on the Mendocino transform fault using strong ground motion records from the Cascadia Initiative community experiment as well as onshore seismic networks. We estimate the rupture dimension to be 6 km by 3 km and a stress drop of 7 MPa on the transform fault. Next we investigate the frictional locking state on the subduction interface through afterslip simulation based on coseismic rupture models of this 2015 earthquake and a Mw 6.5 intraplate earthquake inside the Gorda plate whose slip distribution was inverted using the onshore geodetic network in a previous study. Different depths for velocity-strengthening frictional properties to start at the downdip of the locked zone are used to simulate afterslip scenarios and predict the corresponding surface deformation (GPS) movements onshore. Our simulations indicate that the locking depth on the slab surface is at least 14 km, which confirms that the next M8 earthquake rupture will likely reach the coastline and strong shaking should be expected near the coast.
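
    For scale, a stress drop can be approximated from magnitude and rupture area with the Eshelby circular-crack formula. The paper's 7 MPa comes from a second-moment analysis, so this cruder approximation need not reproduce it.

```python
# Sketch: Eshelby circular-crack stress drop, dsigma = (7/16) M0 / r^3,
# with r the radius of a circle matching the rupture area.
import math

def moment_Nm(mw):
    """Standard moment-magnitude relation, M0 = 10^(1.5 Mw + 9.1) N m."""
    return 10 ** (1.5 * mw + 9.1)

def stress_drop_MPa(mw, rupture_area_km2):
    r = math.sqrt(rupture_area_km2 * 1e6 / math.pi)  # equivalent radius, m
    return (7.0 / 16.0) * moment_Nm(mw) / r**3 / 1e6

# Mw 5.7 with a 6 km x 3 km rupture, as estimated above.
print(f"circular-crack estimate: ~{stress_drop_MPa(5.7, 6.0 * 3.0):.0f} MPa")
```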

  18. High Resolution Shear-Wave Velocity Structure of Greenland from Surface Wave Analysis

    NASA Astrophysics Data System (ADS)

    Pourpoint, M.; Anandakrishnan, S.; Ammon, C. J.

    2016-12-01

    We present a high resolution seismic tomography model of Greenland's lithosphere from the analysis of fundamental mode Rayleigh-wave group velocity dispersion measurements. Regional and teleseismic events recorded by the GLISN, GSN and CN seismic networks over the last 20 years were used. In order to better constrain the crustal structure of Greenland, we also collected and processed several years of ambient noise data. We developed a new group velocity correction method that helps to alleviate the limitations of the sparse Greenland station network and the relatively few local events. The global dispersion model GDM52 from Ekström [2011] was used to calculate group delays from the earthquake to the boundaries of our study area. An iterative reweighted generalized least-square approach was used to invert for the group velocity maps between periods of 5 s and 180 s. A Markov chain Monte Carlo technique was then applied to invert for a 3-D shear wave velocity model of Greenland up to a depth of 200 km and estimate the uncertainties in the model. Our method results in relatively uniform azimuthal coverage and high resolution length (200 to 400 km) in west and east Greenland. We detect a deep high velocity zone extending from northwestern to southwestern Greenland and a low velocity zone (LVZ) between central-eastern and northeastern Greenland. The location of the LVZ correlates well with a previously measured high geothermal heat flux and could provide valuable information about its source. We expect the results of the ambient noise tomography to cross-validate the earthquake tomography results and give us a better estimate of the spatial extent and amplitude of the LVZ at shallow depths. A refined regional model of Greenland's lithospheric structure should eventually help better understand how underlying geological and geophysical processes may impact the dynamics of the ice sheet and influence its potential contribution to future sea level changes.

  19. Impact of Planetary Boundary Layer Depth on Climatological Tracer Transport in the GEOS-5 AGCM

    NASA Astrophysics Data System (ADS)

    McGrath-Spangler, E. L.; Molod, A.

    2013-12-01

    Planetary boundary layer (PBL) processes have large implications for tropospheric tracer transport, since surface fluxes are diluted over the depth of the PBL through vertical mixing. However, no consensus on the definition of PBL depth currently exists, and the various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observing System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to diagnose PBL depth and produce climatologies that are evaluated here. All seven methods evaluate a single atmosphere, so differences are related solely to the definition chosen. PBL depths estimated using a Richardson number are shallower than those given by methods based on the scalar diffusivity during warm, moist conditions at midday, and collapse to lower values at night. In GEOS-5, the PBL depth is used in the estimation of the turbulent length scale and so impacts vertical mixing. Changing the method used to determine the PBL depth for this length scale thus changes the tracer transport. Using a bulk Richardson number method instead of a scalar diffusivity method produces changes in the quantity of Saharan dust lofted into the free troposphere and advected to North America, with more surface dust in North America during boreal summer and less in boreal winter. Additionally, greenhouse gases are considerably impacted. During boreal winter, changing the PBL depth definition produces carbon dioxide differences of nearly 5 ppm over Siberia and gradients of about 5 ppm over 1000 km in Europe. PBL depth changes are responsible for surface carbon monoxide changes of 20 ppb or more over the biomass burning regions of Africa.
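
    One of the definitions compared here, the bulk Richardson number method, diagnoses the PBL top as the lowest level at which the bulk Richardson number exceeds a critical value. A minimal sketch follows; the critical value of 0.25 and the synthetic sounding are illustrative assumptions, not GEOS-5 code.

    ```python
    import numpy as np

    def pbl_depth_bulk_richardson(z, theta_v, u, v, ri_crit=0.25):
        """Lowest height where the bulk Richardson number exceeds ri_crit.
        z: heights above ground (m); theta_v: virtual potential temperature (K);
        u, v: wind components (m/s). Minimal sketch, not GEOS-5 code."""
        g = 9.81
        ri = g * (theta_v - theta_v[0]) * z / (theta_v[0] * (u**2 + v**2 + 1e-6))
        above = np.nonzero(ri > ri_crit)[0]
        if above.size == 0:
            return z[-1]
        k = above[0]  # interpolate between the bracketing levels k-1 and k
        frac = (ri_crit - ri[k - 1]) / (ri[k] - ri[k - 1])
        return z[k - 1] + frac * (z[k] - z[k - 1])

    # Synthetic sounding: well-mixed below ~1 km, stably stratified above.
    z = np.linspace(10.0, 3000.0, 300)
    theta_v = 300.0 + np.where(z < 1000.0, 0.0, 0.005 * (z - 1000.0))
    u = np.full_like(z, 5.0)
    v = np.full_like(z, 2.0)
    print(f"PBL depth ~ {pbl_depth_bulk_richardson(z, theta_v, u, v):.0f} m")
    ```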

  20. Estimating temporal and spatial variation of ocean surface pCO2 in the North Pacific using a Self Organizing Map neural network technique

    NASA Astrophysics Data System (ADS)

    Nakaoka, S.; Telszewski, M.; Nojiri, Y.; Yasunaka, S.; Miyazaki, C.; Mukai, H.; Usui, N.

    2013-03-01

    This study produced maps of the partial pressure of oceanic carbon dioxide (pCO2sea) in the North Pacific on a 0.25° latitude × 0.25° longitude grid from 2002 to 2008. The pCO2sea values were estimated using a self-organizing map neural network technique to capture the non-linear relationships between observed pCO2sea data and four oceanic parameters: sea surface temperature (SST), mixed layer depth, chlorophyll a concentration, and sea surface salinity (SSS). The observed pCO2sea data were obtained from an extensive dataset generated by the volunteer observation ship program operated by the National Institute for Environmental Studies. The reconstructed pCO2sea values agreed rather well with the pCO2sea measurements, with a root mean square error of 17.6 μatm. The pCO2sea estimates were improved by including SSS as one of the training parameters and by taking into account the secular increase of pCO2sea that has tracked the increase in atmospheric CO2. The estimated pCO2sea values accurately reproduced pCO2sea data at several stations in the North Pacific. The distributions of pCO2sea revealed by seven-year averaged monthly pCO2sea maps were similar to the Lamont-Doherty Earth Observatory pCO2sea climatology and more precisely reflected oceanic conditions. The distributions of pCO2sea anomalies over the North Pacific during the winter clearly showed regional contrasts between El Niño and La Niña years related to changes in SST and vertical mixing.
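
    A minimal sketch of the SOM labeling idea described here, using the third-party MiniSom package (an assumption; the study used its own implementation and real observations). Each neuron is labeled with the mean pCO2sea of the training samples it wins; a new parameter vector is then assigned the label of its winning neuron.

    ```python
    import numpy as np
    from minisom import MiniSom  # third-party package (pip install minisom); an assumption

    # Synthetic stand-ins for normalized (SST, MLD, chl-a, SSS) with pCO2 labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))
    pco2 = 360.0 + 20.0 * X[:, 0] - 10.0 * X[:, 1] + rng.normal(0.0, 5.0, 1000)

    som = MiniSom(6, 6, 4, sigma=1.5, learning_rate=0.5, random_seed=0)
    som.train_random(X, 5000)

    # Label each neuron with the mean pCO2 of the training samples it wins.
    sums = np.zeros((6, 6))
    counts = np.zeros((6, 6))
    for x, y in zip(X, pco2):
        i, j = som.winner(x)
        sums[i, j] += y
        counts[i, j] += 1
    labels = sums / np.maximum(counts, 1)  # neurons never won stay at 0 (unlabeled)

    # Estimate pCO2sea for a new normalized parameter vector via its winning neuron.
    i, j = som.winner(rng.normal(size=4))
    print(f"estimated pCO2sea ~ {labels[i, j]:.1f} uatm")
    ```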

  1. Strong motion recordings of the 2008/12/23 earthquake in Northern Italy: another case of very weak motion?

    NASA Astrophysics Data System (ADS)

    Sabetta, F.; Zambonelli, E.

    2009-04-01

    On December 23, 2008, an earthquake of magnitude ML=5.1 (INGV) and Mw=5.4 (INGV-Harvard Global CMT) occurred in northern Italy, close to the cities of Parma and Reggio Emilia. The earthquake, with a macroseismic intensity of VI MCS, caused very slight damage (some tens of unusable buildings and some hundreds of damaged buildings), substantially lower than the damage estimated by the loss simulation scenario currently used by the Italian Civil Protection. Owing to the recent upgrading of the Italian strong motion network (RAN), the event was recorded by a large number of accelerometers (the largest ever obtained in Italy for a single shock): 21 digital and 8 analog instruments with epicentral distances ranging from 16 to 140 km. The comparison of recorded PGA, PGV, Arias intensity, and spectral values with several widely used Ground Motion Prediction Equations (GMPEs) showed ground motion values much lower than the empirical predictions (by a factor of 2 to 4). A first explanation of the strong differences in damage and ground motion between actual data and predictions could, at first sight, be attributed to the rather large focal depth of 27 km. However, even the adoption of GMPEs accounting for source depth and using hypocentral distance (Berge et al. 2003; Pousse et al. 2005) does not predict large differences in motions, especially at distances larger than 30 km, where most of the data are concentrated and where the effect of depth on source-to-site distance is small. At the same time, the adoption of the most recent GMPEs (Ambraseys et al. 2005; Akkar & Bommer 2007), which take into account the different magnitude scaling and the faster attenuation of small magnitudes through magnitude-dependent attenuation terms, does not show better agreement with the recorded data. The real reasons for the above-mentioned discrepancies need to be investigated further; a possible explanation could be a low source rupture velocity, as for the 2002 Molise earthquake, which also generated very weak motions. Another explanation comes from the fact that the moment magnitude estimated by the INGV network on the basis of body waves, instead of the surface waves used by the Harvard CMT, is 4.9 rather than 5.4, providing a much better fit of recorded ground motions with GMPEs.

  2. Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps

    PubMed Central

    Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi

    2015-01-01

    Depth estimation is a classical problem in computer vision and typically relies on either a depth sensor or stereo matching alone. A depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective, while stereo matching obtains more accurate results in richly textured regions and at object boundaries, where the depth sensor often fails. We fuse stereo matching and the depth sensor, using their complementary characteristics to improve depth estimation. Here, texture information is incorporated as a constraint to restrict each pixel's scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities at different pixels and segments. It is more robust to luminance variation because it treats information obtained from the depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the average error rate of 3.27% for previous state-of-the-art methods, our method achieves an average error rate of 2.61% on the Middlebury datasets, performing almost 20% better than other “fused” algorithms in terms of accuracy. PMID:26308003

  3. Induced Seismicity Related to Hydrothermal Operation of Geothermal Projects in Southern Germany - Observations and Future Directions

    NASA Astrophysics Data System (ADS)

    Megies, T.; Kraft, T.; Wassermann, J. M.

    2015-12-01

    Geothermal power plants in Southern Germany are operated hydrothermally and at low injection pressures in a seismically inactive region considered to have very low seismic hazard. For that reason, permit authorities initially imposed no monitoring requirements on the operating companies. After a series of events perceived by local residents, a scientific monitoring survey was conducted over several years, revealing several hundred induced earthquakes at one project site. We summarize results from monitoring at this site, including absolute locations in a local 3D velocity model, relocations using double-difference and master-event methods, and focal mechanism determinations that show a clear association with fault structures in the reservoir which extend down into the underlying crystalline basement. To better constrain the shear wave velocity models that strongly influence hypocentral depth estimates, several different approaches to estimating layered vp/vs models are employed. Results from these studies have prompted permit authorities to start imposing minimal monitoring requirements. Since in some cases these geothermal projects are separated by only a few kilometers, we investigate the capabilities of an optimized network combining the monitoring resources of six neighboring well doublets in a joint network. The optimization takes into account the highly heterogeneous background noise conditions of this local-scale urban environment and the feasibility of potential monitoring sites, removing non-viable sites before the optimization procedure. First results from the actual network realization show good detection capabilities for small microearthquakes despite the minimal instrumentation effort, demonstrating the benefits of good coordination of monitoring efforts.

  4. Network Analysis on Attitudes: A Brief Tutorial.

    PubMed

    Dalege, Jonas; Borsboom, Denny; van Harreveld, Frenk; van der Maas, Han L J

    2017-07-01

    In this article, we provide a brief tutorial on the estimation, analysis, and simulation of attitude networks using the programming language R. We first discuss what a network is and subsequently show how one can estimate a regularized network on typical attitude data. For this, we use open-access data on attitudes toward Barack Obama during the 2012 American presidential election. Second, we show how one can calculate standard network measures such as community structure, centrality, and connectivity on this estimated attitude network. Third, we show how one can simulate from an estimated attitude network to derive predictions from attitude networks. In doing so, we highlight that network theory provides a framework for both testing and developing formalized hypotheses on attitudes and related core social psychological constructs.

  5. Network Analysis on Attitudes

    PubMed Central

    Borsboom, Denny; van Harreveld, Frenk; van der Maas, Han L. J.

    2017-01-01

    In this article, we provide a brief tutorial on the estimation, analysis, and simulation of attitude networks using the programming language R. We first discuss what a network is and subsequently show how one can estimate a regularized network on typical attitude data. For this, we use open-access data on attitudes toward Barack Obama during the 2012 American presidential election. Second, we show how one can calculate standard network measures such as community structure, centrality, and connectivity on this estimated attitude network. Third, we show how one can simulate from an estimated attitude network to derive predictions from attitude networks. In doing so, we highlight that network theory provides a framework for both testing and developing formalized hypotheses on attitudes and related core social psychological constructs. PMID:28919944

  6. A Comparison Between Heliosat-2 and Artificial Neural Network Methods for Global Horizontal Irradiance Retrievals over Desert Environments

    NASA Astrophysics Data System (ADS)

    Ghedira, H.; Eissa, Y.

    2012-12-01

    Global horizontal irradiance (GHI) retrievals at the surface of any given location can be used for preliminary solar resource assessments. More accurately, the direct normal irradiance (DNI) and diffuse horizontal irradiance (DHI) are also required to estimate the global tilt irradiance, mainly used for fixed flat-plate collectors. Two different satellite-based models for solar irradiance retrievals have been applied over the desert environment of the United Arab Emirates (UAE). Both models employ channels of the SEVIRI instrument, onboard the geostationary satellite Meteosat Second Generation, as their main inputs. The satellite images used in this study have a temporal resolution of 15 min and a spatial resolution of 3 km. The objective of this study is to compare the GHI retrieved using the Heliosat-2 method and an artificial neural network (ANN) ensemble method over the UAE. The high-resolution visible channel of SEVIRI is used in the Heliosat-2 method to derive the cloud index. The cloud index is then used to compute the cloud transmission, while the cloud-free GHI is computed from the Linke turbidity factor. The product of the cloud transmission and the cloud-free GHI gives the estimated GHI. A consistent underestimation of the estimated GHI is observed over the dataset available in the UAE. Therefore, the cloud-free DHI equation in the model was recalibrated to remove the bias. After recalibration, results over the UAE show a root mean square error (RMSE) of 10.1% and a mean bias error (MBE) of -0.5%. As for the ANN approach, six thermal channels of SEVIRI were used to estimate the DHI and the total optical depth of the atmosphere (δ). An ensemble approach is employed to obtain better generalizability of the results, as opposed to using a single weak network. The DNI is then computed from the estimated δ using the Beer-Bouguer-Lambert law. The GHI is computed from the DNI and DHI estimates. The RMSE for the estimated GHI obtained over an independent dataset over the UAE is 7.2% and the MBE is +1.9%. The results obtained by the two methods show that both the recalibrated Heliosat-2 and the ANN ensemble methods estimate the GHI at a 15-min resolution with high accuracy. The advantage of the ANN ensemble approach is that it derives the GHI from accurate DNI and DHI estimates. The DNI and DHI estimates are valuable when computing the global tilt irradiance. Also, accurate DNI estimates are beneficial for preliminary site selection for concentrating solar power plants.
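
    The last step of the ANN chain, computing DNI from the estimated total optical depth δ via the Beer-Bouguer-Lambert law and then assembling GHI, can be written compactly. A sketch under stated assumptions: the Kasten-Young air-mass formula, a fixed solar constant, and illustrative input values.

    ```python
    import math

    def dni_from_optical_depth(delta, sza_deg, e0=1361.0):
        """Direct normal irradiance (W/m^2) from total optical depth via the
        Beer-Bouguer-Lambert law. Uses the Kasten & Young (1989) air mass and
        the solar constant without Sun-Earth distance correction. Sketch only."""
        sza = math.radians(sza_deg)
        m = 1.0 / (math.cos(sza) + 0.50572 * (96.07995 - sza_deg) ** -1.6364)
        return e0 * math.exp(-delta * m)

    def ghi(dni, dhi, sza_deg):
        """GHI from its components: DNI projected on the horizontal, plus DHI."""
        return dni * math.cos(math.radians(sza_deg)) + dhi

    dni = dni_from_optical_depth(delta=0.35, sza_deg=30.0)
    print(f"DNI ~ {dni:.0f} W/m^2, GHI ~ {ghi(dni, dhi=90.0, sza_deg=30.0):.0f} W/m^2")
    ```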

  7. Real-time tsunami inundation forecasting and damage mapping towards enhancing tsunami disaster resiliency

    NASA Astrophysics Data System (ADS)

    Koshimura, S.; Hino, R.; Ohta, Y.; Kobayashi, H.; Musa, A.; Murashima, Y.

    2014-12-01

    Using modern computing power and advanced sensor networks, a project is underway to establish a new system of real-time tsunami inundation forecasting, damage estimation and mapping to enhance society's resilience in the aftermath of a major tsunami disaster. The system consists of a fusion of real-time crustal deformation monitoring and fault model estimation by Ohta et al. (2012), high-performance real-time tsunami propagation/inundation modeling with NEC's vector supercomputer SX-ACE, damage/loss estimation models (Koshimura et al., 2013), and geo-informatics. After a major (near-field) earthquake is triggered, the first response of the system is to identify the tsunami source model by applying the RAPiD algorithm (Ohta et al., 2012) to observed RTK-GPS time series at GEONET sites in Japan. Based on performance with the data obtained during the 2011 Tohoku event, we assume less than 10 minutes as the acquisition time of the source model. Given the tsunami source, the system moves on to running the tsunami propagation and inundation model, which was optimized on the vector supercomputer SX-ACE, to estimate time series of tsunami heights at offshore/coastal tide gauges and determine tsunami travel and arrival times, the extent of the inundation zone, and the maximum flow depth distribution. The implemented tsunami numerical model is based on the non-linear shallow-water equations discretized by the finite difference method. The merged bathymetry and topography grids are prepared with 10 m resolution to better estimate tsunami inland penetration. Given the maximum flow depth distribution, the system performs GIS analysis to determine the numbers of exposed people and structures using census data, then estimates the numbers of potential deaths and damaged structures by applying tsunami fragility curves (Koshimura et al., 2013). Once the tsunami source model is determined, the system is designed to complete the estimation within 10 minutes. The results are disseminated as mapping products to responders and stakeholders, e.g. national and regional municipalities, to be utilized in their emergency response activities. In 2014, the system was verified through case studies of the 2011 Tohoku event and potential earthquake scenarios along the Nankai Trough with regard to its capability and robustness.

  8. Modeling hazardous fire potential within a completed fuel treatment network in the northern Sierra Nevada

    Treesearch

    Brandon M. Collins; Heather A. Kramer; Kurt Menning; Colin Dillingham; David Saah; Peter A. Stine; Scott L. Stephens

    2013-01-01

    We built on previous work by performing a more in-depth examination of a completed landscape fuel treatment network. Our specific objectives were: (1) model hazardous fire potential with and without the treatment network, (2) project hazardous fire potential over several decades to assess fuel treatment network longevity, and (3) assess fuel treatment effectiveness and...

  9. Optical depth retrievals from Delta-T SPN1 measurements of broadband solar irradiance at ground

    NASA Astrophysics Data System (ADS)

    Estelles, Victor; Serrano, David; Segura, Sara; Wood, John; Webb, Nick

    2016-04-01

    The SPN1 radiometer, manufactured by Delta-T Devices Ltd., is an instrument designed for the measurement of global solar irradiance and its components (diffuse, direct) at ground level. In the present study, the direct irradiance component has been used to retrieve an effective total optical depth by applying the Beer-Lambert law to the broadband measurements. The results have been compared with spectral total optical depths derived from two sun-sky radiometers, a Cimel CE318 and a Prede POM01, located at the Burjassot site in Valencia (Spain), during the years 2013-2015. The SPN1 is an inexpensive and versatile instrument for the measurement of the three components of the solar radiation, without any moving parts and without any need to azimuthally align the instrument to track the sun (http://www.delta-t.co.uk). The three components of the solar radiation are estimated from a combination of measurements performed by 7 different miniature thermopiles. In turn, the Beer-Lambert law has been applied to the broadband direct solar component to obtain an effective total optical depth, representative of the total extinction in the atmosphere. For the assessment of the total optical depth values retrieved with the SPN1, the two sun-sky radiometers, which belong to the international networks AERONET and SKYNET, have been employed. The modified SUNRAD package has been applied to both the Cimel and Prede instruments. Cloud-affected data have been removed by applying the Smirnov cloud-screening procedure in the SUNRAD algorithm. The broadband SPN1 total optical depth has been analysed by comparison with the spectral total optical depth from the sun-sky radiometer measurements at wavelengths of 440, 500, 675, 870 and 1020 nm. The slopes and intercepts have been estimated to be 0.47-0.98 and 0.055-0.16 with increasing wavelength. The average correlation coefficients and RMSD were 0.80-0.83 and 0.034-0.036 for all the channels. The analysis shows that the SPN1 instrument increasingly underestimates the TOD with wavelength, for higher TOD. This observation is in agreement with the already known effect of a larger effective field of view in the SPN1, as the aureole radiation increases. In any case, these results are promising and would be useful for determining the total atmospheric extinction, mainly for users of the SPN1 in the solar radiation field.
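
    The retrieval at the core of this study, inverting the Beer-Lambert law for an effective broadband total optical depth, reduces to a one-line formula. A minimal sketch; the plain secant air mass and the input values are illustrative assumptions.

    ```python
    import math

    def effective_tod(direct_normal, sza_deg, e0=1361.0, sun_earth_au=1.0):
        """Invert the Beer-Lambert law for an effective broadband total optical
        depth from a measured direct normal irradiance (W/m^2). A plain secant
        air mass is used for simplicity and is valid away from the horizon."""
        e0_corr = e0 / sun_earth_au**2              # Sun-Earth distance correction
        m = 1.0 / math.cos(math.radians(sza_deg))   # secant air-mass approximation
        return -math.log(direct_normal / e0_corr) / m

    # An SPN1-style broadband direct component of 850 W/m^2 at 40 deg solar zenith.
    print(f"effective TOD ~ {effective_tod(850.0, 40.0):.3f}")
    ```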

  10. Co-occurrence Analysis of Microbial Taxa in the Atlantic Ocean Reveals High Connectivity in the Free-Living Bacterioplankton

    PubMed Central

    Milici, Mathias; Deng, Zhi-Luo; Tomasch, Jürgen; Decelle, Johan; Wos-Oxley, Melissa L.; Wang, Hui; Jáuregui, Ruy; Plumeier, Iris; Giebel, Helge-Ansgar; Badewien, Thomas H.; Wurst, Mascha; Pieper, Dietmar H.; Simon, Meinhard; Wagner-Döbler, Irene

    2016-01-01

    We determined the taxonomic composition of the bacterioplankton of the epipelagic zone of the Atlantic Ocean along a latitudinal transect (51°S–47°N) using Illumina sequencing of the V5-V6 region of the 16S rRNA gene and inferred co-occurrence networks. Bacterioplankton community composition was distinct for Longhurstian provinces and water depth. Free-living microbial communities (between 0.22 and 3 μm) were dominated by highly abundant and ubiquitous taxa with streamlined genomes (e.g., SAR11, SAR86, OM1, Prochlorococcus) and could clearly be separated from particle-associated communities, which were dominated by Bacteroidetes, Planctomycetes, Verrucomicrobia, and Roseobacters. From a total of 369 different communities we then inferred co-occurrence networks for each size fraction and depth layer of the plankton, between bacteria and between bacteria and phototrophic micro-eukaryotes. The inferred networks showed a reduction of edges in the deepest layer of the photic zone. Networks comprising free-living bacteria had a larger number of connections per OTU than the particle-associated communities throughout the water column. Negative correlations accounted for roughly one third of the total edges in the free-living communities at all depths, while in the particle-associated communities they decreased with depth, accounting for roughly 10% of the total in the deepest part of the epipelagic zone. Co-occurrence networks of bacteria with phototrophic micro-eukaryotes were not taxon-specific and were dominated by mutual exclusion (~60%). The data show a high degree of specialization to micro-environments in the water column and highlight the importance of interdependencies, particularly between free-living bacteria in the upper layers of the epipelagic zone. PMID:27199970

  11. Estimation of subsurface thermal structure using sea surface height and sea surface temperature

    NASA Technical Reports Server (NTRS)

    Kang, Yong Q. (Inventor); Jo, Young-Heon (Inventor); Yan, Xiao-Hai (Inventor)

    2012-01-01

    A method of determining a subsurface temperature in a body of water is disclosed. The method includes obtaining surface temperature anomaly data and surface height anomaly data of the body of water for a region of interest, and also obtaining subsurface temperature anomaly data for the region of interest at a plurality of depths. The method further includes regressing the obtained surface temperature anomaly data and surface height anomaly data for the region of interest with the obtained subsurface temperature anomaly data for the plurality of depths to generate regression coefficients, estimating a subsurface temperature at one or more other depths for the region of interest based on the generated regression coefficients and outputting the estimated subsurface temperature at the one or more other depths. Using the estimated subsurface temperature, signal propagation times and trajectories of marine life in the body of water are determined.
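
    A minimal sketch of the regression step described above, with synthetic data standing in for the altimetry (height) and radiometer (temperature) anomalies; the exponential depth sensitivities used to generate the fake observations are arbitrary assumptions introduced only for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    ssh = rng.normal(0.0, 0.1, n)     # sea surface height anomaly (m), synthetic
    sst = rng.normal(0.0, 1.0, n)     # sea surface temperature anomaly (K), synthetic
    depths = [100.0, 300.0, 700.0]    # depths (m) with in-situ data

    # Synthetic "observed" subsurface anomalies; the exponential depth
    # sensitivities are arbitrary and exist only to generate the example.
    T_sub = np.stack(
        [0.5 * np.exp(-d / 500.0) * sst + 8.0 * np.exp(-d / 800.0) * ssh
         + rng.normal(0.0, 0.05, n) for d in depths], axis=1)

    A = np.column_stack([np.ones(n), ssh, sst])       # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, T_sub, rcond=None)  # one coefficient set per depth

    # Estimate the subsurface anomalies at a new time from surface data alone.
    new_surface = np.array([1.0, 0.08, 1.2])          # [1, ssh, sst]
    print("estimated subsurface T anomalies (K):", (new_surface @ coef).round(2))
    ```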

  12. Source Parameters Inversion of the 2012 La Vega, Colombia, Mw 7.2 Earthquake Using Near-Regional Waveform Data

    NASA Astrophysics Data System (ADS)

    Pedraza, P.; Poveda, E.; Blanco Chia, J. F.; Zahradnik, J.

    2013-05-01

    On September 30th, 2012, an earthquake of magnitude Mw 7.2 occurred at a depth of ~170 km in southeastern Colombia. This seismic event is associated with the Nazca plate drifting eastward relative to the South American plate. The distribution of seismicity obtained by the National Seismological Network of Colombia (RSNC) since 1993 shows a segmented subduction zone with varying dip angles. The earthquake occurred in a seismic gap zone at intermediate depth. The recent deployment of broadband seismic stations in Colombia, as part of the Colombian Seismological Network operated by the Colombian Geological Survey, has provided high-quality data for studying the rupture process. We estimated the moment tensor, the centroid position, and the source time function. The parameters were obtained by inverting waveforms recorded by the RSNC at distances of 100 km to 800 km, modeled at 0.01-0.09 Hz using different 1D crustal models with the ISOLA code. The DC percentage of the earthquake is very high (~90%). The focal mechanism is mostly normal, hence the determination of the fault plane is challenging. An attempt to determine the fault plane was made based on the mutual relative position of the centroid and hypocenter (the H-C method). Studies in progress are devoted to searching for possible complexity of the fault rupture process (total duration of about 15 seconds), quantified by multiple-point source models.

  13. Longitudinal analysis on human cervical tissue using optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gan, Yu; Yao, Wang; Myers, Kristin M.; Vink, Joy-Sarah Y.; Wapner, Ronald J.; Hendon, Christine P.

    2017-02-01

    The uterine cervical collagen fiber network is vital to normal cervical function in pregnancy. Previously, we presented an orientation estimation method to enable dispersion analysis on a single axial slice of human cervical tissue obtained from the upper half of the cervix using optical coherence tomography (OCT). How the collagen fiber network structure changes from the internal os (the top of the cervix, which meets the uterus) to the external os (the bottom of the cervix, which extends into the vagina) remains unknown due to the depth penetration limitations of OCT. To establish a collagen fiber directionality "map" of the entire cervix, we imaged serial axial slices of human NP (n=11) and PG (n=2) cervical tissue from the internal to the external os, using Institutional Review Board approved protocols at Columbia University Medical Center. Each slice was divided into four quadrants. In each quadrant, we stitched multiple overlapping OCT volumes and analyzed the en face images that were parallel to the surface. A pixel-wise directionality map was generated. We analyzed fiber trends by measuring the mean angles and quantified dispersion by calculating the standard deviation of the fiber direction over a region of 400 μm × 400 μm. For the initial four samples, our analysis confirms a circumferential fiber pattern in the outer region of slices at all depths. We found that the standard deviation close to the internal os showed no significant difference from the standard deviation close to the external os (p>0.05), indicating comparable dispersion.

  14. Surface wave tomography of Europe from ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Lu, Yang; Stehly, Laurent; Paul, Anne

    2017-04-01

    We present a European-scale, high-resolution 3-D shear wave velocity model derived from ambient seismic noise tomography. In this study, we collect 4 years of continuous seismic recordings from 1293 stations across much of the European region (10˚W-35˚E, 30˚N-75˚N), which yields more than 0.8 million virtual station pairs. This data set compiles records from 67 seismic networks, both permanent and temporary, from the EIDA (European Integrated Data Archive). Rayleigh wave group velocities are measured at each station pair using the multiple-filter analysis technique. Group velocity maps are estimated through a linearized tomographic inversion algorithm at periods from 5 s to 100 s. Adaptive parameterization is used to accommodate heterogeneity in data coverage. We then apply a two-step, data-driven inversion method to obtain the shear wave velocity model: a Monte Carlo inversion to build the starting model, followed by a linearized inversion for further improvement. Finally, Moho depth (and its uncertainty) is determined over most of our study region by identifying sharp velocity discontinuities and analysing their sharpness. The resulting velocity model shows good agreement with the main geological features and previous geophysical studies. Moho depth coincides well with that obtained from active seismic experiments. A focus on the Greater Alpine region (covered by the AlpArray seismic network) displays a clear crustal thickening that follows the arcuate shape of the Alps from the southern French Massif Central to southern Germany.
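
    A minimal sketch of the multiple-filter analysis measurement used here: narrowband Gaussian filtering around a target period, an envelope from the analytic signal, and conversion of the envelope-peak arrival time to group velocity. The filter width, the synthetic waveform, and the 300 km interstation distance are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def group_velocity_mfa(trace, dt, distance_km, period_s, alpha=20.0):
        """One step of multiple-filter analysis: Gaussian band-pass around
        1/period_s, envelope via the analytic signal, and envelope-peak time
        converted to group velocity. alpha sets the (assumed) filter width."""
        n = len(trace)
        freqs = np.fft.rfftfreq(n, dt)
        fc = 1.0 / period_s
        gauss = np.exp(-alpha * ((freqs - fc) / fc) ** 2)     # zero-phase filter
        filtered = np.fft.irfft(np.fft.rfft(trace) * gauss, n)
        envelope = np.abs(hilbert(filtered))
        t_peak = np.argmax(envelope) * dt
        return distance_km / t_peak                            # km/s

    # Synthetic record: a 20 s period wave packet whose envelope peaks at 100 s,
    # "recorded" at an interstation distance of 300 km (so U should be ~3 km/s).
    dt, n = 0.5, 4096
    t = np.arange(n) * dt
    trace = np.exp(-(((t - 100.0) / 30.0) ** 2)) * np.sin(2.0 * np.pi * t / 20.0)
    print(f"U(20 s) ~ {group_velocity_mfa(trace, dt, 300.0, 20.0):.2f} km/s")
    ```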

  15. Investigating local controls on soil moisture temporal stability using an inverse modeling approach

    NASA Astrophysics Data System (ADS)

    Bogena, Heye; Qu, Wei; Huisman, Sander; Vereecken, Harry

    2013-04-01

    A better understanding of the temporal stability of soil moisture and of its relation to local and nonlocal controls is a major challenge in modern hydrology. Both local controls, such as soil and vegetation properties, and non-local controls, such as topography and climate variability, affect soil moisture dynamics. Wireless sensor networks are becoming more readily available, which opens up opportunities to investigate the spatial and temporal variability of soil moisture with unprecedented resolution. In this study, we employed the wireless sensor network SoilNet, developed by Forschungszentrum Jülich, to investigate the soil moisture variability of a grassland headwater catchment in Western Germany within the framework of the TERENO initiative. In particular, we investigated the effect of soil hydraulic parameters on the temporal stability of soil moisture. For this, the HYDRUS-1D code coupled with a global optimizer (DREAM) was used to inversely estimate Mualem-van Genuchten parameters from soil moisture observations at three depths under natural (transient) boundary conditions for 83 locations in the headwater catchment. On the basis of the optimized parameter sets, we then evaluated to what extent the variability in soil hydraulic conductivity, pore size distribution, air entry suction, and soil depth between these 83 locations controlled the temporal stability of soil moisture, which was independently determined from the observed soil moisture data. It was found that the saturated hydraulic conductivity (Ks) was the most significant attribute for explaining the temporal stability of soil moisture as expressed by the mean relative difference (MRD).
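
    The temporal stability metric named at the end, the mean relative difference, is simple to compute; a minimal sketch with synthetic data follows (the array shapes and values are assumptions, not SoilNet observations).

    ```python
    import numpy as np

    def mean_relative_difference(theta):
        """theta: array of shape (n_times, n_locations) of soil moisture.
        Returns the MRD per location: the time-averaged relative departure of
        each location from the spatial mean (Vachaud-style temporal stability)."""
        spatial_mean = theta.mean(axis=1, keepdims=True)    # mean over locations
        rel_diff = (theta - spatial_mean) / spatial_mean    # per time step
        return rel_diff.mean(axis=0)                        # average over time

    rng = np.random.default_rng(2)
    theta = 0.30 + 0.05 * rng.standard_normal((365, 83))    # synthetic: 83 locations
    mrd = mean_relative_difference(theta)
    print("wettest-ranked location:", int(np.argmax(mrd)),
          "MRD:", round(float(mrd.max()), 3))
    ```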

  16. Gestalt grouping via closure degrades suprathreshold depth percepts.

    PubMed

    Deas, Lesley M; Wilcox, Laurie M

    2014-08-19

    It is well known that the perception of depth is susceptible to changes in configuration. For example, stereoscopic precision for a pair of vertical lines can be dramatically reduced when these lines are connected to form a closed object. Here, we extend this paradigm to suprathreshold estimates of perceived depth. Using a touch-sensor, observers made quantitative estimates of depth between a vertical line pair presented in isolation or as edges of a closed rectangular object with different figural interpretations. First, we show that the amount of depth estimated within a closed rectangular object is consistently reduced relative to the vertical edges presented in isolation or when they form the edges of two segmented objects. We then demonstrate that the reduction in perceived depth for closed objects is modulated by manipulations that influence perceived closure of the central figure. Depth percepts were most disrupted when the horizontal connectors and vertical lines matched in color. Perceived depth increased slightly when the connectors had opposite contrast polarity, but increased dramatically when flankers were added. Thus, as grouping cues were added to counter the interpretation of a closed object, the depth degradation effect was systematically eliminated. The configurations tested here rule out explanations based on early, local interactions such as inhibition or cue conflict; instead, our results provide strong evidence of the impact of Gestalt grouping, via closure, on depth magnitude percepts from stereopsis. © 2014 ARVO.

  17. Texture and haptic cues in slant discrimination: reliability-based cue weighting without statistically optimal cue combination

    NASA Astrophysics Data System (ADS)

    Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.

    2005-05-01

    A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination: we find reliability-based reweighting, but not statistically optimal cue combination.
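
    The statistically optimal benchmark referred to here is the minimum-variance linear combination, in which each cue is weighted by its inverse variance. A minimal sketch, with assumed per-cue noise levels:

    ```python
    import numpy as np

    def optimal_cue_weights(sigmas):
        """Minimum-variance (statistically optimal) weights for combining
        independent, unbiased cue estimates: w_i proportional to 1/sigma_i^2."""
        inv_var = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        return inv_var / inv_var.sum()

    # A reliable haptic cue (sigma = 2 deg) and a noisier texture cue (sigma = 4 deg).
    w = optimal_cue_weights([2.0, 4.0])
    slant_estimates = np.array([30.0, 38.0])        # per-cue slant estimates (deg)
    combined = float(w @ slant_estimates)
    sigma_combined = float(np.sqrt(1.0 / (1.0 / 4.0 + 1.0 / 16.0)))
    print(f"weights = {w.round(2)}, combined slant = {combined:.1f} deg, "
          f"sigma = {sigma_combined:.2f} deg (below both single-cue sigmas)")
    ```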

  18. Stages as models of scene geometry.

    PubMed

    Nedović, Vladimir; Smeulders, Arnold W M; Redert, André; Geusebroek, Jan-Mark

    2010-09-01

    Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently, we identify geometric scene categorization as the first step toward robust and efficient depth estimation from single images. We introduce 15 typical 3D scene geometries called stages, each with a unique depth profile, which roughly correspond to a large majority of broadcast video frames. Stage information serves as a first approximation of global depth, narrowing down the search space in depth estimation and object localization. We propose different sets of low-level features for depth estimation, and perform stage classification on two diverse data sets of television broadcasts. Classification results demonstrate that stages can often be efficiently learned from low-dimensional image representations.

  19. Developing a robust wireless sensor network structure for environmental sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Oroza, C.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.

    2013-12-01

    The American River Hydrologic Observatory is being strategically deployed as a real-time, ground-based measurement network that delivers accurate and timely information on snow conditions and other hydrologic attributes with previously unheard-of granularity in time and space. The basin-scale network involves 18 sub-networks set out at physiographically representative locations spanning the seasonally snow-covered half of the 5000 km2 American River basin. Each sub-network, covering about a 1-km2 area, consists of 10 wirelessly networked sensing nodes that continuously measure and telemeter temperature and snow depth; selected locations are also equipped with sensors for relative humidity, solar radiation, and soil moisture at several depths. The sensor locations were chosen to maximize the variance sampled for snow depth within the basin. Network design and deployment is an iterative but efficient process. After sensor-station locations are determined, a robust network of interlinking sensor stations and signal repeaters must be constructed to route sensor data to a central base station with a two-way data uplink. Data can then be uploaded from the site to remote servers in real time through satellite and cell modems. Signal repeaters are placed so as to form a robust, self-healing network with redundant signal paths to the base station. Manual, trial-and-error heuristic approaches to node placement are inefficient and labor intensive: field personnel must restructure the network in real time and wait for new network statistics to be calculated at the base station before finalizing a placement, acting without knowledge of the global topography or overall network structure. We show how digital elevation models plus high-definition aerial photographs (to estimate foliage coverage) can optimize the planning of signal-repeater placements and guarantee a robust network structure prior to physical deployment. We can also 'stress test' the final network by simulating the failure of individual nodes and investigating the effect and the self-healing ability of the stressed network. The resulting sensor network can survive temporary service interruption of a small subset of signal repeaters and sensor stations. The robustness and resilience of the network ensure the integrity of the dataset and real-time transmission during harsh conditions.
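
    The 'stress test' described here can be prototyped with the third-party networkx package (an assumption; not the project's tooling): remove each repeater in turn and check whether every sensor still reaches the base station through a redundant path. The toy topology below is invented for illustration.

    ```python
    import networkx as nx  # third-party package; an assumption, not the project's tooling

    # Toy topology: four sensors, three repeaters, one base station, with
    # redundant repeater-to-repeater and repeater-to-base links.
    G = nx.Graph()
    sensors = [f"s{i}" for i in range(4)]
    G.add_edges_from([("s0", "r1"), ("s0", "r2"), ("s1", "r1"), ("s1", "r2"),
                      ("s2", "r2"), ("s2", "r3"), ("s3", "r2"), ("s3", "r3"),
                      ("r1", "r2"), ("r2", "r3"), ("r1", "base"), ("r3", "base")])

    def survives_failure(G, failed, sensors, sink="base"):
        """True if every sensor still reaches the sink after 'failed' is removed,
        i.e. the self-healing mesh can reroute around the dead node."""
        H = G.copy()
        H.remove_node(failed)
        return all(nx.has_path(H, s, sink) for s in sensors if s != failed)

    for r in ["r1", "r2", "r3"]:
        print(f"failure of {r} tolerated: {survives_failure(G, r, sensors)}")
    ```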

  20. Estimation of depth to magnetic source using maximum entropy power spectra, with application to the Peru-Chile Trench

    USGS Publications Warehouse

    Blakely, Richard J.

    1981-01-01

    Estimates of the depth to magnetic sources based on the power spectrum of magnetic anomalies generally require long magnetic profiles. The method developed here uses the maximum entropy power spectrum (MEPS) to calculate depth to source on short windows of magnetic data; resolution is thereby improved. The method operates by dividing a profile into overlapping windows, calculating a maximum entropy power spectrum for each window, linearizing the spectra, and calculating the various depth estimates by least squares. The assumptions of the method are that the source is two-dimensional and that the intensity of magnetization includes random noise; knowledge of the direction of magnetization is not required. The method is applied to synthetic data and to observed marine anomalies over the Peru-Chile Trench. The analyses indicate a continuous magnetic basement extending from the eastern margin of the Nazca plate into the subduction zone. The computed basement depths agree with acoustic basement seaward of the trench axis, but deepen as the plate approaches the inner trench wall. This apparent increase in the computed depths may result from the deterioration of magnetization in the upper part of the ocean crust, possibly caused by compressional disruption of the basaltic layer. Landward of the trench axis, the depth estimates indicate possible thrusting of the oceanic material into the lower slope of the continental margin.
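
    The relation exploited by spectral depth estimators of this kind is that the anomaly power spectrum of a source at depth d decays as exp(-2|k|d), so depth follows from the slope of the log spectrum. The sketch below uses a plain windowed periodogram as a stand-in for the maximum entropy spectrum (a deliberate simplification; MEPS is precisely what lets the paper use much shorter windows), with a synthetic line-source anomaly.

    ```python
    import numpy as np

    def spectral_depth(profile, dx):
        """Depth to source from the decay of the anomaly power spectrum:
        ln P(k) ~ const - 2*k*d, with k in rad/length. A windowed periodogram
        stands in for the paper's maximum entropy spectrum (a deliberate
        simplification that needs longer windows)."""
        n = len(profile)
        spec = np.abs(np.fft.rfft(profile * np.hanning(n))) ** 2
        k = 2.0 * np.pi * np.fft.rfftfreq(n, dx)       # rad per km
        sel = slice(1, n // 8)                         # low-wavenumber band, skip DC
        slope, _ = np.polyfit(k[sel], np.log(spec[sel]), 1)
        return -slope / 2.0                            # same length unit as dx

    # Synthetic anomaly of a 2-D line source at 3 km depth, sampled every 0.5 km:
    # along the profile its field falls off like d / (x^2 + d^2).
    dx, d = 0.5, 3.0
    x = np.arange(-128, 128) * dx
    profile = d / (x**2 + d**2)
    print(f"estimated depth ~ {spectral_depth(profile, dx):.1f} km (true: {d} km)")
    ```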

  1. Tensor-guided fitting of subduction slab depths

    USGS Publications Warehouse

    Bazargani, Farhad; Hayes, Gavin P.

    2013-01-01

    Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.

  2. Neural Network Modeling for Gallium Arsenide IC Fabrication Process and Device Characteristics.

    NASA Astrophysics Data System (ADS)

    Creech, Gregory Lee, I.

    This dissertation presents research focused on the utilization of neurocomputing technology to achieve enhanced yield and effective yield prediction in integrated circuit (IC) manufacturing. Artificial neural networks are employed to model complex relationships between material and device characteristics at critical stages of the semiconductor fabrication process. Whole wafer testing was performed on the starting substrate material and during wafer processing at four critical steps: Ohmic or Post-Contact, Post-Recess, Post-Gate and Final, i.e., at completion of fabrication. Measurements taken and subsequently used in modeling include, among others, doping concentrations, layer thicknesses, planar geometries, layer-to-layer alignments, resistivities, device voltages, and currents. The neural network architecture used in this research is the multilayer perceptron neural network (MLPNN). The MLPNN is trained in the supervised mode using the generalized delta learning rule. It has one hidden layer and uses continuous perceptrons. The research focuses on a number of different aspects. First is the development of inter-process stage models. Intermediate process stage models are created in a progressive fashion. Measurements of material and process/device characteristics taken at a specific processing stage and any previous stages are used as input to the model of the next processing stage characteristics. As the wafer moves through the fabrication process, measurements taken at all previous processing stages are used as input to each subsequent process stage model. Secondly, the development of neural network models for the estimation of IC parametric yield is demonstrated. Measurements of material and/or device characteristics taken at earlier fabrication stages are used to develop models of the final DC parameters. These characteristics are computed with the developed models and compared to acceptance windows to estimate the parametric yield. A sensitivity analysis is performed on the models developed during this yield estimation effort. This is accomplished by analyzing the total disturbance of network outputs due to perturbed inputs. When an input characteristic bears no, or little, statistical or deterministic relationship to the output characteristics, it can be removed as an input. Finally, neural network models are developed in the inverse direction. Characteristics measured after the final processing step are used as the input to model critical in-process characteristics. The modeled characteristics are used for whole wafer mapping and its statistical characterization. It is shown that this characterization can be accomplished with minimal in-process testing. The concepts and methodologies used in the development of the neural network models are presented. The modeling results are provided and compared to the actual measured values of each characteristic. An in-depth discussion of these results and ideas for future research are presented.

  3. Compensating for geographic variation in detection probability with water depth improves abundance estimates of coastal marine megafauna.

    PubMed

    Hagihara, Rie; Jones, Rhondda E; Sobtzick, Susan; Cleguer, Christophe; Garrigue, Claire; Marsh, Helene

    2018-01-01

    The probability of an aquatic animal being available for detection is typically <1. Accounting for covariates that reduce the probability of detection is important for obtaining robust estimates of population abundance and for determining its status and trends. The dugong (Dugong dugon) is a bottom-feeding marine mammal and a seagrass community specialist. We hypothesized that the probability of a dugong being available for detection depends on water depth and that dugongs spend more time underwater in deep-water seagrass habitats than in shallow-water seagrass habitats. We tested this hypothesis by quantifying the depth use of 28 wild dugongs fitted with GPS satellite transmitters and time-depth recorders (TDRs) at three sites with distinct seagrass depth distributions: 1) open waters supporting extensive seagrass meadows to 40 m deep (Torres Strait, 6 dugongs, 2015); 2) a protected bay (average water depth 6.8 m) with extensive shallow seagrass beds (Moreton Bay, 13 dugongs, 2011 and 2012); and 3) a mixture of lagoon, coral and seagrass habitats to 60 m deep (New Caledonia, 9 dugongs, 2013). The fitted instruments were used to measure the times the dugongs spent in the experimentally determined detection zones under various environmental conditions. The estimated probability of detection was applied to aerial survey data previously collected at each location. In general, dugongs were least available for detection in Torres Strait, and the population estimates there increased 6- to 7-fold using depth-specific availability correction factors, compared with earlier estimates that assumed homogeneous detection probability across water depth and location. Detection probabilities were higher in Moreton Bay and New Caledonia than in Torres Strait because the water transparency in these two locations was much greater, and the effect of correcting for depth-specific detection probability was much smaller. The methodology is applicable to visual surveys of coastal megafauna, including surveys using Unmanned Aerial Vehicles.
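
    The availability correction has a simple Horvitz-Thompson form: each sighted animal represents 1/(availability × perception probability) animals. A minimal sketch with invented numbers, showing how depth-specific availability inflates the corrected estimate relative to a homogeneous assumption:

    ```python
    import numpy as np

    def corrected_abundance(counts, p_available, p_perception=1.0):
        """Horvitz-Thompson-style correction: each sighted animal stands for
        1 / (availability x perception) animals in the population."""
        counts = np.asarray(counts, dtype=float)
        p_available = np.asarray(p_available, dtype=float)
        return float(np.sum(counts / (p_available * p_perception)))

    # Invented depth strata: dugongs over deep seagrass spend more time below
    # the detection zone, so their availability is lower.
    counts      = [40, 25, 10]     # animals sighted per stratum
    p_available = [0.6, 0.3, 0.1]  # probability of being in the detection zone

    naive = sum(counts) / 0.6      # shallow-water availability assumed everywhere
    print(f"depth-corrected estimate: {corrected_abundance(counts, p_available):.0f}")
    print(f"homogeneous-availability estimate: {naive:.0f}")
    ```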

  4. Hydrogeologic data for the Big River-Mishnock River stream-aquifer system, central Rhode Island

    USGS Publications Warehouse

    Craft, P.A.

    2001-01-01

    Hydrogeology, ground-water development alternatives, and water quality in the BigMishnock stream-aquifer system in central Rhode Island are being investigated as part of a long-term cooperative program between the Rhode Island Water Resources Board and the U.S. Geological Survey to evaluate the ground-water resources throughout Rhode Island. The study area includes the Big River drainage basin and that portion of the Mishnock River drainage basin upstream from the Mishnock River at State Route 3. This report presents geologic data and hydrologic and water-quality data for ground and surface water. Ground-water data were collected from July 1996 through September 1998 from a network of observation wells consisting of existing wells and wells installed for this study, which provided a broad distribution of data-collection sites throughout the study area. Streambed piezometers were used to obtain differences in head data between surface-water levels and ground-water levels to help evaluate stream-aquifer interactions throughout the study area. The types of data presented include monthly ground-water levels, average daily ground-water withdrawals, drawdown data from aquifer tests, and water-quality data. Historical water-level data from other wells within the study area also are presented in this report. Surface-water data were obtained from a network consisting of surface-water impoundments, such as ponds and reservoirs, existing and newly established partial-record stream-discharge sites, and synoptic surface-water-quality sites. Water levels were collected monthly from the surface-water impoundments. Stream-discharge measurements were made at partial-record sites to provide measurements of inflow, outflow, and internal flow throughout the study area. Specific conductance was measured monthly at partial-record sites during the study, and also during the fall and spring of 1997 and 1998 at 41 synoptic sites throughout the study area. General geologic data, such as estimates of depth to bedrock and depth to water table, as well as indications of underlying geologic structure, were obtained from geophysical surveys. Site-specific geologic data were collected during the drilling of observation wells and test holes. These data include depth to bedrock or refusal, depth to water table, and lithologic information.

  5. Temporal Surface Reconstruction

    DTIC Science & Technology

    1991-05-03

    [Only fragmentary text of this report is indexed; the surviving fragments concern incremental schemes for estimating feature locations and depth from image sequences, and cite International Journal of Computer Vision, 3, 1989, and S. J. Maybank, "Filter based estimates of depth," In Proceedings of the ...]

  6. Proceedings of the 11th Annual DARPA/AFGL Seismic Research Symposium

    NASA Astrophysics Data System (ADS)

    Lewkowicz, James F.; McPhetres, Jeanne M.

    1990-11-01

    The following subjects are covered: near source observations of quarry explosions; small explosion discrimination and yield estimation; Rg as a depth discriminant for earthquakes and explosions: a case study in New England; a comparative study of high frequency seismic noise at selected sites in the USSR and USA; chemical explosions and the discrimination problem; application of simulated annealing to joint hypocenter determination; frequency dependence of Q(sub Lg) and Q in the continental crust; statistical approaches to testing for compliance with a threshold test ban treaty; broad-band studies of seismic sources at regional and teleseismic distances using advanced time series analysis methods; effects of depth of burial and tectonic release on regional and teleseismic explosion waveforms; finite difference simulations of seismic wave excitation at Soviet test sites with deterministic structures; stochastic geologic effects on near-field ground motions; the damage mechanics of porous rock; nonlinear attenuation mechanism in salt at moderate strain; compressional- and shear-wave polarizations at the Anza seismic array; and a generalized beamforming approach to real time network detection and phase association.

  7. Passive remote sensing of altitude and optical depth of dust plumes using the oxygen A and B bands: First results from EPIC/DSCOVR at Lagrange-1 point

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoguang; Wang, Jun; Wang, Yi; Zeng, Jing; Torres, Omar; Yang, Yuekui; Marshak, Alexander; Reid, Jeffrey; Miller, Steve

    2017-07-01

    We present an algorithm for inferring aerosol layer height (ALH) and aerosol optical depth (AOD) over the ocean surface from radiances in the oxygen A and B bands measured by the Earth Polychromatic Imaging Camera (EPIC) on the Deep Space Climate Observatory (DSCOVR) orbiting at the Lagrangian-1 point. The algorithm was applied to EPIC imagery of a 2-day dust outbreak over the North Atlantic Ocean. Retrieved ALHs and AODs were evaluated against counterparts observed by the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP), the Moderate Resolution Imaging Spectroradiometer (MODIS), and the Aerosol Robotic Network. The comparisons showed that 71.5% of EPIC-retrieved ALHs were within ±0.5 km of those determined from CALIOP and that 74.4% of EPIC AOD retrievals fell within a ±(0.1 + 10%) envelope of MODIS retrievals. This study demonstrates the potential of EPIC measurements for retrieving global aerosol height multiple times daily, which is essential for evaluating aerosol profiles simulated in climate models and for better estimating aerosol radiative effects.

  8. The design and implementation of postprocessing for depth map on real-time extraction system.

    PubMed

    Tang, Zhiwei; Li, Bin; Li, Huosheng; Xu, Zheng

    2014-01-01

    Depth estimation is a key technology in stereo vision. Real-time depth maps can be obtained with hardware, but hardware such as an FPGA cannot implement algorithms as complicated as software can because of restrictions in the hardware structure. Consequently, some incorrect stereo matches will inevitably occur during hardware depth estimation. To solve this problem, a postprocessing function is designed in this paper. After a matching-cost uniqueness test, both left-right and right-left consistency checks are implemented; the cavities in the depth maps can then be filled with reliable depth values on the basis of the right-left consistency check. The experimental results show that depth map extraction and the postprocessing function can be implemented in real time in the same system, and that the quality of the resulting depth maps is satisfactory.
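
    A minimal software sketch of the two postprocessing steps described (a left-right consistency check and filling of the resulting cavities); the nearest-valid-to-the-left filling rule and the toy disparity maps are simplifying assumptions, not the paper's FPGA implementation.

    ```python
    import numpy as np

    def lr_consistency(disp_left, disp_right, tol=1):
        """Mask disparities failing the left-right check: a left pixel's disparity
        must agree (within tol) with the right map at the pixel it points to."""
        h, w = disp_left.shape
        xs = np.arange(w)
        valid = np.zeros((h, w), dtype=bool)
        for y in range(h):
            x_right = np.clip(xs - disp_left[y].astype(int), 0, w - 1)
            valid[y] = np.abs(disp_left[y] - disp_right[y, x_right]) <= tol
        return valid

    def fill_holes(disp, valid):
        """Fill invalidated pixels with the nearest valid disparity to the left,
        a common simple cavity-filling rule (not the paper's FPGA logic)."""
        out = disp.astype(float)
        for y in range(out.shape[0]):
            last = np.nan
            for x in range(out.shape[1]):
                if valid[y, x]:
                    last = out[y, x]
                elif not np.isnan(last):
                    out[y, x] = last
        return out

    disp_l = np.array([[2, 2, 6, 2, 2, 2, 2, 2]])
    disp_r = np.full((1, 8), 2)
    valid = lr_consistency(disp_l, disp_r)
    print(fill_holes(disp_l, valid))  # the inconsistent '6' is replaced by 2
    ```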

  9. Comparison of NEXRAD multisensor precipitation estimates to rain gage observations in and near DuPage County, Illinois, 2002–12

    USGS Publications Warehouse

    Spies, Ryan R.; Over, Thomas M.; Ortel, Terry W.

    2018-05-21

    In this report, precipitation data from 2002 to 2012 from the hourly gridded Next-Generation Radar (NEXRAD)-based Multisensor Precipitation Estimate (MPE) precipitation product are compared to precipitation data from two rain gage networks—an automated tipping bucket network of 25 rain gages operated by the U.S. Geological Survey (USGS) and 51 rain gages from the volunteer-operated Community Collaborative Rain, Hail, and Snow (CoCoRaHS) network—in and near DuPage County, Illinois, at a daily time step to test for long-term differences in space, time, and distribution. The NEXRAD–MPE data that are used are from the fifty 2.5-mile grid cells overlying the rain gages from the other networks. Because of the challenges of measuring frozen precipitation, the analysis period is separated into days with and without the chance of freezing conditions. The NEXRAD–MPE and tipping-bucket rain gage precipitation data are adjusted to account for undercatch by multiplying by a previously determined factor of 1.14. Under nonfreezing conditions, the three precipitation datasets are broadly similar in cumulative depth and distribution of daily values when the data are combined spatially across the networks. However, the NEXRAD–MPE data indicate a significant trend relative to both rain gage networks as a function of distance from the NEXRAD radar just south of the study area. During freezing conditions, only the heated gages of the USGS network were considered; these gages indicate substantial mean undercatches of 50 and 61 percent relative to the NEXRAD–MPE and the CoCoRaHS gages, respectively. The heated USGS rain gages also indicate substantially lower quantile values during freezing conditions, except during the most extreme (highest) events. Because NEXRAD precipitation products are continually evolving, the report concludes with a discussion of recent changes in those products and their potential for improved precipitation estimation. An appendix provides an analysis of spatially combined NEXRAD–MPE precipitation data as a function of temperature at an hourly time scale and indicates, among other results, that most precipitation in the study area occurs at moderate temperatures of 30 to 74 degrees Fahrenheit. However, when precipitation does occur, its intensity increases with temperature to about 86 degrees Fahrenheit.

  10. Evolution of Neural Networks for the Prediction of Hydraulic Conductivity as a Function of Borehole Geophysical Logs: Shobasama Site, Japan

    NASA Astrophysics Data System (ADS)

    Reeves, P.; McKenna, S. A.; Takeuchi, S.; Saegusa, H.

    2003-12-01

    In situ measurements of hydraulic conductivity in fractured rocks are expensive to acquire. Borehole geophysical measurements are relatively inexpensive to acquire but do not provide direct information on hydraulic conductivity. These geophysical measurements quantify properties of the rock that influence the hydraulic conductivity, and it may be possible to employ a non-linear combination of these measurements to estimate hydraulic conductivity. Geophysical measurements collected in fractured granite at the Shobasama site in central Japan were used as the input to a feed-forward neural network. A simple genetic algorithm was used to simultaneously evolve the architecture and parameters of the neural network and to determine an optimal subset of geophysical measurements for the prediction of hydraulic conductivity. The initial estimation procedure focused on predicting the class of the hydraulic conductivity (high, medium or low) from the geophysical measurements, while using the genetic algorithm to simultaneously determine the most important geophysical logs and optimize the architecture of the neural network. Results show that certain geophysical logs provide more information than others; most notably, the short-normal resistivity, micro-resistivity, porosity and sonic logs provided the most information on hydraulic conductivity. The neural network produced excellent training results, with accuracies of 90 percent or greater, but was unable to produce accurate predictions of the hydraulic conductivity class. In the second phase of calculations, the selection of geophysical measurements was limited to only those that provide significant information. Additionally, this second phase predicted transmissivity instead of hydraulic conductivity in order to account for the differences in the length of the hydraulic test zones. The resulting predictions of transmissivity exhibit conditional bias, with maximum prediction errors of three orders of magnitude occurring at the extreme measurement values. Results of these simulations indicate that the most informative geophysical measurements for the prediction of transmissivity are depth and sonic velocity. The long-normal resistivity and self-potential geophysical measurements are moderately informative. In addition, it was found that porosity and crack counts (clear, open, or hairline) do not inform predictions of transmissivity. This work was funded by the Japan Nuclear Cycle Development Institute. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94-AL-85000.

  11. What We Do Not Yet Know About Global Ocean Depths, and How Satellite Altimetry Can Help

    NASA Astrophysics Data System (ADS)

    Smith, W. H. F.; Sandwell, D. T.; Marks, K. M.

    2017-12-01

    Half of Earth's ocean floor lies several kilometers or more from the nearest depth measurement. Areas more than 50 km from any sounding sum to a total area larger than the entire United States land area; areas more than 100 km from any sounding comprise a total area larger than Alaska. In remote basins the majority of available data were collected before the mid-1960s, and so are often mis-located by many kilometers, as well as mis-digitized. Satellite altimetry has mapped the marine gravity field with better than 10 km horizontal resolution, revealing nearly all seamounts taller than 2 km; new data can detect some seamounts less than 1 km tall. Seafloor topography can be estimated from satellite altimetry if sediment is thin and relief is due to seafloor spreading and mid-plate volcanism. The accuracy of the estimate depends on the geological nature of the relief and on the accuracy of the soundings available to calibrate the estimation. At best, the estimate is a band-pass-filtered version of the true depth variations, and it does not resolve the small-scale seafloor roughness needed to model mixing and dissipation in the ocean. In areas of thick or variable sediment cover there can be little correlation between depth and altimetry. Yet altimeter-estimated depth is the best guess available in most of the ocean. The MH370 search area provides an illustration. Prior to the search it was very sparsely (1% to 5%) covered by soundings, many of which were old, low-tech data, and plateaus with thick sediments complicate the estimation of depth from altimetry. Even so, the estimate was generally correct about the tectonic nature of the terrain and the extent of depth variations to be expected. If ships fill gaps strategically, visiting areas where altimetry shows that interesting features will be found and passing near the centroids of the larger gaps, the data will be exciting in their own right and will also improve future altimetry estimates.

  12. Resource-aware system architecture model for implementation of quantum aided Byzantine agreement on quantum repeater networks

    NASA Astrophysics Data System (ADS)

    Taherkhani, Mohammand Amin; Navi, Keivan; Van Meter, Rodney

    2018-01-01

    Quantum aided Byzantine agreement is an important distributed quantum algorithm with unique features in comparison to classical deterministic and randomized algorithms, requiring only a constant expected number of rounds in addition to giving a higher level of security. In this paper, we analyze details of the high level multi-party algorithm, and propose elements of the design for the quantum architecture and circuits required at each node to run the algorithm on a quantum repeater network (QRN). Our optimization techniques have reduced the quantum circuit depth by 44% and the number of qubits in each node by 20% for a minimum five-node setup compared to the design based on the standard arithmetic circuits. These improvements lead to a quantum system architecture with 160 qubits per node, space-time product (an estimate of the required fidelity) KQ ≈ 1.3 × 10^5 per node and error threshold 1.1 × 10^-6 for the total nodes in the network. The evaluation of the designed architecture shows that to execute the algorithm once on the minimum setup, we need to successfully distribute a total of 648 Bell pairs across the network, spread evenly between all pairs of nodes. This framework can be considered a starting point for establishing a road-map for light-weight demonstration of a distributed quantum application on QRNs.

  13. Effects of Increasing Neuromuscular Electrical Stimulation Current Intensity on Cortical Sensorimotor Network Activation: A Time Domain fNIRS Study

    PubMed Central

    Zucchelli, Lucia; Perrey, Stephane; Contini, Davide; Caffini, Matteo; Spinelli, Lorenzo; Kerr, Graham; Quaresima, Valentina; Ferrari, Marco; Torricelli, Alessandro

    2015-01-01

    Neuroimaging studies have shown neuromuscular electrical stimulation (NMES)-evoked movements activate regions of the cortical sensorimotor network, including the primary sensorimotor cortex (SMC), premotor cortex (PMC), supplementary motor area (SMA), and secondary somatosensory area (S2), as well as regions of the prefrontal cortex (PFC) known to be involved in pain processing. The aim of this study, on nine healthy subjects, was to compare the cortical network activation profile and pain ratings during NMES of the right forearm wrist extensor muscles at increasing current intensities up to and slightly over the individual maximal tolerated intensity (MTI), and with reference to voluntary (VOL) wrist extension movements. By exploiting the capability of the multi-channel time domain functional near-infrared spectroscopy technique to relate depth information to the photon time-of-flight, the cortical and superficial oxygenated (O2Hb) and deoxygenated (HHb) hemoglobin concentrations were estimated. The O2Hb and HHb maps obtained using the General Linear Model (NIRS-SPM) analysis method, showed that the VOL and NMES-evoked movements significantly increased activation (i.e., increase in O2Hb and corresponding decrease in HHb) in the cortical layer of the contralateral sensorimotor network (SMC, PMC/SMA, and S2). However, the level and area of contralateral sensorimotor network (including PFC) activation was significantly greater for NMES than VOL. Furthermore, there was greater bilateral sensorimotor network activation with the high NMES current intensities which corresponded with increased pain ratings. In conclusion, our findings suggest that greater bilateral sensorimotor network activation profile with high NMES current intensities could be in part attributable to increased attentional/pain processing and to increased bilateral sensorimotor integration in these cortical regions. PMID:26158464

  14. Effects of Increasing Neuromuscular Electrical Stimulation Current Intensity on Cortical Sensorimotor Network Activation: A Time Domain fNIRS Study.

    PubMed

    Muthalib, Makii; Re, Rebecca; Zucchelli, Lucia; Perrey, Stephane; Contini, Davide; Caffini, Matteo; Spinelli, Lorenzo; Kerr, Graham; Quaresima, Valentina; Ferrari, Marco; Torricelli, Alessandro

    2015-01-01

    Neuroimaging studies have shown neuromuscular electrical stimulation (NMES)-evoked movements activate regions of the cortical sensorimotor network, including the primary sensorimotor cortex (SMC), premotor cortex (PMC), supplementary motor area (SMA), and secondary somatosensory area (S2), as well as regions of the prefrontal cortex (PFC) known to be involved in pain processing. The aim of this study, on nine healthy subjects, was to compare the cortical network activation profile and pain ratings during NMES of the right forearm wrist extensor muscles at increasing current intensities up to and slightly over the individual maximal tolerated intensity (MTI), and with reference to voluntary (VOL) wrist extension movements. By exploiting the capability of the multi-channel time domain functional near-infrared spectroscopy technique to relate depth information to the photon time-of-flight, the cortical and superficial oxygenated (O2Hb) and deoxygenated (HHb) hemoglobin concentrations were estimated. The O2Hb and HHb maps obtained using the General Linear Model (NIRS-SPM) analysis method, showed that the VOL and NMES-evoked movements significantly increased activation (i.e., increase in O2Hb and corresponding decrease in HHb) in the cortical layer of the contralateral sensorimotor network (SMC, PMC/SMA, and S2). However, the level and area of contralateral sensorimotor network (including PFC) activation was significantly greater for NMES than VOL. Furthermore, there was greater bilateral sensorimotor network activation with the high NMES current intensities which corresponded with increased pain ratings. In conclusion, our findings suggest that greater bilateral sensorimotor network activation profile with high NMES current intensities could be in part attributable to increased attentional/pain processing and to increased bilateral sensorimotor integration in these cortical regions.

  15. Depth Estimates for Slingram Electromagnetic Anomalies from Dipping Sheet-like Bodies by the Normalized Full Gradient Method

    NASA Astrophysics Data System (ADS)

    Dondurur, Derman

    2005-11-01

    The Normalized Full Gradient (NFG) method was proposed in the mid-1960s and has generally been used for the downward continuation of potential field data. The method eliminates the side oscillations that appear on the continuation curves when passing through the depth of the anomalous body. In this study, the NFG method was applied to Slingram electromagnetic anomalies to obtain the depth of the anomalous body. Experiments were performed on theoretical Slingram model anomalies in a free-space environment using a perfectly conductive thin tabular conductor with an infinite depth extent. The theoretical Slingram responses were obtained for different depths, dip angles and coil separations, and it was observed from the NFG fields of the theoretical anomalies that the NFG sections yield the depth of the top of the conductor at low harmonic numbers. The NFG sections consist of two main local maxima located on both sides of the central negative Slingram anomaly. It is concluded that these two maxima also locate the maximum anomaly-gradient points, which indicate the depth of the anomalous target directly. For both theoretical and field data, the depth of the maximum value on the NFG sections corresponds to the depth of the upper edge of the anomalous conductor. The NFG method was applied to the in-phase component, and correct depth estimates were obtained even for the horizontal tabular conductor. Depth values could be estimated with a relatively small error percentage when the conductive model was near-vertical and/or the conductor depth was larger.

  16. Sampling strategies to improve passive optical remote sensing of river bathymetry

    USGS Publications Warehouse

    Legleiter, Carl; Overstreet, Brandon; Kinzel, Paul J.

    2018-01-01

    Passive optical remote sensing of river bathymetry involves establishing a relation between depth and reflectance that can be applied throughout an image to produce a depth map. Building upon the Optimal Band Ratio Analysis (OBRA) framework, we introduce sampling strategies for constructing calibration data sets that lead to strong relationships between an image-derived quantity and depth across a range of depths. Progressively excluding observations that exceed a series of cutoff depths from the calibration process improved the accuracy of depth estimates and allowed the maximum detectable depth (dmax) to be inferred directly from an image. Depth retrieval in two distinct rivers also was enhanced by a stratified version of OBRA that partitions field measurements into a series of depth bins to avoid biases associated with under-representation of shallow areas in typical field data sets. In the shallower, clearer of the two rivers, including the deepest field observations in the calibration data set did not compromise depth retrieval accuracy, suggesting that dmax was not exceeded and the reach could be mapped without gaps. Conversely, in the deeper and more turbid stream, progressive truncation of input depths yielded a plausible estimate of dmax consistent with theoretical calculations based on field measurements of light attenuation by the water column. This result implied that the entire channel, including pools, could not be mapped remotely. However, truncation improved the accuracy of depth estimates in areas shallower than dmax, which comprise the majority of the channel and are of primary interest for many habitat-oriented applications.
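
    The progressive truncation strategy can be sketched compactly: fit the ln band ratio against depth for a series of cutoff depths and track the fit quality of each truncated calibration set. The function and variable names are mine, and the reflectances and depths are assumed inputs, not the paper's data.

        import numpy as np

        def obra_truncation(R1, R2, depth, cutoffs):
            """For each cutoff depth, calibrate depth against the image-derived
            quantity X = ln(R1/R2) using only observations at or above the cutoff,
            and return (cutoff, R^2) pairs; a drop in R^2 as the cutoff grows
            suggests the maximum detectable depth has been exceeded."""
            X = np.log(R1 / R2)
            results = []
            for c in cutoffs:
                keep = depth <= c                     # exclude observations deeper than cutoff
                coeffs = np.polyfit(X[keep], depth[keep], 1)
                resid = depth[keep] - np.polyval(coeffs, X[keep])
                r2 = 1.0 - np.sum(resid**2) / np.sum((depth[keep] - depth[keep].mean())**2)
                results.append((c, r2))
            return results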

  17. Insights into mountain precipitation and snowpack from a basin-scale wireless-sensor network

    USDA-ARS?s Scientific Manuscript database

    A spatially distributed wireless-sensor network, installed across the 2154 km2 portion of the 5311 km2 American River basin above 1500 m elevation, provided spatial measurements of temperature, relative humidity and snow depth. The network consisted of 10 sensor clusters, each with 10 measurement no...

  18. Quantitative depth resolved microcirculation imaging with optical coherence tomography angiography (Part ΙΙ): Microvascular network imaging.

    PubMed

    Gao, Wanrong

    2017-04-17

    In this work, we review the main phenomena that have been explored in OCT angiography to image the vessels of the microcirculation within living tissues, with emphasis on how the different processing algorithms were derived to circumvent specific limitations. Parameters that can quantitatively describe the depth-resolved microvascular network for possible clinical diagnosis applications are then discussed. Finally, future directions in continuing OCT development are discussed.

  19. A Novel Group-Fused Sparse Partial Correlation Method for Simultaneous Estimation of Functional Networks in Group Comparison Studies.

    PubMed

    Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando

    2018-05-01

    The conventional way to estimate functional networks is primarily based on Pearson correlation along with the classic Fisher Z test. In general, networks are calculated at the individual level and subsequently aggregated to obtain group-level networks. However, such estimated networks are inevitably affected by the large inherent inter-subject variability. A joint graphical model with stability selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits might be compromised when two groups are being compared, given that JGMSS is blinded to the other group when it is applied to estimate networks from a given group. We propose a novel method for robustly estimating networks from two groups by using a group-fused multiple graphical lasso combined with stability selection, named GMGLASS. Specifically, by simultaneously estimating similar within-group networks and the between-group difference, it is possible to address both the inter-subject variability of individual networks estimated with existing methods such as the Fisher Z test and the issue that JGMSS ignores between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of a few key network metrics, and to compare it with JGMSS and the Fisher Z test, all three methods are applied to both simulated and in vivo data. As a method aimed at group comparison studies, our study involves two groups for each case, i.e., normal control and patient groups; for the in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.
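
    For orientation, a stand-in sketch that estimates a sparse partial-correlation network per group with scikit-learn's GraphicalLassoCV and takes a naive between-group difference. GMGLASS instead estimates the two groups jointly with a fused penalty and adds stability selection, which this sketch does not implement; the data here are placeholders.

        import numpy as np
        from sklearn.covariance import GraphicalLassoCV

        rng = np.random.default_rng(1)
        controls = rng.normal(size=(100, 10))   # placeholder: 100 samples, 10 regions
        patients = rng.normal(size=(100, 10))

        networks = {}
        for name, data in [("control", controls), ("patient", patients)]:
            model = GraphicalLassoCV().fit(data)
            networks[name] = model.precision_   # sparse inverse covariance (partial correlations)

        # Naive group contrast; the fused estimator would regularize this difference directly.
        diff = networks["patient"] - networks["control"]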

  20. Using convolutional neural networks to estimate time-of-flight from PET detector waveforms

    NASA Astrophysics Data System (ADS)

    Berg, Eric; Cherry, Simon R.

    2018-01-01

    Although there have been impressive strides in detector development for time-of-flight positron emission tomography, most detectors still make use of simple signal processing methods to extract the time-of-flight information from the detector signals. In most cases, the timing pick-off for each waveform is computed using leading edge discrimination or constant fraction discrimination, as these were historically easily implemented with analog pulse processing electronics. However, now with the availability of fast waveform digitizers, there is opportunity to make use of more of the timing information contained in the coincident detector waveforms with advanced signal processing techniques. Here we describe the application of deep convolutional neural networks (CNNs), a type of machine learning, to estimate time-of-flight directly from the pair of digitized detector waveforms for a coincident event. One of the key features of this approach is the simplicity in obtaining ground-truth-labeled data needed to train the CNN: the true time-of-flight is determined from the difference in path length between the positron emission and each of the coincident detectors, which can be easily controlled experimentally. The experimental setup used here made use of two photomultiplier tube-based scintillation detectors, and a point source, stepped in 5 mm increments over a 15 cm range between the two detectors. The detector waveforms were digitized at 10 GS/s using a bench-top oscilloscope. The results shown here demonstrate that CNN-based time-of-flight estimation improves timing resolution by 20% compared to leading edge discrimination (231 ps versus 185 ps), and 23% compared to constant fraction discrimination (242 ps versus 185 ps). By comparing several different CNN architectures, we also showed that CNN depth (number of convolutional and fully connected layers) had the largest impact on timing resolution, while the exact network parameters, such as convolutional filter size and number of feature maps, had only a minor influence.
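
    A minimal PyTorch sketch of the idea: a small 1-D CNN takes the coincident pair of digitized waveforms as two input channels and regresses a single time-of-flight value. The layer sizes and sample count are illustrative, not the architectures compared in the paper.

        import torch
        import torch.nn as nn

        class ToFNet(nn.Module):
            """Regress time-of-flight from a pair of digitized detector waveforms."""
            def __init__(self, n_samples=256):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(2),
                )
                self.head = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(32 * (n_samples // 4), 64), nn.ReLU(),
                    nn.Linear(64, 1),   # estimated time-of-flight
                )

            def forward(self, x):       # x: (batch, 2, n_samples)
                return self.head(self.features(x))

        # Training labels come cheaply from geometry: the true time-of-flight is the
        # source-to-detector path-length difference divided by the speed of light.
        model = ToFNet()
        waveforms = torch.randn(8, 2, 256)   # placeholder digitized pulse pairs
        tof = model(waveforms)               # shape (8, 1)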

  1. Aerosol single-scattering albedo over the global oceans: Comparing PARASOL retrievals with AERONET, OMI, and AeroCom models estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lacagnina, Carlo; Hasekamp, Otto P.; Bian, Huisheng

    2015-09-27

    The aerosol Single Scattering Albedo (SSA) over the global oceans is evaluated based on polarimetric measurements by the PARASOL satellite. The retrieved values for SSA and Aerosol Optical Depth (AOD) agree well with the ground-based measurements of the AErosol RObotic NETwork (AERONET). The global coverage provided by the PARASOL observations represents a unique opportunity to evaluate SSA and AOD simulated by atmospheric transport model runs, as performed in the AeroCom framework. The SSA estimate provided by the AeroCom models is generally higher than the SSA retrieved from both PARASOL and AERONET. On the other hand, the mean simulated AOD is about right or slightly underestimated compared with observations. An overestimate of the SSA by the models would suggest that they simulate an overly strong aerosol radiative cooling at the top of the atmosphere (TOA) and underestimate it at the surface. This implies that aerosols have a potentially stronger impact within the atmosphere than currently simulated.

  2. Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.

    PubMed

    Donné, Simon; Goossens, Bart; Philips, Wilfried

    2017-08-23

    Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations of each of the snapshots to be known: the disparity of an object between images is related both to the distance of the camera to the object and to the distance between the camera positions for the two images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, for the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach yields a valid alternative to sparse techniques, while still executing in a reasonable time on a graphics card due to its highly parallelizable nature.
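
    The disparity relation the abstract appeals to is the standard pinhole-stereo one; in the sketch below (my notation, not the paper's), disparity scales with the camera baseline and inversely with scene depth, which is why unknown camera spacing corrupts depth estimates.

        def disparity_px(focal_px: float, baseline_m: float, depth_m: float) -> float:
            """Pinhole model: pixel disparity of a point at depth_m between two
            views whose optical centers are baseline_m apart."""
            return focal_px * baseline_m / depth_m

        # Doubling the camera spacing doubles the disparity of the same point:
        d1 = disparity_px(1000.0, 0.10, 5.0)   # 20 px
        d2 = disparity_px(1000.0, 0.20, 5.0)   # 40 px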

  3. A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination.

    PubMed

    Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A

    2018-02-08

    Existing remote radioactive contamination depth estimation methods for buried radioactive wastes are either limited to less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. This severely limits their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method that is based on an approximate three-dimensional linear attenuation model that exploits the benefits of using multiple measurements obtained from the surface of the material in which the contamination is buried using a radiation detector. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, results from experiments show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations in the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods.
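
    A simplified stand-in for the multiple-measurement idea (not the paper's full three-dimensional linear attenuation model): with surface count rates modeled as attenuated, inverse-square-spread emission from a buried point source, the burial depth can be fitted from detector readings at several lateral offsets. The attenuation coefficient and all values are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        MU = 0.12  # assumed linear attenuation coefficient of sand at 662 keV (1/cm)

        def count_rate(x, amplitude, depth):
            """Surface detector at lateral offset x (cm); point source at `depth` (cm)."""
            r = np.sqrt(x**2 + depth**2)                 # source-to-detector distance
            return amplitude * np.exp(-MU * r) / r**2    # attenuation + inverse-square spreading

        offsets = np.linspace(-20.0, 20.0, 9)            # measurement positions on the surface
        rng = np.random.default_rng(2)
        truth = count_rate(offsets, 5e4, 12.0)           # synthetic source buried at 12 cm
        measured = truth + rng.normal(scale=0.02 * truth.max(), size=truth.size)

        popt, _ = curve_fit(count_rate, offsets, measured, p0=(1e4, 5.0))
        print(f"estimated burial depth: {popt[1]:.1f} cm")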

  4. A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination

    PubMed Central

    2018-01-01

    Existing remote radioactive contamination depth estimation methods for buried radioactive wastes are either limited to less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. This severely limits their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method that is based on an approximate three-dimensional linear attenuation model that exploits the benefits of using multiple measurements obtained from the surface of the material in which the contamination is buried using a radiation detector. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, results from experiments show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations in the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods. PMID:29419759

  5. Systematic observations of long-range transport events and climatological backscatter profiles with the DWD ceilometer network

    NASA Astrophysics Data System (ADS)

    Mattis, Ina; Müller, Gerhard; Wagner, Frank; Hervo, Maxime

    2015-04-01

    The German Meteorological Service (DWD) operates a network of about 60 CHM15K-Nimbus ceilometers for cloud base height observations. These very powerful ceilometers also allow for the detection and characterization of aerosol layers. Raw data from all network ceilometers are transferred online to DWD's data analysis center at the Hohenpeißenberg Meteorological Observatory. There, the occurrence of aerosol layers from long-range transport events in the free troposphere is systematically monitored on a daily basis for each station. If possible, the origin of the aerosol layers is determined manually from an analysis of the meteorological situation and model output. We use backward trajectories as well as the output of the MACC and DREAM models to decide whether an observed layer originated in the Sahara region, from forest fires in North America, or from another, unknown source. Further, the magnitude of the observed layers is qualitatively estimated, taking into account the geometrical layer depth, signal intensity, model output, and nearby sun photometer or lidar observations (where available). All observed layers are attributed to one of the categories 'faint', 'weak', 'medium', 'strong', or 'extreme'. We started this kind of analysis in August 2013 and plan to continue this systematic documentation of long-range transport events of aerosol layers to Germany on a long-term basis in the framework of our GAW activities. Most of the observed aerosol layers were advected from the Sahara region to Germany. In the 15 months between August 2013 and November 2014 we observed on average 46 days with Sahara dust layers per station, but only 16 days with aerosol layers from forest fires. The occurrence of Sahara dust layers varies with latitude. We observed only 28 dusty days in the north, close to the coasts of the North Sea and Baltic Sea. In contrast, in southern Germany, in the Bavarian Pre-Alps and the Black Forest mountains, we observed up to 59 days with dust. On about 6 days per station, the optical depth of the dust particles was estimated to be larger than 0.4; those events are classified as 'strong'. 'Faint', 'weak', and 'medium' events were detected on 13, 15, and 12 days per station, respectively. Almost all of the forest fire events were classified as 'faint' or 'weak', with optical depths below 0.15. Besides these qualitative investigations of transport events, we have started to obtain calibration constants for all individual ceilometers in our network within the framework of the European projects E-PROFILE and TOPROF. We are currently producing a data set of 1-hour-mean particle backscatter profiles at 1064 nm for all ceilometer stations in Germany for the period between summer 2013 and winter 2014. We will present an overview of the methodologies used for the analysis of long-range transport events and for the calibration procedures. More detailed results of the event analysis, e.g., on seasonal behaviour, will be presented as well. Further, we will show results of a first statistical analysis of our 18-month data set of backscatter profiles over Germany.

  6. Prediction of Weld Penetration in FCAW of HSLA steel using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Asl, Y. Dadgar; Mostafa, N. B.; Panahizadeh R., V.; Seyedkashi, S. M. H.

    2011-01-01

    Flux-cored arc welding (FCAW) is a semiautomatic or automatic arc welding process that requires a continuously fed consumable tubular electrode containing a flux. The main FCAW process parameters affecting the depth of penetration are welding current, arc voltage, nozzle-to-work distance, torch angle and welding speed. Shallow depth of penetration may contribute to failure of a welded structure, since penetration determines the stress-carrying capacity of a welded joint. To avoid such occurrences, the welding process parameters influencing weld penetration must be properly selected to obtain an acceptable weld penetration and hence a high-quality joint. Artificial neural networks (ANN), also called neural networks (NN), are computational models used to express complex non-linear relationships between input and output data. In this paper, the ANN method is used to predict the effects of welding current, arc voltage, nozzle-to-work distance, torch angle and welding speed on weld penetration depth in gas-shielded FCAW of a grade of high-strength low-alloy steel. 32 experimental runs were carried out using the bead-on-plate welding technique. Weld penetrations were measured, and on the basis of these 32 sets of experimental data a feed-forward back-propagation neural network was created. 28 sets of the experiments were used as the training data and the remaining 4 sets were used for the testing phase of the network. The ANN has one hidden layer with eight neurons and was trained for 840 iterations. The comparison between the experimental results and the ANN results showed that the trained network could predict the effects of the FCAW process parameters on weld penetration adequately.
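
    A compact sketch of the paper's setup using scikit-learn in place of the original tooling: five process parameters in, penetration depth out, with the 28/4 train/test split and an eight-neuron hidden layer. The data, solver, and iteration limit are assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)
        # Placeholder parameters: current (A), voltage (V), nozzle-to-work
        # distance (mm), torch angle (deg), welding speed (cm/min).
        X = rng.uniform([150, 20, 10, 60, 20], [300, 32, 25, 90, 60], size=(32, 5))
        penetration = rng.uniform(1.0, 6.0, size=32)   # placeholder measured depths (mm)

        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        model.fit(X[:28], penetration[:28])            # 28 training runs
        print(model.predict(X[28:]))                   # predictions for the 4 test runs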

  7. Evaluating the hydraulic and transport properties of peat soil using pore network modeling and X-ray micro computed tomography

    NASA Astrophysics Data System (ADS)

    Gharedaghloo, Behrad; Price, Jonathan S.; Rezanezhad, Fereidoun; Quinton, William L.

    2018-06-01

    Micro-scale properties of peat pore space and their influence on hydraulic and transport properties of peat soils have been given little attention so far. Characterizing the variation of these properties in a peat profile can increase our knowledge on the processes controlling contaminant transport through peatlands. As opposed to the common macro-scale (or bulk) representation of groundwater flow and transport processes, a pore network model (PNM) simulates flow and transport processes within individual pores. Here, a pore network modeling code capable of simulating advective and diffusive transport processes through a 3D unstructured pore network was developed; its predictive performance was evaluated by comparing its results to empirical values and to the results of computational fluid dynamics (CFD) simulations. This is the first time that peat pore networks have been extracted from X-ray micro-computed tomography (μCT) images of peat deposits and peat pore characteristics evaluated in a 3D approach. Water flow and solute transport were modeled in the unstructured pore networks mapped directly from μCT images. The modeling results were processed to determine the bulk properties of peat deposits. Results portray the commonly observed decrease in hydraulic conductivity with depth, which was attributed to the reduction of pore radius and increase in pore tortuosity. The increase in pore tortuosity with depth was associated with more decomposed peat soil and decreasing pore coordination number with depth, which extended the flow path of fluid particles. Results also revealed that hydraulic conductivity is isotropic locally, but becomes anisotropic after upscaling to core-scale; this suggests the anisotropy of peat hydraulic conductivity observed in core-scale and field-scale is due to the strong heterogeneity in the vertical dimension that is imposed by the layered structure of peat soils. Transport simulations revealed that for a given solute, the effective diffusion coefficient decreases with depth due to the corresponding increase of diffusional tortuosity. Longitudinal dispersivity of peat also was computed by analyzing advective-dominant transport simulations that showed peat dispersivity is similar to the empirical values reported in the same peat soil; it is not sensitive to soil depth and does not vary much along the soil profile.

  8. An Application of Semi-parametric Estimator with Weighted Matrix of Data Depth in Variance Component Estimation

    NASA Astrophysics Data System (ADS)

    Pan, X. G.; Wang, J. Q.; Zhou, H. Y.

    2013-05-01

    A variance component estimation (VCE) method based on a semi-parametric estimator with a weighted matrix of data depth is proposed, because coupled-system model errors and gross errors exist in the multi-source heterogeneous measurement data of combined space- and ground-based TT&C (Telemetry, Tracking and Command). The uncertain model error is estimated with the semi-parametric estimator model, and outliers are restrained with the weighted matrix of data depth. With the model error and outliers restrained, the VCE can be improved and used to estimate the weight matrix for observation data with uncertain model error or outliers. A simulation experiment was carried out under combined space and ground TT&C conditions. The results show that the new VCE, based on model error compensation, can determine rational weights for the multi-source heterogeneous data and restrain outlier data.

  9. Estimating the Probability of Elevated Nitrate Concentrations in Ground Water in Washington State

    USGS Publications Warehouse

    Frans, Lonna M.

    2008-01-01

    Logistic regression was used to relate anthropogenic (manmade) and natural variables to the occurrence of elevated nitrate concentrations in ground water in Washington State. Variables that were analyzed included well depth, ground-water recharge rate, precipitation, population density, fertilizer application amounts, soil characteristics, hydrogeomorphic regions, and land-use types. Two models were developed: one with and one without the hydrogeomorphic regions variable. The variables in both models that best explained the occurrence of elevated nitrate concentrations (defined as concentrations of nitrite plus nitrate as nitrogen greater than 2 milligrams per liter) were the percentage of agricultural land use in a 4-kilometer radius of a well, population density, precipitation, soil drainage class, and well depth. Based on the relations between these variables and measured nitrate concentrations, logistic regression models were developed to estimate the probability of nitrate concentrations in ground water exceeding 2 milligrams per liter. Maps of Washington State were produced that illustrate these estimated probabilities for wells drilled to 145 feet below land surface (median well depth) and the estimated depth to which wells would need to be drilled to have a 90-percent probability of drawing water with a nitrate concentration less than 2 milligrams per liter. Maps showing the estimated probability of elevated nitrate concentrations indicated that the agricultural regions are most at risk followed by urban areas. The estimated depths to which wells would need to be drilled to have a 90-percent probability of obtaining water with nitrate concentrations less than 2 milligrams per liter exceeded 1,000 feet in the agricultural regions; whereas, wells in urban areas generally would need to be drilled to depths in excess of 400 feet.
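
    The probability mapping described above rests on a standard logistic regression; a minimal sketch of the modeling step follows (the variable names and data are placeholders, not the study's).

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        # Placeholder predictors per well: percent agricultural land within 4 km,
        # population density, precipitation, soil drainage class, well depth.
        X = rng.normal(size=(500, 5))
        elevated = (rng.random(500) < 0.3).astype(int)  # 1 if nitrate > 2 mg/L (placeholder)

        model = LogisticRegression().fit(X, elevated)
        # Estimated probability that a well with given attributes exceeds 2 mg/L:
        p = model.predict_proba(X[:1])[0, 1]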

  10. Antarctic Sea Ice Thickness and Snow-to-Ice Conversion from Atmospheric Reanalysis and Passive Microwave Snow Depth

    NASA Technical Reports Server (NTRS)

    Markus, Thorsten; Maksym, Ted

    2007-01-01

    Passive microwave snow depth, ice concentration, and ice motion estimates are combined with snowfall from the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis (ERA-40) from 1979-2001 to estimate the prevalence of snow-to-ice conversion (snow-ice formation) on level sea ice in the Antarctic for April-October. Snow ice is ubiquitous in all regions throughout the growth season. Calculated snow-ice thicknesses fall within the range of estimates from ice core analysis for most regions. However, uncertainties in both this analysis and in situ data limit the usefulness of snow depth and snow-ice production to evaluate the accuracy of ERA-40 snowfall. The East Antarctic is an exception, where calculated snow-ice production exceeds observed ice thickness over wide areas, suggesting that ERA-40 precipitation is too high there. Snow-ice thickness variability is strongly controlled not just by snow accumulation rates, but also by ice divergence. Surprisingly, snow-ice production is largely independent of snow depth, indicating that the latter may be a poor indicator of total snow accumulation. Using the presence of snow-ice formation as a proxy indicator for near-zero freeboard, we examine the possibility of estimating level ice thickness from satellite snow depths. A best estimate for the mean level ice thickness in September is 53 cm, comparing well with 51 cm from ship-based observations. The error is estimated to be 10-20 cm, which is similar to the observed interannual and regional variability. Nevertheless, this is comparable to expected errors for ice thickness determined by satellite altimeters. Improvement in satellite snow depth retrievals would benefit both of these methods.

  11. Crustal Structure Beneath Taiwan Using Frequency-band Inversion of Receiver Function Waveforms

    NASA Astrophysics Data System (ADS)

    Tomfohrde, D. A.; Nowack, R. L.

    Receiver function analysis is used to determine local crustal structure beneath Taiwan. We performed preliminary data processing and polarization analysis for the selection of stations and events and to increase overall data quality. Receiver function analysis was then applied to data from the Taiwan Seismic Network to obtain radial and transverse receiver functions. Due to the limited azimuthal coverage, only the radial receiver functions are analyzed in terms of horizontally layered crustal structure for each station. In order to improve convergence of the receiver function inversion, frequency-band inversion (FBI) is implemented, in which an iterative inversion procedure with sequentially higher low-pass corner frequencies is used to stabilize the waveform inversion. Frequency-band inversion is applied to receiver functions at six stations of the Taiwan Seismic Network. Initial 20-layer crustal models, constructed from prior tomographic results, are inverted for each station. The resulting 20-layer models are then simplified to 4- to 5-layer models and input into an alternating depth and velocity frequency-band inversion. For the six stations investigated, the resulting simplified models provide an average crustal thickness (Moho depth) estimate of 38 km surrounding the Central Range of Taiwan. The individual station estimates also compare well with recent tomographic models and with the refraction results of Rau and Wu (1995) and Ma and Song (1997).

  12. Materials characterization on efforts for ablative materials

    NASA Technical Reports Server (NTRS)

    Tytula, Thomas P.; Schad, Kristin C.; Swann, Myles H.

    1992-01-01

    Experimental efforts to develop a new procedure to measure char depth in carbon phenolic nozzle material are described. Using a Shor Type D Durometer, hardness profiles were mapped across post fired sample blocks and specimens from a fired rocket nozzle. Linear regression was used to estimate the char depth. Results are compared to those obtained from computed tomography in a comparative experiment. There was no significant difference in the depth estimates obtained by the two methods.

  13. Neurometric assessment of intraoperative anesthetic

    DOEpatents

    Kangas, Lars J.; Keller, Paul E.

    1998-01-01

    The present invention is a method and apparatus for collecting EEG data, reducing the EEG data into coefficients, and correlating those coefficients with a depth of unconsciousness or anesthetic depth; a bounded first derivative of anesthetic depth is obtained to indicate trends. The present invention provides an artificial neural network-based method capable of continuously analyzing EEG data to discriminate between awake and anesthetized states in an individual and of continuously monitoring anesthetic depth trends in real time. The present invention enables an anesthesiologist to respond immediately to changes in the anesthetic depth of the patient during surgery and to administer the correct amount of anesthetic.

  14. Comparisons of Derived Metrics from Computed Tomography (CT) Scanned Images of Fluvial Sediment from Gravel-Bed Flume Experiments

    NASA Astrophysics Data System (ADS)

    Voepel, Hal; Ahmed, Sharif; Hodge, Rebecca; Leyland, Julian; Sear, David

    2016-04-01

    Uncertainty in bedload estimates for gravel bed rivers is largely driven by our inability to characterize arrangement, orientation and resultant forces of fluvial sediment in river beds. Water working of grains leads to structural differences between areas of the bed through particle sorting, packing, imbrication, mortaring and degree of bed armoring. In this study, non-destructive, micro-focus X-ray computed tomography (CT) imaging in 3D is used to visualize, quantify and assess the internal geometry of sections of a flume bed that have been extracted keeping their fabric intact. Flume experiments were conducted at 1:1 scaling of our prototype river. From the volume, center of mass, points of contact, and protrusion of individual grains derived from 3D scan data we estimate 3D static force properties at the grain-scale such as pivoting angles, buoyancy and gravity forces, and local grain exposure. Here metrics are derived for images from two flume experiments: one with a bed of coarse grains (>4mm) and the other where sand and clay were incorporated into the coarse flume bed. In addition to deriving force networks, comparison of metrics such as critical shear stress, pivot angles, grain distributions, principal axis orientation, and pore space over depth are made. This is the first time bed stability has been studied in 3D using CT scanned images of sediment from the bed surface to depths well into the subsurface. The derived metrics, inter-granular relationships and characterization of bed structures will lead to improved bedload estimates with reduced uncertainty, as well as improved understanding of relationships between sediment structure, grain size distribution and channel topography.

  15. Improving Satellite Quantitative Precipitation Estimation Using GOES-Retrieved Cloud Optical Depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stenz, Ronald; Dong, Xiquan; Xi, Baike

    To address significant gaps in ground-based radar coverage and rain gauge networks in the U.S., geostationary satellite quantitative precipitation estimates (QPEs) such as the Self-Calibrating Multivariate Precipitation Retrievals (SCaMPR) can be used to fill in both the spatial and temporal gaps of ground-based measurements. Additionally, with the launch of GOES-R, the temporal resolution of satellite QPEs may be comparable to that of Weather Service Radar-1988 Doppler (WSR-88D) volume scans, as GOES images will be available every five minutes. However, while satellite QPEs have strengths in spatial coverage and temporal resolution, they face limitations, particularly during convective events. Deep Convective Systems (DCSs) have large cloud shields with similar brightness temperatures (BTs) over nearly the entire system, but widely varying precipitation rates beneath these clouds. Geostationary satellite QPEs relying on the indirect relationship between BTs and precipitation rates often suffer from large errors because anvil regions (little/no precipitation) cannot be distinguished from rain cores (heavy precipitation) using only BTs. However, a combination of BTs and optical depth (τ) has been found to reduce overestimates of precipitation in anvil regions (Stenz et al. 2014). A new rain mask algorithm incorporating both τ and BTs has been developed, and its application to the existing SCaMPR algorithm was evaluated. The performance of the modified SCaMPR was evaluated using traditional skill scores and a more detailed analysis of performance in individual DCS components by utilizing the Feng et al. (2012) classification algorithm. SCaMPR estimates with the new rain mask applied benefited from significantly reduced overestimates of precipitation in anvil regions and overall improvements in skill scores.
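
    A minimal sketch of a rain mask that combines brightness temperature with retrieved optical depth so that cold but optically thin anvil is excluded; the threshold values are purely illustrative, not the SCaMPR settings.

        import numpy as np

        def rain_mask(bt_k, tau, bt_max=235.0, tau_min=20.0):
            """Flag likely rain cores: cold cloud tops AND optically thick cloud.
            Thin cold anvil passes the BT test but fails the tau test."""
            return (bt_k < bt_max) & (tau > tau_min)

        bt = np.array([[210.0, 212.0], [215.0, 260.0]])   # placeholder BTs (K)
        tau = np.array([[45.0, 8.0], [30.0, 2.0]])        # placeholder optical depths
        print(rain_mask(bt, tau))   # the 212 K / tau = 8 anvil pixel is excluded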

  16. Analysis of the similar epicenter earthquakes on 22 January 2013 and 01 June 2013, Central Gulf of Suez, Egypt

    NASA Astrophysics Data System (ADS)

    Toni, Mostafa; Barth, Andreas; Ali, Sherif M.; Wenzel, Friedemann

    2016-09-01

    On 22 January 2013 an earthquake with local magnitude ML 4.1 occurred in the central part of the Gulf of Suez. Six months later, on 1 June 2013, another earthquake with local magnitude ML 5.1 took place at the same epicenter but a different depth. These two perceptible events were recorded and localized by the Egyptian National Seismological Network (ENSN) and additional networks in the region. The purpose of this study is to determine focal mechanisms and source parameters of both earthquakes to analyze their tectonic relation. We determine the focal mechanisms by applying moment tensor inversion and first motion analysis of P- and S-waves. Both sources reveal oblique focal mechanisms with normal faulting and strike-slip components on differently oriented faults. The source mechanism of the larger event on 1 June, in combination with the location of the aftershock sequence, indicates left-lateral slip on a N-S striking fault structure at 21 km depth, in conformity with the NE-SW extensional Shmin (orientation of minimum horizontal compressional stress) and the local fault pattern. On the other hand, the smaller earthquake on 22 January, with a shallower hypocenter at 16 km depth, seems to have happened on a NE-SW striking fault plane sub-parallel to Shmin. Thus, energy release on a transfer fault connecting the dominant rift-parallel structures might have resulted in a stress transfer that triggered the later ML 5.1 earthquake. Following Brune's model and using displacement spectra, we calculate the dynamic source parameters for the two events. The estimated source parameters for the 22 January 2013 and 1 June 2013 earthquakes are fault length (470 and 830 m), stress drop (1.40 and 2.13 MPa), and seismic moment (5.47E+21 and 6.30E+22 dyn cm), corresponding to moment magnitudes of MW 3.8 and 4.6, respectively.
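
    The quoted moment magnitudes can be checked against the standard Hanks-Kanamori relation for moments given in dyn·cm; small differences from the abstract's values can arise from rounding and the exact constant used.

        import math

        def moment_magnitude(m0_dyn_cm: float) -> float:
            """Hanks-Kanamori: Mw = (2/3) * log10(M0) - 10.7, with M0 in dyn*cm."""
            return (2.0 / 3.0) * math.log10(m0_dyn_cm) - 10.7

        for m0 in (5.47e21, 6.30e22):
            print(f"M0 = {m0:.2e} dyn*cm -> Mw = {moment_magnitude(m0):.1f}")
        # Gives Mw 3.8 and about 4.5, close to the quoted MW 3.8 and 4.6.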

  17. Groundwater levels in the Kabul Basin, Afghanistan, 2004-2013

    USGS Publications Warehouse

    Taher, Mohammad R.; Chornack, Michael P.; Mack, Thomas J.

    2014-01-01

    The Afghanistan Geological Survey, with technical assistance from the U.S. Geological Survey, established a network of wells to measure and monitor groundwater levels to assess seasonal, areal, and potentially climatic variations in groundwater characteristics in the Kabul Basin, Afghanistan, the most populous region in the country. Groundwater levels were monitored in 71 wells in the Kabul Basin, Afghanistan, starting as early as July 2004 and continuing to the present (2013). The monitoring network is made up exclusively of existing production wells; therefore, both static and dynamic water levels were recorded. Seventy wells are in unconsolidated sediments, and one well is in bedrock. Water levels were measured periodically, generally monthly, using electric tape water-level meters. Water levels in well 64 on the grounds of the Afghanistan Geological Survey building were measured more frequently. This report provides a 10-year compilation of groundwater levels in the Kabul Basin prepared in cooperation with the Afghanistan Geological Survey. Depths to water below land surface range from a minimum of 1.47 meters (m) in the Shomali subbasin to a maximum of 73.34 m in the Central Kabul subbasin. The Logar subbasin had the smallest range in depth to water below land surface (1.5 to 12.4 m), whereas the Central Kabul subbasin had the largest range (2.64 to 73.34 m). Seasonal water-level fluctuations can be estimated from the hydrographs in this report for wells that have depth-to-water measurements collected under static conditions. The seasonal water-level fluctuations range from less than 1 m to a little more than 7 m during the monitoring period. In general, the hydrographs for the Deh Sabz, Logar, Paghman and Upper Kabul, and Shomali subbasins show relatively little change in the water-level trend during the period of record, whereas hydrographs for the Central Kabul subbasin show water level decreases of several meters to about 25 m.

  18. Acoustic measurement of sediment dynamics in the coastal zones using wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Sudhakaran, A., II; Paramasivam, A.; Seshachalam, S.; A, C.

    2014-12-01

    Analyzing the impact of constructive (low-energy) and destructive (high-energy) waves in the ocean is significant because these waves deform the geometry of the seashore. Constructive waves deposit sediment, which widens the beach, whereas destructive waves cause erosion, which narrows it. Validating historic sediment transport and predicting the direction of shoreline movement are essential to prevent unrecoverable damage and, where feasible, to identify the factors that influence sediment transport so that precautionary measures can be taken. The objective of this study is to propose a more reliable and energy-efficient information and communication system to model coastal sediment dynamics. The factors influencing sediment drift in a particular region are identified. The dependence of the acoustic spreading pattern on source depth and frequency in the presence of sediments is modeled, the sensitivity of the model parameters to source depth and frequency is determined, and the fundamental physical reasons for these sediment interaction effects are given. Shallow-to-deep-water models of internal and external ocean waves are obtained for acoustic data assimilation (ADA). Signal processing algorithms are applied to the observed data to form a full-field acoustic propagation model and to construct the sound speed profile (SSP). Inversions of the data under uncertainties at various depths are compared, and the impact of sediment drift on the acoustic data is identified. An energy-efficient multipath routing scheme for wireless sensor networks (WSNs) is deployed for well-organized data communication; the WSN is designed for long lifetime, low power consumption, and robustness to threats and attacks. The data obtained from this system for modeling ocean sediment dynamics are evaluated against remote sensing data, and the reasons for deviations and uncertainties are examined. The probability of change and the impact of sediment drift on the ocean dynamic model over many years are estimated.

  19. On Wiener polarity index of bicyclic networks.

    PubMed

    Ma, Jing; Shi, Yongtang; Wang, Zhen; Yue, Jun

    2016-01-11

    Complex networks are ubiquitous in the biological, physical and social sciences. Network robustness research aims at finding measures to quantify network robustness. A number of Wiener-type indices have recently been incorporated as distance-based descriptors of complex networks. Wiener-type indices are known to depend both on the network's number of nodes and on its topology. The Wiener polarity index is also related to the clustering coefficient of networks. In this paper, based on some graph transformations, we determine the sharp upper bound of the Wiener polarity index among all bicyclic networks. This bound helps to understand the underlying quantitative graph measures in depth.
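
    For reference, the Wiener polarity index counts unordered vertex pairs at shortest-path distance exactly 3; a direct computation with networkx on a small bicyclic graph (the graph is illustrative, not the paper's extremal construction):

        import networkx as nx

        def wiener_polarity(G):
            """Number of unordered vertex pairs {u, v} with d(u, v) = 3."""
            dist = dict(nx.all_pairs_shortest_path_length(G))
            nodes = list(G)
            return sum(
                1
                for i, u in enumerate(nodes)
                for v in nodes[i + 1:]
                if dist[u].get(v) == 3
            )

        # Two cycles sharing the edge (3, 4): 6 nodes, 7 edges, cyclomatic number 2.
        G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (3, 5), (5, 6), (6, 4)])
        print(wiener_polarity(G))   # 2: the pairs {1, 5} and {2, 6}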

  20. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    PubMed

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable at the noise-constrained estimate. Because the noise-constrained estimate has a robust performance against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model structure, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed algorithm can produce good performance with fast computation and noise reduction.
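
    To make the second stage concrete, here is a plain AR(2) state-space Kalman filter of the kind the recovered signal would pass through; in the paper the AR coefficients come from the recurrent network's noise-constrained estimate, whereas here they are simply passed in as assumptions.

        import numpy as np

        def kalman_ar2_denoise(y, a1, a2, q, r):
            """Filter noisy samples y[t] = s[t] + v[t], with the clean speech
            modeled as AR(2): s[t] = a1*s[t-1] + a2*s[t-2] + w[t]."""
            F = np.array([[a1, a2], [1.0, 0.0]])   # state transition
            H = np.array([[1.0, 0.0]])             # observe the current sample
            Q = np.array([[q, 0.0], [0.0, 0.0]])   # process noise on first state only
            x, P = np.zeros((2, 1)), np.eye(2)
            out = np.empty_like(y, dtype=float)
            for t, yt in enumerate(y):
                x = F @ x                          # predict
                P = F @ P @ F.T + Q
                S = H @ P @ H.T + r                # innovation variance
                K = P @ H.T / S                    # Kalman gain
                x = x + K * (yt - (H @ x)[0, 0])   # update with the new sample
                P = (np.eye(2) - K @ H) @ P
                out[t] = x[0, 0]
            return out

        # Example with assumed, stable AR coefficients:
        noisy = np.random.default_rng(5).normal(size=1000)
        clean = kalman_ar2_denoise(noisy, a1=1.6, a2=-0.68, q=0.01, r=0.1)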

  1. Assessment of the effect of air pollution controls on trends in shortwave radiation over the United States from 1995 through 2010 from multiple observation networks

    EPA Science Inventory

    Long-term data sets of all-sky and clear-sky downwelling shortwave (SW) radiation, cloud cover fraction, and aerosol optical depth (AOD) were analyzed together with surface concentrations from several networks (e.g., Surface Radiation Budget Network (SURFRAD), Clean Air Status an...

  2. The Impact of Network Relationships, Prison Experiences, and Internal Transformation on Women's Success after Prison Release

    ERIC Educational Resources Information Center

    Bui, Hoan N.; Morash, Merry

    2010-01-01

    Using data obtained from retrospective, in-depth interviews with 20 successful female parolees, the present study examines the effects of women offenders' relationships with people in their social networks (i.e., their network relationships) before, during, and after incarceration on their postrelease desistence from crime. Because women's social…

  3. Evaluation of bursal depth as an indicator of age class of harlequin ducks

    USGS Publications Warehouse

    Mather, D.D.; Esler, Daniel N.

    1999-01-01

    We contrasted the estimated age class of recaptured Harlequin Ducks (Histrionicus histrionicus) (n = 255) based on bursal depth with expected age class based on bursal depth at first capture and time since first capture. Although neither estimated nor expected ages can be assumed to be correct, rates of discrepancies between the two for within-year recaptures indicate sampling error, while between-year recaptures test assumptions about rates of bursal involution. Within-year, between-year, and overall discrepancy rates were 10%, 24%, and 18%, respectively. Most (86%) between-year discrepancies occurred for birds expected to be after-third-year (ATY) but estimated to be third-year (TY). Of these ATY-TY discrepancies, 22 of 25 (88%) birds had bursal depths of 2 or 3 mm. Further, five of six between-year recaptures that were known to be ATY but estimated to be TY had 2 mm bursas. Reclassifying birds with 2 or 3 mm bursas as ATY resulted in reduction in between-year (24% to 10%) and overall (18% to 11%) discrepancy rates. We conclude that age determination of Harlequin Ducks based on bursal depth, particularly using our modified criteria, is a relatively consistent and reliable technique.

  4. How does the host population's network structure affect the estimation accuracy of epidemic parameters?

    NASA Astrophysics Data System (ADS)

    Yashima, Kenta; Ito, Kana; Nakamura, Kazuyuki

    2013-03-01

    When an infectious disease prevails throughout a population, epidemic parameters such as the basic reproduction ratio and the initial point of infection are estimated from time series data of the infected population. However, it is unclear how the structure of the host population affects this estimation accuracy. In other words, for what kind of city is it difficult to estimate the epidemic parameters? To answer this question, epidemic data are simulated by constructing commuting networks with different structures and running the infection process over each network. From the resulting time series data for each network structure, we analyze the estimation accuracy of the epidemic parameters.
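
    A minimal sketch of the simulation half of this design: run a discrete-time SIR process over networks with different structures and compare the resulting epidemic curves, from which parameters would then be estimated. The topologies, rates, and sizes are illustrative.

        import numpy as np
        import networkx as nx

        def simulate_sir(G, beta=0.1, gamma=0.05, steps=100, seed=0):
            """Each infected node infects each susceptible neighbour with
            probability beta per step, then recovers with probability gamma."""
            rng = np.random.default_rng(seed)
            state = {n: "S" for n in G}
            state[next(iter(G))] = "I"             # initial point of infection
            curve = []
            for _ in range(steps):
                infected = [n for n, s in state.items() if s == "I"]
                for n in infected:
                    for nb in G.neighbors(n):
                        if state[nb] == "S" and rng.random() < beta:
                            state[nb] = "I"
                    if rng.random() < gamma:
                        state[n] = "R"
                curve.append(sum(s == "I" for s in state.values()))
            return np.array(curve)

        # Same mean degree, different structure: clustered lattice vs. well-mixed.
        lattice = nx.watts_strogatz_graph(500, 6, 0.0, seed=1)
        random_g = nx.erdos_renyi_graph(500, 6 / 499, seed=1)
        for name, G in [("lattice", lattice), ("random", random_g)]:
            print(name, "peak infected:", simulate_sir(G).max())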

  5. Remote Estimation of River Discharge and Bathymetry: Sensitivity to Turbulent Dissipation and Bottom Friction

    NASA Astrophysics Data System (ADS)

    Simeonov, J.; Holland, K. T.

    2016-12-01

    We investigated the fidelity of a hierarchy of inverse models that estimate river bathymetry and discharge using measurements of surface currents and water surface elevation. Our most comprehensive depth inversion was based on the Shiono and Knight (1991) model that considers the depth-averaged along-channel momentum balance between the downstream pressure gradient due to gravity, the bottom drag and the lateral stresses induced by turbulence. The discharge was determined by minimizing the difference between the predicted and the measured streamwise variation of the total head. The bottom friction coefficient was assumed to be known or determined by alternative means. We also considered simplifications of the comprehensive inversion model that exclude the lateral mixing term from the momentum balance and assessed the effect of neglecting this term on the depth and discharge estimates for idealized in-bank flow in symmetric trapezoidal channels with width/depth ratio of 40 and different side-wall slopes. For these simple gravity-friction models, we used two different bottom friction parameterizations - a constant Darcy-Weisbach local friction and a depth-dependent friction related to the local depth and a constant Manning (roughness) coefficient. Our results indicated that the Manning gravity-friction model provides accurate estimates of the depth and the discharge that are within 1% of the assumed values for channels with side-wall slopes between 1/2 and 1/17. On the other hand, the constant Darcy-Weisbach friction model underpredicted the true depth and discharge by 7% and 9%, respectively, for the channel with side-wall slope of 1/17. These idealized modeling results suggest that a depth-dependent parameterization of the bottom friction is important for accurate inversion of depth and discharge and that the lateral turbulent mixing is not important. We also tested the comprehensive and the simplified inversion models for the Kootenai River near Bonners Ferry (Idaho) using in situ and remote sensing measurements of surface currents and water surface elevation obtained during a 2010 field experiment.
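
    The depth-dependent friction closure favored above is the Manning relation; under that closure (generic notation, not the paper's), the local depth follows directly from a depth-averaged velocity, the water-surface slope, and a roughness coefficient:

        def manning_depth(u: float, slope: float, n: float) -> float:
            """Invert Manning's relation for a wide channel (hydraulic radius ~ depth),
            u = (1/n) * h**(2/3) * slope**(1/2), to recover the local depth h (m)
            from depth-averaged velocity u (m/s) and roughness n (s/m^(1/3))."""
            return (n * u / slope**0.5) ** 1.5

        # Example: 1 m/s depth-averaged current, slope 2e-4, gravel-bed n = 0.03:
        h = manning_depth(1.0, 2e-4, 0.03)   # about 3.1 m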

  6. The maximum economic depth of groundwater abstraction for irrigation

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.

    2017-12-01

    Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it still to be economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where the costs of well drilling and the energy costs of pumping, which are a function of well depth and static head depth, respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000 and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas in the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. The most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of maximum economic depth will be combined with estimates of groundwater depth and storage coefficients to estimate economically attainable groundwater volumes worldwide.
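
    The cost-revenue comparison can be illustrated with a toy version of such an economic model. All cost figures, the head-to-depth ratio, and the planning horizon below are hypothetical placeholders, not the study's parameterization.

```python
import numpy as np

def max_economic_depth(revenue_per_yr, demand_m3_per_yr,
                       drill_cost_per_m=300.0, energy_cost_per_m3_per_m=0.004,
                       head_fraction=0.8, horizon_yr=20,
                       depths=np.arange(10.0, 1000.0, 10.0)):
    """Deepest well depth for which crop revenues still cover drilling
    plus pumping-energy costs over the well lifetime (toy figures)."""
    capex = drill_cost_per_m * depths                     # one-off drilling
    head = head_fraction * depths                         # static lift (m)
    opex = energy_cost_per_m3_per_m * head * demand_m3_per_yr
    profit = horizon_yr * (revenue_per_yr - opex) - capex
    viable = depths[profit > 0]
    return viable.max() if viable.size else np.nan

print(max_economic_depth(revenue_per_yr=60_000, demand_m3_per_yr=150_000))
```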

  7. Estimates of velocity structure and source depth using multiple P waves from aftershocks of the 1987 Elmore Ranch and Superstition Hills, California, earthquakes

    USGS Publications Warehouse

    Mori, J.

    1991-01-01

    Event record sections, which are constructed by plotting seismograms from many closely spaced earthquakes recorded on a few stations, show multiple free-surface reflections (PP, PPP, PPPP) of the P wave in the Imperial Valley. The relative timing of these arrivals is used to estimate the strength of the P-wave velocity gradient within the upper 5 km of the sediment layer. Consistent with previous studies, a velocity model with a value of 1.8 km/sec at the surface increasing linearly to 5.8 km/sec at a depth of 5.5 km fits the data well. The relative amplitudes of the P and PP arrivals are used to estimate the source depth for the aftershock distributions of the Elmore Ranch and Superstition Hills main shocks. Although the depth determination has large uncertainties, both the Elmore Ranch and Superstition Hills aftershock sequences appear to have a similar depth distribution in the range of 4 to 10 km. -Author

  8. Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity.

    PubMed

    Li, Jielin; Hassebrook, Laurence G; Guan, Chun

    2003-01-01

    Temporal frame-to-frame noise in multipattern structured light projection can significantly corrupt depth measurement repeatability. We present a rigorous stochastic analysis of phase-measuring-profilometry temporal noise as a function of the pattern parameters and the reconstruction coefficients. The analysis is used to optimize the two-frequency phase measurement technique. In phase-measuring profilometry, a sequence of phase-shifted sine-wave patterns is projected onto a surface. In two-frequency phase measurement, two sets of pattern sequences are used. The first, low-frequency set establishes a nonambiguous depth estimate, and the second, high-frequency set is unwrapped, based on the low-frequency estimate, to obtain an accurate depth estimate. If the second frequency is too low, then depth error is caused directly by temporal noise in the phase measurement. If the second frequency is too high, temporal noise triggers ambiguous unwrapping, resulting in depth measurement error. We present a solution for finding the second frequency, where intensity noise variance is at its minimum.
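
    The unwrapping step the abstract describes is the standard temporal (two-frequency) phase-unwrapping recursion: scale the unambiguous low-frequency phase up to the high frequency, then snap the wrapped high-frequency phase to the nearest consistent fringe. A minimal sketch, assuming wrapped phases in [0, 2*pi) and a known frequency ratio:

```python
import numpy as np

def two_frequency_unwrap(phi_lo, phi_hi, ratio):
    """Unwrap a wrapped high-frequency phase map using a coarse
    low-frequency phase estimate (temporal phase unwrapping).

    phi_lo, phi_hi : wrapped phases in [0, 2*pi)
    ratio          : f_high / f_low
    """
    # Scale the unambiguous low-frequency phase up to the high frequency.
    coarse = ratio * phi_lo
    # Number of whole 2*pi fringes separating coarse and wrapped phase.
    k = np.round((coarse - phi_hi) / (2.0 * np.pi))
    return phi_hi + 2.0 * np.pi * k
```

    The noise trade-off in the abstract is visible here: if the ratio is too high, noise in phi_lo shifts the rounding in k by a full fringe, producing exactly the ambiguous-unwrapping depth errors described.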

  9. Modeling intracavitary heating of the uterus by means of a balloon catheter

    NASA Astrophysics Data System (ADS)

    Olsrud, Johan; Friberg, Britt; Rioseco, Juan; Ahlgren, Mats; Persson, Bertil R. R.

    1999-01-01

    Balloon thermal endometrial destruction (TED) is a recently developed method to treat heavy menstrual bleeding (menorrhagia). Numerical simulations of this treatment by use of the finite element method were performed. The mechanical deformation and the resulting stress distribution when a balloon catheter is expanded within the uterine cavity were estimated from structural analysis. Thermal analysis was then performed to estimate the depth of tissue coagulation (temperature > 55 °C) in the uterus during TED. The estimated depth of coagulation, after 30 min of heating with an intracavity temperature of 75 °C, was approximately 9 mm when blood flow was disregarded. With uniform normal blood flow, the depth of coagulation decreased to 3 - 4 mm. Simulations with varying intracavity temperatures and blood flow rates showed that both parameters should be of major importance to the depth of coagulation. The influence of blood flow was less when the pressure due to the balloon was also considered (5 - 6 mm coagulation depth with normal blood flow).

  10. Multiplicity of the 660-km discontinuity beneath the Izu-Bonin area

    NASA Astrophysics Data System (ADS)

    Zhou, Yuan-Ze; Yu, Xiang-Wei; Yang, Hui; Zang, Shao-Xian

    2012-05-01

    The relatively simple subducting slab geometry in the Izu-Bonin region provides a valuable opportunity to study the multiplicity of the 660-km discontinuity and the related response of the subducting slab on the discontinuity. Vertical short-period recordings of deep events with simple direct P phases beneath the Izu-Bonin region were retrieved from two seismic networks in the western USA and were used to study the structure of the 660-km discontinuity. After careful selection and pre-processing, 23 events from the networks, forming 32 pairs of event-network records, were processed. Related vespagrams were produced using the N-th root slant stack method for detecting weak down-going SdP phases that were inverted to the related conversion points. From depth histograms and the spatial distribution of the conversion points, there were three clear interfaces at depths of 670, 710 and 730 km. These interfaces were depressed approximately 20-30 km in the northern region. In the southern region, only two layers were identified in the depth histograms, and no obvious layered structure could be observed from the distribution of the conversion points.
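
    A minimal sketch of an N-th root slant stack of the kind used to build such vespagrams, assuming traces already time-aligned to a reference distance; the root n, the wrap-around shifting, and the trial slowness grid are illustrative simplifications.

```python
import numpy as np

def nth_root_slant_stack(traces, dists, dt, slownesses, n=4):
    """N-th root slant stack (vespagram) of aligned traces.

    traces     : (n_traces, n_samples) array
    dists      : relative distances per trace
    dt         : sample interval (s)
    slownesses : trial slowness values (s per distance unit)
    n          : root; n=1 reduces to a plain linear stack
    """
    n_tr, n_samp = traces.shape
    vespagram = np.zeros((len(slownesses), n_samp))
    for j, p in enumerate(slownesses):
        acc = np.zeros(n_samp)
        for i in range(n_tr):
            shift = int(round(p * dists[i] / dt))    # move-out in samples
            shifted = np.roll(traces[i], -shift)     # wrap-around: sketch only
            acc += np.sign(shifted) * np.abs(shifted) ** (1.0 / n)
        acc /= n_tr
        # Restore amplitude scale; incoherent energy is strongly suppressed.
        vespagram[j] = np.sign(acc) * np.abs(acc) ** n
    return vespagram
```

    Raising each trace to the 1/n power before stacking is what lets the method pull weak, coherent SdP conversions out of noise that would swamp a linear stack.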

  11. Estimation of the proteomic cancer co-expression sub networks by using association estimators.

    PubMed

    Erdoğan, Cihat; Kurt, Zeyneb; Diri, Banu

    2017-01-01

    In this study, the association estimators, which have significant influence on gene network inference methods and are used for determining molecular interactions, were examined within the co-expression network inference concept. By using proteomic data from five different cancer types, the hub genes/proteins within the disease-associated gene-gene/protein-protein interaction sub networks were identified. Proteomic data from various cancer types was collected from The Cancer Proteome Atlas (TCPA). Nine correlation- and mutual information (MI)-based association estimators that are commonly used in the literature were compared in this study. As the gold standard to measure the association estimators' performance, a multi-layer data integration platform on gene-disease associations (DisGeNET) and the Molecular Signatures Database (MSigDB) were used. Fisher's exact test was used to evaluate the performance of the association estimators by comparing the created co-expression networks with the disease-associated pathways. It was observed that the MI-based estimators provided more successful results than the Pearson and Spearman correlation approaches, which are used for the estimation of biological networks in the weighted correlation network analysis (WGCNA) package. In correlation-based methods, the best average success rate for the five cancer types was 60%, while in MI-based methods the average success rates were 71% for the James-Stein shrinkage (Shrink) estimator and 64% for the Schurmann-Grassberger (SG) estimator. Moreover, the hub genes and the inferred sub networks are presented for the consideration of researchers and experimentalists.
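
    For reference, the simplest member of the MI estimator family is the histogram (maximum-likelihood) estimator sketched below; the bin count is illustrative, and the shrinkage corrections (e.g. James-Stein) that the study compares are deliberately omitted.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (maximum-likelihood) estimate of mutual information,
    in nats, between two expression profiles x and y."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                        # joint probability table
    px = pxy.sum(axis=1, keepdims=True)     # marginal of x
    py = pxy.sum(axis=0, keepdims=True)     # marginal of y
    nz = pxy > 0                            # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Correlated profiles score higher than independent ones.
rng = np.random.default_rng(0)
a = rng.standard_normal(5000)
print(mutual_information(a, a + 0.5 * rng.standard_normal(5000)))
print(mutual_information(a, rng.standard_normal(5000)))
```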

  12. Reconstruction of financial networks for robust estimation of systemic risk

    NASA Astrophysics Data System (ADS)

    Mastromatteo, Iacopo; Zarinelli, Elia; Marsili, Matteo

    2012-03-01

    In this paper we estimate the propagation of liquidity shocks through interbank markets when the information about the underlying credit network is incomplete. We show that techniques such as maximum entropy currently used to reconstruct credit networks severely underestimate the risk of contagion by assuming a trivial (fully connected) topology, a type of network structure which can be very different from the one empirically observed. We propose an efficient message-passing algorithm to explore the space of possible network structures and show that a correct estimation of the network degree of connectedness leads to more reliable estimations for systemic risk. Such an algorithm is also able to produce maximally fragile structures, providing a practical upper bound for the risk of contagion when the actual network structure is unknown. We test our algorithm on ensembles of synthetic data encoding some features of real financial networks (sparsity and heterogeneity), finding that more accurate estimations of risk can be achieved. Finally we find that this algorithm can be used to control the amount of information that regulators need to require from banks in order to sufficiently constrain the reconstruction of financial networks.

  13. Viscoelastic coupling model of the San Andreas fault along the big bend, southern California

    USGS Publications Warehouse

    Savage, J.C.; Lisowski, M.

    1997-01-01

    The big bend segment of the San Andreas fault is the 300-km-long segment in southern California that strikes about N65°W, roughly 25° counterclockwise from the local tangent to the small circle about the Pacific-North America pole of rotation. The broad distribution of deformation of trilateration networks along this segment implies a locking depth of at least 25 km as interpreted by the conventional model of strain accumulation (continuous slip on the fault below the locking depth at the rate of relative plate motion), whereas the observed seismicity and laboratory data on fault strength suggest that the locking depth should be no greater than 10 to 15 km. The discrepancy is explained by the viscoelastic coupling model which accounts for the viscoelastic response of the lower crust. Thus the broad distribution of deformation observed across the big bend segment can be largely associated with the San Andreas fault itself, not subsidiary faults distributed throughout the region. The Working Group on California Earthquake Probabilities [1995] in using geodetic data to estimate the seismic risk in southern California has assumed that strain accumulated off the San Andreas fault is released by earthquakes located off the San Andreas fault. Thus they count the San Andreas contribution to total seismic moment accumulation more than once, leading to an overestimate of the seismicity for magnitude 6 and greater earthquakes in their Type C zones.

  14. Landscape unit based digital elevation model development for the freshwater wetlands within the Arthur C. Marshall Loxahatchee National Wildlife Refuge, Southeastern Florida

    USGS Publications Warehouse

    Xie, Zhixiao; Liu, Zhongwei; Jones, John W.; Higer, Aaron L.; Telis, Pamela A.

    2011-01-01

    The hydrologic regime is a critical limiting factor in the delicate ecosystem of the greater Everglades freshwater wetlands in south Florida that has been severely altered by management activities in the past several decades. "Getting the water right" is regarded as the key to successful restoration of this unique wetland ecosystem. An essential component to represent and model its hydrologic regime, specifically water depth, is an accurate ground Digital Elevation Model (DEM). The Everglades Depth Estimation Network (EDEN) supplies important hydrologic data, and its products (including a ground DEM) have been well received by scientists and resource managers involved in Everglades restoration. This study improves the EDEN DEMs of the Loxahatchee National Wildlife Refuge, also known as Water Conservation Area 1 (WCA1), by adopting a landscape unit (LU) based interpolation approach. The study first filtered the input elevation data based on newly available vegetation data, and then created a separate geostatistical model (universal kriging) for each LU. The resultant DEMs have encouraging cross-validation and validation results, especially since the validation is based on an independent elevation dataset (derived by subtracting water depth measurements from EDEN water surface elevations). The DEM product of this study will directly benefit hydrologic and ecological studies as well as restoration efforts. The study will also be valuable for a broad range of wetland studies.

  15. Quantifying organic aerosol single scattering albedo over tropical biomass burning regions using ground-based observation

    NASA Astrophysics Data System (ADS)

    Chu, J. E.

    2016-12-01

    Despite growing evidence of light-absorbing organic aerosols (OAs), OA light absorption has been poorly understood due to difficulties in aerosol light absorption measurements. In this study, we developed an empirical method to quantify OA single scattering albedo (SSA), the ratio of light scattering to extinction, using ground-based Aerosol Robotic Network (AERONET) observations. Our method includes partitioning fine-mode aerosol optical depth (fAOD) into individual aerosols' optical depths (AOD), separating black carbon and OA absorption aerosol optical depths, and finally binding OA SSA and sulfate+nitrate AOD. Our best estimate of OA SSA over the tropical biomass burning region is 0.91 at 550 nm, with a range of 0.82-0.93. It implies that the common OA SSA values of 0.96-1.0 in aerosol CTMs and GCMs significantly underrepresent OA light absorption. Model experiments with prescribed OA SSA showed that the enhanced absorption of solar radiation due to light-absorbing OA yields a global mean radiative forcing of +0.09 Wm-2 at the TOA, +0.21 Wm-2 in the atmosphere, and -0.12 Wm-2 at the surface. Compared to the previous assessment of OA radiative forcing reported in the AeroCom II project, our results indicate that OA light absorption causes the TOA radiative forcing by OA to change from negative (i.e., a cooling effect) to positive (a warming effect).

  16. Spatial Variability of Soil-Water Storage in the Southern Sierra Critical Zone Observatory: Measurement and Prediction

    NASA Astrophysics Data System (ADS)

    Oroza, C.; Bales, R. C.; Zheng, Z.; Glaser, S. D.

    2017-12-01

    Predicting the spatial distribution of soil moisture in mountain environments is confounded by multiple factors, including complex topography, spatial variability of soil texture, sub-surface flow paths, and snow-soil interactions. While remote-sensing tools such as passive-microwave monitoring can measure spatial variability of soil moisture, they only capture near-surface soil layers. Large-scale sensor networks are increasingly providing soil-moisture measurements at high temporal resolution across a broader range of depths than are accessible from remote sensing. It may be possible to combine these in-situ measurements with high-resolution LIDAR topography and canopy cover to estimate the spatial distribution of soil moisture at high spatial resolution at multiple depths. We study the feasibility of this approach using six years (2009-2014) of daily volumetric water content measurements at 10-, 30-, and 60-cm depths from the Southern Sierra Critical Zone Observatory. A non-parametric, multivariate regression algorithm, Random Forest, was used to predict the spatial distribution of depth-integrated soil-water storage, based on the in-situ measurements and a combination of node attributes (topographic wetness, northness, elevation, soil texture, and location with respect to canopy cover). We observe predictable patterns of predictor accuracy and independent variable ranking during the six-year study period. Predictor accuracy is highest during the snow-cover and early recession periods but declines during the dry period. Soil texture has consistently high feature importance. Other landscape attributes exhibit seasonal trends: northness peaks during the wet-up period, and elevation and topographic-wetness index peak during the recession and dry period, respectively.
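
    A minimal sketch of this prediction setup with scikit-learn's RandomForestRegressor; the feature columns and the synthetic data below are stand-ins for the node attributes and in-situ measurements described above, and the coefficients are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical feature matrix: one row per sensor node with columns
# [topographic wetness, northness, elevation, clay fraction, canopy flag].
rng = np.random.default_rng(0)
X = rng.random((120, 5))
y = 0.1 + 0.3 * X[:, 0] + 0.1 * X[:, 3] + 0.02 * rng.standard_normal(120)

model = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
model.fit(X, y)

# Out-of-bag score mimics the predictor-accuracy checks; the importances
# mirror the independent-variable ranking discussed in the abstract.
print("out-of-bag R^2:", round(model.oob_score_, 3))
print("feature importances:", np.round(model.feature_importances_, 3))
```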

  17. 3D registration of depth data of porous surface coatings based on 3D phase correlation and the trimmed ICP algorithm

    NASA Astrophysics Data System (ADS)

    Loftfield, Nina; Kästner, Markus; Reithmeier, Eduard

    2017-06-01

    A critical factor of endoprostheses is the quality of the tribological pairing. The objective of this research project is to manufacture stochastically porous aluminum oxide surface coatings with high wear resistance and an active friction minimization. There are many experimental and computational techniques, from mercury porosimetry to imaging methods, for studying porous materials; however, the characterization of disordered pore networks is still a great challenge. To meet this challenge, we aim to obtain a three-dimensional, high-resolution reconstruction of the surface. In this work, the reconstruction is approached by repeatedly milling down the surface by a fixed decrement while measuring each layer using a confocal laser scanning microscope (CLSM). The depth data of the successive layers acquired in this way are then registered pairwise. Within this work a direct registration approach is deployed and implemented in two steps, a coarse and a fine alignment. The coarse alignment of the depth data is limited to a translational shift, which occurs in the horizontal direction due to placing the sample alternately under the CLSM and the milling machine, and in the vertical direction due to the milling process itself. The shift is determined by an approach utilizing 3D phase correlation. The fine alignment is implemented by the trimmed iterative closest point algorithm, matching the most likely common pixels roughly specified by an estimated overlap rate. With the presented two-step approach, a proper 3D registration of the successive depth data of the layers is obtained.
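
    The coarse-alignment step is classic phase correlation: a pure translation appears as a sharp peak in the inverse FFT of the normalized cross-power spectrum. A minimal n-dimensional sketch (integer shifts only; sub-pixel refinement and the trimmed ICP stage are omitted):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation taking array `a` to array `b`
    via the normalized cross-power spectrum (works in 2-D or 3-D)."""
    Fa, Fb = np.fft.fftn(a), np.fft.fftn(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifftn(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half of each axis to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Quick check: recover a known shift of a random 3-D volume.
vol = np.random.default_rng(0).random((32, 32, 32))
print(phase_correlation_shift(vol, np.roll(vol, (3, -5, 2), axis=(0, 1, 2))))
```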

  18. Regional characteristics of the relationship between columnar AOD and surface PM2.5: Application of lidar aerosol extinction profiles over Baltimore-Washington Corridor during DISCOVER-AQ

    NASA Astrophysics Data System (ADS)

    Chu, D. Allen; Ferrare, Richard; Szykman, James; Lewis, Jasper; Scarino, Amy; Hains, Jennifer; Burton, Sharon; Chen, Gao; Tsai, Tzuchin; Hostetler, Chris; Hair, Johnathan; Holben, Brent; Crawford, James

    2015-01-01

    The first field campaign of DISCOVER-AQ (Deriving Information on Surface conditions from COlumn and VERtically resolved observations relevant to Air Quality) took place in July 2011 over the Baltimore-Washington Corridor (BWC). A suite of airborne remote sensing and in-situ sensors was deployed along with ground networks for mapping the vertical and horizontal distribution of aerosols. Previous studies were based on a single lidar station because of the lack of regional coverage. This study uses the unique airborne HSRL (High Spectral Resolution Lidar) data to baseline PM2.5 (particulate matter of aerodynamic diameter less than 2.5 μm) estimates and applies them to regional air quality with satellite AOD (Aerosol Optical Depth) retrievals over the BWC (∼6500 km2). The linear approximation takes into account aerosols aloft above the AML (Aerosol Mixing Layer) by normalizing AOD with the haze layer height (i.e., AOD/HLH). The PM2.5 mass concentrations estimated by HSRL AOD/HLH are within 2 RMSE (Root Mean Square Error ∼9.6 μg/m3) of the observed values over the BWC, with a correlation of ∼0.88. Similar statistics are obtained when applying HLH data from a single location over a distance of 100 km. In other words, a single lidar can feasibly cover a range of 100 km with the expected uncertainties. The employment of MPLNET-AERONET (MicroPulse Lidar NETwork - AErosol RObotic NETwork) measurements at NASA GSFC produces similar statistics of PM2.5 estimates to those derived by HSRL. The synergy of active and passive remote sensing aerosol measurements provides the foundation for satellite application to air quality on a daily basis. For the optimal range of 10 km, the MODIS-estimated PM2.5 values are found satisfactory at 27 (out of 36) sunphotometer locations with a mean RMSE of 1.6-3.3 μg/m3 relative to the PM2.5 estimated by sunphotometers. The remaining 6 of 8 marginal sites are found in the coastal zone, for which the associated large RMSE values of ∼4.5-7.8 μg/m3 are most likely due to overestimated AOD caused by water-contaminated pixels.

  19. Structural and functional correlates of hypnotic depth and suggestibility.

    PubMed

    McGeown, William Jonathan; Mazzoni, Giuliana; Vannucci, Manila; Venneri, Annalena

    2015-02-28

    This study explores whether self-reported depth of hypnosis and hypnotic suggestibility are associated with individual differences in neuroanatomy and/or levels of functional connectivity. Twenty-nine people varying in suggestibility were recruited and underwent structural magnetic resonance imaging and, after a hypnotic induction, functional magnetic resonance imaging at rest. We used voxel-based morphometry to assess the correlation of grey matter (GM) and white matter (WM) against the independent variables: depth of hypnosis, level of relaxation and hypnotic suggestibility. Functional networks identified with independent components analysis were regressed with the independent variables. Hypnotic depth ratings were positively correlated with GM volume in the frontal cortex and the anterior cingulate cortex (ACC). Hypnotic suggestibility was positively correlated with GM volume in the left temporal-occipital cortex. Relaxation ratings did not correlate significantly with GM volume, and none of the independent variables correlated with regional WM volume measures. Self-reported deeper levels of hypnosis were associated with less connectivity within the anterior default mode network. Taken together, the results suggest that the greater GM volume in the medial frontal cortex and ACC, and lower connectivity in the DMN during hypnosis, facilitate experiences of greater hypnotic depth. The patterns of results suggest that hypnotic depth and hypnotic suggestibility should not be considered synonyms.

  20. The effect of tracking network configuration on GPS baseline estimates for the CASA Uno experiment

    NASA Technical Reports Server (NTRS)

    Wolf, S. Kornreich; Dixon, T. H.; Freymueller, J. T.

    1990-01-01

    The effect of the tracking network on long (greater than 100 km) GPS baseline estimates was estimated using various subsets of the global tracking network initiated by the first Central and South America (CASA Uno) experiment. It was found that the best results could be obtained with a global tracking network consisting of three U.S. stations, two sites in the southwestern Pacific, and two sites in Europe. In comparison with smaller subsets, this global network improved the baseline repeatability, the resolution of carrier phase cycle ambiguities, and the formal errors of the orbit estimates.

  1. NIED seismic moment tensor catalogue for regional earthquakes around Japan: quality test and application

    NASA Astrophysics Data System (ADS)

    Kubo, Atsuki; Fukuyama, Eiichi; Kawai, Hiroyuki; Nonomura, Ken'ichi

    2002-10-01

    We have examined the quality of the National Research Institute for Earth Science and Disaster Prevention (NIED) seismic moment tensor (MT) catalogue obtained using a regional broadband seismic network (FREESIA). First, we used synthetic waveforms to examine the robustness of the solutions with respect to data noise as well as errors in the velocity structure and focal location. Then, to estimate the reliability, robustness and validity of the catalogue, we compared it with the Harvard centroid moment tensor (CMT) catalogue as well as the Japan Meteorological Agency (JMA) focal mechanism catalogue. We found that the NIED catalogue is consistent with the Harvard and JMA catalogues within uncertainties of 0.1 in moment magnitude, 10 km in depth, and 15° in the direction of the stress axes. The NIED MT catalogue reduced to 3.5 the lower limit of moment magnitude above which the moment tensor can be reliably estimated. Finally, we estimated the stress tensors in several different regions by using the NIED MT catalogue. This enables us to elucidate the stress/deformation field in and around the Japanese islands to understand the mode of deformation and applied stress. Moreover, we identified a region of abnormal stress in a swarm area from the stress tensor estimates.

  2. Analytical magmatic source modelling from a joint inversion of ground deformation and focal mechanisms data

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Scandura, Danila; Palano, Mimmo; Musumeci, Carla

    2014-05-01

    Seismicity and ground deformation represent the principal geophysical methods for volcano monitoring and provide important constraints on subsurface magma movements. The occurrence of migrating seismic swarms, as observed at several volcanoes worldwide, is commonly associated with dike intrusions. In addition, on active volcanoes, (de)pressurization and/or intrusion of magmatic bodies stress and deform the surrounding crustal rocks, often causing earthquakes randomly distributed in time within a volume extending about 5-10 km from the wall of the magmatic bodies. Although advances in space-based, geodetic and seismic networks have significantly improved volcano monitoring at an increasing number of volcanoes worldwide over recent decades, quantitative models relating deformation and seismicity are not common. The observation of several episodes of volcanic unrest throughout the world, where the movement of magma through the shallow crust was able to produce local rotation of the ambient stress field, introduces an opportunity to improve the estimate of the parameters of a deformation source. In particular, during these episodes of volcanic unrest, a radial pattern of P-axes of the focal mechanism solutions, similar to that of the ground deformation, has been observed. Therefore, taking into account additional information from focal mechanism data, we propose a novel approach to volcanic source modeling based on the joint inversion of deformation and focal plane solutions, assuming that both observations are due to the same source. The methodology is first verified against a synthetic dataset of surface deformation and strain within the medium, and then applied to real data from an unrest episode that occurred before the May 13th 2008 eruption at Mt. Etna (Italy). The main results clearly indicate that the joint inversion improves the accuracy of the estimated source parameters by about 70%. The statistical tests indicate that the source depth is the parameter with the highest increase in accuracy. In addition, a sensitivity analysis confirms that displacement data are more useful for constraining the pressure and the horizontal location of the source than its depth, while the P-axes better constrain the depth estimation.

  3. The AMSR2 Satellite-based Microwave Snow Algorithm (SMSA) to estimate regional to global snow depth and snow water equivalent

    NASA Astrophysics Data System (ADS)

    Kelly, R. E. J.; Saberi, N.; Li, Q.

    2017-12-01

    With moderate to high spatial resolution (<1 km) regional to global snow water equivalent (SWE) observation approaches yet to be fully scoped and developed, the long-term satellite passive microwave record remains an important tool for cryosphere-climate diagnostics. A new satellite microwave remote sensing approach is described for estimating snow depth (SD) and snow water equivalent (SWE). The algorithm, called the Satellite-based Microwave Snow Algorithm (SMSA), uses Advanced Microwave Scanning Radiometer - 2 (AMSR2) observations aboard the Global Change Observation Mission - Water mission launched by the Japan Aerospace Exploration Agency in 2012. The approach is unique since it leverages observed brightness temperatures (Tb) with static ancillary data to parameterize a physically-based retrieval without requiring parameter constraints from in situ snow depth observations or historical snow depth climatology. After screening snow from non-snow surface targets (water bodies [including freeze/thaw state], rainfall, high altitude plateau regions [e.g. Tibetan plateau]), moderate and shallow snow depths are estimated by minimizing the difference between Dense Media Radiative Transfer model estimates (Tsang et al., 2000; Picard et al., 2011) and AMSR2 Tb observations to retrieve SWE and SD. Parameterization of the model combines a parsimonious snow grain size and density approach originally developed by Kelly et al. (2003). Evaluation of the SMSA performance is achieved using in situ snow depth data from a variety of standard and experiment data sources. Results presented from winter seasons 2012-13 to 2016-17 illustrate the improved performance of the new approach in comparison with the baseline AMSR2 algorithm estimates and approach the performance of the model assimilation-based approach of GlobSnow. Given the variation in estimation power of SWE by different land surface/climate models and selected satellite-derived passive microwave approaches, SMSA provides SWE estimates that are independent of real or near real-time in situ and model data.
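
    The retrieval step described above reduces to a cost minimization over SWE. A minimal sketch with scipy, in which a toy linear proxy stands in for the DMRT forward model and all brightness-temperature numbers are invented for illustration:

```python
from scipy.optimize import minimize_scalar

def tb_misfit(swe_mm, tb_obs_19, tb_obs_37, forward):
    """Cost: squared misfit between observed brightness temperatures and a
    radiative-transfer forward model evaluated at a trial SWE."""
    tb19, tb37 = forward(swe_mm)
    return (tb19 - tb_obs_19) ** 2 + (tb37 - tb_obs_37) ** 2

# 'toy_forward' stands in for a DMRT-type model: Tb decreases with SWE,
# faster at 37 GHz than at 19 GHz (illustrative slopes only).
toy_forward = lambda swe: (260.0 - 0.05 * swe, 250.0 - 0.20 * swe)

res = minimize_scalar(tb_misfit, bounds=(0.0, 400.0), method="bounded",
                      args=(255.0, 230.0, toy_forward))
print("retrieved SWE (mm):", round(res.x, 1))   # ~100 for these inputs
```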

  4. Techniques for estimating flood-depth frequency relations for streams in West Virginia

    USGS Publications Warehouse

    Wiley, J.B.

    1987-01-01

    Multiple regression analyses are applied to data from 119 U.S. Geological Survey streamflow stations to develop equations that estimate baseline depth (depth of 50% flow duration) and 100-yr flood depth on unregulated streams in West Virginia. Drainage basin characteristics determined from the 100-yr flood depth analysis were used to develop 2-, 10-, 25-, 50-, and 500-yr regional flood depth equations. Two regions with distinct baseline depth equations and three regions with distinct flood depth equations are delineated. Drainage area is the most significant independent variable found in the central and northern areas of the state where mean basin elevation also is significant. The equations are applicable to any unregulated site in West Virginia where values of independent variables are within the range evaluated for the region. Examples of inapplicable sites include those in reaches below dams, within and directly upstream from bridge or culvert constrictions, within encroached reaches, in karst areas, and where streams flow through lakes or swamps. (Author's abstract)
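
    Such regional equations are typically fit as ordinary least squares in log space; the sketch below shows the pattern with the two predictors named above. The station values and units are hypothetical placeholders, not the report's data.

```python
import numpy as np

# Hypothetical stations: drainage area (mi^2), mean basin elevation (ft),
# and observed 100-yr flood depth (ft).
area = np.array([12.0, 55.0, 140.0, 320.0, 900.0])
elev = np.array([1800.0, 2200.0, 2600.0, 3000.0, 3400.0])
depth100 = np.array([6.2, 9.8, 13.1, 17.5, 24.0])

# Fit log10(depth) = a + b*log10(area) + c*log10(elev).
X = np.column_stack([np.ones_like(area), np.log10(area), np.log10(elev)])
coef, *_ = np.linalg.lstsq(X, np.log10(depth100), rcond=None)

def predict_depth(a, e):
    return 10 ** (coef[0] + coef[1] * np.log10(a) + coef[2] * np.log10(e))

print("predicted 100-yr depth (ft):", round(predict_depth(200.0, 2500.0), 1))
```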

  5. Making Initial Earthquake Catalogs from a Temporary Seismic Network for Monitoring Aftershocks

    NASA Astrophysics Data System (ADS)

    Park, J.; Kang, T. S.; Kim, K. H.; Rhie, J.; Kim, Y.

    2017-12-01

    The ML 5.1 foreshock and ML 5.8 mainshock earthquakes occurred consecutively in Gyeongju, the southeastern part of the Korean Peninsula, on September 12, 2016. A temporary seismic network was quickly installed in the vicinity of the epicenter to observe the aftershocks that followed this mainshock. The network initially consisted of 27 stations equipped with broadband sensors and was operated as an off-line system, which required periodic manual backup of the recorded data. We detected P-triggers and associated events by using SeisComP3 to rapidly produce an initial catalogue of aftershock events. Where necessary, manual picking was performed to obtain precise P- and S-arrival times using the scolv module included in SeisComP3. For cross-checking of reliable identification of seismic phases, a seismic Python package, PhasePApy, was applied in parallel with SeisComP3. We then obtained precisely relocated coordinates and depths of the aftershock events using the velellipse algorithm. The resulting dataset comprises an initial aftershock catalog. The catalog will provide the means to address some important questions and issues on seismogenesis in this intraplate seismicity region, including the 2016 Gyeongju earthquake sequence, and to improve seismic hazard estimation of the region.

  6. Inversion of 2-D DC resistivity data using rapid optimization and minimal complexity neural network

    NASA Astrophysics Data System (ADS)

    Singh, U. K.; Tiwari, R. K.; Singh, S. B.

    2010-02-01

    The backpropagation (BP) artificial neural network (ANN) optimization technique, based on the steepest-descent algorithm, is known to perform poorly and does not ensure global convergence. Nonlinear and complex DC resistivity data require an efficient ANN model and more intensive optimization procedures for better results and interpretations. Improvements in the computational ANN modeling process are described with the goals of enhancing the optimization process and reducing ANN model complexity. Well-established optimization methods, such as the radial basis algorithm (RBA) and the Levenberg-Marquardt algorithm (LMA), have frequently been used to deal with complexity and nonlinearity in such complex geophysical records. We examined the efficiency of trained LMA and RB networks by using 2-D synthetic resistivity data and then applied them to actual field vertical electrical resistivity sounding (VES) data collected from the Puga Valley, Jammu and Kashmir, India. The resulting ANN resistivity reconstructions are compared with the results of existing inversion approaches, with which they are in good agreement. The depths and resistivity structures obtained by the ANN methods also correlate well with the known drilling results and geologic boundaries. The application of the above ANN algorithms proves to be robust and could be used for fast estimation of resistive structures for other complex earth models as well.
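
    As a point of reference, a radial-basis-function network can be trained without gradient descent at all: pick centers by clustering, then solve the output weights by linear least squares. A minimal sketch under those assumptions (toy data; not the paper's network architecture or training procedure):

```python
import numpy as np
from sklearn.cluster import KMeans

def train_rbf(X, y, n_centers=10, sigma=1.0):
    """Fit an RBF network: KMeans centers plus a linear least-squares
    readout over Gaussian basis activations."""
    km = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X)
    centers = km.cluster_centers_

    def features(Z):
        d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    w, *_ = np.linalg.lstsq(features(X), y, rcond=None)
    return lambda Z: features(Z) @ w

# Toy mapping from 8-point "sounding curves" to a scalar target.
rng = np.random.default_rng(0)
X = rng.random((200, 8))
y = X[:, :4].sum(axis=1)
model = train_rbf(X, y)
print(np.round(model(X[:3]), 2), np.round(y[:3], 2))
```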

  7. Catalog of Residential Depth-Damage Functions Used by the Army Corps of Engineers in Flood Damage Estimation

    DTIC Science & Technology

    1992-05-01

    regression analysis. The strength of any one variable can be estimated along with the strength of the entire model in explaining the variance of percent... applicable a set of damage functions is to a particular situation. Sometimes depth-damage functions are embedded in computer programs which calculate... functions. Chapter Six concludes with recommended policies on the development and application of depth-damage functions.

  8. National Network of Eisenhower Regional Consortia and Clearinghouse: Supporting the Improvement of Mathematics and Science in America's Schools. Evaluation Summary Report for 1995-2000 with In-Depth Evaluation of Training and Technical Assistance, Dissemination, and Collaboration and Networking Services.

    ERIC Educational Resources Information Center

    National Network of Eisenhower Regional Consortia and National Clearinghouse.

    This report, addressed to sponsors and partners of the Eisenhower consortia and clearinghouse network as well as the staff of those organizations, contains the evaluation summary report of the National Network of Eisenhower Regional Consortia and Clearinghouse. It summarizes network outcomes over the 5-year period between 1995-2000. The report…

  9. The INGV seismic monitoring system: activities during the first month of the 2016 Amatrice seismic sequence.

    NASA Astrophysics Data System (ADS)

    Scognamiglio, L.; Margheriti, L.; Moretti, M.; Pintore, S.

    2016-12-01

    At 01:36:32 UTC on August 24, 2016, an earthquake of ML=6.0 occurred in Central Italy, near the village of Amatrice; the first automatic location became available 21 s after the origin time, while the first magnitude estimate followed 47 s later. The INGV seismologists on duty provided the alert to the Italian Civil Protection Department and thereby triggered the seismic emergency protocol. In the hours after the earthquake, hundreds of events were recorded by the Italian Seismic Network of the INGV. SISMIKO, the coordinating body of the emergency seismic network, was activated a few minutes after the mainshock. The main goal of this emergency group is to install a temporary dense seismic network, integrated with the existing permanent networks in the epicentral area, to better constrain the aftershock hypocenters. From August 24th to 30th, SISMIKO deployed 18 seismic stations, generally six-component (equipped with both a seismometer and an accelerometer), 13 of which were transmitting in real time to the INGV seismic surveillance room in Rome. All data acquired are available at the European Integrated Data Archive (EIDA). In the first month, the seismic sequence generated thousands of earthquakes, which were detected and processed by the INGV automated localization system, and we analyzed the performance of this system. Hundreds of those events were located by seismologists on shifts; the others were left to be analyzed by the Bollettino Sismico Italiano (BSI). The procedures of the BSI revise and integrate all available data, which allows for better constrained locations and more realistic hypocentral depth estimation. The first eight hours of August 24th were the most critical for the INGV surveillance room. Data recorded in these hours were carefully re-analyzed by BSI operators, and the number of located events increased from 133 to 408, while the magnitude of completeness dropped significantly from about 3.5 to 2.7.

  10. Low Loss Polymer Nanoparticle Composites for RF Applications

    DTIC Science & Technology

    2014-09-17

    size of nanoparticles below a critical dimension (skin depth).[6] It is possible to increase the skin depth of the hybrid material by decreasing the... filled with elastomers,[10-12] polymer-nanoparticle composites,[13, 14] liquid metal filled microfluidic channels,[4, 15] conductive networks on pre...

  11. Estimation of bathymetric depth and slope from data assimilation of swath altimetry into a hydrodynamic model

    NASA Astrophysics Data System (ADS)

    Durand, Michael; Andreadis, Konstantinos M.; Alsdorf, Douglas E.; Lettenmaier, Dennis P.; Moller, Delwyn; Wilson, Matthew

    2008-10-01

    The proposed Surface Water and Ocean Topography (SWOT) mission would provide measurements of water surface elevation (WSE) for characterization of storage change and discharge. River channel bathymetry is a significant source of uncertainty in estimating discharge from WSE measurements, however. In this paper, we demonstrate an ensemble-based data assimilation (DA) methodology for estimating bathymetric depth and slope from WSE measurements and the LISFLOOD-FP hydrodynamic model. We performed two proof-of-concept experiments using synthetically generated SWOT measurements. The experiments demonstrated that bathymetric depth and slope can be estimated to within 50 cm and 3.0 microradians, respectively, using SWOT WSE measurements, within the context of our DA and modeling framework. We found that channel bathymetry estimation accuracy is relatively insensitive to SWOT measurement error, because uncertainty in LISFLOOD-FP inputs (such as channel roughness and upstream boundary conditions) is likely to be of greater magnitude than measurement error.
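
    The analysis step of such an ensemble-based DA scheme is typically a stochastic ensemble Kalman filter update. A minimal sketch, assuming a linear observation operator H mapping the state vector (e.g., reach depths and slope) to WSE observations; the hydrodynamic model itself would supply the forecast ensemble and is not shown.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, H, seed=0):
    """Stochastic ensemble Kalman filter analysis step.

    ensemble : (n_state, n_members) forecast ensemble
    obs      : (n_obs,) observed water surface elevations
    H        : (n_obs, n_state) linear observation operator
    """
    n_mem = ensemble.shape[1]
    A = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)              # obs-space anomalies
    Pxy = A @ HA.T / (n_mem - 1)                          # state-obs covariance
    Pyy = HA @ HA.T / (n_mem - 1) + obs_err_var * np.eye(len(obs))
    K = Pxy @ np.linalg.inv(Pyy)                          # Kalman gain
    # Perturb observations so the analysis spread stays consistent.
    rng = np.random.default_rng(seed)
    Y = obs[:, None] + rng.normal(0.0, np.sqrt(obs_err_var), (len(obs), n_mem))
    return ensemble + K @ (Y - HX)
```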

  12. Field estimates of groundwater circulation depths in two mountainous watersheds in the western U.S. and the effect of deep circulation on solute concentrations in streamflow

    NASA Astrophysics Data System (ADS)

    Frisbee, Marty D.; Tolley, Douglas G.; Wilson, John L.

    2017-04-01

    Estimates of groundwater circulation depths based on field data are lacking. These data are critical to inform and refine hydrogeologic models of mountainous watersheds, and to quantify depth and time dependencies of weathering processes in watersheds. Here we test two competing hypotheses on the role of geology and geologic setting in deep groundwater circulation and the role of deep groundwater in the geochemical evolution of streams and springs. We test these hypotheses in two mountainous watersheds that have distinctly different geologic settings (one crystalline, metamorphic bedrock and the other volcanic bedrock). Estimated circulation depths for springs in both watersheds range from 0.6 to 1.6 km and may be as great as 2.5 km. These estimated groundwater circulation depths are much deeper than commonly modeled depths, suggesting that we may be forcing groundwater flow paths to be too shallow in models. In addition, the spatial relationships of groundwater circulation depths are different between the two watersheds. Groundwater circulation depths in the crystalline bedrock watershed increase with decreasing elevation, indicative of topography-driven groundwater flow. This relationship is not present in the volcanic bedrock watershed, suggesting that both the source of fracturing (tectonic versus volcanic) and increased primary porosity in the volcanic bedrock play a role in deep groundwater circulation. The results from the crystalline bedrock watershed also indicate that relatively deep groundwater circulation can occur at local scales in headwater drainages less than 9.0 km2 and at larger fractions than commonly perceived. Deep groundwater is a primary control on streamflow processes and solute concentrations in both watersheds.

  13. Node Self-Deployment Algorithm Based on an Uneven Cluster with Radius Adjusting for Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Xu, Yiming; Wu, Feng

    2016-01-01

    Existing move-restricted node self-deployment algorithms are based on a fixed node communication radius, evaluate performance based on network coverage or the connectivity rate, and do not consider the number of nodes near the sink node or the energy consumption distribution of the network topology, thereby degrading network reliability and the energy consumption balance. Therefore, we propose a distributed underwater node self-deployment algorithm. First, each node begins the uneven clustering based on the distance on the water surface. Each cluster head node selects its next-hop node to synchronously construct a connected path to the sink node. Second, the cluster head node adjusts its depth while maintaining the layout formed by the uneven clustering and then adjusts the positions of in-cluster nodes. The algorithm is novel in considering network reliability and the energy consumption balance during node deployment, and in considering the coverage redundancy rate of all positions that a node may reach during the node position adjustment. Simulation results show, compared to the connected dominating set (CDS)-based depth computation algorithm, that the proposed algorithm can increase the number of nodes near the sink node and improve network reliability while guaranteeing the network connectivity rate. Moreover, it can balance energy consumption during network operation, further improve the network coverage rate and reduce energy consumption. PMID:26784193

  14. Estimates of Cutoff Depths of Seismogenic Layer in Kanto Region from the High-Resolution Relocated Earthquake Catalog

    NASA Astrophysics Data System (ADS)

    Takeda, T.; Yano, T. E.; Shiomi, K.

    2013-12-01

    Highly developed active-fault evaluation is particularly necessary in the Kanto metropolitan area, where multiple major active fault zones exist. The cutoff depth of active faults is one of the important parameters since it is a good indicator for defining fault dimensions and hence the maximum expected magnitude. The depth is normally estimated from microseismicity, thermal structure, and the depths of the Curie point and the Conrad discontinuity. For instance, Omuralieva et al. (2012) estimated the cutoff depths for the whole of Japan by creating a 3-D relocated hypocenter catalog. However, its spatial resolution could be insufficient for robust active-fault evaluation, since precision within 15 km, comparable to the minimum evaluated fault size, is preferred. Therefore, the spatial resolution of the earthquake catalog used to estimate the cutoff depth is required to be finer than 15 km. This year we launched the Japan Unified hIgh-resolution relocated Catalog for Earthquakes (JUICE) Project (Yano et al., this fall meeting), whose objective is to create a precise and reliable earthquake catalog for all of Japan, using waveform cross-correlation data and the double-difference relocation method (Waldhauser and Ellsworth, 2000). This catalog has higher precision of hypocenter determination than the routine one. In this study, we estimate high-resolution cutoff depths of the seismogenic layer using this catalog for the Kanto region, where preliminary JUICE analysis has already been done. D90, the cutoff depth above which 90% of earthquakes occur, is often used as a reference for the seismogenic layer; the choice of 90% reflects the uncertainties associated with the depth errors of hypocenters. In this study we estimate D95, because a more precise and reliable catalog is now available through the JUICE project. First, we generate a 10 km equally spaced grid over our study area. Second, we pick hypocenters within a radius of 10 km from each grid point and arrange them into hypocenter groups. Finally, we estimate D95 from the hypocenter group at each grid point. During the analysis we use three conditions: (1) the depths of the hypocenters used are less than 25 km; (2) the minimum number of hypocenters in a group is 25; and (3) low-frequency earthquakes are excluded. Our estimates of D95 show undulating and fine-scale features, such as different profiles along the same fault. This can be seen at two major fault zones: (1) the Tachikawa fault zone, and (2) the northwest marginal fault zone of the Kanto basin. D95 gets deeper from northwest to southwest along these fault zones, suggesting that a constant cutoff depth cannot be used even along the same fault zone. Our D95 estimates are also deeper in the southern Kanto region; the reason for this pattern could be that the hypocenters used in this study are contaminated by seismicity near the plate boundary between the Philippine Sea plate and the Eurasian plate. Therefore, D95 in the southern Kanto region should be interpreted carefully.
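
    The D95 computation itself is a percentile statistic over hypocenters gathered around each grid node. A minimal sketch implementing the depth-cutoff and minimum-count conditions listed above (excluding low-frequency events is assumed done upstream; the catalog arrays are assumed inputs):

```python
import numpy as np

def cutoff_depth(hypo_xy, hypo_z, grid_xy, radius_km=10.0,
                 max_depth_km=25.0, min_events=25, percentile=95):
    """D95 per grid node: depth above which 95% of nearby hypocenters occur.

    hypo_xy : (n_events, 2) epicenter coordinates (km)
    hypo_z  : (n_events,) hypocenter depths (km)
    grid_xy : (n_nodes, 2) grid node coordinates (km)
    """
    d95 = np.full(len(grid_xy), np.nan)
    for i, g in enumerate(grid_xy):
        r = np.hypot(hypo_xy[:, 0] - g[0], hypo_xy[:, 1] - g[1])
        z = hypo_z[(r <= radius_km) & (hypo_z <= max_depth_km)]
        if len(z) >= min_events:        # skip sparsely sampled nodes
            d95[i] = np.percentile(z, percentile)
    return d95
```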

  15. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    PubMed Central

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  16. Evolutionary features of academic articles co-keyword network and keywords co-occurrence network: Based on two-mode affiliation network

    NASA Astrophysics Data System (ADS)

    Li, Huajiao; An, Haizhong; Wang, Yue; Huang, Jiachen; Gao, Xiangyun

    2016-05-01

    Keeping abreast of trends in the literature and rapidly grasping a body of articles' key points and relationships from a holistic perspective is a new challenge in both literature research and text mining. As an important component, keywords present the core ideas of an academic article. Articles on a single theme or area usually share one or more keywords, so we can analyze the topological features and evolution of article co-keyword networks and keyword co-occurrence networks to achieve an in-depth analysis of the articles. This paper seeks to integrate statistics, text mining, complex networks and visualization to analyze all of the academic articles on one given theme, complex network(s). All 5944 "complex networks" articles that were published between 1990 and 2013 and are available on the Web of Science are extracted. Based on two-mode affiliation network theory, a new frontier of complex networks, we constructed two different networks: one taking the articles as nodes, the co-keyword relationships as edges and the quantity of co-keywords as the weight to construct the article co-keyword network, and another taking the articles' keywords as nodes, the co-occurrence relationships as edges and the quantity of simultaneous co-occurrences as the weight to construct the keyword co-occurrence network. An integrated method for analyzing the topological features and evolution of the article co-keyword network and keyword co-occurrence networks is proposed, and we also define a new function to measure the innovation coefficient of the articles at the annual level. This paper provides a useful tool and process for achieving in-depth analysis and rapid understanding of the trends and relationships of articles from a holistic perspective.
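
    This two-mode construction maps directly onto networkx's bipartite tools: build the article-keyword affiliation graph, then take weighted one-mode projections to obtain both networks. A minimal sketch with hypothetical records standing in for the Web of Science data:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical article -> keyword assignments.
records = {
    "paper_A": ["complex networks", "centrality"],
    "paper_B": ["complex networks", "epidemics"],
    "paper_C": ["epidemics", "centrality"],
}

B = nx.Graph()
B.add_nodes_from(records, bipartite=0)                  # article layer
for art, kws in records.items():
    B.add_edges_from((art, kw) for kw in kws)           # affiliation edges

# One-mode projections: articles linked by shared keywords (co-keyword
# network) and keywords linked by shared articles (co-occurrence network).
articles = bipartite.weighted_projected_graph(B, list(records))
keywords = bipartite.weighted_projected_graph(
    B, {kw for kws in records.values() for kw in kws})
print(list(articles.edges(data=True)))
print(list(keywords.edges(data=True)))
```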

  17. MSE-impact of PPP-RTK ZTD estimation strategies

    NASA Astrophysics Data System (ADS)

    Wang, K.; Khodabandeh, A.; Teunissen, P. J. G.

    2018-06-01

    In PPP-RTK network processing, the wet component of the zenith tropospheric delay (ZTD) cannot be precisely modelled and thus remains unknown in the observation equations. For small networks, the tropospheric mapping functions of different stations to a given satellite are almost equal to each other, thereby causing a near rank-deficiency between the ZTDs and satellite clocks. The stated near rank-deficiency can be solved by estimating the wet ZTD components relatively to that of the reference receiver, while the wet ZTD component of the reference receiver is constrained to zero. However, with increasing network scale and humidity around the reference receiver, enlarged mismodelled effects could bias the network and the user solutions. To consider both the influences of the noise and the biases, the mean-squared errors (MSEs) of different network and user parameters are studied analytically employing both ZTD estimation strategies. We conclude that for a certain set of parameters, the difference in their MSE structures using both strategies is only driven by the square of the reference wet ZTD component and the formal variance of its solution. Depending on the network scale and the humidity condition around the reference receiver, the ZTD estimation strategy that delivers more accurate solutions might differ. Simulations are performed to illustrate the conclusions drawn from the analytical studies. We find that estimating the ZTDs relatively in large networks and humid regions (for the reference receiver) could significantly degrade the network ambiguity success rates. Using ambiguity-fixed network-derived PPP-RTK corrections, for networks with an inter-station distance within 100 km, the choice of ZTD estimation strategy is not crucial for single-epoch ambiguity-fixed user positioning. Using ambiguity-float network corrections, for networks with inter-station distances of 100, 300 and 500 km in humid regions (for the reference receiver), the root-mean-squared errors (RMSEs) of the estimated user coordinates using relative ZTD estimation could be higher than those under the absolute case, with differences up to millimetres, centimetres and decimetres, respectively.

  18. Lithospheric structure beneath the Caribbean-South American plate boundary from S receiver functions

    NASA Astrophysics Data System (ADS)

    Masy, J.; Levander, A.; Niu, F.

    2010-12-01

    We have analyzed teleseismic S-wave data recorded by the permanent national seismic network of Venezuela and the BOLIVAR broadband array (Broadband Ocean-Land Investigations of Venezuela and the Antilles arc Region) deployed from 2003 to 2005. A total of 28 events with Mw > 5.7 occurring at epicentral distances from 55° to 85° were used. We computed Sp receiver functions to estimate the rapid variations of lithospheric structure in the southern Caribbean plate boundary region to try to better understand the complicated tectonic history of the region. Estimated Moho depth ranges from ~20 km beneath the Caribbean Large Igneous Provinces to ~50 km beneath the Mérida Andes in western Venezuela and the Sierra del Interior in northeastern Venezuela. These results are consistent with previous receiver function studies (Niu et al., 2007) and active source profiles (Schmitz et al., 2001; Bezada et al., 2007; Clark et al., 2008; Guedez, 2008; Magnani et al., 2009). Beneath the Maracaibo Block we observe a signal at a depth of 100 km dipping ~24° towards the continent, which we interpret as the top of the oceanic Caribbean slab that is subducting beneath South America from the west. The deeper part of the slab was previously imaged using P-wave tomography (Bezada et al, 2010), and the upper part inferred from intermediate depth seismicity (Malavé and Suarez, 1995). These studies indicate flat slab subduction beneath northern Colombia and northwestern Venezuela with the slab dipping between 20° - 30° beneath Lake Maracaibo. Like others, we attribute the uplift of the Mérida Andes to the flat slab subduction (for example Kellogg and Bonini, 1982). In eastern Venezuela beneath the Sierra del Interior we also observe a deep signal that we interpret as deep South American lithosphere that is detaching from the overriding plate as the Atlantic subducts and tears away from SA (Bezada et al., 2010; Clark et al, 2008). The lithosphere-asthenosphere boundary (LAB) is not a continuous feature under the entire region; instead, it is seen beneath the Cordillera de la Costa in central Venezuela at ~130 km, and also under the Perijá Range and the Sierra del Interior. Under the Guayana Shield we observe two distinct regions with LAB depths at ~150 km depth. We also see the LAB at this depth in places north of the Orinoco River, suggesting the presence of cratonic structures north of the river. These results are in good agreement with the structures observed by Miller et al. (2009) in Rayleigh wave tomography images.

  19. A New Algorithm for the AOD Inversion from NOAA/AVHRR Data

    NASA Astrophysics Data System (ADS)

    Sun, L.; Li, R.; Yu, H.

    2018-04-01

    The advanced very high resolution radiometer (AVHRR) aboard the National Oceanic and Atmospheric Administration satellites provides one of the earliest datasets applied in aerosol research. The dense dark vegetation (DDV) algorithm is a popular method for land aerosol retrieval. One of the most crucial steps in the DDV algorithm with AVHRR data is estimating the land surface reflectance (LSR). However, the LSR cannot be easily estimated because of the lack of a 2.13 μm band. In this article, the moderate resolution imaging spectroradiometer (MODIS) vegetation index product (MYD13) is introduced to support the estimation of the AVHRR LSR. The relationship between the MODIS NDVI and the AVHRR LSR of the visible band is analysed to retrieve aerosol optical depth (AOD) from AVHRR data. Retrieval experiments are carried out in mid-eastern America. AOD data from AErosol RObotic NETwork (AERONET) measurements are used to evaluate the aerosol retrieval from AVHRR data; the results indicate that about 74% of the retrieved AOD values fall within the expected error range of ±(0.05 + 0.2·AOD), and a cross comparison of the AOD retrieval results with the MODIS aerosol product (MYD04) shows that the AOD datasets have a similar spatial distribution.
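
    The validation statistic quoted above is easy to reproduce. A minimal sketch, assuming matched retrieval/reference pairs (the arrays below are placeholders, not the study's data) and the ±(0.05 + 0.2·AOD) envelope:

        # Fraction of retrievals falling inside the expected error envelope.
        import numpy as np

        aeronet = np.array([0.08, 0.15, 0.30, 0.45, 0.60])  # reference AOD (assumed)
        avhrr   = np.array([0.10, 0.12, 0.38, 0.40, 0.75])  # retrieved AOD (assumed)

        envelope = 0.05 + 0.2 * aeronet
        within = np.abs(avhrr - aeronet) <= envelope
        print(f"{100 * within.mean():.0f}% of retrievals within the expected error")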

  20. Importance of model parameterization in finite fault inversions: Application to the 1974 Mw 8.0 Peru Earthquake

    USGS Publications Warehouse

    Hartzell, Stephen; Langer, Charley

    1993-01-01

    The spatial and temporal slip distributions for the October 3, 1974 (Mw = 8.0), Peru subduction zone earthquake and its largest aftershock on November 9 (Ms = 7.1) are calculated and analyzed in terms of the inversion parameterization and tectonic significance. Teleseismic long-period P and SH waveforms from the World-Wide Standard Seismograph Network are inverted to obtain the rupture histories. We demonstrate that erroneous results are obtained if a parameterization is used that does not allow for a sufficiently complex source, involving spatial variation in slip amplitude, risetime, and rupture time. The inversion method utilizes a parameterization of the fault that allows for a discretized source risetime and rupture time. Well-located aftershocks recorded on a local network have the same general pattern as teleseismically determined hypocenters and help to constrain the geometry of the subduction zone. For the main shock a hinged fault is preferred, having a shallow plane with a dip of 11° and a deeper, landward plane with a dip of 30°. The preferred nucleation depth lies between 11 and 15 km. A bilateral rupture is obtained with two major concentrations of slip, one 60 to 70 km to the northwest of the epicenter and a second 80 to 100 km to the south and southeast of the epicenter. For these source regions, risetimes vary from 6 to 18 s. Our estimates of risetimes are consistent with the time for the rupture to traverse the dominant local asperity. The slip distribution for the November 9 aftershock falls within a conspicuous hole in the main shock rupture pattern, near the hypocenter of the main shock. The November 9 event has a simple risetime function with a duration of 2 s. Aftershocks recorded by the local network are shown to cluster near the hypocenter of the impending November 9 event and downdip from the largest main shock source region. Slip during the main shock is concentrated at shallow depths above 15 km and extends updip from the hypocenter to near the plate boundary at the trench axis. The large amount of slip at shallow depths is attributed to the absence of any significant accretionary wedge of sediments, and to the relatively young age and high convergence rate of the subducted plate, which result in good seismic coupling near the trench axis.

  1. The performance of the stations of the Romanian seismic network in monitoring the local seismic activity

    NASA Astrophysics Data System (ADS)

    Ardeleanu, Luminita Angela; Neagoe, Cristian

    2014-05-01

    The seismic survey of the territory of Romania is mainly performed by the national seismic network operated by the National Institute for Earth Physics in Bucharest. After successive developments and upgrades, the network at present consists of 123 permanent stations equipped with high-quality digital instruments (Kinemetrics K2, Quanterra Q330, Quanterra Q330HR, PS6-24 and Basalt digitizers) - 102 real-time and 20 off-line stations - which cover the whole territory of the country. All permanent stations are supplied with three-component accelerometers (EpiSensor type), while the real-time stations are in addition provided with broadband (CMG3ESP, CMG40T, KS2000, KS54000, CMG3T, STS2) or short-period (SH-1, S13, Mark L4C, Ranger, GS21, L22_VEL) velocity sensors. Several communication systems are currently used for real-time data transmission: an analog line in the UHF band, a line through GPRS (General Packet Radio Service), a dedicated satellite line, and a dedicated line provided by the Romanian Special Telecommunication Service. During the period January 1, 2006 - June 30, 2013, 5936 shallow seismic events - earthquakes and quarry blasts - with local magnitude ML ≥ 1.2 were located on Romanian territory or in its immediate vicinity using the records of the national seismic network; 1467 subcrustal earthquakes (depth ≥ 60 km) with magnitude ML ≥ 1.9 were also located in the Vrancea region, at the bend of the Eastern Carpathians. The goal of the present study is to evaluate the individual contribution of the real-time seismic stations to the monitoring of the local seismicity. The performance of each station is estimated from the fraction of catalogue events located using the station's records, relative to the total number of events that occurred during the time of station operation. Taking into account the nonuniform spatial distribution of earthquakes, the location of the site and the recovery rate of reliable data are the defining elements for the usefulness of a particular station. Our analysis provides a measure of station reliability, an essential indicator for decisions regarding the effectiveness and future development of the Romanian seismic network.
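
    A minimal sketch of this performance measure - the fraction of catalogue events located using a station's records, restricted to its operating period. The event records, field names and station codes below are hypothetical:

        # Station contribution score: located-with-station events / events in operation period.
        from datetime import date

        def station_score(events, station_code, start, end):
            # events: dicts with an occurrence 'date' and the set of 'stations'
            # whose records were used in the location.
            in_operation = [e for e in events if start <= e["date"] <= end]
            if not in_operation:
                return 0.0
            used = sum(1 for e in in_operation if station_code in e["stations"])
            return used / len(in_operation)

        events = [
            {"date": date(2010, 3, 1), "stations": {"MLR", "VRI"}},
            {"date": date(2011, 7, 9), "stations": {"VRI"}},
            {"date": date(2012, 1, 5), "stations": {"MLR", "BUC"}},
        ]
        print(station_score(events, "MLR", date(2010, 1, 1), date(2013, 6, 30)))  # 0.67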

  2. A multi-scale automatic observatory of soil moisture and temperature served for satellite product validation in Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Tang, S.; Dong, L.; Lu, P.; Zhou, K.; Wang, F.; Han, S.; Min, M.; Chen, L.; Xu, N.; Chen, J.; Zhao, P.; Li, B.; Wang, Y.

    2016-12-01

    Due to the lack of observations that match the satellite pixel size, the inversion accuracy of satellite products over the Tibetan Plateau (TP) is difficult to evaluate. Hence, in situ observations are necessary to support calibration and validation activities. With the support of the Third Tibetan Plateau Atmospheric Scientific Experiment (TIPEX-III) project, a multi-scale automatic observatory of soil moisture and temperature serving satellite product validation (TIPEX-III-SMTN) was established on the Tibetan Plateau. The observatory consists of two regional-scale networks, the Naqu network and the Geji network. The Naqu network is located in the north of the TP and is characterized by alpine grasslands; the Geji network is located in the west of the TP and is characterized by marshes. The Naqu network includes 33 stations deployed in a 75 km × 75 km region according to a pre-designed pattern. At each station, soil moisture and temperature are measured by five sensors at five soil depths. One sensor is vertically inserted into the 0-2 cm depth to measure the averaged near-surface soil moisture and temperature; the other four sensors are horizontally inserted at 5, 10, 20, and 30 cm depths, respectively. The data are recorded every 10 minutes. A wireless transmission system is applied to transmit the data in real time, and a dual power supply system is adopted to keep the observations continuous. The construction of the Naqu network was completed in August 2015, and the Geji network will be established before October 2016. Observations acquired from TIPEX-III-SMTN can be used to validate satellite products at different spatial resolutions, and TIPEX-III-SMTN can also serve as a complement to existing similar networks in this area, such as CTP-SMTMN (the multiscale Soil Moisture and Temperature Monitoring Network on the central TP). Keywords: multi-scale, soil moisture, soil temperature, Tibetan Plateau. Acknowledgments: This work was jointly supported by the CMA Special Fund for Scientific Research in the Public Interest (Grant No. GYHY201406001, GYHY201206008-01) and the Climate Change Special Fund (QHBH2014).

  3. Teleseismic depth estimation of the 2015 Gorkha-Nepal aftershocks

    NASA Astrophysics Data System (ADS)

    Letort, Jean; Bollinger, Laurent; Lyon-Caen, Helene; Guilhem, Aurélie; Cano, Yoann; Baillard, Christian; Adhikari, Lok Bijaya

    2016-12-01

    The depth of 61 aftershocks of the 2015 April 25 Gorkha, Nepal earthquake, that occurred within the first 20 d following the main shock, is constrained using time delays between teleseismic P phases and depth phases (pP and sP). The detection and identification of these phases are automatically processed using the cepstral method developed by Letort et al., and are validated with computed radiation patterns from the most probable focal mechanisms. The events are found to be relatively shallow (13.1 ± 3.9 km). Because depth estimations could potentially be biased by the method, velocity model or selected data, we also evaluate the depth resolution of the events from local catalogues by extracting 138 events with assumed well-constrained depth estimations. Comparison between the teleseismic depths and the depths from local and regional catalogues helps decrease epistemic uncertainties, and shows that the seismicity is clustered in a narrow band between 10 and 15 km depth. Given the geometry and depth of the major tectonic structures, most aftershocks are probably located in the immediate vicinity of the Main Himalayan Thrust (MHT) shear zone. The mid-crustal ramp of the flat/ramp MHT system is not resolved indicating that its height is moderate (less than 5-10 km) in the trace of the sections that ruptured on April 25. However, the seismicity depth range widens and deepens through an adjacent section to the east, a region that failed on 2015 May 12 during an Mw 7.3 earthquake. This deeper seismicity could reflect a step-down of the basal detachment of the MHT, a lateral structural variation which probably acted as a barrier to the dynamic rupture propagation.
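
    For a homogeneous near-source crust, the depth-phase geometry behind such estimates reduces to Δt(pP−P) ≈ 2h·cos(i)/v. A back-of-envelope sketch, with an assumed crustal P velocity and takeoff angle rather than the study's cepstral implementation:

        # Focal depth from a pP-P depth-phase delay: h = v * dt / (2 cos i).
        import math

        def depth_from_pP(dt_s, v_kms=6.0, takeoff_deg=25.0):
            # dt_s: pP-P delay (s); v_kms, takeoff_deg: assumed near-source values.
            return v_kms * dt_s / (2.0 * math.cos(math.radians(takeoff_deg)))

        print(f"{depth_from_pP(4.0):.1f} km")  # a 4 s delay maps to ~13 km depth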

  4. Stakeholder Perspectives. Proceedings of the Ed-ICT International Network Symposium (Dawson College Montreal, Quebec, Canada, May 30-Jun 1, 2017)

    ERIC Educational Resources Information Center

    Jorgensen, Mary; Fichten, Catherine; King, Laura; Havel, Alice

    2018-01-01

    The purpose of these conference proceedings is to provide an in-depth understanding of what was presented and discussed at the Ed-ICT International Network Montreal Symposium: Stakeholder Perspectives. The focus of the Ed-ICT International Network is to explore the role that information and communication technologies (ICTs)--including computers,…

  5. Characteristics and Impact of the Further Mathematics Knowledge Networks: Analysis of an English Professional Development Initiative on the Teaching of Advanced Mathematics

    ERIC Educational Resources Information Center

    Ruthven, Kenneth

    2014-01-01

    Reports from 13 Further Mathematics Knowledge Networks supported by the National Centre for Excellence in the Teaching of Mathematics [NCETM] are analysed. After summarizing basic characteristics of the networks regarding leadership, composition and pattern of activity, each of the following aspects is examined in greater depth: Developmental aims…

  6. Information Computer Communications Policy, 2: The Usage of International Data Networks in Europe.

    ERIC Educational Resources Information Center

    Organisation for Economic Cooperation and Development, Paris (France).

    This study of the development of international data networks, a phenomenon of the 1970s, and of the policy issues arising from their use is an in-depth investigation of 24 private and six public European networks, commissioned from Logica Limited and sponsored by the governments of France, Germany, the Netherlands, Norway, Spain, and Sweden. The report…

  7. Small scatterers in the lower mantle observed at German broadband arrays

    USGS Publications Warehouse

    Thomas, C.; Weber, M.; Wicks, C.W.; Scherbaum, F.

    1999-01-01

    Seismograms of earthquakes from the South Pacific recorded at a German broadband array and network show precursors to PKPdf. These precursors mainly originate from off-path scattering of PKPab or a nearby PKPbc to P (for receiver-side scattering) or from scattering of P to PKPab or PKPbc on the PKPdf path (for source-side scattering). Standard array processing techniques based on plane wave approximations (such as vespagram or frequency-wavenumber analysis) are inadequate for investigating these precursors, since scattered waves cannot be approximated as plane waves for arrays and networks larger than 300 x 300 km for short-period waves. We therefore develop a migration method to estimate the location of scatterers in the mantle, at the core-mantle boundary and at the top of the outer core. With our method we are able to find isolated scatterers at the source side and the receiver side, although the depth of the scatterer is not well constrained. However, from the first possible arrival time of precursors at different depths and the region where scattering can take place (the scattering volume), we believe that the scatterers are located in the lowermost mantle. Since we have detected scatterers in regions where ultralow-velocity zones have been discovered recently, we think that the precursor energy possibly originates from scattering at partial melt at the base of the mantle. By comparing results from broadband and band-pass-filtered data, detection of the small-scale structure of the ultralow-velocity zones becomes possible. Copyright 1999 by the American Geophysical Union.

  8. Cross-country transferability of multi-variable damage models

    NASA Astrophysics Data System (ADS)

    Wagenaar, Dennis; Lüdtke, Stefan; Kreibich, Heidi; Bouwer, Laurens

    2017-04-01

    Flood damage assessment is often done with simple damage curves based only on flood water depth. Additionally, damage models are often transferred in space and time, e.g. from region to region or from one flood event to another. Validation has shown that depth-damage curve estimates are associated with high uncertainties, particularly when applied in regions outside the area where the data for curve development were collected. Recently, progress has been made with multi-variable damage models created with data-mining techniques, i.e. Bayesian networks and random forests. However, it is still unknown to what extent and under which conditions model transfers are possible and reliable. Model validations in different countries will provide valuable insights into the transferability of multi-variable damage models. In this study we compare multi-variable models developed on the basis of flood damage datasets from Germany and from The Netherlands. Data from several German floods were collected using computer-aided telephone interviews. Data from the 1993 Meuse flood in the Netherlands are available, based on compensations paid by the government. The Bayesian network and random forest based models are applied and validated in both countries on the basis of the individual datasets. A major challenge was the harmonization of the variables between the two datasets, due to factors such as differences in variable definitions and regional and temporal differences in flood hazard and exposure characteristics. Results of the model validations and comparisons in both countries are discussed, particularly with respect to the challenges encountered and possible solutions for improving model transferability.
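
    A minimal example of the kind of multi-variable model compared above: a random forest regressor trained on water depth plus further explanatory variables. Features and data are synthetic stand-ins for the German and Dutch datasets:

        # Multi-variable flood damage model (random forest) on synthetic data.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n = 500
        X = np.column_stack([
            rng.uniform(0, 3, n),    # water depth (m)
            rng.uniform(0, 72, n),   # inundation duration (h)
            rng.integers(0, 2, n),   # contamination flag
        ])
        # Synthetic relative damage in [0, 1], increasing with depth and duration.
        y = np.clip(0.25 * X[:, 0] + 0.003 * X[:, 1] + 0.1 * X[:, 2]
                    + rng.normal(0, 0.05, n), 0, 1)

        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
        print(model.predict([[1.5, 24, 1]]))  # predicted relative damage, one building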

  9. Microbial community dynamics in soil aggregates shape biogeochemical gas fluxes from soil profiles - upscaling an aggregate biophysical model.

    PubMed

    Ebrahimi, Ali; Or, Dani

    2016-09-01

    Microbial communities inhabiting soil aggregates dynamically adjust their activity and composition in response to variations in hydration and other external conditions. These rapid dynamics shape signatures of biogeochemical activity and gas fluxes emitted from soil profiles. Recent mechanistic models of microbial processes in unsaturated aggregate-like pore networks revealed a highly dynamic interplay between oxic and anoxic microsites jointly shaped by hydration conditions and by aerobic and anaerobic microbial community abundance and self-organization. The spatial extent of anoxic niches (hotspots) flickers in time (hot moments) and supports substantial anaerobic microbial activity even in aerated soil profiles. We employed an individual-based model for microbial community life in soil aggregate assemblies represented by 3D angular pore networks. Model aggregates of different sizes were subjected to variable water, carbon and oxygen contents that varied with soil depth as boundary conditions. The study integrates microbial activity within aggregates of different sizes and soil depths to obtain estimates of biogeochemical fluxes from the soil profile. The results quantify impacts of dynamic shifts in microbial community composition on CO2 and N2O production rates in soil profiles, in good agreement with experimental data. Aggregate size distribution and the shape of resource profiles in a soil determine how hydration dynamics shape denitrification and carbon utilization rates. Results from the mechanistic model for microbial activity in aggregates of different sizes were used to derive parameters for analytical representation of soil biogeochemical processes across large scales of practical interest for hydrological and climate models. © 2016 John Wiley & Sons Ltd.

  10. Estimation of Sea Ice Thickness Distributions through the Combination of Snow Depth and Satellite Laser Altimetry Data

    NASA Technical Reports Server (NTRS)

    Kurtz, Nathan T.; Markus, Thorsten; Cavalieri, Donald J.; Sparling, Lynn C.; Krabill, William B.; Gasiewski, Albin J.; Sonntag, John G.

    2009-01-01

    Combinations of sea ice freeboard and snow depth measurements from satellite data have the potential to provide a means to derive global sea ice thickness values. However, large differences in spatial coverage and resolution between the measurements lead to uncertainties when combining the data. High resolution airborne laser altimeter retrievals of snow-ice freeboard and passive microwave retrievals of snow depth taken in March 2006 provide insight into the spatial variability of these quantities as well as optimal methods for combining high resolution satellite altimeter measurements with low resolution snow depth data. The aircraft measurements show a relationship between freeboard and snow depth for thin ice allowing the development of a method for estimating sea ice thickness from satellite laser altimetry data at their full spatial resolution. This method is used to estimate snow and ice thicknesses for the Arctic basin through the combination of freeboard data from ICESat, snow depth data over first-year ice from AMSR-E, and snow depth over multiyear ice from climatological data. Due to the non-linear dependence of heat flux on ice thickness, the impact on heat flux calculations when maintaining the full resolution of the ICESat data for ice thickness estimates is explored for typical winter conditions. Calculations of the basin-wide mean heat flux and ice growth rate using snow and ice thickness values at the 70 m spatial resolution of ICESat are found to be approximately one-third higher than those calculated from 25 km mean ice thickness values.
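
    The freeboard-to-thickness conversion underlying this combination follows from hydrostatic balance: with total (snow-surface) freeboard F and snow depth hs, the ice thickness is hi = (ρw·F − (ρw − ρs)·hs)/(ρw − ρi). A sketch with typical assumed densities, which may differ from the study's parameter choices:

        # Sea ice thickness from laser (snow-surface) freeboard and snow depth.
        def ice_thickness(freeboard_m, snow_depth_m,
                          rho_w=1024.0, rho_i=915.0, rho_s=320.0):
            # Densities (kg/m^3) are typical assumed values for water, ice, snow.
            return (rho_w * freeboard_m
                    - (rho_w - rho_s) * snow_depth_m) / (rho_w - rho_i)

        print(f"{ice_thickness(0.45, 0.25):.2f} m")  # ~2.6 m for these assumed inputs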

  11. First Quarter Hanford Seismic Report for Fiscal Year 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohay, Alan C.; Sweeney, Mark D.; Hartshorn, Donald C.

    2010-03-29

    The Hanford Seismic Network and the Eastern Washington Regional Network consist of 44 individual sensor sites and 15 radio relay sites maintained by the Hanford Seismic Assessment Team. The Hanford Seismic Network recorded 81 local earthquakes during the first quarter of FY 2010. Sixty-five of these earthquakes were detected in the vicinity of Wooded Island, located about eight miles north of Richland just west of the Columbia River. The Wooded Island events recorded this quarter are a continuation of the swarm events observed during fiscal year 2009 and reported in previous quarterly and annual reports (Rohay et al., 2009a, 2009b, 2009c, and 2009d). Most of the events were considered minor (coda-length magnitude [Mc] less than 1.0), with only one event in the 2.0-3.0 range; the maximum-magnitude event (2.5 Mc) occurred on December 22 at a depth of 2.1 km. The average depth of the Wooded Island events during the quarter was 1.4 km, with a maximum depth estimated at 3.1 km. This placed the Wooded Island events within the Columbia River Basalt Group (CRBG). The low magnitude of the Wooded Island events has made them undetectable to all but local area residents. The Hanford SMA network was triggered several times by these events, and the SMA recordings are discussed in section 6.0. During the last year some Hanford employees working within a few miles of the swarm area and individuals living directly across the Columbia River from the swarm center have reported feeling many of the larger-magnitude events. Strong motion accelerometer (SMA) units installed at ground surface directly above the swarm area measured peak ground accelerations approaching 15% g, the largest values recorded at Hanford. This corresponds to strong shaking of the ground, consistent with what people in the local area have reported. However, the duration and magnitude of these swarm events should not result in any structural damage to facilities. The USGS performed a geophysical survey using satellite interferometry that detected approximately 1 inch of uplift in surface deformation along an east-west transect within the swarm area. The uplift is thought to be caused by the release of pressure that has built up in sedimentary layers, cracking the brittle basalt layers within the Columbia River Basalt Group (CRBG) and causing the earthquakes. Similar earthquake swarms have been recorded near this same location in 1970, 1975 and 1988, but without SMA readings or satellite imagery. Prior to the 1970s, swarming may have occurred, but equipment was not in place to record those events. The Wooded Island swarm, due to its location and the limited magnitude of the events, does not appear to pose any significant risk to Hanford waste storage facilities. Since swarms of the past did not intensify in magnitude, seismologists do not expect that these events will persist or increase in intensity. However, Pacific Northwest National Laboratory (PNNL) will continue to monitor the activity. Outside of the Wooded Island swarm, sixteen earthquakes were recorded, all minor events. Seven earthquakes were located at intermediate depths (between 4 and 9 km), most likely in the pre-basalt sediments, and nine earthquakes at depths greater than 9 km, within the basement. Geographically, seven earthquakes were located in known swarm areas and nine earthquakes were classified as random events.

  12. Condition Monitoring of Off-Highway Truck Tires at Sungun Copper Mine Using Neural Networks / Monitorowanie Stanu Technicznego Opon W CIĘŻKICH Pojazdach Terenowych Eksploatowanych W Kopalni Miedzi Sungun, Przy UŻYCIU Sieci Neuronowych

    NASA Astrophysics Data System (ADS)

    Morad, Amin Moniri; Sattarvand, Javad

    2013-12-01

    Maintenance cost of the equipment is one of the most important portions of the operating expenditures in mines; therefore, any change in equipment productivity can lead to major changes in the unit cost of production. This clearly shows the importance and necessity of using novel maintenance methods instead of traditional approaches, in order to minimize sudden occurrences of equipment failure. For instance, tires are costly components in maintenance which should be regularly inspected and replaced among different axles. This paper investigates the current condition of equipment tires at Sungun Copper Mine and uses neural networks to estimate the wear of the tires. The input parameters of the network comprise the initial tread depth, the time of inspection and the tread depth consumed by the time of inspection. The output of the network is the residual service-time ratio of the tires. The network was trained with the feed-forward back-propagation learning algorithm. Results revealed a good agreement between the real and estimated values, with a correlation coefficient of 96.6%. Hence, better decisions can be made about the tires to reduce sudden failures and equipment breakdowns.
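
    A sketch of such an estimator with a small feed-forward network; the training data are synthetic, and the layer size and library are assumptions rather than the paper's configuration:

        # Residual service-time ratio of a tire from (initial tread, hours, consumed tread).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        n = 300
        initial_tread = rng.uniform(60, 100, n)   # mm, assumed range
        hours = rng.uniform(0, 5000, n)           # operating hours at inspection
        consumed = np.clip(initial_tread * hours / 6000 + rng.normal(0, 2, n), 0, None)
        X = np.column_stack([initial_tread, hours, consumed])
        y = np.clip(1 - consumed / initial_tread, 0, 1)  # residual service-time ratio

        net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                           random_state=0).fit(X, y)
        print(net.predict([[90.0, 2000.0, 30.0]]))  # ~0.67 expected for these inputs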

  13. Deep Learning for Real-Time Capable Object Detection and Localization on Mobile Platforms

    NASA Astrophysics Data System (ADS)

    Particke, F.; Kolbenschlag, R.; Hiller, M.; Patiño-Studencki, L.; Thielecke, J.

    2017-10-01

    Industry 4.0 is one of the most formative terms of the present time. A particular subject of research is smart, autonomous mobile platforms, which considerably lighten workloads and optimize production processes. In order to interact with humans, the platforms need in-depth knowledge of the environment. Hence, they are required to detect a variety of static and non-static objects. The goal of this paper is to propose an accurate and real-time capable object detection and localization approach for use on mobile platforms. A method is introduced to use the powerful detection capabilities of a neural network for the localization of objects. To this end, detection information from a neural network is combined with depth information from an RGB-D camera mounted on a mobile platform. As the detection network, YOLO Version 2 (YOLOv2) is used on a mobile robot. In order to find the detected object in the depth image, the bounding boxes predicted by YOLOv2 are mapped to the corresponding regions in the depth image. This provides a powerful and extremely fast approach for establishing a real-time-capable object locator. In the evaluation, the localization approach turns out to be very accurate. Nevertheless, it depends on the detected object itself and on some additional parameters, which are analysed in this paper.
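
    A minimal sketch of the localization step: read the depth pixels under a predicted bounding box, take a robust (median) depth, and back-project the box centre through the pinhole model. The camera intrinsics and the synthetic depth image are assumptions:

        # Object position in camera coordinates from a detector bbox + depth image.
        import numpy as np

        def localize(depth_img, bbox, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
            # bbox = (x_min, y_min, x_max, y_max) in pixels; depth_img in metres.
            x0, y0, x1, y1 = bbox
            patch = depth_img[y0:y1, x0:x1]
            z = np.median(patch[patch > 0])           # ignore missing depth pixels
            u, v = (x0 + x1) / 2.0, (y0 + y1) / 2.0   # bounding-box centre
            return ((u - cx) * z / fx, (v - cy) * z / fy, z)

        depth = np.full((480, 640), 2.0)              # flat 2 m scene as a stand-in
        print(localize(depth, (300, 200, 340, 280)))  # (x, y, z) in metres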

  14. Testing the applicability of a benthic foraminiferal-based transfer function for the reconstruction of paleowater depth changes in Rhodes (Greece) during the early Pleistocene.

    PubMed

    Milker, Yvonne; Weinkauf, Manuel F G; Titschack, Jürgen; Freiwald, Andre; Krüger, Stefan; Jorissen, Frans J; Schmiedl, Gerhard

    2017-01-01

    We present paleo-water depth reconstructions for the Pefka E section deposited on the island of Rhodes (Greece) during the early Pleistocene. For these reconstructions, a transfer function (TF) using modern benthic foraminifera surface samples from the Adriatic and Western Mediterranean Seas has been developed. The TF model gives an overall predictive accuracy of ~50 m over a water depth range of ~1200 m. Two separate TF models for shallower and deeper water depth ranges indicate a good predictive accuracy of 9 m for shallower water depths (0-200 m) but far less accuracy of 130 m for deeper water depths (200-1200 m) due to uneven sampling along the water depth gradient. To test the robustness of the TF, we randomly selected modern samples to develop random TFs, showing that the model is robust for water depths between 20 and 850 m while greater water depths are underestimated. We applied the TF to the Pefka E fossil data set. The goodness-of-fit statistics showed that most fossil samples have a poor to extremely poor fit to water depth. We interpret this as a consequence of a lack of modern analogues for the fossil samples and removed all samples with extremely poor fit. To test the robustness and significance of the reconstructions, we compared them to reconstructions from an alternative TF model based on the modern analogue technique and applied the randomization TF test. We found our estimates to be robust and significant at the 95% confidence level, but we also observed that our estimates are strongly overprinted by orbital, precession-driven changes in paleo-productivity and corrected our estimates by filtering out the precession-related component. We compared our corrected record to reconstructions based on a modified plankton/benthos (P/B) ratio, excluding infaunal species, and to stable oxygen isotope data from the same section, as well as to paleo-water depth estimates for the Lindos Bay Formation of other sediment sections of Rhodes. These comparisons indicate that our orbital-corrected reconstructions are reasonable and reflect major tectonic movements of Rhodes during the early Pleistocene.

  15. Catchment-scale snow depth monitoring with balloon photogrammetry

    NASA Astrophysics Data System (ADS)

    Durand, M. T.; Li, D.; Wigmore, O.; Vanderjagt, B. J.; Molotch, N. P.; Bales, R. C.

    2016-12-01

    Field campaigns and permanent in-situ facilities provide extensive measurements of snowpack properties at catchment (or smaller) scales, and have consistently improved our understanding of snow processes and the estimation of snow water resources. However, snow depth, one of the most important snow states, has been measured almost entirely with discrete point-scale samplings in field measurements; spatiotemporally continuous snow depth measurements are nearly nonexistent, mainly due to the high cost of airborne flights and the ban on Unmanned Aerial Systems in many areas (e.g. in all the national parks). In this study, we estimate spatially continuous snow depth from photogrammetric reconstruction of aerial photos taken from a weather balloon. The study was conducted in a 0.2 km2 watershed in Wolverton, Sequoia National Park, California. We tied a point-and-shoot camera to a helium-inflated weather balloon to take aerial images; the camera was scripted to automatically capture images every 3 seconds and to record the camera position and orientation at the imaging times using a built-in GPS. From the 2D images of the snow-covered ground and the camera position and orientation data, the 3D coordinates of the snow surface were reconstructed at 10 cm resolution using the photogrammetry software PhotoScan. Similar measurements were taken of the snow-free ground after snowmelt, and the snow depth was estimated from the difference between the snow-on and snow-off measurements. Comparing the photogrammetric snow depths with the 32 manually measured depths, taken at the same time as the snow-on balloon flight, we find the RMSE of the photogrammetric snow depth is 7 cm, which is 2% of the long-term peak snow depth in the study area. This study suggests that balloon photogrammetry is a repeatable, economical, simple, and environmentally friendly method for continuously monitoring snow at small scales. Spatiotemporally continuous snow depth could be regularly measured in future field campaigns to supplement traditional snow property observations. In addition, since the process of collecting and processing balloon photogrammetry data is straightforward, the photogrammetric snow depth could be shared with the public in real time using our cloud platform, which is currently under development.
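
    At its core the retrieval is a grid difference between the snow-on and snow-off surfaces; a toy sketch with invented DEM and probe values:

        # Snow depth as the difference of two photogrammetric surfaces.
        import numpy as np

        dem_snow_on  = np.array([[1203.42, 1203.55], [1203.61, 1203.70]])  # m a.s.l., assumed
        dem_snow_off = np.array([[1202.80, 1202.95], [1202.98, 1203.12]])  # m a.s.l., assumed

        snow_depth = dem_snow_on - dem_snow_off
        manual = np.array([[0.60, 0.58], [0.65, 0.55]])  # hypothetical probe depths (m)
        rmse = np.sqrt(np.mean((snow_depth - manual) ** 2))
        print(snow_depth, f"RMSE vs probes: {rmse:.3f} m")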

  16. Testing the applicability of a benthic foraminiferal-based transfer function for the reconstruction of paleowater depth changes in Rhodes (Greece) during the early Pleistocene

    PubMed Central

    Weinkauf, Manuel F. G.; Titschack, Jürgen; Freiwald, Andre; Krüger, Stefan; Jorissen, Frans J.; Schmiedl, Gerhard

    2017-01-01

    We present paleo-water depth reconstructions for the Pefka E section deposited on the island of Rhodes (Greece) during the early Pleistocene. For these reconstructions, a transfer function (TF) using modern benthic foraminifera surface samples from the Adriatic and Western Mediterranean Seas has been developed. The TF model gives an overall predictive accuracy of ~50 m over a water depth range of ~1200 m. Two separate TF models for shallower and deeper water depth ranges indicate a good predictive accuracy of 9 m for shallower water depths (0–200 m) but far less accuracy of 130 m for deeper water depths (200–1200 m) due to uneven sampling along the water depth gradient. To test the robustness of the TF, we randomly selected modern samples to develop random TFs, showing that the model is robust for water depths between 20 and 850 m while greater water depths are underestimated. We applied the TF to the Pefka E fossil data set. The goodness-of-fit statistics showed that most fossil samples have a poor to extremely poor fit to water depth. We interpret this as a consequence of a lack of modern analogues for the fossil samples and removed all samples with extremely poor fit. To test the robustness and significance of the reconstructions, we compared them to reconstructions from an alternative TF model based on the modern analogue technique and applied the randomization TF test. We found our estimates to be robust and significant at the 95% confidence level, but we also observed that our estimates are strongly overprinted by orbital, precession-driven changes in paleo-productivity and corrected our estimates by filtering out the precession-related component. We compared our corrected record to reconstructions based on a modified plankton/benthos (P/B) ratio, excluding infaunal species, and to stable oxygen isotope data from the same section, as well as to paleo-water depth estimates for the Lindos Bay Formation of other sediment sections of Rhodes. These comparisons indicate that our orbital-corrected reconstructions are reasonable and reflect major tectonic movements of Rhodes during the early Pleistocene. PMID:29166653

  17. Linkages of fracture network geometry and hydro-mechanical properties to spatio-temporal variations of seismicity in Koyna-Warna Seismic Zone

    NASA Astrophysics Data System (ADS)

    Selles, A.; Mikhailov, V. O.; Arora, K.; Ponomarev, A.; Gopinadh, D.; Smirnov, V.; Srinu, Y.; Satyavani, N.; Chadha, R. K.; Davulluri, S.; Rao, N. P.

    2017-12-01

    Well logging data and core samples from the deep boreholes in the Koyna-Warna Seismic Zone (KWSZ) provided a glimpse of the 3-D fracture network responsible for triggered earthquakes in the region. The space-time pattern of earthquakes during the last five decades shows a strong linkage between the seismicity and a favourably oriented fracture system deciphered from airborne LiDAR and borehole structural logging. We used SAR interferometry data on surface displacements to estimate the activity of the inferred faults. The failure of rocks at depth is largely governed by the overlying lithostatic pressure and the pore fluid pressure in the rock matrix, which are subject to change in space and time. While lithostatic pressure tends to increase with depth, pore pressure is prone to fluctuations due to any change in the hydrological regime. Based on the earthquake catalogue data, the seasonal variations in seismic activity associated with annual fluctuations in the reservoir water level were analyzed over the entire history of seismological observations in this region, and regularities in the temporal changes in the structure of the seasonal variations are revealed. An increase in pore fluid pressure can result in rock fracture, and oscillating pore fluid pressures due to reservoir loading and unloading cycles can cause iterative and cumulative damage, ultimately resulting in brittle failure under relatively low effective mean stress conditions. These regularities were verified by laboratory physical modeling. Based on our observations of the main trends of the spatio-temporal variations in seismicity, as well as the spatial distribution of the fracture network, a conceptual model is presented to explain the triggered earthquakes in the KWSZ. The work was supported under the joint Russian-Indian project of the Russian Science Foundation (RSF) and the Department of Science and Technology (DST) of India (RSF project no. 16-47-02003 and DST project INT/RUS/RSF/P-13).

  18. Use of microearthquakes in the study of the mechanics of earthquake generation along the San Andreas fault in central California

    USGS Publications Warehouse

    Eaton, J.P.; Lee, W.H.K.; Pakiser, L.C.

    1970-01-01

    A small, dense network of independently recording portable seismograph stations was used to delineate the slip surface associated with the 1966 Parkfield-Cholame earthquake by precise three-dimensional mapping of the hypocenters of its aftershocks. The aftershocks were concentrated in a very narrow vertical zone beneath or immediately adjacent to the zone of surface fracturing that accompanied the main shock. Focal depths ranged from less than 1 km to a maximum of 15 km. The same type of portable network was used to study microearthquakes associated with an actively creeping section of the San Andreas fault south of Hollister during the summer of 1967. Microearthquake activity during the 6-week operation of this network was dominated by aftershocks of a magnitude-4 earthquake that occurred within the network near Bear Valley on July 23. Most of the aftershocks were concentrated in an equidimensional region about 2 1/2 km across that contained the hypocenter of the main shock. The zone of the concentrated aftershocks was centered near the middle of the rift zone at a depth of about 3 1/2 km. Hypocenters of other aftershocks outlined a 25-km-long zone of activity beneath the actively creeping strand of the fault, extending from the surface to a depth of about 13 km. A continuing study of microearthquakes along the San Andreas, Hayward, and Calaveras faults between Hollister and San Francisco has been under way for about 2 years. The permanent telemetered network constructed for this purpose has grown from about 30 stations in early 1968 to about 45 stations in late 1969. Microearthquakes between Hollister and San Francisco are heavily concentrated in narrow, nearly vertical zones along sections of the Sargent, San Andreas, and Calaveras faults. Focal depths range from less than 1 km to about 14 km. © 1970.

  19. Estimation of the proteomic cancer co-expression sub networks by using association estimators

    PubMed Central

    Kurt, Zeyneb; Diri, Banu

    2017-01-01

    In this study, the association estimators, which have significant influence on gene network inference methods and are used for determining molecular interactions, were examined within the co-expression network inference concept. Using proteomic data from five different cancer types, the hub genes/proteins within the disease-associated gene-gene/protein-protein interaction sub networks were identified. Proteomic data from the various cancer types were collected from The Cancer Proteome Atlas (TCPA). Nine correlation- and mutual information (MI)-based association estimators that are commonly used in the literature were compared in this study. As the gold standard to measure the association estimators' performance, a multi-layer data integration platform on gene-disease associations (DisGeNET) and the Molecular Signatures Database (MSigDB) were used. Fisher's exact test was used to evaluate the performance of the association estimators by comparing the created co-expression networks with the disease-associated pathways. It was observed that the MI-based estimators provided more successful results than the Pearson and Spearman correlation approaches, which are used for the estimation of biological networks in the weighted correlation network analysis (WGCNA) package. In correlation-based methods, the best average success rate for the five cancer types was 60%, while in MI-based methods the average success ratio was 71% for the James-Stein Shrinkage (Shrink) and 64% for the Schurmann-Grassberger (SG) association estimator, respectively. Moreover, the hub genes and the inferred sub networks are presented for the consideration of researchers and experimentalists. PMID:29145449
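
    The contrast between the two estimator families can be illustrated with a nonlinear dependence that correlation misses but mutual information captures. The histogram estimate below is the plain maximum-likelihood MI; the shrinkage and SG estimators in the study are refinements of it:

        # Pearson correlation vs. a simple histogram estimate of mutual information.
        import numpy as np
        from scipy.stats import pearsonr

        def mi_hist(x, y, bins=8):
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()                      # joint distribution
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

        rng = np.random.default_rng(2)
        x = rng.normal(size=1000)
        y = x**2 + rng.normal(scale=0.3, size=1000)    # nonlinear dependence
        print(f"Pearson r = {pearsonr(x, y)[0]:.2f}, MI = {mi_hist(x, y):.2f} nats")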

  20. A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences

    PubMed Central

    Zhu, Youding; Fujimura, Kikuo

    2010-01-01

    This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty of recovering from tracking failure. Human body poses can be estimated through model fitting using dense correspondences between depth data and an articulated human model (the local optimization method). Although this usually achieves high accuracy due to the dense correspondences, it may fail to recover from tracking failure. Alternatively, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this key-point based method is robust and recovers from tracking failure, its pose estimation accuracy depends solely on the image-based localization accuracy of the key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by methods based on key-points and local optimization. Experimental results are shown and a performance comparison is presented to demonstrate the effectiveness of the proposed approach. PMID:22399933

  1. Population size estimation of men who have sex with men through the network scale-up method in Japan.

    PubMed

    Ezoe, Satoshi; Morooka, Takeo; Noda, Tatsuya; Sabin, Miriam Lewis; Koike, Soichi

    2012-01-01

    Men who have sex with men (MSM) are one of the groups most at risk for HIV infection in Japan. However, size estimates of MSM populations have not been conducted with sufficient frequency and rigor because of the difficulty, high cost and stigma associated with reaching such populations. This study examined an innovative and simple method for estimating the size of the MSM population in Japan. We combined an internet survey with the network scale-up method, a social network method for estimating the size of hard-to-reach populations, for the first time in Japan. An internet survey was conducted among 1,500 internet users who registered with a nationwide internet-research agency. The survey participants were asked how many members of particular groups with known population sizes (firepersons, police officers, and military personnel) they knew as acquaintances. The participants were also asked to identify the number of their acquaintances whom they understood to be MSM. Using these survey results with the network scale-up method, the personal network size and MSM population size were estimated. The personal network size was estimated to be 363.5 regardless of the sex of the acquaintances and 174.0 for only male acquaintances. The estimated MSM prevalence among the total male population in Japan was 0.0402% without adjustment, and 2.87% after adjusting for the transmission error of MSM. The estimated personal network size and MSM prevalence seen in this study were comparable to those from previous survey results based on the direct-estimation method. Estimating population sizes through combining an internet survey with the network scale-up method appeared to be an effective method from the perspectives of rapidity, simplicity, and low cost as compared with more-conventional methods.
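
    A minimal sketch of the scale-up arithmetic with illustrative (not the study's) figures: the personal network size is estimated from groups of known size, and the hidden-population prevalence then follows from the reported MSM acquaintances:

        # Network scale-up: c = N * sum(mean known contacts) / sum(group sizes),
        # then prevalence = mean reported hidden contacts / c. All numbers assumed.
        N = 100_000_000  # total male population, assumed
        known_sizes = {"fire": 160_000, "police": 290_000, "military": 230_000}
        mean_known  = {"fire": 0.6, "police": 1.0, "military": 0.9}  # per respondent

        c_hat = N * sum(mean_known.values()) / sum(known_sizes.values())
        mean_msm_reported = 0.07  # mean MSM acquaintances per respondent, assumed
        prevalence = mean_msm_reported / c_hat
        print(f"network size ~{c_hat:.0f}, MSM prevalence ~{100 * prevalence:.3f}%")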

  2. Third Quarter Hanford Seismic Report for Fiscal Year 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohay, Alan C.; Sweeney, Mark D.; Hartshorn, Donald C.

    2009-09-30

    The Hanford Seismic Assessment Program (HSAP) provides an uninterrupted collection of high-quality raw and processed seismic data from the Hanford Seismic Network for the U.S. Department of Energy and its contractors. The HSAP is responsible for locating and identifying sources of seismic activity and monitoring changes in the historical pattern of seismic activity at the Hanford Site. The data are compiled, archived, and published for use by the Hanford Site for waste management, natural phenomena hazards assessments, and engineering design and construction. In addition, the HSAP works with the Hanford Site Emergency Services Organization to provide assistance in the event of a significant earthquake on the Hanford Site. The Hanford Seismic Network and the Eastern Washington Regional Network consist of 44 individual sensor sites and 15 radio relay sites maintained by the Hanford Seismic Assessment Team. The Hanford Seismic Network recorded 771 local earthquakes during the third quarter of FY 2009. Nearly all of these earthquakes were detected in the vicinity of Wooded Island, located about eight miles north of Richland just west of the Columbia River. The Wooded Island events recorded this quarter are a continuation of the swarm events observed during the January-March 2009 time period and reported in the previous quarterly report (Rohay et al., 2009). The frequency of Wooded Island events has subsided, with 16 events recorded during June 2009. Most of the events were considered minor (magnitude (Mc) less than 1.0), with 25 events in the 2.0-3.0 range. The estimated depths of the Wooded Island events are shallow (averaging less than 1.0 km deep), with a maximum depth estimated at 2.2 km. This places the Wooded Island events within the Columbia River Basalt Group (CRBG). The low magnitude of the Wooded Island events has made them undetectable to all but local area residents. However, some Hanford employees working within a few miles of the area of highest activity and individuals living in homes directly across the Columbia River from the swarm center have reported feeling many of the larger-magnitude events. The Hanford Strong Motion Accelerometer (SMA) network was triggered numerous times by the Wooded Island swarm events. The maximum acceleration value recorded by the SMA network was approximately 3 times lower than the reportable action level for Hanford facilities (2% g) and no action was required. The swarming is likely due to pressure that has built up, cracking the brittle basalt layers within the Columbia River Basalt Group (CRBG). Similar earthquake “swarms” have been recorded near this same location in 1970, 1975 and 1988. Prior to the 1970s, swarming may have occurred, but equipment was not in place to record those events. Quakes of this limited magnitude do not pose a risk to Hanford cleanup efforts or waste storage facilities. Since swarms of the past did not intensify in magnitude, seismologists do not expect that these events will increase in intensity. However, Pacific Northwest National Laboratory (PNNL) will continue to monitor the activity.

  3. Spatial Topography of Individual-Specific Cortical Networks Predicts Human Cognition, Personality, and Emotion.

    PubMed

    Kong, Ru; Li, Jingwei; Orban, Csaba; Sabuncu, Mert R; Liu, Hesheng; Schaefer, Alexander; Sun, Nanbo; Zuo, Xi-Nian; Holmes, Avram J; Eickhoff, Simon B; Yeo, B T Thomas

    2018-06-06

    Resting-state functional magnetic resonance imaging (rs-fMRI) offers the opportunity to delineate individual-specific brain networks. A major question is whether individual-specific network topography (i.e., location and spatial arrangement) is behaviorally relevant. Here, we propose a multi-session hierarchical Bayesian model (MS-HBM) for estimating individual-specific cortical networks and investigate whether individual-specific network topography can predict human behavior. The multiple layers of the MS-HBM explicitly differentiate intra-subject (within-subject) from inter-subject (between-subject) network variability. By ignoring intra-subject variability, previous network mappings might confuse intra-subject variability for inter-subject differences. Compared with other approaches, MS-HBM parcellations generalized better to new rs-fMRI and task-fMRI data from the same subjects. More specifically, MS-HBM parcellations estimated from a single rs-fMRI session (10 min) showed comparable generalizability as parcellations estimated by 2 state-of-the-art methods using 5 sessions (50 min). We also showed that behavioral phenotypes across cognition, personality, and emotion could be predicted by individual-specific network topography with modest accuracy, comparable to previous reports predicting phenotypes based on connectivity strength. Network topography estimated by MS-HBM was more effective for behavioral prediction than network size, as well as network topography estimated by other parcellation approaches. Thus, similar to connectivity strength, individual-specific network topography might also serve as a fingerprint of human behavior.

  4. Description and properties of a resistive network applied to emission tomography detector readouts

    NASA Astrophysics Data System (ADS)

    Boisson, F.; Bekaert, V.; Sahr, J.; Brasse, D.

    2017-11-01

    Over the last twenty years, PET systems have used discrete crystal detector modules coupled to multi-channel photodetectors, mostly to improve the spatial resolution. Although reading each readout channel individually would be of great interest, the associated electronics costs would in most cases be too high. It is therefore essential to propose lower-cost solutions that do not degrade the overall system performance. One possible solution to reduce the development costs of a PET system without degrading performance is the use of a resistive network, which reduces the total number of readout channels. In this study, we present a symmetric charge division resistive network and associated software methods to assess the performance of a PET detector. Our approach consists of keeping the n line and n column signals provided by a symmetric charge division (SCD) circuit. We provide equations for the output currents of the network that enable estimation of the charge. We propose a novel approach to reconstruct the charge distribution from the line and column projections using a maximum-likelihood expectation maximization (MLEM) algorithm that takes the non-uniformity of the photodetector channel gains into account. We also introduce a mathematical proof of the relation between the sigma of the reconstructed charge distribution and the ratio between the line of interest (maximum value) and the background signal charges. To the best of our knowledge, this is the first study reporting these equations. Preliminary results obtained with a resistive network reading out a monolithic 50 × 50 × 8 mm³ LYSO crystal coupled to a H9500 PMT validated the effectiveness of the reconstructed charge distribution for optimizing both the x and y spatial resolution and the energy resolution. We obtained a mean x and y spatial resolution of 1.10 mm FWHM and a 14.7% energy resolution by calculating the integral of the reconstructed charge distribution. Finally, the relation between the ratio and the sigma of the reconstructed charge distribution may provide new opportunities in terms of depth-of-interaction estimation when using a monolithic crystal coupled to a multi-channel photodetector.
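
    A toy version of the MLEM step, reconstructing a charge distribution from its line and column sums under the simplifying assumption of uniform channel gains (the paper's algorithm additionally models the gain non-uniformity):

        # MLEM from row and column projections; each pixel feeds two measurements.
        import numpy as np

        def mlem(rows, cols, n_iter=200):
            q = np.ones((rows.size, cols.size))  # flat initial estimate
            for _ in range(n_iter):
                r_hat = q.sum(axis=1)            # forward projection onto rows
                c_hat = q.sum(axis=0)            # forward projection onto columns
                q *= 0.5 * (rows / r_hat)[:, None] + 0.5 * (cols / c_hat)[None, :]
            return q

        true = np.zeros((8, 8)); true[3, 5] = 100.0  # a single light spot
        rec = mlem(true.sum(axis=1), true.sum(axis=0))
        print(np.unravel_index(rec.argmax(), rec.shape))  # -> (3, 5)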

  5. Neurometric assessment of intraoperative anesthetic

    DOEpatents

    Kangas, L.J.; Keller, P.E.

    1998-07-07

    The present invention is a method and apparatus for collecting EEG data, reducing the EEG data into coefficients, and correlating those coefficients with a depth of unconsciousness or anesthetic depth, and which obtains a bounded first derivative of anesthetic depth to indicate trends. The present invention provides a developed artificial neural network based method capable of continuously analyzing EEG data to discriminate between awake and anesthetized states in an individual and continuously monitoring anesthetic depth trends in real-time. The present invention enables an anesthesiologist to respond immediately to changes in anesthetic depth of the patient during surgery and to administer the correct amount of anesthetic. 7 figs.

  6. Neurometric assessment of intraoperative anesthetic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kangas, L.J.; Keller, P.E.

    1998-07-07

    The present invention is a method and apparatus for collecting EEG data, reducing the EEG data into coefficients, and correlating those coefficients with a depth of unconsciousness or anesthetic depth, and which obtains a bounded first derivative of anesthetic depth to indicate trends. The present invention provides a developed artificial neural network based method capable of continuously analyzing EEG data to discriminate between awake and anesthetized states in an individual and continuously monitoring anesthetic depth trends in real-time. The present invention enables an anesthesiologist to respond immediately to changes in anesthetic depth of the patient during surgery and to administer the correct amount of anesthetic. 7 figs.

  7. Moving from theory to practice: A participatory social network mapping approach to address unmet need for family planning in Benin.

    PubMed

    Igras, Susan; Diakité, Mariam; Lundgren, Rebecka

    2017-07-01

    In West Africa, social factors influence whether couples with unmet need for family planning act on birth-spacing desires. Tékponon Jikuagou is testing a social network-based intervention to reduce social barriers by diffusing new ideas. Individuals and groups judged socially influential by their communities provide entrée to networks. A participatory social network mapping methodology was designed to identify these diffusion actors. Analysis of monitoring data, in-depth interviews, and evaluation reports assessed the methodology's acceptability to communities and staff and whether it produced valid, reliable data to identify influential individuals and groups who diffuse new ideas through their networks. Results indicated the methodology's acceptability. Communities were actively and equitably engaged. Staff appreciated its ability to yield timely, actionable information. The mapping methodology also provided valid and reliable information by enabling communities to identify highly connected and influential network actors. Consistent with social network theory, this methodology resulted in the selection of informal groups and individuals in both informal and formal positions. In-depth interview data suggest these actors were diffusing new ideas, further confirming their influence/connectivity. The participatory methodology generated insider knowledge of who has social influence, challenging commonly held assumptions. Collecting and displaying information fostered staff and community learning, laying groundwork for social change.

  7. A Typology to Explain Changing Social Networks Post Stroke.

    PubMed

    Northcott, Sarah; Hirani, Shashivadan P; Hilari, Katerina

    2018-05-08

    Social network typologies have been used to classify the general population but have not previously been applied to the stroke population. This study investigated whether social network types remain stable following a stroke and, if not, why some people shift network type. We used a mixed methods design. Participants were recruited from two acute stroke units. They completed the Stroke Social Network Scale (SSNS) two weeks and six months post stroke and in-depth interviews 8-15 months following the stroke. Qualitative data were analysed using Framework Analysis; k-means cluster analysis was applied to the six-month data set. Eighty-seven participants were recruited, 71 were followed up at six months, and 29 completed in-depth interviews. It was possible to classify all 29 participants into one of the following network types both pre- and post-stroke: diverse; friends-based; family-based; restricted-supported; restricted-unsupported. The main shift that took place post stroke was participants moving out of a diverse network into a family-based one. The friends-based network type was relatively stable. Two network types became more populated post stroke: restricted-unsupported and family-based. Triangulating evidence was provided by k-means cluster analysis, which produced a cluster solution (for n = 71) with characteristics comparable to the network types derived from the qualitative analysis. Following a stroke, a person's social network is vulnerable to change. Explanatory factors for shifting network type included the physical and psychological impact of having a stroke, as well as the tendency to lose contact with friends rather than family.

  8. Network structure and travel time perception.

    PubMed

    Parthasarathi, Pavithra; Levinson, David; Hochmair, Hartwig

    2013-01-01

    The purpose of this research is to test for systematic variation in the perception of travel time among travelers and to relate that variation to the underlying street network structure. Travel survey data from the Twin Cities metropolitan area (which includes the cities of Minneapolis and St. Paul) are used for the analysis. Travelers are classified into two groups based on the ratio of perceived to estimated commute travel time. The measures of network structure are estimated using the street network along the identified commute route. T-test comparisons are conducted to identify statistically significant differences in estimated network measures between the two traveler groups. The combined effect of these network measures on travel time is then analyzed using regression models. The results from the t-test and regression analyses confirm the influence of the underlying network structure on the perception of travel time.
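
    The group comparison described here is straightforward to reproduce in outline. The sketch below uses synthetic data and an assumed cutoff of 1.0 on the perceived-to-estimated ratio; the circuity variable stands in for whatever network structure measure is computed along the commute route.

```python
# Synthetic illustration of the group comparison: split travelers by the
# ratio of perceived to estimated commute time, then t-test a network
# measure between groups. The 1.0 cutoff and "circuity" are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
perceived = rng.uniform(10, 40, 200)   # perceived commute time (min), synthetic
estimated = rng.uniform(10, 40, 200)   # estimated commute time (min), synthetic
circuity = rng.uniform(1.0, 1.6, 200)  # a stand-in network structure measure

overestimators = perceived / estimated > 1.0
t_stat, p_value = stats.ttest_ind(circuity[overestimators],
                                  circuity[~overestimators])
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```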

  9. Rapid-estimation method for assessing scour at highway bridges

    USGS Publications Warehouse

    Holnbeck, Stephen R.

    1998-01-01

    A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than using complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply the method in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that conservatively greater scour depths generally were obtained by the rapid-estimation method compared to the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine those bridge sites that may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.
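
    The envelope-curve idea, taking a conservative upper bound on calculated scour depths as a function of an easily measured surrogate variable, can be sketched as follows. The binning scheme is an assumption for illustration, not the published procedure.

```python
# Upper envelope of Level 2 scour depths y over a binned surrogate variable x:
# the maximum depth within each bin bounds the scatter from above, giving a
# conservative estimate. The bin count is an assumed choice.
import numpy as np

def envelope_curve(x, y, n_bins=10):
    """Return bin centers and the maximum y per bin (the envelope)."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    centers, tops = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (x >= lo) & (x <= hi)
        if in_bin.any():
            centers.append(0.5 * (lo + hi))
            tops.append(y[in_bin].max())
    return np.array(centers), np.array(tops)

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 10.0, 122)           # surrogate field variable, synthetic
y = 0.4 * x + rng.uniform(0.0, 2.0, 122)  # calculated scour depth (m), synthetic
bin_centers, envelope = envelope_curve(x, y)
```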

  10. Oscillation of a Shallow Hydrothermal Fissure Inferred from Long-Period Seismic Events at Taal Volcano, the Philippines

    NASA Astrophysics Data System (ADS)

    Maeda, Y.; Kumagai, H.; Lacson, R.; Figueroa, M. S.; Yamashina, T.

    2012-12-01

    We installed a multi-parameter monitoring network including five broadband seismometers at Taal volcano, the Philippines, where a high risk of a near-future eruption is expected. The network detected more than 40,000 long-period (LP) seismic events, which have a peak frequency of 0.8 Hz and a Q value of 6. Most of the events occurred in a 2-month-long swarm period of ~600 events/day. Our travel time analysis pointed to a shallow source (100-200 m) beneath the northeastern flank of the active volcanic island. To determine the source mechanism of the LP events, we performed waveform inversion. We first fixed the source location to that obtained by the travel time analysis and performed inversions using waveforms with and without site amplification corrections, assuming four simple source geometries (a vertical crack, a horizontal crack, a vertical pipe, and a sphere). We obtained the minimum AIC value for the vertical crack geometry using the corrected waveforms. We next performed a grid search for the dip, azimuth, and location of the tensile crack source using the corrected waveforms. We obtained small residuals for crack dips between 30 and 60 degrees at locations similar to that of the travel time analysis. We used the fluid-filled crack model to interpret the observed complex frequencies of the events. The observed waveforms show a small Q value (= 6), which may be explained by bubbly basalt, bubbly water, or gas. However, since the source is estimated to be shallow (100-200 m) and we have no evidence of magma ascending to such a shallow depth during the swarm period, bubbly basalt seems unrealistic. It also seems difficult to maintain bubbly water in the inclined crack; moreover, for bubbly water a peak frequency variation would be expected due to variations in bubble content, whereas the observed peak frequencies are almost constant. The constant frequency is more easily realized by gas in a crack. We therefore examine H2O gas (vapor) for simplicity. We calculated far-field waveforms generated by the oscillation of a crack containing vapor and applied the Sompi method to estimate Q and the nondimensional frequency. The estimated Q of the fundamental longitudinal mode oscillation was similar to the observation. We obtained a reasonable crack size (188 m) from a comparison of the observed peak frequency (0.8 Hz) with the calculated nondimensional frequency of the mode. During the swarm period of the LP events, other anomalies such as large volcano deformation or a significant increase in gas emission from the main crater were not observed. This feature and the crack model result suggest an active and localized vapor supply from magma at depth to the LP source. Such a localized supply may be realized by transport of vapor through a fissure. If we assume that the estimated crack volume (10^2 m^3) corresponds to the vapor supplied to the LP source for each event, the total vapor mass supplied throughout the swarm period is ~10^7 kg. If we assume that this amount of vapor originated from degassing of the magma and was transported to the LP source through the fissure, we can estimate a magma volume of ~10^6 m^3. We thus suggest that the LP events at Taal were triggered by degassing and transport of vapor from deep magma to a shallow depth through a fissure.
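
    The closing mass balance can be checked with back-of-envelope arithmetic. In the sketch below, the crack volume and event count come from the abstract, while the vapor density, magma density, and degassed water fraction are assumed values chosen only to show how the quoted orders of magnitude (~10^7 kg of vapor, ~10^6 m^3 of magma) arise.

```python
# Back-of-envelope check of the closing mass balance. Crack volume and event
# count follow the abstract; the densities and degassed water fraction are
# assumed values used only to show how the orders of magnitude arise.
crack_volume = 1e2      # m^3 of vapor per LP event (from the abstract)
n_events = 4e4          # detected LP events (from the abstract)
vapor_density = 2.5     # kg/m^3, assumed for vapor at shallow depth
magma_density = 2500.0  # kg/m^3, assumed
water_fraction = 0.004  # assumed mass fraction of water degassed from magma

vapor_mass = crack_volume * n_events * vapor_density          # ~1e7 kg
magma_volume = vapor_mass / (water_fraction * magma_density)  # ~1e6 m^3
print(f"vapor mass ~{vapor_mass:.0e} kg, magma volume ~{magma_volume:.0e} m^3")
```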

  11. Monitoring waterbird abundance in wetlands: The importance of controlling results for variation in water depth

    USGS Publications Warehouse

    Bolduc, F.; Afton, A.D.

    2008-01-01

    Wetland use by waterbirds is highly dependent on water depth, and depth requirements generally vary among species. Furthermore, water depth within wetlands often varies greatly over time due to unpredictable hydrological events, making comparisons of waterbird abundance among wetlands difficult as effects of habitat variables and water depth are confounded. Species-specific relationships between bird abundance and water depth necessarily are non-linear; thus, we developed a methodology to correct waterbird abundance for variation in water depth, based on the non-parametric regression of these two variables. Accordingly, we used the difference between observed and predicted abundances from non-parametric regression (analogous to parametric residuals) as an estimate of bird abundance at equivalent water depths. We scaled this difference to levels of observed and predicted abundances using the formula: ((observed - predicted abundance)/(observed + predicted abundance)) × 100. This estimate also corresponds to the observed:predicted abundance ratio, which allows easy interpretation of results. We illustrated this methodology using two hypothetical species that differed in water depth and wetland preferences. Comparisons of wetlands, using both observed and relative corrected abundances, indicated that relative corrected abundance adequately separates the effect of water depth from the effect of wetlands. © 2008 Elsevier B.V.
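
    The correction can be sketched directly from the formula above. The example below uses synthetic data and LOWESS as one possible non-parametric smoother (the paper does not specify an implementation); the smoothing fraction is an assumed parameter.

```python
# Relative corrected abundance from the formula above, with LOWESS as one
# possible non-parametric smoother. Data are synthetic; frac is an assumed
# smoothing parameter.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(2)
depth = rng.uniform(0.0, 50.0, 300)  # water depth (cm), synthetic
abundance = 40.0 * np.exp(-((depth - 15.0) / 10.0) ** 2) + rng.poisson(2, 300)

predicted = lowess(abundance, depth, frac=0.5, return_sorted=False)
relative_corrected = ((abundance - predicted)
                      / (abundance + predicted)) * 100.0
```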

  12. Estimating post-fire organic soil depth in the Alaskan boreal forest using the Normalized Burn Ratio

    Treesearch

    D. Verbyla; R. Lord

    2008-01-01

    As part of a long-term moose browse/fire severity study, we used the Normalized Burn Ratio (NBR) with historic Landsat Thematic Mapper (TM) imagery to estimate fire severity from a 1983 wildfire in interior Alaska. Fire severity was estimated in the field by measuring the depth of the organic soil at 57 sites during the summer of 2006. Sites were selected for field...
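
    Although the abstract is truncated, the index it relies on is standard: for Landsat TM, NBR combines near-infrared (band 4) and shortwave-infrared (band 7) reflectance, and severity is often expressed as the pre- minus post-fire difference (dNBR). A minimal sketch with synthetic reflectance rasters:

```python
# Normalized Burn Ratio for Landsat TM: NBR = (NIR - SWIR) / (NIR + SWIR),
# with NIR = band 4 and SWIR = band 7; dNBR = pre-fire NBR - post-fire NBR.
# The arrays below stand in for calibrated reflectance rasters.
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio of two reflectance arrays."""
    return (nir - swir) / (nir + swir)

rng = np.random.default_rng(3)
nir_pre, swir_pre = rng.uniform(0.2, 0.5, (2, 100, 100))
nir_post, swir_post = rng.uniform(0.05, 0.4, (2, 100, 100))
dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)  # higher = more severe
```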

  13. Evaluation of uncertainty in field soil moisture estimations by cosmic-ray neutron sensing

    NASA Astrophysics Data System (ADS)

    Scheiffele, Lena Maria; Baroni, Gabriele; Schrön, Martin; Ingwersen, Joachim; Oswald, Sascha E.

    2017-04-01

    Cosmic-ray neutron sensing (CRNS) has developed into a valuable, indirect and non-invasive method to estimate soil moisture at the field scale. It provides continuous temporal data (hours to days), relatively large measurement depth (10-70 cm), and intermediate spatial scale measurements (hundreds of meters), thereby overcoming some of the limitations of point measurements (e.g., TDR/FDR) and of remote sensing products. All these characteristics make CRNS a favorable approach for soil moisture estimation, especially for applications in cropped fields and agricultural water management. Various studies compare CRNS measurements to soil sensor networks and show good agreement. However, CRNS is sensitive to other characteristics of the land surface, e.g. additional hydrogen pools, soil bulk density, and biomass. Prior to calibration, standard atmospheric corrections account for the effects of air pressure, humidity, and variations in incoming neutrons. In addition, the standard calibration approach was further extended to account for hydrogen in lattice water and soil organic material, and corrections were also proposed to account for water in biomass. Moreover, the sensitivity of the probe was found to decrease with distance, and a weighting procedure for the calibration datasets was introduced to account for the sensor's radial sensitivity. On the one hand, all these corrections were shown to improve the accuracy of estimated soil moisture values. On the other hand, they require substantial additional monitoring effort, and they may themselves contribute to the overall uncertainty of the CRNS product. In this study we aim (i) to quantify the uncertainty in field soil moisture estimated by CRNS and (ii) to understand the role of the different sources of uncertainty. To this end, two experimental sites in Germany were equipped with a CRNS probe and compared to values from a soil moisture network. The agricultural fields were cropped with winter wheat (Pforzheim, 2013) and maize (Braunschweig, 2014) and differ in soil type and management. The results confirm generally good agreement between soil moisture estimated by CRNS and by the soil moisture network. However, several sources of uncertainty were identified, i.e., overestimation under dry conditions, strong effects of the additional hydrogen pools, and an influence of the vertical soil moisture profile. On this basis, a global sensitivity analysis based on Monte Carlo sampling can be performed and evaluated in terms of soil moisture and footprint characteristics. The results allow quantifying the role of the different factors and identifying further improvements to the method.
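
    For context, one widely used CRNS calibration function takes the Desilets et al. (2010) form, with a pressure correction applied to the raw count rate beforehand. The sketch below uses the standard shape parameters; the calibration constant N0, the pressure attenuation coefficient, and the bulk density are assumed site values.

```python
# One widely used CRNS calibration function (Desilets et al., 2010 form):
# volumetric water content from a pressure-corrected neutron count rate and
# a site calibration constant N0. Shape parameters are the published
# standard; N0, beta, and bulk density here are assumed site values.
import numpy as np

A0, A1, A2 = 0.0808, 0.372, 0.115  # standard shape parameters

def correct_pressure(n_raw, pressure_hpa, p_ref=1013.25, beta=0.0076):
    """Scale raw counts to a reference air pressure (beta is site-assumed)."""
    return n_raw * np.exp(beta * (pressure_hpa - p_ref))

def soil_moisture(n_corrected, n0, bulk_density=1.4):
    """Volumetric soil moisture (m^3/m^3) from corrected counts."""
    gravimetric = A0 / (n_corrected / n0 - A1) - A2
    return gravimetric * bulk_density

theta = soil_moisture(correct_pressure(2400.0, 990.0), n0=3000.0)
```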

  14. Wrinkle Ridge Detachment Depth and Undetected Shortening at Solis Planum, Mars

    NASA Astrophysics Data System (ADS)

    Colton, S. L.; Smart, K. J.; Ferrill, D. A.

    2006-03-01

    Martian wrinkle ridges have estimated detachment depths of 0.25 to 60 km. Our alternative method for determining detachment depth reveals differences and has implications for the predominant scale of deformation at Solis Planum.

  15. Micro-Seismic Monitoring During Stimulation at Paralana-2 South Australia

    NASA Astrophysics Data System (ADS)

    Hasting, M. A.; Albaric, J.; Oye, V.; Reid, P.; Messeiller, M.; Llanos, E.

    2011-12-01

    In 2009 the Paralana JV drilled the Paralana-2 (P2) Enhanced Geothermal System (EGS) borehole east of the Flinders Range in South Australia. Drilling started on 30 Jun and reached a total depth of 4,003m (G.L. AHD) on 9 Nov. A 7-inch casing was set and cemented to a depth of 3,725m, and P2 was officially completed on 9 Dec 2009. On 2 Jan 2011 a six-meter zone was perforated between 3,679 and 3,685mRT. A stimulation of P2 was carried out on 3 Jan by injecting approximately 14,668l of fluid at pressures of up to 8.7kpsi and various rates of up to 2bpm. During the stimulation 125 micro-earthquakes (MEQ) were triggered in the formation. Most MEQ events occurred in an area about 100m wide and 220m deep at an average depth of 3,850m. The largest event, ML1.4, occurred after shut-in. Between 11 and 15 July 2011, the main fracture stimulation was carried out, with ~3M litres injected at pressures up to 9kpsi and rates up to 10bpm. Over 10,000 MEQ were detected by the seismic monitoring network, which consisted of 12 surface and 8 borehole stations with sensor depths of 40m, 200m and 1,800m. Four accelerometers were also installed to record ground motions near key facilities in the case of a larger seismic event. MEQ were automatically triggered and located in near-real-time with the MIMO software provided by NORSAR. A traffic light system was in operation, and none of the detected events came close to the threshold value. More than half of the detected events could be processed and located reliably in fully automatic mode. Selected MEQ events were manually picked on site to improve the location accuracy. A total of 1,875 events were located to form the final picture of the stimulated fracture. Results show that fracturing occurred in three swarms. The 1st swarm occurred near the well and deepened with time from 3.7km to over 4.1km. The 2nd swarm appeared a few days into the stimulation as a circular patch extending a few hundred meters east of the 1st. The 3rd swarm occurred after shut-in and extends downwards to the NNW to a depth of 4.4km. All events fall along an ENE-trending structure, steeply dipping to the NNW, with a total length of over 900m and a width of 400m to 600m. Assuming that the injected fluids went into opening new fractures, a volume change must occur. Using Brune's formula to estimate seismic moment and converting to ML, we estimated that a total of ML3.10 to 3.24 is required to accommodate the fluids. Summing the moments of the 1,875 located events yields an equivalent ML3.12. As such, most of the fluid must have gone into opening fractures, creating a new geothermal reservoir.
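
    The magnitude bookkeeping in the last step can be sketched as follows: convert each event magnitude to seismic moment, sum the moments, and convert the total back to a single equivalent magnitude. The sketch uses the Hanks-Kanamori relation and treats ML as if it were moment magnitude, which is an illustrative simplification; the event magnitudes are synthetic.

```python
# Convert each event magnitude to seismic moment, sum the moments, and
# convert back to one equivalent magnitude. Uses the Hanks-Kanamori
# relation and treats ML as moment magnitude (an illustrative
# simplification); the event magnitudes are synthetic.
import numpy as np

def moment_from_magnitude(m):
    return 10.0 ** (1.5 * m + 9.1)  # seismic moment in N*m

def magnitude_from_moment(m0):
    return (np.log10(m0) - 9.1) / 1.5

magnitudes = np.random.default_rng(4).normal(-0.5, 0.5, 1875)  # synthetic MEQs
equivalent = magnitude_from_moment(moment_from_magnitude(magnitudes).sum())
print(f"equivalent magnitude of the swarm: {equivalent:.2f}")
```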

  16. Nonlinear calibration for petroleum water content measurement using PSO

    NASA Astrophysics Data System (ADS)

    Li, Mingbao; Zhang, Jiawei

    2008-10-01

    A new algorithm for strapdown inertial navigation system (SINS) state estimation based on neural networks is introduced. In the training strategy, the error vector and its delayed values are introduced. This error vector consists of the position and velocity differences between the system estimates and the GPS outputs. After state prediction and state update, the states of the system are estimated. After off-line training, the network can approximate the state transitions of the SINS, and after on-line training the state estimation precision can be improved further by reducing network output errors. The convergence of the network is then discussed. Finally, several simulations with different noise levels are presented. The results show that the neural network state estimator has lower noise sensitivity and better noise immunity than a Kalman filter.
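
    A loose sketch of the idea in this abstract (the training setup is only outlined there, so everything below, including the synthetic drift model, window length, and network size, is assumed): train a small network to map a window of past SINS-minus-GPS error values to the current error, then subtract the prediction from the SINS output.

```python
# Assumed reading of the abstract: learn the SINS error from a window of
# past SINS-minus-GPS differences, then subtract the predicted error.
# The drift model, window length, and network size are all assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
t = np.arange(2000) * 0.1
drift = 0.002 * t**2 + rng.normal(0.0, 0.05, t.size)  # synthetic SINS error

window = 5
X = np.stack([drift[i - window:i] for i in range(window, t.size)])
y = drift[window:]

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X[:1500], y[:1500])
residual = y[1500:] - net.predict(X[1500:])  # error left after correction
print(f"RMS error after correction: {np.sqrt((residual**2).mean()):.3f}")
```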

  17. Discrete-time neural network for fast solving large linear L1 estimation problems and its application to image restoration.

    PubMed

    Xia, Youshen; Sun, Changyin; Zheng, Wei Xing

    2012-05-01

    There is growing interest in solving linear L1 estimation problems because of the sparsity of their solutions and their robustness against non-Gaussian noise. This paper proposes a discrete-time neural network which can solve large linear L1 estimation problems quickly. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. The proposed neural network is then efficiently applied to image restoration. Numerical results show that the proposed neural network is not only efficient in solving degenerate problems resulting from the nonunique solutions of linear L1 estimation problems but also requires much less computational time than related algorithms in solving both linear L1 estimation and image restoration problems.
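
    For reference, the underlying problem is min_x ||Ax - b||_1. The sketch below solves it with a generic linear-programming reformulation rather than the paper's neural network: introduce slack variables t with -t <= Ax - b <= t and minimize the sum of t.

```python
# Generic LP reformulation of min ||Ax - b||_1 (not the paper's network):
# introduce slack t with -t <= Ax - b <= t and minimize sum(t).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
A = rng.normal(size=(60, 10))
b = A @ rng.normal(size=10) + 0.1 * rng.standard_cauchy(60)  # heavy-tailed noise

m, n = A.shape
c = np.concatenate([np.zeros(n), np.ones(m)])         # objective: sum of t
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])  # Ax - b <= t, b - Ax <= t
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
x_l1 = res.x[:n]  # the L1 estimate
```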

  18. Grower Communication Networks: Information Sources for Organic Farmers

    ERIC Educational Resources Information Center

    Crawford, Chelsi; Grossman, Julie; Warren, Sarah T.; Cubbage, Fred

    2015-01-01

    This article reports on a study to determine which information sources organic growers use to inform farming practices by conducting in-depth semi-structured interviews with 23 organic farmers across 17 North Carolina counties. Effective information sources included: networking, agricultural organizations, universities, conferences, Extension, Web…

  19. Water-level data for the Albuquerque Basin and adjacent areas, central New Mexico, period of record through September 30, 2014

    USGS Publications Warehouse

    Beman, Joseph E.

    2015-10-21

    An initial network of wells was established by the U.S. Geological Survey (USGS) in cooperation with the City of Albuquerque from April 1982 through September 1983 to monitor changes in groundwater levels throughout the basin. This network consisted of 6 wells with analog-to-digital recorders and 27 wells where water levels were measured monthly in 1983. The network currently (2014) consists of 125 wells and piezometers. (A piezometer is a specialized well open to a specific depth in the aquifer, often of small diameter and nested with other piezometers open to different depths.) The USGS, in cooperation with the Albuquerque Bernalillo County Water Utility Authority, currently (2014) measures and reports water levels from the 125 wells and piezometers in the network; this report presents water-level data collected by USGS personnel at those 125 sites through water year 2014 (October 1, 2013, to September 30, 2014).
