The maximum economic depth of groundwater abstraction for irrigation
NASA Astrophysics Data System (ADS)
Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.
2017-12-01
Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of global food production, and its importance is expected to grow further in the near future. Already, about 70% of the water abstracted globally is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction exceeds recharge, and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped while remaining economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues still exceed pumping costs, or the maximum depth beyond which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model in which the costs of well drilling and the energy costs of pumping, which are functions of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries using GDP per capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000, and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas of the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. The most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of maximum economic depth will be combined with estimates of groundwater depth and storage coefficients to estimate economically attainable groundwater volumes worldwide.
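The cost-revenue comparison described above can be illustrated with a minimal sketch. All parameter values below (drilling cost per metre, energy price, pump efficiency, water demand, crop revenue, amortisation period, allowed investment-to-revenue ratio) are illustrative assumptions, not values taken from the study, and the static head is set equal to the well depth for simplicity.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the study)
drill_cost_per_m = 100.0     # USD per metre of well depth (capital cost)
energy_price = 0.10          # USD per kWh
pump_efficiency = 0.5        # overall pump and motor efficiency
water_demand = 8000.0        # gross irrigation demand, m3 per ha per year
crop_revenue = 2500.0        # USD per ha per year
amortisation_years = 20      # well lifetime used to annualise the capital cost
max_invest_ratio = 5.0       # initial investment must not exceed this many years of revenue
g, rho = 9.81, 1000.0        # gravity (m/s2) and water density (kg/m3)

def annual_cost(head_depth_m, well_depth_m):
    """Yearly pumping energy cost plus annualised drilling cost, per hectare."""
    lift_energy_j = rho * g * head_depth_m * water_demand / pump_efficiency
    energy_cost = lift_energy_j / 3.6e6 * energy_price          # J -> kWh -> USD
    capital_cost = drill_cost_per_m * well_depth_m / amortisation_years
    return energy_cost + capital_cost

def max_economic_depth():
    """Deepest depth at which revenue still exceeds cost and the initial
    investment stays below the allowed multiple of yearly revenue."""
    for depth in np.arange(10.0, 2000.0, 10.0):
        cost_ok = annual_cost(depth, depth) <= crop_revenue
        invest_ok = drill_cost_per_m * depth <= max_invest_ratio * crop_revenue
        if not (cost_ok and invest_ok):
            return depth - 10.0
    return 2000.0

print(f"maximum economic depth ~ {max_economic_depth():.0f} m (illustrative)")
```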
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pflugrath, Brett D.; Brown, Richard S.; Carlson, Thomas J.
This study investigated the maximum depth at which juvenile Chinook salmon Oncorhynchus tshawytscha can acclimate by attaining neutral buoyancy. Depth of neutral buoyancy is dependent upon the volume of gas within the swim bladder, which greatly influences the occurrence of injuries to fish passing through hydroturbines. We used two methods to obtain maximum swim bladder volumes that were transformed into depth estimates - the increased excess mass test (IEMT) and the swim bladder rupture test (SBRT). In the IEMT, weights were surgically added to the fish's exterior, requiring the fish to increase swim bladder volume in order to remain neutrally buoyant. The SBRT entailed removing and artificially increasing swim bladder volume through decompression. From these tests, we estimate that the maximum acclimation depth for juvenile Chinook salmon has a median of 6.7 m (range = 4.6-11.6 m). These findings have important implications for survival estimates, studies using tags, hydropower operations, and the survival of juvenile salmon that pass through large Kaplan turbines typical of those found within the Columbia and Snake River hydropower system.
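The conversion from a maximum attainable swim bladder volume to a maximum acclimation depth can be sketched with Boyle's law; the formula and the sample volumes below are a plausible reconstruction under that assumption, not the authors' exact procedure.

```python
# Hedged sketch: gas secreted at the surface up to the maximum bladder volume
# V_max compresses with depth; the deepest acclimation depth is where the
# compressed volume still equals the neutral-buoyancy volume V_neutral.
P_ATM_M = 10.3   # one atmosphere expressed as metres of fresh water

def max_acclimation_depth(v_max_ml, v_neutral_ml):
    """Depth (m) at which the maximum gas content still yields neutral buoyancy."""
    return (v_max_ml / v_neutral_ml - 1.0) * P_ATM_M

# Hypothetical volumes chosen so the result lands near the reported 6.7 m median
print(max_acclimation_depth(v_max_ml=8.4, v_neutral_ml=5.0))
```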
Blakely, Richard J.
1981-01-01
Estimations of the depth to magnetic sources using the power spectrum of magnetic anomalies generally require long magnetic profiles. The method developed here uses the maximum entropy power spectrum (MEPS) to calculate depth to source on short windows of magnetic data; resolution is thereby improved. The method operates by dividing a profile into overlapping windows, calculating a maximum entropy power spectrum for each window, linearizing the spectra, and calculating with least squares the various depth estimates. The assumptions of the method are that the source is two dimensional and that the intensity of magnetization includes random noise; knowledge of the direction of magnetization is not required. The method is applied to synthetic data and to observed marine anomalies over the Peru-Chile Trench. The analyses indicate a continuous magnetic basement extending from the eastern margin of the Nazca plate and into the subduction zone. The computed basement depths agree with acoustic basement seaward of the trench axis, but deepen as the plate approaches the inner trench wall. This apparent increase in the computed depths may result from the deterioration of magnetization in the upper part of the ocean crust, possibly caused by compressional disruption of the basaltic layer. Landward of the trench axis, the depth estimates indicate possible thrusting of the oceanic material into the lower slope of the continental margin.
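A hedged sketch of the windowed spectral depth estimate: for brevity an ordinary periodogram stands in for the maximum entropy spectrum, and the depth follows from the standard result that, for a random magnetization, the ensemble power spectrum decays as exp(-2|k|z) with angular wavenumber k. Window length, step, and fitting band are assumptions.

```python
import numpy as np

def spectral_depth(profile, dx, kmin=None, kmax=None):
    """Depth to magnetic source from the slope of the log power spectrum.
    Sketch only: a periodogram replaces the maximum entropy spectrum used in
    the paper; depth = -slope/2 of ln P(k) versus angular wavenumber k."""
    spec = np.abs(np.fft.rfft(profile - np.mean(profile)))**2 + 1e-30
    k = 2.0 * np.pi * np.fft.rfftfreq(len(profile), d=dx)   # rad per unit length
    mask = k > 0
    if kmin is not None:
        mask &= k >= kmin
    if kmax is not None:
        mask &= k <= kmax
    slope, _ = np.polyfit(k[mask], np.log(spec[mask]), 1)
    return -slope / 2.0

def windowed_depths(profile, dx, win=64, step=16):
    """Slide short overlapping windows along the profile, as in the paper."""
    return [spectral_depth(profile[i:i + win], dx)
            for i in range(0, len(profile) - win + 1, step)]
```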
Estimating maximum depth distribution of seagrass using underwater videography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norris, J.G.; Wyllie-Echeverria, S.
1997-06-01
The maximum depth distribution of eelgrass (Zostera marina) beds in Willapa Bay, Washington appears to be limited by light penetration, which is likely related to water turbidity. Using underwater videographic techniques we estimated that the maximum depth penetration in the less turbid outer bay was -5.85 ft (MLLW) and in the more turbid inner bay was only -1.59 ft (MLLW). Eelgrass beds had well defined deepwater edges and no eelgrass was observed in the deep channels of the bay. The results from this study suggest that aerial photographs taken during low tide periods are capable of recording the majority of eelgrass beds in Willapa Bay.
Rock Cutting Depth Model Based on Kinetic Energy of Abrasive Waterjet
NASA Astrophysics Data System (ADS)
Oh, Tae-Min; Cho, Gye-Chun
2016-03-01
Abrasive waterjets are widely used in the fields of civil and mechanical engineering for cutting a great variety of hard materials including rocks, metals, and other materials. Cutting depth is an important index to estimate operating time and cost, but it is very difficult to predict because there are a number of influential variables (e.g., energy, geometry, material, and nozzle system parameters). In this study, the cutting depth is correlated to the maximum kinetic energy expressed in terms of energy (i.e., water pressure, water flow rate, abrasive feed rate, and traverse speed), geometry (i.e., standoff distance), material (i.e., α and β), and nozzle system parameters (i.e., nozzle size, shape, and jet diffusion level). The maximum kinetic energy cutting depth model is verified with experimental test data that are obtained using one type of hard granite specimen for various parameters. The results show a unique curve for a specific rock type in a power function between cutting depth and maximum kinetic energy. The cutting depth model developed here can be very useful for estimating the process time when cutting rock using an abrasive waterjet.
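The reported power-function relation between cutting depth and maximum kinetic energy can be recovered with a log-log least-squares fit; the measurement values below are hypothetical stand-ins for the granite test data.

```python
import numpy as np

def fit_power_law(energy, depth):
    """Fit depth = a * E**b by least squares in log-log space."""
    b, log_a = np.polyfit(np.log(energy), np.log(depth), 1)
    return np.exp(log_a), b

E = np.array([5.0, 10.0, 20.0, 40.0, 80.0])    # maximum kinetic energy (hypothetical units)
d = np.array([12.0, 19.0, 31.0, 50.0, 78.0])   # measured cutting depth (hypothetical, mm)
a, b = fit_power_law(E, d)
print(f"cutting depth ~ {a:.2f} * E^{b:.2f}")
```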
A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination.
Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A
2018-02-08
Existing remote radioactive contamination depth estimation methods for buried radioactive wastes are either limited to less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. This severely limits their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method that is based on an approximate three-dimensional linear attenuation model that exploits the benefits of using multiple measurements obtained with a radiation detector from the surface of the material in which the contamination is buried. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, results from experiments show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations in the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods.
Determination of the maximum-depth to potential field sources by a maximum structural index method
NASA Astrophysics Data System (ADS)
Fedi, M.; Florio, G.
2013-01-01
A simple and fast determination of the limiting depth to the sources can significantly aid data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources we work out a simple and fast strategy to obtain the maximum depth by using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.
A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector.
Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A
2018-05-18
This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method.
Scour at bridge sites in Delaware, Maryland, and Virginia
Hayes, Donald C.
1996-01-01
Scour data were obtained from discharge measurements to develop and evaluate the reliability of constriction-scour and local-scour equations for rivers in Delaware, Maryland, and Virginia. No independent constriction-scour or local-scour equations were developed from the data because no significant relation was determined between measured scour and streamflow, streambed, and bridge characteristics. Two existing equations were evaluated for prediction of constriction scour and 14 existing equations were evaluated for prediction of local scour. Constriction-scour data were obtained from historical stream discharge measurements, field surveys, and bridge plans at nine bridge sites in the three-State area. Constriction scour was computed by subtracting the average-streambed elevation in the constricted reach from an uncontracted-channel reference elevation. Hydraulic conditions were estimated for the measurements with the greatest discharges by use of the Water-Surface Profile computation model. Measured and calculated constriction-scour data were used to evaluate the reliability of Laursen's clear-water constriction-scour equation and Laursen's live-bed constriction-scour equation. Laursen's clear-water constriction-scour equation underestimated 21 of 23 scour measurements made at three sites. A sensitivity analysis showed that the equation is extremely sensitive to estimates of the channel-bottom width. Reduction in estimates of bottom width by one-third resulted in predictions of constriction scour slightly greater than measured values for all scour measurements. Laursen's live-bed constriction-scour equation underestimated 10 of 14 scour measurements made at one site. The error between measured and predicted constriction scour was less than 1.0 ft (feet) for 12 measurements and less than 0.5 ft for 8 measurements. Local-scour data were obtained from stream discharge measurements, field surveys, and bridge plans at 15 bridge sites in the three-State area. The reliability of 14 local-scour equations was evaluated. From visual inspection of the plotted data, the Colorado State University, Froehlich design, Laursen, and Mississippi pier-scour equations appeared to be the best predictors of local scour. The Colorado State University equation underestimated 11 scour depths in clear-water scour conditions by a maximum of 2.4 ft, and underestimated 3 scour depths in live-bed scour conditions by a maximum of 1.3 ft. The Froehlich design equation underestimated two scour depths in clear-water scour conditions by a maximum of 1.2 ft, and underestimated one scour depth in live-bed scour conditions by a maximum of 0.4 ft. Laursen's equation overestimated the maximum scour depth in clear-water scour conditions by approximately one-half pier width, or approximately 1.5 ft, and overestimated the maximum scour depth in live-bed scour conditions by approximately one pier width, or approximately 3 ft. The Mississippi equation underestimated six scour depths in clear-water scour conditions by a maximum of 1.2 ft, and underestimated one scour depth in live-bed scour conditions by 1.6 ft. In both clear-water and live-bed scour conditions, the upper limit for the ratio of scour depth to pier width for all local scour measurements was 2.1. An accurate pier-approach velocity is necessary to use many local pier-scour equations for bridge design.
Velocity data from all the discharge measurements reviewed for this investigation were used to develop a design curve to estimate pier-approach velocity from mean cross-sectional velocity. A least-squares regression and offset were used to envelop the velocity data.
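A hedged sketch of the "least-squares regression and offset" envelope described above: fit a straight line to the pier-approach versus mean cross-sectional velocities, then shift it upward by the largest positive residual so the design curve bounds every observation. The velocity pairs are hypothetical.

```python
import numpy as np

def envelope_design_curve(v_mean, v_pier):
    """Least-squares line shifted upward so it envelops all pier-approach velocities."""
    a, b = np.polyfit(v_mean, v_pier, 1)
    offset = np.max(v_pier - (a * v_mean + b))   # largest positive residual
    return a, b + offset

v_mean = np.array([1.2, 2.0, 2.8, 3.5, 4.1])   # mean cross-sectional velocity (ft/s, hypothetical)
v_pier = np.array([1.5, 2.6, 3.3, 4.5, 4.9])   # pier-approach velocity (ft/s, hypothetical)
a, b = envelope_design_curve(v_mean, v_pier)
print(f"design curve: v_pier = {a:.2f} * v_mean + {b:.2f}")
```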
NASA Astrophysics Data System (ADS)
Hayden, T. G.; Kominz, M. A.; Magens, D.; Niessen, F.
2009-12-01
We have estimated ice thicknesses at the AND-1B core site during the Last Glacial Maximum by adapting an existing technique to calculate overburden. As ice thickness at the Last Glacial Maximum is unknown in existing ice sheet reconstructions, this analysis provides a constraint on model predictions. We analyze the porosity as a function of depth and lithology from measurements taken on the AND-1B core, and compare these results to a global dataset of marine, normally compacted sediments compiled from various legs of ODP and IODP. Using this dataset we are able to estimate the amount of overburden required to compact the sediments to the porosity observed in AND-1B. This analysis is a function of lithology, depth and porosity, and generates estimates ranging from zero to 1,000 meters. These overburden estimates are based on individual lithologies, and are translated into ice thickness estimates by accounting for both sediment and ice densities. To do this we use the simple relationship Xover × (ρsed/ρice) = Xice, where Xover is the overburden thickness, ρsed is the sediment density (calculated from lithology and porosity), ρice is the density of glacial ice (taken as 0.85 g/cm3), and Xice is the equivalent ice thickness. The final estimates vary considerably; however, the “Best Estimate” behavior of the two lithologies most likely to compact consistently is remarkably similar. These lithologies are the clay and silt units (Facies 2a/2b) and the diatomite units (Facies 1a) of AND-1B. Both lithologies produce best estimates of approximately 1,000 meters of ice during the Last Glacial Maximum. Additionally, while there is a large range of possible values, no combination of reasonable lithology, compaction, sediment density, or ice density values results in an estimate exceeding 1,900 meters of ice. This analysis only applies to ice thicknesses during the Last Glacial Maximum, due to the overprinting effect of the Last Glacial Maximum on previous ice advances. Analysis of the AND-2A core is underway, and results will be compared to those of AND-1B.
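The overburden-to-ice conversion quoted above reduces to a one-line calculation; the sample overburden and sediment density are illustrative, with the ice density taken from the abstract.

```python
def ice_thickness(overburden_m, rho_sed, rho_ice=0.85):
    """X_ice = X_over * (rho_sed / rho_ice), densities in g/cm^3."""
    return overburden_m * rho_sed / rho_ice

# e.g. 425 m of equivalent sediment overburden with a bulk density of 2.0 g/cm^3
print(ice_thickness(425.0, rho_sed=2.0))   # ~1000 m of ice
```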
NASA Astrophysics Data System (ADS)
Dondurur, Derman
2005-11-01
The Normalized Full Gradient (NFG) method was proposed in the mid 1960s and has generally been used for the downward continuation of potential field data. The method eliminates the side oscillations that appear on the continuation curves when passing through the depth of the anomalous body. In this study, the NFG method was applied to Slingram electromagnetic anomalies to obtain the depth of the anomalous body. Some experiments were performed on theoretical Slingram model anomalies in a free-space environment using a perfectly conductive thin tabular conductor with an infinite depth extent. The theoretical Slingram responses were obtained for different depths, dip angles and coil separations, and it was observed from the NFG fields of the theoretical anomalies that the NFG sections yield the depth information of the top of the conductor at low harmonic numbers. The NFG sections consisted of two main local maxima located at both sides of the central negative Slingram anomalies. It is concluded that these two maxima also locate the maximum anomaly gradient points, which indicate the depth of the anomaly target directly. For both theoretical and field data, the depth of the maximum value on the NFG sections corresponds to the depth of the upper edge of the anomalous conductor. The NFG method was applied to the in-phase component and correct depth estimates were obtained even for the horizontal tabular conductor. Depth values could be estimated with a relatively small error percentage when the conductive model was near-vertical and/or the conductor depth was larger.
Modeling an exhumed basin: A method for estimating eroded overburden
Poelchau, H.S.
2001-01-01
The Alberta Deep Basin in western Canada has undergone a large amount of erosion following deep burial in the Eocene. Basin modeling and simulation of burial and temperature history require estimates of maximum overburden for each gridpoint in the basin model. Erosion can be estimated using shale compaction trends. For instance, the widely used Magara method attempts to establish a sonic log gradient for shales and uses the extrapolation to a theoretical uncompacted shale value as a first indication of overcompaction and an estimate of the amount of erosion. Because such gradients are difficult to establish in many wells, an extension of this method was devised to help map erosion over a large area. Sonic transit-time (Δt) values of one suitable shale formation are calibrated with maximum-depth-of-burial estimates from sonic log extrapolation for several wells. The resulting regression equation can then be used to estimate and map maximum depth of burial or amount of erosion for all wells in which this formation has been logged. The example from the Alberta Deep Basin shows that the magnitude of erosion calculated by this method is conservative and comparable to independent estimates using vitrinite reflectance gradient methods. © 2001 International Association for Mathematical Geology.
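A sketch of the calibration workflow described above: regress maximum burial depth (from sonic-log extrapolation in a few wells) on the marker shale's sonic transit time, then apply the regression basin-wide so that erosion is the regressed maximum burial depth minus the present burial depth. All numbers are hypothetical.

```python
import numpy as np

# Calibration wells: sonic transit time of the marker shale (us/ft) versus the
# maximum burial depth (m) obtained from sonic-log extrapolation (hypothetical).
dt_cal = np.array([78.0, 84.0, 90.0, 97.0, 105.0])
zmax_cal = np.array([3400.0, 3050.0, 2700.0, 2350.0, 1950.0])
slope, intercept = np.polyfit(dt_cal, zmax_cal, 1)

def eroded_thickness(dt_shale, present_depth_m):
    """Estimated erosion = regressed maximum burial depth - present burial depth."""
    z_max = slope * dt_shale + intercept
    return max(z_max - present_depth_m, 0.0)

print(eroded_thickness(dt_shale=88.0, present_depth_m=1800.0))
```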
NASA Astrophysics Data System (ADS)
Palevsky, Hilary I.; Doney, Scott C.
2018-05-01
Estimated rates and efficiency of ocean carbon export flux are sensitive to differences in the depth horizons used to define export, which often vary across methodological approaches. We evaluate sinking particulate organic carbon (POC) flux rates and efficiency (e-ratios) in a global earth system model, using a range of commonly used depth horizons: the seasonal mixed layer depth, the particle compensation depth, the base of the euphotic zone, a fixed depth horizon of 100 m, and the maximum annual mixed layer depth. Within this single dynamically consistent model framework, global POC flux rates vary by 30% and global e-ratios by 21% across different depth horizon choices. Zonal variability in POC flux and e-ratio also depends on the export depth horizon due to pronounced influence of deep winter mixing in subpolar regions. Efforts to reconcile conflicting estimates of export need to account for these systematic discrepancies created by differing depth horizon choices.
Abundance of adult saugers across the Wind River watershed, Wyoming
Amadio, C.J.; Hubert, W.A.; Johnson, K.; Oberlie, D.; Dufek, D.
2006-01-01
The abundance of adult saugers Sander canadensis was estimated over 179 km of continuous lotic habitat across a watershed on the western periphery of their natural distribution in Wyoming. Three-pass depletions with raft-mounted electrofishing gear were conducted in 283 pools and runs among 19 representative reaches totaling 51 km during the late summer and fall of 2002. From 2 to 239 saugers were estimated to occur among the 19 reaches of 1.6-3.8 km in length. The estimates were extrapolated to a total population estimate (mean ± 95% confidence interval) of 4,115 ± 308 adult saugers over 179 km of lotic habitat. Substantial variation in mean density (range = 1.0-32.5 fish/ha) and mean biomass (range = 0.5-16.8 kg/ha) of adult saugers in pools and runs was observed among the study reaches. Mean density and biomass were highest in river reaches with pools and runs that had maximum depths of more than 1 m, mean daily summer water temperatures exceeding 20°C, and alkalinity exceeding 130 mg/L. No saugers were captured in the 39 pools or runs with maximum water depths of 0.6 m or less. Multiple-regression analysis and the information-theoretic approach were used to identify watershed-scale and instream habitat features accounting for the variation in biomass among the 244 pools and runs across the watershed with maximum depths greater than 0.6 m. Sauger biomass was greater in pools than in runs and increased as mean daily summer water temperature, maximum depth, and mean summer alkalinity increased and as dominant substrate size decreased. This study provides an estimate of adult sauger abundance and identifies habitat features associated with variation in their density and biomass across a watershed, factors important to the management of both populations and habitat. © Copyright by the American Fisheries Society 2006.
Estimation of River Bathymetry from ATI-SAR Data
NASA Astrophysics Data System (ADS)
Almeida, T. G.; Walker, D. T.; Farquharson, G.
2013-12-01
A framework for estimation of river bathymetry from surface velocity observation data is presented using variational inverse modeling applied to the 2D depth-averaged, shallow-water equations (SWEs) including bottom friction. We start with a cost function defined by the error between observed and estimated surface velocities, and introduce the SWEs as a constraint on the velocity field. The constrained minimization problem is converted to an unconstrained minimization through the use of Lagrange multipliers, and an adjoint SWE model is developed. The adjoint model solution is used to calculate the gradient of the cost function with respect to river bathymetry. The gradient is used in a descent algorithm to determine the bathymetry that yields a surface velocity field that is a best fit to the observational data. In applying the algorithm, the 2D depth-averaged flow is computed assuming a known, constant discharge rate and a known, uniform bottom-friction coefficient; a correlation relating surface velocity and depth-averaged velocity is also used. Observation data were collected using a dual-beam squinted along-track-interferometric synthetic-aperture radar (ATI-SAR) system, which provides two independent components of the surface velocity, oriented roughly 30 degrees fore and aft of broadside, offering high-resolution bank-to-bank velocity vector coverage of the river. Data and bathymetry estimation results are presented for two rivers, the Snohomish River near Everett, WA and the upper Sacramento River, north of Colusa, CA. The algorithm results are compared to available measured bathymetry data, with favorable results. General trends show that the water-depth estimates are most accurate in shallow regions, and performance is sensitive to the accuracy of the specified discharge rate and bottom friction coefficient. The results also indicate that, for a given reach, the estimated water depth reaches a maximum that is smaller than the true depth; this apparent maximum depth scales with the true river depth and discharge rate, so that the deepest parts of the river show the largest bathymetry errors.
Burns, W. Matthew; Hayba, Daniel O.; Rowan, Elisabeth L.; Houseknecht, David W.
2007-01-01
The reconstruction of burial and thermal histories of partially exhumed basins requires an estimation of the amount of erosion that has occurred since the time of maximum burial. We have developed a method for estimating eroded thickness by using porosity-depth trends derived from borehole sonic logs of wells in the Colville Basin of northern Alaska. Porosity-depth functions defined from sonic-porosity logs in wells drilled in minimally eroded parts of the basin provide a baseline for comparison with the porosity-depth trends observed in other wells across the basin. Calculated porosities, based on porosity-depth functions, were fitted to the observed data in each well by varying the amount of section assumed to have been eroded from the top of the sedimentary column. The result is an estimate of denudation at the wellsite since the time of maximum sediment accumulation. Alternative methods of estimating exhumation include fission-track analysis and projection of trendlines through vitrinite-reflectance profiles. In the Colville Basin, the methodology described here provides results generally similar to those from fission-track analysis and vitrinite-reflectance profiles, but with greatly improved spatial resolution relative to the published fission-track data and with improved reliability relative to the vitrinite-reflectance data. In addition, the exhumation estimates derived from sonic-porosity logs are independent of the thermal evolution of the basin, allowing these estimates to be used as independent variables in thermal-history modeling.
Estimate of Boundary-Layer Depth Over Beijing, China, Using Doppler Lidar Data During SURF-2015
NASA Astrophysics Data System (ADS)
Huang, Meng; Gao, Zhiqiu; Miao, Shiguang; Chen, Fei; LeMone, Margaret A.; Li, Ju; Hu, Fei; Wang, Linlin
2017-03-01
Planetary boundary-layer (PBL) structure was investigated using observations from a Doppler lidar and the 325-m Institute of Atmospheric Physics (IAP) meteorological tower in the centre of Beijing during the summer 2015 Study of Urban-impacts on Rainfall and Fog/haze (SURF-2015) field campaign. Using six fair-weather days of lidar and tower data under clear to cloudy skies, we evaluate the ability of the Doppler lidar to probe the urban boundary-layer structure, and then propose a composite method for estimating the diurnal cycle of the PBL depth using the Doppler lidar. For the convective boundary layer (CBL), a threshold method using vertical velocity variance (σ_w² > 0.1 m² s⁻²) is used, since it provides more reliable CBL depths than a conventional maximum wind-shear method. The nocturnal boundary-layer (NBL) depth is defined as the height at which σ_w² decreases to 10% of its near-surface maximum minus a background variance. The PBL depths determined by combining these methods have average values ranging from ≈ 270 to ≈ 1500 m for the six days, with the greatest maximum depths associated with clear skies. Release of stored and anthropogenic heat contributes to the maintenance of turbulence until late evening, keeping the NBL near-neutral and deeper at night than would be expected over a natural surface. The NBL typically becomes more shallow with time, but grows in the presence of low-level nocturnal jets. While current results are promising, data over a broader range of conditions are needed to fully develop our PBL-depth algorithms.
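A minimal sketch of the two variance-threshold rules quoted above, applied to a profile of lidar vertical-velocity variance; the profile itself is synthetic.

```python
import numpy as np

def cbl_depth(z, sigw2, threshold=0.1):
    """Convective boundary-layer depth: lowest height where sigma_w^2 < threshold (m^2/s^2)."""
    below = np.where(sigw2 < threshold)[0]
    return z[below[0]] if below.size else z[-1]

def nbl_depth(z, sigw2, background=0.0):
    """Nocturnal boundary-layer depth: height where sigma_w^2 falls to 10% of its
    near-surface maximum minus a background variance."""
    target = 0.1 * (np.max(sigw2[:3]) - background) + background
    below = np.where(sigw2 <= target)[0]
    return z[below[0]] if below.size else z[-1]

z = np.arange(50.0, 2050.0, 50.0)        # range-gate heights (m)
sigw2 = 0.6 * np.exp(-z / 500.0)         # synthetic variance profile (m^2/s^2)
print(cbl_depth(z, sigw2), nbl_depth(z, sigw2))
```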
The maximum depth of colonization (Zc) is a useful measure of seagrass growth that describes response to light attenuation in the water column. However, lack of standardization among methods for estimating Zc has limited the description of habitat requirements at spatial scales m...
Dynamic Response and Residual Helmet Liner Crush Using Cadaver Heads and Standard Headforms.
Bonin, S J; Luck, J F; Bass, C R; Gardiner, J C; Onar-Thomas, A; Asfour, S S; Siegmund, G P
2017-03-01
Biomechanical headforms are used for helmet certification testing and reconstructing helmeted head impacts; however, their biofidelity and direct applicability to human head and helmet responses remain unclear. Dynamic responses of cadaver heads and three headforms and residual foam liner deformations were compared during motorcycle helmet impacts. Instrumented, helmeted heads/headforms were dropped onto the forehead region against an instrumented flat anvil at 75, 150, and 195 J. Helmets were CT scanned to quantify maximum liner crush depth and crush volume. General linear models were used to quantify the effect of head type and impact energy on linear acceleration, head injury criterion (HIC), force, maximum liner crush depth, and liner crush volume, and regression models were used to quantify the relationship between acceleration and both maximum crush depth and crush volume. The cadaver heads generated larger peak accelerations than all three headforms, larger HICs than the International Organization for Standardization (ISO) headform, larger forces than the Hybrid III and ISO headforms, larger maximum crush depths than the ISO headform, and larger crush volumes than the DOT headform. These significant differences between the cadaver heads and headforms need to be accounted for when attempting to estimate an impact exposure using a helmet's residual crush depth or volume.
Stereoscopic perception of real depths at large distances.
Palmisano, Stephen; Gillam, Barbara; Govan, Donovan G; Allison, Robert S; Harris, Julie M
2010-06-01
There has been no direct examination of stereoscopic depth perception at very large observation distances and depths. We measured perceptions of depth magnitude at distances where it is frequently reported without evidence that stereopsis is non-functional. We adapted methods pioneered at distances up to 9 m by R. S. Allison, B. J. Gillam, and E. Vecellio (2009) for use in a 381-m-long railway tunnel. Pairs of Light Emitting Diode (LED) targets were presented either in complete darkness or with the environment lit as far as the nearest LED (the observation distance). We found that binocular, but not monocular, estimates of the depth between pairs of LEDs increased with their physical depths up to the maximum depth separation tested (248 m). Binocular estimates of depth were much larger with a lit foreground than in darkness and increased as the observation distance increased from 20 to 40 m, indicating that binocular disparity can be scaled for much larger distances than previously realized. Since these observation distances were well beyond the range of vertical disparity and oculomotor cues, this scaling must rely on perspective cues. We also ran control experiments at smaller distances, which showed that estimates of depth and distance correlate poorly and that our metric estimation method gives similar results to a comparison method under the same conditions.
The effect of motorcycle helmet fit on estimating head impact kinematics from residual liner crush.
Bonin, Stephanie J; Gardiner, John C; Onar-Thomas, Arzu; Asfour, Shihab S; Siegmund, Gunter P
2017-09-01
Proper helmet fit is important for optimizing head protection during an impact, yet many motorcyclists wear helmets that do not properly fit their heads. The goals of this study are i) to quantify how a mismatch in headform size and motorcycle helmet size affects headform peak acceleration and head injury criteria (HIC), and ii) to determine if peak acceleration, HIC, and impact speed can be estimated from the foam liner's maximum residual crush depth or residual crush volume. Shorty-style helmets (4 sizes of a single model) were tested on instrumented headforms (4 sizes) during linear impacts between 2.0 and 10.5 m/s to the forehead region. Helmets were CT scanned to quantify residual crush depth and volume. Separate linear regression models were used to quantify how the response variables (peak acceleration (g), HIC, and impact speed (m/s)) were related to the predictor variables (maximum crush depth (mm), crush volume (cm³), and the difference in circumference between the helmet and headform (cm)). Overall, we found that increasingly oversized helmets reduced peak headform acceleration and HIC for a given impact speed for maximum residual crush depths less than 7.9 mm and residual crush volume less than 40 cm³. Below these levels of residual crush, we found that peak headform acceleration, HIC, and impact speed can be estimated from a helmet's residual crush. Above these crush thresholds, large variations in headform kinematics are present, possibly related to densification of the foam liner during the impact. Copyright © 2017 Elsevier Ltd. All rights reserved.
Stability numerical analysis of soil cave in karst area to drawdown of underground water level
NASA Astrophysics Data System (ADS)
Mo, Yizheng; Xiao, Rencheng; Deng, Zongwei
2018-05-01
As the groundwater level falls, reliable estimates of the stability and deformation characteristics of soil caves in karst areas are required for engineering design. To this end, and in combination with a practical engineering case and field geotechnical tests, the vertical maximum displacement of the cave roof, the vertical maximum displacement of the ground surface, the maximum principal stress, and the maximum shear stress were analysed in detail with finite element software, with an emphasis on two varying factors: the size and the depth of the soil cave. The calculations show that the stability of the soil cave is affected by both its size and its depth, and that collapse accompanying the falling groundwater level occurs only once a certain limit is exceeded. In addition, the maximum shear stress occurs at the arch toes, and the deformation curve of the maximum displacement follows a trend similar to that of the maximum shear stress, which further confirms that the collapse of the soil cave is mainly due to shear failure.
An entropy-based method for determining the flow depth distribution in natural channels
NASA Astrophysics Data System (ADS)
Moramarco, Tommaso; Corato, Giovanni; Melone, Florisa; Singh, Vijay P.
2013-08-01
A methodology for determining the bathymetry of river cross-sections during floods from sampled surface flow velocity and existing low-flow hydraulic data is developed. Similar to Chiu (1988), who proposed an entropy-based velocity distribution, the flow depth distribution in a cross-section of a natural channel is derived by entropy maximization. The depth distribution depends on one parameter, whose estimate is straightforward, and on the maximum flow depth. Applied to a velocity data set from five river gage sites, the method modeled the flow area observed during flow measurements and accurately assessed the corresponding discharge by coupling the flow depth distribution with the entropic relation between mean velocity and maximum velocity. The methodology opens a new perspective for flow monitoring by remote sensing, considering that the two main quantities on which the methodology is based, i.e., surface flow velocity and flow depth, might potentially be sensed by new sensors operating aboard an aircraft or satellite.
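The entropic relation between mean and maximum velocity mentioned above is usually written with Chiu's function Φ(M) = e^M/(e^M - 1) - 1/M, so that the cross-sectional mean velocity is Φ(M) times the maximum velocity. The sketch below codes only that relation; the entropy parameter M and the flow values are placeholders, and the entropy-based depth distribution itself is not reproduced here.

```python
import numpy as np

def phi(M):
    """Chiu's entropic ratio of mean to maximum velocity, Phi(M) = e^M/(e^M - 1) - 1/M."""
    return np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M

def discharge(u_max, flow_area, M=2.1):
    """Discharge from the maximum (surface) velocity, the entropy parameter M,
    and the flow area obtained from the entropy-based depth distribution.
    M = 2.1 is only a placeholder; the parameter is site-specific."""
    return phi(M) * u_max * flow_area

print(discharge(u_max=2.4, flow_area=85.0))   # m^3/s, illustrative numbers
```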
On-line depth measurement for laser-drilled holes based on the intensity of plasma emission
NASA Astrophysics Data System (ADS)
Ho, Chao-Ching; Chiu, Chih-Mu; Chang, Yuan-Jen; Hsu, Jin-Chen; Kuo, Chia-Lung
2014-09-01
The direct time-resolved depth measurement of blind holes is extremely difficult due to the short time interval and the limited space inside the hole. This work presents a method that involves on-line plasma emission acquisition and analysis to obtain correlations between the machining process and the optical signal output. Because the depths of laser-machined holes can be estimated on-line using a coaxial photodiode, one was employed in our inspection system. Our experiments were conducted in air under normal atmospheric conditions without gas assist. The intensity of radiation emitted from the vaporized material was found to correlate with the depth of the hole. The results indicate that the estimated depths of the laser-drilled holes were inversely proportional to the maximum plasma light emission measured for a given laser pulse number.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hee Jung; Department of Biomedical Engineering, Seoul National University, Seoul; Department of Radiation Oncology, Soonchunhyang University Hospital, Seoul
2015-01-01
To investigate how accurately treatment planning systems (TPSs) account for the tongue-and-groove (TG) effect, Monte Carlo (MC) simulations and radiochromic film (RCF) measurements were performed for comparison with TPS results. Two commercial TPSs computed the TG effect for Varian Millennium 120 multileaf collimator (MLC). The TG effect on off-axis dose profile at 3 depths of solid water was estimated as the maximum depth and the full width at half maximum (FWHM) of the dose dip at an interleaf position. When compared with the off-axis dose of open field, the maximum depth of the dose dip for MC and RCF ranged from 10.1% to 20.6%; the maximum depth of the dose dip gradually decreased by up to 8.7% with increasing depths of 1.5 to 10 cm and also by up to 4.1% with increasing off-axis distances of 0 to 13 cm. However, TPS results showed at most a 2.7% decrease for the same depth range and a negligible variation for the same off-axis distances. The FWHM of the dose dip was approximately 0.19 cm for MC and 0.17 cm for RCF, but 0.30 cm for Eclipse TPS and 0.45 cm for Pinnacle TPS. Accordingly, the integrated value of TG dose dip for TPS was larger than that for MC and RCF and almost invariant along the depths and off-axis distances. We concluded that the TG dependence on depth and off-axis doses shown in the MC and RCF results could not be appropriately modeled by the TPS versions in this study.
Applications of flood depth from rapid post-event footprint generation
NASA Astrophysics Data System (ADS)
Booth, Naomi; Millinship, Ian
2015-04-01
Immediately following large flood events, an indication of the area flooded (i.e. the flood footprint) can be extremely useful for evaluating potential impacts on exposed property and infrastructure. Specifically, such information can help insurance companies estimate overall potential losses, deploy claims adjusters and ultimately assists the timely payment of due compensation to the public. Developing these datasets from remotely sensed products seems like an obvious choice. However, there are a number of important drawbacks which limit their utility in the context of flood risk studies. For example, external agencies have no control over the region that is surveyed, the time at which it is surveyed (which is important as the maximum extent would ideally be captured), and how freely accessible the outputs are. Moreover, the spatial resolution of these datasets can be low, and considerable uncertainties in the flood extents exist where dry surfaces give similar return signals to water. Most importantly of all, flood depths are required to estimate potential damages, but generally cannot be estimated from satellite imagery alone. In response to these problems, we have developed an alternative methodology for developing high-resolution footprints of maximum flood extent which do contain depth information. For a particular event, once reports of heavy rainfall are received, we begin monitoring real-time flow data and extracting peak values across affected areas. Next, using statistical extreme value analyses of historic flow records at the same measured locations, the return periods of the maximum event flow at each gauged location are estimated. These return periods are then interpolated along each river and matched to JBA's high-resolution hazard maps, which already exist for a series of design return periods. The extent and depth of flooding associated with the event flow is extracted from the hazard maps to create a flood footprint. Georeferenced ground, aerial and satellite images are used to establish defence integrity, highlight breach locations and validate our footprint. We have implemented this method to create seven flood footprints, including river flooding in central Europe and coastal flooding associated with Storm Xaver in the UK (both in 2013). The inclusion of depth information allows damages to be simulated and compared to actual damage and resultant loss which become available after the event. In this way, we can evaluate depth-damage functions used in catastrophe models and reduce their associated uncertainty. In further studies, the depth data could be used at an individual property level to calibrate property type specific depth-damage functions.
NASA Astrophysics Data System (ADS)
Gusman, Aditya Riadi; Mulia, Iyan E.; Satake, Kenji
2018-01-01
The 2017 Tehuantepec earthquake (
Altimetry data and the elastic stress tensor of subduction zones
NASA Technical Reports Server (NTRS)
Caputo, Michele
1987-01-01
The maximum shear stress (mss) field due to mass anomalies is estimated in the Apennines, the Kermadec-Tonga Trench, and the Rio Grande Rift areas, and the results for each area are compared to observed seismicity. A maximum mss of 420 bar was calculated in the Kermadec-Tonga Trench region at a depth of 28 km. Two additional zones with more than 300 bar mss were also observed in the Kermadec-Tonga Trench study. Comparison of the calculated mss field with the observed seismicity in the Kermadec-Tonga showed two zones of well correlated activity. The Rio Grande Rift results showed a maximum mss of 700 bar occurring east of the rift and at a depth of 6 km. Recorded seismicity in the region was primarily constrained to a depth of approximately 5 km, correlating well with the results of the stress calculations. Two areas of high mss are found in the Apennine region: 120 bar at a depth of 55 km, and 149 bar at the surface. Seismic events observed in the Apennine area compare favorably with the calculated mss field, exhibiting two zones of activity. The cases of loading by seamounts and icecaps are also simulated. Results of this study show that the mss reaches a maximum of about 1/3 of the applied surface stress for both cases, and is located at a depth related to the diameter of the surface mass anomaly.
Defining the ecologically relevant mixed-layer depth for Antarctica's coastal seas
NASA Astrophysics Data System (ADS)
Carvalho, Filipa; Kohut, Josh; Oliver, Matthew J.; Schofield, Oscar
2017-01-01
Mixed-layer depth (MLD) has been widely linked to phytoplankton dynamics in Antarctica's coastal regions; however, inconsistent definitions have made intercomparisons among region-specific studies difficult. Using a data set with over 20,000 water column profiles corresponding to 32 Slocum glider deployments in three coastal Antarctic regions (Ross Sea, Amundsen Sea, and West Antarctic Peninsula), we evaluated the relationship between MLD and phytoplankton vertical distribution. Comparisons of these MLD estimates to an applied definition of phytoplankton bloom depth, as defined by the deepest inflection point in the chlorophyll profile, show that the maximum of buoyancy frequency is a good proxy for an ecologically relevant MLD. A quality index is used to filter profiles where MLD is not determined. Despite the different regional physical settings, we found that the MLD definition based on the maximum of buoyancy frequency best describes the depth to which phytoplankton can be mixed in Antarctica's coastal seas.
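A minimal sketch of the buoyancy-frequency definition adopted above: compute N² from a density profile and return the depth of its maximum. The density profile here is synthetic.

```python
import numpy as np

def mld_from_buoyancy_frequency(depth, density, g=9.81, rho0=1027.0):
    """Mixed-layer depth proxy: depth of the maximum buoyancy frequency N^2,
    with N^2 = (g/rho0) * d(rho)/d(depth) for a depth axis increasing downward."""
    n2 = (g / rho0) * np.gradient(density, depth)
    return depth[np.argmax(n2)]

z = np.arange(1.0, 201.0, 1.0)                              # depth (m)
rho = 1026.5 + 0.8 / (1.0 + np.exp(-(z - 40.0) / 5.0))      # synthetic pycnocline near 40 m
print(mld_from_buoyancy_frequency(z, rho))                   # ~40 m
```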
Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R.
2016-01-01
In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator’s temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector’s single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal. PMID:27295658
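The Richardson-Lucy step mentioned above can be sketched as a generic 1-D deconvolution; this is not the authors' implementation, and the single-photoelectron response used here is an assumed exponential shape.

```python
import numpy as np

def richardson_lucy_1d(measured, psf, iterations=100, eps=1e-12):
    """Generic 1-D Richardson-Lucy deconvolution (non-negative signals assumed)."""
    estimate = np.full_like(measured, measured.mean(), dtype=float)
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = measured / (blurred + eps)
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

t = np.arange(200)
true_pulse = np.exp(-(t - 50) / 30.0) * (t >= 50)    # idealised scintillation pulse
spe = np.exp(-np.arange(41) / 8.0)                    # assumed single-photoelectron response
spe /= spe.sum()
waveform = np.convolve(true_pulse, spe, mode="same")  # what the photodetector would report
recovered = richardson_lucy_1d(waveform, spe)         # sharper rising edge than `waveform`
```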
NASA Astrophysics Data System (ADS)
Akbar, Somaieh; Fathianpour, Nader
2016-12-01
The Curie point depth is of great importance in characterizing geothermal resources. In this study, the Curie iso-depth map was produced using the well-known method of dividing the aeromagnetic dataset into overlapping blocks and analyzing the power spectral density of each block separately. Determining the optimum block dimension is vital for improving the resolution and accuracy of the Curie point depth estimate. To investigate the relation between the optimal block size and the power spectral density, forward magnetic modeling was implemented on an artificial prismatic body with specified characteristics. The top, centroid, and bottom depths of the body were estimated by the spectral analysis method for different block dimensions. The results showed that the optimal block size can be taken as the smallest block size whose corresponding power spectrum exhibits an absolute maximum at small wavenumbers. The Curie depth map of the Sabalan geothermal field and its surrounding areas in northwestern Iran was produced using a grid of 37 blocks with dimensions ranging from 10 × 10 to 50 × 50 km², each overlapping adjacent blocks by at least 50%. The Curie point depth was estimated to be in the range of 5 to 21 km. The promising areas, with Curie point depths of less than 8.5 km, are located around Mount Sabalan and encompass more than 90% of the known geothermal resources in the study area. Moreover, the Curie point depth estimated by the improved spectral analysis is in good agreement with the depth calculated from thermal gradient data measured in one of the exploratory wells in the region.
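The block-wise spectral estimates described above are commonly computed from centroid and top-depth slopes of the radially averaged power spectrum (a Tanaka-type formulation); the sketch below assumes angular wavenumber in rad/km and arbitrary fitting bands, both of which would need tuning for real data.

```python
import numpy as np

def curie_depth(k, power):
    """Curie (bottom) depth Zb = 2*Z0 - Zt from a radially averaged power
    spectrum P(k), k in rad/km: the centroid depth Z0 comes from the
    low-wavenumber slope of ln(P^0.5 / k), the top depth Zt from the
    higher-wavenumber slope of ln(P^0.5). Fitting bands are assumptions."""
    amp = np.sqrt(power)
    lo = (k > 0.05) & (k < 0.2)
    hi = (k > 0.2) & (k < 1.0)
    z0 = -np.polyfit(k[lo], np.log(amp[lo] / k[lo]), 1)[0]
    zt = -np.polyfit(k[hi], np.log(amp[hi]), 1)[0]
    return 2.0 * z0 - zt   # km, when k is in rad/km
```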
Impact of downward-mixing ozone on surface ozone accumulation in southern Taiwan.
Lin, Ching-Ho
2008-04-01
Ozone initially present in the previous day's afternoon mixing layer can remain in the nighttime atmosphere and be carried over to the next morning. This ozone can then be brought to the ground by downward mixing as the mixing depth increases during the daytime, thereby increasing surface ozone concentrations. Variation of ozone concentration during each of these periods is investigated in this work. First, ozone concentrations in the daily early-morning atmosphere at the altitude range of the daily maximum mixing depth (residual ozone concentrations) were measured using tethered ozonesondes on 52 experimental days during 2004-2005 in southern Taiwan. Daily downward-mixing ozone concentrations were calculated by a box model coupling the measured daily residual ozone concentrations and daily mixing depth variations. The ozone concentrations upwind in the previous day's afternoon mixing layer were estimated by combining back air trajectory analysis with the known previous day's surface ozone distributions. Additionally, the relationship between daily downward-mixing ozone concentration and daily photochemically produced ozone concentration was examined. The latter was calculated by removing the former from the daily surface maximum ozone concentration. The measured daily residual ozone concentrations ranged from 12 to 74 parts per billion (ppb), with an average of 42 ± 17 ppb, and are well correlated with the previous upwind ozone concentration (R² = 0.54-0.65). Approximately 60% of the previous upwind ozone was estimated to be carried over to the next morning and became the observed residual ozone. The daily downward-mixing ozone contributes 48 ± 18% of the daily surface maximum ozone concentration, indicating that downward-mixing ozone is as important as daily photochemically produced ozone to daily surface maximum ozone accumulation. The daily downward-mixing ozone is poorly correlated with the daily photochemically produced ozone and contributes significantly to the daily variation of surface maximum ozone concentrations (R² = 0.19). However, the contribution of downward-mixing ozone to daily ozone variation is not included in most existing statistical models developed for predicting daily ozone variation. Finally, daily surface maximum ozone concentration is positively correlated with daily afternoon mixing depth, attributable to the downward-mixing ozone.
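The box model coupling residual ozone and mixing-depth growth is not spelled out in the abstract; a plausible minimal form is simple entrainment dilution, sketched below with illustrative concentrations and mixing depths.

```python
def downward_mixed_ozone(c_surface, c_residual, h_morning, h_afternoon):
    """Assumed box-model form: as the mixing depth grows from h_morning to
    h_afternoon, residual-layer air at c_residual is entrained and mixed with
    surface-layer air at c_surface (concentrations in ppb, depths in m)."""
    entrained = c_residual * (h_afternoon - h_morning)
    return (c_surface * h_morning + entrained) / h_afternoon

# e.g. 20 ppb near the surface at dawn, 42 ppb aloft, mixing depth growing 300 m -> 1500 m
print(downward_mixed_ozone(20.0, 42.0, 300.0, 1500.0))   # ~37.6 ppb
```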
Scour around vertical wall abutment in cohesionless sediment bed
NASA Astrophysics Data System (ADS)
Pandey, M.; Sharma, P. K.; Ahmad, Z.
2017-12-01
During floods, the failure of bridges is a major disaster, and the sub-structure (bridge abutments and piers) is mainly responsible for such failures. It is very risky if these sub-structures are not constructed after proper design and analysis. Scour is a natural phenomenon in rivers or streams caused by the erosive action of flowing water on the bed and banks. Riverbed erosion and scouring undermine the abutment, and this is generally recognized as the main cause of abutment failure. Most previous studies on scour around abutments have been concerned with the prediction of the maximum scour depth (Lim, 1994; Melville, 1992, 1997; Dey and Barbhuiya, 2005). Dey and Barbhuiya (2005) proposed a relationship, based on laboratory experiments, for computing the maximum scour depth around a vertical wall abutment, but it was confined to their experimental data only; it therefore needs to be verified against other researchers' data to establish its reliability and wider applicability. In this study, controlled experiments have been carried out on scour near a vertical wall abutment. The data collected in this study, along with the data of previous investigators, have been used to check the validity of the existing maximum scour depth equations for vertical wall abutments (Lim, 1994; Melville, 1992, 1997; Dey and Barbhuiya, 2005). A new relationship is proposed to estimate the maximum scour depth around a vertical wall abutment, and it gives better results than all existing relationships.
Temperature measurements at IODP 337 Expedition, off Shimokita, NE Japan.
NASA Astrophysics Data System (ADS)
Yamada, Y.; Sanada, Y.; Moe, K.; Kubo, Y.; Inagaki, F.
2014-12-01
Precise estimation of underground temperature is a challenging issue, since direct measurements require drill holes that disturb the original underground environment. During IODP Expedition 337, we obtained in-situ temperature datasets several times using geophysical logging tools. A common procedure to estimate the undisturbed maximum underground temperature is to extrapolate the 'build-up' pattern of values measured in the borehole to the equilibrium temperature. At the Shimokita site, this was 63.7 °C at a depth of 2466 m. We obtained many more measurement datasets, all of which were used to analyze detailed in-situ temperatures at various depths. The result shows a non-linear temperature profile with depth, which may reflect the thermal properties of the surrounding rocks.
Kinoshita, S; Suzuki, T; Yamashita, S; Muramatsu, T; Ide, M; Dohi, Y; Nishimura, K; Miyamae, T; Yamamoto, I
1992-01-01
A new radionuclide technique for the calculation of left ventricular (LV) volume by the first-pass (FP) method was developed and examined. Using a semi-geometric count-based method, the LV volume can be measured by the following equation: CV = CM/(L/d). V = (CT/CV) x d3 = (CT/CM) x L x d2. (V = LV volume, CV = voxel count, CM = the maximum LV count, CT = the total LV count, L = LV depth where the maximum count was obtained, and d = pixel size.) This theorem was applied to FP LV images obtained in the 30-degree right anterior oblique position. Frame-mode acquisition was performed and the LV end-diastolic maximum count and total count were obtained. The maximum LV depth was obtained as the maximum width of the LV on the FP end-diastolic image, using the assumption that the LV cross-section is circular. These values were substituted in the above equation and the LV end-diastolic volume (FP-EDV) was calculated. A routine equilibrium (EQ) study was done, and the end-diastolic maximum count and total count were obtained. The LV maximum depth was measured on the FP end-diastolic frame, as the maximum length of the LV image. Using these values, the EQ-EDV was calculated and the FP-EDV was compared to the EQ-EDV. The correlation coefficient for these two values was r = 0.96 (n = 23, p less than 0.001), and the standard error of the estimated volume was 10 ml.(ABSTRACT TRUNCATED AT 250 WORDS)
A Water Temperature Simulation Model for Rice Paddies With Variable Water Depths
NASA Astrophysics Data System (ADS)
Maruyama, Atsushi; Nemoto, Manabu; Hamasaki, Takahiro; Ishida, Sachinobu; Kuwagata, Tsuneo
2017-12-01
A water temperature simulation model was developed to estimate the effects of water management on the thermal environment in rice paddies. The model was based on two energy balance equations, one for the ground and one for the vegetation, and considered the water layer and the changes in the aerodynamic properties of its surface with water depth. The model was examined with field experiments for water depths of 0 mm (drained condition) and 100 mm (flooded condition) at two locations. Daily mean water temperatures in the flooded condition were mostly higher than in the drained condition at both locations, and the maximum difference reached 2.6°C. This difference was mainly caused by the difference in the surface roughness of the ground. Heat exchange by free convection played an important role in determining water temperature. From the model simulation, the temperature difference between drained and flooded conditions was more apparent under low air temperature and small leaf area index conditions; the maximum difference reached 3°C. Most of this difference occurred when the water depth was lower than 50 mm. The season-long variation in modeled water temperature showed good agreement with an observation data set from rice paddies with various rice-growing seasons, for a diverse range of water depths (root mean square error of 0.8-1.0°C). The proposed model can estimate water temperature for given water depth, irrigation, and drainage conditions, which will improve our understanding of the effect of water management on plant growth and greenhouse gas emissions through the thermal environment of rice paddies.
Sampling strategies to improve passive optical remote sensing of river bathymetry
Legleiter, Carl; Overstreet, Brandon; Kinzel, Paul J.
2018-01-01
Passive optical remote sensing of river bathymetry involves establishing a relation between depth and reflectance that can be applied throughout an image to produce a depth map. Building upon the Optimal Band Ratio Analysis (OBRA) framework, we introduce sampling strategies for constructing calibration data sets that lead to strong relationships between an image-derived quantity and depth across a range of depths. Progressively excluding observations that exceed a series of cutoff depths from the calibration process improved the accuracy of depth estimates and allowed the maximum detectable depth ($d_{max}$) to be inferred directly from an image. Depth retrieval in two distinct rivers also was enhanced by a stratified version of OBRA that partitions field measurements into a series of depth bins to avoid biases associated with under-representation of shallow areas in typical field data sets. In the shallower, clearer of the two rivers, including the deepest field observations in the calibration data set did not compromise depth retrieval accuracy, suggesting that $d_{max}$ was not exceeded and the reach could be mapped without gaps. Conversely, in the deeper and more turbid stream, progressive truncation of input depths yielded a plausible estimate of $d_{max}$ consistent with theoretical calculations based on field measurements of light attenuation by the water column. This result implied that the entire channel, including pools, could not be mapped remotely. However, truncation improved the accuracy of depth estimates in areas shallower than $d_{max}$, which comprise the majority of the channel and are of primary interest for many habitat-oriented applications.
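A minimal sketch of the progressive-truncation idea is given below, assuming a simple linear relation between an image-derived quantity (for example a log band ratio) and depth; the OBRA machinery itself, the variable names, and the synthetic data are not from the paper.

```python
import numpy as np

def truncation_analysis(x_image, depth, cutoffs):
    """For each cutoff depth, fit a linear depth ~ X relation using only calibration
    points shallower than the cutoff and report R^2. A leveling-off or drop in R^2
    as the cutoff increases is one way to infer the maximum detectable depth d_max.
    x_image: image-derived quantity (e.g. a log band ratio); depth: field depths (m)."""
    results = []
    for c in cutoffs:
        keep = depth <= c
        if keep.sum() < 3:
            continue
        slope, intercept = np.polyfit(x_image[keep], depth[keep], 1)
        pred = slope * x_image[keep] + intercept
        ss_res = np.sum((depth[keep] - pred) ** 2)
        ss_tot = np.sum((depth[keep] - depth[keep].mean()) ** 2)
        results.append((c, 1.0 - ss_res / ss_tot))
    return results

# Synthetic example: the image quantity saturates beyond ~2.5 m, mimicking optical attenuation
rng = np.random.default_rng(0)
true_depth = rng.uniform(0.2, 4.0, 300)
x = np.where(true_depth < 2.5, true_depth, 2.5) / 2.5 + rng.normal(0, 0.05, 300)
for cutoff, r2 in truncation_analysis(x, true_depth, cutoffs=np.arange(1.0, 4.1, 0.5)):
    print(f"cutoff {cutoff:.1f} m  R^2 = {r2:.3f}")
```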
Changes in Cirrus Cloudiness and their Relationship to Contrails
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Ayers, J. Kirk; Palikonda, Rabindra; Doelling, David R.; Schumann, Ulrich; Gierens, Klaus
2001-01-01
Condensation trails, or contrails, formed in the wake of high-altitude aircraft have long been suspected of causing the formation of additional cirrus cloud cover. More cirrus is possible because 10-20% of the atmosphere at typical commercial flight altitudes is clear but ice-saturated. Since they can affect the radiation budget like natural cirrus clouds of equivalent optical depth and microphysical properties, contrail-generated cirrus clouds are another potential source of anthropogenic influence on climate. Initial estimates of contrail radiative forcing (CRF) were based on linear contrail coverage and optical depths derived from a limited number of satellite observations. Assuming that such estimates are accurate, they can be considered as the minimum possible CRF because contrails often develop into cirrus clouds unrecognizable as contrails. These anthropogenic cirrus are not likely to be identified as contrails from satellites and would, therefore, not contribute to estimates of contrail coverage. The mean lifetime and coverage of spreading contrails relative to linear contrails are needed to fully assess the climatic effect of contrails, but are difficult to measure directly. However, the maximum possible impact can be estimated using the relative trends in cirrus coverage over regions with and without air traffic. In this paper, the upper bound of CRF is derived by first computing the change in cirrus coverage over areas with heavy air traffic relative to that over the remainder of the globe, assuming that the difference between the two trends is due solely to contrails. This difference is normalized to the corresponding linear contrail coverage for the same regions to obtain an average spreading factor. The maximum contrail-cirrus coverage, estimated as the product of the spreading factor and the linear contrail coverage, is then used in the radiative model to estimate the maximum potential CRF for current air traffic.
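The bookkeeping behind the upper bound can be written in a few lines. The sketch below only mirrors the arithmetic described in the abstract; the argument names and numbers are illustrative assumptions, not values from the study.

```python
def max_contrail_cirrus(excess_cirrus_traffic_regions, linear_contrail_traffic_regions,
                        linear_contrail_global):
    """Upper-bound bookkeeping sketched in the text: the excess cirrus coverage change
    over heavy-air-traffic regions (relative to the rest of the globe) is attributed
    entirely to contrails and normalized by the linear contrail coverage of those same
    regions to give an average spreading factor; applying that factor to the global
    linear contrail coverage gives a maximum contrail-cirrus coverage. All quantities
    are fractional sky cover."""
    spreading_factor = excess_cirrus_traffic_regions / linear_contrail_traffic_regions
    return spreading_factor, spreading_factor * linear_contrail_global

# Hypothetical inputs, not values from the paper
sf, cover = max_contrail_cirrus(0.010, 0.005, linear_contrail_global=0.001)
print(f"spreading factor: {sf:.1f}; maximum contrail-cirrus coverage: {cover:.4f} (fraction)")
```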
Stochastic sediment property inversion in Shallow Water 06.
Michalopoulou, Zoi-Heleni
2017-11-01
Received time series at a short distance from the source allow the identification of distinct paths; four of these are the direct arrival, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed, along with linearization, for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the densities of arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.
Gajewski, Jan; Michalski, Radosław; Buśko, Krzysztof; Mazur-Różycka, Joanna; Staniak, Zbigniew
2018-01-01
The aim of this study was to identify the determinants of peak power achieved during vertical jumps in order to clarify the relationship between jump height and the ability to exert maximum power. One hundred young (16.8±1.8 years) sportsmen participated in the study (body height 1.861 ± 0.109 m, body weight 80.3 ± 9.2 kg). Each participant performed three jump tests: countermovement jump (CMJ), akimbo countermovement jump (ACMJ), and spike jump (SPJ). A force plate was used to measure ground reaction force and to determine peak power output. The following explanatory variables were included in the model: jump height, body mass, and the lowering of the centre of mass before take-off (countermovement depth). A model was created using multiple regression analysis and allometric scaling. The model was used to calculate the expected power value for each participant, which correlated strongly with the measured values. The coefficient of determination R2 equalled 0.89, 0.90 and 0.98 for the CMJ, ACMJ, and SPJ jumps, respectively. The countermovement depth proved to be a variable strongly affecting the maximum power of a jump. If the countermovement depth remains constant, the relative peak power is a simple function of jump height. The results suggest that the jump height of an individual is an accurate indicator of their ability to produce maximum power. The presented model has the potential to be utilized under field conditions for estimating the maximum power output of vertical jumps.
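One common way to combine multiple regression with allometric scaling is to fit a power-law model on log-transformed variables. The sketch below illustrates that generic approach on synthetic data; the paper's actual model form and coefficients are not reproduced here.

```python
import numpy as np

# Allometric model P = a * m**b1 * h**b2 * d**b3 fitted by linear regression on logs.
# Only a sketch of the general approach; the paper's variables and exponents may differ.
rng = np.random.default_rng(1)
n = 100
mass = rng.normal(80, 9, n)            # body mass, kg
height = rng.uniform(0.25, 0.55, n)    # jump height, m
depth = rng.uniform(0.20, 0.45, n)     # countermovement depth, m
power = 60.0 * mass**1.0 * height**0.5 * depth**-0.3 * rng.lognormal(0, 0.05, n)  # synthetic

X = np.column_stack([np.ones(n), np.log(mass), np.log(height), np.log(depth)])
coef, *_ = np.linalg.lstsq(X, np.log(power), rcond=None)
pred = X @ coef
r2 = 1 - np.sum((np.log(power) - pred) ** 2) / np.sum((np.log(power) - np.log(power).mean()) ** 2)
print(f"exponents (mass, height, depth): {coef[1]:.2f}, {coef[2]:.2f}, {coef[3]:.2f};  R^2 = {r2:.2f}")
```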
Observation of Snow cover glide on Sub-Alpine Coniferous Forests in Mount Zao, Northeastern Japan
NASA Astrophysics Data System (ADS)
Sasaki, A.; Suzuki, K.
2017-12-01
This study aims to clarify snow cover glide behavior in the sub-alpine coniferous forests on Mount Zao, northeastern Japan, in the winter of 2014-2015. We installed a sled-type glide meter and measured glide motion on a slope within an Abies mariesii forest and on its surrounding slope. In addition, we observed air temperature, snow depth, snow density, and snow temperature to discuss the relationship between weather conditions and glide occurrence. The snow cover of the 2014-15 winter started on November 13th and disappeared on April 21st. The maximum snow depth of 242 cm was recorded on February 1st. Snow cover glide on the surrounding slope first occurred on February 10th, although the maximum snow depth was recorded on February 1st. The glide motion on the surrounding slope continued at a velocity of 0.4 cm per day and stopped on March 16th. The cumulative amount of glide was 21.1 cm. Snow cover glide in the A. mariesii forest occurred even later, first on February 21st, and its motion was intermittent and extremely small. In the sub-alpine zone of Mount Zao, the snow cover glide intensity is estimated to be 289 kg/m2 in March, when the snow water equivalent is at its maximum. For the same period, the maximum snow cover glide intensity is estimated to be about 1000 kg/m2 on very steep slopes where the slope angle is about 35 degrees. Although the potential for snow cover glide is high, the glide is suppressed by the stems of A. mariesii trees in the sub-alpine coniferous forest.
Energy dissipation of slot-type flip buckets
NASA Astrophysics Data System (ADS)
Wu, Jian-hua; Li, Shu-fang; Ma, Fei
2018-04-01
The energy dissipation is a key index in the evaluation of energy dissipation elements. In the present work, a flip bucket with a slot, called the slot-type flip bucket, is theoretically and experimentally investigated with the aim of estimating the energy dissipation. The theoretical analysis shows that, in order to estimate the energy dissipation, it is necessary to determine the sequent flow depth h1 and the flow speed V1 at the corresponding position from the flow depth h2 after the hydraulic jump. The relative flow depth h2/ho is a function of the approach flow Froude number Fro, the relative slot width b/Bo, and the relative slot angle θ/β. An expression for estimating the energy dissipation is developed, and its maximum error is not larger than 9.21%.
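The slot-specific expression is not given in the abstract, but the underlying step (recovering the sequent depth h1 and velocity V1 from the measured depth h2 downstream of the jump and then computing the head loss) can be illustrated with the classical free hydraulic-jump relations. The sketch below uses the Belanger equation and the standard head-loss formula under those textbook assumptions; it is not the authors' slot-type expression.

```python
import math

G = 9.81  # m/s^2

def jump_energy_dissipation(h2, q):
    """Classical free hydraulic-jump relations (not the paper's slot-specific expression):
    given the measured depth h2 after the jump and the unit discharge q (m^2/s),
    recover the sequent depth h1 and velocity V1 upstream of the jump from the
    Belanger momentum equation, then compute the head loss and relative dissipation."""
    v2 = q / h2
    fr2 = v2 / math.sqrt(G * h2)
    h1 = 0.5 * h2 * (math.sqrt(1.0 + 8.0 * fr2**2) - 1.0)   # sequent depth
    v1 = q / h1
    e1 = h1 + v1**2 / (2 * G)                               # specific energy before jump
    de = (h2 - h1)**3 / (4.0 * h1 * h2)                     # head loss across the jump
    return h1, v1, de, de / e1

# Hypothetical flume-scale numbers
h1, v1, de, eta = jump_energy_dissipation(h2=0.45, q=0.25)
print(f"h1 = {h1:.3f} m, V1 = {v1:.2f} m/s, head loss = {de:.3f} m, relative dissipation = {eta:.1%}")
```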
Sediment chronology in San Francisco Bay, California, defined by 210Pb, 234Th, 137Cs, and 239,240Pu
Fuller, C.C.; van Geen, Alexander; Baskaran, M.; Anima, R.
1999-01-01
Sediment chronologies based on radioisotope depth profiles were developed at two sites in the San Francisco Bay estuary to provide a framework for interpreting historical trends in organic compound and metal contaminant inputs. At Richardson Bay near the estuary mouth, sediments are highly mixed by biological and/or physical processes. Excess 234Th penetration ranged from 2 to more than 10 cm at eight coring sites, yielding surface sediment mixing coefficients ranging from 12 to 170 cm2/year. At the site chosen for contaminant analyses, excess 210Pb activity was essentially constant over the upper 25 cm of the core, with an exponential decrease below to the supported activity between 70 and 90 cm. Both 137Cs and 239,240Pu penetrated to 57-cm depth and have broad subsurface maxima between 33 and 41 cm. The best fit of the excess 210Pb profile to a steady state sediment accumulation and mixing model yielded an accumulation rate of 0.825 g/cm2/year (0.89 cm/year at the sediment surface), a surface mixing coefficient of 71 cm2/year, and a 33-cm mixed zone with a half-Gaussian depth dependence parameter of 9 cm. Simulations of 137Cs and 239,240Pu profiles using these parameters successfully predicted the maximum depth of penetration and the depth of maximum 137Cs and 239,240Pu activity. Profiles of successive 1-year hypothetical contaminant pulses were generated using this parameter set to determine the age distribution of sediments at any depth horizon. Because of mixing, sediment particles with a wide range of deposition dates occur at each depth. A sediment chronology was derived from this age distribution to assign a minimum age of deposition and a date of maximum deposition to a depth horizon. The minimum age of sediments in a given horizon is used to estimate the date of first appearance of a contaminant from its maximum depth of penetration. The date of maximum deposition is used to estimate the peak year of input for a contaminant from the depth interval with the highest concentration of that contaminant. Because of the extensive mixing, sediment-bound constituents are rapidly diluted with older material after deposition. In addition, contaminants persist in the mixed zone for many years after deposition. More than 75 years are required to bury 90% of a deposited contaminant below the mixed zone. Reconstructing contaminant inputs is limited to changes occurring on a 20-year time scale. In contrast, mixing is much lower relative to accumulation at a site in San Pablo Bay. Instead, periods of rapid deposition and/or erosion occurred, as indicated by frequent sand-silt laminae in the X-radiograph. 137Cs, 239,240Pu, and excess 210Pb activity all penetrated to about 120 cm. The distinct maxima in the fallout radionuclides at 105-110 cm yielded overall linear sedimentation rates of 3.9 to 4.1 cm/year, which are comparable to a rate of 4.5±1.5 cm/year derived from the excess 210Pb profile.
NASA Astrophysics Data System (ADS)
Herrero, I.; Ezcurra, A.; Areitio, J.; Diaz-Argandoña, J.; Ibarra-Berastegi, G.; Saenz, J.
2013-11-01
Storms developed under local instability conditions are studied in the Spanish Basque region with the aim of establishing precipitation-lightning relationships. Such situations may, in some cases, produce flash floods. The data used correspond to daily rain depth (mm) and the number of CG flashes in the area. Rain and lightning are found to be weakly correlated on a daily basis, a fact that seems related to the existence of opposite gradients in their geographical distributions. Rain anomalies, defined as the difference between the observed rain depth and that estimated from CG flashes, are analysed by the PCA method. Results show a first EOF, explaining 50% of the variability, that linearly relates the rain anomalies observed each day and confirms their spatial structure. Based on these results, a multilinear expression has been developed to estimate the rain accumulated daily in the network from the CG flashes registered in the area. Moreover, accumulated and maximum rain values are found to be strongly correlated, making the multilinear expression a useful tool to estimate maximum precipitation during this kind of storm.
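A bare-bones version of a multilinear rain-from-flashes fit is sketched below on synthetic data; the predictors, PCA-based anomaly treatment, and coefficients used in the study are not reproduced.

```python
import numpy as np

# Minimal sketch: ordinary least-squares fit of daily accumulated rain depth on daily
# cloud-to-ground (CG) flash counts registered in sub-areas of the network.
rng = np.random.default_rng(2)
days = 200
cg = rng.poisson(lam=[30, 15, 8], size=(days, 3)).astype(float)             # flashes in 3 sub-areas
rain = 5.0 + cg @ np.array([0.20, 0.35, 0.10]) + rng.normal(0, 2.0, days)   # mm/day, synthetic

X = np.column_stack([np.ones(days), cg])
beta, *_ = np.linalg.lstsq(X, rain, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((rain - pred) ** 2) / np.sum((rain - rain.mean()) ** 2)
print("coefficients:", np.round(beta, 3), " R^2 =", round(r2, 2))
```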
Bultman, Mark W.; Page, William R.
2016-10-31
The upper Santa Cruz Basin is an important groundwater basin containing the regional aquifer for the city of Nogales, Arizona. This report provides data and interpretations of data aimed at better understanding the bedrock morphology and structure of the upper Santa Cruz Basin study area, which encompasses the Rio Rico and Nogales 1:24,000-scale U.S. Geological Survey quadrangles. Data used in this report include the Arizona Aeromagnetic and Gravity Maps and Data, referred to here as the 1996 Patagonia Aeromagnetic survey, Bouguer gravity anomaly data, and conductivity-depth transforms (CDTs) from the 1998 Santa Cruz transient electromagnetic survey (whose data are included in appendixes 1 and 2 of this report). Analyses based on magnetic gradients worked well to identify the range-front faults along the Mt. Benedict horst block, the location of possibly fault-controlled canyons to the west of Mt. Benedict, the edges of buried lava flows, and numerous other concealed faults and contacts. Applying the horizontal gradient method to the 1996 Patagonia aeromagnetic survey data produced results that were most closely correlated with the observed geology. The 1996 Patagonia aeromagnetic survey was used to estimate depth to bedrock in the upper Santa Cruz Basin study area. Three different depth estimation methods were applied to the data: Euler deconvolution, horizontal gradient magnitude, and analytic signal. The final depth to bedrock map was produced by choosing the maximum depth from each of the three methods at a given location and combining all maximum depths. In locations of rocks with a known reversed natural remanent magnetic field, gravity-based depth estimates from Gettings and Houser (1997) were used. The depth to bedrock map was supported by modeling aeromagnetic anomaly data along six profiles. These cross-sectional models demonstrated that, by using the depth to bedrock map generated in this study, known and concealed faults, measured and estimated magnetic susceptibilities of rocks found in the study area, and estimated natural remanent magnetic intensities and directions, reasonable geologic models can be built. This indicates that the depth to bedrock map is reasonable and geologically possible. Finally, CDTs derived from the 1998 Santa Cruz Basin transient electromagnetic survey were used to help identify basin structure and some physical properties of the basin fill in the study area. The CDTs also helped to confirm depth to bedrock estimates in the Santa Cruz Basin, in particular a region of elevated bedrock in the area of Potrero Canyon, and a deep basin at the location of the Arizona State Highway 82 microbasin. The CDTs identified many concealed faults in the study area and possibly indicate deep water-saturated clay-rich sediments in the west-central portion of the study area. These sediments grade to more sand-rich saturated sediments to the south, with relatively thick, possibly unsaturated, sediments at the surface. Also, the CDTs may indicate deep saturated clay-rich sediments in the Highway 82 microbasin and in the Mount Benedict horst block from Proto Canyon south to the international border.
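The cell-by-cell combination rule described above (maximum of the three magnetic depth estimates, with gravity-based depths substituted over reversely magnetized rocks) is easy to express directly; the sketch below is a generic illustration with made-up grids, not the report's processing code.

```python
import numpy as np

def combine_depth_to_bedrock(euler, hgm, analytic_signal, gravity_based, reversed_nrm_mask):
    """Take the maximum of the three magnetic depth estimates at each grid cell, but
    substitute the gravity-based estimate wherever rocks with reversed natural remanent
    magnetization are mapped. All inputs are 2-D arrays of depth (m) on the same grid;
    the array names are illustrative."""
    combined = np.maximum.reduce([euler, hgm, analytic_signal])
    return np.where(reversed_nrm_mask, gravity_based, combined)

# Tiny synthetic grids to show the behaviour
euler = np.array([[120.0, 300.0], [80.0, 500.0]])
hgm = np.array([[150.0, 280.0], [95.0, 450.0]])
asig = np.array([[140.0, 310.0], [70.0, 520.0]])
grav = np.array([[200.0, 200.0], [200.0, 200.0]])
mask = np.array([[False, True], [False, False]])
print(combine_depth_to_bedrock(euler, hgm, asig, grav, mask))
```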
Miranda, Leandro E.; Omer, A.R.; Killgore, K.J.
2017-01-01
The Mississippi Alluvial Valley includes hundreds of floodplain lakes that support unique fish assemblages and high biodiversity. Irrigation practices in the valley have lowered the water table, increasing the cost of pumping water, and necessitating the use of floodplain lakes as a source of water for irrigation. This development has prompted the need to regulate water withdrawals to protect aquatic resources, but it is unknown how much water can be withdrawn from lakes before ecological integrity is compromised. To estimate withdrawal limits, we examined descriptors of lake water quality (i.e., total nitrogen, total phosphorus, turbidity, Secchi visibility, chlorophyll-a) and fish assemblages (species richness, diversity, composition) relative to maximum depth in 59 floodplain lakes. Change-point regression analysis was applied to identify critical depths at which the relationships between depth and lake descriptors exhibited a rapid shift in slope, suggesting possible thresholds. All our water quality and fish assemblage descriptors showed rapid changes relative to depth near 1.2–2.0 m maximum depth. This threshold span may help inform regulatory decisions about water withdrawal limits. Alternatives to explain the triggers of the observed threshold span are considered.
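A generic change-point (broken-stick) regression can be sketched as a brute-force search over candidate breakpoints; the specific change-point procedure and data of the study are not reproduced here.

```python
import numpy as np

def change_point_fit(x, y):
    """Brute-force two-segment regression: for each candidate breakpoint, fit separate
    least-squares lines to the points on either side and keep the breakpoint that
    minimizes the total sum of squared errors."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = (None, np.inf)
    for i in range(3, len(x) - 3):                      # require >= 3 points per segment
        sse = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            coeffs = np.polyfit(xs, ys, 1)
            sse += np.sum((ys - np.polyval(coeffs, xs)) ** 2)
        if sse < best[1]:
            best = (x[i], sse)
    return best  # (estimated change point, total SSE)

# Synthetic example: turbidity declines steeply with maximum depth up to ~1.5 m, then flattens
rng = np.random.default_rng(3)
depth = rng.uniform(0.5, 3.5, 59)
turb = np.where(depth < 1.5, 80 - 40 * depth, 20) + rng.normal(0, 3, 59)
cp, sse = change_point_fit(depth, turb)
print(f"estimated change point: {cp:.2f} m (SSE {sse:.0f})")
```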
Electrical conductivity of the Earth's mantle after one year of SWARM magnetic field measurements
NASA Astrophysics Data System (ADS)
Civet, François; Thebault, Erwan; Verhoeven, Olivier; Langlais, Benoit; Saturnino, Diana
2015-04-01
We present a global EM induction study using L1b Swarm satellite magnetic field measurements down to a depth of 2000 km. Starting from raw measurements, we first derive a model for the main magnetic field, correct the data for a lithospheric field model, and further select the data to reduce the contributions of the ionospheric field. These computations allowed us to keep full control of the data processing. We correct the residual field for outliers and estimate the spherical harmonic coefficients of the transient field for periods between 2 and 256 days. We used the full latitude range and all local times to keep a maximum amount of data. We perform a Bayesian inversion and construct a Markov chain during which model parameters are randomly updated at each iteration. We first consider regular layers of equal thickness, and extra layers are added where the conductivity contrast between successive layers exceeds a threshold value. The mean and maximum likelihood of the electrical conductivity profile are then estimated from the probability density function. The obtained profile notably shows a conductivity jump in the 600-700 km depth range, consistent with the olivine phase transition at 660 km depth. Our study is the first to show such a conductivity increase in this depth range without any a priori information on the internal structures. Assuming a pyrolitic mantle composition, this profile is interpreted in terms of temperature variations in the depth range where the probability density function is the narrowest. We finally obtain a temperature gradient in the lower mantle close to adiabatic.
DOT National Transportation Integrated Search
2012-12-01
CAPWAP analyses of open-ended steel pipe piles at 32 bridge sites in Alaska have been compiled with geotechnical and construction : information for 12- to 48-inch diameter piles embedded in predominantly cohesionless soils to maximum depths of 161-fe...
Law, George S.
2002-01-01
Periodic flooding occurs at lowlands and sinkholes in and adjacent to the flood plain of the West Fork Stones River in the western part of Murfreesboro, Tennessee. Flooding in this area commonly occurs during the winter months from December through March. The maximum water level that flood waters will reach in a lowland or sinkhole is controlled by the elevation of the land surrounding the site or the overflow outlet. Maximum water levels, independent of overflow from the river, were estimated to be reached in lowlands and sinkholes in the study area every 1 to 4 years. Minor overflow from the West Fork Stones River (less than 1 foot in depth) into the study area has been estimated to occur every 10 to 20 years. Moderate overflow from the river (1 to 2 feet in depth) occurs on average every 20 to 50 years, while major river overflow (in excess of 2 feet in depth) can be expected every 50 years. Rainfall information for the area, and streamflow and water-level measurements from the West Fork Stones River, lowlands, sinkholes, caves, and wells in the study area were used to develop a flood-prone area map, independent of overflow from the river, for the study area. Water-level duration and frequency relations, independent of overflow from the river, were estimated for several lowlands, sinkholes, and wells in the study area. These relations are used to characterize flooding in lowland areas of western Murfreesboro, Rutherford County, Tennessee.
Site Specific Probable Maximum Precipitation Estimates and Professional Judgement
NASA Astrophysics Data System (ADS)
Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.
2015-12-01
State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, lack of consistent methods, and generally for their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement, via a generic audit and one in-depth site-specific review, as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially interpolating 100-year dew point values rather than a more gauge-based approach. Site-specific reviews demonstrated that both issues had the potential to lower the PMP estimate significantly by affecting the in-place and transposed moisture maximization values and, in turn, the final controlling storm for a given basin size and PMP estimate.
NASA Astrophysics Data System (ADS)
Ono, T.; Takahashi, T.
2017-12-01
Non-structural mitigation measures such as flood hazard maps based on estimated inundation areas have become more important because heavy rains exceeding the design rainfall have occurred frequently in recent years. However, conventional methods may lead to an underestimation of the inundation area because the assumed locations of dike breach in river flood analysis are limited to sections where the water level exceeds the high-water level. The objective of this study is to consider the uncertainty of the estimated inundation area associated with differences in the assumed location of dike breach in river flood analysis. This study proposes a multiple-flood-scenario approach that automatically sets multiple dike breach locations in the river flood analysis. The major premise of adopting this method is that the location of a dike breach cannot be predicted correctly. The proposed method uses a dike breach interval, i.e., the distance between neighbouring assumed breach locations, and breach locations are set at every interval along the dike. The 2D shallow water equations were adopted as the governing equations of the river flood analysis, and the leap-frog scheme with a staggered grid was used. The river flood analysis was verified by application to the 2015 Kinugawa river flooding, and the proposed multiple flood scenarios were applied to the Akutagawa river in Takatsuki city. The computations for the Akutagawa river showed that comparing the maximum inundation depths computed for neighbouring breach locations enabled the proposed method to prevent underestimation of the estimated inundation area. Furthermore, analyses of the spatial distribution of inundation class and the maximum inundation depth at each of the measurement points identified the optimum dike breach interval that can reproduce the maximum inundation area using the minimum number of assumed breach locations. In brief, this study found the optimum dike breach interval for the Akutagawa river, which enables the maximum inundation area to be estimated efficiently and accurately. River flood analysis using the proposed method will contribute to mitigating flood disasters by improving the accuracy of the estimated inundation area.
Gravity survey of Dixie Valley, west-central Nevada
Schaefer, Donald H.
1983-01-01
Dixie Valley, a northeast-trending structural trough typical of valleys in the Basin and Range Province, is filled with a maximum of about 10,000 feet of alluvial and lacustrine deposits, as estimated from residual-gravity measurements obtained in this study. On the basis of gravity measurements at 300 stations on nine east-west profiles, the gravity residuals reach a maximum of 30 milligals near the south-central part of the valley. Results from a three-dimensional inversion model indicate that the central depression of the valley is offset to the west of the geographic axis. This offset is probably due to major faulting along the west side of the valley adjacent to the Stillwater Range. Comparison of depths to bedrock obtained during this study and depths obtained from a previous seismic-refraction study indicates a reasonably good correlation. A heterogeneous distribution of densities within the valley-fill deposits would account for differing depths determined by the two methods. (USGS)
Combined Gravimetric-Seismic Crustal Model for Antarctica
NASA Astrophysics Data System (ADS)
Baranov, Alexey; Tenzer, Robert; Bagherbandi, Mohammad
2018-01-01
The latest seismic data and improved information about the subglacial bedrock relief are used in this study to estimate the sediment and crustal thickness under the Antarctic continent. Since large parts of Antarctica are not yet covered by seismic surveys, the gravity and crustal structure models are used to interpolate the Moho information where seismic data are missing. The gravity information is also extended offshore to detect the Moho under continental margins and neighboring oceanic crust. The processing strategy involves the solution to the Vening Meinesz-Moritz's inverse problem of isostasy constrained on seismic data. A comparison of our new results with existing studies indicates a substantial improvement in the sediment and crustal models. The seismic data analysis shows significant sediment accumulations in Antarctica, with broad sedimentary basins. According to our result, the maximum sediment thickness in Antarctica is about 15 km under Filchner-Ronne Ice Shelf. The Moho relief closely resembles major geological and tectonic features. A rather thick continental crust of East Antarctic Craton is separated from a complex geological/tectonic structure of West Antarctica by the Transantarctic Mountains. The average Moho depth of 34.1 km under the Antarctic continent slightly differs from previous estimates. A maximum Moho deepening of 58.2 km under the Gamburtsev Subglacial Mountains in East Antarctica confirmed the presence of deep and compact orogenic roots. Another large Moho depth in East Antarctica is detected under Dronning Maud Land with two orogenic roots under Wohlthat Massif (48-50 km) and the Kottas Mountains (48-50 km) that are separated by a relatively thin crust along Jutulstraumen Rift. The Moho depth under central parts of the Transantarctic Mountains reaches 46 km. The maximum Moho deepening (34-38 km) in West Antarctica is under the Antarctic Peninsula. The Moho depth minima in East Antarctica are found under the Lambert Trench (24-28 km), while in West Antarctica the Moho depth minima are along the West Antarctic Rift System under the Bentley depression (20-22 km) and Ross Sea Ice Shelf (16-24 km). The gravimetric result confirmed a maximum extension of the Antarctic continental margins under the Ross Sea Embayment and the Weddell Sea Embayment with an extremely thin continental crust (10-20 km).
Estimation of composition of cosmic rays with E0 ≈ 10^17 - 10^18 eV
NASA Technical Reports Server (NTRS)
Glushkov, A. V.; Efimov, N. N.; Efremov, N. N.; Makarov, I. T.; Pravdin, M. I.; Dedenko, L. I.
1985-01-01
Fluctuations of the depth of shower maximum, obtained from analysis of electron and muon fluctuations and of the extensive air shower (EAS) Cerenkov light in the Yakutsk array data and data of other arrays, are considered. On this basis, the composition of primaries with E0 = 5×10^17 eV is estimated. An estimate of the gamma-quanta flux at E0 of 10^17 eV is given based on muon-poor showers.
Estimation of alewife biomass in Lake Michigan, 1967-1978
Hatch, Richard W.; Haack, Paul M.; Brown, Edward H.
1981-01-01
The buildup of salmonid populations in Lake Michigan through annual stockings of hatchery-reared fish may become limited by the quantity of forage fish, mainly alewives Alosa pseudoharengus, available for food. As a part of a continuing examination of salmonid predator-prey relations in Lake Michigan, we traced changes in alewife biomass estimated from bottom-trawl surveys conducted in late October and early November 1967–1978. Weight of adult alewives trawled per 0.5 hectare of bottom (10-minute drag) at 16 depths along eight transects between 1973 and 1977 formed a skewed distribution: 72 of 464 drags caught no alewives; 89 drags caught less than 1 kg; and 2 drags caught more than 100 kg (maximum 159 kg). Analysis of variance in normalized catch per tow indicated highly significant differences between the main effects of years and depths, and highly significant differences in the interactions of years and transects, years and depths, and transects and depths. Five geographic and depth strata, formed by combining parts of transects wherein mean catch rate did not differ significantly, were the basis for calculating annual estimates of adult alewife biomass (with 90% confidence intervals). Estimated biomass of alewives (±90% confidence limits) in Lake Michigan proper (Green Bay and Grand Traverse Bay excluded) rose gradually from 46,000 (±9,000) t in 1967 to 114,000 (±17,000) t in 1973, declined to 45,000 (±8,000) t in 1977, and rose to 77,000 (±19,000) t in 1978.
NASA Astrophysics Data System (ADS)
Muralidhara, .; Vasa, Nilesh J.; Singaperumal, M.
2010-02-01
A micro-electro-discharge machine (Micro EDM) was developed incorporating a piezoactuated direct drive tool feed mechanism for micromachining of Silicon using a copper tool. Tool and workpiece materials are removed during Micro EDM process which demand for a tool wear compensation technique to reach the specified depth of machining on the workpiece. An in-situ axial tool wear and machining depth measurement system is developed to investigate axial wear ratio variations with machining depth. Stepwise micromachining experiments on silicon wafer were performed to investigate the variations in the silicon removal and tool wear depths with increase in tool feed. Based on these experimental data, a tool wear compensation method is proposed to reach the desired depth of micromachining on silicon using copper tool. Micromachining experiments are performed with the proposed tool wear compensation method and a maximum workpiece machining depth variation of 6% was observed.
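If the axial wear ratio is defined as tool wear depth divided by the depth machined into the workpiece, a first-order compensation simply inflates the commanded tool feed. The sketch below states only that arithmetic under an assumed constant wear ratio; the study's actual compensation scheme, which accounts for the ratio varying with depth, is not reproduced.

```python
def compensated_feed(target_depth_um, wear_ratio):
    """If the axial wear ratio r = (axial tool wear)/(depth machined into the workpiece),
    then feeding the tool by D*(1 + r) leaves a machined depth of D after the tool tip
    has receded by r*D. First-order sketch assuming a constant wear ratio; a stepwise
    update of r with depth would be used in practice."""
    return target_depth_um * (1.0 + wear_ratio)

# Hypothetical values: 100 um target depth, assumed axial wear ratio of 0.35
print(f"tool feed required: {compensated_feed(100.0, 0.35):.1f} um")
```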
Estimated winter wheat yield from crop growth predicted by LANDSAT
NASA Technical Reports Server (NTRS)
Kanemasu, E. T.
1977-01-01
An evapotranspiration and growth model for winter wheat is reported. The inputs are daily solar radiation, maximum temperature, minimum temperature, precipitation/irrigation and leaf area index. The meteorological data were obtained from National Weather Service while LAI was obtained from LANDSAT multispectral scanner. The output provides daily estimates of potential evapotranspiration, transpiration, evaporation, soil moisture (50 cm depth), percentage depletion, net photosynthesis and dry matter production. Winter wheat yields are correlated with transpiration and dry matter accumulation.
NASA Astrophysics Data System (ADS)
Cochachin, Alejo; Huggel, Christian; Salazar, Cesar; Haeberli, Wilfried; Frey, Holger
2015-04-01
Over timescales of hundreds to thousands of years, ice masses in mountains have eroded bedrock and subglacial sediment, including the formation of overdeepenings and large moraine dams that now serve as basins for glacial lakes. Satellite-based studies found a total of 8355 glacial lakes in Peru, of which 830 lakes were observed in the Cordillera Blanca. Some of them have caused major disasters due to glacial lake outburst floods in the past decades. On the other hand, in view of shrinking glaciers, changing water resources, and the formation of new lakes, glacial lakes could serve as water reservoirs in the future. Here we present unprecedented bathymetric studies of 124 glacial lakes in the Cordillera Blanca, Huallanca, Huayhuash and Raura in the regions of Ancash, Huanuco and Lima. Measurements were carried out using a boat equipped with GPS, a total station and an echo sounder to measure the depth of the lakes. AutoCAD Civil 3D Land and ArcGIS were used to process the data, generate digital topographies of the lake bathymetries, and analyze parameters such as lake area, length, width, depth and volume. Based on that, we calculated empirical equations for mean depth as related to (1) area, (2) maximum length, and (3) maximum width. We then applied these three equations to all 830 glacial lakes of the Cordillera Blanca to estimate their volumes. Eventually we used three relations from the literature to assess the peak discharge of potential lake outburst floods, based on lake volumes, resulting in 3 x 3 peak discharge estimates. In terms of lake topography and geomorphology, the results indicate that the maximum depth is located in the central part for bedrock lakes and in the back part for lakes in moraine material. The best correlations are found for mean depth and maximum width; however, all three empirical relations show a large spread, reflecting the wide range of natural lake bathymetries. The volumes of the 124 lakes with bathymetries amount to 0.9 km3, while the volume of all glacial lakes of the Cordillera Blanca ranges between 1.15 and 1.29 km3. The small difference in volume between the large lake sample and the smaller sample of bathymetrically surveyed lakes is due to the large size of the measured lakes. The different distributions of lake volume and peak discharge indicate the range of variability in such estimates and provide valuable first-order information for management and adaptation efforts in the field of water resources and flood prevention.
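The empirical depth relations and the volume estimate amount to a power-law fit plus a multiplication. The sketch below shows that generic workflow on invented numbers; the coefficients fitted in the study are not reproduced.

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x**b by least squares in log-log space (e.g. mean depth vs. maximum
    width). Returns (a, b)."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

# Hypothetical bathymetric sample: maximum width (m) and surveyed mean depth (m)
width = np.array([120.0, 260.0, 340.0, 520.0, 800.0])
mean_depth = np.array([4.1, 7.8, 9.5, 13.2, 18.9])
a, b = fit_power_law(width, mean_depth)

# Apply the relation to an unsurveyed lake: volume = estimated mean depth * surface area
new_width, new_area = 450.0, 2.4e5          # m, m^2
est_depth = a * new_width**b
print(f"mean depth ~ {est_depth:.1f} m, volume ~ {est_depth * new_area:.2e} m^3")
```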
NASA Astrophysics Data System (ADS)
Castellarin, A.; Montanari, A.; Brath, A.
2002-12-01
The study derives Regional Depth-Duration-Frequency (RDDF) equations for a wide region of northern-central Italy (37,200 km2) by following an adaptation of the approach originally proposed by Alila [WRR, 36(7), 2000]. The proposed RDDF equations have a rather simple structure and allow an estimation of the design storm, defined as the rainfall depth expected for a given storm duration and recurrence interval, in any location of the study area for storm durations from 1 to 24 hours and for recurrence intervals up to 100 years. The reliability of the proposed RDDF equations represents the main concern of the study and it is assessed at two different levels. The first level considers the gauged sites and compares estimates of the design storm obtained with the RDDF equations with at-site estimates based upon the observed annual maximum series of rainfall depth and with design storm estimates resulting from a regional estimator recently developed for the study area through a Hierarchical Regional Approach (HRA) [Gabriele and Arnell, WRR, 27(6), 1991]. The second level performs a reliability assessment of the RDDF equations for ungauged sites by means of a jack-knife procedure. Using the HRA estimator as a reference term, the jack-knife procedure assesses the reliability of design storm estimates provided by the RDDF equations for a given location when dealing with the complete absence of pluviometric information. The results of the analysis show that the proposed RDDF equations represent practical and effective computational means for producing a first guess of the design storm at the available raingauges and reliable design storm estimates for ungauged locations. The first author gratefully acknowledges D.H. Burn for sponsoring the submission of the present abstract.
NASA Astrophysics Data System (ADS)
Chen, Ge; Yu, Fangjie
2015-01-01
In this study, we propose a new algorithm for estimating the annual maximum mixed layer depth (M2LD) analogous to a full range of local "ventilation" depth, and corresponding to the deepest surface to which atmospheric influence can be "felt." Two "seasonality indices" are defined, respectively, for temperature and salinity through Fourier analysis of their time series using Argo data, on the basis of which a significant local minimum of the index corresponding to a maximum penetration depth can be identified. A final M2LD is then determined by maximizing the thermal and haline effects. Unlike most of the previous schemes which use arbitrary thresholds or subjective criteria, the new algorithm is objective, robust, and property adaptive provided a significant periodic geophysical forcing such as annual cycle is available. The validity of our methodology is confirmed by the spatial correlation of the tropical dominance of saline effect (mainly related to rainfall cycle) and the extratropical dominance of thermal effect (mainly related to solar cycle). It is also recognized that the M2LD distribution is characterized by the coexistence of basin-scale zonal structures and eddy-scale local patches. In addition to the fundamental buoyancy forcing caused mainly by latitude-dependent solar radiation, the impressive two-scale pattern is found to be primarily attributable to (1) large-wave climate due to extreme winds (large scale) and (2) systematic eddy shedding as a result of persistent winds (mesoscale). Moreover, a general geographical consistency and a good quantitative agreement are found between the new algorithm and those published in the literature. However, a major discrepancy in our result is the existence of a constantly deeper M2LD band compared with other results in the midlatitude oceans of both hemispheres. Given the better correspondence of our M2LDs with the depth of the oxygen saturation limit, it is argued that there might be a systematic underestimation with existing criteria in these regions. Our results demonstrate that the M2LD may serve as an integrated proxy for studying the coherent multidisciplinary variabilities of the coupled ocean-atmosphere system.
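A toy version of the seasonality-index idea, the amplitude of the annual harmonic computed depth by depth, with the maximum penetration depth taken at a local minimum of the index, is sketched below. The actual M2LD algorithm combines temperature and salinity indices and applies significance criteria that are not reproduced here; the synthetic profile and the simple minimum search are assumptions.

```python
import numpy as np

def annual_amplitude(series):
    """Amplitude of the annual harmonic of a monthly climatology (12 values) via FFT."""
    spec = np.fft.rfft(series - series.mean())
    return 2.0 * np.abs(spec[1]) / len(series)   # k = 1 is the once-per-year cycle

def m2ld_from_profile(depths, monthly_by_depth):
    """Compute the annual-cycle amplitude at each depth and take the depth of the first
    local minimum of that index as the maximum penetration depth (toy criterion)."""
    index = np.array([annual_amplitude(m) for m in monthly_by_depth])
    for i in range(1, len(index) - 1):
        if index[i] < index[i - 1] and index[i] <= index[i + 1]:
            return depths[i], index
    return depths[-1], index

# Synthetic profile: the annual cycle decays with depth and nearly vanishes below ~150 m
depths = np.arange(0, 400, 25)
months = np.arange(12)
rng = np.random.default_rng(4)
monthly = [5.0 * np.exp(-z / 60.0) * np.cos(2 * np.pi * months / 12.0)
           + 0.3 * rng.normal(size=12) for z in depths]
m2ld, idx = m2ld_from_profile(depths, monthly)
print(f"estimated maximum penetration depth (M2LD): {m2ld} m")
```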
NASA Astrophysics Data System (ADS)
Rastogi, Deeksha; Kao, Shih-Chieh; Ashfaq, Moetasim; Mei, Rui; Kabela, Erik D.; Gangrade, Sudershan; Naz, Bibi S.; Preston, Benjamin L.; Singh, Nagendra; Anantharaj, Valentine G.
2017-05-01
Probable maximum precipitation (PMP), defined as the largest rainfall depth that could physically occur under a series of adverse atmospheric conditions, has been an important design criterion for critical infrastructures such as dams and nuclear power plants. To understand how PMP may respond to projected future climate forcings, we used a physics-based numerical weather simulation model to estimate PMP across various durations and areas over the Alabama-Coosa-Tallapoosa (ACT) River Basin in the southeastern United States. Six sets of Weather Research and Forecasting (WRF) model experiments driven by both reanalysis and global climate model projections, with a total of 120 storms, were conducted. The depth-area-duration relationship was derived for each set of WRF simulations and compared with the conventional PMP estimates. Our results showed that PMP driven by projected future climate forcings is higher than 1981-2010 baseline values by around 20% in the 2021-2050 near-future and 44% in the 2071-2100 far-future periods. The additional sensitivity simulations of background air temperature warming also showed an enhancement of PMP, suggesting that atmospheric warming could be one important factor controlling the increase in PMP. In light of the projected increase in precipitation extremes under a warming environment, the reasonableness and role of PMP deserve more in-depth examination.
NASA Astrophysics Data System (ADS)
Simeonov, J.; Holland, K. T.
2015-12-01
We developed an inversion model for river bathymetry and discharge estimation based on measurements of surface currents, water surface elevation and shoreline coordinates. The model uses a simplification of the 2D depth-averaged steady shallow water equations based on a streamline-following system of coordinates and assumes a spatially uniform bed friction coefficient and eddy viscosity. The spatial resolution of the predicted bathymetry is related to the resolution of the surface current measurements. The discharge is determined by minimizing the difference between the predicted and the measured streamwise variation of the total head. The inversion model was tested using in situ and remote sensing measurements of the Kootenai River east of Bonners Ferry, ID. The measurements were obtained in August 2010 when the discharge was about 223 m3/s and the maximum river depth was about 6.5 m. Surface currents covering a 10 km reach with 8 m spatial resolution were estimated from airborne infrared video and were converted to depth-averaged currents using acoustic Doppler current profiler (ADCP) measurements along eight cross-stream transects. The streamwise profile of the water surface elevation was measured using real-time kinematic GPS from a drifting platform. The value of the friction coefficient was obtained from forward calibration simulations that minimized the difference between the predicted and measured velocity and water level along the river thalweg. The predicted along/cross-channel water depth variation was compared to the depth measured with a multibeam echo sounder. The rms error between the measured and predicted depth along the thalweg was found to be about 60 cm, and the estimated discharge was 5% smaller than the discharge measured by the ADCP.
Hirata, K.; Takahashi, H.; Geist, E.; Satake, K.; Tanioka, Y.; Sugioka, H.; Mikada, H.
2003-01-01
Micro-tsunami waves with a maximum amplitude of 4-6 mm were detected with the ocean-bottom pressure gauges on a cabled deep seafloor observatory south of Hokkaido, Japan, following the January 28, 2000 earthquake (Mw 6.8) in the southern Kuril subduction zone. We model the observed micro-tsunami and estimate the focal depth and other source parameters such as fault length and amount of slip using grid searching with the least-squares method. The source depth and stress drop for the January 2000 earthquake are estimated to be 50 km and 7 MPa, respectively, with possible ranges of 45-55 km and 4-13 MPa. Focal depth of typical inter-plate earthquakes in this region ranges from 10 to 20 km and stress drop of inter-plate earthquakes generally is around 3 MPa. The source depth and stress drop estimates suggest that the earthquake was an intra-slab event in the subducting Pacific plate, rather than an inter-plate event. In addition, for a prescribed fault width of 30 km, the fault length is estimated to be 15 km, with possible ranges of 10-20 km, which is the same as the previously determined aftershock distribution. The corresponding estimate for seismic moment is 2.7×10^19 Nm with possible ranges of 2.3×10^19-3.2×10^19 Nm. Standard tide gauges along the nearby coast did not record any tsunami signal. High-precision ocean-bottom pressure measurements offshore thus make it possible to determine fault parameters of moderate-sized earthquakes in subduction zones using open-ocean tsunami waveforms. Published by Elsevier Science B. V.
Age, extent and carbon storage of the central Congo Basin peatland complex.
Dargie, Greta C; Lewis, Simon L; Lawson, Ian T; Mitchard, Edward T A; Page, Susan E; Bocko, Yannick E; Ifo, Suspense A
2017-02-02
Peatlands are carbon-rich ecosystems that cover just three per cent of Earth's land surface, but store one-third of soil carbon. Peat soils are formed by the build-up of partially decomposed organic matter under waterlogged anoxic conditions. Most peat is found in cool climatic regions where unimpeded decomposition is slower, but deposits are also found under some tropical swamp forests. Here we present field measurements from one of the world's most extensive regions of swamp forest, the Cuvette Centrale depression in the central Congo Basin. We find extensive peat deposits beneath the swamp forest vegetation (peat defined as material with an organic matter content of at least 65 per cent to a depth of at least 0.3 metres). Radiocarbon dates indicate that peat began accumulating from about 10,600 years ago, coincident with the onset of more humid conditions in central Africa at the beginning of the Holocene. The peatlands occupy large interfluvial basins, and seem to be largely rain-fed and ombrotrophic-like (of low nutrient status) systems. Although the peat layer is relatively shallow (with a maximum depth of 5.9 metres and a median depth of 2.0 metres), by combining in situ and remotely sensed data, we estimate the area of peat to be approximately 145,500 square kilometres (95 per cent confidence interval of 131,900-156,400 square kilometres), making the Cuvette Centrale the most extensive peatland complex in the tropics. This area is more than five times the maximum possible area reported for the Congo Basin in a recent synthesis of pantropical peat extent. We estimate that the peatlands store approximately 30.6 petagrams (30.6 × 10^15 grams) of carbon belowground (95 per cent confidence interval of 6.3-46.8 petagrams of carbon)-a quantity that is similar to the above-ground carbon stocks of the tropical forests of the entire Congo Basin. Our result for the Cuvette Centrale increases the best estimate of global tropical peatland carbon stocks by 36 per cent, to 104.7 petagrams of carbon (minimum estimate of 69.6 petagrams of carbon; maximum estimate of 129.8 petagrams of carbon). This stored carbon is vulnerable to land-use change and any future reduction in precipitation.
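The carbon-stock estimate is, at its core, area × depth × dry bulk density × carbon fraction. The sketch below shows that arithmetic with typical tropical-peat density and carbon-content values assumed for illustration; with the median depth it lands in the same order of magnitude as, but below, the published 30.6 Pg C, which is based on measured core properties and the full depth distribution.

```python
def peat_carbon_stock_pg(area_km2, mean_depth_m, bulk_density_g_cm3, carbon_fraction):
    """Belowground peat carbon stock = area * depth * dry bulk density * carbon fraction.
    The bulk density and carbon fraction used below are assumed typical values for
    tropical peat, not the core-by-core values used in the paper."""
    volume_m3 = area_km2 * 1.0e6 * mean_depth_m
    carbon_g = volume_m3 * (bulk_density_g_cm3 * 1.0e6) * carbon_fraction  # density in g/m^3
    return carbon_g / 1.0e15                                               # petagrams

# 145,500 km^2 and ~2 m median depth from the abstract; density and C fraction assumed
print(f"~{peat_carbon_stock_pg(145500, 2.0, 0.10, 0.55):.0f} Pg C")
```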
lakemorpho: Calculating lake morphometry metrics in R.
Hollister, Jeffrey; Stachelek, Joseph
2017-01-01
Metrics describing the shape and size of lakes, known as lake morphometry metrics, are important for any limnological study. In cases where a lake has long been the subject of study, these data are often already collected and openly available. Many other lakes have these data collected, but access is challenging as the data are often stored on individual computers (or worse, in filing cabinets) and are available only to the primary investigators. The vast majority of lakes fall into a third category in which the data are not available. This makes broad-scale modelling of lake ecology a challenge, as some of the key information about in-lake processes is unavailable. While this valuable in situ information may be difficult to obtain, several national datasets exist that may be used to model and estimate lake morphometry. In particular, digital elevation models and hydrography have been shown to be predictive of several lake morphometry metrics. The R package lakemorpho has been developed to utilize these data and estimate the following morphometry metrics: surface area, shoreline length, major axis length, minor axis length, major and minor axis length ratio, shoreline development, maximum depth, mean depth, volume, maximum lake length, mean lake width, maximum lake width, and fetch. In this software tool article we describe the motivation behind developing lakemorpho, discuss the implementation in R, and describe the use of lakemorpho with an example of a typical use case.
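As a plain illustration of one of the metrics listed above (not code from the lakemorpho package, which is written in R), shoreline development compares the shoreline length with the circumference of a circle of equal area; a value of 1 corresponds to a perfectly circular lake.

```python
import math

def shoreline_development(shoreline_length_m: float, surface_area_m2: float) -> float:
    """Shoreline development index: shoreline length divided by the
    circumference of a circle with the same surface area."""
    return shoreline_length_m / (2.0 * math.sqrt(math.pi * surface_area_m2))

# Example: a 2 km^2 lake with 9 km of shoreline
print(shoreline_development(9_000.0, 2_000_000.0))  # ~1.8, a moderately irregular shoreline
```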
Estimation of Uncertainties in Stage-Discharge Curve for an Experimental Himalayan Watershed
NASA Astrophysics Data System (ADS)
Kumar, V.; Sen, S.
2016-12-01
Various water resource projects developed on rivers originating from the Himalayan region, the "Water Tower of Asia", play an important role in downstream development. Flow measurements at the desired river site are critical for river engineers and hydrologists for water resources planning and management, flood forecasting, reservoir operation, and flood inundation studies. However, an accurate discharge assessment of these mountainous rivers is costly, tedious and frequently dangerous to operators during flood events. Currently, in India, discharge estimation relies on the stage-discharge relationship known as the rating curve. This relationship is affected by a high degree of uncertainty. Estimating the uncertainty of the rating curve remains a relevant challenge because it is not easy to parameterize. The main sources of rating curve uncertainty are errors in discharge measurement, variation in hydraulic conditions, and errors in depth measurement. In this study our objective is to obtain the rating curve parameters that best fit the limited record of observations and to estimate the uncertainties at different depths obtained from the rating curve. The rating curve parameters of the standard power law are estimated for three different streams of the Aglar watershed, located in the lesser Himalayas, by a maximum-likelihood estimator. Quantification of the uncertainties in the developed rating curves is obtained from the estimated variances and covariances of the rating curve parameters. Results showed that the uncertainties varied with catchment behavior, with errors ranging between 0.006 and 1.831 m3/s. Discharge uncertainty in the Aglar watershed streams depends significantly on the extent of extrapolation outside the range of observed water levels. Extrapolation analysis confirmed that extrapolating more than 15% beyond the maximum and 5% below the minimum observed discharges is not recommended for these mountainous gauging sites.
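A minimal sketch of this workflow, assuming the standard power-law rating Q = a(h - h0)^b: the parameters are fitted to stage-discharge pairs and the parameter covariance matrix is propagated (first-order delta method) to an uncertainty on discharge at a given stage. It uses nonlinear least squares as an illustrative stand-in for the authors' maximum-likelihood estimator, and the stage-discharge values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, h0, b):
    """Standard power-law rating curve Q = a * (h - h0)**b."""
    return a * (h - h0) ** b

# Hypothetical stage (m) and discharge (m3/s) observations
h_obs = np.array([0.30, 0.45, 0.60, 0.80, 1.00, 1.20])
q_obs = np.array([0.15, 0.55, 1.20, 2.60, 4.50, 7.10])

# Bounds keep h0 below the lowest observed stage so (h - h0) stays positive
popt, pcov = curve_fit(rating, h_obs, q_obs, p0=[5.0, 0.1, 1.5],
                       bounds=([0.0, -1.0, 0.5], [100.0, 0.25, 5.0]))

def discharge_with_uncertainty(h):
    """First-order standard error of Q(h) from the parameter covariance matrix."""
    a, h0, b = popt
    q = rating(h, *popt)
    # partial derivatives of Q with respect to (a, h0, b)
    J = np.array([q / a, -a * b * (h - h0) ** (b - 1), q * np.log(h - h0)])
    return q, float(np.sqrt(J @ pcov @ J))

print(discharge_with_uncertainty(1.10))
```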
NASA Astrophysics Data System (ADS)
Olurin, Oluwaseun T.; Ganiyu, Saheed A.; Hammed, Olaide S.; Aluko, Taiwo J.
2016-10-01
This study presents the results of spectral analysis of magnetic data over the Abeokuta area, Southwestern Nigeria, using the fast Fourier transform (FFT) in Microsoft Excel. The study deals with the quantitative interpretation of airborne magnetic data (Sheet No. 260) acquired by the Nigerian Geological Survey Agency in 2009. In order to minimise aliasing error, the aeromagnetic data were gridded at a spacing of 1 km. A spectral analysis technique was used to estimate the depths to the magnetic basement. The interpretation shows that the magnetic sources are mainly distributed at two levels. The shallow sources (minimum depth) range in depth from 0.103 to 0.278 km below ground level and are inferred to be due to intrusions within the region. The deeper sources (maximum depth) range in depth from 2.739 to 3.325 km below ground and are attributed to the underlying basement.
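The depth estimates above come from the slope of the radially averaged log power spectrum: for an ensemble of sources at depth z the power decays roughly as exp(-2|k|z), so z ≈ -slope/2 when the wavenumber k is in radians per kilometre. A sketch of that standard procedure, tested on a synthetic grid rather than the Abeokuta data, is given below.

```python
import numpy as np

def basement_depth_estimate(grid, spacing_km, k_band=(0.05, 0.5)):
    """Average source depth from the radially averaged power spectrum of a
    gridded magnetic anomaly, using P(k) ~ exp(-2*k*z) so z = -slope/2
    for ln P versus k (k in rad/km)."""
    ny, nx = grid.shape
    spec = np.abs(np.fft.fft2(grid - grid.mean())) ** 2
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=spacing_km)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=spacing_km)
    kr = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    bins = np.linspace(k_band[0], k_band[1], 20)
    k_mid, logp = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):       # radial averaging in rings
        sel = (kr >= lo) & (kr < hi)
        if sel.any():
            k_mid.append(0.5 * (lo + hi))
            logp.append(np.log(spec[sel].mean()))
    slope = np.polyfit(k_mid, logp, 1)[0]
    return -slope / 2.0  # depth in km

# Synthetic check: a field whose power spectrum decays as exp(-2*k*z) with z = 3 km
np.random.seed(0)
nx = 256
k = np.sqrt((2 * np.pi * np.fft.fftfreq(nx, d=1.0))[None, :] ** 2 +
            (2 * np.pi * np.fft.fftfreq(nx, d=1.0))[:, None] ** 2)
amp = np.exp(-k * 3.0)                             # amplitude decay for z = 3 km
phase = np.exp(2j * np.pi * np.random.rand(nx, nx))
field = np.real(np.fft.ifft2(amp * phase))
print(basement_depth_estimate(field, spacing_km=1.0))  # ~3 km
```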
Water partitioning in the Earth's mantle
NASA Astrophysics Data System (ADS)
Inoue, Toru; Wada, Tomoyuki; Sasaki, Rumi; Yurimoto, Hisayoshi
2010-11-01
We have conducted H2O partitioning experiments between wadsleyite and ringwoodite and between ringwoodite and perovskite at 1673 K and 1873 K, respectively. These experiments were performed in order to constrain the relative distribution of H2O in the upper mantle, the mantle transition zone, and the lower mantle. We successfully synthesized coexisting mineral assemblages of wadsleyite-ringwoodite and ringwoodite-perovskite that were large enough to measure the H2O contents by secondary ion mass spectrometry (SIMS). Combining our previous H2O partitioning data (Chen et al., 2002) with the present results, the determined water partitioning between olivine, wadsleyite, ringwoodite, and perovskite under H2O-rich fluid-saturated conditions is 6:30:15:1. Because the maximum H2O storage capacity in wadsleyite is ∼3.3 wt% (e.g. Inoue et al., 1995), the possible maximum H2O storage capacities in the olivine high-pressure polymorphs are as follows: ∼0.7 wt% in olivine (upper mantle just above 410 km depth), ∼3.3 wt% in wadsleyite (410-520 km depth), ∼1.7 wt% in ringwoodite (520-660 km depth), and ∼0.1 wt% in perovskite (lower mantle). If we assume ∼0.2 wt% H2O in wadsleyite in the mantle transition zone, as estimated by recent electrical conductivity measurements (e.g. Dai and Karato, 2009), the estimated H2O contents throughout the mantle are as follows: ∼0.04 wt% in olivine (upper mantle just above 410 km depth), ∼0.2 wt% in wadsleyite (410-520 km depth), ∼0.1 wt% in ringwoodite (520-660 km depth) and ∼0.007 wt% in perovskite (lower mantle). Thus, the mantle transition zone should contain a large water reservoir compared to the upper mantle and the lower mantle.
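The quoted mantle H2O contents follow directly from the partition ratios: with the 6:30:15:1 partitioning anchored to ~0.2 wt% H2O in wadsleyite, the other phases scale proportionally. A small arithmetic check:

```python
# Partitioning of H2O between mantle phases (olivine : wadsleyite : ringwoodite : perovskite)
ratios = {"olivine": 6, "wadsleyite": 30, "ringwoodite": 15, "perovskite": 1}

# Anchor the scale with ~0.2 wt% H2O in wadsleyite (electrical conductivity estimate)
h2o_wadsleyite_wt = 0.2
scale = h2o_wadsleyite_wt / ratios["wadsleyite"]

for phase, r in ratios.items():
    print(f"{phase:12s} ~{r * scale:.3f} wt% H2O")
# olivine ~0.040, wadsleyite ~0.200, ringwoodite ~0.100, perovskite ~0.007 wt%
```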
Stevens, Michael R.; Flynn, Jennifer L.; Stephens, Verlin C.; Verdin, Kristine L.
2011-01-01
During 2009, the U.S. Geological Survey, in cooperation with Gunnison County, initiated a study to estimate the potential for postwildfire debris flows to occur in the drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble, Colorado. Currently (2010), these drainage basins are unburned but could be burned by a future wildfire. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of postwildfire debris-flow occurrence and debris-flow volumes for drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble. Data for the postwildfire debris-flow models included drainage basin area; area burned and burn severity; percentage of burned area; soil properties; rainfall total and intensity for the 5- and 25-year-recurrence, 1-hour-duration-rainfall; and topographic and soil property characteristics of the drainage basins occupied by the four creeks. A quasi-two-dimensional floodplain computer model (FLO-2D) was used to estimate the spatial distribution and the maximum instantaneous depth of the postwildfire debris-flow material during debris flow on the existing debris-flow fans that issue from the outlets of the four major drainage basins. The postwildfire debris-flow probabilities at the outlet of each drainage basin range from 1 to 19 percent for the 5-year-recurrence, 1-hour-duration rainfall, and from 3 to 35 percent for 25-year-recurrence, 1-hour-duration rainfall. The largest probabilities for postwildfire debris flow are estimated for Raspberry Creek (19 and 35 percent), whereas estimated debris-flow probabilities for the three other creeks range from 1 to 6 percent. The estimated postwildfire debris-flow volumes at the outlet of each creek range from 7,500 to 101,000 cubic meters for the 5-year-recurrence, 1-hour-duration rainfall, and from 9,400 to 126,000 cubic meters for the 25-year-recurrence, 1-hour-duration rainfall. The largest postwildfire debris-flow volumes were estimated for Carbonate Creek and Milton Creek drainage basins, for both the 5- and 25-year-recurrence, 1-hour-duration rainfalls. Results from FLO-2D modeling of the 5-year and 25-year recurrence, 1-hour rainfalls indicate that the debris flows from the four drainage basins would reach or nearly reach the Crystal River. The model estimates maximum instantaneous depths of debris-flow material during postwildfire debris flows that exceeded 5 meters in some areas, but the differences in model results between the 5-year and 25-year recurrence, 1-hour rainfalls are small. Existing stream channels or topographic flow paths likely control the distribution of debris-flow material, and the difference in estimated debris-flow volume (about 25 percent more volume for the 25-year-recurrence, 1-hour-duration rainfall compared to the 5-year-recurrence, 1-hour-duration rainfall) does not seem to substantially affect the estimated spatial distribution of debris-flow material. Historically, the Marble area has experienced periodic debris flows in the absence of wildfire. This report estimates the probability and volume of debris flow and maximum instantaneous inundation area depths after hypothetical wildfire and rainfall. This postwildfire debris-flow report does not address the current (2010) prewildfire debris-flow hazards that exist near Marble.
Estimation of foot pressure from human footprint depths using 3D scanner
NASA Astrophysics Data System (ADS)
Wibowo, Dwi Basuki; Haryadi, Gunawan Dwi; Priambodo, Agus
2016-03-01
The analysis of normal and pathological variation in human foot morphology is central to several biomedical disciplines, including orthopedics, orthotic design, sports sciences, and physical anthropology, and it is also important for efficient footwear design. A classic and frequently used approach to study foot morphology is analysis of the footprint shape and footprint depth. Footprints are relatively easy to produce and to measure, and they can be preserved naturally in different soils. In this study, we correlate footprint depth with the corresponding foot pressure of an individual using a 3D scanner. Several approaches are used for modeling and estimating footprint depths and foot pressures. The deepest footprint point is calculated as the difference between the maximum and minimum z-coordinates, and the average foot pressure is calculated as the ground reaction force (GRF) divided by the foot contact area and is taken to correspond to the average footprint depth. Footprint depth was evaluated by importing the 3D scanner file (dxf) into AutoCAD; the z-coordinates were then sorted from the highest to the lowest value in Microsoft Excel to display footprint depth in different colors. This research is only a qualitative study because no foot pressure device was used as a comparator; the resulting maximum pressure is 3.02 N/cm2 on the calcaneus, 3.66 N/cm2 on the lateral arch, and 3.68 N/cm2 on the metatarsal and hallux.
Stereo pair design for cameras with a fovea
NASA Technical Reports Server (NTRS)
Chettri, Samir R.; Keefe, Michael; Zimmerman, John R.
1992-01-01
We describe the methodology for the design and selection of a stereo pair when the cameras have a greater concentration of sensing elements in the center of the image plane (fovea). Binocular vision is important for the purpose of depth estimation, which in turn is important in a variety of applications such as gaging and autonomous vehicle guidance. We assume that one camera has square pixels of size dv and the other has pixels of size rdv, where r is between 0 and 1. We then derive results for the average error, the maximum error, and the error distribution in the depth determination of a point. These results can be shown to be a general form of the results for the case when the cameras have equal sized pixels. We discuss the behavior of the depth estimation error as we vary r and the tradeoffs between the extra processing time and increased accuracy. Knowing these results makes it possible to study the case when we have a pair of cameras with a fovea.
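For intuition on how the pixel-size ratio r enters depth accuracy, a generic stereo error-propagation sketch (not the paper's derivation) can be built on Z = fB/d: a disparity quantization error proportional to the combined pixel sizes maps to a depth error of roughly Z^2/(fB) times that amount. All numbers below are illustrative.

```python
def depth_error(Z, focal_mm, baseline_mm, pixel_mm, r):
    """Approximate depth error from disparity quantization for a stereo pair in
    which one camera has pixel size pixel_mm and the other r*pixel_mm.
    Uses Z = f*B/d, so |dZ| ~ Z**2 / (f*B) * |dd|, with |dd| taken here as half
    the combined pixel size (a simple quantization bound, an assumption)."""
    disparity_error_mm = 0.5 * (pixel_mm + r * pixel_mm)
    return (Z ** 2) / (focal_mm * baseline_mm) * disparity_error_mm

# Example: 25 mm lens, 200 mm baseline, 0.01 mm pixels, object at 2 m
for r in (1.0, 0.5, 0.25):
    print(r, round(depth_error(2000.0, 25.0, 200.0, 0.01, r), 2), "mm")
```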
Huang, Yongfang; Gang, Tieqiang; Chen, Lijie
2017-01-01
For pre-corroded aluminum alloy 7075-T6, the interacting effects of two neighboring pits on the stress concentration are comprehensively analyzed by considering various relative position parameters (inclination angle θ and dimensionless spacing parameter λ) and pit depth (d) with the finite element method. According to the severity of the stress concentration, the critical corrosion regions, bearing high susceptibility to fatigue damage, are determined for intersecting and adjacent pits, respectively. A straightforward approach is accordingly proposed to conservatively estimate the combined stress concentration factor induced by two neighboring pits, and a concrete application example is presented. It is found that for intersecting pits, the normalized stress concentration factor Ktnor increases with the increase of θ and λ and always reaches its maximum at θ = 90°, yet for adjacent pits, Ktnor decreases with the increase of λ and the maximum value appears at a slight asymmetric location. The simulations reveal that Ktnor follows a linear and an exponential relationship with the dimensionless depth parameter Rd for intersecting and adjacent cases, respectively. PMID:28772758
Abou-Taleb, W M; Hassan, M H; El Mallah, E A; Kotb, S M
2018-05-01
Photoneutron production and the dose equivalent in the head assembly of the 15 MV Elekta Precise medical linac operating in the Faculty of Medicine at Alexandria University were estimated with the MCNP5 code. Photoneutron spectra were calculated in air and inside a water phantom at different depths as a function of the radiation field size. The maximum neutron fluence is 3.346×10^-9 n/cm^2-e for a 30×30 cm^2 field size at 2-4 cm depth in the phantom. The dose equivalent due to fast neutrons increases as the field size increases, reaching a maximum of 0.912 ± 0.05 mSv/Gy at a depth between 2 and 4 cm in the water phantom for a 40×40 cm^2 field size. Photoneutron fluence and dose equivalent are larger at 100 cm from the isocenter than at 35 cm from the treatment room wall. Copyright © 2018 Elsevier Ltd. All rights reserved.
Schwartz, Rachel S; Mueller, Rachel L
2010-01-11
Estimates of divergence dates between species improve our understanding of processes ranging from nucleotide substitution to speciation. Such estimates are frequently based on molecular genetic differences between species; therefore, they rely on accurate estimates of the number of such differences (i.e. substitutions per site, measured as branch length on phylogenies). We used simulations to determine the effects of dataset size, branch length heterogeneity, branch depth, and analytical framework on branch length estimation across a range of branch lengths. We then reanalyzed an empirical dataset for plethodontid salamanders to determine how inaccurate branch length estimation can affect estimates of divergence dates. The accuracy of branch length estimation varied with branch length, dataset size (both number of taxa and sites), branch length heterogeneity, branch depth, dataset complexity, and analytical framework. For simple phylogenies analyzed in a Bayesian framework, branches were increasingly underestimated as branch length increased; in a maximum likelihood framework, longer branch lengths were somewhat overestimated. Longer datasets improved estimates in both frameworks; however, when the number of taxa was increased, estimation accuracy for deeper branches was less than for tip branches. Increasing the complexity of the dataset produced more misestimated branches in a Bayesian framework; however, in an ML framework, more branches were estimated more accurately. Using ML branch length estimates to re-estimate plethodontid salamander divergence dates generally resulted in an increase in the estimated age of older nodes and a decrease in the estimated age of younger nodes. Branch lengths are misestimated in both statistical frameworks for simulations of simple datasets. However, for complex datasets, length estimates are quite accurate in ML (even for short datasets), whereas few branches are estimated accurately in a Bayesian framework. Our reanalysis of empirical data demonstrates the magnitude of effects of Bayesian branch length misestimation on divergence date estimates. Because the length of branches for empirical datasets can be estimated most reliably in an ML framework when branches are <1 substitution/site and datasets are ≥1 kb, we suggest that divergence date estimates using datasets, branch lengths, and/or analytical techniques that fall outside of these parameters should be interpreted with caution.
Hydro and morphodynamic simulations for probabilistic estimates of munitions mobility
NASA Astrophysics Data System (ADS)
Palmsten, M.; Penko, A.
2017-12-01
Probabilistic estimates of waves, currents, and sediment transport at underwater munitions remediation sites are necessary to constrain probabilistic predictions of munitions exposure, burial, and migration. To address this need, we produced ensemble simulations of hydrodynamic flow and morphologic change with Delft3D, a coupled system of wave, circulation, and sediment transport models. We have set up the Delft3D model simulations at the Army Corps of Engineers Field Research Facility (FRF) in Duck, NC, USA. The FRF is the prototype site for the near-field munitions mobility model, which integrates far-field and near-field munitions mobility simulations. An extensive array of in-situ and remotely sensed oceanographic, bathymetric, and meteorological data is available at the FRF, as well as existing observations of munitions mobility for model testing. Here, we present results of ensemble Delft3D hydro- and morphodynamic simulations at Duck. A nested Delft3D simulation runs an outer grid that extends 12 km in the along-shore and 3.7 km in the cross-shore direction with 50-m resolution and a maximum depth of approximately 17 m. The inner nested grid extends 3.2 km in the along-shore and 1.2 km in the cross-shore direction with 5-m resolution and a maximum depth of approximately 11 m. The initial model bathymetry of the inner nested grid is defined as the most recent survey or remotely sensed estimate of water depth. Delft3D-WAVE and FLOW are driven with spectral wave measurements from a Waverider buoy in 17-m depth located on the offshore boundary of the outer grid. The spectral wave output and the water levels from the outer grid are used to define the boundary conditions for the inner nested high-resolution grid, in which the coupled Delft3D WAVE-FLOW-MORPHOLOGY model is run. The ensemble results are compared to the wave, current, and bathymetry observations collected at the FRF.
On the bandwidth of the plenoptic function.
Do, Minh N; Marchand-Maillet, Davy; Vetterli, Martin
2012-02-01
The plenoptic function (POF) provides a powerful conceptual tool for describing a number of problems in image/video processing, vision, and graphics. For example, image-based rendering is shown as sampling and interpolation of the POF. In such applications, it is important to characterize the bandwidth of the POF. We study a simple but representative model of the scene where band-limited signals (e.g., texture images) are "painted" on smooth surfaces (e.g., of objects or walls). We show that, in general, the POF is not band limited unless the surfaces are flat. We then derive simple rules to estimate the essential bandwidth of the POF for this model. Our analysis reveals that, in addition to the maximum and minimum depths and the maximum frequency of painted signals, the bandwidth of the POF also depends on the maximum surface slope. With a unifying formalism based on multidimensional signal processing, we can verify several key results in POF processing, such as induced filtering in space and depth-corrected interpolation, and quantify the necessary sampling rates. © 2011 IEEE
Machine vision guided sensor positioning system for leaf temperature assessment
NASA Technical Reports Server (NTRS)
Kim, Y.; Ling, P. P.; Janes, H. W. (Principal Investigator)
2001-01-01
A sensor positioning system was developed for monitoring plants' well-being using a non-contact sensor. Image processing algorithms were developed to identify a target region on a plant leaf. A novel algorithm to recover view depth was developed by using a camera equipped with a computer-controlled zoom lens. The methodology has improved depth recovery resolution over a conventional monocular imaging technique. An algorithm was also developed to find a maximum enclosed circle on a leaf surface so the conical field-of-view of an infrared temperature sensor could be filled by the target without peripheral noise. The center of the enclosed circle and the estimated depth were used to define the sensor 3-D location for accurate plant temperature measurement.
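One common way to find the largest enclosed circle in a segmented leaf region, sketched below under the assumption of a binary leaf mask, is the Euclidean distance transform: the foreground pixel farthest from the background is the circle centre and that distance is the radius. The abstract does not state the paper's exact algorithm; this is a generic stand-in.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def largest_enclosed_circle(mask):
    """Return (row, col, radius_px) of the largest circle that fits inside the
    foreground of a binary mask, via the Euclidean distance transform."""
    dist = distance_transform_edt(mask)
    idx = np.unravel_index(np.argmax(dist), dist.shape)
    return idx[0], idx[1], dist[idx]

# Synthetic "leaf": an ellipse-shaped mask
yy, xx = np.mgrid[0:200, 0:300]
leaf = ((xx - 150) / 120.0) ** 2 + ((yy - 100) / 70.0) ** 2 <= 1.0
print(largest_enclosed_circle(leaf))  # centre near (100, 150), radius ~70 px
```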
NASA Astrophysics Data System (ADS)
Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan
2017-07-01
Soil temperature (Ts) and its thermal regime are the most important factors in plant growth, biological activities, and water movement in soil. Due to the scarcity of Ts data, estimation of soil temperature is an important issue in different fields of science. The main objective of the present study is to investigate the accuracy of multivariate adaptive regression splines (MARS) and support vector machine (SVM) methods for estimating Ts. For this aim, the monthly mean data of Ts (at depths of 5, 10, 50, and 100 cm) and meteorological parameters of 30 synoptic stations in Iran were utilized. To develop the MARS and SVM models, various combinations of minimum, maximum, and mean air temperatures (Tmin, Tmax, T); actual and maximum possible sunshine duration and their ratio (n, N, n/N); actual, net, and extraterrestrial solar radiation data (Rs, Rn, Ra); precipitation (P); relative humidity (RH); wind speed at 2 m height (u2); and water vapor pressure (Vp) were used as input variables. Three error statistics, the root-mean-square error (RMSE), mean absolute error (MAE), and determination coefficient (R2), were used to check the performance of the MARS and SVM models. The results indicated that MARS was superior to SVM at the different depths. In the test and validation phases, the most accurate estimations for MARS were obtained at the depth of 10 cm for the Tmax, Tmin, T inputs (RMSE = 0.71 °C, MAE = 0.54 °C, and R2 = 0.995) and for the RH, Vp, P, and u2 inputs (RMSE = 0.80 °C, MAE = 0.61 °C, and R2 = 0.996), respectively.
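The three performance statistics quoted above are straightforward to reproduce; a small helper computing RMSE, MAE, and R2 on made-up observed/estimated series is shown below.

```python
import numpy as np

def rmse(obs, est):
    return float(np.sqrt(np.mean((obs - est) ** 2)))

def mae(obs, est):
    return float(np.mean(np.abs(obs - est)))

def r2(obs, est):
    ss_res = np.sum((obs - est) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical monthly mean soil temperatures (degC) at 10 cm depth
observed = np.array([2.1, 5.4, 11.0, 17.3, 22.8, 25.1, 21.0, 14.2])
estimated = np.array([2.6, 5.0, 11.8, 16.9, 22.1, 25.9, 20.3, 14.8])
print(rmse(observed, estimated), mae(observed, estimated), r2(observed, estimated))
```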
McGarr, A.; Fletcher, Joe B.
2000-01-01
Using the Northridge earthquake as an example, we demonstrate a new technique able to resolve apparent stress within subfaults of a larger fault plane. From the model of Wald et al. (1996), we estimated the apparent stress for each subfault using τa = GḊ/(2β), where G is the modulus of rigidity, β is the shear wave speed, and Ḋ is the average slip rate. The image of apparent stress mapped over the Northridge fault plane supports the idea that the stresses causing fault slip are inhomogeneous, but limited by the strength of the crust. Indeed, over the depth range 5 to 17 km, maximum values of apparent stress for a given depth interval agree with τa(max) = 0.06 S(z), where S is the laboratory estimate of crustal strength as a function of depth z. The seismic energy from each subfault was estimated from the product τa·D·A, where A is the subfault area and D its slip. Over the fault zone, we found that the radiated energy is quite variable spatially, with more than 50% of the total coming from just 15% of the subfaults.
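A back-of-the-envelope sketch of the two relations used above, τa = GḊ/(2β) per subfault and radiated energy Es = τa·D·A, with illustrative values rather than numbers from the Northridge slip model:

```python
G = 3.0e10      # modulus of rigidity, Pa
beta = 3500.0   # shear wave speed, m/s

def apparent_stress(avg_slip_rate):
    """tau_a = G * slip_rate / (2 * beta)  [Pa]"""
    return G * avg_slip_rate / (2.0 * beta)

def radiated_energy(tau_a, slip, area):
    """E_s = tau_a * D * A  [J]"""
    return tau_a * slip * area

# Hypothetical subfault: 0.5 m/s average slip rate, 1.2 m slip, 3 km x 3 km area
tau_a = apparent_stress(0.5)   # ~2.1 MPa
print(tau_a / 1e6, "MPa,", radiated_energy(tau_a, 1.2, 3000.0 * 3000.0), "J")
```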
Geophysical reconnaissance of Lemmon Valley, Washoe County, Nevada
Schaefer, Donald H.; Maurer, Douglas K.
1981-01-01
Rapid growth in the Lemmon Valley area, Nevada, during recent years has put increasing importance on knowledge of stored ground water for the valley. Data that would fill voids left by previous studies are depth to bedrock and depth to good-quality water beneath the two playas in the valley. Depths to bedrock calculated from a gravity survey in Lemmon Valley indicate that the western part of Lemmon Valley is considerably deeper than the eastern part. The maximum depth in the western part is about 2,600 feet below land surface. This depression approximately underlies the Silver Lake playa. A smaller, shallower depression with a maximum depth of about 1,500 feet below land surface exists about 2.5 miles north of the playa. The eastern area is considerably shallower. The maximum calculated depth to bedrock there is about 1,000 feet below land surface, but the depth throughout most of the eastern area is only about 400 feet below land surface. An electrical resistivity survey in Lemmon Valley consisting of 10 Schlumberger soundings was conducted around the playas. The maximum depth of poor-quality water (characterized by a resistivity of less than 20 ohm-meters) differed considerably from place to place. Maximum depths of poor-quality water beneath the playa east of Stead varied from about 120 feet to almost 570 feet below land surface. At the Silver Lake playa, the maximum depths varied from about 40 feet in the west to 490 feet in the east. (USGS)
Simulation of Soil Frost and Thaw Fronts Dynamics with Community Land Model 4.5
NASA Astrophysics Data System (ADS)
Gao, J.; Xie, Z.
2016-12-01
Freeze-thaw processes in soils, including changes in frost and thaw fronts (FTFs), are important physical processes. The movement of FTFs affects soil water and thermal characteristics, as well as the energy and water exchanges between the land surface and the atmosphere, and hence the land surface hydrothermal regime. In this study, a two-directional freeze and thaw algorithm for simulating FTFs is incorporated into the community land surface model CLM4.5; the resulting model is called CLM4.5-FTF. The FTF depths and soil temperatures simulated by CLM4.5-FTF compared well with the observed data at both the D66 station (permafrost) and the Hulugou station (seasonally frozen soil). Because the soil temperature profile within a soil layer can be estimated from the positions of the FTFs, CLM4.5-FTF performed better in simulating soil temperature. Permafrost and seasonally frozen ground conditions in China from 1980 to 2010 were simulated using CLM4.5-FTF. Numerical experiments show that the spatial distribution of the maximum frost depth simulated by CLM4.5-FTF has a clear seasonal variation. Significant positive trends in active-layer depth for permafrost regions and negative trends in maximum freezing depth for seasonally frozen soil regions are simulated in response to positive air temperature trends, except west of the Black Sea.
Seasonal variability of Internal tide energetics in the western Bay of Bengal
NASA Astrophysics Data System (ADS)
Mohanty, S.; Rao, A. D.
2017-12-01
The Internal Waves (IWs) are generated by the flow of the barotropic tide over rapidly varying and steep topographic features such as the continental shelf slope, seamounts, etc. These waves are an important phenomenon in the ocean due to their influence on the density structure and the energy transfer into the region. Such waves are also important for submarine acoustics, underwater navigation, offshore structures, ocean mixing, and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the western Bay of Bengal is examined by using the three-dimensional MITgcm model. The numerical simulations are performed for different periods covering August-September 2013, November-December 2013 and March-April 2014, representing the monsoon, post-monsoon and pre-monsoon seasons respectively, during which high temporal resolution observed data sets are available. The model is initially validated through spectral estimates of density and the baroclinic velocities. From these estimates, it is found that the spectral peak is associated with the semi-diurnal frequency at all depths in both the observations and the model simulations for November-December and March-April. In August, however, the peak occurs near the inertial frequency at all available depths. EOF analysis suggests that about 70-80% of the total variance comes from the Mode-1 semi-diurnal internal tide in both the observations and the model simulations. The phase speed, group speed and wavelength are found to be maximum for the post-monsoon season compared to the other two seasons. To understand the generation and propagation of internal tides over this region, the barotropic-to-baroclinic M2 tidal energy conversion and energy flux are examined. The barotropic-to-baroclinic conversion occurs intensively along the shelf-slope regions, and the resulting internal tides propagate towards the coast. The model-simulated energy dissipation rate indicates that its maximum occurs at the generation sites, and hence the local mixing due to the internal tide is maximum at these sites. The spatial distribution of available potential energy is found to be maximum in November (20 kg/m2) in the northern BoB and minimum in August (14 kg/m2). Detailed energy budget calculations are made for all the seasons and the results are analysed.
Neuromuscular responses during aquatic resistance exercise with different devices and depths.
Colado, Juan C; Borreani, Sebastien; Pinto, Stephanie Santana; Tella, Victor; Martin, Fernando; Flandez, Jorge; Kruel, Luiz F
2013-12-01
Little research has been reported regarding the effects of using different devices and immersion depths during the performance of resistance exercises in a water environment. The purpose of this study was to compare muscular activation of upper extremity and core muscles during shoulder extensions performed at maximum velocity with different devices and at different depths. Volunteers (N = 24), young fit male university students, performed 3 repetitions of shoulder extensions at maximum velocity using 4 different devices and at 2 different depths. The maximum amplitude of the electromyographic root mean square of the latissimus dorsi (LD), rectus abdominis, and erector lumbar spinae was recorded. Electromyographic signals were normalized to the maximum voluntary isometric contraction. No significant (p > 0.05) differences were found in the neuromuscular responses between the different devices used during the performance of shoulder extension at xiphoid process depth. Regarding the comparisons of muscle activity between the 2 depths analyzed in this study, only the LD showed significantly (p ≤ 0.05) higher activity at the xiphoid process depth compared with that at the clavicle depth. Therefore, if maximum muscle activation of the extremities is required, the xiphoid depth is a better choice than the clavicle depth, and the kind of device is not relevant. Regarding core muscles, neither the kind of device nor the immersion depth modifies muscle activation.
NASA Astrophysics Data System (ADS)
Lim, H. S.; Lee, J. Y.; Yoon, H.
2016-12-01
Soil temperatures, water temperatures, and weather parameters were monitored at a variety of locations in the vicinity of King Sejong station, King George Island, Antarctica, during summer 2010-2011. Thermal characteristics of soil and water were analysed using time-series analyses, apparent thermal diffusivity (ATD), and active layer thickness. The temperatures of pond water and nearby seawater showed distinctive diurnal variations and correlated strongly with solar radiation (r = 0.411-0.797). Soil temperature (0.1-0.3 m depth) also showed diurnal fluctuations that decreased with depth and were directly linked to air temperature (r = 0.513-0.783) rather than to solar radiation; the correlation decreased with depth and the time lag in the response increased by 2-3 hours per 0.1 m of soil depth. Owing to the lack of snow cover, summertime soil temperature was not decoupled from air temperature. Estimated ATD was between 0.022 and 29.209 mm2/sec, showed temporal and spatial variations, and correlated strongly with soil moisture content. The maximum estimated active layer thickness in the study area was 41-70 cm, which is consistent with values reported in previous work.
Model predictions and visualization of the particle flux on the surface of Mars.
Cucinotta, Francis A; Saganti, Premkumar B; Wilson, John W; Simonsen, Lisa C
2002-12-01
Model calculations of the particle flux on the surface of Mars due to the Galactic Cosmic Rays (GCR) can provide guidance on radiobiological research and shielding design studies in support of Mars exploration science objectives. Particle flux calculations for protons, helium ions, and heavy ions are reported for solar minimum and solar maximum conditions. These flux calculations include a description of the altitude variations on the Martian surface using the data obtained by the Mars Global Surveyor (MGS) mission with its Mars Orbiter Laser Altimeter (MOLA) instrument. These particle flux calculations are then used to estimate the average particle hits per cell at various organ depths of a human body in a conceptual shelter vehicle. The estimated particle hits by protons for an average location at skin depth on the Martian surface are about 10 to 100 particle-hits/cell/year and the particle hits by heavy ions are estimated to be 0.001 to 0.01 particle-hits/cell/year.
Permeability of the continental crust: Implications of geothermal data and metamorphic systems
Manning, C.E.; Ingebritsen, S.E.
1999-01-01
In the upper crust, where hydraulic gradients are typically 10 MPa km^-1, the mean permeabilities required to accommodate the estimated metamorphic fluid fluxes decrease from ~10^-16 m^2 to ~10^-18 m^2 between 5- and 12-km depth. Below ~12 km, which broadly corresponds to the brittle-plastic transition, mean k is effectively independent of depth at ~10^(-18.5±1) m^2. Consideration of the permeability values inferred from thermal modeling and metamorphic fluxes suggests a quasi-exponential decay of permeability with depth of log k ~ -3.2 log z - 14, where k is in meters squared and z is in kilometers. At mid to lower crustal depths this curve lies just below the threshold value for significant advection of heat. Such conditions may represent an optimum for metamorphism, allowing the maximum transport of fluid and solute mass that is possible without advective cooling.
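The quoted depth dependence is easy to evaluate; the sketch below simply evaluates log10 k = -3.2 log10 z - 14 (k in m^2, z in km) at a few depths.

```python
import math

def permeability_m2(depth_km: float) -> float:
    """Mean crustal permeability from log10(k) = -3.2*log10(z) - 14 (k in m^2, z in km)."""
    return 10.0 ** (-3.2 * math.log10(depth_km) - 14.0)

for z in (5.0, 12.0, 25.0):
    print(z, "km:", f"{permeability_m2(z):.1e}", "m^2")  # ~10^-16.2, ~10^-17.5, ~10^-18.5
```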
Mechanics of airway and alveolar collapse in human breath-hold diving.
Fitz-Clarke, John R
2007-11-15
A computational model of the human respiratory tract was developed to study airway and alveolar compression and re-expansion during deep breath-hold dives. The model incorporates the chest wall, supraglottic airway, trachea, branched airway tree, and elastic alveoli assigned time-dependent surfactant properties. Total lung collapse with degassing of all alveoli is predicted to occur around 235 m, much deeper than estimates for aquatic mammals. Hysteresis of the pressure-volume loop increases with maximum diving depth due to progressive alveolar collapse. Reopening of alveoli occurs stochastically as airway pressure overcomes adhesive and compressive forces on ascent. Surface area for gas exchange vanishes at collapse depth, implying that the risk of decompression sickness should reach a plateau beyond this depth. Pulmonary capillary transmural stresses cannot increase after local alveolar collapse. Consolidation of lung parenchyma might provide protection from capillary injury or leakage caused by vascular engorgement due to outward chest wall recoil at extreme depths.
NASA Astrophysics Data System (ADS)
Mohan, Kapil; Chaudhary, Peush; Patel, Pruthul; Chaudhary, B. S.; Chopra, Sumer
2018-02-01
The Kachchh Mainland Fault (KMF) is a major E-W trending fault in the Kachchh region of Gujarat, extending >150 km from Lakhpat village in the west to the Bhachau town in the east. The Katrol Hill Fault (KHF) is an E-W trending intrabasinal fault located in the central region of the Kachchh Basin, south of the KMF. The western parts of both faults are characterized, and the sediment thickness in the region is estimated, using a magnetotelluric (MT) survey at 17 sites along a 55 km long north-south profile with a site spacing of 2-3 km. The analysis reveals that the maximum sediment thickness (Quaternary, Tertiary, and Mesozoic) in the region is 2.3 km, of which the Mesozoic sediments account for a maximum thickness of 2 km. The estimated sediment thickness is consistent with the thickness indicated by a deep borehole (depth approx. 2.5 km) drilled by the Oil and Natural Gas Corporation (ONGC) at Nirona (northern part of the study area). From 2-D inversion of the MT data, three conductive zones are identified from north to south. The first conductive zone dips nearly vertically down to 7-8 km depth; it becomes north-dipping below 8 km depth and is inferred to be the KMF. The second conductive zone dips steeply to the south near Manjal village (28 km south of Nirona) and is inferred to be the KHF. A near-vertically dipping conductive zone (down to 20 km depth) has also been observed near Ulat village, located 16 km north of Manjal village and 12 km south of Nirona village. This conductive zone becomes listric and north-dipping beyond 20 km depth, and is reported here for the first time from a geophysical survey in the region.
NASA Astrophysics Data System (ADS)
Fan, Tiantian; Yu, Hongbin
2018-03-01
A novel shape-from-focus method combining a 3D steerable filter, aimed at improved performance on textureless regions, is proposed in this paper. Unlike conventional spatial methods, which estimate the depth map by searching for the maximum edge response, the proposed method takes both the edge response and the degree of axial imaging blur into consideration. As a result, more robust and accurate identification of the focused location can be achieved, especially when treating textureless objects. Improved performance in depth measurement has been successfully demonstrated in both simulation and experimental results.
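For context, the conventional baseline that the paper improves upon, searching for the maximum per-pixel focus (edge) response across a focal stack, can be sketched as follows. This is a generic modified-Laplacian shape-from-focus baseline, not the proposed 3D-steerable-filter method.

```python
import numpy as np

def focus_measure(img):
    """Sum-modified-Laplacian style focus measure for one grayscale image."""
    dxx = np.abs(np.roll(img, 1, 1) + np.roll(img, -1, 1) - 2 * img)
    dyy = np.abs(np.roll(img, 1, 0) + np.roll(img, -1, 0) - 2 * img)
    return dxx + dyy

def depth_from_focus(stack):
    """stack: (n_slices, H, W) focal stack. Returns the per-pixel index of the
    slice with the strongest focus response (a coarse depth map)."""
    responses = np.stack([focus_measure(s.astype(float)) for s in stack])
    return np.argmax(responses, axis=0)

# Tiny synthetic focal stack: slice 2 has the strongest high-frequency content
rng = np.random.default_rng(0)
base = rng.random((64, 64))
stack = np.stack([base * 0.2, base * 0.6, base, base * 0.5])
print(np.bincount(depth_from_focus(stack).ravel()))  # most pixels map to slice 2
```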
Fowler, A M; Booth, D J
2012-03-01
The length frequencies and age structures of resident Pseudanthias rubrizonatus (n = 407), a small protogynous serranid, were measured at four isolated artificial structures on the continental shelf of north-western Australia between June and August 2008, to determine whether these structures supported full (complete size and age-structured) populations of this species. The artificial structures were located in depths between 82 and 135 m, and growth rates of juveniles and adults, and body condition of adults, were compared among structures to determine the effect of depth on potential production. All life-history stages, including recently settled juveniles, females and terminal males, of P. rubrizonatus were caught, ranging in standard length (L(s)) from 16·9 to 96·5 mm. Presumed ages estimated from whole and sectioned otoliths ranged between 22 days and 5 years, and parameter ±s.e. estimates of the von Bertalanffy growth model were L(∞) = 152 ± 34 mm, k = 0·15(±0·05) and t(0) = -1·15(±0·15). Estimated annual growth rates were similar between shallow and deep artificial structures; however, otolith lengths and recent growth of juveniles differed among individual structures, irrespective of depth. The artificial structures therefore sustained full populations of P. rubrizonatus, from recently settled juveniles through to adults; however, confirmation of the maximum age attainable for the species is required from natural populations. Depth placement of artificial reefs may not affect the production of fish species with naturally wide depth ranges. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.
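The reported growth parameters translate directly into expected length-at-age through the von Bertalanffy growth function; a short sketch using the estimates quoted above:

```python
import numpy as np

def von_bertalanffy(age_years, L_inf=152.0, k=0.15, t0=-1.15):
    """Expected standard length (mm) at age for P. rubrizonatus using the
    parameter estimates reported above."""
    return L_inf * (1.0 - np.exp(-k * (age_years - t0)))

for age in (0.1, 1, 2, 5):
    print(age, "yr:", round(float(von_bertalanffy(age)), 1), "mm")
```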
Object recognition and localization from 3D point clouds by maximum-likelihood estimation
NASA Astrophysics Data System (ADS)
Dantanarayana, Harshana G.; Huntley, Jonathan M.
2017-08-01
We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike `interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.
Integration time for the perception of depth from motion parallax.
Nawrot, Mark; Stroyan, Keith
2012-04-15
The perception of depth from relative motion is believed to be a slow process that "builds-up" over a period of observation. However, in the case of motion parallax, the potential accuracy of the depth estimate suffers as the observer translates during the viewing period. Our recent quantitative model for the perception of depth from motion parallax proposes that relative object depth (d) can be determined from retinal image motion (dθ/dt), pursuit eye movement (dα/dt), and fixation distance (f) by the formula: d/f≈dθ/dα. Given the model's dynamics, it is important to know the integration time required by the visual system to recover dα and dθ, and then estimate d. Knowing the minimum integration time reveals the incumbent error in this process. A depth-phase discrimination task was used to determine the time necessary to perceive depth-sign from motion parallax. Observers remained stationary and viewed a briefly translating random-dot motion parallax stimulus. Stimulus duration varied between trials. Fixation on the translating stimulus was monitored and enforced with an eye-tracker. The study found that relative depth discrimination can be performed with presentations as brief as 16.6 ms, with only two stimulus frames providing both retinal image motion and the stimulus window motion for pursuit (mean range=16.6-33.2 ms). This was found for conditions in which, prior to stimulus presentation, the eye was engaged in ongoing pursuit or the eye was stationary. A large high-contrast masking stimulus disrupted depth-discrimination for stimulus presentations less than 70-75 ms in both pursuit and stationary conditions. This interval might be linked to ocular-following response eye-movement latencies. We conclude that neural mechanisms serving depth from motion parallax generate a depth estimate much more quickly than previously believed. We propose that additional sluggishness might be due to the visual system's attempt to determine the maximum dθ/dα ratio for a selection of points on a complicated stimulus. Copyright © 2012 Elsevier Ltd. All rights reserved.
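The model's core ratio is simple to evaluate: with retinal image motion dθ/dt and pursuit rate dα/dt measured over the same brief interval, relative depth follows from d/f ≈ dθ/dα. A toy calculation with hypothetical rates (not data from the study):

```python
def relative_depth(retinal_motion_deg_s, pursuit_rate_deg_s, fixation_distance_m):
    """Depth of a point relative to fixation from the motion/pursuit ratio:
    d ~= f * (d_theta / d_alpha). The sign of the ratio gives depth-sign (near vs far)."""
    return fixation_distance_m * (retinal_motion_deg_s / pursuit_rate_deg_s)

# Hypothetical: 0.5 deg/s retinal motion during 4 deg/s pursuit, fixating at 0.6 m
print(relative_depth(0.5, 4.0, 0.6))  # ~0.075 m relative to the fixation plane
```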
Linking Incoming Plate Faulting and Intermediate Depth Seismicity
NASA Astrophysics Data System (ADS)
Kwong, K. B.; van Zelst, I.; Tong, X.; Eimer, M. O.; Naif, S.; Hu, Y.; Zhan, Z.; Boneh, Y.; Schottenfels, E.; Miller, M. S.; Moresi, L. N.; Warren, J. M.; Wiens, D. A.
2017-12-01
Intermediate depth earthquakes, occurring between 70-350 km depth, are often attributed to dehydration reactions within the subducting plate. It is proposed that incoming plate normal faulting associated with plate bending at the trench may control the amount of hydration in the plate by producing large damage zones that create pathways for the infiltration of seawater deep into the subducting mantle. However, a relationship between incoming plate seismicity, faulting, and intermediate depth seismicity has not been established. We compiled a global dataset consisting of incoming plate earthquake moment tensor (CMT) solutions, focal depths, bend fault spacing and offset measurements, along with plate age and convergence rates. In addition, a global intermediate depth seismicity dataset was compiled with parameters such as the maximum seismic moment and seismicity rate, as well as thicknesses of double seismic zones. The maximum fault offset in the bending region has a strong correlation with the intermediate depth seismicity rate, but a more modest correlation with other parameters such as convergence velocity and plate age. We estimated the expected rate of seismic moment release for the incoming plate faults using mapped fault scarps from bathymetry. We compare this with the cumulative moment from normal faulting earthquakes in the incoming plate from the global CMT catalog to determine whether outer rise fault movement has an aseismic component. Preliminary results from Tonga and the Middle America Trench suggest there may be an aseismic component to incoming plate bending faulting. The cumulative seismic moment calculated for the outer rise faults will also be compared to the cumulative moment from intermediate depth earthquakes to assess whether these parameters are related. To support the observational part of this study, we developed a geodynamic numerical modeling study to systematically explore the influence of parameters such as plate age and convergence rate on the offset, depth, and spacing of outer rise faults. We then compare these robust constraints on outer rise faulting to the observed widths of intermediate depth earthquakes globally.
Subduction zone seismicity and the thermo-mechanical evolution of downgoing lithosphere
NASA Astrophysics Data System (ADS)
Wortel, M. J. R.; Vlaar, N. J.
1988-09-01
In this paper we discuss characteristic features of subduction zone seismicity at depths between about 100 km and 700 km, with emphasis on the role of temperature and rheology in controlling the deformation of, and the seismic energy release in, downgoing lithosphere. This is done in two steps. After a brief review of earlier developments, we first show that the depth distribution of hypocentres at depths between 100 km and 700 km in subducted lithosphere can be explained by a model in which seismic activity is confined to those parts of the slab which have temperatures below a depth-dependent critical value Tcr. Second, the variation of seismic energy release (frequency of events, magnitude) with depth is addressed by inferring a rheological evolution from the slab's thermal evolution and by combining this with models for the system of forces acting on the subducting lithosphere. It is found that considerable stress concentration occurs in a reheating slab in the depth range of 400 to 650-700 km: the slab weakens, but the stress level strongly increases. On the basis of this stress concentration a model is formulated for earthquake generation within subducting slabs. The model predicts a maximum depth of seismic activity in the depth range of 635 to 760 km and, for deep earthquake zones, a relative maximum in seismic energy release near the maximum depth of earthquakes. From our modelling it follows that, whereas such a maximum is indeed likely to develop in deep earthquake zones, zones with a maximum depth around 300 km (such as the Aleutians) are expected to exhibit a smooth decay in seismic energy release with depth. This is in excellent agreement with observational data. In conclusion, the incorporation of both depth-dependent forces and depth-dependent rheology provides new insight into the generation of intermediate and deep earthquakes and into the variation of seismic activity with depth. Our results imply that no barrier to slab penetration at a depth of 650-700 km is required to explain the maximum depth of seismic activity and the pattern of seismic energy release in deep earthquake zones.
Evaluation of the Sparton tight-tolerance AXBT
NASA Technical Reports Server (NTRS)
Boyd, Janice D.; Linzell, Robert S.
1993-01-01
Forty-six near-simultaneous pairs of conductivity-temperature-depth (CTD) and Sparton 'tight tolerance' air expendable bathythermograph (AXBT) temperature profiles were obtained in summer 1991 from a location in the Sargasso Sea. The data were analyzed to assess the temperature and depth accuracies of the Sparton AXBTs. The tight-tolerance criterion was not achieved using the manufacturer's equations but may have been achieved using customized equations computed from the CTD data. The temperature data from the customized equations had a one standard deviation error of 0.13 °C. A customized elapsed fall time-to-depth conversion equation was found to be z = 1.620t - 2.2384 x 10^-4 t^2 + 1.291 x 10^-7 t^3, with z the depth in meters and t the elapsed fall time after probe release in seconds. The standard deviation of the depth error was about 5 m; a rule of thumb for estimating maximum bounds on the depth error below 100 m could be expressed as +/-2% of depth or +/-10 m, whichever is greater. This equation gave greater depth accuracy than either the manufacturer's supplied equation or the navy standard equation.
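The customized fall-rate equation and the rule-of-thumb error bound above can be wrapped in a small helper, shown below with the coefficients as reported:

```python
def axbt_depth_m(t_seconds: float) -> float:
    """Depth (m) from elapsed fall time using the customized Sparton AXBT
    equation z = 1.620 t - 2.2384e-4 t^2 + 1.291e-7 t^3."""
    return 1.620 * t_seconds - 2.2384e-4 * t_seconds**2 + 1.291e-7 * t_seconds**3

def max_depth_error_bound(z_m: float) -> float:
    """Rule-of-thumb maximum depth error below 100 m: +/-2% of depth or +/-10 m,
    whichever is greater."""
    return max(0.02 * z_m, 10.0)

for t in (60.0, 180.0, 300.0):
    z = axbt_depth_m(t)
    print(t, "s:", round(z, 1), "+/-", round(max_depth_error_bound(z), 1), "m")
```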
Observed and Predicted Pier Scour in Maine
Hodgkins, Glenn A.; Lombard, Pamela J.
2002-01-01
Pier-scour and related data were collected and analyzed for nine high river flows at eight bridges across Maine from 1997 through 2001. Six bridges had multiple piers. Fifteen of 23 piers where data were measured during a high flow had observed maximum scour depths ranging from 0.5 feet (ft) to 12.0 ft. No pier scour was observed at the remaining eight piers. The maximum predicted pier-scour depths associated with the 23 piers were computed using the equations in the Federal Highway Administration's Hydraulic Engineering Circular number 18 (HEC-18), with data collected for this study. The predicted HEC-18 maximum pier-scour depths were compared to the observed maximum pier-scour depths. The HEC-18 pier-scour equations are intended to be envelope equations, ideally never underpredicting scour depths and not appreciably overpredicting them. The HEC-18 pier-scour equations performed well for rivers in Maine. Twenty-two out of 23 pier-scour depths were overpredicted by 0.7 ft to 18.3 ft. One pier-scour depth was underpredicted by 4.5 ft. For one pier at each of two bridges, large amounts of debris lodged on the piers after high-flow measurements were made at those sites. The scour associated with the debris increased the maximum pier-scour depths by about 5 ft in each case.
NASA Astrophysics Data System (ADS)
Avanzi, Francesco; De Michele, Carlo; Gabriele, Salvatore; Ghezzi, Antonio; Rosso, Renzo
2015-04-01
Here, we show how atmospheric circulation and topography rule the variability of depth-duration-frequency (DDF) curve parameters, and we discuss how this variability has physical implications for the formation of extreme precipitation at high elevations. A DDF curve gives the value of the maximum annual precipitation depth H as a function of duration D and probability level F. We consider around 1500 stations over the Italian territory, with at least 20 years of data of maximum annual precipitation depth at different durations. We estimated the DDF parameters at each location by using the asymptotic distribution of extreme values, i.e. the so-called Generalized Extreme Value (GEV) distribution, and by adopting a simple statistical scale invariance hypothesis. Consequently, a DDF curve depends on five different parameters. A first set relates H to the duration (namely, the mean value of the annual maximum precipitation depth for the unit duration and the scaling exponent), while a second set links H to F (namely, a scale, position and shape parameter). The value of the shape parameter determines the type of random variable (unbounded, upper or lower bounded). This extensive analysis shows that the variability of the mean value of the annual maximum precipitation depth for the unit duration reflects the coupled effect of topography and the modal direction of moisture flux during extreme events. Median values of this parameter decrease with elevation. We call this phenomenon the "reverse orographic effect" on extreme precipitation of short duration, since it is in contrast with the general knowledge about the orographic effect on mean precipitation. Moreover, the scaling exponent is mainly driven by topography alone, with increasing values at increasing elevations. Therefore, the quantiles of H(D,F) at durations greater than the unit duration turn out to be more variable at high elevations than at low elevations. Additionally, the analysis of the variability of the shape parameter with elevation shows that extreme events at high elevations appear to be distributed according to an upper-bounded probability distribution. This evidence could be a characteristic signature of the formation of extreme precipitation events at high elevations.
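Under the simple scale-invariance hypothesis described above, the DDF quantile can be written as h(D, F) = D^n · q(F), where q(F) is the GEV quantile of the unit-duration annual maxima and n is the scaling exponent. The sketch below illustrates that construction with placeholder parameter values (not values fitted to the Italian dataset); note that scipy's GEV shape parameter uses the opposite sign convention to the usual ξ.

```python
import numpy as np
from scipy.stats import genextreme

def ddf_quantile(duration_h, F, a1=25.0, n=0.45, cv=0.35, shape=0.0):
    """Annual-maximum precipitation depth h(D, F) under simple scale invariance:
    h(D, F) = D**n * q_GEV(F), with q_GEV the GEV quantile of the unit-duration maxima.
      a1    - mean unit-duration annual maximum (mm), placeholder
      n     - scaling exponent, placeholder
      cv    - coefficient of variation of the unit-duration maxima, placeholder
      shape - GEV shape (scipy sign convention; 0 gives the Gumbel case)"""
    # Gumbel-based moment approximations for the unit-duration location/scale
    scale = cv * a1 * np.sqrt(6) / np.pi
    loc = a1 - 0.5772 * scale
    q_unit = genextreme.ppf(F, shape, loc=loc, scale=scale)
    return duration_h ** n * q_unit

for D in (1, 3, 6, 24):
    print(D, "h:", round(ddf_quantile(D, 0.99), 1), "mm")
```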
NASA Astrophysics Data System (ADS)
Castellví, F.; Snyder, R. L.
2009-09-01
High-frequency temperature data were recorded at one height and used in Surface Renewal (SR) analysis to estimate sensible heat flux during the full growing season of two rice fields located north-northeast of Colusa, CA (in the Sacramento Valley). One of the fields was seeded into a flooded paddy and the other was drill seeded before flooding. To minimize fetch requirements, the measurement height was selected to be close to the maximum expected canopy height. The roughness sub-layer depth was estimated to discriminate whether the temperature data came from the inertial or the roughness sub-layer. The equation to estimate the roughness sub-layer depth was derived by combining simple mixing-length theory, the mixing-layer analogy, equations to account for stable atmospheric surface layer conditions, and semi-empirical canopy-architecture relationships. The potential for SR analysis as a method that operates in the full surface boundary layer was tested using data collected over growing vegetation at a site influenced by regional advection of sensible heat flux. The inputs used to estimate the sensible heat fluxes included air temperature sampled at 10 Hz, the mean and variance of the horizontal wind speed, the canopy height, and the plant area index for a given intermediate height of the canopy. Regardless of the stability conditions and measurement height above the canopy, sensible heat flux estimates using SR analysis gave results that were similar to those measured with the eddy covariance method. In unstable cases, the performance was shown to be sensitive to the estimation of the roughness sub-layer depth. However, an expression was provided to select the crucial scale required for its estimation.
Enlisted Recruit Candidate Market Depth Analysis: Final Report
2018-04-01
national surveys and DoD institutional data from 2007–2013. Estimates for four of the standards (medical/physical, overweight, mental health, and...recruits scoring in the upper half of the AFQT. Other standards, such as age, physical fitness, body fat percentage, conduct, and significant tattoos...30. Physical Fitness DODI 1308.3 sets gender-specific height and weight maximums. Dependency Status Individuals with two or more dependents under
Hardware accelerator of convolution with exponential function for image processing applications
NASA Astrophysics Data System (ADS)
Panchenko, Ivan; Bucha, Victor
2015-12-01
In this paper we describe a Hardware Accelerator (HWA) for fast recursive approximation of separable convolution with an exponential function. This filter can be used in many Image Processing (IP) applications, e.g. depth-dependent image blur, image enhancement and disparity estimation. We have adopted an RTL implementation of this filter to provide maximum throughput within the constraints of the required memory bandwidth and hardware resources, yielding a power-efficient VLSI implementation.
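The filter being accelerated, separable convolution with an exponential kernel, has a well-known recursive form: one causal and one anti-causal first-order pass per dimension approximates convolution with a two-sided exponential at constant cost per pixel. A software reference of that recursion (a sketch, not the accelerator's RTL) is given below.

```python
import numpy as np

def exp_filter_1d(signal, alpha):
    """Recursive approximation of convolution with a two-sided exponential kernel
    of weights alpha*(1-alpha)**|k|; alpha in (0, 1), larger alpha = narrower kernel."""
    fwd = np.empty(len(signal), dtype=float)
    bwd = np.empty(len(signal), dtype=float)
    acc = 0.0
    for i, v in enumerate(signal):            # causal pass
        acc = alpha * v + (1.0 - alpha) * acc
        fwd[i] = acc
    acc = 0.0
    for i in range(len(signal) - 1, -1, -1):  # anti-causal pass
        acc = alpha * signal[i] + (1.0 - alpha) * acc
        bwd[i] = acc
    # the centre tap is counted by both passes, so subtract it once
    return fwd + bwd - alpha * np.asarray(signal, dtype=float)

def exp_filter_2d(image, alpha):
    """Separable 2D version: filter rows, then columns."""
    tmp = np.apply_along_axis(exp_filter_1d, 1, image, alpha)
    return np.apply_along_axis(exp_filter_1d, 0, tmp, alpha)

img = np.zeros((9, 9)); img[4, 4] = 1.0
print(np.round(exp_filter_2d(img, 0.5)[4], 3))  # exponentially decaying row profile
```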
NASA Technical Reports Server (NTRS)
Ono, S.; Ennyu, A.; Najjar, R. G.; Bates, N.
1998-01-01
A diagnostic model of the mean annual cycles of dissolved inorganic carbon (DIC) and oxygen below the mixed layer at the Bermuda Atlantic Time-series Study (BATS) site is presented and used to estimate organic carbon remineralization in the seasonal thermocline. The model includes lateral and vertical advection as well as vertical diffusion. Very good agreement is found between the remineralization estimates based on oxygen and those based on DIC. Net remineralization averaged from mid-spring to early fall is found to be a maximum between 120 and 140 m. Remineralization integrated between 100 m (the compensation depth) and 250 m during this period is estimated to be about 1 mol C/sq m. This flux is consistent with independent estimates of the loss of particulate and dissolved organic carbon.
Using tsunami deposits to determine the maximum depth of benthic burrowing.
Seike, Koji; Shirai, Kotaro; Murakami-Sugihara, Naoko
2017-01-01
The maximum depth of sediment biomixing is directly related to the vertical extent of post-depositional environmental alteration in the sediment; consequently, it is important to determine the maximum burrowing depth. This study examined the maximum depth of bioturbation in a natural marine environment in Funakoshi Bay, northeastern Japan, using observations of bioturbation structures developed in an event layer (tsunami deposits of the 2011 Tohoku-Oki earthquake) and measurements of the radioactive cesium concentrations in this layer. The observations revealed that the depth of bioturbation (i.e., the thickness of the biomixing layer) ranged between 11 and 22 cm, and varied among the sampling sites. In contrast, the radioactive cesium concentrations showed that the processing of radioactive cesium in coastal environments may include other pathways in addition to bioturbation. The data also revealed the nature of the bioturbation by the heart urchin Echinocardium cordatum (Echinoidea: Loveniidae), which is one of the important ecosystem engineers in seafloor environments. The maximum burrowing depth of E. cordatum in Funakoshi Bay was 22 cm from the seafloor surface.
Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.
Rideout, Brendan P; Dosso, Stan E; Hannay, David E
2013-09-01
This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
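The core arrival-time localization step can be sketched as a nonlinear least-squares problem. The snippet below is a minimal illustration only: it solves for a 3D source position and origin time from direct-path arrival times at known hydrophone positions with a fixed sound speed, and omits the paper's treatment of receiver and environmental parameters, prior constraints, and ABIC-based uncertainty scaling. The receiver geometry, sound speed and noise level are made up for the example.

```python
import numpy as np
from scipy.optimize import least_squares

C = 1480.0  # assumed sound speed (m/s)

def residuals(params, rcv_xyz, t_obs):
    """Direct-path arrival-time residuals: t_obs - (t0 + range / c)."""
    x, y, z, t0 = params
    ranges = np.linalg.norm(rcv_xyz - np.array([x, y, z]), axis=1)
    return t_obs - (t0 + ranges / C)

# Hypothetical 4-receiver geometry (m) and a synthetic source for testing
rcv = np.array([[0.0, 0.0, 50.0],
                [800.0, 0.0, 50.0],
                [0.0, 800.0, 50.0],
                [800.0, 800.0, 50.0]])
true_src, true_t0 = np.array([300.0, 450.0, 20.0]), 1.0
rng = np.random.default_rng(0)
t_obs = true_t0 + np.linalg.norm(rcv - true_src, axis=1) / C
t_obs += rng.normal(0.0, 1e-4, size=t_obs.size)      # timing noise

# Iterative (Gauss-Newton style) solution starting near the array centre
x0 = np.array([400.0, 400.0, 25.0, 0.0])
sol = least_squares(residuals, x0, args=(rcv, t_obs))
print("estimated x, y, z, t0:", sol.x)
```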
Spectral analysis of the 1976 aeromagnetic survey of Harrat Rahat, Kingdom of Saudi Arabia
Blank, H. Richard; Sadek, Hamdy S.
1983-01-01
Harrat Rahat, an extensive plateau of Cenozoic mafic lava on the Precambrian shield of western Saudi Arabia, has been studied for its water resource and geothermal potential. In support of these investigations, the thickness of the lava sequence at more than 300 points was estimated by spectral analysis of low-level aeromagnetic profiles utilizing the integral Fourier transform of field intensity along overlapping profile segments. The optimum length of segment for analysis was determined to be about 40 km or 600 field samples. Contributions from two discrete magnetic source ensembles could be resolved on almost all spectra computed. The depths to these ensembles correspond closely to the flight height (300 m), and, presumably, to the mean depth to basement near the center of each profile segment. The latter association was confirmed in all three cases where spectral estimates could be directly compared with basement depths measured in drill holes. The maximum thickness estimated for the lava section is 380 m and the mean about 150 m. Data from an isopach map prepared from these results suggest that thickness variations are strongly influenced by pre-harrat, north-northwest-trending topography probably consequent on Cenozoic faulting. The thickest zones show a rough correlation with three axially-disposed volcanic shields.
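The depth-from-spectrum step can be illustrated with the standard Spector-Grant type relation, in which the log power spectrum of a magnetic profile over an ensemble of sources at depth h decays as ln P ≈ const − 2|k|h, with k in rad/m, so the depth follows from the spectral slope. The sketch below is a generic illustration of that relation, not a reproduction of the 1983 processing; the synthetic profile, sample spacing and fitting band are placeholders.

```python
import numpy as np

def spectral_depth(profile, dx, kmin, kmax):
    """Mean source depth from the slope of ln(power) vs wavenumber.

    profile : 1D magnetic anomaly samples (nT)
    dx      : sample spacing (m)
    kmin, kmax : wavenumber band (rad/m) used for the linear fit
    """
    spec = np.fft.rfft(profile - np.mean(profile))
    power = np.abs(spec) ** 2
    k = 2.0 * np.pi * np.fft.rfftfreq(len(profile), d=dx)   # rad/m
    band = (k >= kmin) & (k <= kmax) & (power > 0)
    slope, _ = np.polyfit(k[band], np.log(power[band]), 1)
    return -slope / 2.0                                      # ln P ~ const - 2 k h

if __name__ == "__main__":
    # Synthetic check: anomaly from a simple source buried at 300 m
    dx, h = 100.0, 300.0
    x = np.arange(0.0, 40000.0, dx)                          # ~40 km segment
    profile = h / ((x - 20000.0) ** 2 + h ** 2)              # Lorentzian-like anomaly
    print("estimated depth (m):", spectral_depth(profile, dx, 1e-4, 2e-3))
```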
A comparison of hydrographically and optically derived mixed layer depths
Zawada, D.G.; Zaneveld, J.R.V.; Boss, E.; Gardner, W.D.; Richardson, M.J.; Mishonov, A.V.
2005-01-01
Efforts to understand and model the dynamics of the upper ocean would be significantly advanced given the ability to rapidly determine mixed layer depths (MLDs) over large regions. Remote sensing technologies are an ideal choice for achieving this goal. This study addresses the feasibility of estimating MLDs from optical properties. These properties are strongly influenced by suspended particle concentrations, which generally reach a maximum at pycnoclines. The premise therefore is to use a gradient in beam attenuation at 660 nm (c660) as a proxy for the depth of a particle-scattering layer. Using a global data set collected during World Ocean Circulation Experiment cruises from 1988-1997, six algorithms were employed to compute MLDs from either density or temperature profiles. Given the absence of published optically based MLD algorithms, two new methods were developed that use c660 profiles to estimate the MLD. Intercomparison of the six hydrographically based algorithms revealed some significant disparities among the resulting MLD values. Comparisons between the hydrographical and optical approaches indicated a first-order agreement between the MLDs based on the depths of gradient maxima for density and c660. When comparing various hydrographically based algorithms, other investigators reported that inherent fluctuations of the mixed layer depth limit the accuracy of its determination to 20 m. Using this benchmark, we found a ~70% agreement between the best hydrographical-optical algorithm pairings. Copyright 2005 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Buongiorno Nardelli, B.; Guinehut, S.; Verbrugge, N.; Cotroneo, Y.; Zambianchi, E.; Iudicone, D.
2017-12-01
The depth of the upper ocean mixed layer provides fundamental information on the amount of seawater that directly interacts with the atmosphere. Its space-time variability modulates water mass formation and carbon sequestration processes related to both the physical and biological pumps. These processes are particularly relevant in the Southern Ocean, where surface mixed-layer depth estimates are generally obtained either as climatological fields derived from in situ observations or through numerical simulations. Here we demonstrate that weekly observation-based reconstructions can be used to describe the variations of the mixed-layer depth in the upper ocean over a range of space and time scales. We compare and validate four different products obtained by combining satellite measurements of the sea surface temperature, salinity, and dynamic topography with in situ Argo profiles. We also compute an ensemble mean and use the corresponding spread to estimate mixed-layer depth uncertainties and to identify the more reliable products. The analysis points out the advantage of synergistic approaches that include as input sea surface salinity observations obtained through multivariate optimal interpolation. The corresponding data allow assessment of the seasonal and interannual variability of the mixed-layer depth. Specifically, the maximum correlations between mixed-layer anomalies and the Southern Annular Mode are found at different time lags, related to distinct summer/winter responses in the main formation areas of Antarctic Intermediate Water and Sub-Antarctic Mode Water.
NASA Astrophysics Data System (ADS)
Takeda, T.; Yano, T. E.; Shiomi, K.
2013-12-01
Highly developed active fault evaluation is necessary particularly in the Kanto metropolitan area, where multiple major active fault zones exist. The cutoff depth of active faults is one of the important parameters since it is a good indicator of fault dimensions and hence of the maximum expected magnitude. The depth is normally estimated from microseismicity, thermal structure, and the depths of the Curie point and the Conrad discontinuity. For instance, Omuralieva et al. (2012) estimated the cutoff depths for the whole of Japan by creating a 3-D relocated hypocenter catalog. However, its spatial resolution could be insufficient for robust active fault evaluation, since a precision within 15 km, comparable to the minimum evaluated fault size, is preferred. Therefore the spatial resolution of the earthquake catalog used to estimate the cutoff depth needs to be finer than 15 km. This year we launched the Japan Unified hIgh-resolution relocated Catalog for Earthquakes (JUICE) Project (Yano et al., this fall meeting), the objective of which is to create a precise and reliable earthquake catalog for all of Japan, using waveform cross-correlation data and the Double-Difference relocation method (Waldhauser and Ellsworth, 2000). This catalog has higher precision of hypocenter determination than the routine one. In this study, we estimate high-resolution cutoff depths of the seismogenic layer using this catalog for the Kanto region, where the preliminary JUICE analysis has already been done. D90, the cutoff depth above which 90% of earthquakes occur, is often used as a reference to characterize the seismogenic layer; the choice of 90% reflects the uncertainties associated with hypocenter depth errors. In this study we estimate D95, because a more precise and reliable catalog is now available from the JUICE project. First we generate a 10 km equally spaced grid over our study area. Second we pick hypocenters within a radius of 10 km from each grid point and arrange them into hypocenter groups. Finally we estimate D95 from the hypocenter group at each grid point. During the analysis we apply three conditions: (1) the depths of the hypocenters used are less than 25 km; (2) each hypocenter group contains at least 25 events; and (3) low-frequency earthquakes are excluded. Our estimate of D95 shows undulating, fine-scale features, such as a varying profile along the same fault. This can be seen at two major fault zones: (1) the Tachikawa fault zone, and (2) the northwest marginal fault zone of the Kanto basin. D95 gets deeper from northwest to southwest along these fault zones, suggesting that a constant cutoff depth cannot be used even along the same fault zone. One pattern in our D95 results is a deepening in the south Kanto region. The reason for this pattern could be that the hypocenters used in this study are contaminated by seismicity near the plate boundary between the Philippine Sea plate and the Eurasian plate. Therefore D95 in south Kanto should be interpreted carefully.
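A minimal version of the grid-based D95 computation described above can be written as follows. The sketch assumes a catalog given as arrays of easting/northing (km) and depth (km); the grid spacing, search radius, depth cutoff and minimum event count follow the numbers quoted in the abstract, but the data structures and the synthetic catalog are hypothetical (the pre-filtering of low-frequency events is assumed to have been done already).

```python
import numpy as np

def d95_grid(x_km, y_km, depth_km, grid_step=10.0, radius=10.0,
             max_depth=25.0, min_events=25):
    """95th percentile of hypocentre depths around each grid node."""
    keep = depth_km < max_depth                 # condition (1): depth < 25 km
    x, y, z = x_km[keep], y_km[keep], depth_km[keep]
    xs = np.arange(x.min(), x.max() + grid_step, grid_step)
    ys = np.arange(y.min(), y.max() + grid_step, grid_step)
    nodes, d95 = [], []
    for gx in xs:
        for gy in ys:
            r = np.hypot(x - gx, y - gy)
            group = z[r <= radius]              # events within 10 km of the node
            if group.size >= min_events:        # condition (2): at least 25 events
                nodes.append((gx, gy))
                d95.append(np.percentile(group, 95.0))
    return np.array(nodes), np.array(d95)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 100.0, 5000)           # synthetic catalogue
    y = rng.uniform(0.0, 100.0, 5000)
    z = rng.uniform(0.0, 20.0, 5000)
    nodes, d95 = d95_grid(x, y, z)
    print(nodes.shape, float(d95.min()), float(d95.max()))
```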
NASA Astrophysics Data System (ADS)
Motoyama, H.; Suzuki, T.; Fukui, K.; Ohno, H.; Hoshina, Y.; Hirabayashi, M.; Fujita, S.
2017-12-01
1. Introduction: It is possible to reveal past climate and environmental change from ice cores drilled in polar ice sheets and glaciers. The 54th Japanese Antarctic Research Expedition conducted several shallow core drillings up to 30 m depth in the inland and coastal areas of the East Antarctic ice sheet. Ice core samples were cut at a thickness of about 5 cm in the cold room of the National Institute of Polar Research and analyzed for ions, water isotopes, dust, and so on. We also conducted dielectric profile (DEP) measurements. The ages of key layers corresponding to large-scale volcanic eruptions were based on Sigl et al. (Nature Climate Change, 2014). 2. Inland ice cores: Ice cores were collected at the NDF site (77°47'14"S, 39°03'34"E, 3754 m.a.s.l.) and the S80 site (80°00'00"S, 40°30'04"E, 3622 m.a.s.l.). Dating of the ice cores was done as follows: water equivalent was calculated from core density and accumulated from the surface, and the relation between depth and cumulative water equivalent was approximated by a quartic equation. We identified key layers from nssSO4^2- peaks corresponding to several large volcanic eruptions, and the accumulation rate was kept constant between the key layers. As a result, the deepest ice was estimated to date from around 1360 AD at NDF and around 1400 AD at S80. 3. Coastal ice core: An ice core was collected at the coastal H15 site (69°04'10"S, 40°44'51"E, 1030 m.a.s.l.). Dating followed the same procedure: water equivalent was calculated from core density, accumulated from the surface, and the depth - cumulative water equivalent relation was approximated by a quartic equation. Summer (December) and winter (June) layers were basically identified from the seasonal cycle of the water isotopes (δD or δ18O). In addition to the isotope seasonality, the following were confirmed: the SO4^2-/Na+ maximum occurs earlier in time than the water isotope maximum; the MSA maximum occurs at about the same time as the water isotope maximum; and Na+ is maximal immediately after the local maximum of the water isotope. The deepest age was estimated to be around 1940 AD. 4. Example of results: In the inland area, the annual average surface mass balance decreased from 1450 to 1850 AD, but it has increased since 1850 AD. The annual mass balance at coastal H15 is consistent with the results of snow stake measurements.
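The depth-to-age procedure described for the inland cores can be sketched numerically: cumulative water equivalent is computed from density, a quartic is fitted to the depth versus cumulative water equivalent relation, and ages are interpolated linearly in cumulative water equivalent between volcanic key layers of known age (constant accumulation between key layers). The core length, densities and key-layer values below are placeholders, not the NDF or S80 data.

```python
import numpy as np

# Hypothetical core: 5 cm samples, measured density (kg/m3)
thickness = np.full(600, 0.05)                      # m per sample (30 m core)
density = np.linspace(350.0, 550.0, 600)            # illustrative firn densification

# Cumulative water equivalent from the surface (m w.e.) and sample depths (m)
cum_we = np.cumsum(density * thickness) / 1000.0
depth = np.cumsum(thickness)

# Quartic approximation of the depth - cumulative w.e. relation
we_of_depth = np.poly1d(np.polyfit(depth, cum_we, 4))

# Volcanic key layers: (depth in core [m], calendar year), placeholder values
key_layers = [(0.0, 2013.0), (12.4, 1816.0), (27.9, 1459.0)]
key_we = np.array([we_of_depth(d) for d, _ in key_layers])
key_age = np.array([a for _, a in key_layers])

# Constant accumulation between key layers: age is linear in cumulative w.e.
sample_age = np.interp(we_of_depth(depth), key_we, key_age)
print("estimated age at 20 m depth:", round(float(np.interp(20.0, depth, sample_age)), 1))
```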
Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy
Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.
1998-01-01
We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.
NASA Astrophysics Data System (ADS)
Chen, Jun; Zhang, Xiangguang; Xing, Xiaogang; Ishizaka, Joji; Yu, Zhifeng
2017-12-01
Quantifying the diffuse attenuation coefficient of photosynthetically available radiation (Kpar) can improve our knowledge of euphotic depth (Zeu) and biomass heating effects in the upper layers of oceans. An algorithm to semianalytically derive Kpar from remote sensing reflectance (Rrs) is developed for the global open oceans. This algorithm includes the following two portions: (1) a neural network model for deriving the diffuse attenuation coefficients (Kd) that considers the residual error in satellite Rrs, and (2) a three-band depth-dependent Kpar algorithm (TDKA) for describing the spectrally selective attenuation of underwater solar radiation in the open oceans. This algorithm is evaluated with both in situ PAR profile data and satellite images, and the results show that it can produce acceptable PAR profile estimations while clearly removing the impacts of satellite residual errors on Kpar estimations. Furthermore, the performance of the TDKA algorithm is evaluated by its applicability to Zeu derivation and to simulation of the mean temperature within the mixed layer depth (TML), and the results show that it can significantly decrease the uncertainty in both compared with the classical chlorophyll-a concentration-based Kpar algorithm. Finally, the TDKA algorithm is applied to simulating biomass heating effects in the Sargasso Sea near Bermuda; with the new Kpar data, it is found that biomass heating effects can lead to a maximum positive temperature difference of 3.4°C in the upper layers but a maximum negative difference of 0.67°C in the deeper layers.
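The link between Kpar and euphotic depth can be illustrated with the simplest, depth-constant case, in which PAR decays as PAR(z) = PAR(0)·exp(-Kpar·z) and Zeu is the depth where PAR falls to 1% of its surface value. This is only an illustration of the definition, not the TDKA algorithm itself; the second function shows how the same 1% criterion generalizes to a depth-dependent Kpar(z) by numerical integration, with a hypothetical profile as input.

```python
import numpy as np

def zeu_constant_k(kpar):
    """Euphotic depth (m) for a depth-constant Kpar (1/m): PAR drops to 1%."""
    return np.log(100.0) / kpar

def zeu_depth_dependent(kpar_profile, dz=0.5):
    """Euphotic depth for a depth-dependent Kpar.

    kpar_profile : callable returning Kpar (1/m) at depth z (m)
    """
    z, optical_depth = 0.0, 0.0
    while optical_depth < np.log(100.0):        # integrate until the 1% light level
        optical_depth += kpar_profile(z) * dz
        z += dz
    return z

if __name__ == "__main__":
    print(round(zeu_constant_k(0.05), 1))        # ~92 m for very clear water
    # Hypothetical profile: attenuation decreasing with depth
    print(round(zeu_depth_dependent(lambda z: 0.08 * np.exp(-z / 50.0) + 0.03), 1))
```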
NASA Astrophysics Data System (ADS)
Fujiwara, Yoshiaki; Yamasato, Hitoshi; Shimbori, Toshiki; Sakai, Takayuki
2014-12-01
Since the caldera-forming eruption of Miyakejima Volcano in 2000, low-frequency (LF) earthquakes have occurred frequently beneath the caldera. Some of these LF earthquakes are accompanied by emergent infrasonic pulses that start with dilatational phases and may be accompanied by the eruption of small amounts of ash. The estimated source locations of both the LF earthquakes and the infrasonic signals are within the vent at shallow depth. Moreover, the maximum seismic amplitude roughly correlates with the maximum amplitude of the infrasonic pulses. From these observations, we hypothesized that the infrasonic waves were excited by partial subsidence within the vent associated with the LF earthquakes. To verify our hypothesis, we used the infrasonic data to estimate the volumetric change due to the partial subsidence associated with each LF earthquake. The results showed that partial subsidence in the vent can well explain the generation of infrasonic waves.
A simplified 137Cs transport model for estimating erosion rates in undisturbed soil.
Zhang, Xinbao; Long, Yi; He, Xiubin; Fu, Jiexiong; Zhang, Yunqi
2008-08-01
137Cs is an artificial radionuclide with a half-life of 30.12 years that was released into the environment as a result of atmospheric testing of thermonuclear weapons, primarily during the 1950s-1970s, with the maximum rate of 137Cs fallout from the atmosphere in 1963. 137Cs fallout, which reaches the ground mostly with precipitation, is strongly and rapidly adsorbed by fine particles in the surface horizons of the soil. Its subsequent redistribution is associated with movements of the soil or sediment particles. The 137Cs tracing technique has been used for assessment of soil losses for both undisturbed and cultivated soils. For undisturbed soils, a simple profile-shape model was developed in 1990 to describe the 137Cs depth distribution in the profile, where the maximum 137Cs occurs in the surface horizon and decreases exponentially with depth. The model implies that the total 137Cs fallout was deposited on the earth's surface in 1963 and that the 137Cs profile shape has not changed with time. The model has been widely used for assessment of soil losses on undisturbed land. However, temporal variations of the 137Cs depth distribution in undisturbed soils after deposition, caused by downward transport processes, are not considered in that simple profile-shape model, and soil losses are therefore overestimated by it. On the basis of the erosion assessment model developed by Walling, D.E., He, Q. [1999. Improved models for estimating soil erosion rates from cesium-137 measurements. Journal of Environmental Quality 28, 611-622], we discuss the 137Cs transport process in the eroded soil profile, make some simplifications to the model, and develop a method to estimate the soil erosion rate more expediently. To compare the soil erosion rates calculated by the simple profile-shape model and the simple transport model, the soil losses corresponding to different 137Cs loss proportions of the reference inventory at the Kaixian site of the Three Gorges Region, China, are estimated by the two models. The overestimation of soil loss by the previous simple profile-shape model clearly increases with the time elapsed between 1963 and the sampling year and with the 137Cs loss proportion of the reference inventory. For 137Cs loss proportions of 20-80% of the reference inventory at the Kaixian site in 2004, the annual soil loss depths estimated by the new simplified transport model are only 57.90-56.24% of the values estimated by the previous model.
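The logic of the simple profile-shape model referred to above can be written compactly. Assuming an exponential depth distribution of the 137Cs areal activity with relaxation mass depth h0, a measured loss of a fraction X of the reference inventory corresponds to removal of a surface layer of mass depth h = -h0·ln(1 - X), spread over the years since 1963. The relaxation depth, bulk density and sampling year below are placeholders, and this is the earlier model whose overestimation the paper discusses, not the authors' simplified transport model.

```python
import numpy as np

def erosion_from_cs137(loss_fraction, h0=4.0, bulk_density=1350.0,
                       sample_year=2004, fallout_year=1963):
    """Annual soil loss depth (mm/yr) from the simple profile-shape model.

    loss_fraction : fraction of the reference 137Cs inventory that is missing
    h0            : relaxation mass depth of the 137Cs profile (kg/m2), site specific
    bulk_density  : soil bulk density (kg/m3)
    """
    eroded_mass_depth = -h0 * np.log(1.0 - loss_fraction)   # kg/m2 removed since 1963
    eroded_thickness = eroded_mass_depth / bulk_density      # m removed
    years = sample_year - fallout_year
    return 1000.0 * eroded_thickness / years                 # mm per year

for x in (0.2, 0.5, 0.8):
    print(f"{x:.0%} inventory loss -> {erosion_from_cs137(x):.3f} mm/yr")
```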
Striker, Lora K.; Severance, Tim
1997-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.4 ft. The worst-case contraction scour occurred at the maximum free-surface flow discharge, which was less than the 100-year discharge. Abutment scour ranged from 4.8 to 8.0 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
NASA Astrophysics Data System (ADS)
Zhu, Q.; Xu, Y. P.; Gu, H.
2014-12-01
Traditionally, regional frequency analysis methods were developed for stationary environmental conditions. Nevertheless, recent studies have identified significant changes in hydrological records, leading to the 'death' of stationarity. In addition, uncertainty in hydrological frequency analysis is persistent. This study aims to investigate the impact of one of the most important uncertainty sources, parameter uncertainty, together with non-stationarity, on design rainfall depth in the Qu River Basin, East China. A spatial bootstrap is first proposed to analyze the uncertainty of design rainfall depth estimated by regional frequency analysis based on L-moments and by at-site analysis. Meanwhile, a method combining generalized additive models with a 30-year moving window is employed to analyze the non-stationarity present in the extreme rainfall regime. The results show that the uncertainties of the design rainfall depth with a 100-year return period under stationary conditions, estimated by the regional spatial bootstrap, can reach 15.07% and 12.22% with GEV and PE3 respectively. At the at-site scale, the uncertainties can reach 17.18% and 15.44% with GEV and PE3 respectively. Under non-stationary conditions, the uncertainties of the maximum rainfall depth (corresponding to the design rainfall depth) with 0.01 annual exceedance probability (corresponding to a 100-year return period) are 23.09% and 13.83% with GEV and PE3 respectively. Comparing the 90% confidence intervals, the uncertainty of design rainfall depth resulting from parameter uncertainty is less than that from non-stationary frequency analysis with GEV, but slightly larger with PE3. This study indicates that the spatial bootstrap can be successfully applied to analyze the uncertainty of design rainfall depth at both regional and at-site scales, and the non-stationary analysis shows that the differences between non-stationary quantiles and their stationary equivalents are important for decision makers in water resources management and risk management.
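The at-site part of the uncertainty analysis can be illustrated with an ordinary (non-spatial) bootstrap of a GEV fit: resample the annual maximum series with replacement, refit, and take percentile bounds on the 100-year quantile. This is a generic sketch of the idea, not the paper's spatial bootstrap or its L-moment regional procedure; the synthetic record and bootstrap size are arbitrary.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
annual_max = genextreme.rvs(c=-0.1, loc=80.0, scale=20.0,
                            size=40, random_state=rng)      # synthetic 40-yr record (mm)

def rp100_quantile(sample):
    """100-year return level from a maximum-likelihood GEV fit."""
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1.0 - 1.0 / 100.0, c, loc=loc, scale=scale)

best = rp100_quantile(annual_max)
boot = np.array([rp100_quantile(rng.choice(annual_max, size=annual_max.size,
                                            replace=True))
                 for _ in range(500)])
lo, hi = np.percentile(boot, [5.0, 95.0])                    # 90% interval
print(f"100-yr depth: {best:.1f} mm, 90% CI: [{lo:.1f}, {hi:.1f}] mm")
print(f"relative half-width: {100 * (hi - lo) / (2 * best):.1f}%")
```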
McGarr, Arthur F.; Boettcher, M.; Fletcher, Jon Peter B.; Sell, Russell; Johnston, Malcolm J.; Durrheim, R.; Spottiswoode, S.; Milev, A.
2009-01-01
For one week during September 2007, we deployed a temporary network of field recorders and accelerometers at four sites within two deep, seismically active mines. The ground-motion data, recorded at 200 samples/sec, are well suited to determining source and ground-motion parameters for the mining-induced earthquakes within and adjacent to our network. Four earthquakes with magnitudes close to 2 were recorded with high signal/noise at all four sites. Analysis of seismic moments and peak velocities, in conjunction with the results of laboratory stick-slip friction experiments, were used to estimate source processes that are key to understanding source physics and to assessing underground seismic hazard. The maximum displacements on the rupture surfaces can be estimated from the parameter Rv, where v is the peak ground velocity at a given recording site and R is the hypocentral distance. For each earthquake, the maximum slip and seismic moment can be combined with results from laboratory friction experiments to estimate the maximum slip rate within the rupture zone. Analysis of the four M 2 earthquakes recorded during our deployment and one of special interest recorded by the in-mine seismic network in 2004 revealed maximum slips ranging from 4 to 27 mm and maximum slip rates from 1.1 to 6.3 m/sec. Applying the same analyses to an M 2.1 earthquake within a cluster of repeating earthquakes near the San Andreas Fault Observatory at Depth site, California, yielded similar results for maximum slip and slip rate, 14 mm and 4.0 m/sec.
Accuracy of snow depth estimation in mountain and prairie environments by an unmanned aerial vehicle
NASA Astrophysics Data System (ADS)
Harder, Phillip; Schirmer, Michael; Pomeroy, John; Helgason, Warren
2016-11-01
Quantifying the spatial distribution of snow is crucial to predict and assess its water resource potential and understand land-atmosphere interactions. High-resolution remote sensing of snow depth has been limited to terrestrial and airborne laser scanning and more recently with application of structure from motion (SfM) techniques to airborne (manned and unmanned) imagery. In this study, photography from a small unmanned aerial vehicle (UAV) was used to generate digital surface models (DSMs) and orthomosaics for snow cover at a cultivated agricultural Canadian prairie and a sparsely vegetated Rocky Mountain alpine ridgetop site using SfM. The accuracy and repeatability of this method to quantify snow depth, changes in depth and its spatial variability was assessed for different terrain types over time. Root mean square errors in snow depth estimation from differencing snow-covered and non-snow-covered DSMs were 8.8 cm for a short prairie grain stubble surface, 13.7 cm for a tall prairie grain stubble surface and 8.5 cm for an alpine mountain surface. This technique provided useful information on maximum snow accumulation and snow-covered area depletion at all sites, while temporal changes in snow depth could also be quantified at the alpine site due to the deeper snowpack and consequent higher signal-to-noise ratio. The application of SfM to UAV photographs returns meaningful information in areas with mean snow depth > 30 cm, but the direct observation of snow depth depletion of shallow snowpacks with this method is not feasible. Accuracy varied with surface characteristics, sunlight and wind speed during the flight, with the most consistent performance found for wind speeds < 10 m s-1, clear skies, high sun angles and surfaces with negligible vegetation cover.
Rowan, Elisabeth L.; Hayba, Daniel O.; Nelson, Philip H.; Burns, W. Matthew; Houseknecht, David W.
2003-01-01
Representative compaction curves for the principal lithologies are essential input for reliable models of basin history. Compaction curves influence estimates of maximum burial and erosion, and different compaction curves may produce significantly different thermal histories. Default compaction curves provided by basin modeling packages may or may not be a good proxy for the compaction properties in a given area, and compaction curves in the published literature span a wide range, even within one lithology, e.g., sandstone (see Panel 3). An abundance of geophysical well data for the North Slope, from both government and private sources, provides us with an unusually good opportunity to develop compaction curves for the Cretaceous-Tertiary Brookian sandstones, siltstones, and shales. We examined the sonic and gamma ray logs from 19 offshore wells (see map), where significant erosion is least likely to have occurred. Our data are primarily from the Cretaceous-Tertiary Brookian sequence and are less complete for older sequences. For each well, the fraction of shale (Vsh) at a given depth was estimated from the gamma ray log, and porosity was computed from sonic travel time. By compositing porosities for near-pure sand (Vsh < 1%) and near-pure shale (Vsh > 99%) from many individual wells we obtained data over sufficient depth intervals to define sandstone and shale 'master' compaction curves. A siltstone curve was defined using the sonic-derived porosities for Vsh values of 50%. These compaction curves generally match most of the sonic porosities with an error of 5% or less. Onshore, the curves are used to estimate the depth of maximum burial at the end of Brookian sedimentation: the depth of the sonic-derived porosity profile is adjusted to give the best match with the 'master' compaction curves, and the amount of the depth adjustment is the erosion estimate. Using our compaction curves, erosion estimates on the North Slope range from zero in much of the offshore, to as much as 1500 ft along the coast, and to more than 10,000 ft in the foothills (Panel 3). Compaction curves provide an alternative to vitrinite reflectance for estimating erosion. Vitrinite reflectance data are often very sparse in contrast to well log data and are subject to inconsistencies when measurements are made by different labs. The phenomenon of 'recycling' can also make the reflectance values of dispersed vitrinite problematic for quantifying erosion; recycling is suspected in dispersed vitrinite in North Slope rocks, particularly in the younger, Cretaceous-Tertiary section. The compaction curves defined here are being integrated into our burial history and thermal models to determine the timing of source rock maturation. An example on Panel 3 shows the results of calculating the maturity of the Shublik Fm. at the Tulaga well using two different sets of shale and siltstone compaction curves. Finally, accurate compaction curves improve a model's ability to realistically simulate the pressure regime during burial, including overpressures.
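The way a 'master' compaction curve yields an erosion estimate can be sketched with an exponential (Athy-type) porosity-depth law: fit phi(z) = phi0·exp(-z/c) to the composited porosities, then find the depth shift that best aligns an onshore well's sonic porosities with the master curve; that shift is the erosion estimate. The functional form and all numbers below are illustrative assumptions, not the curves derived in the study.

```python
import numpy as np
from scipy.optimize import curve_fit, minimize_scalar

def athy(z, phi0, c):
    """Exponential porosity-depth (compaction) law."""
    return phi0 * np.exp(-z / c)

# Hypothetical composited offshore data defining a 'master' sandstone curve
rng = np.random.default_rng(2)
z_master = np.linspace(200.0, 4000.0, 60)                    # depth (m)
phi_master = athy(z_master, 0.40, 2500.0) + rng.normal(0.0, 0.01, 60)
(phi0_fit, c_fit), _ = curve_fit(athy, z_master, phi_master, p0=(0.4, 2000.0))

# Hypothetical onshore well that has lost section: porosities are too low for
# their present depth because maximum burial was deeper than present burial.
z_well = np.linspace(200.0, 2500.0, 30)
true_erosion = 800.0                                          # m (synthetic truth)
phi_well = athy(z_well + true_erosion, 0.40, 2500.0)

def misfit(shift):
    """Sum of squared porosity misfit when the well is shifted down by 'shift' m."""
    return np.sum((phi_well - athy(z_well + shift, phi0_fit, c_fit)) ** 2)

est = minimize_scalar(misfit, bounds=(0.0, 3000.0), method="bounded")
print(f"estimated erosion: {est.x:.0f} m (true {true_erosion:.0f} m)")
```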
Frequency Analysis Using Bootstrap Method and SIR Algorithm for Prevention of Natural Disasters
NASA Astrophysics Data System (ADS)
Kim, T.; Kim, Y. S.
2017-12-01
Frequency analysis of hydrometeorological data is one of the most important factors in responding to natural disaster damage and in setting design standards for disaster prevention facilities. Frequency analysis of such data assumes that the observations are statistically stationary, and a parametric method based on the parameters of a probability distribution is usually applied. A parametric method requires a sufficient amount of reliable data; however, snowfall data in Korea are insufficient because the number of snowfall observation days and the mean maximum daily snowfall depth have been decreasing due to climate change. In this study, we conducted frequency analysis of snowfall using the bootstrap method and the SIR algorithm, resampling methods that can overcome the problem of insufficient data. For 58 meteorological stations distributed evenly across Korea, probabilistic snowfall depths were estimated by non-parametric frequency analysis using maximum daily snowfall depth data. The results show that the probabilistic daily snowfall depth decreases at most stations, and the rates of change at most stations were consistent between the parametric and non-parametric frequency analyses. This study shows that resampling methods can be used for frequency analysis of snowfall depth when observed samples are insufficient, and they can be applied to other natural disasters with seasonal characteristics, such as summer typhoons. Acknowledgment: This research was supported by a grant (MPSS-NH-2015-79) from the Disaster Prediction and Mitigation Technology Development Program funded by the Korean Ministry of Public Safety and Security (MPSS).
Estimating cross-slope exchange from drifter tracks and from glider sections
NASA Astrophysics Data System (ADS)
Huthnance, John M.
2017-04-01
In areas of complex topography, it can be difficult to define "along-slope" or "cross-slope" direction, yet transport estimates are sensitive to these definitions, especially as along-slope flow is favoured by geostrophy. However, if drifter positions and hence underlying water depths are recorded regularly, we know where and when depth contours are crossed by the drifters, and hence by the water assuming that the drifters follow the water. An approach is discussed for deriving statistics of contour-crossing speed, via depth changes experienced by the drifters and an effective slope. The transport equation for (e.g.) salinity S can be reduced to an explicit equation for effective diffusivity K if we assume steady along-slope flow with known total transport Q, a salinity maximum at its "core" and effective diffusion to less saline waters to either side. Salinity gradients along the flow and to either side are needed to calculate K. Gliders provide a means of measuring salinity gradients in this context. Measurements at the continental shelf edge south-west of England and west of Scotland illustrate the calculation. Both approaches give overall rather than process-related estimates. There is limited scope for process discrimination according to (i) how often drifter locations are recorded and (ii) the time-intervals into which estimates are "binned". (i) Frequent recording may record more crossings owing to processes of short time scale, albeit these are less significant for slowly-evolving water contents. (ii) Sufficient samples for statistically significant estimates of exchange entail "bins" spanning some weeks or months for typically-limited numbers of drifters or gliders.
NASA Astrophysics Data System (ADS)
Fukahata, Y.; Fukushima, Y.
2009-05-01
On 14 June 2008, the Iwate-Miyagi Nairiku earthquake struck northeast Japan, where active seismicity has been observed under an east-west compressional stress field. The magnitude and hypocenter depth of the earthquake are reported as Mj 7.2 and 8 km, respectively. The earthquake is considered to have occurred on a west-dipping reverse fault with a roughly north-south strike. The earthquake caused significant surface displacements, which were detected by PALSAR, a Synthetic Aperture Radar (SAR) onboard the Japanese ALOS satellite. Several pairs of PALSAR images from six different paths are available to measure the coseismic displacements. Interferometric SAR (InSAR) is useful for obtaining crustal displacements in regions where the coseismic displacement is not too large (less than 1 m), whereas range and azimuth offsets provide displacement measurements up to a few meters over the whole processed area. We inverted the obtained displacement data to estimate the slip distribution on the fault. Since the precise location and direction of the fault are not well known, the inverse problem is nonlinear. Following the method of Fukahata and Wright (2008), we resolved the weak non-linearity based on Akaike's Bayesian Information Criterion. We first estimated the slip distribution by assuming pure dip slip. The optimal fault geometry was estimated at a dip of 26 degrees and a strike of 203 degrees. The maximum slip is more than 8 m and most slip is concentrated at shallow depths (less than 4 km). The azimuth offset data suggest non-negligible right-lateral slip components, so we next estimated the slip distribution without fixing the rake angle. Again, a large slip area with a maximum slip of about 8 m at shallow depth was obtained. Such slip models contradict our existing common sense: our results indicate that the released strain is more than 10^-3. Range and azimuth offsets computed from SAR images obtained from both ascending and descending orbits appear to be more consistent with conjugate fault slip, which would lower the stress drop, possibly to a level typical of this kind of earthquake.
Probable flood predictions in ungauged coastal basins of El Salvador
Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.
2008-01-01
A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated, revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. © 2008 ASCE.
Application of Radar-Rainfall Estimates to Probable Maximum Precipitation in the Carolinas
NASA Astrophysics Data System (ADS)
England, J. F.; Caldwell, R. J.; Sankovich, V.
2011-12-01
Extreme storm rainfall data are essential in the assessment of potential impacts on design precipitation amounts, which are used in flood design criteria for dams and nuclear power plants. Probable Maximum Precipitation (PMP) from National Weather Service Hydrometeorological Report 51 (HMR51) is currently used for design rainfall estimates in the eastern U.S. The extreme storm database associated with the report has not been updated since the early 1970s. In the past several decades, several extreme precipitation events have occurred that have the potential to alter the PMP values, particularly across the Southeast United States (e.g., Hurricane Floyd 1999). Unfortunately, these and other large precipitation-producing storms have not been analyzed with the detail required for application in design studies. This study focuses on warm-season tropical cyclones (TCs) in the Carolinas, as these systems are the critical maximum rainfall mechanisms in the region. The goal is to discern if recent tropical events may have reached or exceeded current PMP values. We have analyzed 10 storms using modern datasets and methodologies that provide enhanced spatial and temporal resolution relative to point measurements used in past studies. Specifically, hourly multisensor precipitation reanalysis (MPR) data are used to estimate storm total precipitation accumulations at various durations throughout each storm event. The accumulated grids serve as input to depth-area-duration calculations. Individual storms are then maximized using back-trajectories to determine source regions for moisture. The development of open source software has made this process time and resource efficient. Based on the current methodology, two of the ten storms analyzed have the potential to challenge HMR51 PMP values. Maximized depth-area curves for Hurricane Floyd indicate exceedance at 24- and 72-hour durations for large area sizes, while Hurricane Fran (1996) appears to exceed PMP at large area sizes for short-duration, 6-hour storms. Utilizing new methods and data, however, requires careful consideration of the potential limitations and caveats associated with the analysis and further evaluation of the newer storms within the context of historical storms from HMR51. Here, we provide a brief background on extreme rainfall in the Carolinas, along with an overview of the methods employed for converting MPR to depth-area relationships. Discussion of the issues and limitations, evaluation of the various techniques, and comparison to HMR51 storms and PMP values are also presented.
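The depth-area-duration step can be illustrated with a much-simplified calculation on a gridded hourly field: accumulate rainfall over each duration window, take the storm maximum per cell, and for each area size average the deepest cells. This sketch ignores the contiguity requirement of a full DAD analysis and the storm maximization step; array shapes and numbers are placeholders, not MPR data.

```python
import numpy as np

def depth_area_duration(precip, durations, area_cells):
    """Simplified DAD table from an hourly gridded field.

    precip     : array (n_hours, ny, nx) of hourly depths (mm)
    durations  : window lengths in hours
    area_cells : area sizes expressed as numbers of grid cells
    Returns {(duration, area): mean depth of the deepest 'area' cells}.
    """
    n_hours = precip.shape[0]
    csum = np.concatenate([np.zeros((1,) + precip.shape[1:]),
                           np.cumsum(precip, axis=0)])
    table = {}
    for d in durations:
        rolling = csum[d:] - csum[:n_hours - d + 1]        # d-hour accumulations
        storm_max = rolling.max(axis=0).ravel()            # storm maximum per cell
        ranked = np.sort(storm_max)[::-1]                  # deepest cells first
        for a in area_cells:
            table[(d, a)] = ranked[:a].mean()
    return table

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    precip = rng.gamma(0.3, 2.0, size=(72, 50, 50))        # 72 h synthetic storm
    dad = depth_area_duration(precip, durations=[6, 24, 72],
                              area_cells=[1, 100, 1000])
    for key, val in sorted(dad.items()):
        print(key, round(float(val), 1))
```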
Mid-depth temperature maximum in an estuarine lake
NASA Astrophysics Data System (ADS)
Stepanenko, V. M.; Repina, I. A.; Artamonov, A. Yu; Gorin, S. L.; Lykossov, V. N.; Kulyamin, D. V.
2018-03-01
The mid-depth temperature maximum (TeM) was measured in the estuarine Bol’shoi Vilyui Lake (Kamchatka peninsula, Russia) in summer 2015. We applied the 1D k-ɛ model LAKE to this case and found that it successfully simulates the phenomenon. We argue that the main prerequisite for mid-depth TeM development is a salinity increase below the freshwater mixed layer that is sharp enough for temperature to increase with depth without causing convective mixing or double diffusion there. Given that this condition is satisfied, the TeM magnitude is controlled by physical factors which we identify as: radiation absorption below the mixed layer, mixed-layer temperature dynamics, vertical heat conduction and water-sediment heat exchange. In addition to these, we formulate the mechanism of temperature maximum 'pumping', resulting from the phase shift between the diurnal cycles of mixed-layer depth and temperature maximum magnitude. Based on the LAKE model results we quantify the contribution of the above listed mechanisms and find their individual significance highly sensitive to water turbidity. Relying on the physical mechanisms identified, we define the environmental conditions favouring summertime TeM development in salinity-stratified lakes as: a small mixed-layer depth (roughly, less than 2 m), transparent water, a daytime wind maximum and cloudless weather. We exemplify the effect of mixed-layer depth on TeM with a set of selected lakes.
Climate variability, rice production and groundwater depletion in India
NASA Astrophysics Data System (ADS)
Bhargava, Alok
2018-03-01
This paper modeled the proximate determinants of rice outputs and groundwater depths in 27 Indian states during 1980-2010. Dynamic random effects models were estimated by maximum likelihood at state and well levels. The main findings from models for rice outputs were that temperatures and rainfall levels were significant predictors, and the relationships were quadratic with respect to rainfall. Moreover, nonlinearities with respect to population changes indicated greater rice production with population increases. Second, groundwater depths were positively associated with temperatures and negatively with rainfall levels and there were nonlinear effects of population changes. Third, dynamic models for in situ groundwater depths in 11 795 wells in mainly unconfined aquifers, accounting for latitudes, longitudes and altitudes, showed steady depletion. Overall, the results indicated that population pressures on food production and environment need to be tackled via long-term healthcare, agricultural, and groundwater recharge policies in India.
Leslie A. Viereck; Nancy R. Werdin-Pfisterer; Phyllis C. Adams; Kenji Yoshikawa
2008-01-01
Maximum thaw depths were measured annually in an unburned stand, a heavily burned stand, and a fireline in and adjacent to the 1971 Wickersham fire. Maximum thaw in the unburned black spruce stand ranged from 36 to 52 cm. In the burned stand, thaw increased each year to a maximum depth of 302 cm in 1995. In 1996, the entire layer of seasonal frost remained, creating a...
Mapping snow depth return levels: smooth spatial modeling versus station interpolation
NASA Astrophysics Data System (ADS)
Blanchet, J.; Lehning, M.
2010-12-01
For adequate risk management in mountainous countries, hazard maps for extreme snow events are needed. This requires the computation of spatial estimates of return levels. In this article we use recent developments in extreme value theory and compare two main approaches for mapping snow depth return levels from in situ measurements. The first one is based on the spatial interpolation of pointwise extremal distributions (the so-called Generalized Extreme Value distribution, GEV henceforth) computed at station locations. The second one is new and based on the direct estimation of a spatially smooth GEV distribution with the joint use of all stations. We compare and validate the different approaches for modeling annual maximum snow depth measured at 100 sites in Switzerland during winters 1965-1966 to 2007-2008. The results show a better performance of the smooth GEV distribution fitting, in particular where the station network is sparser. Smooth return level maps can be computed from the fitted model without any further interpolation. Their regional variability can be revealed by removing the altitudinal dependent covariates in the model. We show how return levels and their regional variability are linked to the main climatological patterns of Switzerland.
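The 'smooth' alternative can be sketched as a GEV whose location parameter varies linearly with a covariate (altitude here), fitted jointly to all stations by maximizing the likelihood. This is a deliberately minimal stand-in for the paper's spatially smooth model, with synthetic data and a single covariate; the real model uses richer spatial and altitudinal terms.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(3)

# Synthetic network: 100 stations, 40 winters of annual maximum snow depth (cm)
n_sta, n_yr = 100, 40
altitude = rng.uniform(300.0, 2500.0, n_sta)                 # m a.s.l.
true_loc = 20.0 + 0.08 * altitude                            # location grows with altitude
snow = genextreme.rvs(c=-0.1, loc=np.tile(true_loc, (n_yr, 1)),
                      scale=25.0, size=(n_yr, n_sta), random_state=rng)

alt_rep = np.tile(altitude, (n_yr, 1)).ravel()
obs = snow.ravel()

def nll(theta):
    """Negative log-likelihood of a GEV with loc = a0 + a1 * altitude."""
    a0, a1, log_scale, shape = theta
    loc = a0 + a1 * alt_rep
    return -np.sum(genextreme.logpdf(obs, shape, loc=loc, scale=np.exp(log_scale)))

fit = minimize(nll, x0=np.array([10.0, 0.05, np.log(20.0), -0.1]),
               method="Nelder-Mead", options={"maxiter": 5000})
a0, a1, log_scale, shape = fit.x

# A smooth 50-year return level map follows directly from the fitted parameters
rl50 = genextreme.ppf(1.0 - 1.0 / 50.0, shape, loc=a0 + a1 * altitude,
                      scale=np.exp(log_scale))
print("fitted a0, a1:", round(a0, 2), round(a1, 4))
print("50-yr return level range (cm):", round(rl50.min(), 1), round(rl50.max(), 1))
```

Because every station contributes to the same few parameters, sparse parts of the network borrow strength from the rest, which is the advantage the abstract reports over station-by-station fitting and interpolation.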
Seismotectonic Models of the Three Recent Devastating SCR Earthquakes in India
NASA Astrophysics Data System (ADS)
Mooney, W. D.; Kayal, J.
2007-12-01
During the last decade, three devastating earthquakes, the 1993 Killari (Mb 6.3), 1997 Jabalpur (Mb 6.0) and 2001 Bhuj (Mw 7.7) events, occurred in the Stable Continental Region (SCR) of Peninsular India. First, the September 30, 1993 Killari earthquake (Mb 6.3) occurred in the Deccan province of central India, in the Latur district of Maharashtra state. The local geology in the area is obscured by the late Cretaceous-Eocene basalt flows referred to as the Deccan traps, which makes it difficult to recognize geological surface faults that could be associated with the Killari earthquake. The epicentre was reported at 18.09°N and 76.62°E, and the focal depth of 7 ± 1 km was precisely estimated by waveform inversion (Chen and Kao, 1995). The maximum intensity reached VIII, and the earthquake caused a loss of about 10,000 lives and severe damage to property. The May 22, 1997 Jabalpur earthquake (Mb 6.0), with epicentre at 23.08°N and 80.06°E, is a well studied earthquake in the Son-Narmada-Tapti (SONATA) seismic zone. A notable aspect of this earthquake is that it was the first significant event in India to be recorded by 10 broadband seismic stations, which were established in 1996 by the India Meteorological Department (IMD). The focal depth was well estimated using the "converted phases" of the broadband seismograms and was placed in the lower crust at 35 ± 1 km, similar to the moderate earthquakes reported from the Amazonas ancient rift system in the SCR of South America. The maximum intensity of the Jabalpur earthquake reached VIII on the MSK scale, and the earthquake killed about 50 people in the Jabalpur area. Finally, the Bhuj earthquake (Mw 7.7) of January 26, 2001 in the Gujarat state, northwestern India, was felt across the whole country and killed about 20,000 people. The maximum intensity level reached X. The epicenter of the earthquake is reported at 23.40°N and 70.28°E, and the focal depth was well estimated at 25 km. A total of about 3000 aftershocks (M > 1.0) were recorded until mid April, 2001. About 500 aftershocks (M > 2.0) are well located; the epicenter map shows an aftershock cluster area, about 60 km x 30 km, between 70.0-70.6°E and 23.3-23.6°N; almost all the aftershocks occurred within the high intensity (IX) zone. The source area of the main shock and most of the aftershocks is at a depth range of 20-25 km. The fault-plane solutions suggest that the main shock originated at the base of the paleo-rift zone on a south-dipping, hidden reverse fault; the rupture propagated both NE and NW. The aftershocks occurred by left-lateral strike-slip motion along the NE-trending fault, compatible with the main shock solution, and by pure reverse to right-lateral strike-slip motion along the NW-trending conjugate fault. Understanding these earthquake sequences may shed new light on the tectonics and active faults in the source regions.
NASA Astrophysics Data System (ADS)
Schmidt, J. P.; Bilek, S. L.; Worthington, L. L.; Schmandt, B.; Aster, R. C.
2017-12-01
The Socorro Magma Body (SMB) is a thin, sill-like intrusion with its top at 19 km depth, covering approximately 3400 km2 within the Rio Grande Rift. InSAR studies show crustal uplift patterns linked to SMB inflation, with deformation rates of 2.5 mm/yr in the area of maximum uplift and some peripheral subsidence. Our understanding of the emplacement history and shallow structure above the SMB is limited. We use a large seismic deployment to explore seismicity and crustal attenuation in the SMB region, focusing on the area of highest observed uplift to investigate the possible existence of fluid or magma in the upper crust. We would expect to see shallower earthquakes and/or higher attenuation if high heat flow, fluid or magma is present in the upper crust. Over 800 short-period vertical-component geophones situated above the northern portion of the SMB were deployed for two weeks in 2015. These data are combined with other broadband and short-period seismic stations to detect and locate earthquakes as well as to estimate seismic attenuation. We use phase arrivals from the full dataset to relocate a set of 33 local/regional earthquakes recorded during the deployment. We also measure amplitude decay after the S-wave arrival to estimate coda attenuation caused by scattering of seismic waves and anelastic processes. Coda attenuation is estimated using the single backscatter method described by Aki and Chouet (1975), filtering the seismograms at 6, 9 and 12 Hz center frequencies. Earthquakes occurred at 2-13 km depth during the deployment, but no spatial patterns linked with the high uplift region were observed over this short duration. Attenuation results for this deployment suggest Q values ranging from 130 to 2000, averaging around 290, comparable to Q estimates from other studies of the western US. With our dense station coverage, we explore attenuation over smaller scales and find higher attenuation for stations in the area of maximum uplift relative to stations outside of it, which could indicate upper crustal heterogeneities associated with shallow processes above the magma body in this area.
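The single backscatter measurement can be sketched as follows: in the Aki and Chouet (1975) model the coda envelope decays as A(t) ≈ A0·t^(-1)·exp(-π f t / Qc), so ln(A(t)·t) is linear in lapse time t and Qc follows from the slope. The trace below is synthetic and the filter and window choices are placeholders; the study's actual processing (instrument response, windows relative to the S arrival, multiple centre frequencies) is not reproduced here.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def coda_q(trace, dt, f_center, t_start, t_end, rel_bandwidth=0.5):
    """Coda Q at one centre frequency from the single backscatter model."""
    nyq = 0.5 / dt
    lo = f_center * (1.0 - rel_bandwidth / 2.0)
    hi = f_center * (1.0 + rel_bandwidth / 2.0)
    b, a = butter(4, [lo / nyq, hi / nyq], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, trace)))        # coda envelope
    t = np.arange(len(trace)) * dt
    win = (t >= t_start) & (t <= t_end)
    # ln(A(t) * t) = ln(A0) - (pi * f / Qc) * t   ->   Qc = -pi * f / slope
    slope, _ = np.polyfit(t[win], np.log(envelope[win] * t[win]), 1)
    return -np.pi * f_center / slope

if __name__ == "__main__":
    dt, f, q_true = 0.005, 6.0, 290.0
    t = np.arange(0.0, 60.0, dt)
    rng = np.random.default_rng(0)
    tt = np.maximum(t, 1.0)                                  # avoid the t=0 singularity
    decay = tt ** -1.0 * np.exp(-np.pi * f * tt / q_true)
    synthetic = rng.normal(size=t.size) * decay              # noise shaped by coda decay
    print("estimated Qc:", round(coda_q(synthetic, dt, f, 10.0, 50.0), 1))
```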
NASA Astrophysics Data System (ADS)
Collins, M. S.; Hertzberg, J. E.; Mekik, F.; Schmidt, M. W.
2017-12-01
Based on the thermodynamics of solid-solution substitution of Mg for Ca in biogenic calcite, magnesium to calcium ratios in planktonic foraminifera have been proposed as a means by which variations in habitat water temperatures can be reconstructed. Doing this accurately has been a problem, however, as we demonstrate that various calibration equations provide disparate temperature estimates from the same Mg/Ca dataset. We examined both new and published data to derive a globally applicable temperature-Mg/Ca relationship and from this relationship to accurately predict habitat depth for Neogloboquadrina dutertrei - a deep chlorophyll maximum dweller. N. dutertrei samples collected from Atlantic core tops were analyzed for trace element compositions at Texas A&M University, and the measured Mg/Ca ratios were used to predict habitat temperatures using multiple pre-existing calibration equations. When combining Atlantic and previously published Pacific Mg/Ca datasets for N. dutertrei, a notable dissolution effect was evident. To overcome this issue, we used the G. menardii Fragmentation Index (MFI) to account for dissolution and generated a multi-basin temperature equation using multiple linear regression to predict habitat temperature. However, the correlations between Mg/Ca and temperature, as well as the calculated MFI percent dissolved, suggest that N. dutertrei Mg/Ca ratios are affected equally by both variables. While correcting for dissolution makes habitat depth estimation more accurate, the lack of a definitively strong correlation between Mg/Ca and temperature is likely an effect of variable habitat depth for this species because most calibration equations have assumed a uniform habitat depth for this taxon.
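For context, the thermodynamic basis referred to above is usually expressed as an exponential calibration of the form Mg/Ca = B·exp(A·T), so temperature is recovered as T = ln((Mg/Ca)/B)/A. The constants below are illustrative values of the order used in published multispecies calibrations, not the equation derived in this study; the paper's point is precisely that different calibrations (and dissolution corrections such as MFI) shift the resulting temperatures and inferred habitat depths.

```python
import numpy as np

def mgca_to_temperature(mgca, A=0.09, B=0.38):
    """Invert an exponential Mg/Ca calibration: Mg/Ca = B * exp(A * T).

    mgca : Mg/Ca ratio in mmol/mol
    A, B : calibration constants (illustrative generic values; species- and
           dissolution-specific calibrations differ)
    """
    return np.log(np.asarray(mgca) / B) / A

# Example: the same measured ratios under two hypothetical calibrations
ratios = np.array([1.2, 1.6, 2.1])                 # mmol/mol
print("calibration 1 (A=0.09, B=0.38):", mgca_to_temperature(ratios).round(1))
print("calibration 2 (A=0.10, B=0.30):", mgca_to_temperature(ratios, 0.10, 0.30).round(1))
```

Running this shows several degrees of spread for identical ratios, which is why the choice of calibration dominates the inferred habitat depth.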
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puckett, T.M.
1991-05-01
The presence of abundant and diverse sighted ostracodes in chalk and marl of the Demopolis Chalk (Campanian and Maastrichtian) in Alabama and Mississippi strongly suggests that the Late Cretaceous sea floor was within the photic zone. The maximum depth of deposition is calculated from an equation based on eye morphology and efficiency and estimates of the vertical light attenuation. In this equation, K, the vertical light attenuation coefficient, is the most critical variable because it is the divisor for the rest of the equation. Rates of accumulation of coccoliths during the Cretaceous are estimated and are on the same order as those in modern areas of high phytoplankton production, suggesting similar pigment and coccolith concentrations in the water column. Values of K are known for a wide range of water masses and pigment concentrations, including areas of high phytoplankton production; thus light attenuation through the Cretaceous seas can be estimated reliably. Waters in which attenuation is due only to biogenic matter (conditions that result in deposition of relatively pure chalk) have values of K ranging between 0.2 and 0.3. Waters rich in phytoplankton and mud (conditions that result in deposition of marl) have K values as great as 0.5. Substituting these values for K results in a depth range of 65 to 90 m for deposition of chalk and a depth of 35 m for deposition of marl. These depth values suggest that many Cretaceous chalks and marls around the world were deposited under relatively shallow conditions.
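The depth calculation described above amounts to inverting the Beer-Lambert decay of downwelling light: if E(z) = E0·exp(-K·z) and the eye can function down to some minimum fraction E_min/E0 of surface irradiance, the maximum depth is z_max = ln(E0/E_min)/K, with K in the denominator as noted. The minimum-detectable fraction below is a placeholder chosen only to reproduce the order of magnitude of the quoted depths; the paper derives its threshold from eye morphology and efficiency.

```python
import numpy as np

def max_photic_depth(k, min_fraction):
    """Maximum depth (m) at which irradiance exceeds min_fraction of surface light,
    for a vertical attenuation coefficient k (1/m)."""
    return np.log(1.0 / min_fraction) / k

threshold = 1.0e-8            # placeholder detection threshold (fraction of surface light)
for k in (0.2, 0.3, 0.5):     # chalk-like vs marl-like water
    print(f"K = {k:.1f} /m  ->  max depth ~ {max_photic_depth(k, threshold):.0f} m")
```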
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickinson, W.W.; Law, B.E.
1985-05-01
The burial history of Upper Cretaceous and Tertiary rocks in the northern Green River basin is difficult to reconstruct for three reasons: (1) most of these rocks do not crop out, (2) there are few stratigraphic markers in the subsurface, and (3) regional uplift beginning during the Pliocene caused erosion that removed most upper Tertiary rocks. To better understand the burial and thermal history of the basin, published vitrinite reflectance (Ro) data from three wells were compared to TTI (time-temperature index) maturation units calculated from Lopatin reconstructions. For each well, burial reconstructions were made as follows. Maximum depth of burial was first estimated by stratigraphic and structural evidence and by extrapolation to a paleosurface intercept of Ro = 0.2%. This burial was completed by the early Oligocene (35 Ma), after which there was no net deposition. The present geothermal gradient in each well was used because there is no geologic evidence for elevated paleotemperature gradients. Using these reconstructions, calculated TTI units agreed with measured Ro values when minor adjustments were made to the estimated burial depths. Reconstructed maximum burials were deeper than present by 2500-3000 ft (762-914 m) in the Pacific Creek area, by 4000-4500 ft (1219-1372 m) in the Pinedale area, and by 0-1000 ft (0-305 m) in the Merna area. However, at Pinedale geologic evidence can only account for about 3000 ft (914 m) of additional burial. This discrepancy is explained by isoreflectance lines, which parallel the Pinedale anticline and indicate that approximately 2000 ft (610 m) of structural relief developed after maximum burial. In other parts of the basin, isoreflectance lines also reveal significant structural deformation after maximum burial during early Oligocene to early Pliocene time.
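The TTI bookkeeping used for this comparison follows Lopatin's rule that maturation rate doubles every 10°C; in continuous form TTI = Σ 2^((T-105)/10)·Δt, with T in °C, Δt in Myr, and the 100-110°C interval as the reference. The burial and thermal history below is a made-up reconstruction used only to show the calculation; it is not one of the three Green River basin wells.

```python
import numpy as np

def tti(ages_ma, temps_c, steps=1000):
    """Lopatin/Waples time-temperature index for a temperature history.

    ages_ma : ages (Ma), decreasing toward the present, at which temperature is known
    temps_c : temperature (deg C) at those ages
    """
    t = np.linspace(ages_ma[0], ages_ma[-1], steps)              # Ma
    temp = np.interp(t, ages_ma[::-1], temps_c[::-1])            # linear history
    dt_myr = abs(t[1] - t[0])
    return np.sum(2.0 ** ((temp - 105.0) / 10.0) * dt_myr)       # reference: 100-110 C

# Hypothetical well: deposited at 80 Ma, heated to 140 C by 35 Ma, then no net burial
ages = np.array([80.0, 35.0, 0.0])
temps = np.array([20.0, 140.0, 140.0])
print(f"TTI = {tti(ages, temps):.0f}")   # compare with measured Ro via published tables
```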
Colonization of the deep sea by fishes
Priede, I G; Froese, R
2013-01-01
Analysis of maximum depth of occurrence of 11 952 marine fish species shows a global decrease in species number (N) with depth (x; m): log10N = −0·000422x + 3·610000 (r2 = 0·948). The rate of decrease is close to global estimates for change in pelagic and benthic biomass with depth (−0·000430), indicating that species richness of fishes may be limited by food energy availability in the deep sea. The slopes for the Classes Myxini (−0·000488) and Actinopterygii (−0·000413) follow this trend but Chondrichthyes decrease more rapidly (−0·000731) implying deficiency in ability to colonize the deep sea. Maximum depths attained are 2743, 4156 and 8370 m for Myxini, Chondrichthyes and Actinopterygii, respectively. Endemic species occur in abundance at 7–7800 m depth in hadal trenches but appear to be absent from the deepest parts of the oceans, >9000 m deep. There have been six global oceanic anoxic events (OAE) since the origin of the major fish taxa in the Devonian c. 400 million years ago (mya). Colonization of the deep sea has taken place largely since the most recent OAE in the Cretaceous 94 mya when the Atlantic Ocean opened up. Patterns of global oceanic circulation oxygenating the deep ocean basins became established coinciding with a period of teleost diversification and appearance of the Acanthopterygii. Within the Actinopterygii, there is a trend for greater invasion of the deep sea by the lower taxa in accordance with the Andriashev paradigm. Here, 31 deep-sea families of Actinopterygii were identified with mean maximum depth >1000 m and with >10 species. Those with most of their constituent species living shallower than 1000 m are proposed as invasive, with extinctions in the deep being continuously balanced by export of species from shallow seas. Specialized families with most species deeper than 1000 m are termed deep-sea endemics in this study; these appear to persist in the deep by virtue of global distribution enabling recovery from regional extinctions. Deep-sea invasive families such as Ophidiidae and Liparidae make the greatest contribution to fish fauna at depths >6000 m. PMID:24298950
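Evaluating the published regression is a one-line calculation; the short sketch below simply applies log10 N = -0.000422x + 3.61 at a few depths to show the expected exponential decline in species number.

```python
def expected_species_count(depth_m, slope=-0.000422, intercept=3.61):
    """Species number N predicted at maximum depth of occurrence x (m) by the
    published regression log10(N) = slope * x + intercept."""
    return 10.0 ** (slope * depth_m + intercept)

for depth in (0, 1000, 4000, 8000):
    print(f"{depth:>5} m : ~{expected_species_count(depth):7.0f} species")
```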
Seasonal bathymetric distributions of 16 fishes in Lake Superior, 1958-75
Selgeby, James H.; Hoff, Michael H.
1996-01-01
The bathymetric distributions of fishes in Lake Superior, which is one of the largest and deepest lakes in the world, have not been studied on a lakewide scale. Knowledge about the bathymetric distributions will aid in designing fish sampling programs, estimating absolute abundances, and modeling energy flow in the lake. Seasonal bathymetric distributions were determined, by 10-m depth intervals, for 16 fishes collected with bottom trawls and bottom-set gill nets within the upper 150 m of Lake Superior during 1958-75. In spring trawl catches, maximum abundance occurred at these depths: 15 m for round whitefish (Prosopium cylindraceum); 25 m for longnose sucker (Catostomus catostomus); 35 m for lake whitefish (Coregonus clupeaformis) and rainbow smelt (Osmerus mordax); 45 m for lake trout (Salvelinus namaycush); 65 m for pygmy whitefish (Prosopium coulteri) and bloater (Coregonus hoyi); 75 m for trout-perch (Percopsis omiscomaycus); 105 m for shortjaw cisco (Coregonus zenithicus); and 115 m for ninespine stickleback (Pungitius pungitius), burbot (Lota lota), slimy sculpin (Cottus cognatus), spoonhead sculpin (Cottus ricei), and deepwater sculpin (Myoxocephalus thompsoni). Bathymetric distributions in spring gill nets were similar to those in trawls, except that depths of maximum abundance in gill nets were shallower than those in trawls for lake trout, rainbow smelt, longnose sucker, and burbot. Lake herring (Coregonus artedi) and kiyi (Coregonus kiyi) were rarely caught in trawls, and their maximum abundances in spring gill net collections were at depths of 25 and 145 m, respectively. In summer, pygmy whitefish, shortjaw cisco, lake herring, kiyi, longnose sucker, burbot, ninespine stickleback, trout-perch, slimy sculpin, and spoonhead sculpin were at shallower depths than in spring, whereas rainbow smelt were found in deeper water; there was no change for other species. In fall, shortjaw cisco was at shallower depths than in summer, whereas the remaining species were found deeper, except for lake whitefish and lake trout, whose modal depths did not change. Distributions of lake trout and lake whitefish were analyzed by age group, and the young (ages 1-3) of both species were often found in shallower water than were older fish. The shallow-water species exhibited little seasonal change in bathymetric distributions, whereas the species that inhabited the middepths of deeper water generally moved shallower as the seasons progressed. Most of the more pronounced seasonal changes in bathymetric distribution were associated with spawning movements.
Predicting Secchi disk depth from average beam attenuation in a deep, ultra-clear lake
Larson, G.L.; Hoffman, R.L.; Hargreaves, B.R.; Collier, R.W.
2007-01-01
We addressed potential sources of error in estimating the water clarity of mountain lakes by investigating the use of beam transmissometer measurements to estimate Secchi disk depth. The optical properties Secchi disk depth (SD) and beam transmissometer attenuation (BA) were measured in Crater Lake (Crater Lake National Park, Oregon, USA) at a designated sampling station near the maximum depth of the lake. A standard 20 cm black and white disk was used to measure SD. The transmissometer light source had a nearly monochromatic wavelength of 660 nm and a path length of 25 cm. We created a SD prediction model by regression of the inverse SD of 13 measurements recorded on days when environmental conditions were acceptable for disk deployment with BA averaged over the same depth range as the measured SD. The relationship between inverse SD and averaged BA was significant, and the average 95% confidence interval for predicted SD relative to the measured SD was ±1.6 m (range = -4.6 to 5.5 m) or ±5.0%. Eleven additional sample dates tested the accuracy of the predictive model. The average 95% confidence interval for these sample dates was ±0.7 m (range = -3.5 to 3.8 m) or ±2.2%. The 1996-2000 time-series means for measured and predicted SD varied by 0.1 m, and the medians varied by 0.5 m. The time-series mean annual measured and predicted SDs also varied little, with intra-annual differences between measured and predicted mean annual SD ranging from -2.1 to 0.1 m. The results demonstrated that this prediction model reliably estimated Secchi disk depths and can be used to significantly expand optical observations in an environment where the conditions for standardized SD deployments are limited. © 2007 Springer Science+Business Media B.V.
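The prediction model described above regresses inverse Secchi depth on depth-averaged beam attenuation; the sketch below shows that fitting step with synthetic SD/BA pairs (the numbers are illustrative, not Crater Lake data).

```python
import numpy as np

# Hypothetical paired observations: Secchi depth SD (m) and beam attenuation BA (1/m)
# averaged over the same depth range.  Values are illustrative only.
sd_obs = np.array([30.0, 28.0, 25.0, 33.0, 22.0])
ba_avg = np.array([0.062, 0.066, 0.073, 0.057, 0.082])

# Fit the model form used in the study: 1/SD regressed on averaged beam attenuation.
slope, intercept = np.polyfit(ba_avg, 1.0 / sd_obs, 1)

def predict_sd(ba):
    """Predicted Secchi depth (m) from averaged beam attenuation via the inverse-SD fit."""
    return 1.0 / (slope * ba + intercept)

print(f"predicted SD at BA = 0.070: {predict_sd(0.070):.1f} m")
```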
Laboratory-based maximum slip rates in earthquake rupture zones and radiated energy
McGarr, A.; Fletcher, Joe B.; Boettcher, M.; Beeler, N.; Boatwright, J.
2010-01-01
Laboratory stick-slip friction experiments indicate that peak slip rates increase with the stresses loading the fault to cause rupture. If this applies also to earthquake fault zones, then the analysis of rupture processes is simplified inasmuch as the slip rates depend only on the local yield stress and are independent of factors specific to a particular event, including the distribution of slip in space and time. We test this hypothesis by first using it to develop an expression for radiated energy that depends primarily on the seismic moment and the maximum slip rate. From laboratory results, the maximum slip rate for any crustal earthquake, as well as various stress parameters including the yield stress, can be determined based on its seismic moment and the maximum slip within its rupture zone. After finding that our new equation for radiated energy works well for laboratory stick-slip friction experiments, we used it to estimate radiated energies for five earthquakes with magnitudes near 2 that were induced in a deep gold mine, an M 2.1 repeating earthquake near the San Andreas Fault Observatory at Depth (SAFOD) site and seven major earthquakes in California and found good agreement with energies estimated independently from spectra of local and regional ground-motion data. Estimates of yield stress for the earthquakes in our study range from 12 MPa to 122 MPa with a median of 64 MPa. The lowest value was estimated for the 2004 M 6 Parkfield, California, earthquake whereas the nearby M 2.1 repeating earthquake, as recorded in the SAFOD pilot hole, showed a more typical yield stress of 64 MPa.
NASA Technical Reports Server (NTRS)
Moore, D. G. (Principal Investigator); Heilman, J. L.
1980-01-01
The author has identified the following significant results. Day thermal data were analyzed to assess depth to groundwater in the test site. HCMM apparent temperature was corrected for atmospheric effects using lake temperature of the Oahe Reservoir in central South Dakota. Soil surface temperatures were estimated using an equation developed for ground studies. A significant relationship was found between surface soil temperature and depth to groundwater, as well as between the surface soil-maximum air temperature differential and soil water content (% of field capacity) in the 0 cm and 4 cm layer of the profile. Land use for the data points consisted of row crops, small grains, stubble, and pasture.
Distribution and life strategies of two bacterial populations in a eutrophic lake
Weinbauer; Hofle
1998-10-01
Monoclonal antibodies and epifluorescence microscopy were used to determine the depth distribution of two indigenous bacterial populations in the stratified Lake Plusssee and characterize their life strategies. Populations of Comamonas acidovorans PX54 showed a depth distribution with maximum abundances in the oxic epilimnion, whereas Aeromonas hydrophila PU7718 showed a depth distribution with maximum abundances in the anoxic thermocline layer (metalimnion), i. e., in the water layer with the highest microbial activity. Resistance of PX54 to protist grazing and high metabolic versatility and growth rate of PU7718 were the most important life strategy traits for explaining the depth distribution of the two bacterial populations. Maximum abundance of PX54 was 16,000 cells per ml, and maximum abundance of PU7718 was 20,000 cells per ml. Determination of bacterial productivity in dilution cultures with different-size fractions of dissolved organic matter (DOM) from lake water indicates that low-molecular-weight (LMW) DOM is less bioreactive than total DOM (TDOM). The abundance and growth rate of PU7718 were highest in the TDOM fractions, whereas those of PX54 were highest in the LMW DOM fraction, demonstrating that PX54 can grow well on the less bioreactive DOM fraction. We estimated that 13 to 24% of the entire bacterial community and 14% of PU7718 were removed by viral lysis, whereas no significant effect of viral lysis on PX54 could be detected. Growth rates of PX54 (0.11 to 0.13 h-1) were higher than those of the entire bacterial community (0.04 to 0.08 h-1) but lower than those of PU7718 (0.26 to 0.31 h-1). In undiluted cultures, the growth rates were significantly lower, pointing to density effects such as resource limitation or antibiosis, and the effects were stronger for PU7718 and the entire bacterial community than for PX54. Life strategy characterizations based on data from literature and this study revealed that the fast-growing and metabolically versatile A. hydrophila PU7718 is an r-strategist or opportunistic population in Lake Plusssee, whereas the grazing-resistant C. acidovorans PX54 is rather a K-strategist or equilibrium population.
NASA Astrophysics Data System (ADS)
Delbari, Masoomeh; Sharifazari, Salman; Mohammadi, Ehsan
2018-02-01
The knowledge of soil temperature at different depths is important for the agricultural industry and for understanding climate change. The aim of this study is to evaluate the performance of a support vector regression (SVR)-based model in estimating daily soil temperature at 10, 30 and 100 cm depth under different climate conditions over Iran. The obtained results were compared to those obtained from a more classical multiple linear regression (MLR) model. The sensitivity of the models to different input combinations and to a periodicity component was also investigated. Climatic data used as inputs to the models were minimum and maximum air temperature, solar radiation, relative humidity, dew point, and atmospheric pressure (reduced to sea level), collected from five synoptic stations (Kerman, Ahvaz, Tabriz, Saghez, and Rasht) located, respectively, in hyper-arid, arid, semi-arid, Mediterranean, and hyper-humid climate conditions. According to the results, the performance of both MLR and SVR models was quite good at the surface layer, i.e., 10-cm depth. However, SVR performed better than MLR in estimating soil temperature at deeper layers, especially 100 cm depth. Moreover, both models performed better in humid climate conditions than in arid and hyper-arid areas. Further, adding a periodicity component into the modeling process considerably improved the models' performance, especially in the case of SVR.
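A minimal sketch of the model comparison is given below, using scikit-learn's LinearRegression and SVR on synthetic stand-ins for the six meteorological inputs; the feature construction and hyperparameters are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 6))            # tmin, tmax, radiation, humidity, dew point, pressure
y = 0.5 * X[:, 0] + 0.7 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.3, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X_tr, y_tr)

for name, model in (("MLR", mlr), ("SVR", svr)):
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name} RMSE: {rmse:.3f}")
```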
Rossa, Carlos; Sloboda, Ron; Usmani, Nawaid; Tavakoli, Mahdi
2016-07-01
This paper proposes a method to predict the deflection of a flexible needle inserted into soft tissue based on the observation of deflection at a single point along the needle shaft. We model the needle-tissue as a discretized structure composed of several virtual, weightless, rigid links connected by virtual helical springs whose stiffness coefficient is found using a pattern search algorithm that only requires the force applied at the needle tip during insertion and the needle deflection measured at an arbitrary insertion depth. Needle tip deflections can then be predicted for different insertion depths. Verification of the proposed method in synthetic and biological tissue shows a deflection estimation error of [Formula: see text]2 mm for images acquired at 35 % or more of the maximum insertion depth, and decreases to 1 mm for images acquired closer to the final insertion depth. We also demonstrate the utility of the model for prostate brachytherapy, where in vivo needle deflection measurements obtained during early stages of insertion are used to predict the needle deflection further along the insertion process. The method can predict needle deflection based on the observation of deflection at a single point. The ultrasound probe can be maintained at the same position during insertion of the needle, which avoids complications of tissue deformation caused by the motion of the ultrasound probe.
The evaluation of maximum horizontal in-situ stress using the wellbore imagers data
NASA Astrophysics Data System (ADS)
Dubinya, N. V.; Ezhov, K. A.
2016-12-01
Well drilling provides a number of possibilities to improve knowledge of the stress state of the upper layers of the Earth's crust. The data obtained from drilling, well logging, core experiments and special tests are used to evaluate the principal stresses' directions and magnitudes. Although the values of the vertical stress and the minimum horizontal stress may be estimated reasonably well, the maximum horizontal stress remains a major problem. In this study a new method to estimate this value is proposed. The suggested approach is based on the concept of hydraulically conductive and non-conductive fractures near a wellbore (Barton, Zoback and Moos, 1995). It was stated that all the fractures whose properties may be acquired from well logging data can be divided into two groups with regard to hydraulic conductivity. The fracture properties and the in-situ stress state are related via the Mohr diagram. This approach was later used by Ito and Zoback (2000) to estimate the magnitude of the maximum horizontal stress from temperature profiles. In the current study, ultrasonic and resistivity borehole imaging are used to estimate the magnitude of the maximum horizontal stress in a rather precise way. After proper interpretation, one is able to obtain the orientation and hydraulic conductivity of each fracture that appears in the images. If proper profiles of the vertical and minimum horizontal stresses are known, all the fractures may be analyzed on the Mohr diagram. Altering the maximum horizontal stress profile makes it possible to adjust it so that the conductive fractures on the Mohr diagram fit the data from the imagers' interpretation. The precision of the suggested approach was evaluated for several oil production wells in Siberia with reliable wellbore stability models. It appeared that the difference between the maximum horizontal stress estimated with the suggested approach and the values obtained from drilling reports did not exceed 0.5 MPa. Thus the proposed approach may be used to evaluate the maximum horizontal stress using the wellbore imagers' data. References: Barton, C.A., Zoback, M.D., Moos, D. Fluid flow along potentially active faults in crystalline rock. Geology, 1995. Ito, T., Zoback, M.D. Fracture permeability and in situ stress to 7 km depth in the KTB Scientific Drillhole. Geophysical Research Letters, 2000.
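The core of the approach is deciding, for a trial maximum horizontal stress, which imaged fractures are critically stressed on the Mohr diagram. A minimal sketch of that test is given below; the stress magnitudes, pore pressure, friction coefficient and fracture orientation are hypothetical.

```python
import numpy as np

def resolved_stresses(stress_tensor, normal):
    """Normal and shear stress on a plane with unit normal `normal`."""
    t = stress_tensor @ normal              # traction vector
    sigma_n = float(normal @ t)             # normal component
    tau = float(np.linalg.norm(t - sigma_n * normal))
    return sigma_n, tau

def is_conductive(sigma_n, tau, friction=0.6, pore_pressure=20.0):
    """Barton-Zoback-Moos style criterion: a fracture is critically stressed
    (and hence likely conductive) if shear stress exceeds the friction
    coefficient times the effective normal stress."""
    return tau >= friction * (sigma_n - pore_pressure)

# Trial stress state (MPa): candidate maximum horizontal, minimum horizontal, vertical
sv, sh_min, sh_max = 60.0, 45.0, 55.0
stress = np.diag([sh_max, sh_min, sv])      # principal axes assumed N, E, down

n = np.array([np.cos(np.radians(30)), 0.0, np.sin(np.radians(30))])  # one fracture normal
print(is_conductive(*resolved_stresses(stress, n)))
```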
NASA Astrophysics Data System (ADS)
Schwarz, C.; Cox, T.; van Engeland, T.; van Oevelen, D.; van Belzen, J.; van de Koppel, J.; Soetaert, K.; Bouma, T. J.; Meire, P.; Temmerman, S.
2017-10-01
A short-term intensive measurement campaign focused on flow, turbulence, suspended particle concentration, floc dynamics and settling velocities was carried out in a brackish intertidal creek draining into the main channel of the Scheldt estuary. We compare in situ estimates of settling velocities between a laser diffraction (LISST) and an acoustic Doppler (ADV) technique at 20 and 40 cm above bottom (cmab). The temporal variations in estimated settling velocity were compared over one tidal cycle, with a maximum flood velocity of 0.46 m s-1, a maximum horizontal ebb velocity of 0.35 m s-1 and a maximum water depth at high water slack of 2.41 m. Results suggest that flocculation processes play an important role in controlling sediment transport processes in the measured intertidal creek. During high-water slack, particles flocculated to sizes up to 190 μm, whereas at maximum flood and maximum ebb tidal stages floc sizes only reached up to 55 μm and 71 μm, respectively. These large differences indicate that flocculation processes are mainly governed by turbulence-induced shear rate. In this study, we specifically recognize the importance of along-channel gradients, which place constraints on the application of the acoustic Doppler technique because they conflict with its underlying assumptions. Along-channel gradients were assessed by additional measurements at a second location and by scaling arguments, which could be used as an indication of whether the Reynolds-flux method is applicable. We further show the potential impact of along-channel advection of flocs out of equilibrium with local hydrodynamics on overall floc sizes.
A comparison of measures of riverbed form for evaluating distributions of benthic fishes
Wildhaber, Mark L.; Lamberson, Peter J.; Galat, David L.
2003-01-01
A method to quantitatively characterize the bed forms of a large river and a preliminary test of the relationship between bed-form characteristics and catch per unit area of benthic fishes is presented. We used analog paper recordings of bathymetric data from the Missouri River and fish data collected from 1996 to 1998 at both the segment (∼10¹-10² km) and macrohabitat (∼10⁻¹-10⁰ km) spatial scales. Bed-form traces were transformed to digital data with image analysis software. The slope, mean residual, and SD of the residuals of the regression of depth versus distance along the bottom, as well as mean depth, were estimated for each trace. These four metrics were compared with sinuosity, fractal dimension, critical scale, and maximum mean angle for the same traces. Mean depth and sinuosity differed among segments and macrohabitats. Fractal-based measures of the relative depth of bottom troughs (critical scale) and smoothness (maximum mean angle) differed among segments. Statistics-based measures of the relative depth of bottom troughs (mean residual) and smoothness (SD of the residuals) differed among macrohabitats. Sites with shovelnose sturgeon Scaphirhynchus platorynchus were shallower and smoother than sites without shovelnose sturgeon. When compared with sites without sicklefin chub Macrhybopsis meeki, sites with sicklefin chub were shallower, had shallower troughs, and sloped more out of the flow of the river. Sites with sturgeon chub M. gelida were shallower, had shallower troughs, and were smoother than sites without sturgeon chub. Sites with and without channel catfish Ictalurus punctatus did not differ for any bed-form variables measured. Nonzero shovelnose sturgeon density increased with depth, whereas nonzero sturgeon chub density decreased with depth. Indices of bed-form structure demonstrated potential for describing the distribution and abundance of Missouri River benthic fishes. The observed fish patterns, though limited, provide valuable direction for future research into the habitat preferences of these fishes.
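The four regression-based metrics can be computed directly from a digitized depth trace; the sketch below does so on a synthetic trace, interpreting "mean residual" as the mean absolute residual of the depth-versus-distance regression (an assumption, since the abstract does not give the exact definition).

```python
import numpy as np

def bedform_metrics(distance_m, depth_m):
    slope, intercept = np.polyfit(distance_m, depth_m, 1)
    residuals = depth_m - (slope * distance_m + intercept)
    return {
        "slope": float(slope),                               # overall bed inclination
        "mean_residual": float(np.mean(np.abs(residuals))),  # relative depth of troughs
        "sd_residuals": float(np.std(residuals)),            # smoothness of the bed
        "mean_depth": float(np.mean(depth_m)),
    }

x = np.linspace(0.0, 500.0, 200)
depth = 4.0 + 0.002 * x + 0.4 * np.sin(x / 15.0)   # synthetic bed-form trace
print(bedform_metrics(x, depth))
```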
Meteoric 10Be in soil profiles - A global meta-analysis
Graly, Joseph A.; Bierman, Paul R.; Reusser, Lucas J.; Pavich, Milan J.
2010-01-01
In order to assess current understanding of meteoric 10Be dynamics and distribution in terrestrial soils, we assembled a database of all published meteoric 10Be soil depth profiles, including 104 profiles from 27 studies in globally diverse locations, collectively containing 679 individual measurements. This allows for the systematic comparison of meteoric 10Be concentration to other soil characteristics and the comparison of profile depth distributions between geologic settings. Percent clay, 9Be, and dithionite-citrate extracted Al positively correlate to meteoric 10Be in more than half of the soils where they were measured, but the lack of significant correlation in other soils suggests that no one soil factor controls meteoric 10Be distribution with depth. Dithionite-citrate extracted Fe and cation exchange capacity are only weakly correlated to meteoric 10Be. Percent organic carbon and pH are not significantly related to meteoric 10Be concentration when all data are compiled. The compilation shows that meteoric 10Be concentration is seldom uniform with depth in a soil profile. In young or rapidly eroding soils, maximum meteoric 10Be concentrations are typically found in the uppermost 20 cm. In older, more slowly eroding soils, the highest meteoric 10Be concentrations are found at depth, usually between 50 and 200 cm. We find that the highest measured meteoric 10Be concentration in a soil profile is an important metric, as both the value and the depth of the maximum meteoric 10Be concentration correlate with the total measured meteoric 10Be inventory of the soil profile. In order to refine the use of meteoric 10Be as an estimator of soil erosion rate, we compare near-surface meteoric 10Be concentrations to total meteoric 10Be soil inventories. These trends are used to calibrate models of meteoric 10Be loss by soil erosion. Erosion rates calculated using this method vary based on the assumed depth and timing of erosional events and on the reference data selected.
NASA Astrophysics Data System (ADS)
Jin, Honglin; Kato, Teruyuki; Hori, Muneo
2007-07-01
An inverse method based on the spectral decomposition of the Green's function was employed for estimating a slip distribution. We conducted numerical simulations along the Philippine Sea plate (PH) boundary in southwest Japan using this method to examine how to determine the essential parameters which are the number of deformation function modes and their coefficients. Japanese GPS Earth Observation Network (GEONET) Global Positioning System (GPS) data were used for three years covering 1997-1999 to estimate interseismic back slip distribution in this region. The estimated maximum back slip rate is about 7 cm/yr, which is consistent with the Philippine Sea plate convergence rate. Areas of strong coupling are confined between depths of 10 and 30 km and three areas of strong coupling were delineated. These results are consistent with other studies that have estimated locations of coupling distribution.
NASA Astrophysics Data System (ADS)
Vinod, P. N.; Joseph, Sherin; John, Reji
2017-04-01
In this paper, the efficacy of the pulsed thermography technique is explored for the first time for the detection and quantification of subsurface defects present in rubber-encapsulated piezoelectric sensors. Initial experiments were performed on adhesively bonded joints of rubber/Al or rubber/PZT control samples to find an optimum acquisition time for the 3-mm rubber encapsulants. Thermographic measurements were performed in the reflection mode; the acquired thermal images were analysed and the processed images are described in terms of phase images. The defective regions are identified as delamination of the adhesive joints at the interface of the rubber and PZT stacks and as porosity in the encapsulation of the inspected hydrophone. The defect depths of the observed anomalies were calculated empirically from plots of the peak time of thermal contrast (tmax) and the maximum thermal contrast (Cmax) for a particular defect. The estimated defect depth of the prominent porosity observed in the PZT hydrophone is found to be nearly 1 mm from the surface.
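The empirical depth estimate exploits the fact that deeper defects produce later contrast peaks. A minimal sketch of the commonly used diffusion scaling z ≈ C·sqrt(α·tmax) is shown below; the calibration constant and the rubber diffusivity are placeholders, not the values fitted in the study.

```python
import math

def defect_depth(t_max_s, diffusivity_m2_s, calib=1.0):
    """Estimate defect depth from the peak time of thermal contrast using the
    one-dimensional diffusion scaling z ~ sqrt(alpha * t_max).  `calib` is an
    empirical constant that would be fitted to reference defects of known depth;
    the value 1.0 here is a placeholder, not the study's calibration."""
    return calib * math.sqrt(diffusivity_m2_s * t_max_s)

alpha_rubber = 1.0e-7   # m^2/s, order-of-magnitude thermal diffusivity of rubber
print(f"estimated depth: {defect_depth(10.0, alpha_rubber) * 1000:.2f} mm")
```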
Contributions of depth filter components to protein adsorption in bioprocessing.
Khanal, Ohnmar; Singh, Nripen; Traylor, Steven J; Xu, Xuankuo; Ghose, Sanchayita; Li, Zheng J; Lenhoff, Abraham M
2018-04-16
Depth filtration is widely used in downstream bioprocessing to remove particulate contaminants via depth straining and is therefore applied to harvest clarification and other processing steps. However, depth filtration also removes proteins via adsorption, which can contribute variously to impurity clearance and to reduction in product yield. The adsorption may occur on the different components of the depth filter, that is, filter aid, binder, and cellulose filter. We measured adsorption of several model proteins and therapeutic proteins onto filter aids, cellulose, and commercial depth filters at pH 5-8 and ionic strengths <50 mM and correlated the adsorption data to bulk measured properties such as surface area, morphology, surface charge density, and composition. We also explored the role of each depth filter component in the adsorption of proteins with different net charges, using confocal microscopy. Our findings show that a complete depth filter's maximum adsorptive capacity for proteins can be estimated by its protein monolayer coverage values, which are of order mg/m², depending on the protein size. Furthermore, the extent of adsorption of different proteins appears to depend on the nature of the resin binder and its extent of coating over the depth filter surface, particularly in masking the cation-exchanger-like capacity of the siliceous filter aids. In addition to guiding improved depth filter selection, the findings can be leveraged in inspiring a more intentional selection of components and design of depth filter construction for particular impurity removal targets. © 2018 Wiley Periodicals, Inc.
Dynamic deformations of shallow sediments in the Valley of Mexico, Part II: Single-station estimates
Singh, S.K.; Santoyo, M.; Bodin, P.; Gomberg, J.
1997-01-01
We develop simple relations to estimate dynamic displacement gradients (and hence the strains and rotations) during earthquakes in the lake-bed zone of the Valley of Mexico, where the presence of low-velocity, high-water-content clays in the uppermost layers causes dramatic amplification of seismic waves and large strains. The study uses results from a companion article (Bodin et al., 1997) in which the data from an array at Roma, a lake-bed site, were analyzed to obtain displacement gradients. In this article, we find that the deformations at other lake-bed sites may differ from those at Roma by a factor of 2 to 3. More accurate estimates of the dominant components of the deformation at an individual instrumented lake-bed site may be obtained from the maximum horizontal velocity and displacement, vmax and umax, at the surface. The maximum surface strain εmax is related to vmax by εmax = vmax/C, with C ≈ 0.6 km/sec. From the analysis of data from sites equipped with surface and borehole sensors, we find that the vertical gradient of peak horizontal displacement (∂umax/∂z) computed from sensors at 0 and 30 m equals (umax)z=0/Δz, Δz = 30 m, within a factor of 1.5. This is the largest gradient component, and the latter simple relation permits its estimation from surface records alone. The observed profiles of umax versus depth suggest a larger gradient in some depth range of 10 to 20 m, in agreement with synthetic calculations presented in Bodin et al. (1997). From the free-field recordings of the 19 September 1985 Michoacan earthquake, we estimate a maximum surface strain, εmax, between 0.05% and 0.11%, and a lower bound for the peak vertical gradient (∂umax/∂z) between 0.3% and 1.3%. This implies that (1) the extensive failure of water pipe joints during the Michoacan earthquake in the valley occurred at axial strains of about 0.1%, not 0.38% as previously reported, and (2) the clays of the valley behave almost linearly even at shear strains of about 1%, in agreement with laboratory tests. The available data in the valley can be used to predict deformations during future earthquakes using self-similar earthquake scaling.
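The single-station estimate reduces to a one-line calculation; the sketch below evaluates εmax = vmax/C for a hypothetical peak velocity, not one taken from the paper.

```python
def max_surface_strain(peak_velocity_m_s, c_m_s=600.0):
    """Maximum surface strain approximated as peak horizontal velocity divided by
    an apparent horizontal phase velocity C of about 0.6 km/s."""
    return peak_velocity_m_s / c_m_s

v_peak = 0.35  # m/s, hypothetical lake-bed peak horizontal velocity
print(f"strain ~ {100 * max_surface_strain(v_peak):.3f} %")
```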
NASA Astrophysics Data System (ADS)
Koshimura, S.; Hino, R.; Ohta, Y.; Kobayashi, H.; Musa, A.; Murashima, Y.
2014-12-01
With use of modern computing power and advanced sensor networks, a project is underway to establish a new system of real-time tsunami inundation forecasting, damage estimation and mapping to enhance society's resilience in the aftermath of a major tsunami disaster. The system consists of a fusion of real-time crustal deformation monitoring/fault model estimation by Ohta et al. (2012), high-performance real-time tsunami propagation/inundation modeling with NEC's vector supercomputer SX-ACE, damage/loss estimation models (Koshimura et al., 2013), and geo-informatics. After a major (near-field) earthquake occurs, the first response of the system is to identify the tsunami source model by applying the RAPiD algorithm (Ohta et al., 2012) to observed RTK-GPS time series at GEONET sites in Japan. As demonstrated with the data obtained during the 2011 Tohoku event, we assume less than 10 minutes as the acquisition time of the source model. Given the tsunami source, the system moves on to running the tsunami propagation and inundation model, which was optimized on the vector supercomputer SX-ACE, to acquire time series of tsunami heights at offshore/coastal tide gauges and to determine tsunami travel and arrival times, the extent of the inundation zone, and the maximum flow depth distribution. The implemented tsunami numerical model is based on the non-linear shallow-water equations discretized by the finite difference method. The merged bathymetry and topography grids are prepared with 10 m resolution to better estimate tsunami inland penetration. Given the maximum flow depth distribution, the system performs GIS analysis to determine the numbers of exposed population and structures using census data, then estimates the numbers of potential deaths and damaged structures by applying tsunami fragility curves (Koshimura et al., 2013). Once the tsunami source model is determined, the model is expected to complete the estimation within 10 minutes. The results are disseminated as mapping products to responders and stakeholders, e.g. national and regional municipalities, to be utilized for their emergency/response activities. In 2014, the system is being verified through case studies of the 2011 Tohoku event and potential earthquake scenarios along the Nankai Trough with regard to its capability and robustness.
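The damage-estimation step combines the modeled maximum flow depth with a fragility curve and the exposed building count in each grid cell. The sketch below uses a lognormal fragility curve, a common functional form for tsunami fragility, with hypothetical parameters rather than the coefficients of Koshimura et al. (2013).

```python
import math

def damage_probability(flow_depth_m, median_depth_m=2.0, log_sd=0.6):
    """Probability of structural damage as a lognormal function of maximum flow
    depth.  The median depth and log standard deviation are placeholders."""
    if flow_depth_m <= 0.0:
        return 0.0
    z = (math.log(flow_depth_m) - math.log(median_depth_m)) / log_sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # lognormal CDF

cells = [(1.2, 40), (2.5, 120), (4.0, 75)]   # (max flow depth m, exposed structures)
expected_damaged = sum(n * damage_probability(d) for d, n in cells)
print(f"expected damaged structures: {expected_damaged:.0f}")
```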
SOME APPLICATIONS OF SEISMIC SOURCE MECHANISM STUDIES TO ASSESSING UNDERGROUND HAZARD.
McGarr, A.; ,
1984-01-01
Various measures of the seismic source mechanism of mine tremors, such as magnitude, moment, stress drop, apparent stress, and seismic efficiency, can be related directly to several aspects of the problem of determining the underground hazard arising from strong ground motion of large seismic events. First, the relation between the sum of seismic moments of tremors and the volume of stope closure caused by mining during a given period can be used in conjunction with magnitude-frequency statistics and an empirical relation between moment and magnitude to estimate the maximum possible tremor size for a given mining situation. Second, it is shown that the 'energy release rate,' a commonly used parameter for predicting underground seismic hazard, may be misleading in that the importance of overburden stress, or depth, is overstated. Third, results involving the relation between peak velocity and magnitude, magnitude-frequency statistics, and the maximum possible magnitude are applied to the problem of estimating the frequency at which design limits of certain underground support equipment are likely to be exceeded.
NASA Astrophysics Data System (ADS)
Tornabene, Livio L.; Watters, Wesley A.; Osinski, Gordon R.; Boyce, Joseph M.; Harrison, Tanya N.; Ling, Victor; McEwen, Alfred S.
2018-01-01
We use topographic data to show that impact craters with pitted floor deposits are among the deepest on Mars. This is consistent with the interpretation of pitted materials as primary crater-fill impactite deposits emplaced during crater formation. Our database consists of 224 pitted material craters ranging in size from ∼1 to 150 km in diameter. Our measurements are based on topographic data from the Mars Orbiter Laser Altimeter (MOLA) and the High-Resolution Stereo Camera (HRSC). We have used these craters to measure the relationship between crater diameter and the initial post-formation depth. Depth was measured as maximum rim-to-floor depth (dr), but we also report the depth measured using other definitions. The database was down-selected by refining or removing elevation measurements from "problematic" craters affected by processes and conditions that influenced their dr/D, such as pre-impact slopes/topography and later overprinting craters. We report a maximum (deepest) and mean scaling relationship of dr = (0.347 ± 0.021)D^(0.537 ± 0.017) and dr = (0.323 ± 0.017)D^(0.538 ± 0.016), respectively. Our results suggest that significant variations between previously-reported MOLA-based dr vs. D relationships may result from the inclusion of craters that: 1) are influenced by atypical processes (e.g., highly oblique impact), 2) are significantly degraded, 3) reside within high-strength regions, and 4) are transitional (partially collapsed). By taking such issues into consideration and only measuring craters with primary floor materials, we present the best estimate to date of a MOLA-based relationship of dr vs. D for the least-degraded complex craters on Mars. This can be applied to crater degradation studies and provides a useful constraint for models of complex crater formation.
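Applying the reported scaling is a one-line calculation; the sketch below evaluates the maximum (deepest) relationship at a few diameters, assuming, as is conventional for such power laws, that both dr and D are in kilometres (central values only, uncertainties dropped).

```python
def crater_depth_km(diameter_km, a=0.347, b=0.537):
    """Rim-to-floor depth d_r (km) from diameter D (km) using the deepest-crater
    scaling d_r = a * D**b reported above."""
    return a * diameter_km ** b

for d in (5, 20, 100):
    print(f"D = {d:>3} km  ->  d_r ~ {crater_depth_km(d):.2f} km")
```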
Schumacher, E L; Owens, B D; Uyeno, T A; Clark, A J; Reece, J S
2017-08-01
This study tests for interspecific evidence of Heincke's law among hagfishes and advances the field of research on body size and depth of occurrence in fishes by including a phylogenetic correction and by examining depth in four ways: maximum depth, minimum depth, mean depth of recorded specimens and the average of maximum and minimum depths of occurrence. Results yield no evidence for Heincke's law in hagfishes, no phylogenetic signal for the depth at which species occur, but moderate to weak phylogenetic signal for body size, suggesting that phylogeny may play a role in determining body size in this group. © 2017 The Fisheries Society of the British Isles.
NASA Astrophysics Data System (ADS)
Dondurur, Derman; Sarı, Coşkun
2004-07-01
A FORTRAN 77 computer code is presented that permits the inversion of Slingram electromagnetic anomalies to an optimal conductor model. A damped least-squares inversion algorithm is used to estimate the anomalous body parameters, e.g. depth, dip and surface projection point of the target. Iteration progress is controlled by a maximum relative error value, and iterations continue until a tolerance value is satisfied, while the modification of Marquardt's parameter is controlled by the sum of squared errors. To form the Jacobian matrix, the partial derivatives of the theoretical anomaly expression with respect to the parameters being optimised are calculated numerically using first-order forward finite differences. A theoretical anomaly and two field anomalies are inverted to test the accuracy and applicability of the present inversion program. Inversion of the field data indicated that the depth and surface projection point of the conductor are estimated correctly; however, considerable discrepancies appeared in the estimated dip angles. It is therefore concluded that the most important factor causing the misfit between observed and calculated data is that the theory used for computing Slingram anomalies is valid only for thin conductors, and this assumption might have caused incorrect dip estimates in the case of wide conductors.
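The damped least-squares machinery described above, with a forward-finite-difference Jacobian and a Marquardt damping term, can be sketched generically as below; the forward model, starting values and damping are placeholders, not the Slingram theory implemented in the FORTRAN 77 code.

```python
import numpy as np

def forward(params, x):
    # hypothetical anomaly model standing in for the Slingram response
    depth, dip, x0 = params
    return depth * np.exp(-((x - x0) ** 2) / (1.0 + dip ** 2))

def jacobian(params, x, h=1e-6):
    base = forward(params, x)
    J = np.empty((x.size, params.size))
    for j in range(params.size):
        p = params.copy()
        p[j] += h
        J[:, j] = (forward(p, x) - base) / h      # first-order forward difference
    return J

def damped_step(params, x, data, damping):
    r = data - forward(params, x)
    J = jacobian(params, x)
    dp = np.linalg.solve(J.T @ J + damping * np.eye(params.size), J.T @ r)
    return params + dp, float(r @ r)              # updated parameters, sum of squared errors

x = np.linspace(-50.0, 50.0, 101)
data = forward(np.array([12.0, 0.5, 5.0]), x)     # synthetic "observed" anomaly
params, lam = np.array([8.0, 0.2, 0.0]), 1e-2
for _ in range(20):
    params, sse = damped_step(params, x, data, lam)
print(params)
```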
Sample, Bradley E; Lowe, John; Seeley, Paul; Markin, Melanie; McCarthy, Chris; Hansen, Jim; Aly, Alaa H
2015-01-01
Soil invertebrates, mammals, and plants penetrate and exploit the surface soil layer (i.e., the biologically active zone) to varying depths. As the US Department of Energy remediates radioactive and hazardous wastes in soil at the Hanford Site, a site-specific definition of the biologically active zone is needed to identify the depth to which remedial actions should be taken to protect the environment and avoid excessive cleanup expenditures. This definition may then be considered in developing a point of compliance for remediation in accordance with existing regulations. Under the State of Washington Model Toxics Control Act (MTCA), the standard point of compliance for soil cleanup levels with unrestricted land use is 457 cm (15 ft) below ground surface. When institutional controls are required to control excavations to protect people, MTCA allows a conditional point of compliance to protect biological resources based on the depth of the biologically active zone. This study was undertaken to identify and bound the biologically active zone based on ecological resources present at the Hanford Site. Primary data were identified describing the depths to which ants, mammals, and plants may exploit the surface soil column at the Hanford Site and other comparable locations. The maximum depth observed for harvester ants (Pogonomyrmex spp.) was 270 cm (8.9 ft), with only trivial excavation below 244 cm (8 ft). Badgers (Taxidea taxus) are the deepest-burrowing mammals at the Hanford Site, with maximum burrow depths of 230 cm (7.6 ft); all other mammals did not burrow below 122 cm (4 ft). Shrubs are the deepest-rooting plants, with rooting depths to 300 cm (9.8 ft) for antelope bitterbrush (Purshia tridentata). The 2 most abundant shrub species did not have roots deeper than 250 cm (8.2 ft). The deepest-rooted forb had a maximum root depth of 240 cm (7.9 ft). All other forbs and grasses had rooting depths of 200 cm (6.6 ft) or less. These data indicate that the biologically active soil zone in the Hanford Central Plateau does not exceed 300 cm (9.8 ft), the maximum rooting depth of the deepest-rooting plant. The maximum depth at which most other plant and animal species occur is substantially shallower. The spatial distribution and density of burrows and roots over depth were also evaluated. Although maximum excavation by harvester ants is 270 cm (8.9 ft), a trivial volume of soil is excavated below 150 cm (∼5 ft). Maximum rooting depths for all grasses, forbs, and the most abundant and deepest-rooting shrubs are 300 cm (9.8 ft) or less. Most root biomass (>50-80%) is concentrated in the top 100 cm (3.3 ft), whereas at the maximum depth (9.8 ft) only trace root biomass is present. Available data suggest a limited likelihood for significant transport of contaminants to the surface by plants at or below 244 cm (8 ft), and suggest that virtually all plant or animal species occurring on the Central Plateau have a negligible likelihood of transporting soil contaminants to the surface from depths at or below 305 cm (10 ft). © 2014 SETAC.
Uncertainty in flood damage estimates and its potential effect on investment decisions
NASA Astrophysics Data System (ADS)
Wagenaar, D. J.; de Bruijn, K. M.; Bouwer, L. M.; de Moel, H.
2016-01-01
This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage functions and maximum damages can have large effects on flood damage estimates. This explanation is then used to quantify the uncertainty in the damage estimates with a Monte Carlo analysis. The Monte Carlo analysis uses a damage function library with 272 functions from seven different flood damage models. The paper shows that the resulting uncertainties in estimated damages are on the order of a factor of 2 to 5. The uncertainty is typically larger for flood events with small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economically optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
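The Monte Carlo step amounts to repeatedly drawing a depth-damage function and a maximum damage value and recomputing the event damage; the sketch below illustrates this with a synthetic function library and exposure data, not the 272-function library used in the paper.

```python
import random

def make_damage_function(shape):
    """Fraction of maximum damage as a function of water depth (m); shape is a
    hypothetical curvature parameter."""
    return lambda depth: min(1.0, (depth / 3.0) ** shape)

library = [make_damage_function(s) for s in (0.5, 0.8, 1.0, 1.3, 2.0)]
max_damage_eur_m2 = (300.0, 450.0, 600.0)                      # candidate maximum damages
exposure = [(1.2, 5_000.0), (0.4, 12_000.0), (2.5, 2_000.0)]   # (water depth m, area m2)

totals = []
for _ in range(10_000):
    f = random.choice(library)
    dmax = random.choice(max_damage_eur_m2)
    totals.append(sum(dmax * f(depth) * area for depth, area in exposure))

totals.sort()
print(f"5th-95th percentile damage: {totals[500]:.0f} - {totals[9500]:.0f} EUR")
```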
Measuring Paleolandscape Relief in Alluvial River Systems from the Stratigraphic Record
NASA Astrophysics Data System (ADS)
Hajek, E. A.; Trampush, S. M.; Chamberlin, E.; Greenberg, E.
2017-12-01
Aggradational alluvial river systems sometimes generate relief in the vicinity of their channel belts (i.e. alluvial ridges) and it has been proposed that this process may define important thresholds in river avulsion. The compensation scale can be used to estimate the maximum relief across a landscape and can be connected to the maximum scale of autogenic organization in experimental and numerical systems. Here we use the compensation scale - measured from outcrops of Upper Cretaceous and Paleogene fluvial deposits - to estimate the maximum relief that characterized ancient fluvial landscapes. In some cases, the compensation scale significantly exceeds the maximum channel depth observed in a deposit, suggesting that aggradational alluvial systems organize to sustain more relief than might be expected by looking only in the immediate vicinity of the active channel belt. Instead, these results indicate that in some systems, positive topographic relief generated by multiple alluvial ridge complexes and/or large-scale fan features may be associated with landscape-scale autogenic organization of channel networks that spans multiple cycles of channel avulsion. We compare channel and floodplain sedimentation patterns among the studied ancient fluvial systems in an effort to determine whether avulsion style, channel migration, or floodplain conditions influenced the maximum autogenic relief of ancient landscapes. Our results emphasize that alluvial channel networks may be organized at much larger spatial and temporal scales than previously realized and provide an avenue for understanding which types of river systems are likely to exhibit the largest range of autogenic dynamics.
NASA Astrophysics Data System (ADS)
Blanchard, Yann; Royer, Alain; O'Neill, Norman T.; Turner, David D.; Eloranta, Edwin W.
2017-06-01
Multiband downwelling thermal measurements of zenith sky radiance, along with cloud boundary heights, were used in a retrieval algorithm to estimate cloud optical depth and effective particle diameter of thin ice clouds in the Canadian High Arctic. Ground-based thermal infrared (IR) radiances for 150 semitransparent ice clouds cases were acquired at the Polar Environment Atmospheric Research Laboratory (PEARL) in Eureka, Nunavut, Canada (80° N, 86° W). We analyzed and quantified the sensitivity of downwelling thermal radiance to several cloud parameters including optical depth, effective particle diameter and shape, water vapor content, cloud geometric thickness and cloud base altitude. A lookup table retrieval method was used to successfully extract, through an optimal estimation method, cloud optical depth up to a maximum value of 2.6 and to separate thin ice clouds into two classes: (1) TIC1 clouds characterized by small crystals (effective particle diameter ≤ 30 µm), and (2) TIC2 clouds characterized by large ice crystals (effective particle diameter > 30 µm). The retrieval technique was validated using data from the Arctic High Spectral Resolution Lidar (AHSRL) and Millimeter Wave Cloud Radar (MMCR). Inversions were performed over three polar winters and results showed a significant correlation (R2 = 0.95) for cloud optical depth retrievals and an overall accuracy of 83 % for the classification of TIC1 and TIC2 clouds. A partial validation relative to an algorithm based on high spectral resolution downwelling IR radiance measurements between 8 and 21 µm was also performed. It confirms the robustness of the optical depth retrieval and the fact that the broadband thermal radiometer retrieval was sensitive to small particle (TIC1) sizes.
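The lookup-table retrieval can be sketched as a simple grid search: precompute radiances over a grid of (optical depth, effective particle diameter) and select the best least-squares match to the measured multiband radiance. The toy forward model below is only a placeholder for the study's radiative transfer calculations.

```python
import numpy as np

def toy_radiance(optical_depth, diameter_um, n_bands=4):
    """Hypothetical multiband downwelling radiance model (arbitrary units)."""
    bands = np.arange(1, n_bands + 1)
    emissivity = 1.0 - np.exp(-optical_depth * (1.0 + 0.02 * diameter_um / bands))
    return 100.0 * emissivity

taus = np.linspace(0.05, 2.6, 60)
diams = np.linspace(10.0, 80.0, 30)
table = np.array([[toy_radiance(t, d) for d in diams] for t in taus])  # lookup table

def retrieve(measured):
    cost = np.sum((table - measured) ** 2, axis=-1)        # least-squares misfit
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return taus[i], diams[j]

obs = toy_radiance(1.2, 45.0) + np.random.default_rng(1).normal(scale=0.5, size=4)
tau_hat, diam_hat = retrieve(obs)
print(f"retrieved optical depth {tau_hat:.2f}, effective diameter {diam_hat:.0f} um")
```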
Tectonic history of the Syria Planum province of Mars
Tanaka, K.L.; Davis, P.A.
1988-01-01
We attribute most of the development of extensive fractures in the Tharsis region to discrete tectonic provinces within the region, rather than to Tharsis as a single entity. One of these provinces is in Syria Planum. Faults and collapse structures in the Syria Planum tectonic province on Mars are grouped into 13 sets based on relative age, areal distribution, and morphology. According to superposition and fault crosscutting relations and crater counts, we designate six distinct episodes of tectonic activity. Photoclinometric topographic profiles across 132 grabens and fault scarps show that Syria Planum grabens have widths (average of 2.5 km, and most range from 1 to 6 km) similar to lunar grabens, but the Martian grabens have slightly higher side walls (average about 132 m) and gentler wall slopes (average of 9° and range of 2°-25°) than lunar grabens (93 m high and 18° slopes). Estimates of the amount of extension for individual grabens range from 20 to 350 m; most estimates of the thickness of the faulted layer range from 0.5 to 4.5 km (average is 1.5 km). This thickness range corresponds closely to the 0.8- to 3.6-km range in depth for pits, troughs, and canyons in Noctis Labyrinthus and along the walls of Valles Marineris. We propose that the predominant 1- to 1.5-km values obtained for both the thickness of the faulted layer and the depths of the pits, troughs, and theater heads of the canyons reflect the initial depth to the water table in this region, as governed by the depth to the base of ground ice. Maximum depths for these features may indicate lowered groundwater table depths and the base of ejecta material. -from Authors
Roncali, Emilie; Phipps, Jennifer E; Marcu, Laura; Cherry, Simon R
2012-10-21
In previous work we demonstrated the potential of positron emission tomography (PET) detectors with depth-of-interaction (DOI) encoding capability based on phosphor-coated crystals. A DOI resolution of 8 mm full-width at half-maximum was obtained for 20 mm long scintillator crystals using a delayed charge integration linear regression method (DCI-LR). Phosphor-coated crystals modify the pulse shape to allow continuous DOI information determination, but the relationship between pulse shape and DOI is complex. We are therefore interested in developing a sensitive and robust method to estimate the DOI. Here, linear discriminant analysis (LDA) was implemented to classify the events based on information extracted from the pulse shape. Pulses were acquired with 2 × 2 × 20 mm3 phosphor-coated crystals at five irradiation depths and characterized by their DCI values or Laguerre coefficients. These coefficients were obtained by expanding the pulses on a Laguerre basis set and constituted a unique signature for each pulse. The DOI of individual events was predicted using LDA based on Laguerre coefficients (Laguerre-LDA) or DCI values (DCI-LDA) as discriminant features. Predicted DOIs were compared to true irradiation depths. Laguerre-LDA showed higher sensitivity and accuracy than DCI-LDA and DCI-LR and was also more robust to predict the DOI of pulses with higher statistical noise due to low light levels (interaction depths further from the photodetector face). This indicates that Laguerre-LDA may be more suitable to DOI estimation in smaller crystals where lower collected light levels are expected. This novel approach is promising for calculating DOI using pulse shape discrimination in single-ended readout depth-encoding PET detectors.
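The classification step can be sketched with scikit-learn's LinearDiscriminantAnalysis; the "Laguerre coefficient" features below are drawn from a synthetic depth-dependent model rather than from measured pulses, and the depth values and feature dimensions are illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
depths_mm = np.array([2, 6, 10, 14, 18])      # hypothetical irradiation depths
n_per_depth, n_coeffs = 200, 8

X, y = [], []
for d in depths_mm:
    centre = 0.1 * d * np.linspace(1.0, 0.2, n_coeffs)     # depth-dependent signature
    X.append(centre + rng.normal(scale=0.15, size=(n_per_depth, n_coeffs)))
    y.append(np.full(n_per_depth, d))
X, y = np.vstack(X), np.concatenate(y)

lda = LinearDiscriminantAnalysis().fit(X, y)
test_pulse = 0.1 * 10 * np.linspace(1.0, 0.2, n_coeffs) + rng.normal(scale=0.15, size=n_coeffs)
print("predicted irradiation depth (mm):", lda.predict(test_pulse.reshape(1, -1))[0])
```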
Subsurface damage distribution in the lapping process.
Wang, Zhuo; Wu, Yulie; Dai, Yifan; Li, Shengyi
2008-04-01
To systematically investigate the influence of lapping parameters on subsurface damage (SSD) depth and characterize the damage feature comprehensively, maximum depth and distribution of SSD generated in the optical lapping process were measured with the magnetorheological finishing wedge technique. Then, an interaction of adjacent indentations was applied to interpret the generation of maximum depth of SSD. Eventually, the lapping procedure based on the influence of lapping parameters on the material removal rate and SSD depth was proposed to improve the lapping efficiency.
NASA Astrophysics Data System (ADS)
Borgohain, Jayanta Madhab; Borah, Kajaljyoti; Biswas, Rajib; Bora, Dipok K.
2018-04-01
Spatial variation of the seismic b-value is estimated in the Indo-Myanmar subduction zone of northeast (NE) India using the homogeneous part of the earthquake catalogue (1996-2015) recorded by the International Seismological Centre (ISC), consisting of 895 events of magnitude MW ≥ 3.9. The study region is divided into 1° × 1° square grids and b-values are estimated at each grid by the maximum likelihood method. In this study, the b-value varies from 0.75 to 1.54 across the region; low b-values may indicate high stress accumulation in the corresponding areas. The spatial variation reveals intermediate b-value anomalies around the epicenter of the Mw = 6.7 Manipur earthquake, which occurred on 3 January 2016 at 23:05 UTC (4 January 2016 at 04:35 IST). The variation of b-values with depth is also estimated. Low b-values are associated with the depth range of ∼15-55 km, which may imply crustal homogeneity and high stress accumulation in the crust. Since NE India lies in seismic zone V of the country, this study can help in understanding the seismotectonics of the region.
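The maximum likelihood estimate at each grid node is typically computed with Aki's formula; the sketch below applies it to a synthetic magnitude sample above the MW 3.9 completeness level (the Utsu binning correction is noted in the comment but omitted for simplicity).

```python
import math
import random

def b_value_ml(magnitudes, mc):
    """Aki's maximum likelihood estimator b = log10(e) / (mean(M) - Mc) for events
    at or above the completeness magnitude Mc.  For binned catalogues Mc is often
    replaced by Mc - dM/2 (Utsu correction), omitted here for simplicity."""
    sample = [m for m in magnitudes if m >= mc]
    mean_m = sum(sample) / len(sample)
    return math.log10(math.e) / (mean_m - mc)

random.seed(0)
mc = 3.9
# synthetic catalogue of 895 events drawn with a true b-value of 1.0
mags = [mc + random.expovariate(math.log(10) * 1.0) for _ in range(895)]
print(f"estimated b-value: {b_value_ml(mags, mc):.2f}")
```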
NASA Astrophysics Data System (ADS)
Alperin, M. J.; Albert, D. B.; Martens, C. S.
1994-11-01
Dissolved organic carbon (DOC) concentrations in anoxic marine sediments are controlled by at least three processes: (1) production of nonvolatile dissolved compounds, such as peptides and amino acids, soluble saccharides and fatty acids, via hydrolysis of particulate organic carbon (POC); (2) conversion of these compounds to volatile fatty acids and alcohols by fermentative bacteria; and (3) consumption of volatile fatty acids and alcohols by terminal bacteria, such as sulfate reducers and methanogens. We monitored seasonal changes in concentration profiles of total DOC, nonacid-volatile (NAV) DOC and acid-volatile (AV) DOC in anoxic sediment from Cape Lookout Bight, North Carolina, USA, in order to investigate the factors that control seasonal variations in rates of hydrolysis, fermentation, and terminal metabolism. During the winter months, DOC concentrations increased continuously from 0.2 mM in the bottom water to ~4 mM at a depth of 36 cm in the sediment column. During the summer, a large DOC maximum developed between 5 and 20 cm, with peak concentrations approaching 10 mM. The mid-depth summertime maximum was driven by increases in both NAV- and AV-DOC concentrations. Net NAV-DOC reaction rates were estimated by a diagenetic model applied to NAV-DOC concentration profiles. Depth-integrated production rates of NAV-DOC increased from February through July, suggesting that net rates of POC hydrolysis during this period are controlled by temperature. Net consumption of NAV-DOC during the late summer and early fall suggests reduced gross NAV-DOC production rates, presumably due to a decline in the availability of labile POC. A distinct subsurface peak in AV-DOC concentration developed during the late spring, when the sulfate depletion depth shoaled from 25 to 10 cm. We hypothesize that the AV-DOC maximum results from a decline in consumption by sulfate-reducing bacteria (due to sulfate limitation) and a lag in the development of an active population of methanogenic bacteria. A diagenetic model that incorporates a lag period in the sulfate reducer-methanogen transition successfully simulates the timing, magnitude, depth and shape of the AV-DOC peak.
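The net reaction rates mentioned here come from a diagenetic model; a minimal steady-state version (neglecting porosity gradients, bioturbation and adsorption, all of which the full model may treat differently) estimates the net production rate from a measured concentration profile as R(z) = w·dC/dz − Ds·d²C/dz². The profile, diffusivity and burial velocity below are placeholders.

```python
# Sketch: net reaction rate from a pore-water DOC profile via a steady-state
# diagenetic balance  Ds*d2C/dz2 - w*dC/dz + R = 0  =>  R = w*dC/dz - Ds*d2C/dz2.
import numpy as np

z = np.linspace(0.0, 0.36, 37)                 # depth below interface (m)
C = 0.2 + (4.0 - 0.2) * (z / z[-1]) ** 1.5     # assumed DOC profile (mM), winter-like
Ds = 1.0e-9 * 3.15e7                           # sediment diffusivity (m2/yr), assumed
w = 0.01                                       # burial velocity (m/yr), assumed

dCdz = np.gradient(C, z)
d2Cdz2 = np.gradient(dCdz, z)
R = w * dCdz - Ds * d2Cdz2                     # net production rate (mM/yr)
print("depth-integrated net production (mM m/yr):", np.trapz(R, z))
```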
NASA Astrophysics Data System (ADS)
Conte, Maureen H.; Ralph, Nate; Ross, Edith H.
Since 1978, the Oceanic Flux Program (OFP) time-series sediment traps have measured particle fluxes in the deep Sargasso Sea near Bermuda. There is currently a 20+ yr flux record at 3200-m depth, a 12+ yr flux record at 1500-m depth, and a 9+ yr record at 500-m depth. Strong seasonality is observed in mass flux at all depths, with a flux maximum in February-March and a smaller maximum in December-January. There is also significant interannual variability in the flux, especially with respect to the presence/absence of the December-January flux maximum and in the duration of the high flux period in the spring. The flux records at the three depths are surprisingly coherent, with no statistically significant temporal lag between 500 and 3200-m fluxes at our biweekly sample resolution. Bulk compositional data indicate an extremely rapid decrease in the flux of organic constituents with depth between 500 and 1500-m, and a smaller decrease between 1500 and 3200-m depth. In contrast, carbonate flux is uniform or increases slightly between 500 and 1500-m, possibly reflecting deep secondary calcification by foraminifera. The lithogenic flux increases by over 50% between 500 and 3200-m depth, indicating strong deep water scavenging/repackaging of suspended lithogenic material. Concurrent with the rapid changes in flux composition, there is a marked reduction in the heterogeneity of the sinking particle pool with depth, especially within the mesopelagic zone. By 3200-m depth, the bulk composition of the sinking particle pool is strikingly uniform, both seasonally and over variations in mass flux of more than an order of magnitude. These OFP results provide strong indirect evidence for the intensity of reprocessing of the particle pool by resident zooplankton within mesopelagic and bathypelagic waters. The rapid loss of organic components, the marked reduction in the heterogeneity of the bulk composition of the flux, and the increase in terrigenous fluxes with depth are most consistent with a model of rapid particle turnover and material scavenging from the suspended pool during new particle formation. We suggest that much of the deep mass flux is generated in situ by deep-dwelling zooplankton, and that mass flux, as well as scavenging of suspended materials from the deep water column, varies in proportion to changes in grazer activity. Labile, very rapidly sinking aggregates (e.g., salp fecal material) arriving in the bathypelagic zone within days of their upper ocean production may act to stimulate zooplankton grazing rates and increase large particle production and deep mass flux days to weeks in advance of the arrival of the bulk of surface-produced material. This process could reconcile mean particle sinking rate estimates with the phase coherence observed between upper and deep ocean mass fluxes.
NASA Technical Reports Server (NTRS)
Andre, C. G.
1986-01-01
A rare look at the chemical composition of subsurface stratigraphy in lunar basins filled with mare basalt is possible at fresh impact craters. Mg/Al maps from orbital X-ray fluorescence measurements of mare areas indicate chemical anomalies associated with materials ejected by large post-mare impacts. A method of constraining the wide-ranging estimates of mare basalt depths using the orbital Mg/Al data is evaluated and the results are compared to those of investigators using different indirect methods. Chemical anomalies at impact craters within the maria indicate five locations where higher Mg/Al basalt compositions may have been excavated from beneath the surface layer. At eight other locations, low Mg/Al anomalies suggest that basin-floor material was ejected. In these two cases, the stratigraphic layers are interpreted to occur at depths less than the calculated maximum depth of excavation. In five other cases, there is no apparent chemical change between the crater and the surrounding mare surface. This suggests homogeneous basalt compositions that extend down to the depths sampled, i.e., no anorthositic material that might represent the basin floor was exposed.
Stein, Ross S.
2007-01-01
To estimate the down-dip coseismic fault dimension, W, the Executive Committee has chosen the Nazareth and Hauksson (2004) method, which uses the 99% depth of background seismicity to assign W. For the predicted earthquake magnitude-fault area scaling used to estimate the maximum magnitude of an earthquake rupture from a fault's length, L, and W, the Committee has assigned equal weight to the Ellsworth-B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2002, as updated in 2007) equations. The former uses a single relation; the latter uses a bilinear relation which changes slope at M = 6.65 (A = 537 km2).
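For context, the two magnitude-area relations weighted here are commonly quoted as M = log10(A) + 4.2 (Ellsworth-B) and, for Hanks and Bakun (2002), M = log10(A) + 3.98 for A ≤ 537 km2 with a steeper (4/3)·log10(A) + 3.07 branch above that area; the sketch below encodes those commonly cited forms, which should be checked against the original publications (and the hinge magnitude quoted in the text) before use.

```python
# Sketch: commonly cited magnitude-area scaling relations (verify against sources).
import math

def ellsworth_b(area_km2):
    """Ellsworth-B relation (WGCEP 2003): M = log10(A) + 4.2."""
    return math.log10(area_km2) + 4.2

def hanks_bakun_2002(area_km2):
    """Hanks & Bakun (2002) bilinear relation, hinge at A = 537 km2."""
    if area_km2 <= 537.0:
        return math.log10(area_km2) + 3.98
    return (4.0 / 3.0) * math.log10(area_km2) + 3.07

for area in (100.0, 537.0, 2000.0):
    print(f"A = {area:7.1f} km2  Ellsworth-B: {ellsworth_b(area):.2f}"
          f"  Hanks-Bakun: {hanks_bakun_2002(area):.2f}")
```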
Schaber, Gerald G.; McCauley, John F.; Breed, Carol S.; Olhoeft, Gary R.
1986-01-01
It is found that the Shuttle Imaging Radar A (SIR-A) signal penetration and subsurface backscatter within the upper meter or so of the sediment blanket in the Eastern Sahara of southern Egypt and northern Sudan are enhanced both by radar sensor parameters and by the physical and chemical characteristics of eolian and alluvial materials. The near-surface stratigraphy, the electrical properties of materials, and the types of radar interfaces found to be responsible for different classes of SIR-A tonal response are summarized. The dominant factors related to efficient microwave signal penetration into the sediment blanket include 1) favorable distribution of particle sizes, 2) extremely low moisture content and 3) reduced geometric scattering at the SIR-A frequency (1.3 GHz). The depth of signal penetration that results in a recorded backscatter, called radar imaging depth, was documented in the field to be a maximum of 1.5 m, or 0.25 times the calculated skin depth, for the sediment blanket. The radar imaging depth is estimated to be between 2 and 3 m for active sand dune materials.
Savage, W.Z.; Morin, R.H.
2002-01-01
We have applied a previously developed analytical stress model to interpret subsurface stress conditions inferred from acoustic televiewer logs obtained in two municipal water wells located in a valley in the southern Davis Mountains near Alpine, Texas. The appearance of stress-induced breakouts with orientations that shift by 90° at two different depths in one of the wells is explained by results from exact solutions for the effects of valleys on gravity and tectonically induced subsurface stresses. The theoretical results demonstrate that above a reference depth termed the hinge point, a location that is dependent on Poisson's ratio, valley shape, and magnitude of the maximum horizontal tectonic stress normal to the long axis of the valley, horizontal stresses parallel to the valley axis are greater than those normal to it. At depths below this hinge point the situation reverses and horizontal stresses normal to the valley axis are greater than those parallel to it. Application of the theoretical model at Alpine is accommodated by the fact that nearby earthquake focal mechanisms establish an extensional stress regime with the regional maximum horizontal principal stress aligned perpendicular to the valley axis. We conclude that the localized stress field associated with a valley setting can be highly variable and that breakouts need to be examined in this context when estimating the orientations and magnitudes of regional principal stresses.
Association of microparticles and neutrophil activation with decompression sickness.
Thom, Stephen R; Bennett, Michael; Banham, Neil D; Chin, Walter; Blake, Denise F; Rosen, Anders; Pollock, Neal W; Madden, Dennis; Barak, Otto; Marroni, Alessandro; Balestra, Costantino; Germonpre, Peter; Pieri, Massimo; Cialoni, Danilo; Le, Phi-Nga Jeannie; Logue, Christopher; Lambert, David; Hardy, Kevin R; Sward, Douglas; Yang, Ming; Bhopale, Veena B; Dujic, Zeljko
2015-09-01
Decompression sickness (DCS) is a systemic disorder, assumed due to gas bubbles, but additional factors are likely to play a role. Circulating microparticles (MPs)--vesicular structures with diameters of 0.1-1.0 μm--have been implicated, but data in human divers have been lacking. We hypothesized that the number of blood-borne, Annexin V-positive MPs and neutrophil activation, assessed as surface MPO staining, would differ between self-contained underwater breathing-apparatus divers suffering from DCS vs. asymptomatic divers. Blood was analyzed from 280 divers who had been exposed to maximum depths from 7 to 105 meters; 185 were control/asymptomatic divers, and 90 were diagnosed with DCS. Elevations of MPs and neutrophil activation occurred in all divers but normalized within 24 h in those who were asymptomatic. MPs, bearing the following proteins: CD66b, CD41, CD31, CD142, CD235, and von Willebrand factor, were between 2.4- and 11.7-fold higher in blood from divers with DCS vs. asymptomatic divers, matched for time of sample acquisition, maximum diving depth, and breathing gas. Multiple logistic regression analysis documented significant associations (P < 0.001) between DCS and MPs and for neutrophil MPO staining. Effect estimates were not altered by gender, body mass index, use of nonsteroidal anti-inflammatory agents, or emergency oxygen treatment and were modestly influenced by divers' age, choice of breathing gas during diving, maximum diving depth, and whether repetitive diving had been performed. There were no significant associations between DCS and number of MPs without surface proteins listed above. We conclude that MP production and neutrophil activation exhibit strong associations with DCS. Copyright © 2015 the American Physiological Society.
Ekama, G A; Marais, P
2004-02-01
The applicability of the one-dimensional idealized flux theory (1DFT) for the design of secondary settling tanks (SSTs) is evaluated by comparing its predicted maximum surface overflow (SOR) and solids loading (SLR) rates with those calculated with the two-dimensional computational fluid dynamics model SettlerCAD using as a basis 35 full-scale SST stress tests conducted on different SSTs with diameters from 30 to 45m and 2.25-4.1m side water depth (SWD), with and without Stamford baffles. From the simulations, a relatively consistent pattern appeared, i.e. that the 1DFT can be used for design but its predicted maximum SLR needs to be reduced by an appropriate flux rating, the magnitude of which depends mainly on SST depth and hydraulic loading rate (HLR). Simulations of the Watts et al. (Water Res. 30(9)(1996)2112) SST, with doubled SWDs and the Darvill new (4.1m) and old (2.5m) SSTs with interchanged depths, were run to confirm the sensitivity of the flux rating to depth and HLR. Simulations with and without a Stamford baffle were also performed. While the design of the internal features of the SST, such as baffling, has a marked influence on the effluent SS concentration while the SST is underloaded, these features appeared to have only a small influence on the flux rating, i.e. capacity, of the SST. Until more information is obtained, it would appear from the simulations that the flux rating of 0.80 of the 1DFT maximum SLR recommended by Ekama and Marais (Water Pollut. Control 85(1)(1986)101) remains a reasonable value to apply in the design of full-scale SSTs: for deep SSTs (4m SWD) the flux rating could be increased to 0.85 and for shallow SSTs (2.5m SWD) decreased to 0.75. It is recommended that (i) although the apparent interrelationship between SST flux rating and depth suggests some optimization of the volume of the SST, this be avoided, and (ii) the depth of the SST be designed independently of the surface area, as is usually the practice, and once selected, the appropriate flux rating be applied to the 1DFT estimate of the surface area.
Comparison of the 1D flux theory with a 2D hydrodynamic secondary settling tank model.
Ekama, G A; Marais, P
2004-01-01
The applicability of the 1D idealized flux theory (1DFT) for design of secondary settling tanks (SSTs) is evaluated by comparing its predicted maximum surface overflow (SOR) and solids loading (SLR) rates with those calculated from the 2D hydrodynamic model SettlerCAD using as a basis 35 full scale SST stress tests conducted on different SSTs with diameters from 30 to 45m and 2.25 to 4.1 m side water depth, with and without Stamford baffles. From the simulations, a relatively consistent pattern appeared, i.e. that the 1DFT can be used for design but its predicted maximum SLR needs to be reduced by an appropriate flux rating, the magnitude of which depends mainly on SST depth and hydraulic loading rate (HLR). Simulations of the sloping bottom shallow (1.5-2.5 m SWD) Dutch SSTs tested by STOWa and the Watts et al. SST, all with doubled SWDs, and the Darvill new (4.1 m) and old (2.5 m) SSTs with interchanged depths, were run to confirm the sensitivity of the flux rating to depth and HLR. Simulations with and without a Stamford baffle were also done. While the design of the internal features of the SST, such as baffling, has a marked influence on the effluent SS concentration for underloaded SSTs, these features appeared to have only a small influence on the flux rating, i.e. capacity, of the SST. Until more information is obtained, it would appear from the simulations so far that the flux rating of 0.80 of the 1DFT maximum SLR recommended by Ekama and Marais remains a reasonable value to apply in the design of full scale SSTs: for deep SSTs (4 m SWD) the flux rating could be increased to 0.85 and for shallow SSTs (2.5 m SWD) decreased to 0.75. It is recommended that (i) although the apparent interrelationship between SST flux rating and depth suggests some optimization of the volume of the SST, this be avoided, and (ii) the depth of the SST be designed independently of the surface area, as is usually the practice, and once selected, the appropriate flux rating be applied to the 1DFT estimate of the surface area.
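As background to the 1DFT capacity estimate being derated here, the sketch below evaluates a total solids flux curve for a Vesilind settling law and takes its local minimum above the feed concentration as the limiting flux, then applies an assumed flux rating of 0.8; the settling parameters and underflow rate are placeholders, not values from the stress tests.

```python
# Sketch: 1D flux theory limiting solids flux with a Vesilind settling velocity,
# derated by an assumed flux rating (illustrative parameters only).
import numpy as np

V0, rh = 6.0, 0.4        # Vesilind parameters: m/h and m3/kg (assumed)
q_under = 0.5            # underflow rate per unit area, m/h (assumed)
X_feed = 3.5             # feed solids concentration, kg/m3 (assumed)
flux_rating = 0.80       # derating factor recommended by Ekama and Marais

X = np.linspace(0.1, 20.0, 2000)                      # solids concentration, kg/m3
total_flux = X * V0 * np.exp(-rh * X) + X * q_under   # gravity + bulk underflow flux

# Limiting flux: local minimum of the total flux curve at a concentration above X_feed.
mask = X > X_feed
idx = np.argmin(total_flux[mask])
flux_limit = total_flux[mask][idx]
print(f"limiting flux ~ {flux_limit:.2f} kg/m2/h at X = {X[mask][idx]:.1f} kg/m3")
print(f"design maximum SLR ~ {flux_rating * flux_limit:.2f} kg/m2/h")
```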
Estimation of the Friction Coefficient of a Nanostructured Composite Coating
NASA Astrophysics Data System (ADS)
Shil'ko, S. V.; Chernous, D. A.; Ryabchenko, T. V.; Hat'ko, V. V.
2017-11-01
The frictional-mechanical properties of a thin polymer-ceramic coating obtained by gas-phase impregnation of nanoporous anodic alumina with a fluoropolymer (octafluorocyclobutane) have been investigated. The coefficient of sliding friction of the coating is predicted based on an analysis of contact deformation within the framework of the Winkler elastic foundation hypothesis and a three-phase micromechanical model. It is shown that an acceptable prediction accuracy can be obtained considering the uniaxial strain state of the coating. It was found that, on impregnation by the method of plasmachemical treatment, the relative depth of penetration of the polymer increased almost in proportion to the processing time. The rate and maximum possible depth of penetration of the polymer into nanoscale pores grew with increasing porosity of the alumina substrate.
Photosynthetic parameters in the Beaufort Sea in relation to the phytoplankton community structure
NASA Astrophysics Data System (ADS)
Huot, Y.; Babin, M.; Bruyant, F.
2013-05-01
To model phytoplankton primary production from remotely sensed data, a method to estimate photosynthetic parameters describing the photosynthetic rates per unit biomass is required. Variability in these parameters must be related to environmental variables that are measurable remotely. In the Arctic, a limited number of measurements of photosynthetic parameters have been carried out with the concurrent environmental variables needed. Such measurements and their relationship to environmental variables will be required to improve the accuracy of remotely sensed estimates of phytoplankton primary production and our ability to predict future changes. During the MALINA cruise, a large dataset of these parameters was obtained. Together with previously published datasets, we use environmental and trophic variables to provide functional relationships for these parameters. In particular, we describe several specific aspects: the maximum rate of photosynthesis (Pmaxchl) normalized to chlorophyll decreases with depth and is higher for communities composed of large cells; the saturation parameter (Ek) decreases with depth but is independent of the community structure; and the initial slope of the photosynthesis versus irradiance curve (αchl) normalized to chlorophyll is independent of depth but is higher for communities composed of larger cells. The photosynthetic parameters were not influenced by temperature over the range encountered during the cruise (-2 to 8 °C).
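The parameters named here (Pmaxchl, αchl, Ek) are usually tied together through a photosynthesis-irradiance curve; a common exponential form is P(E) = Pmax·(1 − exp(−α·E/Pmax)), with Ek = Pmax/α. The sketch below evaluates that form with placeholder parameter values (the actual MALINA fits may use a different formulation).

```python
# Sketch: chlorophyll-normalized photosynthesis-irradiance (P-E) curve,
# exponential (Webb-type) form with illustrative parameter values.
import numpy as np

P_max = 2.0    # maximum photosynthetic rate, mg C (mg chl)^-1 h^-1 (assumed)
alpha = 0.05   # initial slope, per (umol photons m^-2 s^-1) (assumed)
E_k = P_max / alpha                      # saturation irradiance, umol photons m^-2 s^-1

E = np.linspace(0.0, 500.0, 6)           # irradiance levels
P = P_max * (1.0 - np.exp(-alpha * E / P_max))
for e, p in zip(E, P):
    print(f"E = {e:5.0f}  P = {p:.2f}")
print(f"Ek = {E_k:.0f} umol photons m^-2 s^-1")
```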
Photosynthetic parameters in the Beaufort Sea in relation to the phytoplankton community structure
NASA Astrophysics Data System (ADS)
Huot, Y.; Babin, M.; Bruyant, F.
2013-01-01
To model phytoplankton primary production from remotely sensed data, a method to estimate photosynthetic parameters describing the photosynthetic rates per unit biomass is required. Variability in these parameters must be related to environmental variables that are measurable remotely. In the Arctic, a limited number of measurements of photosynthetic parameters have been carried out with the concurrent environmental variables needed. Such measurements and their relationship to environmental variables are therefore required to improve the accuracy of remote estimates of phytoplankton primary production, as well as our ability to predict future changes. During the MALINA cruise, a large dataset of these parameters was obtained. Together with previously published datasets, we use environmental and trophic variables to provide functional relationships for these parameters. In particular, we describe several specific aspects: the maximum rate of photosynthesis (Pmaxchl) normalized to chlorophyll decreases with depth and is higher for communities composed of large cells; the saturation parameter (Ek) decreases with depth but is independent of the community structure; and the initial slope of the photosynthesis versus irradiance curve (αchl) normalized to chlorophyll is independent of depth but is higher for communities composed of larger cells. The photosynthetic parameters were not influenced by temperature over the range encountered during the cruise (-2 to 8 °C).
Pressure as a limit to bloater (Coregonus hoyi) vertical migration
TeWinkel, Leslie M.; Fleischer, Guy W.
1998-01-01
Observations of bloater vertical migration showed a limit to the vertical depth changes that bloater experience. In this paper, we conducted an analysis of maximum differences in pressure encountered by bloater during vertical migration. Throughout the bottom depths studied, bloater experienced maximum reductions in swim bladder volume equal to approximately 50-60% of the volume in midwater. The analysis indicated that the limit in vertical depth change may be related to a maximum level of positive or negative buoyancy for which bloater can compensate using alternative mechanisms such as hydrodynamic lift. Bloater may be limited in the extent of migration by either their depth of neutral buoyancy or the distance above the depth of neutral buoyancy at which they can still maintain their position in the water column. Although a migration limit for the bloater population was evident, individual distances of migration varied at each site. These variations in migration distances may indicate differences in depths of neutral buoyancy within the population. However, in spite of these variations, the strong correlation between shallowest depths of migration and swim bladder volume reduction across depths provides evidence that hydrostatic pressure limits the extent of daily vertical movement in bloater.
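The percentage swim bladder volume reductions discussed here follow directly from Boyle's law for a closed gas volume; the sketch below computes the volume remaining after a descent from a midwater depth, with the depths and freshwater pressure conversion chosen purely for illustration.

```python
# Sketch: Boyle's-law change in swim bladder volume with depth (freshwater).
RHO_G = 1000.0 * 9.81      # pressure increase per metre of freshwater, Pa/m
P_ATM = 101325.0           # atmospheric pressure, Pa

def pressure(depth_m):
    return P_ATM + RHO_G * depth_m

def volume_fraction(depth_start_m, depth_end_m):
    """Fraction of the initial gas volume remaining after a depth change."""
    return pressure(depth_start_m) / pressure(depth_end_m)

# Example: a fish neutrally buoyant at 30 m descending to 80 m (illustrative depths).
frac = volume_fraction(30.0, 80.0)
print(f"volume retained: {frac:.2f}  (reduction of {100 * (1 - frac):.0f}%)")
```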
Estimation of organic carbon loss potential in north of Iran
NASA Astrophysics Data System (ADS)
Shahriari, A.; Khormali, F.; Kehl, M.; Welp, G.; Scholz, Ch.
2009-04-01
The development of sustainable agricultural systems requires techniques that accurately monitor changes in the amount, nature and breakdown rate of soil organic matter and can compare the rate of breakdown of different plant or animal residues under different management systems. In this research, the study area includes the southern alluvial and piedmont plains of the Gorgan River, extending from east to west in Golestan province, Iran. Samples from 10 soil series were collected from the cultivation depth (0-30 cm). Permanganate-oxidizable carbon (POC), an index of labile soil carbon, was used to indicate the potential loss of soil organic carbon; this index shows the maximum loss of OC in a given soil. The maximum loss of OC for each soil series was estimated from POC and bulk density (BD). The potential loss of OC was estimated at between 1,253,263 and 2,410,813 g/ha of carbon. Stable organic constituents in the soil include humic substances and other organic macromolecules that are intrinsically resistant to microbial attack, or that are physically protected by adsorption on mineral surfaces or entrapment within clay and mineral aggregates. However, the (Clay + Silt)/OC ratio had a significant negative (p < 0.001) correlation with POC content, confirming the preserving effect of fine particles.
Comparison of observed and predicted abutment scour at selected bridges in Maine.
DOT National Transportation Integrated Search
2008-01-01
Maximum abutment-scour depths predicted with five different methods were compared to maximum abutment-scour depths observed at 100 abutments at 50 bridge sites in Maine with a median bridge age of 66 years. Prediction methods included the Froehli...
Assimilation of GOES-Derived Cloud Fields Into MM5
NASA Astrophysics Data System (ADS)
Biazar, A. P.; Doty, K. G.; McNider, R.
2007-12-01
In this approach, the assimilation of GOES-derived cloud data into an atmospheric model (the Fifth-Generation Pennsylvania State University-National Center for Atmospheric Research Mesoscale Model, or MM5) was performed in two steps. In the first step, multiple linear regression equations were developed from a control MM5 simulation to establish relationships for several dependent variables in model columns that had one or more layers of clouds. In the second step, the regression equations were applied during an MM5 simulation with assimilation in which the hourly GOES satellite data were used to determine the cloud locations and some of the cloud properties, but with all the other variables being determined by the model data. The satellite-derived fields used were shortwave cloud albedo and cloud top pressure. Ten multiple linear regression equations were developed for the following dependent variables: total cloud depth, number of cloud layers, depth of the layer that contains the maximum vertical velocity, the maximum vertical velocity, the height of the maximum vertical velocity, the estimated 1-h stable (i.e., grid scale) precipitation rate, the estimated 1-h convective precipitation rate, the height of the level with the maximum positive diabatic heating, the magnitude of the maximum positive diabatic heating, and the largest continuous layer of upward motion. The horizontal components of the divergent wind were adjusted to be consistent with the regression estimate of the maximum vertical velocity. The new total horizontal wind field with these new divergent components was then used to nudge an ongoing MM5 model simulation towards the target vertical velocity. Other adjustments included diabatic heating and moistening at specified levels. Where the model simulation had clouds when the satellite data indicated clear conditions, procedures were taken to remove or diminish the errant clouds. The results for the period of 0000 UTC 28 June - 0000 UTC 16 July 1999 for both a continental 32-km grid and an 8-km grid over the Southeastern United States indicate a significant improvement in the cloud bias statistics. The main improvement was the reduction of high bias values that indicated times and locations in the control run when there were model clouds but when the satellite indicated clear conditions. The importance of this technique is that it has been able to assimilate the observed clouds in the model in a dynamically sustainable manner. Acknowledgments. This work was partially funded by the following grants: a GEWEX grant from NASA, the Cooperative Agreement between the University of Alabama in Huntsville and the Minerals Management Service on Gulf of Mexico Issues, a NASA applications grant, and an NSF grant.
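A minimal stand-in for the first step described here (fitting multiple linear regressions to cloudy model columns and then applying them where the satellite reports cloud) is sketched below with scikit-learn; the predictor set and target variable are illustrative, not the ten regressions actually developed.

```python
# Sketch: fit a multiple linear regression on "control run" cloudy columns, then
# predict a target variable (e.g. cloud-layer depth) where the satellite sees cloud.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n_columns = 2000
# Illustrative predictors for cloudy model columns: cloud albedo, cloud-top pressure,
# column mean relative humidity (placeholders for the variables used in the study).
X_train = np.column_stack([
    rng.uniform(0.2, 0.9, n_columns),       # shortwave cloud albedo
    rng.uniform(200.0, 900.0, n_columns),   # cloud-top pressure, hPa
    rng.uniform(0.4, 1.0, n_columns),       # column mean relative humidity
])
# Toy target: total cloud depth (m) with noise, standing in for a model diagnostic.
y_train = (6000.0 * X_train[:, 0] - 3.0 * X_train[:, 1]
           + 2000.0 * X_train[:, 2] + 100.0 * rng.standard_normal(n_columns))

model = LinearRegression().fit(X_train, y_train)

# Apply to satellite-observed cloudy columns (albedo and cloud-top pressure from GOES,
# humidity from the ongoing simulation in this toy setup).
X_obs = np.array([[0.75, 350.0, 0.9], [0.35, 800.0, 0.6]])
print("predicted cloud depth (m):", model.predict(X_obs).round(0))
```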
NASA Astrophysics Data System (ADS)
Konstantinovskaya, E.; Malo, M.; Claprood, M.; Tran-Ngoc, T. D.; Gloaguen, E.; Lefebvre, R.
2012-04-01
The Paleozoic sedimentary succession of the St. Lawrence Platform was characterized to estimate the CO2 storage capacity, the caprock integrity and the fracture/fault stability at the Becancour pilot site. Results are based on the structural interpretation of 25 seismic lines and analysis of 11 well logs and petrophysical data. The three potential storage units of the Potsdam, Beekmantown and Trenton saline aquifers are overlain by a multiple caprock system of Utica shales and Lorraine siltstones. The NE-SW regional normal faults dipping to the SE affect the subhorizontal sedimentary succession. The Covey Hill (Lower Potsdam) was found to be the only unit with significant CO2 sequestration potential, since these coarse-grained, poorly sorted fluvial-deltaic quartz-feldspar sandstones are characterized by the highest porosity, matrix permeability and net pay thickness and have the lowest static Young modulus, Poisson's ratio and compressive strength relative to other units. The Covey Hill is located at depths of 1145-1259 m, thus injected CO2 would be in a supercritical state according to the observed salinity, temperature and fluid pressure. The calcareous Utica shale of the regional seal is more brittle and has a higher Young modulus and lower Poisson's ratio than the overlying Lorraine shale. The 3D geological model is kriged using the tops of the geological formations recorded at wells and picked travel times as external drift. The computed CO2 storage capacity in the Covey Hill sandstones is estimated by the volumetric and compressibility methods as 0.22 tons/km2 with a storage efficiency factor E = 2.4% and 0.09 tons/km2 with E = 1%, respectively. A first set of numerical radial simulations of CO2 injection into the Covey Hill was carried out with TOUGH2/ECO2N. A geomechanical analysis of the St. Lawrence Platform sedimentary basin provides the maximum sustainable fluid pressures for CO2 injection that will not induce tensile fracturing and shear reactivation along pre-existing fractures and faults in the caprock. The regional stresses/pressure gradients estimated for the Paleozoic sedimentary basin (depths < 4 km) indicate a strike-slip stress regime. The average maximum horizontal stress orientation (SHmax) is estimated at N62.8°E ± 4.0° in the Becancour-Notre Dame area. The high-angle NE-SW Yamaska normal fault is oriented at 16.7° to the SHmax orientation at the Becancour site. The slip tendency along the fault in this area is estimated to be 0.47 based on the stress magnitude and rock strength evaluations for the borehole breakout intervals in local wells. The regional pore pressure-stress coupling ratio under assumed parameters is about 0.5-0.65 and may contribute to reducing the risk of shear reactivation of faults and fractures. The maximum sustainable fluid pressure that would not cause opening of vertical tensile fractures during CO2 operations is about 18.5-20 MPa at a depth of 1 km.
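The volumetric storage capacity quoted here is typically computed as M = A·h·φ·ρCO2·E (area × net thickness × porosity × CO2 density × efficiency factor); the sketch below evaluates that expression per unit area with placeholder reservoir properties rather than the Becancour values.

```python
# Sketch: volumetric CO2 storage capacity per unit area, M/A = h * phi * rho_CO2 * E.
h = 50.0          # net pay thickness, m (assumed)
phi = 0.10        # porosity, fraction (assumed)
rho_co2 = 650.0   # supercritical CO2 density at reservoir conditions, kg/m3 (assumed)
E = 0.024         # storage efficiency factor (2.4%, one of the cases in the text)

mass_per_m2 = h * phi * rho_co2 * E            # kg of CO2 per m2 of aquifer area
mass_per_km2 = mass_per_m2 * 1.0e6 / 1.0e3     # tonnes per km2
print(f"storage capacity ~ {mass_per_km2:.2e} t/km2")
```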
NASA Astrophysics Data System (ADS)
Webster, C.; Bühler, Y.; Schirmer, M.; Stoffel, A.; Giulia, M.; Jonas, T.
2017-12-01
Snow depth distribution in forests exhibits strong spatial heterogeneity compared to adjacent open sites. Measurement of snow depths in forests is currently limited to a) manual point measurements, which are sparse and time-intensive, b) ground-penetrating radar surveys, which have limited spatial coverage, or c) airborne LiDAR acquisitions, which are expensive and may deteriorate in denser forests. We present the application of unmanned aerial vehicles in combination with structure-from-motion (SfM) methods to photogrammetrically map snow depth distribution in forested terrain. Two separate flights were carried out 10 days apart across a heterogeneous forested area of 900 x 500 m. Corresponding snow depth maps were derived using both LiDAR-based and SfM-based DTM data obtained during snow-off conditions. Manual measurements collected following each flight were used to validate the snow depth maps. Snow depths were resolved at 5 cm resolution, and forest snow depth distribution structures such as tree wells and other areas of preferential melt were represented well. Differential snow depth maps showed maximum ablation on the exposed south sides of trees and smaller differences in the centre of gaps and on the north sides of trees. This new application of SfM to map snow depth distribution in forests demonstrates a straightforward method for obtaining information that was previously only available through manual, spatially limited ground-based measurements. These methods could therefore be extended to more frequent observation of snow depths in forests, as well as estimating snow accumulation and depletion rates.
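The snow depth maps described here are obtained by differencing a snow-on surface model against a snow-off terrain model; a minimal sketch of that step is below, assuming the two grids are already co-registered and resampled to a common 5 cm raster (file handling, outlier filtering and vegetation masking are omitted).

```python
# Sketch: photogrammetric snow depth as (snow-on DSM) - (snow-off DTM),
# assuming co-registered grids on the same 5 cm raster.
import numpy as np

dsm_snow_on = np.array([[1503.42, 1503.55], [1503.61, 1503.70]])   # m a.s.l. (toy values)
dtm_snow_off = np.array([[1502.20, 1502.31], [1502.35, 1502.44]])  # m a.s.l. (toy values)

snow_depth = dsm_snow_on - dtm_snow_off
snow_depth[snow_depth < 0] = np.nan      # negative depths flagged as no-data

# Validation against manual probe measurements at known pixels (toy values).
probes = {(0, 0): 1.20, (1, 1): 1.30}    # (row, col): measured depth in m
for (r, c), obs in probes.items():
    print(f"pixel {r},{c}: mapped {snow_depth[r, c]:.2f} m vs probe {obs:.2f} m")
```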
NASA Astrophysics Data System (ADS)
Schaffer-Smith, D.; Swenson, J. J.; Reiter, M. E.; Isola, J. E.
2017-12-01
Over 50% of western hemisphere shorebird species are in decline due to ongoing habitat loss and habitat degradation. Wetland-dependent shorebirds prefer shallowly flooded habitats (water depth <5 cm), yet most wetlands are not managed to optimize shallow areas. In-situ water depth measurements and microtopography data coupled with satellite image analysis can assist in understanding habitat suitability patterns at broad spatial scales. We generated detailed bathymetry, and estimated spatial daily water depths, the proportion of wetland area providing flooded habitat within the optimal depth range, and the volume of water present in 23 managed wetlands in the Sacramento Valley of California, a globally important shorebird stopover site. Using 30 years of satellite imagery, we estimated suitable habitat extent across the landscape under a range of climate conditions. While spring shorebird abundance has historically peaked in early April, we found that maximum optimal habitat extent occurred after mid-April. More than 50% of monitored wetlands provided limited optimal habitat (<5% of total wetland extent) during the peak of migration between mid-March and mid-April. Furthermore, the duration of suitable habitat presence was fleeting; only 4 wetlands provided at least 10 consecutive days with >5% optimal habitat during the peak of migration. Wetlands with a higher percent clay content and lower topographic variability were more likely to provide a greater extent and duration of suitable habitat. We estimated that even in a relatively wet El Niño year as little as 0.01% to 10.72% of managed herbaceous wetlands in the Sacramento Valley provided optimal habitat for shorebirds at the peak of migration in early April. In an extreme drought year, optimal habitat decreased by 80% compared to a wet year. Changes in the timing of wetland irrigation and drawdown schedules and the design of future wetland restoration projects could increase the extent and duration of optimal flooded habitat for migratory shorebirds, without significant increases in overall water use requirements.
Haro, Alexander J.; Mulligan, Kevin; Suro, Thomas P.; Noreika, John; McHugh, Amy
2017-10-16
Recent efforts to advance river connectivity for the Millstone River watershed in New Jersey have led to the evaluation of a low-flow gauging weir that spans the full width of the river. The methods and results of a desktop modelling exercise were used to evaluate the potential ability of three anadromous fish species (Alosa sapidissima [American shad], Alosa pseudoharengus [alewife], and Alosa aestivalis [blueback herring]) to pass upstream over the U.S. Geological Survey Blackwells Mills streamgage (01402000) and weir on the Millstone River, New Jersey, at various streamflows, and to estimate the probability that the weir will be passable during the spring migratory season. Based on data from daily fishway counts downstream from the Blackwells Mills streamgage and weir between 1996 and 2014, the general migratory period was defined as April 14 to May 28. Recorded water levels and flow data were used to theoretically estimate water depths and velocities over the weir, as well as flow exceedances occurring during the migratory period. Results indicate that the weir is a potential depth barrier to fish passage when streamflows are below 200 cubic feet per second using a 1-body-depth criterion for American shad (the largest fish among the target species). Streamflows in that range occur on average 35 percent of the time during the migratory period. An increase of the depth criterion to 2 body depths causes the weir to become a possible barrier to passage when flows are below 400 cubic feet per second. Streamflows in that range occur on average 73 percent of the time during the migration season. Average cross-sectional velocities at several points along the weir do not seem to be limiting to fish migration, but maximum theoretical velocities estimated without friction loss over the face of the weir could be potentially limiting.
Improved depth estimation with the light field camera
NASA Astrophysics Data System (ADS)
Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is now available in a single capture. Lytro, Inc. also provides a depth estimate from a single-shot capture with a light field camera such as the Lytro Illum. This Lytro depth estimate contains much correct depth information and can be used for higher quality estimation. In this paper, we present a novel, simple and principled algorithm that computes dense depth estimates by combining defocus, correspondence and Lytro depth estimations. We analyze 2D epipolar images (EPIs) to obtain defocus and correspondence depth maps. Defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from the EPIs. Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction, as well as light field displays.
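A minimal version of the defocus and correspondence cues described here is sketched below for a 4D light field L[u, v, y, x]: for each integer disparity hypothesis the angular views are shifted to align, the defocus response is taken as the spatial gradient magnitude of the angular mean, and the correspondence response as the angular variance. The array shapes, the integer-only shifts, and the winner-take-all combination are simplifying assumptions, not the paper's algorithm.

```python
# Sketch: defocus and correspondence depth cues from a 4D light field L[u, v, y, x],
# using integer disparity hypotheses and simple shift-and-compare (toy data).
import numpy as np

U, V, H, W = 5, 5, 32, 32
rng = np.random.default_rng(0)
L = rng.random((U, V, H, W))                 # placeholder light field
u0, v0 = U // 2, V // 2                      # central sub-aperture
disparities = range(-2, 3)                   # integer disparity hypotheses (pixels)

def cues(disp):
    """Shift views to align for a disparity, return (defocus, correspondence) maps."""
    aligned = np.empty_like(L)
    for u in range(U):
        for v in range(V):
            aligned[u, v] = np.roll(L[u, v],
                                    shift=(disp * (u - u0), disp * (v - v0)),
                                    axis=(0, 1))
    mean = aligned.mean(axis=(0, 1))
    gy, gx = np.gradient(mean)
    defocus = np.hypot(gy, gx)                # high where the refocused image is sharp
    correspondence = aligned.var(axis=(0, 1)) # low where the views agree
    return defocus, correspondence

responses = [cues(d) for d in disparities]
defocus_stack = np.stack([r[0] for r in responses])
corresp_stack = np.stack([r[1] for r in responses])
depth_from_defocus = defocus_stack.argmax(axis=0)   # per-pixel best disparity index
depth_from_corresp = corresp_stack.argmin(axis=0)
print("cue agreement fraction:", (depth_from_defocus == depth_from_corresp).mean())
```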
Flynn, Robert H.; Medalie, Laura
1997-01-01
Contraction scour for all modelled flows ranged from 0.0 to 2.7 ft. The worst-case contraction scour occurred at the maximum free-surface flow (with road overflow) discharge, which was less than the 100-year discharge. Abutment scour ranged from 9.8 to 10.7 ft along the left abutment and from 16.2 to 19.9 ft along the right abutment. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring is included in the section titled "Scour Results". Scoured streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich and Hire equations (abutment scour) give "excessively conservative estimates of scour depths" (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Spatio-temporal analysis of Modified Omori law in Bayesian framework
NASA Astrophysics Data System (ADS)
Rezanezhad, V.; Narteau, C.; Shebalin, P.; Zoeller, G.; Holschneider, M.
2017-12-01
This work presents a study of the spatio-temporal evolution of the modified Omori parameters in southern California in the time period 1981-2016. A nearest-neighbor approach is applied for earthquake clustering. This study targets small mainshocks and corresponding large aftershocks (2.5 ≤ m_mainshock ≤ 4.5 and 1.8 ≤ m_aftershock ≤ 2.8). We invert for the spatio-temporal behavior of the c and p values (especially c) over the whole area using an MCMC-based maximum likelihood estimator. As parameterizing families we use Voronoi cells with randomly distributed cell centers. Considering that the c value represents a physical characteristic such as stress change, we expect to see a coherent c value pattern over seismologically coacting areas. This correlation of c values can indeed be seen for the San Andreas, San Jacinto and Elsinore faults. Moreover, the depth dependency of the c value is studied, which shows a linear behavior of log(c) with respect to aftershock depth between 5 and 15 km.
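A minimal version of the maximum likelihood estimation referred to here (without the MCMC and Voronoi spatial parameterization) fits the modified Omori rate λ(t) = K/(t + c)^p to a single aftershock sequence by minimizing the negative log-likelihood of an inhomogeneous Poisson process; the synthetic times and starting values below are placeholders.

```python
# Sketch: maximum likelihood fit of the modified Omori law, rate(t) = K / (t + c)**p,
# for aftershock times t_i in (0, T], treated as an inhomogeneous Poisson process.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, times, T):
    K, c, p = params
    if K <= 0 or c <= 0 or p <= 0:
        return np.inf
    rate_term = np.sum(np.log(K) - p * np.log(times + c))
    if abs(p - 1.0) < 1e-9:
        integral = K * (np.log(T + c) - np.log(c))
    else:
        integral = K * ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    return -(rate_term - integral)

# Synthetic aftershock sequence over T = 100 days (illustrative only).
rng = np.random.default_rng(2)
times = np.sort(rng.uniform(0.0, 100.0, size=300) ** 2) / 100.0  # crude clustering near t = 0
T = 100.0

result = minimize(neg_log_likelihood, x0=[50.0, 0.1, 1.1], args=(times, T),
                  method="Nelder-Mead")
K_hat, c_hat, p_hat = result.x
print(f"K = {K_hat:.1f}, c = {c_hat:.3f} days, p = {p_hat:.2f}")
```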
Preliminary gravity inversion model of Frenchman Flat Basin, Nevada Test Site, Nevada
Phelps, Geoffrey A.; Graham, Scott E.
2002-01-01
The depth of the basin beneath Frenchman Flat is estimated using a gravity inversion method. Gamma-gamma density logs from two wells in Frenchman Flat constrained the density profiles used to create the gravity inversion model. Three initial models were considered using data from one well, and a final model was then proposed based on new information from the second well. The preferred model indicates that a northeast-trending, oval-shaped basin at least 2,100 m deep underlies Frenchman Flat, with a maximum depth of 2,400 m at its northeast end. No major horst and graben structures are predicted. Sensitivity analysis of the model indicates that each parameter contributes the same magnitude of change to the model, up to 30 meters of change in depth for a 1% change in density, but some parameters affect a broader area of the basin. The horizontal resolution of the model was determined by examining the spacing between data stations, and was set to 500 square meters.
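As an order-of-magnitude check on this kind of inversion, the infinite Bouguer slab approximation relates a gravity anomaly to basin thickness by h = Δg / (2πGΔρ); the sketch below applies it with placeholder values for the anomaly and the density contrast between basin fill and basement.

```python
# Sketch: first-order basin depth from a gravity low using the Bouguer slab formula,
#   delta_g = 2*pi*G*delta_rho*h   =>   h = delta_g / (2*pi*G*delta_rho)
import math

G = 6.674e-11                    # gravitational constant, m3 kg-1 s-2
delta_g_mgal = -40.0             # observed anomaly, mGal (assumed)
delta_rho = -400.0               # density contrast, kg/m3 (fill minus basement, assumed)

delta_g = delta_g_mgal * 1.0e-5  # convert mGal to m/s2
h = delta_g / (2.0 * math.pi * G * delta_rho)
print(f"slab-equivalent basin thickness ~ {h:.0f} m")
```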
The small-animal radiation research platform (SARRP): dosimetry of a focused lens system.
Deng, Hua; Kennedy, Christopher W; Armour, Elwood; Tryggestad, Erik; Ford, Eric; McNutt, Todd; Jiang, Licai; Wong, John
2007-05-21
A small animal radiation platform equipped with on-board cone-beam CT and conformal irradiation capabilities is being constructed for translational research. To achieve highly localized dose delivery, an x-ray lens is used to focus the broad beam from a 225 kVp x-ray tube down to a beam with a full width at half maximum (FWHM) of approximately 1.5 mm in the energy range 40-80 keV. Here, we report on the dosimetric characteristics of the focused beam from the x-ray lens subsystem for high-resolution dose delivery. Using the metric of the average dose within a 1.5 mm diameter area, the dose rates at a source-to-surface distance (SSD) of 34 cm are 259 and 172 cGy/min at 6 mm and 2 cm depths, respectively, with an estimated uncertainty of ±5%. The percent depth dose is approximately 56% at 2 cm depth for a beam at 34 cm SSD.
Pier and contraction scour prediction in cohesive soils at selected bridges in Illinois
Straub, Timothy D.; Over, Thomas M.
2010-01-01
This report presents the results of testing the Scour Rate In Cohesive Soils-Erosion Function Apparatus (SRICOS-EFA) method for estimating scour depth of cohesive soils at 15 bridges in Illinois. The SRICOS-EFA method for complex pier and contraction scour in cohesive soils has two primary components. The first component includes the calculation of the maximum contraction and pier scour (Zmax). The second component is an integrated approach that considers a time factor, soil properties, and continued interaction between the contraction and pier scour (SRICOS runs). The SRICOS-EFA results were compared to scour prediction results for non-cohesive soils based on Hydraulic Engineering Circular No. 18 (HEC-18). On average, the HEC-18 method predicted higher scour depths than the SRICOS-EFA method. A reduction factor was determined for each HEC-18 result to make it match the maximum of three types of SRICOS run results. The unconfined compressive strength (Qu) for the soil was then matched with the reduction factor and the results were ranked in order of increasing Qu. Reduction factors were then grouped by Qu and applied to each bridge site and soil. These results, and comparison with the SRICOS Zmax calculation, show that less than half of the reduction-factor method values were the lowest estimate of scour, whereas the Zmax method values were the lowest estimate for over half. A tiered approach to predicting pier and contraction scour was developed. There are four levels to this approach, numbered in order of complexity, with the fourth level being a full SRICOS-EFA analysis. Levels 1 and 2 involve the reduction factors and Zmax calculation, and can be completed without EFA data. Level 3 requires some surrogate EFA data. Levels 3 and 4 require streamflow for input into SRICOS. Estimation techniques for both EFA surrogate data and streamflow data were developed.
Relation of local scour to hydraulic properties at selected bridges in New York
Butch, Gerard K.; ,
1993-01-01
Hydraulic properties, bridge geometry, and basin characteristics at 31 bridges in New York are being investigated to identify factors that affect local scour. Streambed elevations measured by the U.S. Geological Survey and New York State Department of Transportation are used to estimate local-scour depth. Data that show zero or minor scour were included in the analysis to decrease bias and to estimate hydraulic properties related to local scour. The maximum measured local scour at the 31 bridges for a single peak flow was 5.4 feet, but the deepening of scour holes at two sites to 6.1 feet and 7.8 feet by multiple peak flows could indicate that the number or duration of high flows is a factor. Local scour at a pier generally increased as the recurrence interval (magnitude) of the discharge increased, but the correlation between local-scour depth and recurrence interval was inconsistent among study sites. For example, flows with a 2-year recurrence interval produced 2 feet of local scour at two sites, whereas a flow with a recurrence interval of 50 years produced only 0.5 feet of local scour at another site. Local-scour depth increased with water depth, stream velocity, and Reynolds number but did not correlate well with bed-material size, Froude number, pier geometry, friction slope, or several other hydraulic and basin characteristics.
NASA Astrophysics Data System (ADS)
Ito, T.; Funato, A.; Tamagawa, T.; Tezuka, K.; Yabe, Y.; Abe, S.; Ishida, A.; Ogasawara, H.
2017-12-01
When rock is cored at depth by drilling, anisotropic expansion occurs with the relief of anisotropic rock stresses, resulting in a sinusoidal variation of core diameter with a period of 180 deg. in the core roll angle. The circumferential variation of core diameter is given theoretically as a function of rock stresses. These findings lead to various approaches for estimating rock stress from the circumferential variation of core diameter measured after core retrieval. In the simplest case, when only a single core sample is available, the difference between the maximum and minimum components of rock stress in a plane perpendicular to the drilled hole can be estimated from the maximum and minimum core diameters (see the details in Funato and Ito, IJRMMS, 2017). The advantages of this method include (i) a much easier measurement operation than other in-situ or laboratory estimation methods, and (ii) applicability in high-stress environments where hydro-fracturing stress measurements would require packer or pumping-system pressures higher than their tolerance levels. We have successfully tested the method at deep seismogenic zones in South African gold mines, and we are going to apply it to boreholes collared at 3 km depth and intersecting a M5.5 rupture plane several hundred meters below the mine workings in the ICDP project "Drilling into Seismogenic zones of M2.0 - M5.5 earthquakes in deep South African gold mines" (DSeis) (e.g., http://www.icdp-online.org/projects/world/africa/orkney-s-africa/details/). If several core samples with different orientations are available, all three principal components of the 3D rock stress can be estimated. To realize this, we need several boreholes drilled in different directions in a rock mass where the stress field is considered to be uniform. It is common practice to drill boreholes in different directions from a mine gallery. Even in a deep borehole drilled vertically from the ground surface, a downhole rotary sidewall coring tool allows us to take core samples with different orientations at depths of interest from the sidewall of the vertically drilled borehole. The theoretical relationship between core expansion and rock stress has been verified through the examination of core samples prepared in laboratory experiments and of retrieved field cores.
LPM effect and primary energy estimations
NASA Technical Reports Server (NTRS)
Bourdeau, M. F.; Capdevielle, J. N.
1985-01-01
The distortion of electron cascade development under the LPM effect is now accepted; it consists of an increase in the depth of shower origin and of shower maximum T_max, a decrease in the number of particles at maximum N_max, and results in a flattening and widening of the cascade transition curve. Connected with the influence of multiple Coulomb scattering on basic electromagnetic processes (bremsstrahlung, pair production), this effect appears at high energy with a threshold dependent on the density of the medium (more than 10 TeV for lead, more than 10^6 TeV in air). Consequently, the electromagnetic components of hadron-induced showers in lead and of EAS in air, calculated for the same hadronic cascades in the different alternatives (including or not the LPM effect), are examined here.
Methods and Systems for Characterization of an Anomaly Using Infrared Flash Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M. (Inventor)
2013-01-01
A method for characterizing an anomaly in a material comprises (a) extracting contrast data; (b) measuring a contrast evolution; (c) filtering the contrast evolution; (d) measuring a peak amplitude of the contrast evolution; (e) determining a diameter and a depth of the anomaly; and (f) repeating the step of determining the diameter and the depth of the anomaly until a change in the estimate of the depth is less than a set value. The step of determining the diameter and the depth of the anomaly comprises estimating the depth using a diameter constant C_D equal to one for the first iteration of determining the diameter and the depth; estimating the diameter; and comparing the estimate of the depth of the anomaly after each iteration of estimating to the prior estimate of the depth to calculate the change in the estimate of the depth of the anomaly.
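The alternating estimation loop described here can be sketched as a simple fixed-point iteration; the update functions below are hypothetical stand-ins (not the patented formulas), meant only to show the structure of iterating depth and diameter until the depth estimate converges.

```python
# Sketch: alternating depth/diameter estimation until the depth estimate converges.
# The update functions are hypothetical placeholders, not the method's actual formulas.

def estimate_depth(peak_contrast, peak_time, diameter_constant):
    """Placeholder: deeper anomalies -> later, weaker contrast peaks (toy model)."""
    return diameter_constant * (peak_time ** 0.5) / (1.0 + peak_contrast)

def estimate_diameter(peak_contrast, depth):
    """Placeholder: larger anomalies -> stronger contrast at a given depth (toy model)."""
    return 2.0 * peak_contrast * depth

def characterize(peak_contrast, peak_time, tol=1e-4, max_iter=50):
    diameter_constant = 1.0          # C_D = 1 on the first iteration, per the text
    depth = estimate_depth(peak_contrast, peak_time, diameter_constant)
    diameter = estimate_diameter(peak_contrast, depth)
    for _ in range(max_iter):
        diameter_constant = 1.0 + 0.1 * diameter     # placeholder dependence on diameter
        new_depth = estimate_depth(peak_contrast, peak_time, diameter_constant)
        if abs(new_depth - depth) < tol:             # stop when the depth change is small
            return new_depth, diameter
        depth = new_depth
        diameter = estimate_diameter(peak_contrast, depth)
    return depth, diameter

print(characterize(peak_contrast=0.8, peak_time=2.5))
```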
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Xiao, Yong; Cheng, Xinyi; Li, Deng; Wang, Liwei
2016-02-01
For the continuous crystal-based positron emission tomography (PET) detector built in our lab, a maximum likelihood algorithm adapted for implementation on a field programmable gate array (FPGA) is proposed to estimate the three-dimensional (3D) coordinate of the interaction position from the single-ended detected scintillation light response. The row-sum and column-sum readout scheme organizes the 64 channels of the photomultiplier (PMT) into eight row signals and eight column signals to be read out for X- and Y-coordinate estimation independently. Using reference events irradiated at a known oblique angle, the probability density function (PDF) for each depth-of-interaction (DOI) segment is generated, by which the reference events from perpendicular irradiation are assigned to DOI segments for generating the PDFs for X and Y estimation in each DOI layer. Evaluated with experimental data, the algorithm achieves an average X resolution of 1.69 mm along the central X-axis and a DOI resolution of 3.70 mm over the whole thickness (0-10 mm) of the crystal. The performance improvements from the 2D estimation to the 3D algorithm are also presented. Benefiting from the abundant resources of the FPGA and a hierarchical storage arrangement, the whole algorithm can be implemented in a middle-scale FPGA. With a parallel, pipelined structure, the 3D position estimator on the FPGA can achieve a processing throughput of 15 M events/s, which is sufficient for the requirements of real-time PET imaging.
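The estimator described here amounts to choosing, for each event, the (X, Y, DOI) cell whose stored response PDFs make the measured row and column sums most likely; a software sketch of that argmax over a precomputed lookup table is below, with the table shapes, Gaussian likelihood model, and signal values as assumptions rather than the detector's calibration.

```python
# Sketch: maximum likelihood (X, Y, DOI) lookup from row/column sums, assuming
# precomputed per-position mean responses and a simple Gaussian likelihood.
import numpy as np

NX, NY, NDOI = 20, 20, 4                 # discretized position grid (assumed)
rng = np.random.default_rng(3)

# Placeholder calibration tables: expected 8 row sums and 8 column sums per cell.
mean_rows = rng.random((NX, NY, NDOI, 8))
mean_cols = rng.random((NX, NY, NDOI, 8))
sigma = 0.05                             # assumed channel noise (same units as sums)

def estimate_position(row_sums, col_sums):
    """Return the (ix, iy, idoi) grid indices maximizing the Gaussian log-likelihood."""
    ll = -np.sum((mean_rows - row_sums) ** 2, axis=-1) / (2 * sigma ** 2)
    ll += -np.sum((mean_cols - col_sums) ** 2, axis=-1) / (2 * sigma ** 2)
    return np.unravel_index(np.argmax(ll), ll.shape)

# One toy event drawn from a known cell, to check the lookup recovers it.
true_cell = (5, 12, 2)
event_rows = mean_rows[true_cell] + sigma * rng.standard_normal(8)
event_cols = mean_cols[true_cell] + sigma * rng.standard_normal(8)
print("true:", true_cell, "estimated:", estimate_position(event_rows, event_cols))
```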
Calculating maximum frost depths at Mn/ROAD : winters 1993-94, 1994-95 and 1995-96
DOT National Transportation Integrated Search
1997-03-01
This effort involved calculating maximum frost penetration depths for each of the 40 test cells at Mn/ROAD, the Minnesota Department of Transportation's pavement testing facility, for the 1993-94, 1994-95, and 1995-96 winters. The report compares res...
Maximum rooting depth of vegetation types at the global scale.
Canadell, J; Jackson, R B; Ehleringer, J B; Mooney, H A; Sala, O E; Schulze, E-D
1996-12-01
The depth at which plants are able to grow roots has important implications for the whole ecosystem hydrological balance, as well as for carbon and nutrient cycling. Here we summarize what we know about the maximum rooting depth of species belonging to the major terrestrial biomes. We found 290 observations of maximum rooting depth in the literature which covered 253 woody and herbaceous species. Maximum rooting depth ranged from 0.3 m for some tundra species to 68 m for Boscia albitrunca in the central Kalahari; 194 species had roots at least 2 m deep, 50 species had roots at a depth of 5 m or more, and 22 species had roots as deep as 10 m or more. The average for the globe was 4.6±0.5 m. Maximum rooting depth by biome was 2.0±0.3 m for boreal forest, 2.1±0.2 m for cropland, 9.5±2.4 m for desert, 5.2±0.8 m for sclerophyllous shrubland and forest, 3.9±0.4 m for temperate coniferous forest, 2.9±0.2 m for temperate deciduous forest, 2.6±0.2 m for temperate grassland, 3.7±0.5 m for tropical deciduous forest, 7.3±2.8 m for tropical evergreen forest, 15.0±5.4 m for tropical grassland/savanna, and 0.5±0.1 m for tundra. Grouping all the species across biomes (except croplands) by three basic functional groups: trees, shrubs, and herbaceous plants, the maximum rooting depth was 7.0±1.2 m for trees, 5.1±0.8 m for shrubs, and 2.6±0.1 m for herbaceous plants. These data show that deep root habits are quite common in woody and herbaceous species across most of the terrestrial biomes, far deeper than the traditional view has held up to now. This finding has important implications for a better understanding of ecosystem function and its application in developing ecosystem models.
Arsenic in ground water in selected parts of southwestern Ohio, 2002-03
Thomas, Mary Ann; Schumann, Thomas L.; Pletsch, Bruce A.
2005-01-01
Arsenic concentrations were measured in 57 domestic wells in Preble, Miami, and Shelby Counties, in southwestern Ohio. The median arsenic concentration was 7.1 µg/L (micrograms per liter), and the maximum was 67.6 µg/L. Thirty-seven percent of samples had arsenic concentrations greater than the U.S. Environmental Protection Agency drinking-water standard of 10 µg/L. Elevated arsenic concentrations (>10 µg/L) were detected over the entire range of depths sampled (42 to 221 feet) and in each of three aquifer types, Silurian carbonate bedrock, glacial buried-valley deposits, and glacial till with interbedded sand and gravel. One factor common to all samples with elevated arsenic concentrations was that iron concentrations were greater than 1,000 µg/L. The observed correlations of arsenic with iron and alkalinity are consistent with the hypothesis that arsenic was released from iron oxides under reducing conditions (by reductive dissolution or reductive desorption). Comparisons among the three aquifer types revealed some differences in arsenic occurrence. For buried-valley deposits, the median arsenic concentration was 4.6 µg/L, and the maximum was 67.6 µg/L. There was no correlation between arsenic concentrations and depth; the highest concentrations were at intermediate depths (about 100 feet). Half of the buried-valley samples were estimated to be methanic. Most of the samples with elevated arsenic concentrations also had elevated concentrations of dissolved organic carbon and ammonia. For carbonate bedrock, the median arsenic concentration was 8.0 µg/L, and the maximum was 30.7 µg/L. Arsenic concentrations increased with depth. Elevated arsenic concentrations were detected in iron- or sulfate-reducing samples. Arsenic was significantly correlated with molybdenum, strontium, fluoride, and silica, which are components of naturally occurring minerals. For glacial till with interbedded sand and gravel, half of the samples had elevated arsenic concentrations. The median was 11.4 µg/L, and the maximum was 27.6 µg/L. At shallow depths (<100 feet), this aquifer type had higher arsenic and iron concentrations than carbonate bedrock. It is not known whether these observed differences among aquifer types are related to variations in (1) arsenic content of the aquifer material, (2) organic carbon content of the aquifer material, (3) mechanisms of arsenic mobilization (or uptake), or (4) rates of arsenic mobilization (or uptake). A followup study that includes solid-phase analyses and geochemical modeling was begun in 2004 in northwestern Preble County.
Tseng, H.-Y.; Onstott, T.C.; Burruss, R.C.; Miller, D.S.
1996-01-01
Microbial populations have been found at depths of 2621-2804 m in a borehole near the center of the Triassic Taylorsville Basin, Virginia. To constrain possible scenarios for long-term survival in, or introduction of these microbial populations to, the deep subsurface, we attempted to refine models of the thermal and burial history of the basin by analyzing aqueous and gaseous fluid inclusions in calcite/quartz veins or cements in cuttings from the same borehole. These results are complemented by fission-track data from adjacent boreholes. Homogenization temperatures of secondary aqueous fluid inclusions range from 120° to 210°C between 2027- and 3069-m depth, with the highest temperatures in the deepest samples. The salinities of these aqueous inclusions range from 0 to ~4.3 eq wt% NaCl. Four samples from depths between 2413 and 2931 m contain both two-phase aqueous and one-phase methane-rich inclusions in healed microcracks. The relative CH4 and CO2 contents of these gaseous inclusions were estimated by microthermometry and laser Raman spectroscopy. If both types of inclusions in the 2931 m sample were trapped simultaneously, the density of the methane-rich inclusions calculated from the Peng-Robinson equation of state implies an entrapment pressure of 360 ± 20 bar at the homogenization temperature (162.5 ± 12.5°C) of the aqueous inclusions. This pressure falls between the hydrostatic and lithostatic pressures at the present burial depth of 2931 m. If we assume that the pressure regime was hydrostatic at the time of trapping, then the inclusions were trapped at 3.6 km in a thermal gradient of ~40°C/km. The high temperatures recorded by the secondary aqueous inclusions are consistent with the pervasive resetting of zircon and apatite fission-track dates. In order to fit the fission-track length distributions of the apatite data, however, a cooling rate of 1-2°C/Ma following the thermal maximum is required. To match the integrated dates, the thermal maximum would have occurred at ~200 Ma. The timing of the maximum temperature is consistent with rapid burial of the Taylorsville Basin to twice its present-day depth and thermal re-equilibration with a 40°C/km geothermal gradient, followed by slow exhumation. The results may imply that the microorganisms did not survive in situ, but were transported from cooler portions of the basin sometime after maximum burial and heating.
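For readers who want to reproduce the style of calculation, the sketch below evaluates the Peng-Robinson equation of state for pure methane at an assumed molar volume and the quoted homogenization temperature; the critical constants are textbook values and the molar volume is illustrative, not the density actually measured in the study.

# Minimal Peng-Robinson sketch: pressure of pure CH4 at a given molar volume
# and temperature. Critical constants are textbook values; the molar volume
# below is illustrative, not the inclusion density measured in the study.
import math

R = 83.14                              # cm^3 bar / (mol K)
TC, PC, OMEGA = 190.6, 45.99, 0.011    # CH4 critical T (K), P (bar), acentric factor

def peng_robinson_pressure(T, Vm):
    """Pressure (bar) of pure methane at temperature T (K) and molar volume Vm (cm^3/mol)."""
    a = 0.45724 * R**2 * TC**2 / PC
    b = 0.07780 * R * TC / PC
    kappa = 0.37464 + 1.54226 * OMEGA - 0.26992 * OMEGA**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / TC)))**2
    return R * T / (Vm - b) - a * alpha / (Vm**2 + 2.0 * b * Vm - b**2)

# Hypothetical example: a molar volume of ~100 cm^3/mol at the aqueous-inclusion
# homogenization temperature of 162.5 C (435.65 K) gives a pressure near 390 bar.
print(peng_robinson_pressure(435.65, 100.0))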
Estimates of chemical compaction and maximum burial depth from bedding parallel stylolites
NASA Astrophysics Data System (ADS)
Gasparrini, Marta; Beaudoin, Nicolas; Lacombe, Olivier; David, Marie-Eleonore; Youssef, Souhail; Koehn, Daniel
2017-04-01
Chemical compaction is a diagenetic process affecting sedimentary series during burial that develops rough dissolution surfaces named Bedding Parallel Stylolites (BPS). BPS are related to the dissolution of important rock volumes and can lead to porosity reduction around them due to post-dissolution cementation. Our understanding of the effect of chemical compaction on rock volume and porosity evolution during basin burial is, however, still too limited to be fully taken into account in basin models and thermal or fluid-flow simulations. This contribution presents a novel, multidisciplinary approach to quantify chemical compaction and to estimate the maximum paleodepth of burial, applied to the Dogger carbonate reservoirs of the Paris Basin sub-surface. This succession experienced a relatively simple burial history (nearly continuous burial from the Upper Jurassic to the Upper Cretaceous, followed by a main uplift phase) and mainly underwent normal overburden (inducing development of BPS), escaping major tectonic stress episodes. We considered one core from the depocentre and one from the eastern margin of the basin in the same stratigraphic interval (upper Bathonian - lower Callovian; restricted lagoonal setting), and analysed the macro- and micro-facies to distinguish five main depositional environments. The type and abundance of BPS were continuously recorded along the logs and treated statistically to obtain preliminary rules relating BPS occurrence to the contrasting facies and burial histories. The treatment of high-resolution 2D images allowed the identification and separation of the BPS in order to evaluate the total stylolitization density and the insoluble thickness as an indirect measure of the dissolved volume, with respect to the morphology of the BPS considered. Based on the morphology of the BPS roughness, we used a roughness signal analysis method to reconstruct the vertical paleo-stress (paleo-depth) recorded by the BPS during chemical compaction. The comparison between the amount of compaction and the dissolved volume as a function of the macro- and micro-facies, together with estimates of the maximum paleodepth of burial, deepens our knowledge of the factors controlling BPS development, the total thickness of carbonate dissolved and the occurrence of induced cementation in sedimentary basins.
Dwyer, Gary S.; Cronin, Thomas M.; Baker, Paul A.; Rodriguez-Lazaro, Julio
2000-01-01
We reconstructed three time series of last glacial-to-present deep-sea temperature from deep and intermediate water sediment cores from the western North Atlantic using Mg/Ca ratios of benthic ostracode shells. Although the Mg/Ca data show considerable variability (“scatter”) that is common to single-shell chemical analyses, comparisons between cores, between core top shells and modern bottom water temperatures (BWT), and comparison to other paleo-BWT proxies, among other factors, suggest that multiple-shell average Mg/Ca ratios provide reliable estimates of BWT history at these sites. The BWT records show not only glacial-to-interglacial variations but also indicate BWT changes during the deglacial and within the Holocene interglacial stage. At the deeper sites (4500- and 3400-m water depth), BWT decreased during the last glacial maximum (LGM), the late Holocene, and possibly during the Younger Dryas. Maximum deep-sea warming occurred during the latest deglacial and early Holocene, when BWT exceeded modern values by as much as 2.5°C. This warming was apparently most intense around 3000 m, the depth of the modern-day core of North Atlantic deep water (NADW). The BWT variations at the deeper water sites are consistent with changes in thermohaline circulation: warmer BWT signifies enhanced NADW influence relative to Antarctic bottom water (AABW). Thus maximum NADW production and associated heat flux likely occurred during the early Holocene and decreased abruptly around 6500 years B.P., a finding that is largely consistent with paleonutrient studies in the deep North Atlantic. BWT changes in intermediate waters (1000-m water depth) of the subtropical gyre roughly parallel the deep BWT variations including dramatic mid-Holocene cooling of around 4°C. Joint consideration of the Mg/Ca-based BWT estimates and benthic oxygen isotopes suggests that the cooling was accompanied by a decrease in salinity at this site. Subsequently, intermediate waters warmed to modern values that match those of the early Holocene maximum of ∼7°C. Intermediate water BWT changes must also be driven by changes in ocean circulation. These results thus provide independent evidence that supports the hypothesis that deep-ocean circulation is closely linked to climate change over a range of timescales regardless of the mean climate state. More generally, the results further demonstrate the potential of benthic Mg/Ca ratios as a tool for reconstructing past ocean and climate conditions.
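A minimal sketch of the Mg/Ca-to-temperature step described above, assuming a generic linear calibration; the slope, intercept and shell values are hypothetical placeholders rather than the ostracode calibration used in the study.

# Generic linear Mg/Ca palaeothermometry sketch. The calibration slope and
# intercept below are placeholders, not the ostracode calibration of the study;
# substitute the species-specific calibration before any real use.
def bwt_from_mg_ca(mg_ca_mmol_mol, slope=0.9, intercept=5.0):
    """Bottom-water temperature (deg C) from shell Mg/Ca (mmol/mol),
    assuming a linear calibration Mg/Ca = intercept + slope * BWT."""
    return (mg_ca_mmol_mol - intercept) / slope

# Averaging several single-shell analyses before conversion, as the study
# recommends, damps the shell-to-shell "scatter":
shells = [8.2, 9.1, 7.6, 8.8]                 # hypothetical Mg/Ca values
mean_ratio = sum(shells) / len(shells)
print(round(bwt_from_mg_ca(mean_ratio), 2))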
Legleiter, Carl J.; Kinzel, Paul J.; Overstreet, Brandon T.
2011-01-01
Remote sensing offers an efficient means of mapping bathymetry in river systems, but this approach has been applied primarily to clear-flowing, gravel bed streams. This study used field spectroscopy and radiative transfer modeling to assess the feasibility of spectrally based depth retrieval in a sand-bed river with a higher suspended sediment concentration (SSC) and greater water turbidity. Attenuation of light within the water column was characterized by measuring the amount of downwelling radiant energy at different depths and calculating a diffuse attenuation coefficient, Kd. Attenuation was strongest in blue and near-infrared bands due to scattering by suspended sediment and absorption by water, respectively. Even for red wavelengths with the lowest values of Kd, only a small fraction of the incident light propagated to the bed, restricting the range of depths amenable to remote sensing. Spectra recorded above the water surface were used to establish a strong, linear relationship (R2 = 0.949) between flow depth and a simple band ratio; even under moderately turbid conditions, depth remained the primary control on reflectance. Constraints on depth retrieval were examined via numerical modeling of radiative transfer within the atmosphere and water column. SSC and sensor radiometric resolution limited both the maximum detectable depth and the precision of image-derived depth estimates. Thus, although field spectra indicated that the bathymetry of turbid channels could be remotely mapped, model results implied that depth retrieval in sediment-laden rivers would be limited to shallow depths (on the order of 0.5 m) and subject to a significant degree of uncertainty.
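The band-ratio depth retrieval described above can be illustrated with a small regression sketch; the band names, reflectances and depths are invented for illustration, and only the functional form (depth linear in the log band ratio) follows the abstract.

# Sketch of spectrally based depth retrieval via a simple band ratio:
# regress field-measured depth against X = ln(R(green)/R(red)) and invert.
# Reflectances and depths below are made-up illustrative numbers.
import numpy as np

depth = np.array([0.10, 0.20, 0.30, 0.40, 0.50])          # m
r_green = np.array([0.080, 0.072, 0.065, 0.059, 0.054])   # above-water reflectance
r_red = np.array([0.060, 0.048, 0.039, 0.032, 0.026])

x = np.log(r_green / r_red)
slope, intercept = np.polyfit(x, depth, 1)                 # d = intercept + slope * X
r2 = np.corrcoef(x, depth)[0, 1] ** 2

# Apply the calibration to a new pixel's band ratio:
x_new = np.log(0.070 / 0.045)
print(intercept + slope * x_new, r2)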
NASA Astrophysics Data System (ADS)
Dushkin, A. V.; Kasatkina, T. I.; Novoseltsev, V. I.; Ivanov, S. V.
2018-03-01
The article proposes a forecasting method that, given specified values of entropy and of first- and second-kind error levels, determines the allowable time horizon for forecasting the development of the characteristic parameters of a complex information system. The main feature of the method is that changes in the characteristic parameters of the information system's development are expressed as increments in the ratios of its entropy. When a predetermined value of the prediction error ratio, that is, of the system entropy, is reached, the characteristic parameters of the system and the prediction depth in time are estimated. The resulting values of the characteristics are then optimal, since at that moment the system possesses the best ratio of entropy as a measure of the degree of organization and orderliness of its structure. To construct a method for estimating the prediction depth, it is expedient to use the principle of maximum entropy.
Antecedent wetness conditions based on ERS scatterometer data
NASA Astrophysics Data System (ADS)
Brocca, L.; Melone, F.; Moramarco, T.; Morbidelli, R.
2009-01-01
Soil moisture is widely recognized as a key parameter in environmental processes, mainly through its role in partitioning rainfall into runoff and infiltration. For storm rainfall-runoff modeling, the estimation of the antecedent wetness conditions (AWC) is therefore one of the most important aspects. In this context, this study investigates the potential of the scatterometer on board the ERS satellites for assessing wetness conditions in three Tiber sub-catchments (Central Italy), one of which includes an experimental area for soil moisture monitoring. The satellite soil moisture data are taken from the ERS/METOP soil moisture archive. First, the scatterometer-derived soil wetness index (SWI) data are compared with two on-site soil moisture data sets acquired by different methodologies over areas ranging in extent from 0.01 km2 to ~60 km2. Moreover, the reliability of the SWI for estimating the AWC at the catchment scale is investigated by considering the relationship between the SWI and the soil potential maximum retention parameter, S, of the Soil Conservation Service Curve Number (SCS-CN) method for rainfall abstraction. Several flood events that occurred from 1992 to 2005 are selected for this purpose. Specifically, the performance of the SWI for estimating S is compared with two antecedent precipitation indices (API) and one base flow index (BFI). The S values obtained from the observed direct runoff volume and rainfall depth are used as a benchmark. Results show the high reliability of the SWI for estimating wetness conditions both at the plot and at the catchment scale, despite the complex orography of the investigated areas. As far as the comparison with the on-site soil moisture data sets is concerned, the SWI is found to be quite reliable in representing soil moisture at a layer depth of 15 cm, with a mean correlation coefficient of 0.81. The variations of the characteristic time length parameter depend, as expected, on soil type, with values in accordance with previous studies. In terms of AWC assessment at the catchment scale, based on the selected flood events, the SWI is found to be highly correlated with the observed maximum potential retention of the SCS-CN method, with a correlation coefficient R equal to -0.90. Moreover, in representing the AWC of the three investigated catchments, the SWI outperformed both API indices, which poorly represent AWC, and the BFI. Finally, the classical SCS-CN method applied for direct runoff depth estimation, with S assessed from the SWI, provided good performance, with a percentage error not exceeding ~25% for 80% of the investigated rainfall-runoff events.
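A brief sketch of the SCS-CN relationship that underlies the S benchmark above, assuming the standard initial-abstraction ratio Ia = 0.2S; the event values are hypothetical, not one of the Tiber floods.

# SCS-CN sketch: the curve-number runoff equation and its inversion for the
# maximum potential retention S from an observed event (P, Q). Values in mm;
# the event below is hypothetical.
import math

def runoff_scs_cn(P, S, ia_ratio=0.2):
    """Direct runoff depth Q (mm) from rainfall P (mm) with retention S (mm)."""
    ia = ia_ratio * S
    return 0.0 if P <= ia else (P - ia) ** 2 / (P - ia + S)

def retention_from_event(P, Q):
    """Invert the standard form (Ia = 0.2 S) for S given observed P and Q."""
    return 5.0 * (P + 2.0 * Q - math.sqrt(4.0 * Q ** 2 + 5.0 * P * Q))

S = retention_from_event(P=60.0, Q=18.0)
print(S, runoff_scs_cn(60.0, S))   # round-trip check: recovers Q = 18 mm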
NASA Technical Reports Server (NTRS)
Glushkov, A. V.; Efimov, N. N.; Makarov, I. T.; Pravdin, M. I.; Dedenko, L. G.
1985-01-01
A method for determining the depth of shower maximum (X_m) that is independent of the extensive air shower (EAS) development model is considered. The X_m values obtained from various EAS parameters are in good agreement.
The fish community of a small impoundment in upstate New York
McCoy, C. Mead; Madenjian, Charles P.; Adams, Jean V.; Harman, Willard N.
2001-01-01
Moe Pond is a dimictic impoundment with a surface area of 15.6 ha, a mean depth of 1.8 m, and an unexploited fish community of only two species: brown bullhead (Ameiurus nebulosus) and golden shiner (Notemigonus crysoleucas). The age-1 and older brown bullhead population was estimated at 4,057 individuals, based on the Schnabel capture-recapture method of population estimation. Density and biomass were estimated at 260 individuals/ha and 13 kg/ha, respectively. The annual survival rate of age-2 through age-5 brown bullheads was estimated at 48%. The golden shiner length-frequency distribution was unimodal, with a modal length of 80 mm and a maximum total length of 115 mm. The golden shiner population estimate was 7,154 individuals, based on seven beach seine haul replicate samples; density and biomass were 686 shiners/ha and 5 kg/ha, respectively. This study provides an information baseline that may be useful in understanding food web interactions and whole-pond nutrient flux.
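The Schnabel multiple-census estimator mentioned above can be sketched as follows; the catch and recapture numbers are invented for illustration and are not the Moe Pond data.

# Schnabel multiple-census capture-recapture sketch (Chapman-modified form):
# N ~ sum(C_t * M_t) / (sum(R_t) + 1). The sampling occasions below are
# hypothetical.
def schnabel(catches, recaptures):
    """catches[t]    = number of fish caught on occasion t
       recaptures[t] = number of those already carrying a mark
       Unmarked fish are assumed to be marked and released each occasion."""
    marked_at_large = 0
    numerator = 0
    for caught, recaught in zip(catches, recaptures):
        numerator += caught * marked_at_large
        marked_at_large += caught - recaught   # newly marked fish join the pool
    return numerator / (sum(recaptures) + 1)

print(round(schnabel(catches=[120, 150, 110, 140], recaptures=[0, 5, 8, 12])))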
GPS constraints on M 7-8 earthquake recurrence times for the New Madrid seismic zone
Stuart, W.D.
2001-01-01
Newman et al. (1999) estimate the time interval between the 1811-1812 earthquake sequence near New Madrid, Missouri and a future similar sequence to be at least 2,500 years, an interval significantly longer than other recently published estimates. To calculate the recurrence time, they assume that slip on a vertical half-plane at depth contributes to the current interseismic motion of GPS benchmarks. Compared to other plausible fault models, the half-plane model gives nearly the maximum rate of ground motion for the same interseismic slip rate. Alternative models with smaller interseismic fault slip area can satisfy the present GPS data by having higher slip rate and thus can have earthquake recurrence times much less than 2,500 years.
Evolution of CO2 in lakes Monoun and Nyos, Cameroon, before and during controlled degassing
Kusakabe, M.; Ohba, T.; Issa,; Yoshida, Y.; Satake, H.; Ohizumi, T.; Evans, William C.; Tanyileke, G.; Kling, G.W.
2008-01-01
The evolution of CO2 in Lakes Monoun and Nyos (Cameroon) before and during controlled degassing is described using results of regular monitoring obtained during the last 21 years. The CO2(aq) profiles soon after the limnic eruptions were estimated for Lakes Monoun and Nyos using the CTD data obtained in October and November 1986, respectively. Based on the CO2(aq) profiles through time, the CO2 content and its change over time were calculated for both lakes. The CO2 accumulation rate calculated from the pre-degassing data was constant after the limnic eruption at Lake Nyos (1986-2001), whereas at Lake Monoun the rate appeared initially high (1986-1996) but later slowed (1996-2003). The CO2 concentration at 58 m depth in Lake Monoun in January 2003 was very close to saturation due to the CO2 accumulation. This situation is suggestive of a mechanism for the limnic eruption, because an eruption may then take place spontaneously without an external trigger. The CO2 content of the lakes decreased significantly after controlled degassing started in March 2001 at Lake Nyos and in February 2003 at Lake Monoun. The current content is lower than the content estimated soon after the limnic eruption at both lakes. At Monoun the degassing rate increased greatly after February 2006 due to an increase in the number of degassing pipes and deepening of the pipe intake depth. The current CO2 content is ~40% of the maximum content attained just before the degassing started. At current degassing rates the lower chemocline will subside to the degassing pipe intake depth of 93 m in about one year. After this depth is reached, the gas removal rate will progressively decline because water of lower CO2(aq) concentration will be tapped by the pipes. To keep the CO2 content of Lake Monoun as small as possible, it is recommended to set up a new, simple device that sends deep water to the surface, since natural recharge of CO2 will continue. Controlled degassing at Lake Nyos since 2001 has also reduced the CO2 content. It is currently slightly below the level estimated after the limnic eruption in 1986. However, the current CO2 content still amounts to 80% of the maximum level of 14.8 gigamoles observed in January 2001. The depth of the lower chemocline may reach the pipe intake depth of 203 m within a few years. After this situation is reached, the degassing rate with the current system will progressively decline, and it would take decades to remove the majority of dissolved gases even if the degassing system keeps working continuously. Additional degassing pipes must be installed to speed up gas removal from Lake Nyos in order to make the area safer for local populations. Copyright © 2008 by The Geochemical Society of Japan.
NASA Astrophysics Data System (ADS)
Legowo, B.; Darsono; Putra, A. G.; Kurniawan, M. F. R.
2018-03-01
Geoelectrical surveying is one of the geophysical methods used to characterize rocks in early-stage exploration. A geoelectrical survey using the Wenner-Schlumberger configuration was carried out to delineate the aquifer at Pondok Pesantren Darussallam. Based on the geological map of Grobogan, the Kradenan area consists of alluvium. Data were acquired along three 500-meter lines with an electrode spacing of 25 meters. The data were processed using Res2Dinv, and the 2D inversion reaches a maximum depth of 78.2 meters. The results of this research show that there is an aquifer at depths of 30-50 meters. Based on the resistivity values, 1-10 ohm.m is identified as clay, 10-100 ohm.m as sandstone indicative of the aquifer, and 100-1338.9 ohm.m as limestone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paling, S.; Hillas, A.M.; Berley, D.
1997-07-01
An array of six wide angle Cerenkov detectors was constructed amongst the scintillator and muon detectors of the CYGNUS II array at Los Alamos National Laboratory to investigate cosmic ray composition in the PeV region through measurements of the shape of Cerenkov lateral distributions. Data were collected during clear, moonless nights over three observing periods in 1995. Estimates of depths of shower maxima determined from the recorded Cerenkov lateral distributions align well with existing results at higher energies and suggest a mixed to heavy composition in the PeV region with no significant variation observed around the knee. The accuracy of composition determination is limited by uncertainties in the expected levels of depth of maximum predicted using different Monte-Carlo shower simulation models.
Salp distribution and grazing in a saline intrusion off NW Spain
NASA Astrophysics Data System (ADS)
Huskin, Iñaki; Elices, Ma. José; Anadón, Ricardo
2003-07-01
Salp distribution and grazing were studied along three transects (19 stations) and during a Lagrangian phase (7 stations) off the Galician coast (NW Spain) in November 1999 during the GIGOVI 99 cruise. A poleward saline intrusion was detected at the shelf-break, reaching salinity values above 35.90 psu at 100-m depth. The salp community was dominated by Salpa fusiformis, although Cyclosalpa bakeri, Thalia democratica and Iasis zonaria were also found in the study area. Total salp abundance ranged from 4 to 4500 ind m-2, representing biomass values between 0.2 and 2750 mg C m-2. Maximum densities were located in the frontal area separating the saline body from coastal waters. S. fusiformis pigment ingestion was estimated using the gut fluorescence method. Gut contents were linearly related to salp body size. Total pigment ingestion ranged from 0.001 to 15 mg Chl-a m-2 d-1, with maximum values at the coastal edge of the saline body. The estimated ingestion translates into an average daily grazing impact of 7% of the chlorophyll standing stock, ranging from <1% to 77%.
Reconstructing the Paleo-Limnologic Evolution of Lake Bonney, Antarctica using Dissolved Noble Gases
NASA Astrophysics Data System (ADS)
Warrier, R. B.; Castro, M.; Hall, C. M.; Kenig, F. P.; Doran, P. T.
2013-12-01
The McMurdo Dry Valleys, situated on the western coast of the Ross Sea, are the largest ice-free region in Antarctica. Lake Bonney (LB), located in western Taylor Valley, one of the main east-west dry valleys, has two lobes, East Lake Bonney (ELB) and West Lake Bonney (WLB), which are separated by a narrow strait with a ~13 m deep sill. Because the evolution of LB is ultimately controlled by climate, and because there are no reliable millennial-scale continental records of climate other than the Taylor Dome ice core in this region of Antarctica, a number of studies have reconstructed the paleolimnologic history of LB using diverse tools in order to reconstruct the history of the lake and thus the climate evolution in this area. However, many open questions remain with respect to the paleo-limnologic evolution of LB. To further constrain the evolution of LB, we analyzed 23 lake samples collected between 5 and ~40 m depth from both ELB and WLB for He and Ar concentrations as well as isotopic ratios. Preliminary results show that samples present He excesses up to two and three orders of magnitude with respect to air-saturated water (ASW) in ELB and WLB, respectively. While He excesses generally increase with depth in WLB, suggesting accumulation of 4He over time, a similar correlation with depth is not observed for ELB samples, indicating a more complex evolutionary history in this lobe. Measured R/Ra He isotopic ratios, where Ra is the atmospheric 3He/4He ratio, vary between 0.20-0.61 and 0.16-0.22 for ELB and WLB, respectively, and indicate that the observed He excesses are predominantly crustal in origin, with a small (<~5%) mantle contribution. In contrast, measured 40Ar/36Ar ratios indicate that Ar concentrations at all depths in ELB are atmospheric in origin, while WLB samples below the sill indicate addition of excess 40Ar, likely of radiogenic origin. Preliminary estimates of water residence times based on measured He excesses and crustal production ratios from basement rocks point to maximum water ages of ~5 kyrs and ~500 kyrs for the deep waters of ELB and WLB, respectively. Similarly, a maximum residence time of ~500 kyrs was obtained for bottom waters of WLB assuming a crustal origin for the observed excess 40Ar. These preliminary age results are maximum estimates and assume that all He and Ar excesses are entirely of crustal origin. Our preliminary results indicate that the WLB waters have been isolated from the atmosphere for a much longer period of time than the ELB waters and point to a very different evolution of the two lobes. In addition, these maximum WLB ages are much younger than previously thought (~1-5 Ma).
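An order-of-magnitude sketch of the residence-time reasoning above: the maximum age follows from dividing the 4He excess over air-saturated water by an assumed crustal accumulation rate. All values in the snippet are hypothetical placeholders, not the Lake Bonney measurements.

# Maximum-residence-time sketch: divide the measured 4He excess over
# air-saturated water (ASW) by an assumed in-situ crustal 4He accumulation
# rate. All numbers below are hypothetical placeholders.
def max_age_years(he_measured, he_asw, he_accumulation_rate):
    """he_measured, he_asw in cm^3 STP per g of water;
       he_accumulation_rate in cm^3 STP per g of water per year."""
    excess = he_measured - he_asw
    return excess / he_accumulation_rate

asw_4he = 4.8e-8        # approximate ASW 4He, cm^3 STP/g
sample_4he = 5.0e-6     # ~100x ASW, illustrative
rate = 1.0e-9           # assumed crustal accumulation, cm^3 STP/g/yr
print(f"{max_age_years(sample_4he, asw_4he, rate):.0f} yr")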
Technique for estimating depth of floods in Tennessee
Gamble, C.R.
1983-01-01
Estimates of flood depths are needed for design of roadways across flood plains and for other types of construction along streams. Equations for estimating flood depths in Tennessee were derived using data for 150 gaging stations. The equations are based on drainage basin size and can be used to estimate depths of the 10-year and 100-year floods for four hydrologic areas. A method also was developed for estimating depth of floods having recurrence intervals between 10 and 100 years. Standard errors range from 22 to 30 percent for the 10-year depth equations and from 23 to 30 percent for the 100-year depth equations. (USGS)
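A hedged sketch of the kind of drainage-area depth regression described above; the station areas, depths and resulting coefficients are invented, and the real equations differ by hydrologic area and recurrence interval.

# Regional depth-equation sketch in the spirit of drainage-area regressions:
# fit log10(depth) = log10(a) + b*log10(area) to gaged sites, then apply the
# relation at an ungaged site. Areas and depths below are invented.
import numpy as np

area_sq_mi = np.array([12.0, 45.0, 110.0, 320.0, 870.0])
depth_100yr_ft = np.array([6.1, 8.9, 11.4, 15.2, 20.8])

b, log_a = np.polyfit(np.log10(area_sq_mi), np.log10(depth_100yr_ft), 1)
a = 10 ** log_a

def depth_100yr(area):
    return a * area ** b        # d = a * A^b

print(round(depth_100yr(200.0), 1))   # estimated 100-year depth, feet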
NASA Astrophysics Data System (ADS)
Sadeghi-Goughari, M.; Mojra, A.; Sadeghi, S.
2016-02-01
Intraoperative Thermal Imaging (ITI) is a new minimally invasive diagnostic technique that can potentially locate the margins of a brain tumor in order to achieve maximum tumor resection with least morbidity. This study introduces a new approach to ITI based on artificial tactile sensing (ATS) technology in conjunction with artificial neural networks (ANN), and the feasibility and applicability of this method in the diagnosis and localization of brain tumors are investigated. In order to analyze the validity and reliability of the proposed method, two simulations were performed: (i) an in vitro experimental setup was designed and fabricated using a resistance heater embedded in an agar tissue phantom in order to simulate heat generation by a tumor in the brain tissue; and (ii) a case report of a patient with parafalcine meningioma was presented to simulate ITI in the neurosurgical procedure. In the case report, both brain and tumor geometries were constructed from MRI data, and the tumor temperature and depth of location were estimated. For the experimental tests, a novel assisted-surgery robot was developed to palpate the tissue phantom surface to measure temperature variations, and an ANN was trained to estimate the simulated tumor's power and depth. The results affirm that ITI-based ATS is a non-invasive method that can be useful to detect, localize and characterize brain tumors.
Xiao, Kun; Zou, Changchun; Xiang, Biao; Liu, Jieqiong
2013-01-01
A gas hydrate model and a free gas model are established, and two-phase theory (TPT) for numerical simulation of elastic wave velocity is adopted to investigate the unconsolidated deep-water sedimentary strata in the Shenhu area, South China Sea. The relationships between compressional wave (P wave) velocity and gas hydrate saturation, free gas saturation, and sediment porosity at site SH2 are studied, and the gas hydrate saturation of the research area is estimated with the gas hydrate model. In the depth range of 50 to 245 m below seafloor (mbsf), P wave velocity increases gradually as sediment porosity decreases; it increases gradually as gas hydrate saturation increases; and it decreases as free gas saturation increases. These trends are largely consistent with previous research results. In the depth range of 195 to 220 mbsf, the measured P wave velocity increases significantly relative to the P wave velocity modeled for water-saturated sediment, and this layer is determined to be rich in gas hydrate. The average gas hydrate saturation estimated from the TPT model is 23.2%, and the maximum saturation is 31.5%, which is basically in accordance with the simplified three-phase equation (STPE), effective medium theory (EMT), resistivity log (Rt), and chloride anomaly methods. PMID:23935407
Bathymetric Contour Maps for Lakes Surveyed in Iowa in 2006
Linhart, S.M.; Lund, K.D.
2008-01-01
The U.S. Geological Survey, in cooperation with the Iowa Department of Natural Resources, conducted bathymetric surveys on two lakes in Iowa during 2006 (Little Storm Lake and Silver Lake). The surveys were conducted to provide the Iowa Department of Natural Resources with information for the development of total maximum daily load limits, particularly for estimating sediment load and deposition rates. The bathymetric surveys can provide a baseline for future work on sediment loads and deposition rates for these lakes. Both of the lakes surveyed in 2006 are natural lakes. For Silver Lake, bathymetric data were collected using boat-mounted, differential global positioning system, echo depth-sounding equipment, and computer software. For Little Storm Lake, because of its shallow nature, bathymetric data were collected using manual depth measurements. Data were processed with commercial hydrographic software and exported into a geographic information system for mapping and calculating area and volume. Lake volumes were estimated to be 7,547,000 cubic feet (173 acre-feet) at Little Storm Lake and 126,724,000 cubic feet (2,910 acre-feet) at Silver Lake. Surface areas were estimated to be 4,110,000 square feet (94 acres) at Little Storm Lake and 27,957,000 square feet (640 acres) at Silver Lake.
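The area and volume book-keeping behind such a survey can be sketched on a toy depth grid, assuming depths have already been interpolated onto regular cells; the grid and cell size below are illustrative only.

# Sketch of bathymetric area/volume accounting: given a gridded depth raster
# (m) and a cell size, surface area is the count of wet cells times cell area
# and volume is the summed depth times cell area. Tiny illustrative grid only.
import numpy as np

depth_m = np.array([[0.0, 0.4, 0.6, 0.0],
                    [0.5, 1.2, 1.8, 0.7],
                    [0.3, 1.0, 1.5, 0.4]])   # 0 = dry cell
cell_size_m = 10.0                           # 10 m x 10 m cells

cell_area = cell_size_m ** 2
wet = depth_m > 0
surface_area_m2 = wet.sum() * cell_area
volume_m3 = depth_m[wet].sum() * cell_area

print(surface_area_m2, volume_m3)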
Long-Term Hydrologic Impacts of Controlled Drainage Using DRAINMOD
NASA Astrophysics Data System (ADS)
Saadat, S.; Bowling, L. C.; Frankenberger, J.
2017-12-01
Controlled drainage is a management strategy designed to mitigate water quality issues caused by subsurface drainage but it may increase surface ponding and runoff. To improve controlled drainage system management, a long-term and broader study is needed that goes beyond the experimental studies. Therefore, the goal of this study was to parametrize the DRAINMOD field-scale, hydrologic model for the Davis Purdue Agricultural Center located in Eastern Indiana and to predict the subsurface drain flow and surface runoff and ponding at this research site. The Green-Ampt equation was used to characterize the infiltration, and digital elevation models (DEMs) were used to estimate the maximum depressional storage as the surface ponding parameter inputs to DRAINMOD. Hydraulic conductivity was estimated using the Hooghoudt equation and the measured drain flow and water table depths. Other model inputs were either estimated or taken from the measurements. The DRAINMOD model was calibrated and validated by comparing model predictions of subsurface drainage and water table depths with field observations from 2012 to 2016. Simulations based on the DRAINMOD model can increase understanding of the environmental and hydrological effects over a broader temporal and spatial scale than is possible using field-scale data and this is useful for developing management recommendations for water resources at field and watershed scales.
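A small sketch of the Hooghoudt-equation step described above, assuming steady-state flow in a single homogeneous layer and a known equivalent depth; the numbers are hypothetical, not the Davis Purdue measurements.

# Hooghoudt-equation sketch: back out an effective lateral hydraulic
# conductivity K from observed steady drain flow and the midpoint water-table
# height. The equivalent depth d_e is assumed known; all values hypothetical.
def hooghoudt_k(q, L, m, d_e):
    """q  : drain discharge per unit area (cm/day)
       L  : drain spacing (cm)
       m  : water-table height above drain level at the midpoint (cm)
       d_e: Hooghoudt equivalent depth to the impermeable layer (cm)
       returns K (cm/day), assuming a single homogeneous layer."""
    return q * L ** 2 / (8.0 * d_e * m + 4.0 * m ** 2)

print(round(hooghoudt_k(q=0.5, L=2000.0, m=40.0, d_e=150.0), 2))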
Sesquinary reimpacts dominate surface characteristics on Phobos
NASA Astrophysics Data System (ADS)
Nayak, Michael
2018-01-01
We use topographic data to show that impact craters with pitted floor deposits are among the deepest on Mars. This is consistent with the interpretation of pitted materials as primary crater-fill impactite deposits emplaced during crater formation. Our database consists of 224 pitted material craters ranging in size from ~1 to 150 km in diameter. Our measurements are based on topographic data from the Mars Orbiter Laser Altimeter (MOLA) and the High-Resolution Stereo Camera (HRSC). We have used these craters to measure the relationship between crater diameter and the initial post-formation depth. Depth was measured as maximum rim-to-floor depth (dr), but we also report the depth measured using other definitions. The database was down-selected by refining or removing elevation measurements from "problematic" craters affected by processes and conditions that influenced their dr/D, such as pre-impact slopes/topography and later overprinting craters. We report maximum (deepest) and mean scaling relationships of dr = (0.347 ± 0.021)D^(0.537 ± 0.017) and dr = (0.323 ± 0.017)D^(0.538 ± 0.016), respectively. Our results suggest that significant variations between previously reported MOLA-based dr vs. D relationships may result from the inclusion of craters that: 1) are influenced by atypical processes (e.g., highly oblique impact), 2) are significantly degraded, 3) reside within high-strength regions, and 4) are transitional (partially collapsed). By taking such issues into consideration and only measuring craters with primary floor materials, we present the best estimate to date of a MOLA-based relationship of dr vs. D for the least-degraded complex craters on Mars. This can be applied to crater degradation studies and provides a useful constraint for models of complex crater formation.
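The scaling relationships quoted above have the power-law form dr = a D^b; the sketch below fits that form in log-log space to invented crater measurements, purely to illustrate the procedure.

# Depth-diameter scaling sketch: fit d_r = a * D^b in log-log space, the same
# functional form as the relationships quoted above. Crater measurements are
# invented for illustration.
import numpy as np

D_km = np.array([2.0, 5.0, 12.0, 30.0, 80.0, 150.0])
dr_km = np.array([0.48, 0.78, 1.25, 2.05, 3.45, 4.80])

b, log_a = np.polyfit(np.log10(D_km), np.log10(dr_km), 1)
a = 10 ** log_a
print(f"d_r = {a:.3f} * D^{b:.3f}")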
Sedimentation rates in the marshes of Sand Lake National Wildlife Refuge
Gleason, R.A.; Euliss, N.H.; Holmes, C.W.
2003-01-01
Impoundments located within river systems in the Northern Great Plains are vulnerable to sediment inputs because intensive agriculture in watersheds has increased soil erosion and sediment loads in rivers. At the request of the U.S. Fish and Wildlife Service (FWS), we evaluated the vertical accretion of sediment in the Mud Lake impoundment of Sand Lake National Wildlife Refuge (NWR), Brown County, South Dakota. The Mud Lake impoundment was created in 1936 by constructing a low-head dam across the James River. We collected sediment cores from the Mud Lake impoundment during August 2000 for determination of vertical accretion rates. Accretion rates were estimated using cesium-137 and lead-210 isotopic dating techniques to estimate sediment accretion over the past 100 years. Accretion rates were greatest near the dam (1.3 cm yr-1), with less accretion (0.2 cm yr-1) occurring in the upper reaches of Mud Lake. As expected, accretion was highest near the dam, where water velocities and greater water depth facilitate sediment deposition. Higher rates of sedimentation (accretion >2.0 cm yr-1) occurred during the 1990s, when river flows were especially high. Since 1959, sediment accretion has reduced the maximum pool depth of Mud Lake near the dam by 55 cm. Assuming that sediment accretion rates remain the same in the future, we project that Mud Lake will have a maximum pool depth of 77 and 51 cm by 2020 and 2040, respectively. Over this same time frame, water depths in the upper reaches of Mud Lake would be reduced to <2 cm. The projected future loss of water depth will severely limit the ability of managers to manipulate pool levels in Mud Lake to cycle vegetation and create interspersion of cover and water to meet current wildlife habitat management objectives. As predicted for major dams constructed on rivers throughout the world, Mud Lake will have a finite life span. Our data suggest that the functional life span of Mud Lake since construction will be <100 years. We anticipate that over the next 20 years, sediments entering Mud Lake will reduce water depths to the point that current wildlife management objectives cannot be achieved through customary water-level manipulations. Sedimentation impacts are not unique to the Sand Lake NWR. It is widely accepted that impoundments trap sediments, and shallow impoundments, such as those managed by the FWS, are especially vulnerable. Given the ecological impacts associated with loss of water depths, we recommend that managers begin evaluating the long-term wildlife management goals for the refuge relative to the associated costs and feasibility of options available to enhance and maximize the life span of existing impoundments, including upper watershed management.
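The pool-depth projections above follow from simple linear extrapolation of the accretion rate; the sketch below reproduces that arithmetic, with the year-2000 near-dam depth back-calculated from the reported projections and treated as an assumption.

# Linear projection of remaining pool depth under a constant accretion rate.
# The starting depth is an assumed value consistent with the reported 2020
# and 2040 projections, not a directly quoted measurement.
def projected_depth(depth_now_cm, accretion_cm_per_yr, years_ahead):
    return depth_now_cm - accretion_cm_per_yr * years_ahead

depth_2000 = 103.0          # assumed near-dam pool depth in 2000, cm
rate = 1.3                  # cm/yr accretion near the dam
for year in (2020, 2040):
    print(year, round(projected_depth(depth_2000, rate, year - 2000)))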
Estimating Stresses, Fault Friction and Fluid Pressure from Topography and Coseismic Slip Models
NASA Astrophysics Data System (ADS)
Styron, R. H.; Hetland, E. A.
2014-12-01
Stress is a first-order control on the deformation state of the earth. However, stress is notoriously hard to measure, and researchers typically only estimate the directions and relative magnitudes of principal stresses, with little quantification of the uncertainties or absolute magnitude. To improve upon this, we have developed methods to constrain the full stress tensor field in a region surrounding a fault, including tectonic, topographic, and lithostatic components, as well as static friction and pore fluid pressure on the fault. Our methods are based on elastic halfspace techniques for estimating topographic stresses from a DEM, and we use a Bayesian approach to estimate accumulated tectonic stress, fluid pressure, and friction from fault geometry and slip rake, assuming Mohr-Coulomb fault mechanics. The nature of the tectonic stress inversion is such that either the stress maximum or minimum is better constrained, depending on the topography and fault deformation style. Our results from the 2008 Wenchuan event yield shear stresses from topography up to 20 MPa (normal-sinistral shear sense) and topographic normal stresses up to 80 MPa on the faults; tectonic stress had to be large enough to overcome topography to produce the observed reverse-dextral slip. Maximum tectonic stress is constrained to be >0.3 * lithostatic stress (depth-increasing), with a most likely value around 0.8, trending 90-110°E. Minimum tectonic stress is about half of maximum. Static fault friction is constrained at 0.1-0.4, and fluid pressure at 0-0.6 * total pressure on the fault. Additionally, the patterns of topographic stress and slip suggest that topographic normal stress may limit fault slip once failure has occurred. Preliminary results from the 2013 Balochistan earthquake are similar, but yield stronger constraints on the upper limits of maximum tectonic stress, as well as tight constraints on the magnitude of minimum tectonic stress and stress orientation. Work in progress on the Wasatch fault suggests that maximum tectonic stress may also be able to be constrained, and that some of the shallow rupture segmentation may be due in part to localized topographic loading. Future directions of this work include regions where high relief influences fault kinematics (such as Tibet).
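The Mohr-Coulomb assumption underlying the inversion above can be written down compactly; the stresses, friction and pore-pressure ratio in the sketch are illustrative values within the ranges quoted, not results of the study.

# Mohr-Coulomb bookkeeping sketch: slip is admissible when resolved shear
# stress exceeds cohesion plus friction times the effective normal stress.
# All stresses below are illustrative, in MPa.
def coulomb_failure(shear, normal, friction, pore_pressure, cohesion=0.0):
    """Returns the Coulomb stress excess (MPa); >= 0 means failure is possible."""
    effective_normal = normal - pore_pressure
    return shear - (cohesion + friction * effective_normal)

# e.g. 55 MPa total resolved shear on a patch with 80 MPa normal stress,
# friction 0.3, and pore pressure at 0.4 * normal stress:
print(coulomb_failure(shear=55.0, normal=80.0, friction=0.3, pore_pressure=0.4 * 80.0))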
Assessing the impact of climate and land use changes on extreme floods in a large tropical catchment
NASA Astrophysics Data System (ADS)
Jothityangkoon, Chatchai; Hirunteeyakul, Chow; Boonrawd, Kowit; Sivapalan, Murugesu
2013-05-01
In the wake of the recent catastrophic floods in Thailand, there is considerable concern about the safety of large dams designed and built some 50 years ago. In this paper a distributed rainfall-runoff model appropriate for extreme flood conditions is used to generate revised estimates of the Probable Maximum Flood (PMF) for the Upper Ping River catchment (area 26,386 km2) in northern Thailand, upstream of location of the large Bhumipol Dam. The model has two components: a continuous water balance model based on a configuration of parameters estimated from climate, soil and vegetation data and a distributed flood routing model based on non-linear storage-discharge relationships of the river network under extreme flood conditions. The model is implemented under several alternative scenarios regarding the Probable Maximum Precipitation (PMP) estimates and is also used to estimate the potential effects of both climate change and land use and land cover changes on the extreme floods. These new estimates are compared against estimates using other hydrological models, including the application of the original prediction methods under current conditions. Model simulations and sensitivity analyses indicate that a reasonable Probable Maximum Flood (PMF) at the dam site is 6311 m3/s, which is only slightly higher than the original design flood of 6000 m3/s. As part of an uncertainty assessment, the estimated PMF is sensitive to the design method, input PMP, land use changes and the floodplain inundation effect. The increase of PMP depth by 5% can cause a 7.5% increase in PMF. Deforestation by 10%, 20%, 30% can result in PMF increases of 3.1%, 6.2%, 9.2%, respectively. The modest increase of the estimated PMF (to just 6311 m3/s) in spite of these changes is due to the factoring of the hydraulic effects of trees and buildings on the floodplain as the flood situation changes from normal floods to extreme floods, when over-bank flows may be the dominant flooding process, leading to a substantial reduction in the PMF estimates.
Coseismic slip distribution of the February 27, 2010 Mw 8.8 Maule, Chile earthquake
Pollitz, Fred F.; Brooks, Ben; Tong, Xiaopeng; Bevis, Michael G.; Foster, James H.; Burgmann, Roland
2011-01-01
Static offsets produced by the February 27, 2010 Mw = 8.8 Maule, Chile earthquake as measured by GPS and InSAR constrain coseismic slip along a section of the Andean megathrust of dimensions 650 km (in length) × 180 km (in width). GPS data have been collected from both campaign and continuous sites sampling both the near field and far field. ALOS/PALSAR data from several ascending and descending tracks constrain the near-field crustal deformation. Inversions of the geodetic data for distributed slip on the megathrust reveal a pronounced slip maximum of order 15 m at ∼15–25 km depth on the megathrust offshore Iloca, indicating that seismic slip was greatest north of the epicenter of the bilaterally propagating rupture. A secondary slip maximum appears at depth ∼25 km on the megathrust just west of Concepción. Coseismic slip is negligible below 35 km depth. Estimates of the seismic moment based on different datasets and modeling approaches vary from 1.8 to 2.6 × 10^22 N m. Our study is the first to model the static displacement field using a layered spherical Earth model, allowing us to incorporate both near-field and far-field static displacements in a consistent manner. The obtained seismic moment of 1.97 × 10^22 N m, corresponding to a moment magnitude of 8.8, is similar to that obtained by previous seismic and geodetic inversions.
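The conversion from the quoted geodetic moment to moment magnitude uses the standard relation Mw = (2/3)(log10 M0 - 9.1); a one-line check is shown below.

# Moment-magnitude arithmetic for the geodetic moment quoted above.
import math

def moment_magnitude(m0_newton_metres):
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

print(round(moment_magnitude(1.97e22), 2))   # ~8.8, as reported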
He, J; Gao, H; Xu, P; Yang, R
2015-12-01
Body weight, length, width and depth at two growth stages were observed for a total of 5015 individuals of the GIFT strain, and a pedigree including 5588 individuals from 104 sires and 162 dams was collected. Multivariate animal models and a random regression model were used to genetically analyse absolute and relative growth scales of these traits. On the absolute growth scale, the observed growth traits had moderate heritabilities ranging from 0.321 to 0.576, while pairwise ratios between body length, width and depth were lowly heritable, with a maximum heritability of only 0.146 for length/depth. All genetic correlations between pairwise growth traits were above 0.5, and the genetic correlation between length/width and length/depth varied between the two growth stages. Based on these estimates, a selection index of multiple traits of interest can be formulated in future breeding programs to genetically improve body weight and morphology of the GIFT strain. On the relative growth scale, heritabilities of the relative growths of body length, width and depth to body weight were 0.257, 0.412 and 0.066, respectively, while genetic correlations among these allometry scalings were above 0.8. Genetic analysis of the joint allometries of body weight to body length, width and depth will contribute to genetic regulation of the growth rate between body shape and body weight. © 2015 Blackwell Verlag GmbH.
Holtschlag, David J.
2009-01-01
Two-dimensional hydrodynamic and transport models were applied to a 34-mile reach of the Ohio River from Cincinnati, Ohio, upstream to Meldahl Dam near Neville, Ohio. The hydrodynamic model was based on the generalized finite-element hydrodynamic code RMA2 to simulate depth-averaged velocities and flow depths. The generalized water-quality transport code RMA4 was applied to simulate the transport of vertically mixed, water-soluble constituents that have a density similar to that of water. Boundary conditions for hydrodynamic simulations included water levels at the U.S. Geological Survey water-level gaging station near Cincinnati, Ohio, and flow estimates based on a gate rating at Meldahl Dam. Flows estimated on the basis of the gate rating were adjusted with limited flow-measurement data to more nearly reflect current conditions. An initial calibration of the hydrodynamic model was based on data from acoustic Doppler current profiler surveys and water-level information. These data provided flows, horizontal water velocities, water levels, and flow depths needed to estimate hydrodynamic parameters related to channel resistance to flow and eddy viscosity. Similarly, dye concentration measurements from two dye-injection sites on each side of the river were used to develop initial estimates of transport parameters describing mixing and dye-decay characteristics needed for the transport model. A nonlinear regression-based approach was used to estimate parameters in the hydrodynamic and transport models. Parameters describing channel resistance to flow (Manning’s “n”) were estimated in areas of deep and shallow flows as 0.0234 and 0.0275, respectively. The estimated RMA2 Peclet number, which is used to dynamically compute eddy-viscosity coefficients, was 38.3, which is in the range of 15 to 40 that is typically considered appropriate. Resulting hydrodynamic simulations explained 98.8 percent of the variability in depth-averaged flows, 90.0 percent of the variability in water levels, 93.5 percent of the variability in flow depths, and 92.5 percent of the variability in velocities. Estimates of the water-quality-transport-model parameters describing turbulent mixing characteristics converged to different values for the two dye-injection reaches. For the Big Indian Creek dye-injection study, an RMA4 Peclet number of 37.2 was estimated, which was within the recommended range of 15 to 40, and similar to the RMA2 Peclet number. The estimated dye-decay coefficient was 0.323. Simulated dye concentrations explained 90.2 percent of the variations in measured dye concentrations for the Big Indian Creek injection study. For the dye-injection reach starting downstream from Twelvemile Creek, however, an RMA4 Peclet number of 173 was estimated, which is far outside the recommended range. Simulated dye concentrations were similar to measured concentration distributions at the first four transects downstream from the dye-injection site that were considered vertically mixed. Farther downstream, however, simulated concentrations did not match the attenuation of maximum concentrations or cross-channel transport of dye that were measured. The difficulty of determining a consistent RMA4 Peclet number was related to the two-dimensional model assumption that velocity distributions are closely approximated by their depth-averaged values. Analysis of velocity data showed significant variations in velocity direction with depth in channel reaches with curvature.
Channel irregularities (including curvatures, depth irregularities, and shoreline variations) apparently produce transverse currents that affect the distribution of constituents, but are not fully accounted for in a two-dimensional model. The two-dimensional flow model, using channel resistance to flow parameters of 0.0234 and 0.0275 for deep and shallow areas, respectively, and an RMA2 Peclet number of 38.3, and the RMA4 transport model with a Peclet number of 37.2, may have utility for emergency-planning purposes. Emergency-response efforts would be enhanced by continuous streamgaging records downstream from Meldahl Dam, real-time water-quality monitoring, and three-dimensional modeling. Decay coefficients are constituent specific.
Maize and soybean root front velocity and maximum depth in Iowa, USA
USDA-ARS?s Scientific Manuscript database
Quantitative measurements of root traits can improve our understanding of how crops respond to soil-weather conditions. However, such data are rare. Our objective was to quantify maximum root depth and root front velocity (RFV) for corn and soybean crops across a range of growing conditions in the M...
Quantitative light-induced fluorescence technology for quantitative evaluation of tooth wear
NASA Astrophysics Data System (ADS)
Kim, Sang-Kyeom; Lee, Hyung-Suk; Park, Seok-Woo; Lee, Eun-Song; de Josselin de Jong, Elbert; Jung, Hoi-In; Kim, Baek-Il
2017-12-01
Various technologies to objectively determine enamel thickness or dentin exposure have been suggested. However, most methods have clinical limitations. This study was conducted to confirm the potential of quantitative light-induced fluorescence (QLF), using the autofluorescence intensity of the occlusal surfaces of worn teeth, as a function of enamel grinding depth in vitro. Sixteen permanent premolars were used. Each tooth was gradationally ground down at the occlusal surface in the apical direction. QLF-digital and swept-source optical coherence tomography images were acquired at each grinding depth (in steps of 100 μm). All QLF images were converted to 8-bit grayscale images to calculate the fluorescence intensity. The maximum brightness (MB) values of the same sound regions in the grayscale images were calculated before grinding and at each step of the grinding process. Finally, 13 samples were evaluated. MB increased over the grinding depth range with a strong correlation (r=0.994, P<0.001). In conclusion, the fluorescence intensity of the teeth and the grinding depth were strongly correlated in the QLF images. Therefore, QLF technology may be a useful noninvasive tool to monitor the progression of tooth wear and to conveniently estimate enamel thickness.
Optimum soil frost depth to alleviate climate change effects in cold region agriculture
NASA Astrophysics Data System (ADS)
Yanai, Yosuke; Iwata, Yukiyoshi; Hirota, Tomoyoshi
2017-03-01
On-farm soil frost control has been used for the management of volunteer potatoes (Solanum tuberosum L.), a serious weed problem caused by climate change, in northern Japan. Deep soil frost penetration is necessary for the effective eradication of unharvested small potato tubers; however, this process can delay soil thaw and increase soil wetting in spring, thereby delaying agricultural activity initiation and increasing nitrous oxide emissions from soil. Conversely, shallow soil frost development helps over-wintering of unharvested potato tubers and nitrate leaching from surface soil owing to the periodic infiltration of snowmelt water. In this study, we synthesised on-farm snow cover manipulation experiments to determine the optimum soil frost depth that can eradicate unharvested potato tubers without affecting agricultural activity initiation while minimising N pollution from agricultural soil. The optimum soil frost depth was estimated to be 0.28-0.33 m on the basis of the annual maximum soil frost depth. Soil frost control is a promising practice to alleviate climate change effects on agriculture in cold regions, which was initiated by local farmers and further promoted by national and local research institutes.
NASA Astrophysics Data System (ADS)
Yeh, E. C.; Li, W. C.; Chiang, T. C.; Lin, W.; Wang, T. T.; Yu, C. W.; Chiao, C. H.; Yang, M. W.
2014-12-01
Scientific study in deep boreholes has received increasing attention as the demand for natural resources, waste disposal, and seismic hazard risk evaluation grows dramatically, for example in petroleum exploitation, geothermal energy, carbon sequestration, nuclear waste disposal, and seismogenic faulting. In deep borehole geoengineering, knowledge of the in-situ stress is essential for the design of the drilling and casing plan. Understanding the relationship between fractures and in-situ stress is the key to evaluating the potential of fractures to seal or conduct fluids and their reactivation potential. Assessment of in-situ stress also provides crucial information for investigating the mechanism of earthquake faulting and the stress variation over earthquake cycles. Formations under the Coastal Plain in Taiwan have been evaluated as gently west-dipping saline-water formations with no distinct fractures induced by the regional tectonics of arc-continent collision with N35W compression; the setting is therefore regarded as suitable for carbon sequestration. In this study, we integrate results from different in-situ stress determinations, such as anelastic strain recovery (ASR), borehole breakouts, and hydraulic fracturing, from a 3000 m borehole at a carbon sequestration testing site, and further evaluate the seal feasibility and tectonic implications. Results of 30 ASR experiments between depths of 1500 m and 3000 m consistently indicate a normal faulting stress regime. The gradients of vertical stress, maximum horizontal stress, and minimum horizontal stress with depth are estimated. Borehole breakouts are not present throughout the entire 1500-3000 m interval. The mean orientation of the breakouts is about 175 deg and their mean width is 84 deg. Based on rock mechanical data, the maximum injection pressure for carbon sequestration can be evaluated. Furthermore, the normal faulting stress regime is consistent with core observations and image logging, and the maximum horizontal stress orientation of 85 deg inferred from the breakouts suggests that this area has been affected by the compression of oblique collision. The comparison of stress magnitudes estimated from ASR, breakouts, and hydraulic fracturing can further verify the current results.
NASA Astrophysics Data System (ADS)
Finenko, Z. Z.; Churilova, T. Ya.
An assessment of the spatial and temporal variation in the photo-physiological parameters and chlorophyll-specific absorption coefficients of marine phytoplankton is essential for estimating global primary production from satellite data. Photosynthesis-light relationships have been used to estimate two photosynthetic parameters of phytoplankton in the Black Sea: the light-saturated photosynthesis rate (Pb_max, mg C mg Chl-1 h-1) and the photosynthetic efficiency (alpha_b, mg C mg Chl-1 h-1 / (W m-2)). The results show that the photosynthetic parameters of surface phytoplankton vary by one order of magnitude over the year: Pb_max from 1 to 11 mg C mg Chl-1 h-1 and alpha_b from 0.04 to 0.35 mg C mg Chl-1 h-1 / (W m-2). The temporal dynamics are characterised by an increase from winter to summer and a decrease towards the end of the year. The vertical profiles of Pb_max and alpha_b change in opposite directions: Pb_max decreases with depth, while alpha_b increases. The photosynthetic parameters change with depth more significantly under stratification than without it. The influence of temperature, nitrate concentration and light intensity on Pb_max is evident, but temperature and optical depth affect Pb_max most significantly. The depth-dependent variability of photosynthetic efficiency is generally governed by nutrient concentration. Vertical uniformity of the maximum quantum yield of photosynthesis (Fm) and of the spectrally averaged absorption coefficient of phytoplankton (aph) in the euphotic zone was found for the cold period of the year. In summer, Fm increased from the surface to the bottom of the euphotic zone while aph decreased. Fm and aph values of surface phytoplankton depended on chlorophyll concentration but changed in opposite directions: Fm increased and aph decreased as chlorophyll concentration grew. As a result, Pb_max = Z Fm aph Ik T and alpha_b = Z Fm aph T (where Z is a dimensional constant equal to the atomic mass of carbon; Ik is the photon flux density at which the photosynthesis rate becomes light saturated, mol photons m-2 s-1; and T is a constant for the conversion from seconds to one hour) change by a factor of 2-3 over the range of chlorophyll concentrations (0.05-10.0 mg m-3). Pb_max and alpha_b have been approximated by a single-peaked curve with a maximum at a Chl a concentration of 3 mg m-3. These relationships could be used for modelling Pb_max and alpha_b from surface chlorophyll concentrations derived from satellite colour images.
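The two parameters above are conventionally obtained by fitting a saturating photosynthesis-irradiance curve; the sketch below uses one common exponential form with illustrative parameter values inside the quoted ranges, and is not the fitting procedure of the study itself.

# Photosynthesis-irradiance sketch: light-saturated rate Pb_max and initial
# slope alpha_b define a saturating P-I curve; one common choice is
# P = Pb_max * (1 - exp(-alpha_b * I / Pb_max)). Values are illustrative.
import numpy as np

def p_vs_i(irradiance, pb_max, alpha_b):
    """Photosynthesis rate, mg C (mg Chl)^-1 h^-1, for irradiance in W m^-2."""
    return pb_max * (1.0 - np.exp(-alpha_b * irradiance / pb_max))

pb_max, alpha_b = 6.0, 0.15          # within the ranges quoted above
ik = pb_max / alpha_b                # light-saturation index, W m^-2
print(ik, p_vs_i(np.array([10.0, 40.0, 200.0]), pb_max, alpha_b))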
Clow, David W.; Ingersoll, George P.; Mast, M. Alisa; Turk, John T.; Campbell, Donald H.
2002-01-01
Depth-integrated snowpack chemistry was measured just prior to maximum snowpack depth during the winters of 1992-1999 at 12 sites co-located with National Atmospheric Deposition Program/National Trend Network (NADP/NTN) sites in the central and southern Rocky Mountains, USA. Winter volume-weighted mean wet-deposition concentrations were calculated for the NADP/NTN sites, and the data were compared to snowpack concentrations using the paired t-test and the Wilcoxon signed-rank test. No statistically significant differences were indicated in concentrations of SO42- or NO3- (p>0.1). Small, but statistically significant differences (p ≤ 0.03) were indicated for all other solutes analyzed. Differences were largest for Ca2+ concentrations, which on average were 2.3 μeq l-1 (43%) higher in the snowpack than in winter NADP/NTN samples. Eolian carbonate dust appeared to influence snowpack chemistry through both wet and dry deposition, and the effect increased from north to south. Dry deposition of eolian carbonates was estimated to have neutralized an average of 6.9 μeq l-1 and a maximum of 12 μeq l-1 of snowpack acidity at the southernmost sites. The good agreement between snowpack and winter NADP/NTN SO42- and NO3- concentrations indicates that for those solutes the two data sets can be combined to increase data density in high-elevation areas, where few NADP/NTN sites exist. This combination of data sets will allow for better estimates of atmospheric deposition of SO42- and NO3- across the Rocky Mountain region.
Langohr, G Daniel G; Willing, Ryan; Medley, John B; Athwal, George S; Johnson, James A
2016-04-01
Implant design parameters can be changed during reverse shoulder arthroplasty (RSA) to improve range of motion and stability; however, little is known regarding their impact on articular contact mechanics. The purpose of this finite element study was to investigate RSA contact mechanics during abduction for different neck-shaft angles, glenosphere sizes, and polyethylene cup depths. Finite element RSA models with varying neck-shaft angles (155°, 145°, 135°), sizes (38 mm, 42 mm), and cup depths (deep, normal, shallow) were loaded with 400 N at physiological abduction angles. The contact area and maximum contact stress were computed. The contact patch and the location of maximum contact stress were typically located inferomedially in the polyethylene cup. On average for all abduction angles investigated, reducing the neck-shaft angle reduced the contact area by 29% for 155° to 145° and by 59% for 155° to 135° and increased maximum contact stress by 71% for 155° to 145° and by 286% for 155° to 135°. Increasing the glenosphere size increased the contact area by 12% but only decreased maximum contact stress by 2%. Decreasing the cup depth reduced the contact area by 40% and increased maximum contact stress by 81%, whereas increasing the depth produced the opposite effect (+52% and -36%, respectively). The location of the contact patch and maximum contact stress in this study matches the area of damage seen frequently on clinical retrievals. This finding suggests that damage to the inferior cup due to notching may be potentiated by contact stresses. Increasing the glenosphere diameter improved the joint contact area and did not affect maximum contact stress. However, although reducing the neck-shaft angle and cup depth can improve range of motion, our study shows that this also has some negative effects on RSA contact mechanics, particularly when combined. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kayode, John Stephen; Nawawi, M. N. M.; Abdullah, Khiruddin B.; Khalil, Amin E.
2017-01-01
Aeromagnetic data and remotely sensed imagery were integrated, with the intent of mapping the subsurface geological structures in part of the south-western basement complex of Nigeria, using the PCI Geomatica 2013 software. The data, obtained from the Nigerian Geological Survey Agency, were corrected by regional-residual separation of the total magnetic field anomalies, enhanced, and had the International Geomagnetic Reference Field removed. The principal objective of this study is therefore to introduce a rapid and efficient method of subsurface structural depth estimation and structural index evaluation by incorporating the Euler deconvolution technique into PCI Geomatica 2013 to prospect for subsurface geological structures. The shape and depth of burial helped to define these structures on the regional aeromagnetic map. The method enabled structural indices between 0.5 and 3.0 to be delineated automatically, with a maximum depth of 1.1 km that clearly showed the best depth estimates for all structural indices. The results delineate two major magnetic belts in the area: the first shows an elongated ridge-like structure trending mostly north-northeast-south-southwest, while the other anomalies trend primarily northeast, northwest, and northeast-southwest and could be attributed to basement-complex granitic intrusions related to the tectonic history of the area. Most of the second group of structures showed various linear features different from the first. A significant offset was delineated in the central segment of the study area, suggesting a major subsurface geological feature that controls mineralisation in this area.
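Since the abstract's depth estimates rest on Euler deconvolution, the sketch below shows the standard least-squares solution of Euler's homogeneity equation for a single data window; it assumes gridded field values and derivatives are already available and is not the PCI Geomatica implementation used in the study.

```python
# Hedged sketch of standard 3D Euler deconvolution for one data window.
# Field values T and derivatives (dT/dx, dT/dy, dT/dz) are assumed given.
import numpy as np

def euler_window(x, y, z, T, dTdx, dTdy, dTdz, struct_index):
    """Least-squares solution of Euler's homogeneity equation:
       (x-x0)*dT/dx + (y-y0)*dT/dy + (z-z0)*dT/dz = N*(B - T)
       Unknowns: source position (x0, y0, z0) and background field B."""
    A = np.column_stack([dTdx, dTdy, dTdz, struct_index * np.ones_like(T)])
    b = x * dTdx + y * dTdy + z * dTdz + struct_index * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, y0, z0, B = sol
    return x0, y0, z0, B

# Synthetic check: a field decaying as 1/r from a source at (10, 20, 5) obeys
# Euler's equation exactly with structural index N = 1 (background B = 0).
gx, gy = np.meshgrid(np.linspace(0, 20, 21), np.linspace(10, 30, 21))
x, y = gx.ravel(), gy.ravel()
z = np.zeros_like(x)                                  # observations on z = 0
dx, dy, dz = x - 10.0, y - 20.0, z - 5.0
r = np.sqrt(dx**2 + dy**2 + dz**2)
T = 1.0 / r
dTdx, dTdy, dTdz = -dx / r**3, -dy / r**3, -dz / r**3
print(euler_window(x, y, z, T, dTdx, dTdy, dTdz, struct_index=1.0))
# -> approximately (10.0, 20.0, 5.0, 0.0)
```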
Seitz, Andrew C.; Wilson, Derek; Nielsen, Jennifer L.
2002-01-01
To maintain healthy commercial and sport fisheries for Pacific halibut (Hippoglossus stenolepis), critical habitat must be defined by determining life history patterns on a daily and seasonal basis. Pop-up satellite archival transmitting (PSAT) tags provide a fisheries-independent method of collecting environmental preference data (depth and ambient water temperature) as well as daily geolocation estimates based on ambient light conditions. In this study, 14 adult halibut (107-165 cm FL) were tagged and released with PSAT tags in and around Resurrection Bay, Alaska. Commercial fishermen recovered two tags, while five tags transmitted data to ARGOS satellites. Horizontal migration was not consistent among fish: three halibut remained in the vicinity of release while four traveled up to 358 km from the release site. Vertical migration was also not consistent among fish and over time, but the fish spent most of their time between 150 and 350 m. The minimum and maximum depths reached by any of the halibut were 2 m and 502 m, respectively. The fish preferred water temperatures of roughly 6 °C while experiencing ambient temperatures between 4.3 °C and 12.2 °C. Light attenuation with depth prevented existing geolocation software and light-sensing hardware from accurately estimating geoposition; however, information from temperature, depth, ocean bathymetry, and pop-off locations provided inference on fish movement in the study area. PSAT tags were a viable tool for determining daily and seasonal behavior and identifying critical halibut habitat, which will aid fisheries managers in future decisions regarding commercial and sport fishing regulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pompos, A; Choy, H; Jia, X
2015-06-15
Purpose: The maximum available kinetic energy of accelerated heavy ions is a critical parameter to consider when establishing a heavy ion therapy center. It dictates the maximum range in tissue and determines the size and cost of the ion gantry. We have started planning our heavy ion therapy center and we report on the required ion range. Methods: We analyzed 50 randomly selected SBRT-spine, SBRT-lung, prostate, and pancreatic cancer patients from our photon clinic. On the isocentric axial CT slice we recorded the maximum water-equivalent depth (WED4Field) of the PTV's most distal edge in the four cardinal directions, and also in the beam direction that required the largest penetration, WEDGantry. These depths were then used to calculate the percentage of our patients that could be treated as a function of the available maximum carbon and helium beam energy. Based on the anterior-posterior WED for lung patients and the maximum available ion energy, we estimated the largest possible non-coplanar beam entry angle φ (deviation from vertical) in the isocentric vertical sagittal plane. Results: We found that if 430 MeV/u C-12 (equivalently 220 MeV/u He-4) beams are available, more than 96% (98%) of all patients can be treated without any gantry restrictions (in cardinal angles only), respectively. If the energy is reduced to 400 MeV/u C-12 (equivalently 205 MeV/u He-4), these fractions reduce to 80% (87%) for prostate and 88% (97%) for other sites. This 7% energy decrease translates to almost a 5% decrease in gantry size and cost for both ions. These energy limits, in combination with the WED in the AP direction for lung patients, resulted in average non-coplanar angles of φ(430 MeV/u) = 68° ± 8° and φ(400 MeV/u) = 65° ± 10°, if nozzle clearance permits them. Conclusion: We found that the two most common worldwide maximum carbon beam energies will allow treatment of more than 80% of all our patients.
Legleiter, C.J.; Kinzel, P.J.; Overstreet, B.T.
2011-01-01
Remote sensing offers an efficient means of mapping bathymetry in river systems, but this approach has been applied primarily to clear-flowing, gravel bed streams. This study used field spectroscopy and radiative transfer modeling to assess the feasibility of spectrally based depth retrieval in a sand-bed river with a higher suspended sediment concentration (SSC) and greater water turbidity. Attenuation of light within the water column was characterized by measuring the amount of downwelling radiant energy at different depths and calculating a diffuse attenuation coefficient, Kd. Attenuation was strongest in blue and near-infrared bands due to scattering by suspended sediment and absorption by water, respectively. Even for red wavelengths with the lowest values of Kd, only a small fraction of the incident light propagated to the bed, restricting the range of depths amenable to remote sensing. Spectra recorded above the water surface were used to establish a strong, linear relationship (R2 = 0.949) between flow depth and a simple band ratio; even under moderately turbid conditions, depth remained the primary control on reflectance. Constraints on depth retrieval were examined via numerical modeling of radiative transfer within the atmosphere and water column. SSC and sensor radiometric resolution limited both the maximum detectable depth and the precision of image-derived depth estimates. Thus, although field spectra indicated that the bathymetry of turbid channels could be remotely mapped, model results implied that depth retrieval in sediment-laden rivers would be limited to shallow depths (on the order of 0.5 m) and subject to a significant degree of uncertainty. © 2011 by the American Geophysical Union.
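The band-ratio calibration described above can be sketched as a simple linear regression of depth against a log band ratio; the synthetic reflectance model, band choices and noise level below are assumptions for illustration only, not the field spectra or the exact regressor used in the study.

```python
# Minimal sketch of spectrally based depth retrieval via a band ratio, assuming
# synthetic above-water reflectance spectra.
import numpy as np

rng = np.random.default_rng(0)
depth = rng.uniform(0.05, 0.6, 200)                  # shallow depths (m)
# Toy forward model: deeper water darkens the red band faster than the green
# band, plus a little sensor noise.
R_green = 0.08 * np.exp(-1.5 * depth) + 0.02 + rng.normal(0, 5e-4, depth.size)
R_red   = 0.06 * np.exp(-3.0 * depth) + 0.01 + rng.normal(0, 5e-4, depth.size)

X = np.log(R_green / R_red)                          # band-ratio regressor
slope, intercept = np.polyfit(X, depth, 1)           # linear depth calibration
pred = slope * X + intercept
r2 = 1 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
print(f"depth ~ {slope:.3f}*ln(Rg/Rr) + {intercept:.3f}, R^2 = {r2:.3f}")
```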
Depth estimation and camera calibration of a focused plenoptic camera for visual odometry
NASA Astrophysics Data System (ADS)
Zeller, Niclas; Quint, Franz; Stilla, Uwe
2016-08-01
This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally in focus, which makes finding stereo correspondences easier. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information, better tracking capabilities than in the monocular case can be expected. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
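The Kalman-like updating of per-pixel virtual depth can be illustrated by a plain inverse-variance fusion of several micro-image estimates; the numbers below are placeholders and the snippet is a conceptual sketch, not the authors' estimator.

```python
# Hedged sketch of inverse-variance (Kalman-like) fusion of per-micro-image
# virtual depth estimates for one pixel; values are assumed placeholders.
def fuse_depth(estimates):
    """estimates: iterable of (virtual_depth, variance) pairs for one pixel."""
    inv_var_sum = sum(1.0 / var for _, var in estimates)
    fused_depth = sum(d / var for d, var in estimates) / inv_var_sum
    fused_var = 1.0 / inv_var_sum
    return fused_depth, fused_var

# Example: three micro-image observations of the same scene point.
print(fuse_depth([(2.1, 0.04), (2.3, 0.09), (2.0, 0.02)]))
```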
Depth-of-interaction estimates in pixelated scintillator sensors using Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Sharma, Diksha; Sze, Christina; Bhandari, Harish; Nagarkar, Vivek; Badano, Aldo
2017-01-01
Image quality in thick scintillator detectors can be improved by minimizing parallax errors through depth-of-interaction (DOI) estimation. A novel sensor for low-energy single photon imaging having a thick, transparent, crystalline pixelated micro-columnar CsI:Tl scintillator structure has been described, with possible future application in small-animal single photon emission computed tomography (SPECT) imaging when using thicker structures under development. In order to understand the fundamental limits of this new structure, we introduce cartesianDETECT2, an open-source optical transport package that uses Monte Carlo methods to obtain estimates of DOI for improving the spatial resolution of nuclear imaging applications. Optical photon paths are calculated as a function of varying simulation parameters such as columnar surface roughness, bulk absorption, and top-surface absorption. We use scanning electron microscope images to estimate appropriate surface roughness coefficients. Simulation results are analyzed to model and establish patterns between DOI and photon scattering. The effect of varying starting locations of optical photons on the spatial response is studied. Bulk and top-surface absorption fractions were varied to investigate their effect on the spatial response as a function of DOI. We investigated the accuracy of our DOI estimation model for a particular screen with various training and testing sets; for all cases the percent error between the estimated and actual DOI over the majority of the detector thickness was within ±5%, with a maximum error of up to ±10% at deeper DOIs. In addition, we found that cartesianDETECT2 is computationally five times more efficient than MANTIS. Findings indicate that DOI estimates can be extracted from a double-Gaussian model of the detector response. We observed that our model predicts DOI in pixelated scintillator detectors reasonably well.
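As a rough illustration of extracting parameters from a double-Gaussian model of the detector response, the sketch below fits such a model to a synthetic spatial response with scipy; the functional form, parameters and the idea of calibrating the broad component against depth are assumptions for illustration, not the cartesianDETECT2 workflow.

```python
# Hedged sketch: fit a double-Gaussian model to a synthetic spatial response.
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(x, a1, s1, a2, s2):
    return a1 * np.exp(-0.5 * (x / s1) ** 2) + a2 * np.exp(-0.5 * (x / s2) ** 2)

x = np.linspace(-5, 5, 201)                          # lateral position (mm)
rng = np.random.default_rng(0)
measured = double_gaussian(x, 1.0, 0.4, 0.25, 1.8) + rng.normal(0, 0.01, x.size)

popt, _ = curve_fit(double_gaussian, x, measured, p0=[1, 0.5, 0.2, 2.0])
a1, s1, a2, s2 = popt
# The relative width/weight of the broad component could then be calibrated
# against known interaction depths to yield a DOI estimate.
print(f"narrow sigma = {s1:.2f} mm, broad sigma = {s2:.2f} mm")
```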
NASA Astrophysics Data System (ADS)
Wright, C.
2009-03-01
P waves from earthquakes south of Taiwan, recorded by the BATS seismic array and CWB seismic network, were used to define the P wavespeed structure between depths of 100 and 800 km below the Philippines region. The presence of a low wavespeed zone in the upper mantle is inferred, although the details are unclear. Wavespeeds in the uppermost mantle are low, as expected for seismic energy propagating within an oceanic plate. The estimated depths of the 410- and 660-km discontinuities are 325 and 676 km respectively. The unusually shallow depth of the upper discontinuity below and to the east of Luzon is inferred by clearly resolving the travel-time branch produced by refraction through the transition zone. A possible explanation for the northern part of the region covered is that seismic energy reaches its maximum depth within or close to the cool, subducted oceanic South China Sea slab where subduction has been slow and relatively recent. Further south, however, the presence of a broken remnant of the South China Sea slab, formed during a period of shallower subduction, is suggested at depths below 300 km to explain the broad extent of the elevated 410-km discontinuity. The 660-km discontinuity is slightly deeper than usual, implying that low temperatures persist to lower mantle depths. The wavespeed gradients within the transition zone between depths of 450 and 610 km are higher than those predicted by both the pyrolite and piclogite models of the mantle, possibly due to the presence of water in the transition zone.
NASA Astrophysics Data System (ADS)
Suetsugu, D.; Shiobara, H.; Sugioka, H.; Kanazawa, T.; Fukao, Y.
2005-12-01
We determined depths of the mantle discontinuities (the 410-km and 660-km discontinuities) beneath the South Pacific Superswell using waveform data from a broadband ocean bottom seismograph (BBOBS) array to image presumed mantle plumes and their temperature anomalies. The seismic structure beneath this region had not previously been well explored, in spite of its significance for mantle dynamics. The region is characterized by a topographic high of more than 680 m (Adam and Bonneville, 2005), a concentration of hotspot chains (e.g., Society, Cook-Austral, Marquesas, and Pitcairn) whose volcanic rocks have isotopic characteristics suggesting a deep mantle origin, and a broad low velocity anomaly in the lower mantle revealed by seismic tomography. These observations suggest the presence of a whole-mantle scale upwelling beneath the region, which is called a 'superplume' (McNutt, 1998). However, the seismic structure has been only poorly resolved so far and the maximum depth of anomalous material beneath the hotspots has not yet been determined, mainly due to the sparseness of seismic stations in the region. To improve the seismic coverage, we deployed an array of 10 BBOBS over the French Polynesia area from 2003 to 2005. The BBOBS were developed by the Earthquake Research Institute of the University of Tokyo and are equipped with the broadband CMG-3T/EBB sensor. The observation was conducted as a Japan-France cooperative project (Suetsugu et al., 2005, submitted to EOS). We computed receiver functions from the BBOBS data to detect Ps waves from the mantle discontinuities. The Velocity Spectrum Stacking method (Gurrola et al., 1994) was employed to enhance the Ps waves for determination of the discontinuity depths, in which receiver functions were stacked in a depth-velocity space. The Ps waves from the mantle discontinuities were successfully detected at most of the BBOBS stations, from which the discontinuity depths were determined with the Iasp91 velocity model. The 410-km discontinuity depths were estimated to be 403-431 km over the Superswell region, which are not substantially different from the global average considering the estimation error of 10 km. The 660-km discontinuity depths were also determined to be 654-674 km, close to the global average, at most of the stations. Data from a station near the Society hotspot, however, give an anomalously shallow depth of 623 km, indicating the presence of a local hot anomaly at the bottom of the mantle transition zone beneath the Society hotspot. Taking into consideration a possible effect of velocity anomalies on the depth estimation, the shallow anomaly is significant. The present result suggests that the thermal anomalies are not obvious at the Superswell scale, but are present locally beneath the Society hotspot.
NASA Astrophysics Data System (ADS)
Zhu, Xiao-Hua; Nakamura, Hirohiko; Dong, Menghong; Nishina, Ayako; Yamashiro, Toru
2017-03-01
From 2003 to 2011, current surveys using an acoustic Doppler current profiler (ADCP) mounted on the Ferry Naminoue were conducted across the Tokara Strait (TkS). The resulting 1234 velocity sections were used to estimate the major tidal current constituents in the TkS. The semidiurnal M2 tidal current (maximum amplitude 27 cm s-1) was dominant among all tidal constituents, and the diurnal K1 tidal current (maximum amplitude 21 cm s-1) was the largest among the diurnal constituents. Over the section, the ratios, relative to M2, of the averaged amplitudes of the M2, S2, N2, K2, K1, O1, P1, and Q1 tidal currents were 1.00:0.44:0.21:0.12:0.56:0.33:0.14:0.10. Tidal currents estimated from the ship-mounted ADCP data were in good agreement with those from the moored ADCP data; their root-mean-square difference for the M2 tidal current amplitude was 2.0 cm s-1. After removing the tidal currents, the annual mean of the net volume transport (NVT) through the TkS ± its standard deviation was 23.03 ± 3.31 Sv (1 Sv = 10^6 m3 s-1). The maximum (minimum) monthly mean NVT occurred in July (November) with 24.60 (21.47) Sv. NVT values from the ship-mounted ADCP were in good agreement with previous geostrophic volume transports calculated from conductivity-temperature-depth data, but the former showed much finer temporal structure than the geostrophic calculation.
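The tidal-constituent amplitudes quoted above come from harmonic analysis of the ADCP records; a minimal least-squares harmonic fit for two constituents, applied to a synthetic current series, is sketched below (constituent periods are standard values; everything else is illustrative).

```python
# Hedged sketch of least-squares tidal harmonic analysis for M2 and K1 applied
# to an assumed synthetic velocity time series.
import numpy as np

hours = np.arange(0, 24 * 30, 0.5)                    # 30 days, half-hourly
periods = {"M2": 12.4206012, "K1": 23.9344697}        # hours
true = 0.27 * np.cos(2*np.pi*hours/periods["M2"] - 0.4) \
     + 0.21 * np.cos(2*np.pi*hours/periods["K1"] + 1.1)
u = true + np.random.default_rng(1).normal(0, 0.05, hours.size)  # "observed" current (m/s)

# Design matrix: mean plus a cos/sin pair per constituent.
cols = [np.ones_like(hours)]
for T in periods.values():
    w = 2 * np.pi / T
    cols += [np.cos(w * hours), np.sin(w * hours)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, u, rcond=None)
for i, name in enumerate(periods):
    a, b = coef[1 + 2*i], coef[2 + 2*i]
    print(f"{name}: amplitude = {np.hypot(a, b):.3f} m/s")
```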
NASA Astrophysics Data System (ADS)
Kinoshita, M.; Kawamura, K.; Lin, W.
2015-12-01
During the Nankai Trough Seismogenic Zone Experiments (NanTroSEIZE) of the Integrated Ocean Drilling Program (IODP), the advanced piston corer temperature (APC-T) tool was used to determine in situ formation temperatures while piston coring down to ~200 m below the sea floor. When the corer is fired into the formation, the temperature around the shoe abruptly increases due to frictional heating. The temperature rise due to frictional heat at the time of penetration is 10 K or larger. We found that the frictional temperature rise (i.e., the maximum temperature) increases with increasing depth, and that its intercept at the seafloor appears to be non-zero. Frictional heat energy is proportional to the maximum temperature rise, which is confirmed by an FEM numerical simulation of a 2D cylindrical system. Here we use the result of the numerical simulation to convert the observed temperature rise into frictional heat energy. The frictional heat energy is represented as the product of the shooting length D and the shear stress (τ) between the pipe and the sediment. Assuming a Coulomb slip regime, the shear stress is expressed as τ = τ0 + μ(Sv - Pp), where τ0 is the cohesive stress, μ the dynamic friction coefficient between the pipe and the sediment, Sv the normal stress at the pipe, and Pp the pore pressure. This can explain both the non-zero seafloor intercept and the depth-dependent increase of the frictional heating observed in the APC-T data. Assuming a hydrostatic state and using the downhole bulk density data, we estimated the friction coefficient for each APC-T measurement. For comparison, we used the vane shear strength measured on core samples to estimate the friction coefficients. The friction coefficients μ were estimated to range from 0.01 to 0.06, anomalously lower than expected for shallow marine sediments. They were lower than those estimated from the vane shear data, which range from 0.05 to 0.2. Still, both estimates exhibit a significant increase in the friction coefficient at Site C0012, which is dominated by the hemipelagic sediment of the Shikoku Basin. The anomalously low values suggest either fluid injection between the pipe and the sediment during the measurement or other uncertainties in converting the observed temperature rise into frictional heat generation.
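A minimal sketch of the inversion implied by the relation quoted above (heat energy E = D·τ with τ = τ0 + μ(Sv - Pp)) is given below; the numerical values are placeholders, not NanTroSEIZE measurements.

```python
# Hedged sketch of backing out a friction coefficient from E = D * tau with
# tau = tau0 + mu * (Sv - Pp); all numbers are assumed placeholders.
def friction_coefficient(E, D, Sv, Pp, tau0=0.0):
    """E: frictional heat energy per unit contact area (J/m^2) inferred from the
    APC-T temperature rise; D: shooting length (m); Sv, Pp, tau0 in Pa."""
    tau = E / D                      # mean shear stress during penetration
    return (tau - tau0) / (Sv - Pp)

# Example with illustrative numbers: 5 m stroke, 2 MPa effective normal stress.
print(friction_coefficient(E=3.0e5, D=5.0, Sv=4.0e6, Pp=2.0e6))   # -> 0.03
```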
Subpixel based defocused points removal in photon-limited volumetric dataset
NASA Astrophysics Data System (ADS)
Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Maraka, Harsha Vardhan R.; Ryle, James P.; Sheridan, John T.
2017-03-01
The asymptotic property of the maximum likelihood estimator (MLE) has been utilized to reconstruct three-dimensional (3D) sectional images in the photon counting imaging (PCI) regime. At first, multiple 2D intensity images, known as Elemental images (EI), are captured. Then the geometric ray-tracing method is employed to reconstruct the 3D sectional images at various depth cues. We note that a 3D sectional image consists of both focused and defocused regions, depending on the reconstructed depth position. The defocused portion is redundant and should be removed in order to facilitate image analysis e.g., 3D object tracking, recognition, classification and navigation. In this paper, we present a subpixel level three-step based technique (i.e. involving adaptive thresholding, boundary detection and entropy based segmentation) to discard the defocused sparse-samples from the reconstructed photon-limited 3D sectional images. Simulation results are presented demonstrating the feasibility and efficiency of the proposed method.
Models of recurrent strike-slip earthquake cycles and the state of crustal stress
NASA Technical Reports Server (NTRS)
Lyzenga, Gregory A.; Raefsky, Arthur; Mulligan, Stephanie G.
1991-01-01
Numerical models of the strike-slip earthquake cycle, assuming a viscoelastic asthenosphere coupling model, are examined. The time-dependent simulations incorporate a stress-driven fault, which leads to tectonic stress fields and earthquake recurrence histories that are mutually consistent. Single-fault simulations with constant far-field plate motion lead to a nearly periodic earthquake cycle and a distinctive spatial distribution of crustal shear stress. The predicted stress distribution includes a local minimum in stress at depths less than typical seismogenic depths. The width of this stress 'trough' depends on the magnitude of crustal stress relative to asthenospheric drag stresses. The models further predict a local near-fault stress maximum at greater depths, sustained by the cyclic transfer of strain from the elastic crust to the ductile asthenosphere. Models incorporating both low-stress and high-stress fault strength assumptions are examined, under Newtonian and non-Newtonian rheology assumptions. Model results suggest a preference for low-stress (a shear stress level of about 10 MPa) fault models, in agreement with previous estimates based on heat flow measurements and other stress indicators.
NASA Technical Reports Server (NTRS)
Aliev, N.; Alimov, T.; Kakhkharov, M.; Makhmudov, B. M.; Rakhimova, N.; Tashpulatov, R.; Kalmykov, N. N.; Khristiansen, G. B.; Prosin, V. V.
1985-01-01
The Samarkand extensive air showers (EAS) array was used to measure the mean and individual lateral distribution functions (LDF) of EAS Cerenkov light. The analysis of the individual parameters b showed that the mean depth of EAS maximum and the variance of the depth distribution of maxima of EAS with energies of approximately 2×10^15 eV can properly be described in terms of the Kaidalov-Martirosyan quark-gluon string model (QGSM).
Equatorial Currents in the Indian Ocean Based on Measurements in February 2017
NASA Astrophysics Data System (ADS)
Neiman, V. G.; Frey, D. I.; Ambrosimov, A. K.; Kaplunenko, D. D.; Morozov, E. G.; Shapovalov, S. M.
2018-03-01
We analyze the results of measurements of the Tareev equatorial undercurrent in the Indian Ocean in February 2017. Sections from 3° S to 3°45' N along 68° and 65° E crossed the current with measurements of the temperature, salinity, and current velocity at oceanographic stations. The maximum velocity of this eastward flow was recorded precisely at the equator. The velocity at a depth of 50 m was approximately 60 cm/s. The transport of the Tareev Current was estimated at 9.8 Sv (1 Sv = 10^6 m3/s).
Gamma-ray bursts from superconducting cosmic strings at large redshifts
NASA Technical Reports Server (NTRS)
Babul, Arif; Paczynski, Bohdan; Spergel, David
1987-01-01
The relation between cusp events and gamma-ray bursts is investigated. The optical depth of the universe to X-rays and gamma-rays of various energies is calculated and discussed. The cosmological evolution of cosmic strings is examined, and the energetics and time-scales related to the cusp phenomena are estimated. It is noted that it is possible to have energy bursts with a duration of a few seconds or less from cusps at z = 1000; the maximum amount of energy associated with such an event is limited to 10^7 ergs/sq cm.
Ammonia volatilization and nitrogen retention: how deep to incorporate urea?
Rochette, Philippe; Angers, Denis A; Chantigny, Martin H; Gasser, Marc-Olivier; MacDonald, J Douglas; Pelster, David E; Bertrand, Normand
2013-11-01
Incorporation of urea decreases ammonia (NH3) volatilization, but field measurements are needed to better quantify the impact of placement depth. In this study, we measured volatilization losses after banding of urea at depths of 0, 2.5, 5, 7.5, and 10 cm in a slightly acidic (pH 6) silt loam soil using wind tunnels. Mineral nitrogen (N) concentration and pH were measured in the top 2 cm of soil to determine the extent of urea N migration and the influence of placement depth on the availability of ammoniacal N for volatilization near the soil surface. Ammonia volatilization losses were 50% of applied N when urea was banded at the surface, and incorporation of the band decreased emissions by an average of 7% per cm (14% per cm when expressed as a percentage of losses after surface banding). Incorporating urea at depths >7.5 cm therefore resulted in negligible NH3 emissions and maximum N retention. Cumulative losses increased exponentially with increasing maximum ammoniacal N and pH values measured in the surface soil during the experiment. However, temporal variations in these soil properties were poorly related to the temporal variations in NH3 emission rates, likely as a result of interactions with other factors (e.g., water content and adsorption of ammoniacal N on, and fixation by, soil particles). Laboratory and field volatilization data from the literature were summarized and used to determine a relationship between NH3 losses and depth of urea incorporation. When emissions were expressed as a percentage of losses for a surface application, the mean reduction after urea incorporation was approximately 12.5% per cm. Although we agree that the efficiency of urea incorporation in reducing NH3 losses varies depending on several soil properties, management practices, and climatic conditions, we propose that this value represents an estimate of the mean impact of incorporation depth that could be used when site-specific information is unavailable. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Structural Analysis of Ogygis Rupes Lobate Scarp on Mars.
NASA Astrophysics Data System (ADS)
Herrero-Gil, A.; Ruiz, J.; Romeo, I.; Egea-González, I.
2016-12-01
Ogygis Rupes is a 200 km long lobate scarp, striking N30°E, with approximately 2 km of maximum structural relief. It is located in Aonia Terra, in the southern hemisphere of Mars near the northeastern margin of the Argyre impact basin. Similar to other large lobate scarps on Mercury or Mars, it shows a roughly arcuate to linear form and an asymmetric cross section with a steeply rising scarp face and a gently declining back scarp. This asymmetry suggests that Ogygis Rupes is the topographic expression of an ESE-vergent thrust fault. Using Mars Orbiter Laser Altimeter data and the available Mars imagery, we have measured the horizontal shortening on impact craters cross-cut by this lobate scarp to obtain a minimum value for the horizontal offset of the underlying fault. Two complementary methods were used to estimate fault geometry parameters such as fault displacement, dip angle, and depth of faulting: (i) analyzing topographic profiles together with the horizontal shortening estimates from cross-cut craters to create balanced cross sections on the basis of thrust fault propagation folding [1]; (ii) using a forward mechanical dislocation method [2], which predicts fault geometry by comparing model outputs with the real topography. The significant size of the fault underlying this lobate scarp suggests that its detachment is located at a major rheological change, for which we have obtained a preliminary depth value of around 30 km by the methods listed above. Estimates of the depth of faulting in similar lobate scarps [3] have been associated with the depth of the brittle-ductile transition. [1] Suppe (1983), Am. J. Sci., 283, 648-721; Seeber and Sorlien (2000), Geol. Soc. Am. Bull., 112, 1067-1079. [2] Toda et al. (1998) JGR, 103, 24543-24565. [3] e.g., Schultz and Watters (2001) Geophys. Res. Lett., 28, 4659-4662; Ruiz et al. (2008) EPSL, 270, 1-12; Egea-Gonzalez et al. (2012) PSS, 60, 193-198; Mueller et al. (2014) EPSL, 408, 100-109.
Experimental investigation of the Peregrine Breather of gravity waves on finite water depth
NASA Astrophysics Data System (ADS)
Dong, G.; Liao, B.; Ma, Y.; Perlin, M.
2018-06-01
A series of laboratory experiments was performed to study the evolution of the Peregrine Breather (PB) in a wave flume under finite-depth and deep-water conditions. Experimental cases were selected with water depths k0h (k0 is the wave number and h is the water depth) varying from 3.11 to 8.17 and initial steepness k0a0 (a0 is the background wave amplitude) in the range 0.06 to 0.12, with corresponding initial Ursell numbers in the range 0.03 to 0.061. The experimental results indicate that the water depth plays an important role in the formation of extreme waves in finite depth; the maximum wave amplification of the PB packets is also strongly dependent on the initial Ursell number. For experimental cases with initial Ursell number larger than 0.05, the maximum crest amplification can exceed three. If the initial Ursell number is nearly 0.05, a shorter propagation distance is needed for maximum amplification of the wave height in deeper water. A time-frequency analysis using the wavelet transform reveals that the energy of the higher harmonics is almost in phase with the carrier wave. The contribution of the higher harmonics to the extreme wave is significant for cases with initial Ursell number larger than 0.05 in water depths k0h < 5.0. Additionally, the experimental results are compared with computations based on both the nonlinear Schrödinger (NLS) equation and the Dysthe equation, each with a dissipation term. It is found that both models with a dissipation term can predict the maximum amplitude amplification of the primary waves. However, the Dysthe equation can also predict the group horizontal asymmetry.
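For reference, the analytical Peregrine breather of the dimensionless focusing NLS equation, whose threefold amplification is the benchmark discussed above, can be evaluated directly; the sketch below uses the textbook deep-water normalization and is not the finite-depth laboratory wave field or the dissipative model computations.

```python
# Hedged sketch: the analytical Peregrine breather of the dimensionless focusing
# NLS equation i*psi_t + 0.5*psi_xx + |psi|^2*psi = 0 with unit background,
# shown only to illustrate the factor-of-three amplification mentioned above.
import numpy as np

x = np.linspace(-10, 10, 401)
t = np.linspace(-5, 5, 201)
X, T = np.meshgrid(x, t)

psi = np.exp(1j * T) * (1 - 4 * (1 + 2j * T) / (1 + 4 * X**2 + 4 * T**2))
amplification = np.abs(psi).max()          # relative to the unit background
print(f"maximum amplification = {amplification:.3f}")  # -> 3.000 at (x, t) = (0, 0)
```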
Soil thermal properties at two different sites on James Ross Island in the period 2012/13
NASA Astrophysics Data System (ADS)
Hrbáček, Filip; Láska, Kamil
2015-04-01
James Ross Island (JRI) is the largest island in the eastern part of the Antarctic Peninsula. Ulu Peninsula, in the northern part of JRI, is considered the largest ice-free area in the Maritime Antarctic region. However, information about permafrost on JRI, the active layer, and its soil properties in general is limited. In this study, results of soil thermal measurements at two different sites on Ulu Peninsula between 1 April 2012 and 30 April 2013 are presented. The study sites are located (1) on an old Holocene marine terrace (10 m a.s.l.) in the close vicinity of the Johann Gregor Mendel (JGM) Station and (2) on top of a volcanic plateau named Johnson Mesa (340 m a.s.l.), about 4 km south of the JGM Station. Soil temperatures were measured at 30-min intervals using platinum resistance thermometers Pt100/8 in two profiles, down to 200 cm at JGM Station and to 75 cm at Johnson Mesa. Decagon 10HS volumetric water content sensors were installed down to 30 cm at Johnson Mesa and to 50 cm at JGM Station, while Hukseflux HFP01 soil heat flux sensors were used for direct monitoring of soil physical properties at 2.5 cm depth at both sites. The mean soil temperature varied between -5.7°C at 50 cm and -6.3°C at 5 cm at JGM Station, while at Johnson Mesa it varied between -6.9°C at 50 cm and -7.1°C at 10 cm. The maximum active layer thickness, estimated from the 0°C isotherm, reached 52 cm at JGM Station and 50 cm at Johnson Mesa, which corresponded with the maximum observed annual temperature at 50 cm depth, the warmest part of both profiles. Volumetric water content at 5 cm varied around 0.25 m3 m-3 at both sites, with a slight increase to 0.32 m3 m-3 observed at 50 cm at JGM Station and at 30 cm at Johnson Mesa. Soil texture analysis showed a distinctly higher share of the coarser fraction (>2 mm) at Johnson Mesa than at JGM Station. Comparison of the two sites indicated that the mean ground temperature at 50 cm depth was higher by 1.2°C at JGM Station, although the active layer was thicker by only 2 cm. It can therefore be concluded that soil physical properties such as texture and moisture may significantly affect the thermal regime at the boundary between the active layer and the permafrost table during individual thawing seasons.
Estimation of In Situ Stresses with Hydro-Fracturing Tests and a Statistical Method
NASA Astrophysics Data System (ADS)
Lee, Hikweon; Ong, See Hong
2018-03-01
At great depths, where borehole-based field stress measurements such as hydraulic fracturing are challenging due to difficult downhole conditions or prohibitive costs, in situ stresses can be indirectly estimated using wellbore failures such as borehole breakouts and/or drilling-induced tensile failures detected by an image log. As part of such efforts, a statistical method has been developed in which borehole breakouts detected on an image log are used for this purpose (Song et al. in Proceedings of the 7th international symposium on in situ rock stress, 2016; Song and Chang in J Geophys Res Solid Earth 122:4033-4052, 2017). The method employs a grid-searching algorithm in which the least and maximum horizontal principal stresses (Sh and SH) are varied, and the corresponding simulated depth-related breakout width distribution as a function of the breakout angle (θB = 90° - half of the breakout width) is compared to that observed along the borehole to determine the set of Sh and SH having the lowest misfit between them. An important advantage of the method is that Sh and SH can be estimated simultaneously in vertical wells. To validate the statistical approach, the method is applied to a vertical hole where a set of field hydraulic fracturing tests has been carried out. The stress estimates obtained using the proposed method were found to be in good agreement with the results interpreted from the hydraulic fracturing test measurements.
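The grid-search idea can be sketched as follows: forward-model the breakout width for each trial (Sh, SH) pair with a simple Kirsch hoop-stress failure criterion and keep the pair that minimizes the misfit to the observed widths. The failure criterion, strength and pressure values below are assumed placeholders, not the forward model or data of the cited study.

```python
# Hedged sketch of a (Sh, SH) grid search against observed breakout widths.
import numpy as np

def breakout_width_deg(SH, Sh, Pp, Pw, UCS, n=3601):
    """Angular width (deg) of the zone where the effective hoop stress at the
    wall of a vertical borehole (elastic Kirsch solution) exceeds UCS."""
    theta = np.linspace(0.0, np.pi, n)                # angle from SH azimuth
    sigma_tt = SH + Sh - 2.0 * (SH - Sh) * np.cos(2.0 * theta) - Pw - Pp
    frac = np.count_nonzero(sigma_tt >= UCS) / n
    return 180.0 * frac                               # width of one breakout lobe

def grid_search(observed_widths, Pp, Pw, UCS, Sh_range, SH_range):
    best = (None, None, np.inf)
    for Sh in Sh_range:
        for SH in SH_range:
            if SH < Sh:
                continue
            w = breakout_width_deg(SH, Sh, Pp, Pw, UCS)
            misfit = np.mean((observed_widths - w) ** 2)
            if misfit < best[2]:
                best = (Sh, SH, misfit)
    return best

# Illustrative call with placeholder stresses in MPa and widths in degrees.
obs = np.array([78.0, 84.0, 90.0])
print(grid_search(obs, Pp=30.0, Pw=32.0, UCS=60.0,
                  Sh_range=np.arange(45, 70, 1.0), SH_range=np.arange(60, 110, 1.0)))
```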
NASA Astrophysics Data System (ADS)
Karimi, Kurosh; Shirzaditabar, Farzad
2017-08-01
The analytic signal of the magnitude of the magnetic field components and its first derivatives has been employed for locating magnetic structures that can be considered as point dipoles or lines of dipoles. Although similar methods have been used for locating such magnetic anomalies, they cannot estimate the positions of anomalies in noisy conditions with acceptable accuracy. The methods are also inexact in determining the depth of deep anomalies. In noisy cases and in places other than the poles, the maximum points of the magnitude of the magnetic vector components and Az are not located exactly above 3D bodies. Consequently, the horizontal location estimates of the bodies are affected by errors. Here, the previous methods are modified and generalized to locate deeper models in the presence of noise, even at lower magnetic latitudes. In addition, a statistical technique is presented for working in noisy areas, and a new method that is resistant to noise through the use of a 'depths mean' approach is developed. A reduction-to-the-pole transformation is also used to find the most probable actual horizontal body location. Deep models are also well estimated. The method is tested on real magnetic data over an urban gas pipeline in the vicinity of Kermanshah province, Iran. The estimated location of the pipeline agrees well with the result of the half-width method.
A Statistical Guide to the Design of Deep Mutational Scanning Experiments
Matuszewski, Sebastian; Hildebrandt, Marcel E.; Ghenu, Ana-Hermina; Jensen, Jeffrey D.; Bank, Claudia
2016-01-01
The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. PMID:27412710
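A common baseline estimator for bulk-competition experiments of this kind takes the selection coefficient of a mutant as the slope of the log read ratio against time; the sketch below implements that baseline with assumed counts and is not the approximations or confidence-interval formula derived in the paper.

```python
# Hedged baseline sketch: selection coefficient as the least-squares slope of
# ln(mutant reads / wild-type reads) versus time; counts are assumed placeholders.
import numpy as np

def selection_coefficient(mut_counts, wt_counts, times):
    """Least-squares slope of the log read ratio versus time (per unit time)."""
    log_ratio = np.log(np.asarray(mut_counts) / np.asarray(wt_counts))
    slope, _ = np.polyfit(np.asarray(times, dtype=float), log_ratio, 1)
    return slope

# Example with assumed read counts at four sampled time points (generations).
print(selection_coefficient([1200, 950, 760, 600], [5000, 5100, 4900, 5050],
                            [0, 5, 10, 15]))   # -> roughly -0.05 per generation
```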
Heritability of myopia and ocular biometrics in Koreans: the healthy twin study.
Kim, Myung Hun; Zhao, Di; Kim, Woori; Lim, Dong-Hui; Song, Yun-Mi; Guallar, Eliseo; Cho, Juhee; Sung, Joohon; Chung, Eui-Sang; Chung, Tae-Young
2013-05-01
To estimate the heritabilities of myopia and ocular biometrics among different family types in a Korean population, we studied 1508 adults in the Healthy Twin Study. Spherical equivalent, axial length, anterior chamber depth, and corneal astigmatism were measured by refraction, corneal topography, and A-scan ultrasonography. To assess the degree of resemblance among different types of family relationships, intraclass correlation coefficients (ICC) were calculated. Variance-component methods were applied to estimate the genetic contributions to eye phenotypes as heritability based on maximum likelihood estimation. Narrow-sense heritability was calculated as the proportion of the total phenotypic variance explained by additive genetic effects, with adjustment for linear and nonlinear effects of age, sex, and interactions between age and sex. A total of 240 monozygotic twin pairs, 45 dizygotic twin pairs, and 938 singleton adult family members who were first-degree relatives of twins in 345 families were included in the study. ICCs for spherical equivalent from monozygotic twins, pooled first-degree pairs, and spouse pairs were 0.83, 0.34, and 0.20, respectively. The ICCs of other ocular biometrics were also significantly higher in monozygotic twins than in other relative pairs, with greater consistency and conformity. The estimated narrow-sense heritability (95% confidence interval) was 0.78 (0.71-0.84) for spherical equivalent, 0.86 (0.82-0.90) for axial length, 0.83 (0.76-0.91) for anterior chamber depth, and 0.70 (0.63-0.77) for corneal astigmatism. The estimated heritability of spherical equivalent and ocular biometrics in this Korean population provides compelling evidence that all of these traits are highly heritable.
Sonier, Marcus; Wronski, Matt; Yeboah, Collins
2015-03-08
Lens dose is a concern during the treatment of facial lesions with anterior electron beams. Lead shielding is routinely employed to reduce lens dose and minimize late complications. The purpose of this work is twofold: 1) to measure dose profiles under large-area lead shielding at the lens depth for clinical electron energies via film dosimetry; and 2) to assess the accuracy of the Pinnacle treatment planning system in calculating doses under lead shields. First, to simulate the clinical geometry, EBT3 film and 4 cm wide lead shields were incorporated into a Solid Water phantom. With the lead shield inside the phantom, the film was positioned at a depth of 0.7 cm below the lead, while a variable thickness of solid water, simulating bolus, was placed on top. This geometry was reproduced in Pinnacle to calculate dose profiles using the pencil beam electron algorithm. The measured and calculated dose profiles were normalized to the central-axis dose maximum in a homogeneous phantom with no lead shielding. The resulting measured profiles, functions of bolus thickness and incident electron energy, can be used to estimate the lens dose under various clinical scenarios. These profiles showed a minimum lead margin of 0.5 cm beyond the lens boundary is required to shield the lens to ≤ 10% of the dose maximum. Comparisons with Pinnacle showed a consistent overestimation of dose under the lead shield with discrepancies of ~ 25% occurring near the shield edge. This discrepancy was found to increase with electron energy and bolus thickness and decrease with distance from the lead edge. Thus, the Pinnacle electron algorithm is not recommended for estimating lens dose in this situation. The film measurements, however, allow for a reasonable estimate of lens dose from electron beams and for clinicians to assess the lead margin required to reduce the lens dose to an acceptable level.
Luo, Huifang; Wang, Jierui; Zhang, Shuang; Mi, Congbo
2018-05-01
The frontal sinus, due to its unique anatomical features, has become an important element in research for individual identification. Previous studies have demonstrated the use of frontal sinus as an indicator for sex discrimination; however, the sex discrimination rate using frontal sinus was lower compared to that using the traditional morphological methods. In order to improve the sex discrimination percentage, we developed a new method involving the measurement of the frontal sinus index and frontal sinus area from lateral cephalogram radiographs. In this study, 475 digital lateral cephalograms of adult Han citizens from Xinjiang were included. The maximum height, depth, and area of the frontal sinus were calculated using the NemoCeph NX software. The frontal sinus index (ratio of the maximum height to the depth of frontal sinus) was also computed. Statistical analysis results showed significant differences in the frontal sinus index and area between males and females. Discriminant function equation derived from this study differentiated between sexes with 76.6% accuracy. The results demonstrated that the use of frontal sinus index and area for sex discrimination was more accurate than using the frontal sinus index alone. Copyright © 2017. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Elawadi, Eslam; Zaman, Haider; Batayneh, Awni; Mogren, Saad; Laboun, Abdalaziz; Ghrefat, Habes; Zumlot, Taisser
2013-09-01
The Ifal (Midyan) Basin is one of the well defined basins along the Red Sea coast, north-western Saudi Arabia. Location, geometry, thick sedimentary cover and structural framework qualify this basin for groundwater, oil and mineral occurrences. In spite of being studied by two airborne magnetic surveys during 1962 and 1983, structural interpretation of the area from a magnetic perspective, and its uses for hydrogeological and environmental investigations, has not been attempted. This work thus presents interpretation of the aeromagnetic data for basement depth estimation and tectonic framework delineation, which both have a role in controlling groundwater flow and accumulation in the Ifal Basin. A maximum depth of 3.5km is estimated for the basement surface by this study. In addition, several faulted and tilted blocks, perpendicularly dissected by NE-trending faults, are delineated within the structural framework of the study area. It is also observed that the studied basin is bounded by NW- and NE-trending faults. All these multi-directional faults/fracture systems in the Ifal Basin could be considered as conduits for groundwater accumulation, but with a possibility of environmental contamination from the surrounding soils and rock bodies.
A Cadaveric Analysis of the Optimal Radiographic Angle for Evaluating Trochlear Depth.
Weinberg, Douglas Stanley; Gilmore, Allison; Guraya, Sahejmeet S; Wang, David M; Liu, Raymond W
2017-02-01
Disorders of the patellofemoral joint are common. Diagnosis and management often involve the use of tangential imaging of the patella and trochlear groove, with the sunrise projection being the most common. However, imaging protocols vary between institutions, and limited data exist to determine which radiographic projections provide optimal visualization of the trochlear groove at its deepest point. Plain radiographs of 48 cadaveric femora were taken at various beam-femur angles and the maximum trochlear depth was measured; a tilt-board apparatus was used to elevate the femur in 5-degree increments between 40 and 75 degrees. A corollary experiment was undertaken to investigate beam-femur angles osteologically: digital representations of each bone were created with a MicroScribe digitizer, and trochlear depth was measured on all specimens at beam-femur angles from 0 to 75 degrees. The results of the radiographic and digitizer experiments showed that the maximum trochlear groove depth occurred at a beam-femur angle of 50 degrees. These results suggest that the optimal beam-femur angle for visualizing maximum trochlear depth is 50 degrees. This is significantly lower than the beam-femur angle of 90 degrees typically used in the sunrise projection. Clinicians evaluating trochlear depth on sunrise projections may be underestimating maximal depth and evaluating a nonarticulating portion of the femur.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muscara, Joseph; Kupperman, David S.; Bakhtiari, Sasab
2002-07-01
This paper discusses round-robin exercises using the NRC steam generator (SG) mock-up at Argonne National Laboratory to assess inspection reliability. The purpose of the round robins was to assess the current reliability of SG tubing in-service inspection, determine the probability of detection (POD) as a function of flaw size or severity, and assess the capability for sizing of flaws. For the round robin and subsequent evaluation completed in 2001, eleven teams participated. Bobbin and rotating coil mock-up data collected by qualified industry personnel were evaluated. The mock-up contains hundreds of cracks and simulations of artifacts such as corrosion deposits and tube support plates that make detection and characterization of cracks more difficult in operating steam generators than in most laboratory situations. An expert task group from industry, Argonne National Laboratory, and the NRC has reviewed the signals from the laboratory-grown cracks used in the mock-up to ensure that they provide reasonable simulations of those obtained in the field. The mock-up contains 400 tube openings. Each tube contains nine 22.2-mm (7/8-in.) diameter, 30.5-cm (1-ft) long, Alloy 600 test sections. The flaws are located in the tube sheet near the roll transition zone (RTZ), in the tube support plate (TSP), and in the free span. The flaws are primarily intergranular stress corrosion cracks (axial and circumferential, ID and OD), though intergranular attack (IGA), wear, and fatigue cracks are also present, as well as cracks in dents. In addition to the simulated tube sheet and TSP, the mock-up has simulated sludge and magnetite deposits. A multiparameter eddy current algorithm, validated for mock-up flaws, provided a detailed isometric plot for every flaw and was used to establish the reference state of defects in the mock-up. The detection results for the 11 teams were used to develop POD curves as a function of maximum depth, voltage, and the parameter mp for the various types of flaws. The POD curves were represented as linear logistic curves, and the curve parameters were determined by the method of maximum likelihood. The effect of both statistical uncertainties inherent in sampling from distributions and uncertainties due to errors in the estimates of maximum depth and mp was investigated. The 95% one-sided confidence limits (OSL), which include errors in maximum depth estimates, are presented along with the POD curves. For the second round robin, a reconfigured mock-up is being used to evaluate the effectiveness of eddy current array probes. The primary emphasis is on the X-Probe. Progress with the X-Probe round robin is discussed in this paper.
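Fitting a POD curve as a logistic function of flaw depth by maximum likelihood can be sketched as below; the hit/miss data are invented placeholders, not round-robin results, and the parameterization is a generic two-parameter logistic rather than the exact form used in the study.

```python
# Hedged sketch of maximum-likelihood fitting of a logistic POD curve versus
# flaw depth; detection data are assumed placeholders.
import numpy as np
from scipy.optimize import minimize

depth = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)  # % through-wall
hits  = np.array([ 2,  3,  5,  8, 11, 13, 14, 15, 15])               # detections
trials = np.full_like(hits, 15)

def neg_log_likelihood(params):
    a, b = params
    p = 1.0 / (1.0 + np.exp(-(a + b * depth)))        # logistic POD model
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(hits * np.log(p) + (trials - hits) * np.log(1 - p))

res = minimize(neg_log_likelihood, x0=[-2.0, 0.05], method="Nelder-Mead")
a, b = res.x
print(f"POD = 50% at depth of about {-a / b:.1f}% through-wall")
```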
NASA Astrophysics Data System (ADS)
Suárez, Gerardo; Sánchez, Osvaldo
1996-01-01
Studies of locally recorded microearthquakes and the centroidal depths of the largest earthquakes analyzed using teleseismic data show that the maximum depth of thrust faulting along the Mexican subduction zone is anomalously shallow. This observed maximum depth of about 25 ± 5 km is about half of that observed in most subduction zones of the world. A leveling line that crosses the rupture zone of the 19 September 1985 Michoacan event was revisited after the earthquake and it shows anomalously low deformation during the earthquake. The comparison between the observed coseismic uplift and dislocation models of the seismogenic interplate contact that extend to depths ranging from 20 to 40 km shows that the maximum depth at which seismic slip took place is about 20 km. This unusually shallow and narrow zone of seismogenic coupling apparently results in the occurrence of thrust events along the Mexican subduction zone that are smaller than would be expected for a trench where a relatively young slab subducts at a rapid rate of relative motion. A comparison with the Chilean subduction zone shows that the plate interface in Mexico is half that in Chile, not only in the down-dip extent of the seismogenic zone of plate contact, but also in the distance of the trench from the coast and in the thickness of the upper continental plate. It appears that the narrow plate contact produced by this particular plate geometry in Mexico is the controlling variable defining the size of the largest characteristic earthquakes in the Mexican subduction zone.
Compositional characterization of asteroid (16) Psyche
NASA Astrophysics Data System (ADS)
Sanchez, Juan; Reddy, Vishnu; Shepard, Michael K.; Thomas, Cristina; Cloutis, Edward
2016-10-01
We present near-infrared spectra (0.7-2.5 microns) of asteroid (16) Psyche obtained with the NASA Infrared Telescope Facility. Rotationally resolved spectra were obtained during three nights between December 2015 and February 2016. These data have been combined with three-dimensional shape models of Psyche generated with the SHAPE software package (Magri et al. 2007). From each spectrum, the band center, band depth and spectral slope were measured. We found that the band center varies from 0.92 to 0.94 microns with rotation phase, with an average value of 0.932±0.006 microns. The band depth was found to vary from 1.0 to 1.5±0.1%. Spectral slope values range from 0.25 to 0.35±0.01 microns^-1 with rotation phase. We observed a possible anti-correlation between band depth and radar albedo. Using the band depth along with a new laboratory spectral calibration we estimated that Psyche has an average orthopyroxene abundance of 6±1%. The mass-deficit region of Psyche (longitudes ~ 0°-40°), characterized by having the highest radar albedo of the asteroid, also shows the highest value for the spectral slope and the minimum band depth, while the antipode of this region (longitudes ~ 180°-230°), where the radar albedo reaches its lowest value, shows a maximum in band depth and less steep spectral slopes. These results could suggest that the metal-poor antipode region has thicker regolith rich in pyroxene compared to the mass-deficit region.
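For readers wanting a concrete picture of how a band center and band depth are typically extracted, the sketch below removes a linear continuum across the ~0.9-micron band and locates the minimum of a polynomial fit. The shoulder windows and polynomial order are generic assumptions, not the parameters used by Sanchez et al.

```python
import numpy as np

def band_parameters(wavelength, reflectance, left=(0.70, 0.75), right=(1.4, 1.6)):
    """Estimate band center and fractional band depth of the ~0.9-micron band.

    A linear continuum is fit between the band shoulders, the spectrum is
    divided by it, and a 4th-order polynomial around the minimum gives the
    band center.  Inputs are numpy arrays in microns and reflectance units."""
    def mean_point(lo, hi):
        m = (wavelength >= lo) & (wavelength <= hi)
        return wavelength[m].mean(), reflectance[m].mean()

    (x1, y1), (x2, y2) = mean_point(*left), mean_point(*right)
    continuum = y1 + (y2 - y1) * (wavelength - x1) / (x2 - x1)
    removed = reflectance / continuum

    band = (wavelength > left[1]) & (wavelength < right[0])
    coeffs = np.polyfit(wavelength[band], removed[band], 4)
    grid = np.linspace(left[1], right[0], 2000)
    fit = np.polyval(coeffs, grid)
    center = grid[np.argmin(fit)]        # band center in microns
    depth = 1.0 - fit.min()              # fractional band depth
    return center, depth
```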
NASA Astrophysics Data System (ADS)
Dürig, Tobias; Gudmundsson, Magnus T.; Dellino, Pierfrancesco
2015-05-01
Two methods are introduced to estimate the depth of origin of ejecta trajectories (depth to the magma level in the conduit) and the diameter of a conduit in an erupting crater, using analysis of videos from the Eyjafjallajökull 2010 eruption to evaluate their applicability. Both methods rely on the identification of straight, initial trajectories of fast ejecta, observed near the crater rims before they are appreciably bent by air drag and gravity. In the first method, by tracking these straight trajectories and identifying a cut-off angle, the inner diameter and the depth level of the vent can be constrained. In the second method, the intersection point of straight trajectories from individual pulses is used to determine the maximum possible depth from which the tracked ejecta originated and the width of the region from which the pulses emanated. The two methods give nearly identical results for the depth to the magma level in the crater of Eyjafjallajökull on 8-10 May, 51 ± 7 m. The inner vent diameter, at the level of origin of the pulses and ejecta, is found to have been 8 to 15 m. These methods open up the possibility of feeding (near) real-time monitoring systems with otherwise inaccessible information about vent geometry during an ongoing eruption and help define important eruption source parameters.
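The second method reduces to a plane-geometry problem: extend each straight trajectory backwards and take the deepest pairwise intersection as a bound on the depth of origin. A minimal sketch of that geometry follows; the input format (a point and an inclination per trajectory in a 2-D crater cross-section) is an assumption for illustration only.

```python
import numpy as np
from itertools import combinations

def trajectory_intersections(points, angles_deg):
    """Intersect pairs of straight ejecta trajectories in a 2-D cross-section.

    Each trajectory is given by a point (x, z) on its visible path and its
    inclination from horizontal (degrees).  The deepest pairwise intersection
    bounds the maximum possible depth of origin; the spread of intersection
    x-coordinates bounds the width of the source region."""
    pts = np.asarray(points, float)
    dirs = np.column_stack([np.cos(np.radians(angles_deg)),
                            np.sin(np.radians(angles_deg))])
    hits = []
    for i, j in combinations(range(len(pts)), 2):
        # Solve p_i + t*d_i = p_j + s*d_j for (t, s)
        A = np.column_stack([dirs[i], -dirs[j]])
        if abs(np.linalg.det(A)) < 1e-9:           # (nearly) parallel trajectories
            continue
        t, _ = np.linalg.solve(A, pts[j] - pts[i])
        hits.append(pts[i] + t * dirs[i])
    hits = np.array(hits)
    return hits[:, 1].min(), np.ptp(hits[:, 0])    # deepest z, spread in x
```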
Shelly, David R.; Hardebeck, Jeanne L.
2010-01-01
We precisely locate 88 tremor families along the central San Andreas Fault using a 3D velocity model and numerous P and S wave arrival times estimated from seismogram stacks of up to 400 events per tremor family. Maximum tremor amplitudes vary along the fault by at least a factor of 7, with by far the strongest sources along a 25 km section of the fault southeast of Parkfield. We also identify many weaker tremor families, which have largely escaped prior detection. Together, these sources extend 150 km along the fault, beneath creeping, transitional, and locked sections of the upper crustal fault. Depths are mostly between 18 and 28 km, in the lower crust. Epicenters are concentrated within 3 km of the surface trace, implying a nearly vertical fault. A prominent gap in detectable activity is located directly beneath the region of maximum slip in the 2004 magnitude 6.0 Parkfield earthquake.
Characterization of highly multiplexed monolithic PET / gamma camera detector modules.
Pierce, L A; Pedemonte, S; DeWitt, D; MacDonald, L; Hunter, W C J; Van Leemput, K; Miyaoka, R
2018-03-29
PET detectors use signal multiplexing to reduce the total number of electronics channels needed to cover a given area. Using measured thin-beam calibration data, we tested a principal-component-based multiplexing scheme for scintillation detectors. The highly multiplexed detector signal is no longer amenable to standard calibration methodologies. In this study we report results of a prototype multiplexing circuit, and present a new method for calibrating the detector module with multiplexed data. A [Formula: see text] mm³ LYSO scintillation crystal was affixed to a position-sensitive photomultiplier tube with [Formula: see text] position outputs and one channel that is the sum of the other 64. The 65-channel signal was multiplexed in a resistive circuit, with 65:5 or 65:7 multiplexing. A 0.9 mm beam of 511 keV photons was scanned across the face of the crystal in a 1.52 mm grid pattern in order to characterize the detector response. New methods are developed to reject scattered events and perform depth estimation to characterize the detector response of the calibration data. Photon interaction position estimation of the testing data was performed using a Gaussian maximum likelihood estimator, and the resolution and scatter-rejection capabilities of the detector were analyzed. We found that the 7-channel multiplexing scheme (65:7 multiplexing ratio) with 1.67 mm depth bins had the best performance, with a beam contour of 1.2 mm FWHM (from the 0.9 mm beam) near the center of the crystal and 1.9 mm FWHM near the edge of the crystal. The positioned events followed the expected Beer-Lambert depth distribution. The proposed calibration and positioning method exhibited a scattered photon rejection rate that was a 55% improvement over the summed-signal energy-windowing method.
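A minimal sketch of the Gaussian maximum-likelihood positioning step is given below, assuming the calibration has been condensed into per-bin mean and standard-deviation look-up tables for the multiplexed channels; that look-up-table layout is an assumption for illustration, not the paper's exact calibration format.

```python
import numpy as np

def ml_position(event, mean_lut, std_lut):
    """Gaussian maximum-likelihood positioning for one multiplexed event.

    mean_lut, std_lut: arrays of shape (n_bins, n_channels) holding the
    calibrated mean and standard deviation of each multiplexed channel at each
    (x, y, depth) calibration bin; event: length-n_channels signal vector."""
    # Quantity proportional to the negative log-likelihood under independent
    # Gaussian channel noise (constant terms dropped).
    nll = np.sum(((event - mean_lut) / std_lut) ** 2 + 2.0 * np.log(std_lut), axis=1)
    return int(np.argmin(nll))   # index of the most likely calibration bin
```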
Geometries of geoelectrical structures in central Tibetan Plateau from INDEPTH magnetotelluric data
NASA Astrophysics Data System (ADS)
Vozar, J.; Jones, A. G.; Le Pape, F.
2012-12-01
Magnetotelluric (MT) data collected on N-S profiles crossing the Banggong-Nujiang Suture (BNS), which separates the Qiangtang and Lhasa Terranes in central Tibet, as part of the InterNational DEep Profiling of Tibet and the Himalaya (INDEPTH) project, are modeled with 2D and 3D inversion codes and the 1D petro-physical package LitMod. The modeling reveals regional resistive and conductive structures correlated with the ShuangHu Suture, the Tanggula Mountains, and strike-slip faults such as the BengCo-Jiali fault in the south. The BNS is not manifested in the geoelectrical models as a strong regional crustal structure. The strike azimuth of mid- and lower-crustal structures estimated from horizontal slices of the 3D model (N110°E) differs slightly from that estimated by 2D strike analysis (N100°E). The orientation of crustal structures is perpendicular to the convergence direction in this area. The deepest lower-crustal conductors correlate with areas of maximum Moho depth obtained from satellite gravity data. Anisotropic 2D modeling reveals that the lower-crustal conductor in the Lhasa Terrane is anisotropic, which can be interpreted as evidence for crustal channel flow below the Lhasa Terrane. However, the same Lhasa lower-crustal conductor in the isotropic 3D model is more plausibly interpreted as a 3D lower Indian crust structure, located to the east of line 500, than as geoelectrically anisotropic crustal flow. From deep electromagnetic sounding, supported by an independent integrated petro-physical investigation, we estimate an upper-mantle conductive layer at depths of 200-250 km below the Lhasa Terrane and a less resistive Tibetan lithosphere below the Qiangtang Terrane, with a conductive upper mantle at depths of about 120 km.
Webb, R.M.T.; Wieczorek, M.E.; Nolan, B.T.; Hancock, T.C.; Sandstrom, M.W.; Barbash, J.E.; Bayless, E.R.; Healy, R.W.; Linard, J.
2008-01-01
Pesticide leaching through variably thick soils beneath agricultural fields in Morgan Creek, Maryland was simulated for water years 1995 to 2004 using LEACHM (Leaching Estimation and Chemistry Model). Fifteen individual models were constructed to simulate five depths and three crop rotations with associated pesticide applications. Unsaturated zone thickness averaged 4.7 m but reached a maximum of 18.7 m. Average annual recharge to ground water decreased from 15.9 to 11.1 cm as the unsaturated zone increased in thickness from 1 to 10 m. These point estimates of recharge are at the lower end of previously published values, which used methods that integrate over larger areas capturing focused recharge in the numerous detention ponds in the watershed. The total applied and leached masses for five parent pesticide compounds and seven metabolites were estimated for the 32-km² Morgan Creek watershed by associating each hectare with the closest one-dimensional model analog of model depth and crop rotation scenario as determined from land-use surveys. LEACHM parameters were set such that branched, serial, first-order decay of pesticides and metabolites was realistically simulated. Leaching is predicted to be greatest for shallow soils and for persistent compounds with low sorptivity. Based on simulation results, the percent of parent compounds leached within the watershed can be described by a regression model of the form e^(−depth) (a ln t_1/2 − b ln K_OC), where t_1/2 is the degradation half-life in aerobic soils, K_OC is the organic carbon normalized sorption coefficient, and a and b are fitted coefficients (R² = 0.86, p value = 7 × 10⁻⁹).
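Under the reading of the regression form given above, the predictor can be sketched as follows; the coefficients a and b are watershed-specific fitted values that are not reported in the abstract, so any arguments passed here are placeholders.

```python
import numpy as np

def percent_parent_leached(depth_m, half_life_days, koc, a, b):
    """Regression form reported for Morgan Creek (as read from the abstract):
    percent leached ~ exp(-depth) * (a*ln(t_1/2) - b*ln(K_OC)).
    The coefficients a and b are fitted values not given in the abstract;
    values supplied to this sketch are for illustration only."""
    return np.exp(-depth_m) * (a * np.log(half_life_days) - b * np.log(koc))
```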
Nonlinear attenuation of S-waves and Love waves within ambient rock
NASA Astrophysics Data System (ADS)
Sleep, Norman H.; Erickson, Brittany A.
2014-04-01
We obtain scaling relationships for nonlinear attenuation of S-waves and Love waves within sedimentary basins to assist numerical modeling. These relationships constrain the past peak ground velocity (PGV) of strong 3-4 s Love waves from San Andreas events within Greater Los Angeles, as well as the maximum PGV of future waves that can propagate without strong nonlinear attenuation. During each event, the shaking episode cracks the stiff, shallow rock. Over multiple events, this repeated damage in the upper few hundred meters leads to self-organization of the shear modulus. Dynamic strain is PGV divided by phase velocity, and dynamic stress is strain times the shear modulus. The frictional yield stress is proportional to depth times the effective coefficient of friction. At the eventual quasi-steady self-organized state, the shear modulus increases linearly with depth, allowing inference of past typical PGV where rock over the damaged depth range barely reaches frictional failure. Still greater future PGV would cause frictional failure throughout the damaged zone, nonlinearly attenuating the wave. Assuming self-organization has taken place, the estimated maximum past PGV within the Greater Los Angeles Basins is 0.4-2.6 m s⁻¹. The upper part of this range includes regions of accumulating sediments with low S-wave velocity that may have not yet compacted, rather than having been damaged by strong shaking. Published numerical models indicate that strong Love waves from the San Andreas Fault pass through Whittier Narrows. Within this corridor, deep drawdown of the water table from its currently shallow and preindustrial levels would nearly double the PGV of Love waves reaching Downtown Los Angeles.
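The scaling argument above can be turned into a one-line estimate of the largest PGV a wave can carry at a given depth before the dynamic stress reaches the frictional yield stress. The sketch below assumes generic values for the effective friction coefficient and rock density; it is not the authors' calibration.

```python
def pgv_threshold(depth_m, shear_modulus_pa, phase_velocity_ms,
                  mu_eff=0.6, rho=2500.0, g=9.81):
    """Scaling used in the abstract: dynamic stress G*(PGV/c) reaches the
    frictional yield stress mu_eff*rho*g*z.  Solving for PGV gives the largest
    velocity a Love wave can carry at depth z without nonlinear attenuation.
    mu_eff and rho are generic assumed values, not fitted parameters."""
    yield_stress = mu_eff * rho * g * depth_m
    return yield_stress * phase_velocity_ms / shear_modulus_pa
```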
Stress Study on Southern Segment of Longmenshan Fault Constrained by Focal Mechanism Data
NASA Astrophysics Data System (ADS)
Yang, Y.; Liang, C.; Su, J.; Zhou, L.
2016-12-01
The Longmenshan fault (LMSF) lies at the eastern margin of the Tibetan plateau and constitutes the boundary between the active Bayankala block and the rigid Sichuan basin. This fault was misinterpreted as an inactive fault before the great Wenchuan earthquake. Five years after that devastating event, the Lushan MS 7.0 earthquake struck the southern segment of the LMSF but ruptured over a very limited scale, leaving a seismic gap between the two earthquakes. In this study, we determined focal mechanisms of earthquakes with magnitude M≥3 from January 2008 to July 2014 in the southern segment of the LMSF, and then applied a damped linear inversion to derive the regional stress field from the focal mechanisms. Focal mechanisms of 755 earthquakes in total were determined. We further used the damped linear inversion technique to produce a 2D stress map of the upper crust in the study region. A dominant thrust regime is determined south of the seismic gap, with a horizontal maximum compression oriented NWW-SEE, whereas the area to the north of the seismic gap is characterized by a much more complex stress environment. To the west of Dujiangyan city, there appears to be a seismic gap in the Pengguan complex. The maximum compressions show anti-clockwise and clockwise patterns to the south and north of this small gap, respectively; thus the small gap seems to be an asperity that causes the maximum compression to rotate around it. When the maximum compression pattern is combined with the focal solutions of strong earthquakes (Mw≥5) in this region, two of those strong earthquakes located near the back-range fault have strikes parallel to the Miyaluo fault. Considering the large number of earthquakes on the Lixian branch, the Miyaluo fault may have extended to the LMSF following the great Wenchuan earthquake. Investigations of the stress field at different depths indicate complex spatial variations. The Pengguan complex is almost aseismic at shallow depth in its central part. At greater depth, the maximum compressions show NNW-SSE and NE-SW directions to the north and south of the seismic gap, respectively; these directions are surprisingly different from those at shallower depth. The variation of the maximum compression with depth may therefore imply that the movement at depth is decoupled from the movement at shallow depth. This work was partially supported by the National Natural Science Foundation of China (41340009).
Optimal combination of illusory and luminance-defined 3-D surfaces: A role for ambiguity.
Hartle, Brittney; Wilcox, Laurie M; Murray, Richard F
2018-04-01
The shape of the illusory surface in stereoscopic Kanizsa figures is determined by the interpolation of depth from the luminance edges of adjacent inducing elements. Despite ambiguity in the position of illusory boundaries, observers reliably perceive a coherent three-dimensional (3-D) surface. However, this ambiguity may contribute additional uncertainty to the depth percept beyond what is expected from measurement noise alone. We evaluated the intrinsic ambiguity of illusory boundaries by using a cue-combination paradigm to measure the reliability of depth percepts elicited by stereoscopic illusory surfaces. We assessed the accuracy and precision of depth percepts using 3-D Kanizsa figures relative to luminance-defined surfaces. The location of the surface peak was defined by illusory boundaries, luminance-defined edges, or both. Accuracy and precision were assessed using a depth-discrimination paradigm. A maximum likelihood linear cue combination model was used to evaluate the relative contribution of illusory and luminance-defined signals to the perceived depth of the combined surface. Our analysis showed that the standard deviation of depth estimates was consistent with an optimal cue combination model, but the points of subjective equality indicated that observers consistently underweighted the contribution of illusory boundaries. This systematic underweighting may reflect a combination rule that attributes additional intrinsic ambiguity to the location of the illusory boundary. Although previous studies show that illusory and luminance-defined contours share many perceptual similarities, our model suggests that ambiguity plays a larger role in the perceptual representation of illusory contours than of luminance-defined contours.
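The optimal benchmark in such studies is the standard inverse-variance (maximum-likelihood) linear cue combination rule, sketched below; the weights here are the ideal-observer predictions, not the empirically fitted weights reported in the paper.

```python
def mle_cue_combination(depth_illusory, sigma_illusory,
                        depth_luminance, sigma_luminance):
    """Standard maximum-likelihood linear cue combination: each cue is weighted
    by its inverse variance, and the combined estimate has lower variance than
    either cue alone.  Returns (combined depth estimate, combined sigma)."""
    w_i = sigma_illusory ** -2
    w_l = sigma_luminance ** -2
    combined = (w_i * depth_illusory + w_l * depth_luminance) / (w_i + w_l)
    combined_sigma = (w_i + w_l) ** -0.5
    return combined, combined_sigma
```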
Variability of aerosol optical depth and aerosol radiative forcing over Northwest Himalayan region
NASA Astrophysics Data System (ADS)
Saheb, Shaik Darga; Kant, Yogesh; Mitra, D.
2016-05-01
In recent years, the aerosol loading over India has been increasing, which has a significant impact on weather and climatic conditions. The present study analyses the temporal (monthly and seasonal) variation of aerosol optical depth (AOD) from ground-based sun photometer observations and estimates the aerosol radiative forcing and heating rate over the station Dehradun in the northwestern Himalayas, India, during 2015. The in-situ measurements show that the maximum seasonal average AOD occurs during the summer season (AOD at 500 nm ≈ 0.59 ± 0.27, with an average Angstrom exponent α ≈ 0.86), while the minimum occurs during the winter season (AOD at 500 nm ≈ 0.33 ± 0.10, with α ≈ 1.18). The MODIS- and MISR-derived AOD were also compared with the ground-measured values and found to be in good agreement. Analysis of air mass back trajectories using the HYSPLIT model reveals transport of desert dust during the summer months. The Optical Properties of Aerosols and Clouds (OPAC) model was used to compute aerosol optical properties such as single scattering albedo (SSA), Angstrom coefficient (α) and asymmetry parameter (g) for each day of measurement, and these were incorporated into a discrete ordinate radiative transfer model, the Santa Barbara DISORT Atmospheric Radiative Transfer (SBDART) model, to estimate the direct shortwave (0.25 to 4 μm) aerosol radiative forcing at the surface (SUR), at the top of the atmosphere (TOA) and in the atmosphere (ATM). The maximum aerosol radiative forcing (ARF) was observed during the summer months, at SUR ≈ -56.42 W/m² and at TOA ≈ -21.62 W/m², leaving ≈ +34.79 W/m² in the ATM, corresponding to a heating rate of 1.24 °C/day in the lower atmosphere.
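For orientation, the atmospheric forcing quoted above maps onto a heating rate through the usual first-order relation dT/dt = (g/c_p)(ΔF_ATM/ΔP). The sketch below assumes a generic 300 hPa lower-atmosphere layer, which is not necessarily the layer thickness used in the study.

```python
def heating_rate(delta_f_atm_wm2, delta_p_hpa=300.0, g=9.81, cp=1004.0):
    """First-order aerosol heating rate (K/day) from the atmospheric forcing:
    dT/dt = (g / c_p) * (dF_ATM / dP) * 86400.  The 300 hPa layer thickness is
    an assumed lower-atmosphere column, not the value used in the study."""
    return (g / cp) * (delta_f_atm_wm2 / (delta_p_hpa * 100.0)) * 86400.0

# With the reported +34.79 W/m2 atmospheric forcing this gives roughly 1 K/day,
# the same order as the 1.24 degC/day quoted in the abstract.
```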
Curie point depth in the SW Caribbean using the radially averaged spectra of magnetic anomalies
NASA Astrophysics Data System (ADS)
Salazar, Juan M.; Vargas, Carlos A.; Leon, Hermann
2017-01-01
We have estimated the Curie Point Depth (CPD) using the radially averaged power spectrum in a tectonically complex area located in the SW Caribbean basin. The data analyzed came from the World Digital Magnetic Anomaly Map, and three methods have been used to compare results and evaluate uncertainties: Centroid, Spectral Peak, and Forward Modeling. Results show agreement among the three methods, suggesting that the CPD values in the area range between 6 km and 50 km. The results share the following characteristics: A) High values (> 30 km) occur in continental regions; B) There is a trend of maximum CPD values along the SW-NE direction, from the Central Cordillera in Colombia to Maracaibo Lake in Venezuela; C) There are CPD maxima at the Sierra Nevada de Santa Marta (Colombia) as well as at the Costa Rica - Nicaragua and Nicaragua - Honduras borders. The lowest CPD values (< 20 km) are associated with the coastal regions and offshore. We also tested the results by estimating the geothermal gradient and comparing it with measured observations in the study area. Our results suggest at least five thermal terrains in the SW Caribbean Basin: A) The area comprising the Venezuela Basin, the Beata Ridge and the Colombia Basin up to the longitude of the Providencia Throat; B) The area that includes zones to the north of the Cocos Ridge and the Panama Basin up to the trench; C) The orogenic region of the northern Andes, including areas of the Santa Marta Massif; D) The continental sector that encompasses Nicaragua, northern Costa Rica and eastern Honduras; E) The areas of northern Venezuela and Colombia, NW Colombia, the Panamanian territory and the transition zones between the Upper and Lower Nicaragua Rise.
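A compact version of the centroid method mentioned above is sketched here: the depth to the top of the magnetic layer and the centroid depth are read from the slopes of the radially averaged spectrum over user-chosen wavenumber bands, and the Curie point depth follows as Zb = 2*Z0 - Zt. The band limits and the wavenumber convention (rad/km) are assumptions of this sketch.

```python
import numpy as np

def curie_point_depth(k, power, top_band, centroid_band):
    """Centroid method: the slope of ln(sqrt(P)) vs k over a high-wavenumber
    band gives the depth to the top of the magnetic layer Zt, the slope of
    ln(sqrt(P)/k) vs k over a low-wavenumber band gives the centroid depth Z0,
    and the Curie point depth is Zb = 2*Z0 - Zt.  k is in rad/km, so the
    returned depth is in km."""
    def slope(x, y, band):
        m = (x >= band[0]) & (x <= band[1])
        return np.polyfit(x[m], y[m], 1)[0]

    zt = -slope(k, np.log(np.sqrt(power)), top_band)
    z0 = -slope(k, np.log(np.sqrt(power) / k), centroid_band)
    return 2.0 * z0 - zt
```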
SWEAT: Snow Water Equivalent with AlTimetry
NASA Astrophysics Data System (ADS)
Agten, Dries; Benninga, Harm-Jan; Diaz Schümmer, Carlos; Donnerer, Julia; Fischer, Georg; Henriksen, Marie; Hippert Ferrer, Alexandre; Jamali, Maryam; Marinaci, Stefano; Mould, Toby JD; Phelan, Liam; Rosker, Stephanie; Schrenker, Caroline; Schulze, Kerstin; Emanuel Telo Bordalo Monteiro, Jorge
2017-04-01
To study how the water cycle changes over time, satellite and airborne remote sensing missions are typically employed. Over the last 40 years of satellite missions, the measurement of true water inventories stored in sea and land ice within the cryosphere has been significantly hindered by uncertainties introduced by snow cover. Being able to determine the thickness of this snow cover would reduce such error, improving current estimations of hydrological and climate models, Earth's energy balance (albedo) calculations and flood predictions. Therefore, the target of the SWEAT (Snow Water Equivalent with AlTimetry) mission is to directly measure the surface Snow Water Equivalent (SWE) on sea and land ice within the polar regions above 60° and below -60° latitude. There are no other satellite missions currently capable of directly measuring SWE. In order to achieve this, the proposed mission will implement a novel combination of Ka- and Ku-band radar altimeters (active microwave sensors), capable of penetrating into the snow microstructure. The Ka-band altimeter (λ ≈ 0.8 cm) provides a low maximum snowpack penetration depth of up to 20 cm for dry snow at 37 GHz, since the volume scattering of snow dominates over the scattering caused by the underlying ice surface. In contrast, the Ku-band altimeter (λ ≈ 2 cm) provides a high maximum snowpack penetration depth of up to 15 m in high-latitude regions with dry snow, as volume scattering is decreased by a factor of 55. The difference between the Ka- and Ku-band signal penetration results will provide a more accurate and direct determination of SWE. The SWEAT mission therefore aims to improve estimations of global SWE interpreted from passive microwave products, and to improve the reliability of numerical snow and climate models.
Distribution and Rate of Methane Oxidation in Sediments of the Florida Everglades †
King, Gary M.; Roslev, Peter; Skovgaard, Henrik
1990-01-01
Rates of methane emission from intact cores were measured during anoxic dark and oxic light and dark incubations. Rates of methane oxidation were calculated on the basis of oxic incubations by using the anoxic emissions as an estimate of the maximum potential flux. This technique indicated that methane oxidation consumed up to 91% of the maximum potential flux in peat sediments but that oxidation was negligible in marl sediments. Oxygen microprofiles determined for intact cores were comparable to profiles measured in situ. Thus, the laboratory incubations appeared to provide a reasonable approximation of in situ activities. This was further supported by the agreement between measured methane fluxes and fluxes predicted on the basis of methane profiles determined by in situ sampling of pore water. Methane emissions from peat sediments, oxygen concentrations and penetration depths, and methane concentration profiles were all sensitive to light-dark shifts as determined by a combination of field and laboratory analyses. Methane emissions were lower and oxygen concentrations and penetration depths were higher under illuminated than under dark conditions; the profiles of methane concentration changed in correspondence to the changes in oxygen profiles, but the estimated flux of methane into the oxic zone changed negligibly. Sediment-free, root-associated methane oxidation showed a pattern similar to that for methane oxidation in the core analyses: no oxidation was detected for roots growing in marl sediment, even for roots of Cladium jamaicense, which had the highest activity for samples from peat sediments. The magnitude of the root-associated oxidation rates indicated that belowground plant surfaces may not markedly increase the total capacity for methane consumption. However, the data collectively support the notion that the distribution and activity of methane oxidation have a major impact on the magnitude of atmospheric fluxes from the Everglades. PMID:16348299
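The core arithmetic of the oxidation estimate is simply the difference between the anoxic (maximum potential) flux and the flux measured under oxic conditions; a trivial sketch is given below with hypothetical argument names.

```python
def methane_oxidation(anoxic_flux, oxic_flux):
    """Oxidation inferred as the anoxic ('maximum potential') methane flux minus
    the flux observed under oxic conditions, returned together with the percent
    of the potential flux consumed (up to 91% in the peat cores reported here).
    Fluxes should share the same units, e.g. mmol CH4 m-2 d-1."""
    consumed = anoxic_flux - oxic_flux
    return consumed, 100.0 * consumed / anoxic_flux
```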
Depth-estimation-enabled compound eyes
NASA Astrophysics Data System (ADS)
Lee, Woong-Bi; Lee, Heung-No
2018-04-01
Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.
Yun, Joho; Kim, Hyeon Woo; Lee, Jong-Hyun
2016-01-01
A micro electrical impedance spectroscopy (EIS)-on-a-needle for depth profiling (μEoN-DP) with a selective passivation layer (SPL) on a hypodermic needle was recently fabricated to measure the electrical impedance of biotissues along with the penetration depths. The SPL of the μEoN-DP enabled the sensing interdigitated electrodes (IDEs) to contribute predominantly to the measurement by reducing the relative influence of the connection lines on the sensor output. The discrimination capability of the μEoN-DP was verified using phosphate-buffered saline (PBS) at various concentration levels. The resistance and capacitance extracted through curve fitting were similar to those theoretically estimated based on the mixing ratio of PBS and deionized water; the maximum discrepancies were 8.02% and 1.85%, respectively. Depth profiling was conducted using four-layered porcine tissue to verify the effectiveness of the discrimination capability of the μEoN-DP. The magnitude and phase between dissimilar porcine tissues (fat and muscle) were clearly discriminated at the optimal frequency of 1 MHz. Two kinds of simulations, one with SPL and the other with complete passivation layer (CPL), were performed, and it was verified that the SPL was advantageous over CPL in the discrimination of biotissues in terms of sensor output. PMID:28009845
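As an illustration of the curve-fitting step mentioned above, the sketch below fits a single parallel R-C equivalent circuit, Z(f) = R / (1 + j·2πf·RC), to a measured impedance spectrum. The single-RC model and the starting values are assumptions; the μEoN-DP equivalent circuit may include additional elements for the connection lines.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_parallel_rc(freq_hz, z_complex):
    """Fit a parallel R-C equivalent circuit, Z = R / (1 + j*w*R*C), to a
    measured complex impedance spectrum and return (R, C)."""
    w = 2.0 * np.pi * np.asarray(freq_hz, float)

    def model(w, r, c):
        z = r / (1.0 + 1j * w * r * c)
        return np.concatenate([z.real, z.imag])   # fit real and imaginary parts jointly

    data = np.concatenate([z_complex.real, z_complex.imag])
    (r, c), _ = curve_fit(model, w, data, p0=[1e4, 1e-10], maxfev=10000)
    return r, c
```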
Acoustic tracking of sperm whales in the Gulf of Alaska using a two-element vertical array and tags.
Mathias, Delphine; Thode, Aaron M; Straley, Jan; Andrews, Russel D
2013-09-01
Between 15 and 17 August 2010, a simple two-element vertical array was deployed off the continental slope of Southeast Alaska in 1200 m water depth. The array was attached to a vertical buoy line used to mark each end of a longline fishing set, at 300 m depth, close to the sound-speed minimum of the deep-water profile. The buoy line also served as a depredation decoy, attracting seven sperm whales to the area. One animal was tagged with both a LIMPET dive depth-transmitting satellite and bioacoustic "B-probe" tag. Both tag datasets were used as an independent check of various passive acoustic schemes for tracking the whale in depth and range, which exploited the elevation angles and relative arrival times of multiple ray paths recorded on the array. Analytical tracking formulas were viable up to 2 km range, but only numerical propagation models yielded accurate locations up to at least 35 km range at Beaufort sea state 3. Neither localization approach required knowledge of the local bottom bathymetry. The tracking system was successfully used to estimate the source level of an individual sperm whale's "clicks" and "creaks" and predict the maximum detection range of the signals as a function of sea state.
NASA Astrophysics Data System (ADS)
Beckmann, Aike; Hense, Inga
2007-12-01
This study considers an important biome in aquatic environments, the subsurface ecosystem that evolves under low mixing conditions, from a theoretical point of view. Employing a conceptual model that involves phytoplankton, a limiting nutrient and sinking detritus, we use a set of key characteristics (thickness, depth, biomass amplitude/productivity) to qualitatively and quantitatively describe subsurface biomass maximum layers (SBMLs) of phytoplankton. These SBMLs are defined by the existence of two community compensation depths in the water column, which confine the layer of net community production; their depth coincides with the upper nutricline. Analysing the results of a large ensemble of simulations with a one-dimensional numerical model, we explore the parameter dependencies to obtain fundamental steady-state relationships that connect primary production, mortality and grazing, remineralization, vertical diffusion and detrital sinking. As a main result, we find that we can distinguish between factors that determine the vertically integrated primary production and others that affect only depth and shape (thickness and biomass amplitude) of this subsurface production layer. A simple relationship is derived analytically, which can be used to estimate the steady-state primary productivity in the subsurface oligotrophic ocean. The fundamental nature of the results provides further insight into the dynamics of these “hidden” ecosystems and their role in marine nutrient cycling.
Huizinga, Richard J.; Rydlund, Jr., Paul H.
2004-01-01
The evaluation of scour at bridges throughout the state of Missouri has been ongoing since 1991 in a cooperative effort by the U.S. Geological Survey and Missouri Department of Transportation. A variety of assessment methods have been used to identify bridges susceptible to scour and to estimate scour depths. A potential-scour assessment (Level 1) was used at 3,082 bridges to identify bridges that might be susceptible to scour. A rapid estimation method (Level 1+) was used to estimate contraction, pier, and abutment scour depths at 1,396 bridge sites to identify bridges that might be scour critical. A detailed hydraulic assessment (Level 2) was used to compute contraction, pier, and abutment scour depths at 398 bridges to determine which bridges are scour critical and would require further monitoring or application of scour countermeasures. The rapid estimation method (Level 1+) was designed to be a conservative estimator of scour depths compared to depths computed by a detailed hydraulic assessment (Level 2). Detailed hydraulic assessments were performed at 316 bridges that also had received a rapid estimation assessment, providing a broad data base to compare the two scour assessment methods. The scour depths computed by each of the two methods were compared for bridges that had similar discharges. For Missouri, the rapid estimation method (Level 1+) did not provide a reasonable conservative estimate of the detailed hydraulic assessment (Level 2) scour depths for contraction scour, but the discrepancy was the result of using different values for variables that were common to both of the assessment methods. The rapid estimation method (Level 1+) was a reasonable conservative estimator of the detailed hydraulic assessment (Level 2) scour depths for pier scour if the pier width is used for piers without footing exposure and the footing width is used for piers with footing exposure. Detailed hydraulic assessment (Level 2) scour depths were conservatively estimated by the rapid estimation method (Level 1+) for abutment scour, but there was substantial variability in the estimates and several substantial underestimations.
NASA Astrophysics Data System (ADS)
Weston, Keith; Jickells, Timothy D.; Carson, Damien S.; Clarke, Andrew; Meredith, Michael P.; Brandon, Mark A.; Wallace, Margaret I.; Ussher, Simon J.; Hendry, Katharine R.
2013-05-01
A study was carried out to assess primary production and associated export flux in the coastal waters of the western Antarctic Peninsula at an oceanographic time-series site. New, i.e., exportable, primary production in the upper water-column was estimated in two ways; by nutrient deficit measurements, and by primary production rate measurements using separate 14C-labelled radioisotope and 15N-labelled stable isotope uptake incubations. The resulting average annual exportable primary production estimates at the time-series site from nutrient deficit and primary production rates were 13 and 16 mol C m-2, respectively. Regenerated primary production was measured using 15N-labelled ammonium and urea uptake, and was low throughout the sampling period. The exportable primary production measurements were compared with sediment trap flux measurements from 2 locations; the time-series site and at a site 40 km away in deeper water. Results showed ˜1% of the upper mixed layer exportable primary production was exported to traps at 200 m depth at the time-series site (total water column depth 520 m). The maximum particle flux rate to sediment traps at the deeper offshore site (total water column depth 820 m) was lower than the flux at the coastal time-series site. Flux of particulate organic carbon was similar throughout the spring-summer high flux period for both sites. Remineralisation of particulate organic matter predominantly occurred in the upper water-column (<200 m depth), with minimal remineralisation below 200 m, at both sites. This highly productive region on the Western Antarctic Peninsula is therefore best characterised as 'high recycling, low export'.
NASA Astrophysics Data System (ADS)
Boisson, Guillaume; Kerbiriou, Paul; Drazic, Valter; Bureller, Olivier; Sabater, Neus; Schubert, Arno
2014-03-01
Generating depth maps along with video streams is valuable for cinema and television production. Thanks to improvements in depth acquisition systems, the challenge of fusing depth sensing and disparity estimation is widely investigated in computer vision. This paper presents a new framework for generating depth maps from a rig made of a professional camera with two satellite cameras and a Kinect device. A new disparity-based calibration method is proposed so that registered Kinect depth samples become perfectly consistent with disparities estimated between rectified views. Also, a new hierarchical fusion approach is proposed for combining on-the-fly depth sensing and disparity estimation in order to circumvent their respective weaknesses. Depth is determined by minimizing a global energy criterion that takes into account the matching reliability and the consistency with the Kinect input. The depth maps thus generated are relevant both in uniform and textured areas, without holes due to occlusions or structured-light shadows. Our GPU implementation reaches 20 fps for generating quarter-pel accurate HD720p depth maps along with the main view, which is close to real-time performance for video applications. The estimated depth is of high quality and suitable for 3D reconstruction or virtual view synthesis.
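A per-pixel caricature of the fusion idea is given below: among a set of depth hypotheses, choose the one minimizing a stereo matching cost plus a Kinect-consistency penalty weighted by the matching reliability. The actual method minimizes a global (spatially regularized) energy, so this sketch only illustrates the data terms; all names and the weighting are assumptions.

```python
import numpy as np

def fuse_depth(disparity_costs, kinect_depth, candidate_depths, reliability, lam=1.0):
    """Per-pixel sketch of depth/disparity fusion: pick the candidate depth that
    minimizes a matching cost plus a Kinect-consistency penalty.  disparity_costs
    and candidate_depths are arrays of equal length; reliability and lam are
    scalar weights chosen for illustration."""
    consistency = (candidate_depths - kinect_depth) ** 2
    energy = reliability * disparity_costs + lam * consistency
    return candidate_depths[np.argmin(energy)]
```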
Comparison of Soil Quality Index Using Three Methods
Mukherjee, Atanu; Lal, Rattan
2014-01-01
Assessment of management-induced changes in soil quality is important to sustaining high crop yield. The large diversity of cultivated soils necessitates the identification and development of an appropriate soil quality index (SQI) based on relative soil properties and crop yield. Whereas numerous attempts have been made to estimate SQI for major soils across the world, no standard method has been established, and thus a strong need exists for developing a user-friendly and credible SQI through comparison of the various available methods. Therefore, the objective of this article is to compare three widely used methods to estimate SQI using data collected from 72 soil samples from three on-farm study sites in Ohio. An additional challenge lies in establishing a correlation between crop yield and SQI calculated either depth-wise or for combined soil layers, as a standard methodology is not yet available and this has received little attention to date. Predominant soils of the study included one organic (Mc) and two mineral (CrB, Ko) soils. The three methods used to estimate SQI were: (i) simple additive SQI (SQI-1), (ii) weighted additive SQI (SQI-2), and (iii) statistically modeled SQI (SQI-3) based on principal component analysis (PCA). The SQI varied between treatments and soil types and ranged between 0-0.9 (1 being the maximum SQI). In general, SQIs did not differ significantly among depths under any method, suggesting that soil quality did not differ significantly between depths at the studied sites. Additionally, the data indicate that SQI-3 was most strongly correlated with crop yield, with correlation coefficients ranging between 0.74-0.78. All three SQIs were significantly correlated (r = 0.92-0.97) with each other and with crop yield (r = 0.65-0.79). Separate analyses by crop variety revealed that the correlation was low, indicating that some key aspects of soil quality related to crop response are important requirements for estimating SQI. PMID:25148036
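As a concrete example of the second method, a weighted additive SQI can be sketched as below: each indicator is scored onto 0-1 and the scores are combined as a weighted sum. The linear min-max scoring and the example weights are assumptions; the paper's exact scoring functions and weights are not given in the abstract.

```python
import numpy as np

def weighted_additive_sqi(indicator_values, weights, lower_is_better=None):
    """Weighted additive SQI ('SQI-2' style): score each soil indicator onto
    [0, 1] by linear min-max scaling, invert indicators for which lower values
    are better, and combine them as a weighted sum per sample."""
    x = np.asarray(indicator_values, float)      # shape (n_samples, n_indicators)
    w = np.asarray(weights, float)
    w = w / w.sum()                              # normalize weights to sum to 1
    scores = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))
    if lower_is_better is not None:
        scores[:, lower_is_better] = 1.0 - scores[:, lower_is_better]
    return scores @ w                            # SQI in [0, 1] for each sample
```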
Numerical Study of Mechanical Response of Pure Titanium during Shot Peening
NASA Astrophysics Data System (ADS)
Wang, Y. M.; Cheng, J. P.; Yang, H. P.; Zhang, C. H.
2018-05-01
The mechanical response of pure titanium impacted by a steel ball was simulated using the finite element method to investigate stress and strain evolution during shot peening. The results indicate that a biaxial residual stress was obtained in the surface layer, while in the interior a triaxial residual stress existed because S33 was comparable to S11 and S22. Closer to the top surface, the stress during impact was higher, but the extent of stress relief when the ball rebounded was also more significant; therefore the maximum residual stress formed in the subsurface layer at a depth of 130 μm. As for the residual strain, the maximum residual strain LE33 was obtained at a depth of 60 μm, corresponding to the maximum shear stress during impact.
NASA Astrophysics Data System (ADS)
Cherkasheva, A.; Nöthig, E.-M.; Bauerfeind, E.; Melsheimer, C.; Bracher, A.
2013-04-01
Current estimates of global marine primary production range over a factor of two. Improving these estimates requires accurate knowledge of the chlorophyll vertical profiles, since they are the basis for most primary production models. At high latitudes, the uncertainty in primary production estimates is larger than globally, because here phytoplankton absorption shows specific characteristics due to low-light adaptation, and in situ data and ocean colour observations are scarce. To date, studies describing the typical chlorophyll profile based on the chlorophyll in the surface layer have not included the Arctic region, or, if it was included, the dependence of the profile shape on surface concentration was neglected. The goal of our study was to derive and describe the typical Greenland Sea chlorophyll profiles, categorized according to the chlorophyll concentration in the surface layer and further resolved by month. The Greenland Sea was chosen because it is known to be one of the most productive regions of the Arctic and is among the regions in the Arctic where most chlorophyll field data are available. Our database contained 1199 chlorophyll profiles from R/Vs Polarstern and Maria S. Merian cruises combined with data from the ARCSS-PP database (Arctic primary production in situ database) for the years 1957-2010. The profiles were categorized according to their mean concentration in the surface layer, and then monthly median profiles within each category were calculated. The category with surface layer chlorophyll (CHL) exceeding 0.7 mg C m⁻³ showed values gradually decreasing from April to August. A similar seasonal pattern was observed when monthly profiles were averaged over all surface CHL concentrations. The maxima of the chlorophyll profiles moved from greater depths towards the surface from spring to late summer. The profiles with the smallest surface values always showed a subsurface chlorophyll maximum, with its median magnitude reaching up to three times the surface concentration. While the variability of the Greenland Sea season in April, May and June followed the global, non-monthly resolved relationship of the chlorophyll profile to surface chlorophyll concentration described by the model of Morel and Berthon (1989), it deviated significantly from the model in the other months (July-September), when the maxima of the chlorophyll are at quite different depths. The Greenland Sea dimensionless monthly median profiles intersected at roughly one common depth within each category. By applying a Gaussian fit, in 0.1 mg C m⁻³ surface chlorophyll steps, to the median monthly resolved chlorophyll profiles of the defined categories, mathematical approximations were determined. They generally reproduce the magnitude and position of the CHL maximum, resulting in an average 4% underestimation in Ctot (and 2% in rough primary production estimates) when compared to in situ estimates. These mathematical approximations can be used as input to satellite-based primary production models that estimate primary production in Arctic regions.
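The Gaussian parameterization of the monthly median profiles can be sketched as a simple least-squares fit; the functional form below (background plus Gaussian peak) and the initial guesses are generic assumptions rather than the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_chl_profile(depth_m, chl):
    """Fit a Gaussian chlorophyll profile, chl(z) = b + h*exp(-(z-zm)^2/(2*s^2)),
    to a median profile.  depth_m and chl are numpy arrays; the initial guesses
    are generic."""
    def gauss(z, b, h, zm, s):
        return b + h * np.exp(-((z - zm) ** 2) / (2.0 * s ** 2))

    p0 = [chl.min(), chl.max() - chl.min(), depth_m[np.argmax(chl)], 20.0]
    params, _ = curve_fit(gauss, depth_m, chl, p0=p0, maxfev=10000)
    return dict(zip(["background", "peak_height", "peak_depth", "width"], params))
```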
NASA Astrophysics Data System (ADS)
Gregory Lough, R.; Mountain, David G.
A set of vertically stratified MOCNESS tows made on the southern flank of Georges Bank in spring 1981 and 1983 was analyzed to examine the relationship between larval cod and haddock feeding success and turbulent dissipation in a stratified water column. Observed feeding ratios (mean number of prey per larval gut) for three size classes of larvae were compared with estimated ingestion rates using the Rothschild and Osborn (Journal of Plankton Research, 10, 1988, 465-474) predator-prey encounter rate model. Simulation of contact rates requires parameter estimates of larval fish and prey cruising speeds, density of prey, and turbulent velocity of the water column. Turbulent dissipation was estimated from a formulation by James (Estuarine and Coastal Marine Science, 5, 1977, 339-353) incorporating both a wind and a tidal component. Larval ingestion rates were based on swallowing probabilities derived from calm-water laboratory observations. Model-predicted turbulence profiles generally showed that dissipation rates were low to moderate (10⁻¹¹-10⁻⁷ W kg⁻¹). Turbulence was minimal at or below the pycnocline (≈ 25 m), with higher values (1-2 orders of magnitude) near the surface due to wind mixing and at depth due to shear in the tidal current near the bottom. In a stratified water column during the day, first-feeding larvae (5-6 mm) were located mostly within or above the pycnocline, coincident with their copepod prey (nauplii and copepodites). The 7-8 mm larvae were most abundant within the pycnocline, whereas the 9-10 mm larvae were found within and below the pycnocline. Feeding ratios were relatively low in early morning following darkness when the wind speed was low, but increased by a factor of 2-13 by noon and evening when the wind speed doubled. Comparisons of depth-specific feeding ratios with estimated ingestion rates, derived from turbulence-affected contact rates, were generally reasonable after allowing for an average gut evacuation time (4 h), and in many cases the observed and estimated values had similar profiles. However, differences in the vertical profiles may be attributed to differential digestion time, pursuit behavior affected by high turbulence, vertical migration of the larger larvae, an optimum light level for feeding, smaller-scale prey patchiness, and the gross estimates of turbulence. Response-surface estimation of averaged feeding ratios as a function of averaged prey density (0-50 m) with a minimum water-column turbulence value predicted that 5-6 mm larvae have a maximum feeding response at the highest prey densities (> 30 prey l⁻¹) and lower turbulence estimates (< 10⁻¹⁰ W kg⁻¹). The 7-8 mm and 9-10 mm larvae also have a maximum feeding response at high prey densities and low turbulence, but it extends to lower prey densities (> 10 prey l⁻¹) as turbulence increases to intermediate levels, clearly showing an interaction effect. In general, maximum feeding ratios occur at low to intermediate levels of turbulence where average prey density is greater than 10-20 prey l⁻¹.
Koottathape, Natthavoot; Takahashi, Hidekazu; Finger, Wernerj; Kanehira, Masafumi; Iwasaki, Naohiko; Aoyagi, Yujin
2012-06-01
Although attritive and abrasive wear of recent composite resins has been substantially reduced, in vitro wear testing with reasonably simulating devices and quantitative determination of the resulting wear is still needed. Three-dimensional scanning methods are frequently used for this purpose. The aim of this trial was to compare the maximum depth of wear and the volume loss of composite samples evaluated with a contact profilometer and with a non-contact CCD camera imaging system. Twenty-three random composite specimens with wear traces produced in a ball-on-disc sliding device, using poppy seed slurry and PMMA suspension as third-body media, were evaluated with the contact profilometer (TalyScan 150, Taylor Hobson Ltd, Leicester, UK) and with the digital CCD microscope (VHX1000, KEYENCE, Osaka, Japan). The target parameters were maximum depth of wear and volume loss. Results: The individual measurement time needed with the non-contact CCD method was almost three hours less than that with the contact method. Both the maximum depth of wear and the volume loss data recorded with the two methods were linearly correlated (r² > 0.97; p < 0.01). The contact scanning method and the non-contact CCD method are equally suitable for the determination of maximum depth of wear and volume loss of abraded composite resins.
Electron fluence correction factors for various materials in clinical electron beams.
Olivares, M; DeBlois, F; Podgorsak, E B; Seuntjens, J P
2001-08-01
Relative to solid water, electron fluence correction factors at the depth of dose maximum in bone, lung, aluminum, and copper for nominal electron beam energies of 9 MeV and 15 MeV of the Clinac 18 accelerator have been determined experimentally and by Monte Carlo calculation. Thermoluminescent dosimeters (TLDs) were used to measure depth doses in these materials. The measured relative dose at d_max in the various materials versus that in solid water, when irradiated with the same number of monitor units, was used to calculate the ratio of electron fluence in the various materials to that in solid water. The beams of the Clinac 18 were fully characterized using the EGS4/BEAM system. EGSnrc with the relativistic spin option turned on was used to optimize the primary electron energy at the exit window, and to calculate depth doses in the five phantom materials using the optimized phase-space data. With all depth doses normalized to the dose maximum in solid water, stopping-power-ratio-corrected measured depth doses and calculated depth doses differ by less than ±1% at the depth of dose maximum and by less than 4% elsewhere. Monte Carlo calculated ratios of the dose in each material to the dose in LiF were used to convert the TLD measurements at the dose maximum into dose at the center of the TLD in the phantom material. Fluence perturbation correction factors for a LiF TLD at the depth of dose maximum deduced from these calculations amount to less than 1% for 0.15 mm thick TLDs in low-Z materials and are between 1% and 3% for TLDs in Al and Cu phantoms. Electron fluence ratios of the studied materials relative to solid water vary between 0.83±0.01 and 1.55±0.02 for materials varying in density from 0.27 g/cm³ (lung) to 8.96 g/cm³ (Cu). The difference between electron fluence ratios derived from measurements and from calculations ranges from -1.6% to +0.2% at 9 MeV and from -1.9% to +0.2% at 15 MeV and is not significant at the 1σ level. Excluding the data for Cu, electron fluence correction factors for open electron beams are approximately proportional to the electron density of the phantom material and only weakly dependent on electron beam energy.
Ramsey, Elijah W.; Nelson, G.
2005-01-01
To maximize the spectral distinctiveness (information) of the canopy reflectance, an atmospheric correction strategy was implemented to provide accurate estimates of the intrinsic reflectance from the Earth Observing 1 (EO1) satellite Hyperion sensor signal. In rendering the canopy reflectance, an estimate of optical depth derived from a measurement of downwelling irradiance was used to drive a radiative transfer simulation of atmospheric scattering and attenuation. During the atmospheric model simulation, the input whole-terrain background reflectance estimate was changed to minimize the differences between the model-predicted and the observed canopy reflectance spectra at 34 sites. Lacking appropriate spectrally invariant scene targets, inclusion of the field and predicted comparison maximized the model accuracy and, thereby, the detail and precision in the canopy reflectance necessary to detect low percentage occurrences of invasive plants. After accounting for artifacts surrounding prominent absorption features from about 400 nm to 1000 nm, the atmospheric adjustment strategy correctly explained 99% of the observed canopy reflectance spectra variance. Separately, model simulation explained an average of 88% ± 9% of the observed variance in the visible and 98% ± 1% in the near-infrared wavelengths. In the 34 model simulations, maximum differences between the observed and predicted reflectances were typically less than ±1% in the visible; however, maximum reflectance differences higher than ±1.6% (-2.3%) at more than a few wavelengths were observed at three sites. In the near-infrared wavelengths, maximum reflectance differences remained less than ±3% for 68% of the comparisons (±1 standard deviation) and less than ±6% for 95% of the comparisons (±2 standard deviations). Higher reflectance differences in the visible and near-infrared wavelengths were most likely associated with problems in the comparison, not in the model generation. © 2005 US Government.
Depth interval estimates from motion parallax and binocular disparity beyond interaction space.
Gillam, Barbara; Palmisano, Stephen A; Govan, Donovan G
2011-01-01
Static and dynamic observers provided binocular and monocular estimates of the depths between real objects lying well beyond interaction space. On each trial, pairs of LEDs were presented inside a dark railway tunnel. The nearest LED was always 40 m from the observer, with the depth separation between LED pairs ranging from 0 up to 248 m. Dynamic binocular viewing was found to produce the greatest (ie most veridical) estimates of depth magnitude, followed next by static binocular viewing, and then by dynamic monocular viewing. (No significant depth was seen with static monocular viewing.) We found evidence that both binocular and monocular dynamic estimates of depth were scaled for the observation distance when the ground plane and walls of the tunnel were visible up to the nearest LED. We conclude that both motion parallax and stereopsis provide useful long-distance depth information and that motion-parallax information can enhance the degree of stereoscopic depth seen.
Estimation of the Probable Maximum Flood for a Small Lowland River in Poland
NASA Astrophysics Data System (ADS)
Banasik, K.; Hejduk, L.
2009-04-01
The planning, design and use of hydrotechnical structures often requires the assessment of maximum flood potentials. The most common term applied to this upper limit of flooding is the probable maximum flood (PMF). The PMP/UH (probable maximum precipitation/unit hydrograph) method has been used in this study to predict the PMF for a small agricultural lowland river basin of the Zagozdzonka (a left tributary of the Vistula river) in Poland. The river basin, located about 100 km south of Warsaw, with an area - upstream of the gauge at Plachty - of 82 km², has been investigated by the Department of Water Engineering and Environmental Restoration of Warsaw University of Life Sciences - SGGW since 1962. An over 40-year flow record was used in a previous investigation for predicting the T-year flood discharge (Banasik et al., 2003). The objective here was to estimate the PMF using the PMP/UH method and to compare the results with the 100-year flood. A new depth-duration curve of PMP for the local climatic conditions has been developed based on Polish maximum observed rainfall data (Ozga-Zielinska & Ozga-Zielinski, 2003). The exponential formula, with an exponent of 0.47, i.e. close to the exponent in the formula for the world PMP and also in the formula of PMP for Great Britain (Wilson, 1993), gives a rainfall depth about 40% lower than Wilson's. The effective rainfall (runoff volume) has been estimated from the PMP of various durations using the CN method (USDA-SCS, 1986). The CN value as well as the parameters of the IUH model (Nash, 1957) have been established from 27 rainfall-runoff events recorded in the river basin in the period 1980-2004. The variability of the parameter values with the size of the events will be discussed in the paper. The results of the analysis show that the peak discharge of the PMF is 4.5 times larger than the 100-year flood, and the volume ratio of the respective direct hydrographs caused by rainfall events of critical duration is 4.0. References: 1. Banasik K., Byczkowski A., Gładecki J., 2003: Prediction of T-year flood discharge from a small river basin using direct and indirect methods. Annals of Warsaw Agricultural University - SGGW, Land Reclamation, No 34, p. 3-8. 2. Nash J.E., 1957. The form of the instantaneous unit hydrograph. Publ. IAHS, No 59, p. 202-213. 3. Ozga-Zielińska M. & Ozga-Zielinski B., 2003. The floodgenerativity of rivers as a measure of danger for hydrotechnical structures and determination of flood protection zones (in Polish with English summary). Gospodarka Wodna, No 1, p. 10-17. 4. Shalaby A.I., 1995. Sensitivity to probable maximum flood. Journal of Irrigation and Drainage Engineering, Vol. 121, No. 5, p. 327-337. 5. USDA-SCS (Soil Conservation Service), 1986. TR-55: Urban hydrology for small watersheds. Washington, D.C. 6. Wilson E.M., 1993. Engineering hydrology. MacMillan, London.
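The CN-method step referred to above converts a rainfall depth into effective rainfall (runoff depth) with the standard SCS relations; a minimal sketch follows, with the calibrated CN value for the Zagozdzonka basin left as an input rather than reproduced here.

```python
def scs_cn_runoff(p_mm, cn):
    """SCS Curve Number method for effective rainfall (runoff depth, mm):
    S = 25400/CN - 254, Ia = 0.2*S, and
    Q = (P - 0.2*S)^2 / (P + 0.8*S) for P > 0.2*S, otherwise 0."""
    s = 25400.0 / cn - 254.0       # potential maximum retention (mm)
    ia = 0.2 * s                   # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)
```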
There’s plenty of light at the bottom: statistics of photon penetration depth in random media
Martelli, Fabrizio; Binzoni, Tiziano; Pifferi, Antonio; Spinelli, Lorenzo; Farina, Andrea; Torricelli, Alessandro
2016-01-01
We propose a comprehensive statistical approach describing the penetration depth of light in random media. The presented theory exploits the concept of probability density function f(z|ρ, t) for the maximum depth reached by the photons that are eventually re-emitted from the surface of the medium at distance ρ and time t. Analytical formulas for f, for the mean maximum depth 〈zmax〉 and for the mean average depth reached by the detected photons at the surface of a diffusive slab are derived within the framework of the diffusion approximation to the radiative transfer equation, both in the time domain and the continuous wave domain. Validation of the theory by means of comparisons with Monte Carlo simulations is also presented. The results are of interest for many research fields such as biomedical optics, advanced microscopy and disordered photonics. PMID:27256988
Junwei, Zhang; Jinping, Li; Xiaojuan, Quan
2013-01-01
Permafrost degradation, which is driven by temperature, is the fundamental cause of embankment and pavement distress in permafrost regions. Based on field monitoring of ground temperature along the G214 Highway in high-temperature permafrost regions, the ground temperatures in the superficial layer and the annual average temperatures under the embankment were examined for concrete and asphalt pavements, respectively. The maximum depth of the temperature field under the embankment for concrete and asphalt pavements was also studied using the finite element method. The numerical results indicate remarkable seasonal differences in superficial-layer ground temperatures between asphalt and concrete pavements. The maximum influencing depth of the temperature field under the permafrost embankment for each pavement type was below 8 m. The thawed cores under both embankments are closely related to the maximum thaw depth, the embankment height, and the service time. Based on these results, effective measures are proposed to maintain the thermal stability of highway embankments.
NASA Astrophysics Data System (ADS)
Liu, Di; Mishra, Ashok K.; Yu, Zhongbo
2016-07-01
This paper examines the combination of support vector machines (SVM) and the dual ensemble Kalman filter (EnKF) technique to estimate root zone soil moisture at different soil layers up to 100 cm depth. Multiple experiments are conducted in a data-rich environment to construct and validate the SVM model and to explore the effectiveness and robustness of the EnKF technique. It was observed that the performance of the SVM relies more on the initial length of the training set than on other factors (e.g., cost function, regularization parameter, and kernel parameters). The dual EnKF technique proved efficient at improving the SVM with observed data either at each time step or at flexible time steps. The EnKF technique can reach its maximum efficiency when the updating ensemble size approaches a certain threshold. It was observed that the SVM model performance for the multi-layer soil moisture estimation can be influenced by the rainfall magnitude (e.g., dry and wet spells).
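The following scikit-learn sketch illustrates the kind of support vector regression setup referred to above; the predictors, hyperparameters, and synthetic data are assumptions for illustration and do not reproduce the authors' configuration.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Hypothetical predictors: precipitation, air temperature, surface soil moisture
X_train = rng.random((500, 3))
y_train = 0.10 + 0.30 * X_train[:, 2] + 0.01 * rng.standard_normal(500)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X_train, y_train)        # train the soil-moisture regressor

X_new = rng.random((5, 3))         # new meteorological/surface inputs
print(model.predict(X_new))        # estimated root-zone soil moisture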
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
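The moment bound stated above (maximum seismic moment limited to the injected volume times the modulus of rigidity) can be illustrated numerically; the rigidity and injected volume below are assumed example values, and the magnitude conversion uses the standard Hanks-Kanamori relation.

import math

G_RIGIDITY = 3.0e10          # modulus of rigidity, Pa (typical crustal value, assumed)
injected_volume_m3 = 1.0e5   # total injected fluid volume, m^3 (example value)

m0_max = G_RIGIDITY * injected_volume_m3            # upper-bound seismic moment, N*m
mw_max = (2.0 / 3.0) * (math.log10(m0_max) - 9.1)   # Hanks-Kanamori moment magnitude

print(f"M0_max = {m0_max:.2e} N*m  ->  Mw_max ~ {mw_max:.1f}")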
Mass composition results from the Pierre Auger Observatory
NASA Astrophysics Data System (ADS)
Riggi, Simone; Pierre Auger Collaboration
2011-03-01
The present paper reports the recent composition results obtained by the Pierre Auger Observatory using both hybrid and surface detector data. The reconstruction of the shower longitudinal profile and depth of maximum with the fluorescence detector is described. The measured average depth of maximum and its fluctuations as a function of the primary energy are presented. The sensitivity of rise time parameters measured with the ground stations and the obtained composition results are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Connell, D.R.
1986-12-01
The method of progressive hypocenter-velocity inversion has been extended to incorporate S-wave arrival time data and to estimate S-wave velocities in addition to P-wave velocities. Adding S-wave data to the progressive inversion does not completely eliminate hypocenter-velocity tradeoffs, but they are substantially reduced. Results of a P- and S-wave progressive hypocenter-velocity inversion at The Geysers show that the top of the steam reservoir is clearly defined by a large decrease of Vp/Vs at the condensation zone-production zone contact. The depth interval of maximum steam production coincides with the minimum observed Vp/Vs, and Vp/Vs increases below the shallow primary production zone, suggesting that the reservoir rock becomes more fluid saturated. The moment tensor inversion method was applied to three microearthquakes at The Geysers. Estimated principal stress orientations were comparable to those estimated using P-wave first motions as constraints. Well-constrained principal stress orientations were obtained for one event for which the 17 P-wave first motions could not distinguish between normal-slip and strike-slip mechanisms. The moment tensor estimates of principal stress orientations were obtained using far fewer stations than required for first-motion focal mechanism solutions. The three focal mechanisms obtained here support the hypothesis that focal mechanisms are a function of depth at The Geysers. Progressive inversion as developed here and the moment tensor inversion method provide a complete approach for determining earthquake locations, P- and S-wave velocity structure, and earthquake source mechanisms.
NASA Astrophysics Data System (ADS)
Pelland, Noel A.; Eriksen, Charles C.; Cronin, Meghan F.
2017-06-01
Heat and salt balances in the upper 200 m are examined using data from Seaglider spatial surveys June 2008 to January 2010 surrounding a NOAA surface mooring at Ocean Station Papa (OSP; 50°N, 145°W). A least-squares approach is applied to repeat Seaglider survey and moored measurements to solve for unknown or uncertain monthly three-dimensional circulation and vertical diffusivity. Within the surface boundary layer, the estimated heat and salt balances are dominated throughout the surveys by turbulent flux, vertical advection, and for heat, radiative absorption. When vertically integrated balances are considered, an estimated upwelling of cool water balances the net surface input of heat, while the corresponding large import of salt across the halocline due to upwelling and diffusion is balanced by surface moisture input and horizontal import of fresh water. Measurement of horizontal gradients allows the estimation of unresolved vertical terms over more than one annual cycle; diffusivity in the upper-ocean transition layer decreases rapidly to the depth of the maximum near-surface stratification in all months, with weak seasonal modulation in the rate of decrease and profile amplitude. Vertical velocity is estimated to be on average upward but with important monthly variations. Results support and expand existing evidence concerning the importance of horizontal advection in the balances of heat and salt in the Gulf of Alaska, highlight time and depth variability in difficult-to-measure vertical transports in the upper ocean, and suggest avenues of further study in future observational work at OSP.
A Statistical Guide to the Design of Deep Mutational Scanning Experiments.
Matuszewski, Sebastian; Hildebrandt, Marcel E; Ghenu, Ana-Hermina; Jensen, Jeffrey D; Bank, Claudia
2016-09-01
The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. Copyright © 2016 by the Genetics Society of America.
Buckwalter, T.F.; Squillace, P.J.
1995-01-01
Hydrologic data were evaluated from four areas of western Pennsylvania to estimate the minimum depth of well surface casing needed to prevent contamination of most of the fresh ground-water resources by oil and gas wells. The areas are representative of the different types of oil and gas activities and of the ground-water hydrology of most sections of the Appalachian Plateaus Physiographic Province in western Pennsylvania. Approximate delineation of the base of the fresh ground-water system was attempted by interpreting the following hydrologic data: (1) reports of freshwater and saltwater in oil and gas well-completion reports, (2) water well-completion reports, (3) geophysical logs, and (4) chemical analyses of well water. Because of the poor quality and scarcity of ground-water data, the altitude of the base of the fresh ground-water system in the four study areas cannot be accurately delineated. Consequently, minimum surface-casing depths for oil and gas wells cannot be estimated with confidence. Conscientious and reliable reporting of freshwater and saltwater during drilling of oil and gas wells would expand the existing data base. Reporting of field specific conductance of ground water would greatly enhance the value of the reports of ground water in oil and gas well-completion records. Water-bearing zones in bedrock are controlled mostly by the presence of secondary openings. The vertical and horizontal discontinuity of secondary openings may be responsible, in part, for large differences in altitudes of freshwater zones noted on completion records of adjacent oil and gas wells. In upland and hilltop topographies, maximum depths of fresh ground water are reported from several hundred feet below land surface to slightly more than 1,000 feet, but the few deep reports are not substantiated by results of laboratory analyses of dissolved-solids concentrations. Past and present drillers for shallow oil and gas wells commonly install surface casing to below the base of readily observed fresh ground water. Casing depths are selected generally to maximize drilling efficiency and to stop freshwater from entering the well and subsequently interfering with hydrocarbon recovery. The depths of surface casing generally are not selected with ground-water protection in mind. However, on the basis of existing hydrologic data, most freshwater aquifers generally are protected with current casing depths. Minimum surface-casing depths for deep gas wells are prescribed by Pennsylvania Department of Environmental Resources regulations and appear to be adequate to prevent ground-water contamination, in most respects, for the only study area with deep gas fields examined in Crawford County.
Marine-target craters on Mars? An assessment study
Ormo, J.; Dohm, J.M.; Ferris, J.C.; Lepinette, A.; Fairen, A.G.
2004-01-01
Observations of impact craters on Earth show that a water column at the target strongly influences the lithology and morphology of the resultant crater. The degree of influence varies with the target water depth and impactor diameter. Morphological features detectable in satellite imagery include a concentric shape with an inner crater inset within a shallower outer crater, which is cut by gullies excavated by the resurge of water. In this study, we show that if oceans, large seas, and lakes existed on Mars for periods of time, marine-target craters must have formed. We make an assessment of the minimum and maximum numbers of such craters based on published data on water depths, extent, and duration of putative oceans within "contacts 1 and 2," the cratering rate during the different oceanic phases, and computer modeling of minimum impactor diameters required to form long-lasting craters in the seafloor of the oceans. We also discuss the influence of erosion and sedimentation on the preservation and exposure of the craters. For an ocean within the smaller "contact 2" with a duration of 100,000 yr and the low present crater formation rate, only ~1-2 detectable marine-target craters would have formed. In a maximum estimate with a duration of 0.8 Gyr, as many as 1400 craters may have formed. An ocean within the larger "contact 1-Meridiani," with a duration of 100,000 yr, would not have received any seafloor craters despite the higher crater formation rate estimated before 3.5 Gyr. On the other hand, with a maximum duration of 0.8 Gyr, about 160 seafloor craters may have formed. However, terrestrial examples show that most marine-target craters may be covered by thick sediments. Ground penetrating radar surveys planned for the ESA Mars Express and NASA 2005 missions may reveal buried craters, though it is uncertain if the resolution will allow the detection of diagnostic features of marine-target craters. The implications regarding the discovery of marine-target craters on Mars are not without significance, as such discoveries would help address the ongoing debate of whether large water bodies occupied the northern plains of Mars and would help constrain future paleoclimatic reconstructions. © Meteoritical Society, 2004.
Near-fault peak ground velocity from earthquake and laboratory data
McGarr, A.; Fletcher, Joe B.
2007-01-01
We test the hypothesis that peak ground velocity (PGV) has an upper bound independent of earthquake magnitude and that this bound is controlled primarily by the strength of the seismogenic crust. The highest PGVs, ranging up to several meters per second, have been measured at sites within a few kilometers of the causative faults. Because the database for near-fault PGV is small, we use earthquake slip models, laboratory experiments, and evidence from a mining-induced earthquake to investigate the factors influencing near-fault PGV and the nature of its scaling. For each earthquake slip model we have calculated the peak slip rates for all subfaults and then chosen the maximum of these rates as an estimate of twice the largest near-fault PGV. Nine slip models for eight earthquakes, with magnitudes ranging from 6.5 to 7.6, yielded maximum peak slip rates ranging from 2.3 to 12 m/sec with a median of 5.9 m/sec. By making several adjustments, PGVs for small earthquakes can be simulated from peak slip rates measured during laboratory stick-slip experiments. First, we adjust the PGV for differences in the state of stress (i.e., the difference between the laboratory loading stresses and those appropriate for faults at seismogenic depths). To do this, we multiply both the slip and the peak slip rate by the ratio of the effective normal stresses acting on fault planes measured at 6.8 km depth at the KTB site, Germany (deepest available in situ stress measurements), to those acting on the laboratory faults. We also adjust the seismic moment by replacing the laboratory fault with a buried circular shear crack whose radius is chosen to match the experimental unloading stiffness. An additional, less important adjustment is needed for experiments run in triaxial loading conditions. With these adjustments, peak slip rates for 10 stick-slip events, with scaled moment magnitudes from -2.9 to 1.0, range from 3.3 to 10.3 m/sec, with a median of 5.4 m/sec. Both the earthquake and laboratory results are consistent with typical maximum peak slip rates averaging between 5 and 6 m/sec or corresponding maximum near-fault PGVs between 2.5 and 3 m/sec at seismogenic depths, independent of magnitude. Our ability to replicate maximum slip rates in the fault zones of earthquakes by adjusting the corresponding laboratory rates using the ratio of effective normal stresses acting on the fault planes suggests that the strength of the seismogenic crust is the important factor limiting the near-fault PGV.
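A minimal arithmetic sketch of the two relations used above, namely near-fault PGV taken as half the maximum peak slip rate and laboratory slip rates rescaled by the ratio of effective normal stresses; the function names and example values are illustrative.

def pgv_from_peak_slip_rate(peak_slip_rate_ms):
    """Near-fault PGV (m/s) taken as half the maximum peak slip rate (m/s)."""
    return 0.5 * peak_slip_rate_ms

def scale_lab_slip_rate(lab_rate_ms, sigma_eff_seismogenic_pa, sigma_eff_lab_pa):
    """Rescale a laboratory peak slip rate by the ratio of effective normal stresses."""
    return lab_rate_ms * sigma_eff_seismogenic_pa / sigma_eff_lab_pa

median_peak_slip_rate = 5.9   # m/s, the slip-model median quoted above
print(f"Implied near-fault PGV ~ {pgv_from_peak_slip_rate(median_peak_slip_rate):.2f} m/s")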
NASA Astrophysics Data System (ADS)
Lee, Min Sun; Kim, Kyeong Yun; Ko, Guen Bae; Lee, Jae Sung
2017-05-01
In this study, we developed a proof-of-concept prototype PET system using a pair of depth-of-interaction (DOI) PET detectors based on the proposed DOI-encoding method and digital silicon photomultiplier (dSiPM). Our novel cost-effective DOI measurement method is based on a triangular-shaped reflector that requires only a single-layer pixelated crystal and single-ended signal readout. The DOI detector consisted of an 18 × 18 array of unpolished LYSO crystal (1.47 × 1.47 × 15 mm3) wrapped with triangular-shaped reflectors. The DOI information was encoded by depth-dependent light distribution tailored by the reflector geometry and DOI correction was performed using four-step depth calibration data and maximum-likelihood (ML) estimation. The detector pair and the object were placed on two motorized rotation stages to demonstrate 12-block ring PET geometry with 11.15 cm diameter. Spatial resolution was measured and phantom and animal imaging studies were performed to investigate imaging performance. All images were reconstructed with and without the DOI correction to examine the impact of our DOI measurement. The pair of dSiPM-based DOI PET detectors showed good physical performances respectively: 2.82 and 3.09 peak-to-valley ratios, 14.30% and 18.95% energy resolution, and 4.28 and 4.24 mm DOI resolution averaged over all crystals and all depths. A sub-millimeter spatial resolution was achieved at the center of the field of view (FOV). After applying ML-based DOI correction, maximum 36.92% improvement was achieved in the radial spatial resolution and a uniform resolution was observed within 5 cm of transverse PET FOV. We successfully acquired phantom and animal images with improved spatial resolution and contrast by using the DOI measurement. The proposed DOI-encoding method was successfully demonstrated in the system level and exhibited good performance, showing its feasibility for animal PET applications with high spatial resolution and sensitivity.
NASA Astrophysics Data System (ADS)
Goldberg, R. A.; Jackman, C. H.; Baker, D. N.; Herrero, F. A.
Highly relativistic electron precipitation events (HREs) can provide a major source of energy affecting ionization levels and minor constituents in the mesosphere. Based on satellite data, these events are most pronounced during the minimum of the solar sunspot cycle, increasing in intensity, spectral hardness, and frequency of occurrence as solar activity declines. Furthermore, although the precipitating flux is modulated diurnally in local time, the noontime maximum is very broad, exceeding several hours. Since such events can be sustained for up to several days, their integrated effect in the mesosphere can dominate over those of other external sources such as relativistic electron precipitation events (REPs) and auroral precipitation. In this work, the effects of HRE relativistic electrons on the neutral minor constituents OH and O3 are modeled during a modest HRE, to estimate their anticipated impact on mesospheric heating and dynamics. The data to be discussed and analyzed were obtained by rocket at Poker Flat, Alaska on May 13, 1990 during an HRE observed at midday near the peak of the sunspot cycle. Solid state detectors were used to measure the electron fluxes and their energy spectra. An x-ray scintillator was included to measure bremsstrahlung x-rays produced by energetic electrons impacting the upper atmosphere; however, these were found to make a negligible contribution to the energy deposition during this particular HRE event. Hence, the energy deposition produced by the highly relativistic electrons dominated within the mesosphere and was used exclusively to infer changes in the middle atmospheric minor constituent abundances. By employing a two-dimensional photochemical model developed for this region at Goddard Space Flight Center, it has been found that for this event, peak modifications in the neutral minor species occurred near 80 km. A maximum enhancement for OH was calculated to be over 40% at the latitude of the launch site, which in turn induced a maximum depletion of O3 in excess of 30%. Since this particular HRE occurred near solar maximum, it was of modest intensity and spectral hardness, parameters which could grow significantly as solar minimum is approached. Estimates of mesospheric OH enhancement and O3 depletion have also been made for more intense HRE events, as might be expected during the declining phase of the solar cycle. The findings imply that the energy deposition from highly relativistic electrons during more intense HREs could modulate the concentration of important minor species within the mesosphere to much higher levels than estimated for the observed HRE. By causing O3 destruction, the electron precipitation can also modify the penetration depth of solar UV radiation, which may affect thermal properties of the mesosphere to depths approaching 60 km.
Bohidar, R N; Sullivan, J P; Hermance, J F
2001-01-01
In view of the increasing demand on ground water supplies in the northeastern United States, it is imperative to develop appropriate methods to geophysically characterize the most widely used sources of ground water in the region: shallow unconfined aquifers consisting of well-sorted, stratified glacial deposits laid down in bedrock valleys and channels. The gravity method, despite its proven value in delineating buried bedrock valleys elsewhere, is seldom used by geophysical contractors in this region. To demonstrate the method's effectiveness for evaluating such aquifers, a pilot study was undertaken in the Palmer River Basin in southeastern Massachusetts. Because bedrock is so shallow beneath this aquifer (maximum depth is 30 m), the depth-integrated mass deficiency of the overlying unconsolidated material was small, so that the observed gravity anomaly was on the order of 1 milligal (mGal) or less. Thus data uncertainties were significant. Moreover, unlike previous gravity studies elsewhere, we had no a priori information on the density of the sediment. Under such circumstances, it is essential to include model constraints and weighted least-squares in the inversion procedure. Among the model constraints were water table configuration, bedrock outcrops, and depth to bedrock from five water wells. Our procedure allowed us to delineate depth to bedrock along a 3.5 km profile with a confidence interval of 1.8 m at a nominal depth of 17 m. Moreover, we obtained a porosity estimate in the range of 39% to 44%. Thus the gravity method, with appropriate refinements, is an effective tool for the reconnaissance of shallow unconfined aquifers.
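For orientation, the first-order Bouguer-slab relation links the size of a residual gravity anomaly to sediment thickness; the sketch below is only a back-of-the-envelope check with an assumed density contrast, not the constrained weighted least-squares inversion used in the study.

import math

G_NEWTON = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
MGAL = 1.0e-5          # 1 mGal = 1e-5 m/s^2

def slab_thickness_m(anomaly_mgal, delta_rho_kg_m3):
    """Thickness of an infinite horizontal slab producing the given gravity anomaly."""
    return abs(anomaly_mgal) * MGAL / (2.0 * math.pi * G_NEWTON * abs(delta_rho_kg_m3))

# Example: a -0.5 mGal residual low over fill ~400 kg/m^3 less dense than bedrock
print(f"~{slab_thickness_m(-0.5, -400.0):.0f} m of unconsolidated sediment")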
NASA Astrophysics Data System (ADS)
Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei
2013-08-01
We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Slip artifacts are then eliminated from the slip models in the third step using the same procedure as the second step, with the fault geometry parameters held fixed. We first design a fault model with a 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth and the maximum slip reaching 1.38 m at the surface. The seismic moment released is estimated to be 2.32 × 10^19 N m, consistent with the seismic estimate of 2.50 × 10^19 N m.
Regulation of water flux through tropical forest canopy trees: do universal rules apply?
Meinzer, F C; Goldstein, G; Andrade, J L
2001-01-01
Tropical moist forests are notable for their richness in tree species. The presence of such a diverse tree flora presents potential problems for scaling up estimates of water use from individual trees to entire stands and for drawing generalizations about physiological regulation of water use in tropical trees. We measured sapwood area or sap flow, or both, in 27 co-occurring canopy species in a Panamanian forest to determine the extent to which relationships between tree size, sapwood area and sap flow were species-specific, or whether they were constrained by universal functional relationships between tree size, conducting xylem area, and water use. For the 24 species in which active xylem area was estimated over a range of size classes, diameter at breast height (DBH) accounted for 98% of the variation in sapwood area and 67% of the variation in sapwood depth when data for all species were combined. The DBH alone also accounted for > or = 90% of the variation in both maximum and total daily sap flux density in the outermost 2 cm of sapwood for all species taken together. Maximum sap flux density measured near the base of the tree occurred at about 1,400 h in the largest trees and 1,130 h in the smallest trees studied, and DBH accounted for 93% of the variation in the time of day at which maximum sap flow occurred. The shared relationship between tree size and time of maximum sap flow at the base of the tree suggests that a common relationship between diurnal stem water storage capacity and tree size existed. These results are consistent with a recent hypothesis that allometric scaling of plant vascular systems, and therefore water use, is universal.
NASA Astrophysics Data System (ADS)
Rowley, David
2017-04-01
On a spherical Earth, the mean elevation (-2440 m) would be everywhere at a mean Earth radius from the center. This directly links an elevation at the surface to physical dimensions of the Earth, including surface area and volume that are at most very slowly evolving components of the Earth system. Earth's mean elevation thus provides a framework within which to consider changes in heights of Earth's solid surface as a function of time. In this paper the focus will be on long-term, non-glacially controlled sea level. Long-term sea level has long been argued to be largely controlled by changes in ocean basin volume related to changes in the area-age distribution of oceanic lithosphere. As generally modeled by Pitman (1978) and subsequent workers, the age-depth relationship of oceanic lithosphere, including both the ridge depth and coefficients describing the age-depth relationship, are assumed constant. This paper examines the consequences of adhering to these assumptions when placed within the larger framework of maintaining a constant mean radius of the Earth. Self-consistent estimates of long-term sea level height and changes in mean depth of the oceanic crust are derived from the assumption that the mean elevation and corresponding mean radius are unchanging aspects of Earth's shorter-term evolution. Within this context, changes in mean depth of the oceanic crust, corresponding with changes in mean age of the oceanic lithosphere, acting over the area of the oceanic crust represent a volume change that is required to be balanced by a compensating equal but opposite volume change under the area of the continental crust. Models of paleo-cumulative hypsometry derived from a starting glacial isostatic adjustment (GIA)-corrected ice-free hypsometry that conserve mean elevation provide a basis for understanding how these compensating changes impact global hypsometry and particularly estimates of global mean shoreline height. Paleo-shoreline height and areal extent of flooding can be defined as the height and corresponding cumulative area of the solid surface of the Earth at which the integral of area as a function of elevation, from the maximum depth upwards, equals the volume of ocean water filling it with respect to cumulative paleo-hypsometry. Present height of the paleo-shoreline is the height on the GIA-corrected cumulative hypsometry at an area equal to the areal extent of flooding. Paleogeographic estimates of global extent of ocean flooding from the Middle Jurassic to end Eocene, when combined with conservation of mean elevation and ocean water volume, allow an explicit estimate of the paleo-height and present height of the paleo-shoreline. The best-fitting estimate of present height of the paleo-shoreline, equivalent to a long-term "eustatic" sea level curve, implies very modest (25 ± 22 m) changes in long-term sea level above the ice-free sea level height of +40 m. These, in turn, imply quite limited changes in mean depth of the oceanic crust (15 ± 11 m), and mean age of the oceanic lithosphere (62.1 ± 2.4 my) since the Middle Jurassic.
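The flooding calculation described above can be sketched numerically: given a cumulative hypsometric curve and an ocean-water volume, find the elevation at which the contained volume equals the water volume. The hypsometry below is synthetic and the water volume is only a rough modern value.

import numpy as np

def shoreline_height(elev_m, cum_area_m2, water_volume_m3):
    """Elevation at which the volume contained below it equals water_volume_m3.

    elev_m is sorted ascending from the maximum depth upward; cum_area_m2[i]
    is the surface area lying at or below elev_m[i].
    """
    # Integrate area with respect to elevation to get volume below each level.
    vol_below = np.concatenate(([0.0], np.cumsum(
        0.5 * (cum_area_m2[1:] + cum_area_m2[:-1]) * np.diff(elev_m))))
    return float(np.interp(water_volume_m3, vol_below, elev_m))

# Synthetic bowl-shaped hypsometry from -6000 m to +1000 m (illustration only)
elev = np.linspace(-6000.0, 1000.0, 701)
area = 5.1e14 * (elev - elev[0]) / (elev[-1] - elev[0])
ocean_volume = 1.335e18   # roughly the modern ocean-water volume, m^3
print(f"shoreline at about {shoreline_height(elev, area, ocean_volume):.0f} m")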
Preliminary studies of the effect of thinning techniques over muon production profiles
NASA Astrophysics Data System (ADS)
Tomishiyo, G.; Souza, V.
2017-06-01
In the context of air shower simulations, thinning techniques are employed to reduce computational time and storage requirements. These techniques are tailored to preserve local mean quantities during shower development, such as the average number of particles in a given atmospheric layer, and to not induce systematic shifts in shower observables, such as the depth of shower maximum. In this work we investigate thinning effects on the determination of the depth at which the shower has its maximum muon production, X^μ_max. We show preliminary results on how the thinning factor and maximum thinning weight might influence the determination of X^μ_max.
NASA Astrophysics Data System (ADS)
Schoellhamer, David H.; Manning, Andrew J.; Work, Paul A.
2017-06-01
Erodibility of cohesive sediment in the Sacramento-San Joaquin River Delta (Delta) was investigated with an erosion microcosm. Erosion depths in the Delta and in the microcosm were estimated to be about one floc diameter over a range of shear stresses and times comparable to half of a typical tidal cycle. Using the conventional assumption of horizontally homogeneous bed sediment, data from 27 of 34 microcosm experiments indicate that the erosion rate coefficient increased as eroded mass increased, contrary to theory. We believe that small erosion depths, erosion rate coefficient deviation from theory, and visual observation of horizontally varying biota and texture at the sediment surface indicate that erosion cannot solely be a function of depth but must also vary horizontally. We test this hypothesis by developing a simple numerical model that includes horizontal heterogeneity, use it to develop an artificial time series of suspended-sediment concentration (SSC) in an erosion microcosm, then analyze that time series assuming horizontal homogeneity. A shear vane was used to estimate that the horizontal standard deviation of critical shear stress was about 30% of the mean value at a site in the Delta. The numerical model of the erosion microcosm included a normal distribution of initial critical shear stress, a linear increase in critical shear stress with eroded mass, an exponential decrease of erosion rate coefficient with eroded mass, and a stepped increase in applied shear stress. The maximum SSC for each step increased gradually, thus confounding identification of a single well-defined critical shear stress as encountered with the empirical data. Analysis of the artificial SSC time series with the assumption of a homogeneous bed reproduced the original profile of critical shear stress, but the erosion rate coefficient increased with eroded mass, similar to the empirical data. Thus, the numerical experiment confirms the small-depth erosion hypothesis. A linear model of critical shear stress and eroded mass is proposed to simulate small-depth erosion, assuming that the applied and critical shear stresses quickly reach equilibrium.
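A minimal numerical sketch of the horizontally heterogeneous bed model described above follows; the excess-shear erosion law and all parameter values are assumptions chosen only to illustrate how patch-to-patch variability in critical shear stress produces gradually increasing eroded mass under a stepped applied stress.

import numpy as np

rng = np.random.default_rng(1)
n_patches = 1000
tau_cr = rng.normal(loc=0.20, scale=0.06, size=n_patches)  # Pa; ~30% spread, as suggested by the shear-vane result
eroded = np.zeros(n_patches)                               # eroded mass per patch, kg/m^2

M0, K_DECAY = 5.0e-4, 50.0   # erosion-rate coefficient and its decay with eroded mass (assumed)
DTAU_DM = 2.0                # increase of critical stress per unit eroded mass, Pa/(kg/m^2) (assumed)
dt, steps_per_level = 60.0, 20
applied_levels = [0.05, 0.10, 0.20, 0.30, 0.40]            # stepped applied shear stress, Pa

for tau in applied_levels:
    for _ in range(steps_per_level):
        tau_c_now = tau_cr + DTAU_DM * eroded              # critical stress rises as mass erodes
        m_now = M0 * np.exp(-K_DECAY * eroded)             # rate coefficient decays with eroded mass
        rate = np.where(tau > tau_c_now, m_now * (tau - tau_c_now), 0.0)
        eroded += rate * dt
    print(f"tau = {tau:.2f} Pa -> mean eroded mass {1000.0 * eroded.mean():.2f} g/m^2")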
NASA Astrophysics Data System (ADS)
Goetz, Jason; Marcer, Marco; Bodin, Xavier; Brenning, Alexander
2017-04-01
Snow depth mapping in open areas using close range aerial imagery is just one of the many cases where developments in structure-from-motion and multi-view-stereo (SfM-MVS) 3D reconstruction techniques have been applied in the geosciences - and with good reason. Our ability to increase the spatial resolution and frequency of observations may allow us to improve our understanding of how snow depth distribution varies through space and time. However, to ensure accurate snow depth observations from close range sensing we must adequately characterize the uncertainty related to our measurement techniques. In this study, we explore the spatial uncertainties of snow elevation models for estimation of snow depth in complex alpine terrain from close range aerial imagery. We accomplish this by conducting repeat autonomous aerial surveys over a snow-covered active rock glacier located in the French Alps. The imagery obtained from each flight of an unmanned aerial vehicle (UAV) is used to create an individual digital elevation model (DEM) of the snow surface. As a result, we obtain multiple DEMs of the snow surface for the same site. These DEMs are obtained from processing the imagery with the photogrammetry software Agisoft Photoscan. The elevation models are also georeferenced within Photoscan using the geotagged imagery from an onboard GNSS in combination with ground targets placed around the rock glacier, which have been surveyed with highly accurate RTK-GNSS equipment. The random error associated with multi-temporal DEMs of the snow surface is estimated from the repeat aerial survey data. The multiple flights are designed to follow the same flight path and altitude above the ground to simulate the optimal conditions of a repeat survey of the site, and thus to estimate the maximum precision associated with our snow-elevation measurement technique. The bias of the DEMs is assessed with RTK-GNSS survey observations of the snow surface elevation of the area on and surrounding the rock glacier. Additionally, one of the challenges with processing snow cover imagery with SfM-MVS is dealing with the general homogeneity of the surface, which makes it difficult for automated feature-detection algorithms to identify key features for point matching. This challenge depends on the snow cover surface conditions, such as scale, lighting conditions (high vs. low contrast), and availability of snow-free features within a scene, among others. We attempt to explore this aspect by spatially modelling the factors influencing the precision and bias of the DEMs from image, flight, and terrain attributes.
Vadeboncoeur, Yvonne; Peterson, Garry; Vander Zanden, M Jake; Kalff, Jacob
2008-09-01
Attached algae play a minor role in conceptual and empirical models of lake ecosystem function but paradoxically form the energetic base of food webs that support a wide variety of fishes. To explore the apparent mismatch between perceived limits on contributions of periphyton to whole-lake primary production and its importance to consumers, we modeled the contribution of periphyton to whole-ecosystem primary production across lake size, shape, and nutrient gradients. The distribution of available benthic habitat for periphyton is influenced by the ratio of mean depth to maximum depth (DR = z/ z(max)). We modeled total phytoplankton production from water-column nutrient availability, z, and light. Periphyton production was a function of light-saturated photosynthesis (BPmax) and light availability at depth. The model demonstrated that depth ratio (DR) and light attenuation strongly determined the maximum possible contribution of benthic algae to lake production, and the benthic proportion of whole-lake primary production (BPf) declined with increasing nutrients. Shallow lakes (z < or =5 m) were insensitive to DR and were dominated by either benthic or pelagic primary productivity depending on trophic status. Moderately deep oligotrophic lakes had substantial contributions by benthic primary productivity at low depth ratios and when maximum benthic photosynthesis was moderate or high. Extremely large, deep lakes always had low fractional contributions of benthic primary production. An analysis of the world's largest lakes showed that the shapes of natural lakes shift increasingly toward lower depth ratios with increasing depth, maximizing the potential importance of littoral primary production in large-lake food webs. The repeatedly demonstrated importance of periphyton to lake food webs may reflect the combination of low depth ratios and high light penetration characteristic of large, oligotrophic lakes that in turn lead to substantial contributions of periphyton to autochthonous production.
FY-2015 Methyl Iodide Deep-Bed Adsorption Test Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soelberg, Nicholas Ray; Watson, Tony Leroy
2015-09-30
Nuclear fission produces fission and activation products, including iodine-129, which could evolve into used fuel reprocessing facility off-gas systems, and could require off-gas control to limit air emissions to levels within acceptable emission limits. Deep-bed methyl iodide adsorption testing has continued in Fiscal Year 2015 according to a multi-laboratory methyl iodide adsorption test plan. Updates to the deep-bed test system have also been performed to enable the inclusion of evaporated HNO3 and increased NO2 concentrations in future tests. This report summarizes the result of those activities. Test results showed that iodine adsorption from gaseous methyl iodide using reduced silver zeolite (AgZ) resulted in initial iodine decontamination factors (DFs, ratios of uncontrolled and controlled total iodine levels) under 1,000 for the conditions of the long-duration test performed this year (45 ppm CH3I, 1,000 ppm each NO and NO2, very low H2O levels [3 ppm] in balance air). The mass transfer zone depth exceeded the cumulative 5-inch depth of 4 bed segments, which is deeper than the 2-4 inch depth estimated for the mass transfer zone for adsorbing I2 using AgZ in prior deep-bed tests. The maximum iodine adsorption capacity for the AgZ under the conditions of this test was 6.2% (6.2 g adsorbed I per 100 g sorbent). The maximum Ag utilization was 51%. Additional deep-bed testing and analyses are recommended to (a) expand the data base for methyl iodide adsorption and (b) provide more data for evaluating organic iodide reactions and reaction byproducts for different potential adsorption conditions.
NASA Astrophysics Data System (ADS)
Putirka, K. D.
2006-05-01
The question as to whether any particular oceanic island is the result of a thermal mantle plume is a question of whether volcanism is the result of passive upwelling, as at mid-ocean ridges, or active upwelling, driven by thermally buoyant material. When upwelling is passive, mantle temperatures reflect average or ambient upper mantle values. In contrast, sites of thermally driven active upwellings will have elevated (or excess) mantle temperatures, driven by some source of excess heat. Skeptics of the plume hypothesis suggest that the maximum temperatures at ocean islands are similar to maximum temperatures at mid-ocean ridges (Anderson, 2000; Green et al., 2001). Olivine-liquid thermometry, when applied to Hawaii, Iceland, and global MORB, belies this hypothesis. Olivine-liquid equilibria provide the most accurate means of estimating mantle temperatures, which are highly sensitive to the forsterite (Fo) contents of olivines, and the FeO content of coexisting liquids. Their application shows that mantle temperatures in the MORB source region are less than temperatures at both Hawaii and Iceland. The Siqueiros Transform may provide the most precise estimate of TpMORB because high MgO glass compositions there have been affected only by olivine fractionation, so primitive FeOliq is known; olivine thermometry yields TpSiqueiros = 1430 ± 59°C. A global database of 22,000 MORB shows that most MORB have slightly higher FeOliq than at Siqueiros, which translates to higher calculated mantle potential temperatures. If the values for Fomax (= 91.5) and KD (Fe-Mg)ol-liq (= 0.29) at Siqueiros apply globally, then upper mantle Tp is closer to 1485 ± 59°C. Averaging this global estimate with that recovered at Siqueiros yields TpMORB = 1458 ± 78°C, which is used to calculate plume excess temperatures, Te. The estimate for TpMORB defines the convective mantle geotherm, is consistent with estimates from sea floor bathymetry and heat flow (Stein and Stein, 1992), and overlaps within 1 sigma with estimates from phase transitions at the 410 km (Jeanloz and Thompson, 1983) and 670 km (Hirose, 2002) seismic discontinuities. Variations in MORB FeOliq can be used to calculate the variance of TpMORB. FeOliq variations in global MORB show that 95% of the sub-MORB mantle has a T range of 165°C; 68% of MORB fall within temperature variations of ±30°C. In comparison, Tp at Hawaii and Iceland is 1706°C and 1646°C respectively, and hence Te is 248°C at Hawaii and 188°C at Iceland. Tp estimates at Hawaii and Iceland also exceed maximum Tp estimates at MORs (at the 95% level) by 171 and 111°C respectively. These Te are in agreement with estimates derived from excess topography and dynamic models of mantle flow and melt generation (e.g., Sleep, 1990; Schilling, 1991; Ito et al., 1999). A clear result is that Hawaii and Iceland are hot relative to MORB. Rayleigh number calculations further show that for these Te, critical depths (i.e., the depths at which Ra > 1000) are < 130 km. Hawaii and Iceland are thus almost assuredly the result of thermally driven, active upwellings, or mantle plumes.
Detailed interpretation of aeromagnetic data from the Patagonia Mountains area, southeastern Arizona
Bultman, Mark W.
2015-01-01
Euler deconvolution depth estimates derived from aeromagnetic data with a structural index of 0 show that mapped faults on the northern margin of the Patagonia Mountains generally agree with the depth estimates in the new geologic model. The deconvolution depth estimates also show that the concealed Patagonia Fault southwest of the Patagonia Mountains is more complex than recent geologic mapping represents. Additionally, Euler deconvolution depth estimates with a structural index of 2 locate many potential intrusive bodies that might be associated with known and unknown mineralization.
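For readers unfamiliar with the technique, the sketch below solves the standard Euler homogeneity equation, (x-x0)Tx + (y-y0)Ty + (z-z0)Tz = N(B - T), in a least-squares sense for a single data window; the synthetic anomaly, gradients, and structural index are assumptions, and this is not the processing chain used for the Patagonia Mountains survey.

import numpy as np

def euler_solve(x, y, z, T, Tx, Ty, Tz, N):
    """Least-squares source position (x0, y0, z0) and base level B for one window."""
    A = np.column_stack([Tx, Ty, Tz, np.full_like(T, float(N))])
    b = x * Tx + y * Ty + z * Tz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol   # [x0, y0, z0, B]

# Synthetic check: a point-like source (structural index N = 3) at 500 m depth
xg, yg = np.meshgrid(np.linspace(-2000.0, 2000.0, 21), np.linspace(-2000.0, 2000.0, 21))
zg = np.zeros_like(xg)                 # observations at the surface (depth positive down)
r = np.sqrt(xg**2 + yg**2 + 500.0**2)
T = 1.0e9 / r**3                       # simple 1/r^3 anomaly as a stand-in field
Tx, Ty = -3.0e9 * xg / r**5, -3.0e9 * yg / r**5
Tz = 3.0e9 * 500.0 / r**5              # vertical gradient for a source at z0 = 500 m
print(euler_solve(xg.ravel(), yg.ravel(), zg.ravel(),
                  T.ravel(), Tx.ravel(), Ty.ravel(), Tz.ravel(), N=3))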
Petkewich, Matthew D.; Daamen, Ruby C.; Roehl, Edwin A.; Conrads, Paul
2016-09-29
The Everglades Depth Estimation Network (EDEN), with over 240 real-time gaging stations, provides hydrologic data for freshwater and tidal areas of the Everglades. These data are used to generate daily water-level and water-depth maps of the Everglades that are used to assess biotic responses to hydrologic change resulting from the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. The generation of EDEN daily water-level and water-depth maps is dependent on high quality real-time data from water-level stations. Real-time data are automatically checked for outliers by assigning minimum and maximum thresholds for each station. Small errors in the real-time data, such as gradual drift of malfunctioning pressure transducers, are more difficult to immediately identify with visual inspection of time-series plots and may only be identified during on-site inspections of the stations. Correcting these small errors in the data often is time consuming and water-level data may not be finalized for several months. To provide daily water-level and water-depth maps on a near real-time basis, EDEN needed an automated process to identify errors in water-level data and to provide estimates for missing or erroneous water-level data.The Automated Data Assurance and Management (ADAM) software uses inferential sensor technology often used in industrial applications. Rather than installing a redundant sensor to measure a process, such as an additional water-level station, inferential sensors, or virtual sensors, were developed for each station that make accurate estimates of the process measured by the hard sensor (water-level gaging station). The inferential sensors in the ADAM software are empirical models that use inputs from one or more proximal stations. The advantage of ADAM is that it provides a redundant signal to the sensor in the field without the environmental threats associated with field conditions at stations (flood or hurricane, for example). In the event that a station does malfunction, ADAM provides an accurate estimate for the period of missing data. The ADAM software also is used in the quality assurance and quality control of the data. The virtual signals are compared to the real-time data, and if the difference between the two signals exceeds a certain tolerance, corrective action to the data and (or) the gaging station can be taken. The ADAM software is automated so that, each morning, the real-time EDEN data are compared to the inferential sensor signals and digital reports highlighting potential erroneous real-time data are generated for appropriate support personnel. The development and application of inferential sensors is easily transferable to other real-time hydrologic monitoring networks.
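The inferential (virtual) sensor idea can be illustrated with a minimal sketch: an empirical model estimates one station's water level from proximal stations and flags departures beyond a tolerance. The station series, the linear model, and the tolerance below are illustrative assumptions, not the ADAM implementation.

import numpy as np

rng = np.random.default_rng(2)
n = 1000
t = np.linspace(0.0, 20.0, n)
neighbor_a = 1.5 + 0.3 * np.sin(t) + 0.01 * rng.standard_normal(n)
neighbor_b = 1.2 + 0.3 * np.sin(t - 0.2) + 0.01 * rng.standard_normal(n)
target = 0.4 + 0.6 * neighbor_a + 0.4 * neighbor_b + 0.01 * rng.standard_normal(n)

# Fit the virtual sensor (a simple linear model) on a historical window
half = n // 2
X_hist = np.column_stack([np.ones(half), neighbor_a[:half], neighbor_b[:half]])
coef, *_ = np.linalg.lstsq(X_hist, target[:half], rcond=None)

# Apply it to the "real-time" window and flag departures beyond a tolerance
X_now = np.column_stack([np.ones(n - half), neighbor_a[half:], neighbor_b[half:]])
estimate = X_now @ coef
observed = target[half:].copy()
observed[100:110] += 0.15                    # simulate a drifting pressure transducer
flags = np.abs(observed - estimate) > 0.05   # tolerance in meters (assumed)
print(f"flagged {flags.sum()} of {flags.size} real-time values")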
Bayesian depth estimation from monocular natural images.
Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C
2017-05-01
Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.
Depth inpainting by tensor voting.
Kulkarni, Mandar; Rajagopalan, Ambasamudram N
2013-06-01
Depth maps captured by range scanning devices or by using optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of depth estimates among candidate values using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data.
Spatial patterns of mixing in the Solomon Sea
NASA Astrophysics Data System (ADS)
Alberty, M. S.; Sprintall, J.; MacKinnon, J.; Ganachaud, A.; Cravatte, S.; Eldin, G.; Germineaud, C.; Melet, A.
2017-05-01
The Solomon Sea is a marginal sea in the southwest Pacific that connects subtropical and equatorial circulation, constricting transport of South Pacific Subtropical Mode Water and Antarctic Intermediate Water through its deep, narrow channels. Marginal sea topography inhibits internal waves from propagating out and into the open ocean, making these regions hot spots for energy dissipation and mixing. Data from two hydrographic cruises and from Argo profiles are employed to indirectly infer mixing from observations for the first time in the Solomon Sea. Thorpe and finescale methods indirectly estimate the rate of dissipation of kinetic energy (ɛ) and indicate that it is maximum in the surface and thermocline layers and decreases by 2-3 orders of magnitude by 2000 m depth. Estimates of diapycnal diffusivity from the observations and a simple diffusive model agree in magnitude but have different depth structures, likely reflecting the combined influence of both diapycnal mixing and isopycnal stirring. Spatial variability of ɛ is large, spanning at least 2 orders of magnitude within isopycnal layers. Seasonal variability of ɛ reflects regional monsoonal changes in large-scale oceanic and atmospheric conditions with ɛ increased in July and decreased in March. Finally, tide power input and topographic roughness are well correlated with mean spatial patterns of mixing within intermediate and deep isopycnals but are not clearly correlated with thermocline mixing patterns.
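A minimal sketch of the Thorpe-scale approach mentioned above: sort a density profile to its stable state, form Thorpe displacements, and convert the RMS displacement to a dissipation estimate via an assumed Ozmidov-Thorpe proportionality and the Osborn relation. The constants and the toy profile are assumptions, not the cruise-data processing chain.

import numpy as np

def thorpe_dissipation(depth_m, sigma_theta, a=0.8, gamma=0.2, g=9.81, rho0=1025.0):
    """Dissipation rate (W/kg) and diffusivity (m^2/s) from one overturning patch."""
    order = np.argsort(sigma_theta)           # indices of the gravitationally stable profile
    thorpe_disp = depth_m - depth_m[order]    # Thorpe displacements, m
    L_T = np.sqrt(np.mean(thorpe_disp**2))    # RMS Thorpe scale
    N2 = (g / rho0) * np.polyfit(depth_m, sigma_theta[order], 1)[0]  # sorted-profile N^2
    N2 = max(float(N2), 1e-12)
    eps = (a * L_T) ** 2 * N2**1.5            # Ozmidov scale ~ a*L_T, eps = L_O^2 N^3
    return eps, gamma * eps / N2              # Osborn diapycnal diffusivity

# Toy patch: stable density gradient with a 10 m overturn superimposed
z = np.arange(300.0, 340.0, 1.0)
sigma = 27.0 + 0.002 * (z - 300.0)
sigma[10:20], sigma[20:30] = sigma[20:30].copy(), sigma[10:20].copy()
eps, K = thorpe_dissipation(z, sigma)
print(f"eps ~ {eps:.2e} W/kg, K ~ {K:.2e} m^2/s")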
NASA Astrophysics Data System (ADS)
Wang, W.; Lee, C.; Cochran, K. K.; Armstrong, R. A.
2016-02-01
Sinking particles play a pivotal role in transferring material from the surface to the deeper ocean via the "biological pump". To quantify the extent to which these particles aggregate and disaggregate, and thus affect particle settling velocity, we constructed a box model to describe organic matter cycling. The box model was fit to chloropigment data sampled in the 2005 MedFlux project using Indented Rotating Sphere sediment traps operating in Settling Velocity (SV) mode. Because of the very different pigment compositions of phytoplankton and fecal pellets, chloropigments are useful as proxies to record particle exchange. The maximum likelihood statistical method was used to estimate particle aggregation, disaggregation, and organic matter remineralization rate constants. Eleven settling velocity categories collected by SV sediment traps were grouped into two sinking velocity classes (fast- and slow-sinking) to decrease the number of parameters that needed to be estimated. Organic matter degradation rate constants were estimated to be 1.2, 1.6, and 1.1 y^-1, which are equivalent to degradation half-lives of 0.60, 0.45, and 0.62 y, at 313, 524, and 1918 m, respectively. Rate constants of chlorophyll a degradation to pheopigments (pheophorbide, pheophytin, and pyropheophorbide) were estimated to be 0.88, 0.93, and 1.2 y^-1, at 313, 524, and 1918 m, respectively. Aggregation rate constants varied little with depth, with the highest value being 0.07 y^-1 at 524 m. Disaggregation rate constants were highest at 524 m (14 y^-1) and lowest at 1918 m (9.6 y^-1).
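As a quick consistency check, the quoted half-lives follow from first-order kinetics, t1/2 = ln 2 / k; a one-line sketch using the rate constants from the abstract above:

```python
import numpy as np
k = np.array([1.2, 1.6, 1.1])      # degradation rate constants at 313, 524, 1918 m (1/yr)
print(np.log(2) / k)               # half-lives in years: ~0.58, 0.43, 0.63
```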
Soil and surface temperatures at the Viking landing sites
NASA Technical Reports Server (NTRS)
Kieffer, H. H.
1976-01-01
The annual temperature range for the Martian surface at the Viking lander sites is computed on the basis of thermal parameters derived from observations made with the infrared thermal mappers. The Viking lander 1 (VL1) site has small annual variations in temperature, whereas the Viking lander 2 (VL2) site has large annual changes. With the Viking lander images used to estimate the rock component of the thermal emission, the daily temperature behavior of the soil alone is computed over the range of depths accessible to the lander; when the VL1 and VL2 sites were sampled, the daily temperature ranges at the top of the soil were 183 to 263 K and 183 to 268 K, respectively. The diurnal variation decreases with depth with an exponential scale of about 5 centimeters. The maximum temperature of the soil sampled from beneath rocks at the VL2 site is calculated to be 230 K. These temperature calculations should provide a reference for study of the active chemistry reported for the Martian soil.
Diffusional limits to the consumption of atmospheric methane by soils
Striegl, Robert G.
1993-01-01
Net transport of atmospheric gases into and out of soil systems is primarily controlled by diffusion along gas partial pressure gradients. Gas fluxes between soil and the atmosphere can therefore be estimated by a generalization of the equation for ordinary gaseous diffusion in porous unsaturated media. Consumption of CH4 by methylotrophic bacteria in the top several centimeters of soil causes the uptake of atmospheric CH4 by aerated soils. The capacity of the methylotrophs to consume CH4 commonly exceeds the potential of CH4 to diffuse from the atmosphere to the consumers. The maximum rate of uptake of atmospheric CH4 by soil is, therefore, limited by diffusion and can be calculated from soil physical properties and the CH4 concentration gradient. The CH4 concentration versus depth profile is theoretically described by the equation for gaseous diffusion with homogeneous chemical reaction in porous unsaturated media. This allows for calculation of the in situ rate of CH4 consumption within specified depth intervals.
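The diffusion-limited uptake described above follows from Fick's first law with an effective soil-gas diffusivity. The sketch below uses the Millington-Quirk tortuosity correction, which is a common but here assumed choice (the paper derives its own formulation with homogeneous chemical reaction); the numbers are illustrative.

```python
def ch4_uptake_flux(c_atm, c_depth, depth_m, air_filled_porosity, total_porosity,
                    d_air=1.9e-5):
    """Diffusion-limited CH4 flux into soil from Fick's first law.

    c_atm, c_depth : CH4 concentrations (mol m^-3) at the surface and at depth_m
    d_air          : CH4 diffusivity in free air, ~1.9e-5 m^2 s^-1 (assumed)
    Effective diffusivity uses the Millington-Quirk correction (an assumption here):
    D_s = D_air * theta_a**(10/3) / phi**2
    """
    d_soil = d_air * air_filled_porosity ** (10.0 / 3.0) / total_porosity ** 2
    gradient = (c_depth - c_atm) / depth_m          # mol m^-4
    return -d_soil * gradient                       # mol m^-2 s^-1, positive = into soil

# toy numbers: atmospheric CH4 ~1.8 ppmv (~7.5e-5 mol m^-3), near-zero at 10 cm depth
print(ch4_uptake_flux(7.5e-5, 5e-6, 0.10, 0.25, 0.45))
```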
Experimental study on the sensitive depth of backwards detected light in turbid media.
Zhang, Yunyao; Huang, Liqing; Zhang, Ning; Tian, Heng; Zhu, Jingping
2018-05-28
In the recent past, optical spectroscopy and imaging methods for biomedical diagnosis and target enhancement have been widely researched. A key challenge in improving the performance of these methods is to know the sensitive depth of the backwards detected light well. Previous research mainly employed Monte Carlo simulations to describe the sensitive depth statistically. An experimental method for investigating the sensitive depth was developed and is presented here. An absorption plate was employed to remove all the light that may have travelled deeper than the plate, leaving only the light which did not reach the plate. By measuring the received backwards light intensity and the depth between the probe and the plate, the light intensity distribution along the depth dimension can be obtained. The depth with the maximum light intensity was recorded as the sensitive depth. The experimental results showed that the maximum light intensity was nearly the same over a short depth range. It could be deduced that the sensitive depth is a range, rather than a single depth. This sensitive depth range, as well as its central depth, increased consistently with increasing source-detector distance. Relationships between sensitive depth and optical properties were also investigated. The reduced scattering coefficient affects the central sensitive depth and the range of the sensitive depth more than the absorption coefficient does, so the two cannot simply be combined into a single effective coefficient to describe the sensitive depth. This study provides an efficient method for investigation of sensitive depth. It may facilitate the development of spectroscopy and imaging techniques for biomedical diagnosis and underwater imaging.
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
Comparison of artificial intelligence techniques for prediction of soil temperatures in Turkey
NASA Astrophysics Data System (ADS)
Citakoglu, Hatice
2017-10-01
Soil temperature is a meteorological variable directly affecting the formation and development of plants of all kinds. Soil temperatures are usually estimated with various models including artificial neural networks (ANNs), the adaptive neuro-fuzzy inference system (ANFIS), and multiple linear regression (MLR) models. Soil temperatures along with other climate data are recorded by the Turkish State Meteorological Service (MGM) at specific locations all over Turkey. Soil temperatures are commonly measured at 5-, 10-, 20-, 50-, and 100-cm depths below the soil surface. In this study, the monthly soil temperature data measured at 261 stations in Turkey having records of at least 20 years were used to develop the models. Different input combinations were tested in the ANN and ANFIS models to estimate soil temperatures, and the best combination of significant explanatory variables turns out to be monthly minimum and maximum air temperatures, calendar month number, depth of soil, and monthly precipitation. Next, three statistical measures (mean absolute error (MAE, °C), root mean squared error (RMSE, °C), and the determination coefficient (R²)) were employed to check the reliability of the test data results obtained through the ANN, ANFIS, and MLR models. ANFIS (RMSE 1.99; MAE 1.09; R² 0.98) is found to outperform both ANN and MLR (RMSE 5.80, 8.89; MAE 1.89, 2.36; R² 0.93, 0.91) in estimating soil temperature in Turkey.
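For readers who want the simplest of the three baselines, a minimal MLR sketch with the same three performance measures is given below. The predictors mirror the explanatory variables named in the abstract, but the data are synthetic placeholders rather than the MGM records, and the fitted coefficients mean nothing beyond illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# placeholder predictors: monthly min/max air temperature, calendar month,
# measurement depth (cm) and monthly precipitation -- the inputs named in the abstract
rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.uniform(-10, 20, n),                 # monthly minimum air temperature (deg C)
    rng.uniform(0, 35, n),                   # monthly maximum air temperature (deg C)
    rng.integers(1, 13, n),                  # calendar month number
    rng.choice([5, 10, 20, 50, 100], n),     # soil depth (cm)
    rng.uniform(0, 150, n),                  # monthly precipitation (mm)
])
y = 0.4 * X[:, 0] + 0.5 * X[:, 1] - 0.01 * X[:, 3] + rng.normal(0, 2, n)  # synthetic soil temp

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
mlr = LinearRegression().fit(X_tr, y_tr)
pred = mlr.predict(X_te)
print("RMSE", np.sqrt(mean_squared_error(y_te, pred)),
      "MAE", mean_absolute_error(y_te, pred),
      "R2", r2_score(y_te, pred))
```

Swapping LinearRegression for an ANN or ANFIS estimator leaves the error-metric computation unchanged.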
NASA Astrophysics Data System (ADS)
Kapicka, A.; Grison, H.; Petrovsky, E.; Jaksik, O.; Kodesova, R.
2015-12-01
Field measurements of magnetic susceptibility were carried out on a regular grid, resulting in 101 data points at Brumovice and 65 at the Vidim locality. Mass-specific magnetic susceptibility χ and its frequency dependence χFD were used to estimate the significance of SP ferrimagnetic particles of pedogenic origin in topsoil horizons. The lowest magnetic susceptibility was obtained on the steep valley sides, where the original topsoil was eroded and mixed by tillage with the soil substrate (loess). Soil profiles unaffected by erosion were investigated in detail. The vertical distribution of magnetic susceptibility along these "virgin" profiles was measured in the laboratory on samples collected with 2-cm spacing. The differences between the distribution of susceptibility in the undisturbed soil profiles and the magnetic signal after uniform mixing of the soil material as a result of erosion and tillage are fundamental for the estimation of soil loss in the studied test fields. The maximum cumulative soil erosion depth at Brumovice and Vidim is around 100 cm and 50 cm, respectively. The magnetic method is suitable for mapping at the chernozem localities, and measurement of soil magnetic susceptibility is in this case a useful and fast technique for quantitative estimation of soil loss caused by erosion. However, it is less suitable (due to lower magnetic differentiation with depth) in areas with luvisol as the dominant soil unit. Acknowledgement: This study was supported by the NAZV Agency of the Ministry of Agriculture of the Czech Republic through grant No. QJ1230319.
Numerical Simulations of Mechanical Erosion from below by Creep on Rate-State Faults
NASA Astrophysics Data System (ADS)
Werner, M. J.; Rubin, A. M.
2012-04-01
The aim of this study is to increase our understanding of how earthquakes nucleate on frictionally-locked fault patches that are loaded by the growing stress concentrations at their boundaries due to aseismic creep. Such mechanical erosion from below of locked patches has previously been invoked by Gillard et al. (1996) to explain accelerating seismicity and increases in maximum earthquake magnitude on a strike-slip streak (a narrow ribbon of tightly clustered seismicity) in Kilauea's East rift, and it might also play a role in the loading of major locked strike-slip faults by creep from below the seismogenic zone. Gillard et al. (1996) provided simple analytical estimates of the size of and moment release within the eroding edge of the locked zone that matched the observed seismicity in Kilauea's East rift. However, an obvious, similar signal has not consistently been found before major strike-slip earthquakes. Here, we use simulations to determine to what extent the simple estimates by Gillard et al. survive a wider range of geometric configurations and slip histories. The boundary between the locked and creeping sections at the base of the seismogenic zone is modeled as a gradual, continuous transition between steady-state velocity-strengthening at greater depth to velocity-weakening surroundings at shallow depth, qualitatively consistent with laboratory estimates of the temperature dependence of (a-b). The goal is to expand the range of possible outcomes to broaden our range of expectations for the behavior of the eroding edge of the locked zones.
NASA Astrophysics Data System (ADS)
Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri
2018-01-01
This paper proposes an advanced state of health (SoH) estimation method for high-energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used due to their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on IC curves. Results show that the SoH estimation function developed from one single battery cell is able to evaluate the SoH of other batteries cycled under different cycling depths with less than 2.5% maximum error, which demonstrates the robustness of the proposed method for SoH estimation. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be greatly reduced. This method shows great potential for practical application, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
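The processing chain described above (differentiate the charge curve, smooth with a Gaussian filter, track a feature-of-interest position, regress against capacity) can be sketched as follows. This is only an illustrative reconstruction on synthetic charge curves; the smoothing width, the choice of the IC peak as the FOI, and the toy sigmoid charge curve are assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.linear_model import LinearRegression

def incremental_capacity(voltage, capacity, sigma=3):
    """dQ/dV on a uniform voltage grid, smoothed with a Gaussian filter."""
    v_grid = np.linspace(voltage.min(), voltage.max(), 500)
    q_grid = np.interp(v_grid, voltage, capacity)
    dqdv = np.gradient(q_grid, v_grid)
    return v_grid, gaussian_filter1d(dqdv, sigma=sigma)

def foi_position(v_grid, dqdv_smooth):
    """Voltage of the main IC peak -- one possible feature of interest."""
    return v_grid[np.argmax(dqdv_smooth)]

# illustrative ageing loop: regress cell capacity on the IC peak position
rng = np.random.default_rng(2)
peaks, caps = [], []
for cap in np.linspace(2.9, 2.2, 15):              # fading capacity, Ah
    v = np.linspace(3.0, 4.2, 300)
    q = cap / (1 + np.exp(-(v - (3.6 + 0.05 * (2.9 - cap))) / 0.05))  # toy charge curve
    vg, ic = incremental_capacity(v, q + rng.normal(0, 1e-3, v.size))
    peaks.append(foi_position(vg, ic))
    caps.append(cap)

reg = LinearRegression().fit(np.array(peaks).reshape(-1, 1), caps)
print("estimated capacity at a peak position of 3.62 V:", reg.predict([[3.62]])[0])
```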
Technique for estimating depth of 100-year floods in Tennessee
Gamble, Charles R.; Lewis, James G.
1977-01-01
Preface: A method is presented for estimating the depth of the 100-year flood in four hydrologic areas in Tennessee. Depths at 151 gaging stations on streams that were not significantly affected by man-made changes were related to basin characteristics by multiple regression techniques. Equations derived from the analysis can be used to estimate the depth of the 100-year flood if the size of the drainage basin is known.
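The report's actual regression equations are not reproduced in the abstract; the sketch below only illustrates the general approach of regressing flood depth on a basin characteristic (here drainage area) in log space. All numbers and the power-law form are made up for illustration.

```python
import numpy as np

# hypothetical log-linear model: depth_100yr = a * (drainage_area_mi2 ** b),
# fit by least squares in log space to illustrate the regression approach
area = np.array([12., 35., 80., 150., 400., 900.])      # drainage area, mi^2 (made up)
depth = np.array([6.1, 8.0, 9.8, 11.5, 14.9, 18.2])     # 100-year depth, ft (made up)
b, log_a = np.polyfit(np.log10(area), np.log10(depth), 1)
a = 10 ** log_a
print(f"depth_100 ~ {a:.2f} * A^{b:.2f}")
print("predicted depth for a 250 mi^2 basin:", a * 250 ** b, "ft")
```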
Root reinforcement and its contribution to slope stability in the Western Ghats of Kerala, India
NASA Astrophysics Data System (ADS)
Lukose Kuriakose, Sekhar; van Beek, L. P. H.
2010-05-01
The Western Ghats of Kerala, India is prone to shallow landslides and consequent debris flows. An earlier study (Kuriakose et al., DOI:10.1002/esp.1794) with limited data had already demonstrated the possible effects of vegetation on slope hydrology and stability. Spatially distributed root cohesion is one of the most important data sets necessary to assess the effects of anthropogenic disturbances on the probability of shallow landslide initiation, results of which are reported in sessions GM6.1 and HS13.13/NH3.16. Thus it is necessary to know the upper limits of reinforcement that the roots are able to provide and its spatial and vertical distribution in such an anthropogenically intervened terrain. Root tensile strength and root pull-out tests were conducted on nine species of plants that are commonly found in the region: 1) Rubber (Hevea brasiliensis), 2) Coconut Palm (Cocos nucifera), 3) Jackfruit trees (Artocarpus heterophyllus), 4) Teak (Tectona grandis), 5) Mango trees (Mangifera indica), 6) Lemon grass (Cymbopogon citratus), 7) Gambooge (Garcinia gummi-gutta), 8) Coffee (Coffea arabica) and 9) Tea (Camellia sinensis). About 1500 samples were collected, of which only 380 could be tested in the laboratory due to breakage of roots during the tests. In the successful tests roots failed in tension. Roots having diameters between 2 mm and 12 mm were tested. Each sample tested had a length of 15 cm. Root pull-out tests were conducted in the field. Root tensile strength vs root diameter, root pull-out strength vs diameter, root diameter vs root depth and root count vs root depth relationships were derived. Root cohesion was computed for the nine most dominant plants in the region using the perpendicular root model of Wu et al. (1979) as modified by Schmidt et al. (2001). A soil depth map was derived using regression kriging as suggested by Kuriakose et al. (doi:10.1016/j.catena.2009.05.005) and used along with the land use map of 2008 to distribute the computed root tensile strength both vertically and spatially. Root cohesion varies significantly with the type of land use and the depth of soil. The computation showed that a maximum root reinforcement of 40 kPa was available in the first 30 cm of soil, decreasing exponentially with depth to just about 3 kPa at 3 m depth. The mixed-crops land use unit had the maximum root cohesion, while fallow land, degraded forest and young rubber plantations had the lowest root reinforcement. These are the upper limits of root reinforcement that the vegetation can provide. When the soil is saturated, the bond between soil and roots weakens and thus the applicable root reinforcement is limited by the root pull-out strength. Root reinforcement estimated from pull-out strength vs diameter relationships was significantly lower than that estimated from tensile strength vs diameter relationships.
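The perpendicular root model cited above is usually applied in the simplified form c_r ≈ 1.2 Σ T_i (a_i/A), where T_i is the tensile strength of roots in diameter class i and a_i/A the corresponding root area ratio; the 1.2 factor bundles the soil friction and root inclination terms. A minimal sketch, with placeholder numbers rather than the measured Kerala values:

```python
import numpy as np

def wu_root_cohesion(tensile_strength_kpa, root_area_ratio, k=1.2):
    """Simplified Wu et al. (1979) model: c_r = k * sum(T_i * (a_i / A)).

    tensile_strength_kpa : per-diameter-class root tensile strength (kPa)
    root_area_ratio      : per-class root area ratio a_i/A (dimensionless)
    k                    : ~1.2, bundling friction and root-inclination terms
    """
    return k * np.sum(np.asarray(tensile_strength_kpa) * np.asarray(root_area_ratio))

# illustrative diameter classes for a shallow (0-30 cm) soil layer
T = [25e3, 18e3, 12e3]        # tensile strengths, kPa (i.e. 25, 18, 12 MPa)
rar = [4e-4, 6e-4, 3e-4]      # root area ratios
print(f"root cohesion ~ {wu_root_cohesion(T, rar):.1f} kPa")
```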
Estimation of contraction scour in riverbed using SERF
Jiang, J.; Ganju, N.K.; Mehta, A.J.
2004-01-01
Contraction scour in a firm-clay estuarine riverbed is estimated at an oil-unloading terminal at the Port of Haldia in India, where a scour hole attained a maximum depth greater than 5 m relative to the original bottom. A linear equation for the erosion flux as a function of the excess bed shear stress was semicalibrated in a rotating-cylinder device called SERF (Simulator of Erosion Rate Function) and coupled to a hydrodynamic code to simulate the hole as a clear-water scour process. SERF, whose essential design is based on previous such devices, additionally included a load cell for in situ and rapid measurement of the eroded sediment mass. Based on SERF's performance and the degree of comparison between measured and simulated hole geometry, it appears that this device holds promise as a simple tool for prediction of scour in firm-clay beds. © ASCE.
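The linear erosion-flux law referred to above is conventionally written E = M (τ_b − τ_c) for τ_b > τ_c, with M an erodibility coefficient and τ_c the critical shear stress; the sketch below uses placeholder values for both, not the SERF-calibrated ones.

```python
def erosion_flux(tau_b, tau_c=0.4, M=5e-4):
    """Linear excess-shear-stress law: E = M * (tau_b - tau_c) for tau_b > tau_c.

    tau_b : bed shear stress (Pa); tau_c and M are placeholder calibration values.
    Returns erosion flux in kg m^-2 s^-1.
    """
    return M * max(tau_b - tau_c, 0.0)

print(erosion_flux(1.2))   # ~4e-4 kg m^-2 s^-1 for a 1.2 Pa bed stress
```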
Factors governing water condensation in the Martian atmosphere
NASA Technical Reports Server (NTRS)
Colburn, David S.; Pollack, J. B.; Haberle, Robert M.
1988-01-01
Modeling results are presented suggesting a diurnal condensation cycle at high altitudes at some seasons and latitudes. A previous paper described the use of atmospheric optical depth measurements at the Viking lander site to show diurnal variability of water condensation at different seasons of the Mars year. Factors influencing the amount of condensation include latitude, season, atmospheric dust content and water vapor content at the observation site. A one-dimensional radiative-convective model is used herein, based on the diabatic heating routines under development for the Mars General Circulation Model. The model predicts atmospheric temperature profiles at any latitude, season, time of day and dust load. From these profiles and an estimate of the water vapor, one can estimate the maximum condensation, occurring at an early morning hour (AM), and the minimum, occurring in the late afternoon (PM). Measured variations in atmospheric optical depth between AM and PM measurements were interpreted as differences in AM and PM condensation.
Optimization of Crew Shielding Requirement in Reactor-Powered Lunar Surface Missions
NASA Technical Reports Server (NTRS)
Barghouty, A. F.
2007-01-01
On the surface of the moon and not only during heightened solar activities the radiation environment is such that crew protection will be required for missions lasting in excess of six months. This study focuses on estimating the optimized crew shielding requirement for lunar surface missions with a nuclear option. Simple, transport-simulation based dose-depth relations of the three radiation sources (galactic, solar, and fission) are employed in a one-dimensional optimization scheme. The scheme is developed to estimate the total required mass of lunar regolith separating reactor from crew. The scheme was applied to both solar maximum and minimum conditions. It is shown that savings of up to 30% in regolith mass can be realized. It is argued, however, that inherent variation and uncertainty mainly in lunar regolith attenuation properties in addition to the radiation quality factor can easily defeat this and similar optimization schemes.
NASA Astrophysics Data System (ADS)
Dogan, U.; Demir, D. O.; Cakir, Z.; Ergintav, S.; Cetin, S.; Ozdemir, A.; Reilinger, R. E.
2017-12-01
The 23 October 2011, Mw=7.2 Van Earthquake occurred in eastern Turkey on a thrust fault trending NE-SW and dipping to the north. We use GPS time series from survey and continuous stations to determine coseismic deformation and to identify spatial and temporal changes in the near and far field due to postseismic processes (2011-2017). The coseismic deformation in the near field is derived from GPS data collected at 25 cadastral GPS survey sites. The coseismic horizontal displacements reach nearly 50 cm close to the surface trace of the fault that ruptured at depth during the earthquake. The density and distribution of the GPS sites allow us to better constrain the extent of the coseismic rupture using elastic dislocations on triangular faults embedded in a homogeneous, elastic half space. Modeling studies suggest that the coseismic rupture stopped west of Erçek Lake before veering to the north. The estimated seismic moment is in good agreement with the seismologically and geodetically derived moments from the finite-fault model. Our preferred coseismic model consists of a simple elliptical slip patch centered at around 8 km depth with a maximum slip of about 2.5 m, consistent with previous estimates based on InSAR measurements. The postseismic deformation field is derived from far-field continuous GPS observations (10.2011 - 11.2017) and near-field GPS campaigns (10.2011 - 09.2015). The postseismic time series are fit better with a logarithmic than an exponential function, suggesting that the postseismic deformation is due to afterslip. We then updated our published postseismic model using this coseismic model and data sets extended to the end of 2017. The results show that during the 6 years following the earthquake, afterslip of up to 65 cm occurred at relatively shallow (< 10 km) depths, mostly above the deep coseismic slip that reaches depths > 15 km. New interpretations of the shallow afterslip also add further evidence that the surface break observed after the earthquake was caused by coseismic stress changes rather than representing the coseismic fault. (This study is supported by TUBITAK project no: 112Y109.) Keywords: Van earthquake, GPS, coseismic, postseismic, deformation, elastic modeling
Vertical groundwater flow estimated from the bomb pulse of 36Cl and tritiogenic 3He
NASA Astrophysics Data System (ADS)
Mahara, Y.; Ohta, T.
2011-12-01
A borehole was excavated to approximately 400 m depth from the ground surface on the tableland in the central Shimokita Peninsula, Japan. Fresh cores were sampled on site during excavation of the borehole to collect pore water. After the borehole was completed, groundwater samples were collected using a sampling device with a water-inflating packer system to prevent contamination. The atmospheric maximum of the bomb pulse in the northern hemisphere was reported in 1955 for 36Cl and in 1963 for 3H. Since the half-life of 36Cl is much longer than that of 3H, the decay loss of 36Cl was negligibly small over the short time until groundwater sampling in 2001 and 2003. On the other hand, the half-life of 3H is very short compared with that of 36Cl, and most of the 3H was converted into tritiogenic 3He in groundwater during the roughly 38 years after rainwater infiltrated toward the groundwater table. Profiles of dissolved 4He concentration, tritiogenic 3He and the 36Cl/Cl ratio were observed in groundwater from the borehole. The total dissolved 4He concentration ranged from 5.8×10-8 ccSTP/g at the ground surface to 7.5×10-8 ccSTP/g at 200 m depth below the ground surface and was almost in equilibrium with atmospheric 4He in pore water (Fig. 1). The bomb pulses of tritiogenic 3He and 36Cl were found at depths of 101 m and 132 m below the ground surface, respectively (Figs. 2 and 3). There was a slight difference in location between the bomb pulse of 36Cl and that of tritiogenic 3He. The downward flow velocity of groundwater was estimated to be 2.8 m/y from the position of the bomb-pulse peak in the 36Cl/Cl profile and 2.7 m/y from the position of the bomb-pulse peak of tritiogenic 3He. These two rough estimates agreed well with each other. The estimation suggests that the vertical flow of groundwater beneath the tableland can be approximated as downward piston flow with small diffusion and without turbulence.
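The velocities quoted above amount to dividing each tracer's peak depth by the time elapsed since its atmospheric maximum (piston-flow assumption). A one-function sketch using the depths and years given in the abstract:

```python
def piston_flow_velocity(peak_depth_m, sampling_year, pulse_year):
    """Downward piston-flow velocity from the depth of a tracer bomb-pulse peak."""
    return peak_depth_m / (sampling_year - pulse_year)

# 36Cl peak near 132 m (1955 atmospheric maximum) and 3He peak near 101 m (1963), sampled in 2001
print(piston_flow_velocity(132.0, 2001, 1955))   # ~2.9 m/yr, cf. the quoted 2.8 m/yr
print(piston_flow_velocity(101.0, 2001, 1963))   # ~2.7 m/yr, as quoted
```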
Balk, Benjamin; Elder, Kelly
2000-01-01
We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9‐km2 Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large‐scale variations in snow depth, while the small‐scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54–65% of the observed variance in the depth measurements. The tree‐based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree‐based modeled depths to produce a combined depth model. The combined depth estimates explained 60–85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow‐covered area was determined from high‐resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
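The combined tree-plus-kriging approach described above is essentially regression kriging: a decision tree captures the large-scale dependence on physiographic predictors and ordinary kriging interpolates its residuals. The skeleton below assumes pykrige as the kriging backend and uses synthetic placeholder data; it is not the LVWS workflow itself.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from pykrige.ok import OrdinaryKriging   # assumed kriging backend

rng = np.random.default_rng(3)
n = 300
x, y = rng.uniform(0, 2000, n), rng.uniform(0, 2000, n)          # site coordinates (m)
X = np.column_stack([rng.uniform(2800, 4000, n),                  # elevation (m)
                     rng.uniform(0, 40, n),                       # slope (deg)
                     rng.uniform(50, 300, n)])                    # net solar radiation (W m^-2)
depth = 0.002 * X[:, 0] - 0.03 * X[:, 1] - 0.004 * X[:, 2] \
        + 0.3 * np.sin(x / 400) + rng.normal(0, 0.1, n)           # synthetic snow depth (m)

# 1) large-scale structure with a (small) decision tree
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, depth)
resid = depth - tree.predict(X)

# 2) small-scale structure by ordinary kriging of the residuals
ok = OrdinaryKriging(x, y, resid, variogram_model="spherical")
resid_hat, _ = ok.execute("points", x, y)

combined = tree.predict(X) + np.asarray(resid_hat)
print("variance explained:", 1 - np.var(depth - combined) / np.var(depth))
```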
NASA Astrophysics Data System (ADS)
Malinverno, A.; Saito, S.
2013-12-01
Borehole breakouts are sub-vertical hole enlargements that form on opposite sides of the borehole wall by local rock failure due to non-uniform stress. In a vertical borehole, the breakout direction is perpendicular to the maximum principal horizontal stress. Hence, borehole breakouts are key indicators of the present state of stress in the subsurface. Borehole breakouts were imaged by logging-while drilling (LWD) measurements collected in the Costa Rica Seismogenesis Project (CRISP, IODP Expedition 334). The borehole radius was estimated from azimuthal LWD density and ultrasonic measurements. The density-based borehole radius is based on the difference in scattered gamma rays measured by a near and a far detector, which is a function of the standoff between the tool and the borehole. Borehole radius can also be measured from the travel time of an ultrasonic wave reflected by the borehole wall. Density and ultrasonic measurements are sampled in 16 azimuthal sectors, i.e., every 22.5°. These measurements are processed to generate images that fully cover the borehole wall and that display borehole breakouts as two parallel, vertical bands of large hole radius 180° apart. For a quantitative interpretation, we fitted a simple borehole shape to the measured borehole radii using a Monte Carlo sampling algorithm that quantifies the uncertainty in the estimated borehole shape. The borehole shape is the outer boundary of a figure consisting of a concentric circle and an ellipse. The ellipse defines the width, depth, and orientation of the breakouts. We fitted the measured radii in 2 m depth intervals and identified reliable breakouts where the breakout depth was significant and where the orientation uncertainty and the angle spanned by the breakout were small. The results show breakout orientations that differ by about 90° in Sites U1378 (about 15 km landward of the deformation front, 525 m water depth) and U1379 (about 25 km landward of the deformation front, 126 m water depth). The maximum principal horizontal stress is directed NNE-SSW at Site U1378 and WSW-ENE at Site U1379. These directions are approximately parallel and perpendicular to NNE-directed GPS deformation vectors on land. On erosive convergent margins, a transition is expected to take place from a compressive regime near a frontal wedge to extension and subsidence moving landward of the deformation front. Our working hypothesis is that this transition may take place between Sites U1378, where the breakout orientation is consistent with NNE-SSW compression, and Site U1379, where the breakouts indicate NNE-SSW extension.
Fluid injection and induced seismicity
NASA Astrophysics Data System (ADS)
Kendall, Michael; Verdon, James
2016-04-01
The link between fluid injection, or extraction, and induced seismicity has been observed in reservoirs for many decades. In fact, spatial mapping of low-magnitude events is routinely used to estimate a stimulated reservoir volume. However, the link between subsurface fluid injection and larger felt seismicity is less clear and has attracted recent interest with a dramatic increase in earthquakes associated with the disposal of oilfield waste fluids. In a few cases, hydraulic fracturing has also been linked to induced seismicity. Much can be learned from past case studies of induced seismicity so that we can better understand the risks posed. Here we examine 12 case examples and consider in particular controls on maximum event size, lateral event distributions, and event depths. Our results suggest that injection volume is a better control on maximum magnitude than past, natural seismicity in a region. This might, however, simply reflect the lack of baseline monitoring and/or long-term seismic records in certain regions. To address this in the UK, the British Geological Survey is leading the deployment of monitoring arrays in prospective shale gas areas in Lancashire and Yorkshire. In most cases, seismicity is generally located in close vicinity to the injection site. However, in some cases, the nearest events are up to 5 km from the injection point. This gives an indication of the minimum radius of influence of such fluid injection projects. The most distant events are never more than 20 km from the injection point, perhaps implying a maximum radius of influence. Some events are located in the target reservoir, but most occur below the injection depth. In fact, most events lie in the crystalline basement underlying the sedimentary rocks. This suggests that induced seismicity may not pose a leakage risk for fluid migration back to the surface, as it does not impact caprock integrity. A useful application of microseismic data is to try to forecast induced seismicity during injection, with the aim of mitigating large induced events before they happen. Microseismic event population statistics can be used to make forecasts about the future maximum event magnitude as the injection program continues. By making such forecasts, mitigating actions may be possible if forecast maximum magnitudes exceed a predefined limit.
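One published way to express the volume control on maximum magnitude mentioned above is McGarr's (2014) bound, M0,max ≈ G ΔV, with G the shear modulus and ΔV the injected volume. The sketch below illustrates that bound only; it is not the forecasting approach used by the authors, and the shear modulus is an assumed typical crustal value.

```python
import numpy as np

def mcgarr_max_magnitude(delta_v_m3, shear_modulus_pa=3.0e10):
    """Upper-bound moment magnitude from injected volume (McGarr, 2014)."""
    m0_max = shear_modulus_pa * delta_v_m3          # seismic moment, N m
    return (2.0 / 3.0) * (np.log10(m0_max) - 9.1)   # moment magnitude (Mw)

for v in [1e4, 1e5, 1e6]:    # injected volumes in m^3
    print(f"dV = {v:.0e} m^3  ->  Mw_max ~ {mcgarr_max_magnitude(v):.1f}")
```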
3D depth-to-basement and density contrast estimates using gravity and borehole data
NASA Astrophysics Data System (ADS)
Barbosa, V. C.; Martins, C. M.; Silva, J. B.
2009-05-01
We present a gravity inversion method for simultaneously estimating the 3D basement relief of a sedimentary basin and the parameters defining the parabolic decay of the density contrast with depth in a sedimentary pack assuming the prior knowledge about the basement depth at a few points. The sedimentary pack is approximated by a grid of 3D vertical prisms juxtaposed in both horizontal directions, x and y, of a right-handed coordinate system. The prisms' thicknesses represent the depths to the basement and are the parameters to be estimated from the gravity data. To produce stable depth-to-basement estimates we impose smoothness on the basement depths through minimization of the spatial derivatives of the parameters in the x and y directions. To estimate the parameters defining the parabolic decay of the density contrast with depth we mapped a functional containing prior information about the basement depths at a few points. We apply our method to synthetic data from a simulated complex 3D basement relief with two sedimentary sections having distinct parabolic laws describing the density contrast variation with depth. Our method retrieves the true parameters of the parabolic law of density contrast decay with depth and produces good estimates of the basement relief if the number and the distribution of boreholes are sufficient. We also applied our method to real gravity data from the onshore and part of the shallow offshore Almada Basin, on Brazil's northeastern coast. The estimated 3D Almada's basement shows geologic structures that cannot be easily inferred just from the inspection of the gravity anomaly. The estimated Almada relief presents steep borders evidencing the presence of gravity faults. Also, we note the existence of three terraces separating two local subbasins. These geologic features are consistent with Almada's geodynamic origin (the Mesozoic breakup of Gondwana and the opening of the South Atlantic Ocean) and they are important in understanding the basin evolution and in detecting structural oil traps.
Formulating the shear stress distribution in circular open channels based on the Renyi entropy
NASA Astrophysics Data System (ADS)
Khozani, Zohreh Sheikh; Bonakdari, Hossein
2018-01-01
The principle of maximum entropy is employed to derive the shear stress distribution by maximizing the Renyi entropy subject to some constraints and by assuming that dimensionless shear stress is a random variable. A Renyi entropy-based equation can be used to model the shear stress distribution along the entire wetted perimeter of circular channels and circular channels with flat beds and deposited sediments. A wide range of experimental results for 12 hydraulic conditions with different Froude numbers (0.375 to 1.71) and flow depths (20.3 to 201.5 mm) were used to validate the derived shear stress distribution. For circular channels, model performance enhanced with increasing flow depth (mean relative error (RE) of 0.0414) and only deteriorated slightly at the greatest flow depth (RE of 0.0573). For circular channels with flat beds, the Renyi entropy model predicted the shear stress distribution well at lower sediment depth. The Renyi entropy model results were also compared with Shannon entropy model results. Both models performed well for circular channels, but for circular channels with flat beds the Renyi entropy model displayed superior performance in estimating the shear stress distribution. The Renyi entropy model was highly precise and predicted the shear stress distribution in a circular channel with RE of 0.0480 and in a circular channel with a flat bed with RE of 0.0488.
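For reference, the Rényi entropy of order α that is maximized here is, in its standard continuous form (the authors' specific constraints are not reproduced), with the Shannon entropy recovered in the limit α → 1:

```latex
H_{\alpha}(f) = \frac{1}{1-\alpha}\,\ln\!\int f(\tau)^{\alpha}\, d\tau,
\qquad \alpha > 0,\ \alpha \neq 1,
\qquad \lim_{\alpha \to 1} H_{\alpha}(f) = -\int f(\tau)\,\ln f(\tau)\, d\tau .
```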
Depth of interaction decoding of a continuous crystal detector module.
Ling, T; Lewellen, T K; Miyaoka, R S
2007-04-21
We present a clustering method to extract the depth of interaction (DOI) information from an 8 mm thick crystal version of our continuous miniature crystal element (cMiCE) small animal PET detector. This clustering method, based on the maximum-likelihood (ML) method, can effectively build look-up tables (LUT) for different DOI regions. Combined with our statistics-based positioning (SBP) method, which uses a LUT searching algorithm based on the ML method and two-dimensional mean-variance LUTs of light responses from each photomultiplier channel with respect to different gamma ray interaction positions, the position of interaction and DOI can be estimated simultaneously. Data simulated using DETECT2000 were used to help validate our approach. An experiment using our cMiCE detector was designed to evaluate the performance. Two and four DOI region clustering were applied to the simulated data. Two DOI regions were used for the experimental data. The misclassification rate for simulated data is about 3.5% for two DOI regions and 10.2% for four DOI regions. For the experimental data, the rate is estimated to be approximately 25%. By using multi-DOI LUTs, we also observed improvement of the detector spatial resolution, especially for the corner region of the crystal. These results show that our ML clustering method is a consistent and reliable way to characterize DOI in a continuous crystal detector without requiring any modifications to the crystal or detector front end electronics. The ability to characterize the depth-dependent light response function from measured data is a major step forward in developing practical detectors with DOI positioning capability.
Reiner, S.R.; Laczniak, R.J.; DeMeo, G.A.; Smith, J. LaRue; Elliott, P.E.; Nylund, W.E.; Fridrich, C.J.
2002-01-01
Oasis Valley is an area of natural ground-water discharge within the Death Valley regional ground-water flow system of southern Nevada and adjacent California. Ground water discharging at Oasis Valley is replenished from inflow derived from an extensive recharge area that includes the northwestern part of the Nevada Test Site (NTS). Because nuclear testing has introduced radionuclides into the subsurface of the NTS, the U.S. Department of Energy currently is investigating the potential transport of these radionuclides by ground water flow. To better evaluate any potential risk associated with these test-generated contaminants, a number of studies were undertaken to accurately quantify discharge from areas downgradient in the regional ground-water flow system from the NTS. This report refines the estimate of ground-water discharge from Oasis Valley. Ground-water discharge from Oasis Valley was estimated by quantifying evapotranspiration (ET), estimating subsurface outflow, and compiling ground-water withdrawal data. ET was quantified by identifying areas of ongoing ground-water ET, delineating areas of ET defined on the basis of similarities in vegetation and soil-moisture conditions, and computing ET rates for each of the delineated areas. A classification technique using spectral-reflectance characteristics determined from satellite imagery acquired in 1992 identified eight unique areas of ground-water ET. These areas encompass about 3,426 acres of sparsely to densely vegetated grassland, shrubland, wetland, and open water. Annual ET rates in Oasis Valley were computed with energy-budget methods using micrometeorological data collected at five sites. ET rates range from 0.6 foot per year in a sparse, dry saltgrass environment to 3.1 feet per year in dense meadow vegetation. Mean annual ET from Oasis Valley is estimated to be about 7,800 acre-feet. Mean annual ground-water discharge by ET from Oasis Valley, determined by removing the annual local precipitation component of 0.5 foot, is estimated to be about 6,000 acre-feet. Annual subsurface outflow from Oasis Valley into the Amargosa Desert is estimated to be between 30 and 130 acre-feet. Estimates of total annual ground-water withdrawal from Oasis Valley by municipal and non-municipal users in 1996 and 1999 are 440 acre-feet and 210 acre-feet, respectively. Based on these values, natural annual ground-water discharge from Oasis Valley is about 6,100 acre-feet. Total annual discharge was 6,500 acre-ft in 1996 and 6,300 acre-ft in 1999. This quantity of natural ground-water discharge from Oasis Valley exceeds the previous estimate made in 1962 by a factor of about 2.5. Water levels were measured in Oasis Valley to gain additional insight into the ET process. In shallow wells, water levels showed annual fluctuations as large as 7 feet and daily fluctuations as large as 0.2 foot. These fluctuations may be attributed to water loss associated with evapotranspiration. In shallow wells affected by ET, annual minimum depths to water generally occurred in winter or early spring shortly after daily ET reached minimum rates. Annual maximum depths to water generally occurred in late summer or fall shortly after daily ET reached maximum rates. The magnitude of daily water-level fluctuations generally increased as ET increased and decreased as depth to water increased.
Quantitative subsurface analysis using frequency modulated thermal wave imaging
NASA Astrophysics Data System (ADS)
Subhani, S. K.; Suresh, B.; Ghali, V. S.
2018-01-01
Quantitative estimation of the depth of a subsurface anomaly with enhanced depth resolution is a challenging task in thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and analyzing the subsequent thermal response with a suitable post-processing approach to resolve subsurface details. Conventional Fourier-transform-based post-processing, however, separates the frequencies with limited frequency resolution and therefore yields a finite depth resolution. Spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which can further improve the depth resolution and allow the finest subsurface features to be explored axially. Quantitative depth analysis with this augmented depth resolution is proposed to provide a closer estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first solution for quantitative depth estimation in frequency modulated thermal wave imaging.
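The spectral-zoom idea can be illustrated with SciPy's chirp z-transform helpers; zoom_fft (available in recent SciPy releases, so treat the exact API as an assumption) samples a narrow band of the spectrum far more densely than the ordinary FFT grid, which sharpens peak localization even though the fundamental resolution is still set by the record length.

```python
import numpy as np
from scipy.signal import zoom_fft   # chirp z-transform based band zoom (recent SciPy)

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 10.13 * t)          # a "thermal response" tone at 10.13 Hz

# plain FFT over 2 s has 0.5 Hz bin spacing -> nearest bin is 10.0 Hz
coarse = np.fft.rfft(x)
f_coarse = np.fft.rfftfreq(x.size, 1 / fs)
print("coarse peak:", f_coarse[np.argmax(np.abs(coarse))])

# chirp z-transform zoom: sample the 9-12 Hz band on 3000 points (~1 mHz spacing)
zoomed = zoom_fft(x, [9.0, 12.0], m=3000, fs=fs)
f_zoom = 9.0 + np.arange(3000) * (12.0 - 9.0) / 3000    # bin centres (endpoint excluded)
print("zoomed peak:", f_zoom[np.argmax(np.abs(zoomed))])
```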
Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed
Balk, B.; Elder, K.; Baron, Jill S.
1998-01-01
Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km2), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates that have minimum variances. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.
NASA Astrophysics Data System (ADS)
Valchev, Nikolay; Eftimova, Petya; Andreeva, Nataliya; Prodanov, Bogdan
2017-04-01
The coastal zone is among the fastest evolving areas worldwide. The ever-increasing population inhabiting coastal settlements develops often conflicting economic and societal activities. The existing imbalance between the expansion of these activities, on one hand, and the potential to accommodate them in a sustainable manner, on the other, becomes a critical problem. Concurrently, coasts are affected by various hydro-meteorological phenomena such as storm surges, heavy seas, strong winds and flash floods, whose intensities and occurrence frequencies are likely to increase due to climate change. This calls for tools capable of quickly predicting the impact of these phenomena on the coast and of suggesting disaster risk reduction measures. One such tool is the Bayesian network (BN). This paper describes the set-up of such a network for Varna Bay (Bulgaria, Western Black Sea). It relates near-shore storm conditions to their onshore flood potential and ultimately to their impact as relative damage to the coastal and man-made environment. The methodology for setting up and training the Bayesian network was developed within the RISC-KIT project (Resilience-Increasing Strategies for Coasts - toolKIT). The proposed BN reflects the interaction between boundary conditions, receptors, hazard, and consequences. Storm boundary conditions - maximum significant wave height and peak surge level - were determined on the basis of their historical and projected occurrence. The only hazard considered in this study is flooding, characterized by maximum inundation depth. The BN was trained with synthetic events created by combining estimated boundary conditions. Flood impact was modeled with the process-based morphodynamical model XBeach. Restaurants, sport and leisure facilities, administrative buildings, and car parks were introduced in the network as receptors. Consequences (impact) are estimated in terms of the relative damage caused by a given inundation depth. National depth-damage (susceptibility) curves were used to define the percentage of damage ranked as low, moderate, high and very high. Besides the components described above, the BN also includes two hazard-influencing disaster risk reduction (DRR) measures: a reinforced embankment of the Varna Port wall and beach nourishment. As a result of the training process, the network is able to evaluate spatially varying hazards and damages for specific storm conditions. Moreover, it is able to predict where on the site the highest impact would occur and to quantify the mitigation capacity of the proposed DRR measures. For example, it is estimated that storm impact would be considerably reduced under present conditions, but vulnerability would remain high under a climate change scenario.
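A toy discrete Bayesian network wiring storm boundary conditions to inundation and relative damage is sketched below using pgmpy (assumed library; class names vary across versions). The node names, state cardinalities and probabilities are invented placeholders, not the trained Varna Bay network.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Surge, Wave -> Flood -> Damage : a toy stand-in for the trained coastal-risk network
model = BayesianNetwork([("Surge", "Flood"), ("Wave", "Flood"), ("Flood", "Damage")])

cpd_surge = TabularCPD("Surge", 2, [[0.8], [0.2]])                    # low / high
cpd_wave = TabularCPD("Wave", 2, [[0.7], [0.3]])                      # low / high
cpd_flood = TabularCPD("Flood", 2,                                    # shallow / deep
                       [[0.95, 0.7, 0.6, 0.2],                        # P(shallow | Surge, Wave)
                        [0.05, 0.3, 0.4, 0.8]],
                       evidence=["Surge", "Wave"], evidence_card=[2, 2])
cpd_damage = TabularCPD("Damage", 3,                                  # low / moderate / high
                        [[0.85, 0.2],
                         [0.10, 0.4],
                         [0.05, 0.4]],
                        evidence=["Flood"], evidence_card=[2])

model.add_cpds(cpd_surge, cpd_wave, cpd_flood, cpd_damage)
assert model.check_model()

infer = VariableElimination(model)
print(infer.query(["Damage"], evidence={"Surge": 1, "Wave": 1}))
```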
Dynamic Propagation Channel Characterization and Modeling for Human Body Communication
Nie, Zedong; Ma, Jingjing; Li, Zhicheng; Chen, Hong; Wang, Lei
2012-01-01
This paper presents the first characterization and modeling of dynamic propagation channels for human body communication (HBC). In-situ experiments were performed using customized transceivers in an anechoic chamber. Three HBC propagation channels, i.e., from right leg to left leg, from right hand to left hand and from right hand to left leg, were investigated under thirty-three motion scenarios. Snapshots of data (2,800,000) were acquired from five volunteers. Various path gains caused by different locations and movements were quantified and the statistical distributions were estimated. In general, for a given reference threshold of −10 dB, the maximum average level crossing rate of the HBC was approximately 1.99 Hz, the maximum average fade time was 59.4 ms, and the percentage of bad channel duration time was less than 4.16%. The HBC exhibited a fade depth of −4 dB at 90% complementary cumulative probability. The statistical parameters were observed to be centered for each propagation channel. Subsequently a Fritchman model was implemented to estimate the burst characteristics of the on-body fading. It was concluded that the HBC is motion-insensitive, which is sufficient for a reliable communication link during motions, and therefore it has great potential for body sensor/area networks. PMID:23250278
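The level crossing rate and average fade duration reported above are standard quantities computed from a sampled path-gain record; a generic sketch on synthetic data with an arbitrary threshold is shown below. It is not the authors' measurement pipeline.

```python
import numpy as np

def lcr_and_afd(gain_db, threshold_db, fs):
    """Level crossing rate (Hz) and average fade duration (s) of a sampled channel."""
    below = gain_db < threshold_db
    # downward crossings: a sample above the threshold followed by a sample below it
    crossings = np.count_nonzero(~below[:-1] & below[1:])
    duration = gain_db.size / fs
    lcr = crossings / duration
    afd = (below.sum() / fs) / crossings if crossings else 0.0
    return lcr, afd

# synthetic slowly fading path gain sampled at 100 Hz
rng = np.random.default_rng(4)
fs = 100.0
t = np.arange(0, 60, 1 / fs)
gain = -5 + 4 * np.sin(2 * np.pi * 0.5 * t) + rng.normal(0, 0.5, t.size)
print(lcr_and_afd(gain, threshold_db=-10.0, fs=fs))
```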
Characterising primary productivity measurements across a dynamic western boundary current region
NASA Astrophysics Data System (ADS)
Everett, Jason D.; Doblin, Martina A.
2015-06-01
Determining the magnitude of primary production (PP) in a changing ocean is a major research challenge. Thousands of estimates of marine PP exist globally, but there remain significant gaps in data availability, particularly in the Southern Hemisphere. In situ PP estimates are generally single-point measurements and therefore we rely on satellite models of PP in order to scale up over time and space. To reduce the uncertainty around the model output, these models need to be assessed against in situ measurements before use. This study examined the vertically-integrated productivity in four water-masses associated with the East Australian Current (EAC), the major western boundary current (WBC) of the South Pacific. We calculated vertically integrated PP from shipboard 14C PP estimates and then compared them to estimates from four commonly used satellite models (ESQRT, VGPM, VGPM-Eppley, VGPM-Kameda) to assess their utility for this region. Vertical profiles of the water-column show each water-mass had distinct temperature-salinity signatures. The depth of the fluorescence-maximum (fmax) increased from onshore (river plume) to offshore (EAC) as light penetration increased. Depth integrated PP was highest in river plumes (792±181 mg C m-2 d-1) followed by the EAC (534±116 mg C m-2 d-1), continental shelf (140±47 mg C m-2 d-1) and cyclonic eddy waters (121±4 mg C m-2 d-1). Surface carbon assimilation efficiency was greatest in the EAC (301±145 mg C (mg Chl-a)-1 d-1) compared to other water masses. All satellite primary production models tested underestimated EAC PP and overestimated continental shelf PP. The ESQRT model had the highest skill and lowest bias of the tested models, providing the best first-order estimates of PP on the continental shelf, including at a coastal time-series station, Port Hacking, which showed considerable inter-annual variability (155-2957 mg C m-2 d-1). This work provides the first estimates of depth integrated PP associated with the East Australian Current in temperate Australia. The ongoing intensification of all WBCs makes it critical to understand the variability in PP at the regional scale. More accurate predictions in the EAC region will require vertically-resolved in situ productivity and bio-optical measurements across multiple time scales to allow development of other models which simulate dynamic ocean conditions.
Assessing New and Old Methods in Paleomagnetic Paleothermometry: A Test Case at Mt. St. Helens, USA
NASA Astrophysics Data System (ADS)
Bowles, J. A.; Gerzich, D.; Jackson, M. J.
2017-12-01
Paleomagnetic data can be used to estimate deposit temperatures (Tdep) of pyroclastic density currents (PDCs). The typical method is to thermally demagnetize oriented lithic clasts incorporated into the PDC. If Tdep is less than the maximum Curie temperature (Tc), the clast is partially remagnetized in the PDC, and the unblocking temperature (Tub) at which this remagnetization is removed is an estimate of Tdep. In principle, juvenile clasts can also be used, and Tub-max is taken as the minimum Tdep. This all assumes blocking (Tb) and unblocking temperatures are equivalent and that the blocking spectrum remains constant through time. Recent evidence shows that Tc in many titanomagnetites is a strong function of thermal history due to a crystal-chemical reordering process. We therefore undertake a study designed to test some of these assumptions and to assess the extent to which the method may be biased by a Tb spectrum that shifts to higher T during cooling. We also explore a new magnetic technique that relies only on stratigraphic variations in Tc. Samples are from the May 18, 1980 PDCs at Mt. St. Helens, USA. Direct temperature measurements of the deposits were 297 - 367°C. At sites with oriented lithics, standard methods provide a Tdep range that overlaps with measured temperatures, but is systematically higher by a few 10s of °C. By contrast, pumice clasts all give Tdep_min estimates that greatly exceed lithic estimates and measured temperatures. We attribute this overestimate to two causes: 1) Tc and Tub systematically increase with depth as a result of the reordering process. This results in Tdep_min estimates that vary by 50°C and increase with depth. 2) MSH pumice is multi-domain, where Tub > Tb, resulting in a large overestimate in Tdep. At 5 sites, stratigraphic variations in Tc were conservatively interpreted in terms of Tdep as <300°C or >300°C. More sophisticated modeling of the time-temperature-depth evolution of Tc allows us to place tighter constraints on some deposits, and our preliminary interpretation suggests that PDC pulses became successively hotter throughout the day. This new method allows us to evaluate subtle temporal/spatial variabilities that may not be evident from direct measurements made at the surface. It also allows Tdep estimates to be made on PDCs where no lithic clasts are present.
Microtremors study applying the SPAC method in Colima state, Mexico.
NASA Astrophysics Data System (ADS)
Vázquez Rosas, R.; Aguirre González, J.; Mijares Arellano, H.
2007-05-01
One of the main parts of seismic risk studies is to determine the site effect. This can be estimated by means of microtremor measurements. From the H/V spectral ratio (Nakamura, 1989), the predominant period of the site can be estimated. However, the predominant period by itself cannot represent the site effect over a wide range of frequencies and does not provide information about the stratigraphy. The SPAC method (Spatial Auto-Correlation Method, Aki 1957), on the other hand, is useful for estimating the stratigraphy of the site. It is based on the simultaneous recording of microtremors at several stations deployed in an instrumental array. By computing the spatial autocorrelation coefficients, the Rayleigh wave dispersion curve can be obtained. Finally the stratigraphic model (thickness, S and P wave velocity, and density of each layer) is estimated by fitting the theoretical dispersion curve to the observed one. The theoretical dispersion curve is initially computed using a proposed model, which is modified several times until the theoretical curve fits the observations. This method requires a minimum of three stations recording microtremors simultaneously. We applied the SPAC method to six sites in Colima state, Mexico: Santa Barbara, Cerro de Ortega, Tecoman, Manzanillo and two sites in Colima city. In total, 16 arrays were deployed using equilateral triangles with apertures ranging from 5 m to 60 m. For recording microtremors we used short-period (5 s) velocity-type vertical sensors connected to a K2 (Kinemetrics) acquisition system. We could estimate the velocities of the most superficial layers, reaching different depths at each site. For the Santa Bárbara site the exploration depth was about 30 m, for Tecoman 12 m, for Manzanillo 35 m, for Cerro de Ortega 68 m, and the deepest exploration was obtained in Colima city, at around 73 m. The S wave velocities fluctuate between 230 m/s and 420 m/s for the most superficial layer, meaning that, in general, the most superficial layers are quite competent. The superficial layer with the smallest S wave velocity was observed in Tecoman, while that with the largest S wave velocity was observed in Cerro de Ortega. Our estimates are consistent with down-hole velocity records obtained in Santa Barbara by previous studies.
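At the core of the SPAC method is the relation that the azimuthally averaged autocorrelation (coherency) coefficient for a station separation r satisfies ρ(f, r) = J0(2πfr/c(f)), which can be inverted for the phase velocity c(f) frequency by frequency. A minimal sketch of that inversion on a synthetic coefficient (not the field-processing chain, and with an assumed velocity search bracket):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0

def spac_phase_velocity(freq, rho_obs, r, c_min=100.0, c_max=3000.0):
    """Invert the SPAC relation rho = J0(2*pi*f*r/c) for phase velocity c at one frequency.

    The search bracket [c_min, c_max] is an assumption; it should keep the Bessel
    argument on the first (monotonic) branch of J0 for the inversion to be unique.
    """
    def misfit(c):
        return j0(2 * np.pi * freq * r / c) - rho_obs
    return brentq(misfit, c_min, c_max)

# synthetic check: a 5 m array aperture and a true phase velocity of 300 m/s at 8 Hz
r, c_true, f = 5.0, 300.0, 8.0
rho = j0(2 * np.pi * f * r / c_true)
print(spac_phase_velocity(f, rho, r))   # should recover ~300 m/s
```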
Wong, Florence L.; Phillips, Eleyne L.; Johnson, Samuel Y.; Sliter, Ray W.
2012-01-01
Models of the depth to the base of Last Glacial Maximum and sediment thickness over the base of Last Glacial Maximum for the eastern Santa Barbara Channel are a key part of the maps of shallow subsurface geology and structure for offshore Refugio to Hueneme Canyon, California, in the California State Waters Map Series. A satisfactory interpolation of the two datasets that accounted for regional geologic structure was developed using geographic information systems modeling and graphics software tools. Regional sediment volumes were determined from the model. Source data files suitable for geographic information systems mapping applications are provided.
Radiocarbon constraints on the glacial ocean circulation and its impact on atmospheric CO2
NASA Astrophysics Data System (ADS)
Skinner, L. C.; Primeau, F.; Freeman, E.; de La Fuente, M.; Goodwin, P. A.; Gottschalk, J.; Huang, E.; McCave, I. N.; Noble, T. L.; Scrivner, A. E.
2017-07-01
While the ocean's large-scale overturning circulation is thought to have been significantly different under the climatic conditions of the Last Glacial Maximum (LGM), the exact nature of the glacial circulation and its implications for global carbon cycling continue to be debated. Here we use a global array of ocean-atmosphere radiocarbon disequilibrium estimates to demonstrate a ~689+/-53 14C-yr increase in the average residence time of carbon in the deep ocean at the LGM. A predominantly southern-sourced abyssal overturning limb that was more isolated from its shallower northern counterparts is interpreted to have extended from the Southern Ocean, producing a widespread radiocarbon age maximum at mid-depths and depriving the deep ocean of a fast escape route for accumulating respired carbon. While the exact magnitude of the resulting carbon cycle impacts remains to be confirmed, the radiocarbon data suggest an increase in the efficiency of the biological carbon pump that could have accounted for as much as half of the glacial-interglacial CO2 change.
Improving Focal Depth Estimates: Studies of Depth Phase Detection at Regional Distances
NASA Astrophysics Data System (ADS)
Stroujkova, A.; Reiter, D. T.; Shumway, R. H.
2006-12-01
The accurate estimation of the depth of small, regionally recorded events continues to be an important and difficult explosion monitoring research problem. Depth phases (free surface reflections) are the primary tool that seismologists use to constrain the depth of a seismic event. When depth phases from an event are detected, an accurate source depth is easily found by using the delay times of the depth phases relative to the P wave and a velocity profile near the source. Cepstral techniques, including cepstral F-statistics, represent a class of methods designed for depth-phase detection and identification; however, they offer only a moderate level of success at epicentral distances less than 15°. This is due to complexities in the Pn coda, which can lead to numerous false detections in addition to the true phase detection. Therefore, cepstral methods cannot be used independently to reliably identify depth phases. Other evidence, such as apparent velocities, amplitudes and frequency content, must be used to confirm whether the phase is truly a depth phase. In this study we used a variety of array methods to estimate apparent phase velocities and arrival azimuths, including beam-forming, semblance analysis, MUltiple SIgnal Classification (MUSIC) (e.g., Schmidt, 1979), and cross-correlation (e.g., Cansi, 1995; Tibuleac and Herrin, 1997). To facilitate the processing and comparison of results, we developed a MATLAB-based processing tool, which allows application of all of these techniques (i.e., augmented cepstral processing) in a single environment. The main objective of this research was to combine the results of three focal-depth estimation techniques and their associated standard errors into a statistically valid unified depth estimate. The three techniques include: 1. Direct focal depth estimate from the depth-phase arrival times picked via augmented cepstral processing. 2. Hypocenter location from direct and surface-reflected arrivals observed on sparse networks of regional stations using a Grid-search, Multiple-Event Location method (GMEL; Rodi and Toksöz, 2000; 2001). 3. Surface-wave dispersion inversion for event depth and focal mechanism (Herrmann and Ammon, 2002). To validate our approach and provide quality control for our solutions, we applied the techniques to moderate-sized events (mb between 4.5 and 6.0) with known focal mechanisms. We illustrate the techniques using events observed at regional distances from the KSAR (Wonju, South Korea) teleseismic array and other nearby broadband three-component stations. Our results indicate that the techniques can produce excellent agreement between the various depth estimates. In addition, combining the techniques into a "unified" estimate greatly reduced location errors and improved robustness of the solution, even if results from the individual methods yielded large standard errors.
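As a minimal illustration of why a detected depth phase constrains depth so directly, the sketch below converts a pP-P delay time into a focal depth for a uniform layer above the source; the velocity, delay and takeoff angle are illustrative, and this is not the authors' GMEL or cepstral software.

```python
# Depth from a pP-P delay in a simple half-space above the source:
#   t_pP - t_P ~= 2 * h * cos(i) / v_p
import math

def focal_depth_from_pP(delay_s, v_p_km_s, takeoff_deg):
    """Focal depth (km) from the pP-P delay time.

    delay_s     : observed pP - P delay (s), e.g. from cepstral analysis
    v_p_km_s    : P velocity in the layer above the source (km/s)
    takeoff_deg : takeoff angle of the downgoing P ray (degrees from vertical)
    """
    return delay_s * v_p_km_s / (2.0 * math.cos(math.radians(takeoff_deg)))

# Illustrative numbers: a 3.2 s delay, 6.0 km/s crust, 25 degree takeoff angle
print(focal_depth_from_pP(3.2, 6.0, 25.0))   # ~10.6 km
```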
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its favorable asymptotic properties. In particular, the estimator is consistent as the sample size increases to infinity, which makes maximum likelihood estimation asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood have the smallest asymptotic variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results indicate a negative relationship between rubber price and exchange rate for all selected countries.
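A minimal sketch of the estimation step is given below: a two-component Gaussian mixture fitted by maximum likelihood through the EM algorithm. The synthetic data stand in for the rubber-price/exchange-rate series used in the paper, and the specific mixture form is an assumption for illustration.

```python
# Two-component Gaussian mixture fitted by maximum likelihood via EM.
import numpy as np

def em_two_component(x, n_iter=200):
    """Return (weights, means, stds) of a 2-component Gaussian mixture."""
    # crude initialisation from the sample quantiles
    w = np.array([0.5, 0.5])
    mu = np.quantile(x, [0.25, 0.75])
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = np.array([w[k] / (sd[k] * np.sqrt(2 * np.pi))
                         * np.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
                         for k in range(2)])
        resp = dens / dens.sum(axis=0)
        # M-step: update weights, means and standard deviations
        n_k = resp.sum(axis=1)
        w = n_k / len(x)
        mu = (resp * x).sum(axis=1) / n_k
        sd = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / n_k)
    return w, mu, sd

# Synthetic stand-in data with two regimes
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1, 0.5, 300), rng.normal(2, 1.0, 700)])
print(em_two_component(x))
```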
Komiskey, Matthew J.; Stuntebeck, Todd D.; Cox, Amanda L.; Frame, Dennis R.
2013-01-01
The effects of longitudinal slope on the estimation of discharge in a 0.762-meter (m) (depth at flume entrance) H flume were tested under controlled conditions with slopes from −8 to +8 percent and discharges from 1.2 to 323 liters per second. Compared to the stage-discharge rating for a longitudinal flume slope of zero, computed discharges were negatively biased (maximum −31 percent) when the flume was sloped downward from the front (entrance) to the back (exit), and positively biased (maximum 44 percent) when the flume was sloped upward. Biases increased with greater flume slopes and with lower discharges. A linear empirical relation was developed to compute a corrected reference stage for a 0.762-m H flume using measured stage and flume slope. The reference stage was then used to determine a corrected discharge from the stage-discharge rating. A dimensionally homogeneous correction equation also was developed, which could theoretically be used for all standard H-flume sizes. Use of the corrected discharge computation method for a sloped H flume was determined to have errors ranging from −2.2 to 4.6 percent compared to the H-flume measured discharge at a level position. These results emphasize the importance of the measurement of and the correction for flume slope during an edge-of-field study if the most accurate discharge estimates are desired.
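The correction workflow can be sketched as follows, with the caveat that the slope coefficient and the rating function are placeholders rather than the report's fitted relation or the published H-flume rating.

```python
# Sketch of the two-step correction: adjust the measured stage for flume slope,
# then look up discharge from the stage-discharge rating.  K_SLOPE and the
# power-law rating are hypothetical placeholders, not the report's values.
K_SLOPE = 0.01   # hypothetical correction coefficient per percent of slope

def corrected_stage(stage_m, slope_pct):
    """Corrected reference stage (m) from measured stage and flume slope (%).
    The sign and magnitude of K_SLOPE would come from the report's calibration."""
    return stage_m * (1.0 + K_SLOPE * slope_pct)

def h_flume_discharge(stage_m):
    """Placeholder rating for a 0.762-m H flume (L/s); a real application would
    use the published H-flume rating table or equation."""
    return 1000.0 * stage_m ** 2.5

measured_stage = 0.30   # m
flume_slope = -4.0      # percent (entrance higher than exit)
print(h_flume_discharge(corrected_stage(measured_stage, flume_slope)))
```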
Rare earth elements in pore waters from Cabo Frio's western boundary upwelling system
NASA Astrophysics Data System (ADS)
Smoak, J. M.; Silva-Filho, E. V.; Rousseau, T.; Albuquerque, A. L.; Caldeira, P. P.; Moreira, M.
2015-12-01
Rare earth elements (REE) are a group of reactive trace elements in aqueous media; they show coherent chemical behavior with subtle and gradual shifts in physicochemical properties, allowing their use as tracers of sources and processes. Uncertainties in their oceanic inputs and outputs still remain [Arsouze et al., 2009; Siddall et al., 2008; Tachikawa et al., 2003]. The water-sediment interface was identified early on as a relevant REE source, owing to the high distribution coefficient between sediments and pore waters [Elderfield and Sholkovitz, 1987] and to pore-water concentrations substantially higher than those of the water column [Abbott et al., 2015; Haley et al., 2004; Sholkovitz et al., 1989; Soyol-Erdene and Huh, 2013]. Here we present a cross-shelf transect of four short pore-water REE profiles on a 680 km2 mud bank located in the region of Cabo Frio, Brazil. This study reveals similar trends at the four sites: a REE production zone, reflected by a concentration maximum at the top of the sediment, evolving with depth toward a REE consumption zone, reflected by a minimum in REE concentrations. PAAS-normalized patterns show 1) a progressive depletion in LREE with depth, with HREE/LREE ratios between 1.1 and 1.6 in the first 2 centimeters evolving gradually to ratios between 2.8 and 4.7 beyond 7 cm depth, and 2) a sharp gradient in the negative Ce anomaly, with Ce/Ce* values reaching 0.3. With maximum Nd concentrations between 780 and 1200 pmol kg-1, and considering that seawater Nd concentrations in Brazilian shelf bottom waters are between 24 and 50 pmol kg-1, we apply Fick's first law of diffusion and estimate that 340 +/- 90 nmol m-2 yr-1 of Nd is released from the Cabo Frio mud bank. This flux is of the same order of magnitude as recent estimates by [Abbott et al., 2015] on the Oregon margin slope. Unraveling the processes responsible for the REE production zone will help to refine global REE flux estimates.
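The flux estimate follows directly from Fick's first law applied across the sediment-water interface. The sketch below shows the arithmetic with illustrative values for porosity, sediment diffusivity and the concentration step; it is not the study's exact calculation.

```python
# Fick's first law across the sediment-water interface: J = -phi * D_sed * dC/dz.
# Porosity, diffusivity and the concentration step are illustrative values.
SECONDS_PER_YEAR = 3.156e7
RHO_SW = 1025.0                 # kg m-3, to convert pmol/kg to mol/m3

def nd_benthic_flux(c_pore_pmol_kg, c_bottom_pmol_kg, dz_m,
                    porosity=0.8, d_sed_m2_s=3e-10):
    """Diffusive Nd flux out of the sediment in nmol m-2 yr-1 (positive = release)."""
    dc = (c_pore_pmol_kg - c_bottom_pmol_kg) * 1e-12 * RHO_SW   # mol m-3
    j = porosity * d_sed_m2_s * dc / dz_m                       # mol m-2 s-1
    return j * 1e9 * SECONDS_PER_YEAR                           # nmol m-2 yr-1

# e.g. ~1000 pmol/kg in the uppermost pore water vs ~40 pmol/kg in bottom water,
# taken across the top two centimetres of sediment
print(nd_benthic_flux(1000.0, 40.0, 0.02))   # a few hundred nmol m-2 yr-1
```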
Wronski, Matt; Yeboah, Collins
2015-01-01
Lens dose is a concern during the treatment of facial lesions with anterior electron beams. Lead shielding is routinely employed to reduce lens dose and minimize late complications. The purpose of this work is twofold: 1) to measure dose profiles under large‐area lead shielding at the lens depth for clinical electron energies via film dosimetry; and 2) to assess the accuracy of the Pinnacle treatment planning system in calculating doses under lead shields. First, to simulate the clinical geometry, EBT3 film and 4 cm wide lead shields were incorporated into a Solid Water phantom. With the lead shield inside the phantom, the film was positioned at a depth of 0.7 cm below the lead, while a variable thickness of solid water, simulating bolus, was placed on top. This geometry was reproduced in Pinnacle to calculate dose profiles using the pencil beam electron algorithm. The measured and calculated dose profiles were normalized to the central‐axis dose maximum in a homogeneous phantom with no lead shielding. The resulting measured profiles, functions of bolus thickness and incident electron energy, can be used to estimate the lens dose under various clinical scenarios. These profiles showed a minimum lead margin of 0.5 cm beyond the lens boundary is required to shield the lens to ≤10% of the dose maximum. Comparisons with Pinnacle showed a consistent overestimation of dose under the lead shield with discrepancies of ∼25% occurring near the shield edge. This discrepancy was found to increase with electron energy and bolus thickness and decrease with distance from the lead edge. Thus, the Pinnacle electron algorithm is not recommended for estimating lens dose in this situation. The film measurements, however, allow for a reasonable estimate of lens dose from electron beams and for clinicians to assess the lead margin required to reduce the lens dose to an acceptable level. PACS number(s): 87.53.Bn, 87.53.Kn, 87.55.‐x, 87.55.D‐ PMID:27074448
Global Sea Surface Temperature: A Harmonized Multi-sensor Time-series from Satellite Observations
NASA Astrophysics Data System (ADS)
Merchant, C. J.
2017-12-01
This paper presents the methods used to obtain a new global sea surface temperature (SST) dataset spanning the early 1980s to the present, intended for use as a climate data record (CDR). The dataset provides skin SST (the fundamental measurement) and an estimate of the daily mean SST at depths compatible with drifting buoys (adjusting for skin and diurnal variability). The depth SST provided enables the CDR to be used with in situ records and centennial-scale SST reconstructions. The new SST timeseries is as independent as possible from in situ observations, and from 1995 onwards is harmonized to an independent satellite reference (namely, SSTs from the Advanced Along Track Scanning Radiometer (Advanced ATSR)). This maximizes the utility of our new estimates of variability and long-term trends in interrogating previous datasets tied to in situ observations. The new SSTs include full resolution (swath, level 2) data, single-sensor gridded data (level 3, 0.05 degree latitude-longitude grid) and a multi-sensor optimal analysis (level 4, same grid). All product levels are consistent. All SSTs have validated uncertainty estimates attached. The sensors used include all Advanced Very High Resolution Radiometers from NOAA-6 onwards and the ATSR series. AVHRR brightness temperatures (BTs) are calculated from counts using a new in-flight re-calibration for each sensor, ultimately linked through to the AATSR BT calibration by a new harmonization technique. Artefacts in AVHRR BTs linked to varying instrument temperature, orbital regime and solar contamination are significantly reduced. These improvements in the AVHRR BTs (level 1) translate into improved cloud detection and SST (level 2). For cloud detection, we use a Bayesian approach for all sensors. For the ATSRs, SSTs are derived with sufficient accuracy and sensitivity using dual-view coefficients. This is not the case for single-view AVHRR observations, for which a physically based retrieval is employed, using a hybrid maximum a posteriori / maximum likelihood retrieval, which optimises retrieval uncertainty and SST sensitivity for climate applications. Validation results will be presented along with examples of the variability and trends in SST evident in the dataset.
Rock spatial densities on the rims of the Tycho secondary craters in Mare Nectaris
NASA Astrophysics Data System (ADS)
Basilevsky, A. T.; Michael, G. G.; Kozlova, N. A.
2018-04-01
The aim of this work is to check whether the technique of estimation of age of small lunar craters based on spatial density of rock boulders on their rims described in Basilevsky et al. (2013, 2015b) and Li et al. (2017) for the craters < 1 km in diameter is applicable to the larger craters. The work presents the rock counts on the rims of four craters having diameters 1000, 1100, 1240 and 1400 m located in Mare Nectaris. These craters are secondaries of the primary crater Tycho, whose age was found to be 109 ± 4 Ma (Stoffler and Ryder, 2001) so this may be taken as the age of the four craters, too. Using the dependence of the rock spatial densities at the crater rims on the crater age for the case of mare craters (Li et al., 2017) our measured rock densities correspond to ages from ∼100 to 130 Ma. These estimates are reasonably close to the given age of the primary crater Tycho. This, in turn, suggests that this technique of crater age estimation is applicable to craters up to ∼1.5 km in diameter. For the four considered craters we also measured their depth/diameter ratios and the maximum angles of the crater inner slopes. For the considered craters it was found that with increasing crater diameter, the depth/diameter ratios and maximum angles of internal slopes increase, but the values of these parameters for specific craters may deviate significantly from the general trends. The deviations probably result from some dissimilarities in the primary crater geometries, that may be due to crater to crater differences in characteristics of impactors (e.g., in their bulk densities) and/or differences in the mechanical properties of the target. It may be possible to find secondaries of crater Tycho in the South pole area and, if so, they may be studied to check the specifics and rates of the rock boulder degradation in the lunar polar environment.
Size matters: Perceived depth magnitude varies with stimulus height.
Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S
2016-06-01
Both the upper and lower disparity limits for stereopsis vary with the size of the targets. Recently, Tsirlin, Wilcox, and Allison (2012) suggested that perceived depth magnitude from stereopsis might also depend on the vertical extent of a stimulus. To test this hypothesis we compared apparent depth in small discs to depth in long bars with equivalent width and disparity. We used three estimation techniques: a virtual ruler, a touch-sensor (for haptic estimates) and a disparity probe. We found that depth estimates were significantly larger for the bar stimuli than for the disc stimuli for all methods of estimation and different configurations. In a second experiment, we measured perceived depth as a function of the height of the bar and the radius of the disc. Perceived depth increased with increasing bar height and disc radius suggesting that disparity is integrated along the vertical edges. We discuss size-disparity correlation and inter-neural excitatory connections as potential mechanisms that could account for these results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Uncertainty in cloud optical depth estimates made from satellite radiance measurements
NASA Technical Reports Server (NTRS)
Pincus, Robert; Szczodrak, Malgorzata; Gu, Jiujing; Austin, Philip
1995-01-01
The uncertainty in optical depths retrieved from satellite measurements of visible wavelength radiance at the top of the atmosphere is quantified. Techniques are briefly reviewed for the estimation of optical depth from measurements of radiance, and it is noted that these estimates are always more uncertain at greater optical depths and larger solar zenith angles. The lack of radiometric calibration for visible wavelength imagers on operational satellites dominates the uncertainty in retrievals of optical depth. This is true for both single-pixel retrievals and for statistics calculated from a population of individual retrievals. For individual estimates or small samples, sensor discretization can also be significant, but the sensitivity of the retrieval to the specification of the model atmosphere is less important. The relative uncertainty in calibration affects the accuracy with which optical depth distributions measured by different sensors may be quantitatively compared, while the absolute calibration uncertainty, acting through the nonlinear mapping of radiance to optical depth, limits the degree to which distributions measured by the same sensor may be distinguished.
Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model
NASA Astrophysics Data System (ADS)
Prakash, Shashi; Kumar, Nitish; Kumar, Subrata
2016-09-01
CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (poly-methyl-methacrylate). PMMA directly vaporizes when subjected to a high-intensity focused CO2 laser beam. This process results in a clean cut and an acceptable surface finish on the microchannel walls. Overall, the CO2 laser microchanneling process is cost effective and easy to implement. While fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. A few analytical models are available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. A number of variants of transparent PMMA are available in the market with different values of thermophysical properties. Therefore, to apply such analytical models, the values of these thermophysical properties must be known exactly. Although the values of the laser beam parameters are readily available, extensive experiments are required to determine the thermophysical properties of PMMA. The unavailability of exact values of these properties restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty with different power and scanning speed has been predicted. The relative impact of each thermophysical property has been determined using sensitivity analysis.
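The Monte Carlo propagation itself is straightforward once a depth model is chosen: sample the uncertain thermophysical properties, evaluate the model, and summarize the spread of predicted depths. The sketch below uses a simplified energy-balance depth model and assumed property distributions purely for illustration; it is not the paper's model or its property values.

```python
# Monte Carlo propagation of property uncertainty through a simplified
# energy-balance depth model.  Model form and all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def channel_depth(P_w, v_m_s, w_m, rho, c_p, dT, L_v):
    """Simplified energy-balance estimate of maximum channel depth (m):
    laser power divided by the power needed to vaporise a channel of width w
    moving at speed v through material of the given properties."""
    return P_w / (w_m * v_m_s * rho * (c_p * dT + L_v))

n = 100_000
rho = rng.normal(1180.0, 20.0, n)        # kg m-3
c_p = rng.normal(1466.0, 60.0, n)        # J kg-1 K-1
dT  = rng.normal(340.0, 15.0, n)         # K, rise to vaporisation
L_v = rng.normal(1.0e6, 1.0e5, n)        # J kg-1, effective latent heat

d = channel_depth(P_w=10.0, v_m_s=0.1, w_m=200e-6,
                  rho=rho, c_p=c_p, dT=dT, L_v=L_v)
print(d.mean() * 1e6, d.std() * 1e6)     # mean depth and 1-sigma in micrometres
```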
NASA Astrophysics Data System (ADS)
Gischig, Valentin; Broccardo, Marco; Amann, Florian; Jalali, Mohammadreza; Esposito, Simona; Krietsch, Hannes; Doetsch, Joseph; Madonna, Claudio; Wiemer, Stefan; Loew, Simon; Giardini, Domenico
2016-04-01
A decameter in-situ stimulation experiment is currently being performed at the Grimsel Test Site in Switzerland by the Swiss Competence Center for Energy Research - Supply of Electricity (SCCER-SoE). The underground research laboratory lies in crystalline rock at a depth of 480 m and exhibits well-documented geology that presents some analogies with the crystalline basement targeted for the exploitation of deep geothermal energy resources in Switzerland. The goal is to perform a series of stimulation experiments spanning from hydraulic fracturing to controlled fault-slip experiments in an experimental volume approximately 30 m in diameter. The experiments will contribute to a better understanding of hydro-mechanical phenomena and induced seismicity associated with high-pressure fluid injections. Comprehensive monitoring during stimulation will include observation of injection rate and pressure, pressure propagation in the reservoir, permeability enhancement, 3D dislocation along the faults, rock mass deformation near the fault zone, as well as micro-seismicity. The experimental volume is surrounded by other in-situ experiments (at 50 to 500 m distance) and by infrastructure of the local hydropower company (at ~100 m to several kilometres distance). Although it is generally agreed among stakeholders related to the experiments that levels of induced seismicity may be low given the small total injection volumes of less than 1 m3, detailed analysis of the potential impact of the stimulation on other experiments and surrounding infrastructure is essential to ensure operational safety. In this contribution, we present a procedure for estimating induced seismic hazard in an experimental situation that is atypical for injection-induced seismicity in terms of injection volumes, injection depths and proximity to affected objects. Both deterministic and probabilistic methods are employed to estimate the maximum possible and the maximum expected induced earthquake magnitudes. Deterministic methods are based on McGarr's upper limit for the maximum induced seismic moment. Probabilistic methods rely on estimates of Shapiro's seismogenic index and on seismicity rates from past stimulation experiments that are scaled to the injection volumes of interest. Using rate-and-state frictional modelling coupled to a hydro-mechanical fracture flow model, we demonstrate that large uncontrolled rupture events are unlikely to occur and that deterministic upper limits may be sufficiently conservative. The proposed workflow can be applied to similar injection experiments for which hazard to nearby infrastructure may limit experimental design.
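For the deterministic bound, McGarr's upper limit ties the maximum induced seismic moment to the injected volume through the rock's shear modulus, M0,max ≈ GΔV. A minimal sketch of that estimate, with a typical crystalline-rock modulus, is given below; it is an illustration, not the project's hazard workflow.

```python
# McGarr's deterministic upper bound on induced seismic moment, M0_max ~ G * dV,
# converted to moment magnitude.  The shear modulus is a typical crustal value.
import math

def mcgarr_max_magnitude(injected_volume_m3, shear_modulus_pa=3.0e10):
    m0_max = shear_modulus_pa * injected_volume_m3           # N m
    return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)          # moment magnitude

# Total injected volumes at Grimsel are below ~1 m3
print(mcgarr_max_magnitude(1.0))    # ~ Mw 0.9
```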
Scour assessments and sediment-transport simulation for selected bridge sites in South Dakota
Niehus, C.A.
1996-01-01
Scour at bridges is a major concern in the design of new bridges and in the evaluation of structural stability of existing bridges. Equations for estimating pier, contraction, and abutment scour have been developed from numerous laboratory studies using sand-bed flumes, but little verification of these scour equations has been done for actual rivers with various bed conditions. This report describes the results of reconnaissance and detailed scour assessments and a sediment-transport simulation for selected bridge sites in South Dakota. Reconnaissance scour assessments were done during 1991 for 32 bridge sites. The reconnaissance assessments for each bridge site included compilation of general and structural data, field inspection to record and measure pertinent scour variables, and evaluation of scour susceptibility using various scour-index forms. Observed pier scour at the 32 sites ranged from 0 to 7 feet, observed contraction scour ranged from 0 to 4 feet, and observed abutment scour ranged from 0 to 10 feet. Thirteen bridge sites having high potential for scour were selected for detailed assessments, which were accomplished during 1992-95. These detailed assessments included prediction of scour depths for 2-, 100-, and 500-year flows using selected published scour equations; measurement of scour during high flows; comparison of measured and predicted scour; and identification of which scour equations best predict actual scour. The medians of predicted pier-scour depth at each of the 13 bridge sites (using 13 scour equations) ranged from 2.4 to 6.8 feet for the 2-year flows and ranged from 3.4 to 13.3 feet for the 500-year flows. The maximum pier scour measured during high flows ranged from 0 to 8.5 feet. Statistical comparison (Spearman rank correlation) of predicted pier-scour depths (using flow data collected during scour measurements) indicates that the Laursen, Shen (method b), Colorado State University, and Blench (method b) equations correlate closer with measured scour than do the other prediction equations. The predicted pier-scour depths using the Varzeliotis and Carstens equations have weak statistical relations with measured scour depths. Medians of predicted pier-scour depth from the Shen (method a), Chitale, Bata, and Carstens equations are statistically equal to the median of measured pier-scour depths, based on the Wilcoxon signed-ranks test. The medians of contraction scour depth at each of the 13 bridge sites (using one equation) ranged from -0.1 foot for the 2-year flows to 23.2 feet for the 500-year flows. The maximum contraction scour measured during high flows ranged from 0 to 3.0 feet. The contraction-scour prediction equation substantially overestimated the scour depths in almost all comparisons with the measured scour depths. A significant reason for this discrepancy is the wide flood plain (as wide as 5,000 feet) at most of the bridge sites that were investigated. One possible way to reduce this effect for bridge design is to make a decision on what is the effective approach section and thereby limit the size of the bridge flow approach width. The medians of abutment-scour depth at each of the 13 bridge sites (using five equations) ranged from 8.2 to 16.5 feet for the 2-year flows and ranged from 5.7 to 41 feet for the 500-year flows. The maximum abutment scour measured during high flows ranged from 0 to 4.0 feet.
The abutment-scour prediction equations also substantially overestimated the scour depths in almost all comparisons with the measured scour depths. The Liu and others (live bed) equation predicted abutment-scour depths substantially lower than the other four abutment-scour equations and closer to the actual measured scour depths. However, this equation at times predicted greater scour depths for 2-year flows than it did for 500-year flows, making its use highly questionable. Again, limiting the bridge flow approach width would produce more reasonable predicted abutment scour.
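As an illustration of the kind of predictor being compared, the sketch below implements the Colorado State University (HEC-18) pier-scour equation in its commonly cited form; the correction factors and the example hydraulic values are assumptions, not data from the report.

```python
# Colorado State University (HEC-18) pier-scour equation, commonly written as
#   ys = 2.0 * y1 * K1 * K2 * K3 * (a / y1)**0.65 * Fr**0.43
# with y1 the approach flow depth, a the pier width and Fr the approach Froude
# number.  Correction factors below are illustrative defaults.
import math

G = 32.2   # ft/s^2, to match the report's foot units

def csu_pier_scour(y1_ft, v1_ft_s, pier_width_ft, k1=1.0, k2=1.0, k3=1.1):
    froude = v1_ft_s / math.sqrt(G * y1_ft)
    return (2.0 * y1_ft * k1 * k2 * k3
            * (pier_width_ft / y1_ft) ** 0.65 * froude ** 0.43)

# e.g. 8 ft of approach depth, 6 ft/s approach velocity, 3 ft wide pier
print(csu_pier_scour(8.0, 6.0, 3.0))   # predicted scour depth, ~6 ft
```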
Updates to Enhanced Geothermal System Resource Potential Estimate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Augustine, Chad
The deep EGS electricity generation resource potential estimate maintained by the National Renewable Energy Laboratory was updated using the most recent temperature-at-depth maps available from the Southern Methodist University Geothermal Laboratory. The previous study dates back to 2011 and was developed using the original temperature-at-depth maps showcased in the 2006 MIT Future of Geothermal Energy report. The methodology used to update the deep EGS resource potential is the same as in the previous study and is summarized in the paper. The updated deep EGS resource potential estimate was calculated for depths between 3 and 7 km and is binned in 25 degrees C increments. The updated deep EGS electricity generation resource potential estimate is 4,349 GWe. A comparison of the estimates from the previous and updated studies shows a net increase of 117 GWe in the 3-7 km depth range, due mainly to increases in the underlying temperature-at-depth estimates from the updated maps.
Update to Enhanced Geothermal System Resource Potential Estimate: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Augustine, Chad
2016-10-01
The deep EGS electricity generation resource potential estimate maintained by the National Renewable Energy Laboratory was updated using the most recent temperature-at-depth maps available from the Southern Methodist University Geothermal Laboratory. The previous study dates back to 2011 and was developed using the original temperature-at-depth maps showcased in the 2006 MIT Future of Geothermal Energy report. The methodology used to update the deep EGS resource potential is the same as in the previous study and is summarized in the paper. The updated deep EGS resource potential estimate was calculated for depths between 3 and 7 km and is binned in 25 degrees C increments. The updated deep EGS electricity generation resource potential estimate is 4,349 GWe. A comparison of the estimates from the previous and updated studies shows a net increase of 117 GWe in the 3-7 km depth range, due mainly to increases in the underlying temperature-at-depth estimates from the updated maps.
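Resource-potential estimates of this kind rest on a volumetric heat-in-place calculation: thermal energy stored in rock above a rejection temperature, reduced by a recovery factor and a conversion efficiency, and expressed as sustained electric power over a plant lifetime. The sketch below illustrates that arithmetic with assumed parameter values; it is not the NREL methodology or its inputs.

```python
# Volumetric heat-in-place style estimate of EGS electric potential.
# All parameter values are illustrative assumptions.
def egs_potential_gwe(volume_km3, t_rock_c, t_reject_c=80.0,
                      rho_c_j_m3_k=2.55e6, recovery=0.02,
                      efficiency=0.12, lifetime_yr=30.0):
    volume_m3 = volume_km3 * 1e9
    heat_j = rho_c_j_m3_k * volume_m3 * (t_rock_c - t_reject_c)   # heat in place
    electric_j = heat_j * recovery * efficiency                   # recoverable electricity
    seconds = lifetime_yr * 3.156e7
    return electric_j / seconds / 1e9          # GWe sustained over the lifetime

# e.g. a 1000 km3 block of 200 degC rock somewhere in the 3-7 km depth range
print(egs_potential_gwe(1000.0, 200.0))        # a fraction of a GWe
```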
Structural properties of H-implanted InP crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bocchi, C.; Franzosi, P.; Lazzarini, L.
1993-07-01
H has been implanted in InP crystals at an energy E = 100 keV and at doses ranging from 1 × 10^13 to 5 × 10^16 cm^-2. The depth dependence of the elastic lattice strain has been investigated by high-resolution X-ray diffractometry. The implantation produces a lattice dilation. The strain increases with increasing depth, reaches a maximum at about 0.75 μm, and then decreases rapidly; moreover, the maximum strain is proportional to the dose. No extended crystal defects have been detected by transmission electron microscopy for doses below 1 × 10^16 cm^-2; at higher doses a buried amorphous layer 28 nm in thickness has been observed at the same depth where the strain is maximum. The thickness of the amorphous layer increases with further increases in dose and reaches a value of about 0.18 μm for a dose of 5 × 10^16 cm^-2.
Han, Sung-Ho; Farshchi-Heydari, Salman; Hall, David J
2010-01-20
A novel time-domain optical method to reconstruct the relative concentration, lifetime, and depth of a fluorescent inclusion is described. We establish an analytical method for the estimations of these parameters for a localized fluorescent object directly from the simple evaluations of continuous wave intensity, exponential decay, and temporal position of the maximum of the fluorescence temporal point-spread function. Since the more complex full inversion process is not involved, this method permits a robust and fast processing in exploring the properties of a fluorescent inclusion. This method is confirmed by in vitro and in vivo experiments. Copyright 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Stability of polar frosts in spherical bowl-shaped craters on the moon, Mercury, and Mars
NASA Technical Reports Server (NTRS)
Ingersoll, Andrew P.; Svitek, Tomas; Murray, Bruce C.
1992-01-01
A model of spherical bowl-shaped craters is described and applied to the moon, Mercury, and Mars. The maximum temperatures of permanently shadowed areas are calculated using estimates of the depth/diameter ratios of typical lunar bowl-shaped craters and assuming a saturated surface in which the craters are completely overlapping. For Mars, two cases are considered: water frost in radiative equilibrium and subliming CO2 frost in vapor equilibrium. Energy budgets and temperatures are used to determine whether a craterlike depression loses mass faster or slower than a flat horizontal surface. This reveals qualitatively whether the frost surface becomes rougher or smoother as it sublimes.
Calculating depths to shallow magnetic sources using aeromagnetic data from the Tucson Basin
Casto, Daniel W.
2001-01-01
Using gridded high-resolution aeromagnetic data, the performance of several automated 3-D depth-to-source methods was evaluated over shallow control sources based on how close their depth estimates came to the actual depths to the tops of the sources. For all three control sources, only the simple analytic signal method, the local wavenumber method applied to the vertical integral of the magnetic field, and the horizontal gradient method applied to the pseudo-gravity field provided median depth estimates that were close (-11% to +14% error) to the actual depths. Careful attention to data processing was required in order to calculate a sufficient number of depth estimates and to reduce the occurrence of false depth estimates. For example, to eliminate sampling bias, high-frequency noise and interference from deeper sources, it was necessary to filter the data before calculating derivative grids and subsequent depth estimates. To obtain smooth spatial derivative grids using finite differences, the data had to be gridded at intervals less than one percent of the anomaly wavelength. Before finding peak values in the derived signal grids, it was necessary to remove calculation noise by applying a low-pass filter in the grid-line directions and to re-grid at an interval that enabled the search window to encompass only the peaks of interest. Using the methods that worked best over the control sources, depth estimates over geologic sites of interest suggested the possible occurrence of volcanics nearly 170 meters beneath a city landfill. Also, a throw of around 2 kilometers was determined for a detachment fault that has a displacement of roughly 6 kilometers.
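The first step of the simple analytic-signal method is to form the 3-D analytic-signal amplitude from the gridded field, combining the two horizontal derivatives with a wavenumber-domain vertical derivative. The sketch below illustrates that step on a synthetic grid; it is not the processing chain used in the study.

```python
# 3-D analytic-signal amplitude |A| = sqrt(Tx^2 + Ty^2 + Tz^2) from a gridded
# magnetic field; the vertical derivative is computed in the wavenumber domain.
# Grid spacing and the synthetic input grid are illustrative.
import numpy as np

def analytic_signal_amplitude(field, dx, dy):
    """field : 2-D array of the magnetic anomaly on a regular grid (nT)."""
    ty, tx = np.gradient(field, dy, dx)                  # horizontal derivatives
    ky = np.fft.fftfreq(field.shape[0], dy) * 2 * np.pi
    kx = np.fft.fftfreq(field.shape[1], dx) * 2 * np.pi
    kk = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    tz = np.real(np.fft.ifft2(np.fft.fft2(field) * kk))  # vertical derivative
    return np.sqrt(tx ** 2 + ty ** 2 + tz ** 2)

# Synthetic example: a smooth anomaly centred on a 2 km x 2 km grid
x = np.arange(0, 2000.0, 10.0)
y = np.arange(0, 2000.0, 10.0)
xx, yy = np.meshgrid(x, y)
field = 100.0 * np.exp(-((xx - 1000) ** 2 + (yy - 1000) ** 2) / (2 * 150.0 ** 2))
amp = analytic_signal_amplitude(field, 10.0, 10.0)
print(amp.max())
```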
Luttrell, K.M.; Tong, X.; Sandwell, D.T.; Brooks, B.A.; Bevis, M.G.
2011-01-01
The great 27 February 2010 Mw 8.8 earthquake off the coast of southern Chile ruptured a ~600 km length of subduction zone. In this paper, we make two independent estimates of shear stress in the crust in the region of the Chile earthquake. First, we use a coseismic slip model constrained by geodetic observations from interferometric synthetic aperture radar (InSAR) and GPS to derive a spatially variable estimate of the change in static shear stress along the ruptured fault. Second, we use a static force balance model to constrain the crustal shear stress required to simultaneously support observed fore-arc topography and the stress orientation indicated by the earthquake focal mechanism. This includes the derivation of a semianalytic solution for the stress field exerted by surface and Moho topography loading the crust. We find that the deviatoric stress exerted by topography is minimized in the limit when the crust is considered an incompressible elastic solid, with a Poisson ratio of 0.5, and is independent of Young's modulus. This places a strict lower bound on the critical stress state maintained by the crust supporting plastically deformed accretionary wedge topography. We estimate the coseismic shear stress change from the Maule event ranged from -6 MPa (stress increase) to 17 MPa (stress drop), with a maximum depth-averaged crustal shear-stress drop of 4 MPa. We separately estimate that the plate-driving forces acting in the region, regardless of their exact mechanism, must contribute at least 27 MPa trench-perpendicular compression and 15 MPa trench-parallel compression. This corresponds to a depth-averaged shear stress of at least 7 MPa. The comparable magnitude of these two independent shear stress estimates is consistent with the interpretation that the section of the megathrust fault ruptured in the Maule earthquake is weak, with the seismic cycle relieving much of the total sustained shear stress in the crust. Copyright 2011 by the American Geophysical Union.
Bathymetric and hydraulic survey of the Matanuska River near Circle View Estates, Alaska
Conaway, Jeffrey S.
2008-01-01
An acoustic Doppler current profiler interfaced with a differentially corrected global positioning system was used to map bathymetry and multi-dimensional velocities on the Matanuska River near Circle View Estates, Alaska. Data were collected along four spur dikes and a bend in the river during a period of active bank erosion. These data were collected as part of a larger investigation into channel processes being conducted to aid land managers with development of a long-term management plan for land near the river. The banks and streambed are composed of readily erodible material and the braided channels frequently scour and migrate. Lateral channel migration has resulted in the periodic loss of properties and structures along the river for decades. For most of the survey, discharge of the Matanuska River was less than the 25th percentile of long-term streamflow. Despite this relatively low flow, measured water velocities were as high as 15 feet per second. The survey required a unique deployment of the acoustic Doppler current profiler in a tethered boat that was towed by a small inflatable raft. Data were collected along cross sections and longitudinal profiles. The bathymetric and velocity data document river conditions before the installation of an additional spur dike in 2006 and during a period of bank erosion. Data were collected along 1,700 feet of river in front of the spur dikes and along 1,500 feet of an eroding bank. Data collected at the nose of spur dikes 2, 3, and 4 were selected to quantify the flow hydraulics at the locations subject to the highest velocities. The measured velocities and flow depths were greatest at the nose of the downstream-most spur dike. The maximum point velocity at the spur dike nose was 13.3 feet per second and the maximum depth-averaged velocity was 11.6 feet per second. The maximum measured depth was 12.0 feet at the nose of spur dike 4 and velocities greater than 10 feet per second were measured to a depth of 10 feet. Data collected along an eroding bank provided details of the spatial distribution and variability in magnitude of velocities and flow depths while erosion was taking place. Erosion was concentrated in an area just downstream of the apex of a river bend. Measured velocities and flow depths were greater in the apex of the bend than in the area of maximum bank erosion. The maximum measured velocity was 12.9 feet per second at the apex and 11.2 feet per second in front of the eroding bank. The maximum measured depth was 10.2 feet at the apex and 5.2 feet in front of the eroding bank.
On the Subsurface Chlorophyll Maximum layer in the Black Sea Romanian shelf waters
NASA Astrophysics Data System (ADS)
Vasiliu, Dan; Gomoiu, Marian-Traian; Secrieru, Dan; Caraus, Ioan; Balan, Sorin
2013-04-01
By analyzing data recorded at 38 sampling stations (bottom depths between 16 and 200 m) covering the entire Romanian shelf, from the Danube's mouths to the southern part of the coast, the authors study the Subsurface Chlorophyll Maximum (SCM) from May 2009 to April 2011. Chlorophyll a (Chla), seawater temperature, salinity, sigma-T, dissolved oxygen, pH, and beam attenuation were measured over the water column with a CTD probe and averaged over 1-db intervals (about 1 m depth). Nutrients and phytoplankton qualitative and quantitative parameters were recorded from different depths according to water mass stratification (as prescribed in the research protocol of the cruise). In late winter/early spring, due to strong mixing of the water masses, an SCM was not observed in the Black Sea shelf waters. In spring (May), the Danube's increased discharges, characteristic of that period, strongly affected the vertical distribution of Chla, particularly in the area of the Danube's direct influence, where Chla reached its maximum in the surface layer (19.76 - 30.39 µg l-1). At the deeper sampling stations, a relatively weak SCM (Chla within 0.77 - 1.21 µg l-1) was observed, mainly at the lower limit of the euphotic zone (between 30 and 40 m depth). Here, the position and magnitude of the SCM appeared to be controlled mainly by light conditions; the seasonal thermocline was not yet well developed. In the warm season, once the stratification became stronger, the magnitude of the SCM increased (Chla between 1.45 and 2.12 µg l-1). The SCM was well pronounced below the upper boundary of the thermocline, at depths between 20 and 25 m, where dissolved oxygen concentrations also reached their highest values (>10 mg l-1 O2), suggesting strong photosynthesis where both nutrient and light conditions are favorable. A particular situation was found in July 2010, when abnormally high discharges from the Danube led to a well pronounced SCM (3.23 - 6.87 µg l-1 Chla) above the thermocline (within 8 - 12 m depth) in the shallow waters, nutrients not being limiting factors. Keywords: Chlorophyll a, Subsurface Chlorophyll Maximum layer, Black Sea, Danube
Takada, Kenta; Sato, Tatsuhiko; Kumada, Hiroaki; Koketsu, Junichi; Takei, Hideyuki; Sakurai, Hideyuki; Sakae, Takeji
2018-01-01
The microdosimetric kinetic model (MKM) is widely used for estimating relative biological effectiveness (RBE)-weighted doses for various radiotherapies because it can determine the surviving fraction of irradiated cells based on only the lineal energy distribution, and it is independent of the radiation type and ion species. However, the applicability of the method to proton therapy has not yet been investigated thoroughly. In this study, we validated the RBE-weighted dose calculated by the MKM in tandem with the Monte Carlo code PHITS for proton therapy by considering the complete simulation geometry of the clinical proton beam line. The physical dose, lineal energy distribution, and RBE-weighted dose for a 155 MeV mono-energetic and spread-out Bragg peak (SOBP) beam of 60 mm width were evaluated. In estimating the physical dose, the calculated depth dose distribution by irradiating the mono-energetic beam using PHITS was consistent with the data measured by a diode detector. A maximum difference of 3.1% in the depth distribution was observed for the SOBP beam. In the RBE-weighted dose validation, the calculated lineal energy distributions generally agreed well with the published measurement data. The calculated and measured RBE-weighted doses were in excellent agreement, except at the Bragg peak region of the mono-energetic beam, where the calculation overestimated the measured data by ~15%. This research has provided a computational microdosimetric approach based on a combination of PHITS and MKM for typical clinical proton beams. The developed RBE-estimator function has potential application in the treatment planning system for various radiotherapies. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
Sato, Tatsuhiko; Kumada, Hiroaki; Koketsu, Junichi; Takei, Hideyuki; Sakurai, Hideyuki; Sakae, Takeji
2018-01-01
Abstract The microdosimetric kinetic model (MKM) is widely used for estimating relative biological effectiveness (RBE)-weighted doses for various radiotherapies because it can determine the surviving fraction of irradiated cells based on only the lineal energy distribution, and it is independent of the radiation type and ion species. However, the applicability of the method to proton therapy has not yet been investigated thoroughly. In this study, we validated the RBE-weighted dose calculated by the MKM in tandem with the Monte Carlo code PHITS for proton therapy by considering the complete simulation geometry of the clinical proton beam line. The physical dose, lineal energy distribution, and RBE-weighted dose for a 155 MeV mono-energetic and spread-out Bragg peak (SOBP) beam of 60 mm width were evaluated. In estimating the physical dose, the calculated depth dose distribution by irradiating the mono-energetic beam using PHITS was consistent with the data measured by a diode detector. A maximum difference of 3.1% in the depth distribution was observed for the SOBP beam. In the RBE-weighted dose validation, the calculated lineal energy distributions generally agreed well with the published measurement data. The calculated and measured RBE-weighted doses were in excellent agreement, except at the Bragg peak region of the mono-energetic beam, where the calculation overestimated the measured data by ~15%. This research has provided a computational microdosimetric approach based on a combination of PHITS and MKM for typical clinical proton beams. The developed RBE-estimator function has potential application in the treatment planning system for various radiotherapies. PMID:29087492
Wavelet extractor: A Bayesian well-tie and wavelet extraction program
NASA Astrophysics Data System (ADS)
Gunning, James; Glinsky, Michael E.
2006-06-01
We introduce a new open-source toolkit for the well-tie or wavelet extraction problem of estimating seismic wavelets from seismic data, time-to-depth information, and well-log suites. The wavelet extraction model is formulated as a Bayesian inverse problem, and the software will simultaneously estimate wavelet coefficients, other parameters associated with uncertainty in the time-to-depth mapping, positioning errors in the seismic imaging, and useful amplitude-variation-with-offset (AVO) related parameters in multi-stack extractions. It is capable of multi-well, multi-stack extractions, and uses continuous seismic data-cube interpolation to cope with the problem of arbitrary well paths. Velocity constraints in the form of checkshot data, interpreted markers, and sonic logs are integrated in a natural way. The Bayesian formulation allows computation of full posterior uncertainties of the model parameters, and the important problem of the uncertain wavelet span is addressed using a multi-model posterior developed from Bayesian model selection theory. The wavelet extraction tool is distributed as part of the Delivery seismic inversion toolkit. A simple log and seismic viewing tool is included in the distribution. The code is written in Java, and thus platform independent, but the Seismic Unix (SU) data model makes the inversion particularly suited to Unix/Linux environments. It is a natural companion piece of software to Delivery, having the capacity to produce maximum likelihood wavelet and noise estimates, but will also be of significant utility to practitioners wanting to produce wavelet estimates for other inversion codes or purposes. The generation of full parameter uncertainties is a crucial function for workers wishing to investigate questions of wavelet stability before proceeding to more advanced inversion studies.
NASA Astrophysics Data System (ADS)
Takagi, R.; Okada, T.; Yoshida, K.; Townend, J.; Boese, C. M.; Baratin, L. M.; Chamberlain, C. J.; Savage, M. K.
2016-12-01
We estimate shear wave velocity anisotropy in the shallow crust near the Alpine fault using seismic interferometry of borehole vertical arrays. We utilized four borehole observations: two sensors are deployed in two boreholes of the Deep Fault Drilling Project on the hanging-wall side, and the other two sites are located on the footwall side. Surface sensors deployed just above each borehole are used to form vertical arrays. Cross-correlating rotated horizontal seismograms observed by the borehole and surface sensors, we extracted polarized shear waves propagating from the bottom to the surface of each borehole. The extracted shear waves show a polarization-angle dependence of travel time, indicating shear wave anisotropy between the two sensors. On the hanging-wall side, the estimated fast shear wave directions are parallel to the Alpine fault. Strong anisotropy of 20% is observed at the site within 100 m of the Alpine fault. The hanging wall consists of mylonite and schist characterized by fault-parallel foliation. In addition, acoustic borehole imaging reveals fractures parallel to the Alpine fault. The fault-parallel anisotropy suggests that structural anisotropy is predominant in the hanging wall, demonstrating the consistency of the geological and seismological observations. On the footwall side, on the other hand, the angle between the fast direction and the strike of the Alpine fault is 33-40 degrees. Since the footwall is composed of granitoid that may not have a planar structure, stress-induced anisotropy is possibly predominant. The direction of maximum horizontal stress (SHmax) estimated from focal mechanisms of regional earthquakes is at 55 degrees to the Alpine fault. A possible interpretation of the difference between the fast direction and the SHmax direction is a depth rotation of the stress field near the Alpine fault. A similar depth rotation of the stress field is also observed in the SAFOD borehole at the San Andreas fault.
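The measurement idea can be sketched as follows: rotate the horizontal components of the borehole and surface sensors to a trial polarization azimuth, cross-correlate them to obtain the up-going shear-wave travel time at that azimuth, and read the fast direction and percent anisotropy from the azimuthal variation. The functions below are a minimal illustration with synthetic placeholder waveforms, not the authors' interferometry code.

```python
# Azimuth-dependent travel times from borehole-to-surface cross-correlation.
import numpy as np

def rotate(north, east, azimuth_deg):
    a = np.radians(azimuth_deg)
    return north * np.cos(a) + east * np.sin(a)

def travel_time(bh_n, bh_e, sf_n, sf_e, azimuth_deg, dt):
    """Lag (s) that maximises the borehole-to-surface cross-correlation."""
    b = rotate(bh_n, bh_e, azimuth_deg)
    s = rotate(sf_n, sf_e, azimuth_deg)
    cc = np.correlate(s, b, mode="full")
    lag = np.argmax(cc) - (len(b) - 1)
    return lag * dt

def anisotropy(bh_n, bh_e, sf_n, sf_e, dt, azimuths=np.arange(0, 180, 5)):
    times = np.array([travel_time(bh_n, bh_e, sf_n, sf_e, az, dt)
                      for az in azimuths])
    fast_az = azimuths[np.argmin(times)]
    pct = 100.0 * (times.max() - times.min()) / times.mean()
    return fast_az, pct

# Tiny synthetic demo: unsplit pulse at depth, split pulses at the surface
dt = 0.01
t = np.arange(0, 4, dt)
pulse = np.exp(-((t - 1.0) / 0.05) ** 2)
bh_n, bh_e = pulse, pulse
sf_n = np.exp(-((t - 1.30) / 0.05) ** 2)   # fast direction ~ north
sf_e = np.exp(-((t - 1.40) / 0.05) ** 2)   # slow direction ~ east
print(anisotropy(bh_n, bh_e, sf_n, sf_e, dt))
```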
NASA Astrophysics Data System (ADS)
Sassi, M. G.; Hoitink, A. J. F.; Vermeulen, B.; Hidayat
2011-06-01
Horizontal acoustic Doppler current profilers (H-ADCPs) can be employed to estimate river discharge based on water level measurements and flow velocity array data across a river transect. A new method is presented that accounts for the dip in velocity near the water surface, which is caused by sidewall effects that decrease with the width to depth ratio of a channel. A boundary layer model is introduced to convert single-depth velocity data from the H-ADCP to specific discharge. The parameters of the model include the local roughness length and a dip correction factor, which accounts for the sidewall effects. A regression model is employed to translate specific discharge to total discharge. The method was tested in the River Mahakam, representing a large river of complex bathymetry, where part of the flow is intrinsically three-dimensional and discharge rates exceed 8000 m3 s-1. Results from five moving boat ADCP campaigns covering separate semidiurnal tidal cycles are presented, three of which are used for calibration purposes, whereas the remaining two served for validation of the method. The dip correction factor showed a significant correlation with distance to the wall and bears a strong relation to secondary currents. The sidewall effects appeared to remain relatively constant throughout the tidal cycles under study. Bed roughness length is estimated at periods of maximum velocity, showing more variation at subtidal than at intratidal time scales. Intratidal variations were particularly obvious during bidirectional flow conditions, which occurred only during conditions of low river discharge. The new method was shown to outperform the widely used index velocity method by systematically reducing the relative error in the discharge estimates.
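A minimal sketch of the single-cell conversion is given below: a law-of-the-wall profile anchored to the H-ADCP velocity (scaled by a dip-correction factor) is integrated over the local depth to give specific discharge. The roughness length, dip factor and example numbers are illustrative, not the calibrated Mahakam values.

```python
# Law-of-the-wall conversion of a single-depth velocity to specific discharge.
import math

KAPPA = 0.41   # von Karman constant

def specific_discharge(u_measured, z_measured, depth, z0, alpha=1.0):
    """Specific discharge q (m2/s) at one H-ADCP cell.

    u_measured : velocity at the instrument level (m/s)
    z_measured : height of that level above the bed (m)
    depth      : local water depth (m)
    z0         : local roughness length (m)
    alpha      : dip-correction factor for near-surface sidewall effects
    """
    u_star = KAPPA * alpha * u_measured / math.log(z_measured / z0)
    u_mean = (u_star / KAPPA) * (math.log(depth / z0) - 1.0)   # depth average
    return u_mean * depth

# e.g. 0.9 m/s measured 3 m above the bed in 12 m of water, z0 = 1 mm
print(specific_discharge(0.9, 3.0, 12.0, 0.001))   # ~11 m2/s
```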
NASA Astrophysics Data System (ADS)
Prasad, Moyye Devi; Nagarajan, D.
2018-05-01
An axisymmetric dome 70 mm in diameter and 35 mm in depth was formed by the incremental sheet forming (ISF) process using varying proportions (25, 50 and 75%) of spiral (S) and helical (H) tool path combinations as a single tool path strategy, on 2 mm thick commercially pure aluminium sheets. A maximum forming depth of ~30 mm was observed on all the components, irrespective of the tool path combination employed. None of the components fractured for any of the tool path combinations used. The springback was also the same and uniform for all the tool path combinations employed, except for 75S25H, which showed slightly larger springback. The wall thickness decreased drastically up to a certain forming depth and then increased with further forming depth for all the tool path combinations. The maximum thinning occurred near the maximum wall angle region for all the components. The wall thickness improved significantly (by around 10-15%) near the maximum wall angle region for the 25S75H combination compared with the complete spiral and the other tool path strategies. This improvement in wall thickness is speculated to result mainly from the combined contribution of the simple shear and uniaxial dilatation deformation modes of the helical tool path strategy in the 25S75H combination. The increase in wall thickness will greatly help in reducing plastic instability and postponing early failure of the component.
Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries (Open Access)
2014-09-05
Raza, S. Hussain; et al.
The approach applies temporal segmentation using the method proposed by Grundmann et al., together with estimation and triangulation to estimate depth maps (see Figure 1).
Grinding damage assessment on four high-strength ceramics.
Canneto, Jean-Jacques; Cattani-Lorente, Maria; Durual, Stéphane; Wiskott, Anselm H W; Scherrer, Susanne S
2016-02-01
The purpose of this study was to assess surface and subsurface damage on 4 CAD-CAM high-strength ceramics after grinding with diamond disks of 75 μm, 54 μm and 18 μm and to estimate strength losses based on damage crack sizes. The materials tested were: 3Y-TZP (Lava), dense Al2O3 (In-Ceram AL), alumina glass-infiltrated (In-Ceram ALUMINA) and alumina-zirconia glass-infiltrated (In-Ceram ZIRCONIA). Rectangular specimens with 2 mirror polished orthogonal sides were bonded pairwise together prior to degrading the top polished surface with diamond disks of either 75 μm, 54 μm or 18 μm. The induced chip damage was evaluated on the bonded interface using SEM for chip depth measurements. Fracture mechanics were used to estimate fracture stresses based on average and maximum chip depths considering these as critical flaws subjected to tension and to calculate possible losses in strength compared to manufacturer's data. 3Y-TZP was hardly affected by grinding chip damage viewed on the bonded interface. Average chip depths were of 12.7±5.2 μm when grinding with 75 μm diamond inducing an estimated loss of 12% in strength compared to manufacturer's reported flexural strength values of 1100 MPa. Dense alumina showed elongated chip cracks and was suffering damage of an average chip depth of 48.2±16.3 μm after 75 μm grinding, representing an estimated loss in strength of 49%. Grinding with 54 μm was creating chips of 32.2±9.1 μm in average, representing a loss in strength of 23%. Alumina glass-infiltrated ceramic was exposed to chipping after 75 μm (mean chip size=62.4±19.3 μm) and 54 μm grinding (mean chip size=42.8±16.6 μm), with respectively 38% and 25% estimated loss in strength. Alumina-zirconia glass-infiltrated ceramic was mainly affected by 75 μm grinding damage with a chip average size of 56.8±15.1 μm, representing an estimated loss in strength of 34%. All four ceramics were not exposed to critical chipping at 18 μm diamond grinding. Reshaping a ceramic framework post sintering should be avoided with final diamond grits of 75 μm as a general rule. For alumina and the glass-infiltrated alumina, using a 54 μm diamond still induces chip damage which may affect strength. Removal of such damage from a reshaped framework is mandatory by using sequentially finer diamonds prior to the application of veneering ceramics especially in critical areas such as margins, connectors and inner surfaces. Copyright © 2015 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
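The strength-loss estimate treats a grinding chip of depth a as a surface crack under tension, so the fracture stress is sigma_f = K_Ic/(Y*sqrt(pi*a)) and the loss is read against the manufacturer's flexural strength. The sketch below shows that arithmetic; the fracture toughness and geometry factor are illustrative assumptions, not the study's inputs.

```python
# Fracture-mechanics estimate of strength loss from a grinding chip treated as
# a surface crack: sigma_f = K_Ic / (Y * sqrt(pi * a)).  K_Ic and Y illustrative.
import math

def fracture_stress_mpa(k_ic_mpa_sqrt_m, chip_depth_um, y_factor=1.12):
    a = chip_depth_um * 1e-6                       # crack depth in metres
    return k_ic_mpa_sqrt_m / (y_factor * math.sqrt(math.pi * a))

def strength_loss_pct(k_ic, chip_depth_um, nominal_strength_mpa):
    sigma_f = fracture_stress_mpa(k_ic, chip_depth_um)
    return 100.0 * (1.0 - min(sigma_f, nominal_strength_mpa) / nominal_strength_mpa)

# e.g. a 3Y-TZP-like ceramic (assumed K_Ic ~ 7 MPa*m^0.5, 1100 MPa nominal
# strength) with a 12.7 um average chip depth
print(strength_loss_pct(7.0, 12.7, 1100.0))   # ~10% loss for these assumptions
```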
Observations that Constrain the Scaling of Apparent Stress
NASA Astrophysics Data System (ADS)
McGarr, A.; Fletcher, J. B.
2002-12-01
Slip models developed for major earthquakes are composed of distributions of fault slip, rupture time, and slip velocity time function over the rupture surface, as divided into many smaller subfaults. Using a recently-developed technique, the seismic energy radiated from each subfault can be estimated from the time history of slip there and the average rupture velocity. Total seismic energies, calculated by summing contributions from all of the subfaults, agree reasonably well with independent estimates based on seismic energy flux in the far-field at regional or teleseismic distances. Two recent examples are the 1999 Izmit, Turkey and the 1999 Hector Mine, California earthquakes for which the NEIS teleseismic measurements of radiated energy agree fairly closely with seismic energy estimates from several different slip models, developed by others, for each of these events. Similar remarks apply to the 1989 Loma Prieta, 1992 Landers, and 1995 Kobe earthquakes. Apparent stresses calculated from these energy and moment results do not indicate any moment or magnitude dependence. The distributions of both fault slip and seismic energy radiation over the rupture surfaces of earthquakes are highly inhomogeneous. These results from slip models, combined with underground and seismic observations of slip for much smaller mining-induced earthquakes, can provide stronger constraint on the possible scaling of apparent stress with moment magnitude M or seismic moment. Slip models for major earthquakes in the range M6.2 to M7.4 show maximum slips ranging from 1.6 to 8 m. Mining-induced earthquakes at depths near 2000 m in South Africa are associated with peak slips of 0.2 to 0.37 m for events of M4.4 to M4.6. These maximum slips, whether derived from a slip model or directly observed underground in a deep gold mine, scale quite definitively as the cube root of the seismic moment. In contrast, peak slip rates (maximum subfault slip/rise time) appear to be scale invariant. A 1.25 m/s slip rate for one of the mining-induced earthquakes was estimated by dividing the corresponding slip observed at depth by the duration of the seismically-recorded slip pulse. Peak slip rates determined from the slip models for the major earthquakes are similar, ranging from about 0.8 to 4.8 m/s. Thus, for earthquakes in the moment magnitude range 4.4 to 7.4, the peak slip rate shows no dependence on M. Whatever variation there is in slip rate is probably due to factors related to the strength of the seismogenic rock mass such as depth. These observations support the idea that apparent stress does not vary systematically with seismic moment inasmuch as the apparent stress is determined by slip rate. Indeed, our finding that fault behavior of M4.4 earthquakes can be scaled readily to events of M greater than 7 with slips up to about 8 m suggests, quite persuasively, that the source physics for crustal earthquakes is much the same over this magnitude range. Interestingly, the mining-induced earthquakes involved brittle failure across very old pre-existing faults for which the cohesive strength is high and the pore pressure is zero, due to mining operations.
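Two simple relations underlie the argument: apparent stress, sigma_a = mu*E_s/M0, and the observed cube-root scaling of maximum slip with seismic moment. The sketch below illustrates both with assumed numbers; it is not the authors' slip-model analysis.

```python
# Apparent stress and cube-root slip scaling, with illustrative numbers.
MU = 3.0e10   # Pa, crustal rigidity

def apparent_stress_mpa(radiated_energy_j, seismic_moment_nm):
    return MU * radiated_energy_j / seismic_moment_nm / 1e6

def scaled_max_slip(slip_ref_m, m0_ref_nm, m0_nm):
    """Maximum slip scaled by the cube root of seismic moment."""
    return slip_ref_m * (m0_nm / m0_ref_nm) ** (1.0 / 3.0)

# e.g. scale a 0.3 m peak slip at Mw 4.5 up to Mw 7.4 and check an apparent
# stress for an assumed radiated energy
m0_45 = 10 ** (1.5 * 4.5 + 9.1)        # N m
m0_74 = 10 ** (1.5 * 7.4 + 9.1)        # N m
print(scaled_max_slip(0.3, m0_45, m0_74))   # several metres
print(apparent_stress_mpa(5.0e15, m0_74))   # ~1 MPa for this assumed energy
```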
Characterization of highly multiplexed monolithic PET / gamma camera detector modules
NASA Astrophysics Data System (ADS)
Pierce, L. A.; Pedemonte, S.; DeWitt, D.; MacDonald, L.; Hunter, W. C. J.; Van Leemput, K.; Miyaoka, R.
2018-04-01
PET detectors use signal multiplexing to reduce the total number of electronics channels needed to cover a given area. Using measured thin-beam calibration data, we tested a principal component based multiplexing scheme for scintillation detectors. The highly-multiplexed detector signal is no longer amenable to standard calibration methodologies. In this study we report results of a prototype multiplexing circuit, and present a new method for calibrating the detector module with multiplexed data. A 50 × 50 × 10 mm3 LYSO scintillation crystal was affixed to a position-sensitive photomultiplier tube with 8 × 8 position-outputs and one channel that is the sum of the other 64. The 65-channel signal was multiplexed in a resistive circuit, with 65:5 or 65:7 multiplexing. A 0.9 mm beam of 511 keV photons was scanned across the face of the crystal in a 1.52 mm grid pattern in order to characterize the detector response. New methods are developed to reject scattered events and perform depth-estimation to characterize the detector response of the calibration data. Photon interaction position estimation of the testing data was performed using a Gaussian Maximum Likelihood estimator and the resolution and scatter-rejection capabilities of the detector were analyzed. We found that using a 7-channel multiplexing scheme (65:7 compression ratio) with 1.67 mm depth bins had the best performance with a beam-contour of 1.2 mm FWHM (from the 0.9 mm beam) near the center of the crystal and 1.9 mm FWHM near the edge of the crystal. The positioned events followed the expected Beer–Lambert depth distribution. The proposed calibration and positioning method exhibited a scattered photon rejection rate that was a 55% improvement over the summed signal energy-windowing method.
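Gaussian maximum-likelihood positioning of the kind described above compares each multiplexed event vector against a calibrated mean response and covariance for every candidate (x, y, depth) bin and picks the bin with the highest likelihood. The sketch below is a generic illustration, not the authors' implementation; the channel count, bin layout, and calibration arrays are hypothetical.

```python
import numpy as np

def gaussian_ml_position(event, means, covs):
    """Return the index of the calibration bin maximizing the Gaussian
    log-likelihood of a multiplexed event vector.

    event : (n_channels,) measured multiplexed signal
    means : (n_bins, n_channels) calibrated mean response per bin
    covs  : (n_bins, n_channels, n_channels) calibrated covariance per bin
    """
    best_bin, best_ll = None, -np.inf
    for i, (mu, cov) in enumerate(zip(means, covs)):
        diff = event - mu
        sign, logdet = np.linalg.slogdet(cov)
        ll = -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)
        if ll > best_ll:
            best_bin, best_ll = i, ll
    return best_bin

# Hypothetical usage: 7 multiplexed channels, 32x32 (x, y) bins x 6 depth bins.
rng = np.random.default_rng(0)
means = rng.random((32 * 32 * 6, 7))
covs = np.tile(np.eye(7) * 0.01, (32 * 32 * 6, 1, 1))
event = means[1234] + rng.normal(0, 0.1, 7)
print(gaussian_ml_position(event, means, covs))
```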
Mandic, Radivoj; Knezevic, Olivera M; Mirkov, Dragan M; Jaric, Slobodan
2016-09-01
The aim of the present study was to explore the control strategy of maximum countermovement jumps with respect to the preferred countermovement depth preceding the concentric jump phase. Elite basketball players and physically active non-athletes were tested on jumps performed with and without an arm swing, while the countermovement depth was varied within an interval of almost 30 cm around its preferred value. The results consistently revealed a countermovement depth 5.1-11.2 cm smaller than the optimum one, and this difference was more prominent in non-athletes. In addition, although these differences had a marked effect on the recorded force and power output, they reduced jump height by only 0.1-1.2 cm. Therefore, the studied control strategy may not be based solely on the countermovement depth that maximizes jump height. In addition, the comparison of the two groups does not support the concept of a dual-task strategy based on a trade-off between maximizing jump height and jumping quickness, which should be more prominent in athletes who routinely need to jump quickly. Further research could explore whether the observed phenomenon is based on other optimization principles, such as the minimization of effort and energy expenditure. Nevertheless, future routine testing procedures should take into account that the control strategy of maximum countermovement jumps is not fully based on maximizing jump height, while the countermovement depth markedly confounds the relationship between jump height and the assessed force and power output of leg muscles.
Magnetotelluric Detection Thresholds as a Function of Leakage Plume Depth, TDS and Volume
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, X.; Buscheck, T. A.; Mansoor, K.
We conducted a synthetic magnetotelluric (MT) data analysis to establish a set of specific thresholds of plume depth, TDS concentration and volume for detection of brine and CO2 leakage from legacy wells into shallow aquifers, in support of Strategic Monitoring Subtask 4.1 of the US DOE National Risk Assessment Partnership (NRAP Phase II), which is to develop geophysical forward modeling tools. 900 synthetic MT data sets span 9 plume depths, 10 TDS concentrations and 10 plume volumes. The monitoring protocol consisted of 10 MT stations in a 2×5 grid laid out along the flow direction. We model the MT response in the audio frequency range of 1 Hz to 10 kHz with a 50 Ωm baseline resistivity and a maximum depth of 2000 m. Scatter plots show the MT detection thresholds for each combination of plume depth, TDS concentration and volume. Plumes with a large volume and high TDS located at a shallow depth produce a strong MT signal. We demonstrate that the MT method with surface-based sensors can detect a brine and CO2 plume so long as the plume depth, TDS concentration and volume are above the thresholds; however, it is unlikely to detect a plume at a depth greater than 1000 m when the change in TDS concentration is smaller than 10%. Simulated aquifer impact data based on the Kimberlina site provide a more realistic view of the leakage plume distribution than the rectangular synthetic plumes in this sensitivity study, and they will be used to estimate MT responses over simulated brine and CO2 plumes and to evaluate leakage detectability. Integration of the simulated aquifer impact data and the MT method into the NRAP DREAM tool may provide an optimized MT survey configuration for MT data collection. This study presents a viable approach for sensitivity studies of geophysical monitoring methods for leakage detection, and the results allow rapid assessment of leakage detectability.
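A quick way to see why deep, low-contrast plumes fall below the detection threshold is the MT skin depth, which sets the sounding depth at each frequency. A minimal sketch using the standard approximation follows; the baseline resistivity is the 50 Ωm value quoted above, and the frequencies span the audio band used in the study.

```python
import math

def skin_depth_m(resistivity_ohm_m, frequency_hz):
    """MT skin depth, delta ~= 503 * sqrt(rho / f) metres."""
    return 503.0 * math.sqrt(resistivity_ohm_m / frequency_hz)

# Audio-band frequencies from the study (1 Hz to 10 kHz), 50 ohm-m background.
for f in (1.0, 10.0, 100.0, 1_000.0, 10_000.0):
    print(f"{f:>8.0f} Hz -> skin depth ~ {skin_depth_m(50.0, f):7.0f} m")
```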
Crustal structure of north Peru from analysis of teleseismic receiver functions
NASA Astrophysics Data System (ADS)
Condori, Cristobal; França, George S.; Tavera, Hernando J.; Albuquerque, Diogo F.; Bishop, Brandon T.; Beck, Susan L.
2017-07-01
In this study, we present results from teleseismic receiver functions in order to investigate the crustal thickness and Vp/Vs ratio beneath northern Peru. A total of 981 receiver functions were analyzed from data recorded by 28 broadband seismic stations of the Peruvian permanent seismic network, the regional temporary SisNort network, and one CTBTO station. The Moho depth and average crustal Vp/Vs ratio were determined at each station using the H-k stacking technique to identify the arrival times of the primary P-to-S conversion and the crustal reverberations (PpPms, PpSs + PsPms). The results show that the Moho depth correlates well with the surface topography and varies significantly from west to east, with a shallow depth of around 25 km near the coast, a maximum depth of 55-60 km beneath the Andean Cordillera, and a depth of 35-40 km further to the east in the Amazonian Basin. The bulk crustal Vp/Vs ratio ranges between 1.60 and 1.88, with a mean of 1.75. Higher values between 1.75 and 1.88 are found beneath the Eastern and Western Cordilleras, consistent with a mafic composition in the lower crust. In contrast, values vary from 1.60 to 1.75 on the outer flanks of the Eastern and Western Cordilleras, indicating a felsic composition. We find a positive relationship between crustal thickness, Vp/Vs ratio, the Bouguer anomaly, and topography. These results are consistent with previous studies in other parts of Peru (central and southern regions) and provide the first crustal thickness estimates for the high cordillera in northern Peru.
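H-k stacking searches a grid of crustal thickness H and Vp/Vs ratio k for the combination that best aligns the Ps conversion and its crustal reverberations across all receiver functions. The sketch below is a generic implementation of that grid search, not the processing code used in the study; the assumed average Vp, the phase weights, and the grid limits are illustrative assumptions.

```python
import numpy as np

def hk_stack(rf_traces, times, ray_params, vp=6.3, w=(0.7, 0.2, 0.1),
             h_grid=np.arange(20.0, 70.0, 0.5), k_grid=np.arange(1.6, 1.9, 0.01)):
    """Return the H-k stack S(H, k) for a set of radial receiver functions.

    rf_traces  : (n_rf, n_samples) receiver function amplitudes
    times      : (n_samples,) sample times in seconds
    ray_params : (n_rf,) ray parameters in s/km (vp, H in km/s and km)
    """
    stack = np.zeros((h_grid.size, k_grid.size))
    for i, h in enumerate(h_grid):
        for j, k in enumerate(k_grid):
            vs = vp / k
            for rf, p in zip(rf_traces, ray_params):
                a = np.sqrt(1.0 / vs**2 - p**2)   # vertical slowness, S leg
                b = np.sqrt(1.0 / vp**2 - p**2)   # vertical slowness, P leg
                t_ps = h * (a - b)                # Ps
                t_pps = h * (a + b)               # PpPs
                t_pss = 2.0 * h * a               # PpSs + PsPs (negative polarity)
                amp = np.interp([t_ps, t_pps, t_pss], times, rf)
                stack[i, j] += w[0] * amp[0] + w[1] * amp[1] - w[2] * amp[2]
    return stack, h_grid, k_grid
```

The reported Moho depth and Vp/Vs at each station correspond to the (H, k) pair that maximizes this stack.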
The Effect of Finite Thickness Extent on Estimating Depth to Basement from Aeromagnetic Data
NASA Astrophysics Data System (ADS)
Blakely, R. J.; Salem, A.; Green, C. M.; Fairhead, D.; Ravat, D.
2014-12-01
Depth to basement estimation methods using various components of the spectral content of magnetic anomalies are in common use by geophysicists; examples are the Tilt-Depth and SPI methods. These methods use simple models having the base of the magnetic body at infinity. Recent publications have shown that this 'infinite depth' assumption causes underestimation of the depth to the top of sources, especially in areas where the bottom of the magnetic layer is shallow, as would occur in high heat-flow regions. This error has been demonstrated both in model studies and with real data having seismic or well control. To overcome the limitation of infinite depth, this contribution presents the mathematics for a finite-depth contact body in the Tilt-Depth and SPI methods and applies it to the central Red Sea, where the Curie isotherm and Moho are shallow. The difference in depth estimation between the infinite and finite contacts in such a case is significant and can exceed 200%.
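For the classical infinite-depth contact, the tilt angle is θ = arctan(VDR/THDR), and the depth to the top of the contact equals the horizontal distance from the θ = 0° contour to the ±45° contours (half the distance between the two 45° contours). A minimal sketch of the tilt computation on a gridded anomaly follows; the wavenumber-domain vertical derivative is a standard choice, and the finite-depth-extent correction introduced in this contribution is not reproduced here.

```python
import numpy as np

def tilt_angle(grid, dx, dy):
    """Tilt derivative of a gridded magnetic anomaly (radians).

    Horizontal derivatives by central differences; vertical derivative via
    the wavenumber-domain relation dT/dz = F^-1[ |k| * F(T) ].
    """
    dtdx = np.gradient(grid, dx, axis=1)
    dtdy = np.gradient(grid, dy, axis=0)
    thdr = np.hypot(dtdx, dtdy)                      # total horizontal derivative

    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    kk = np.hypot(*np.meshgrid(kx, ky))
    dtdz = np.real(np.fft.ifft2(np.fft.fft2(grid) * kk))  # vertical derivative

    return np.arctan2(dtdz, thdr)

# Infinite-contact depth estimate: half the horizontal distance between the
# -45 deg and +45 deg tilt contours straddling the contact.
```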
NASA Astrophysics Data System (ADS)
Wiebe, D. M.; Cox, D. T.; Chen, Y.; Weber, B. A.; Chen, Y.
2012-12-01
Building damage from a hypothetical Cascadia Subduction Zone tsunami was estimated using two methods applied at the community scale. The first method applies proposed guidelines for a new ASCE 7 standard to calculate the flow depth, flow velocity, and momentum flux from a known runup limit and an estimate of the total tsunami energy at the shoreline. This procedure is based on a potential energy budget, uses the energy grade line, and accounts for frictional losses. The second method utilized numerical model results from previous studies to determine maximum flow depth, velocity, and momentum flux throughout the inundation zone. The towns of Seaside and Cannon Beach, Oregon, were selected for analysis due to the availability of existing data from previously published work. Fragility curves, based on the hydrodynamic features of the tsunami flow (inundation depth, flow velocity, and momentum flux) and proposed design standards from ASCE 7, were used to estimate the probability of damage to structures located within the inundation zone. The analysis proceeded at the parcel level, using tax-lot data to identify construction type (wood, steel, and reinforced concrete) and age, which were used as performance measures when applying the fragility curves and design standards. The overall probability of damage to civil buildings was integrated for comparison between the two methods and also analyzed spatially for damage patterns, which could be controlled by local bathymetric features. The two methods were compared to assess the sensitivity of the results to the uncertainty in the input hydrodynamic conditions and fragility curves, and the potential advantages of each method are discussed. On-going work couples the building damage and vulnerability results to an economic input-output model that assesses trade between business sectors located inside and outside the inundation zone and is used to measure the impact on the regional economy. Results highlight business sectors and infrastructure critical to the economic recovery effort, which could be retrofitted or relocated to survive the event. The results of this study improve community understanding of the tsunami hazard to civil buildings.
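Fragility curves of the kind used here typically express the probability of reaching or exceeding a damage state as a lognormal function of an intensity measure such as flow depth. A minimal sketch follows; the median and log-standard-deviation values are purely illustrative assumptions, not the curves used in the study.

```python
import math

def damage_probability(depth_m, median_m, beta):
    """Lognormal fragility: P(damage >= state | depth) = Phi(ln(d/median)/beta)."""
    z = math.log(depth_m / median_m) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical curve for collapse of wood-frame construction (illustrative only).
for d in (0.5, 1.0, 2.0, 4.0):
    p = damage_probability(d, median_m=2.0, beta=0.6)
    print(f"depth {d:3.1f} m -> P(collapse) = {p:.2f}")
```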
NASA Astrophysics Data System (ADS)
El Hussain, I. W.
2017-12-01
The current study provides a site-specific deterministic seismic hazard assessment (DSHA) for the selected site of the Oman Museum-Across Ages in the Manah area, as part of a comprehensive geotechnical and seismological plan to design the facilities accordingly. The DSHA first defines the seismic sources that might influence the site and assesses the maximum possible earthquake magnitude for each of them. By assuming each of these maximum earthquakes occurs at the location closest to the site, the ground motion is predicted using empirical ground motion prediction equations. Local site effects are assessed by determining the fundamental frequency of the soft soil using the HVSR technique and by estimating amplification spectra from the soil characteristics (mainly shear-wave velocity). Shear-wave velocity was evaluated using the MASW technique. A maximum amplification value of 2.1 at a spectral period of 0.06 sec is observed at the ground surface, while the largest amplification value at the top of the conglomerate layer (at 5 m depth) is 1.6 at a spectral period of 0.04 sec. The maximum median 5% damped peak ground acceleration is found to be 0.263 g at a spectral period of 0.1 sec. Keywords: DSHA; Site Effects; HVSR; MASW; PGA; Spectral Period
Morgan, Ryan W; Kilbaugh, Todd J; Shoap, Wesley; Bratinov, George; Lin, Yuxi; Hsieh, Ting-Chang; Nadkarni, Vinay M; Berg, Robert A; Sutton, Robert M
2017-02-01
Most pediatric in-hospital cardiac arrests (IHCAs) occur in ICUs where invasive hemodynamic monitoring is frequently available. Titrating cardiopulmonary resuscitation (CPR) to the hemodynamic response of the individual improves survival in preclinical models of adult cardiac arrest. The objective of this study was to determine if titrating CPR to systolic blood pressure (SBP) and coronary perfusion pressure (CoPP) in a pediatric porcine model of asphyxia-associated ventricular fibrillation (VF) IHCA would improve survival as compared to traditional CPR. After 7min of asphyxia followed by VF, 4-week-old piglets received either hemodynamic-directed CPR (HD-CPR; compression depth titrated to SBP of 90mmHg and vasopressor administration to maintain CoPP ≥20mmHg); or Standard Care (compression depth 1/3 of the anterior-posterior chest diameter and epinephrine every 4min). All animals received CPR for 10min prior to the first defibrillation attempt. CPR was continued for a maximum of 20min. Protocolized intensive care was provided to all surviving animals for 4h. The primary outcome was 4-h survival. Survival rate was greater with HD-CPR (12/12) than Standard Care (6/10; p=0.03). CoPP during HD-CPR was higher compared to Standard Care (point estimate +8.1mmHg, CI95: 0.5-15.8mmHg; p=0.04). Chest compression depth was lower with HD-CPR than Standard Care (point estimate -14.0mm, CI95: -9.6 to -18.4mm; p<0.01). Prior to the first defibrillation attempt, more vasopressor doses were administered with HD-CPR vs. Standard Care (median 5 vs. 2; p<0.01). Hemodynamic-directed CPR improves short-term survival compared to standard depth-targeted CPR in a porcine model of pediatric asphyxia-associated VF IHCA. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Morgan, Ryan W.; Kilbaugh, Todd J.; Shoap, Wesley; Bratinov, George; Lin, Yuxi; Hsieh, Ting-Chang; Nadkarni, Vinay M.; Berg, Robert A.; Sutton, Robert M.
2016-01-01
Aim Most pediatric in-hospital cardiac arrests (IHCAs) occur in ICUs where invasive hemodynamic monitoring is frequently available. Titrating cardiopulmonary resuscitation (CPR) to the hemodynamic response of the individual improves survival in preclinical models of adult cardiac arrest. The objective of this study was to determine if titrating CPR to systolic blood pressure (SBP) and coronary perfusion pressure (CoPP) in a pediatric porcine model of asphyxia-associated ventricular fibrillation (VF) IHCA would improve survival as compared to traditional CPR. Methods After 7 minutes of asphyxia followed by VF, 4-week-old piglets received either Hemodynamic-Directed CPR (HD-CPR; compression depth titrated to SBP of 90mmHg and vasopressor administration to maintain CoPP ≥20mmHg); or Standard Care (compression depth 1/3 of the anterior-posterior chest diameter and epinephrine every 4 minutes). All animals received CPR for 10 minutes prior to the first defibrillation attempt. CPR was continued for a maximum of 20 minutes. Protocolized intensive care was provided to all surviving animals for 4 hours. The primary outcome was 4-hour survival. Results Survival rate was greater with HD-CPR (12/12) than Standard Care (6/10; p=0.03). CoPP during HD-CPR was higher compared to Standard Care (point estimate +8.1mmHg, CI95: 0.5–15.8mmHg; p=0.04). Chest compression depth was lower with HD-CPR than Standard Care (point estimate 14.0mm, CI95: 9.6–18.4mm; p<0.01). Prior to the first defibrillation attempt, more vasopressor doses were administered with HD-CPR versus Standard Care (median 5 versus 2; p<0.01). Conclusions Hemodynamic-directed CPR improves short-term survival compared to standard depth-targeted CPR in a porcine model of pediatric asphyxia-associated VF IHCA. PMID:27923692
Shah, Shaan H; Small, Kirstin M; Sinz, Nathan J; Higgins, Laurence D
2016-06-01
To evaluate for an association between the morphology of the lesser tuberosity and intertubercular groove and subscapularis tendon tears and biceps tendon pathology. Sixty-six patients with arthroscopically confirmed subscapularis tendon tears were compared with 59 demographically matched control patients who underwent magnetic resonance imaging or computed tomography arthrography examination of the shoulder. Measurements of the lesser tuberosity and intertubercular groove included maximum depth of the intertubercular groove, intertubercular groove depth at the midpoint of the glenoid, lesser tuberosity length, length from the top of the humeral head to the point of maximum depth of the intertubercular groove, length from the top of the humeral head to the top of the lesser tuberosity, and medial wall angle and depth. Patients with subscapularis tears showed a significantly decreased depth of the intertubercular groove at the mid glenoid (P = .01), shorter length of the lesser tuberosity (P = .002), and greater distance from the top of the humeral head to the top of the lesser tuberosity (P = .02). There was a trend toward a decreased medial wall angle (P = .07) and greater distance from the top of the humeral head to the point of maximum intertubercular groove depth (P = .06). Patients with biceps tendon pathology showed a significantly decreased depth of the intertubercular groove at the mid glenoid (P = .001), shorter length of the lesser tuberosity (P = .0003), greater distance from the top of the humeral head to the top of the lesser tuberosity (P = .01), and decreased medial wall angle (P = .01) and depth (P = .03). There are several morphologic factors related to the lesser tuberosity and intertubercular groove that are associated with both subscapularis tendon tears and biceps tendon pathology. Level III, case-control study. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Observations of internal waves in the Gulf of California by SEASAT SAR
NASA Technical Reports Server (NTRS)
Fu, L. L.; Holt, B.
1983-01-01
Internal waves, which are among the most commonly observed oceanic phenomena in SEASAT SAR imagery, are discussed. These waves are associated with the vertical displacements of constant water density surfaces in the ocean. Their amplitudes are maximum at depths where the water density changes most rapidly, usually at depths from 50 to 100 m, whereas the horizontal currents associated with these waves are maximum at the sea surface, where the resulting oscillatory currents modulate the sea surface roughness and produce the signatures detected by SAR.
Observations of internal waves in the Gulf of California by SEASAT SAR
NASA Astrophysics Data System (ADS)
Fu, L. L.; Holt, B.
1983-07-01
Internal waves, which are among the most commonly observed oceanic phenomena in SEASAT SAR imagery, are discussed. These waves are associated with the vertical displacements of constant water density surfaces in the ocean. Their amplitudes are maximum at depths where the water density changes most rapidly, usually at depths from 50 to 100 m, whereas the horizontal currents associated with these waves are maximum at the sea surface, where the resulting oscillatory currents modulate the sea surface roughness and produce the signatures detected by SAR.
NASA Technical Reports Server (NTRS)
Deutsch, Ariel N.; Head, James W.; Neumann, Gregory A.; Chabot, Nancy L.
2017-01-01
Earth-based radar observations revealed highly reflective deposits at the poles of Mercury [e.g., 1], which collocate with permanently shadowed regions (PSRs) detected from both imagery and altimetry by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft [e.g., 2]. MESSENGER also measured higher hydrogen concentrations at the north polar region, consistent with models for these deposits to be composed primarily of water ice [3]. Enigmatic to the characterization of ice deposits on Mercury is the thickness of these radar-bright features. A current minimum bound of several meters exists from the radar measurements, which show no drop in the radar cross section between 13- and 70-cm wavelength observations [4, 5]. A maximum thickness of 300 m is based on the lack of any statistically significant difference between the height of craters that host radar-bright deposits and those that do not [6]. More recently, this upper limit on the depth of a typical ice deposit has been lowered to approximately 150 m, in a study that found a mean excess thickness of 50 +/- 35 m of radar-bright deposits for 6 craters [7]. Refining such a constraint permits the derivation of a volumetric estimate of the total polar ice on Mercury, thus providing insight into possible sources of water ice on the planet. Here, we take a different approach to constrain the thickness of water-ice deposits. Permanently shadowed surfaces have been resolved in images acquired with the broadband filter on MESSENGER's wide-angle camera (WAC) using low levels of light scattered by crater walls and other topography [8]. These surfaces are not featureless and often host small craters (less than a few km in diameter). Here we utilize the presence of these small simple craters to constrain the thickness of the radar-bright ice deposits on Mercury. Specifically, we compare estimated depths made from depth-to-diameter ratios and depths from individual Mercury Laser Altimeter (MLA) tracks to constrain the fill of material of small craters that lie within the permanently shadowed, radar bright deposits of 7 north polar craters.
Norazlimi, Nor Atiqah; Ramli, Rosli
2015-01-01
A study was conducted to investigate the relationship between the physical morphology of shorebirds and water birds (i.e., Lesser adjutant (Leptoptilos javanicus), Common redshank (Tringa totanus), Whimbrel (Numenius phaeopus), and Little heron (Butorides striata)) and their foraging behavior in the mudflats of Selangor, Peninsular Malaysia, from August 2013 to July 2014, using direct observation techniques (binoculars and a video recorder). Actively foraging birds were watched, and their foraging activities were recorded for at least 30 seconds and up to a maximum of five minutes. A Spearman rank correlation highlighted significant relationships between bill size and foraging time (R = 0.443, p < 0.05), bill size and prey size (R = −0.052, p < 0.05), bill size and probing depth (R = 0.42, p = 0.003), and leg length and water/mud depth (R = 0.706, p < 0.005). A Kruskal-Wallis analysis showed a significant difference between species in the average estimated real probing depth (mm) (H = 15.96, p = 0.0012). Three foraging techniques were recorded: pause-travel, visual feeding, and tactile hunting. Thus, the morphological characteristics of birds do influence their foraging behavior and the strategies used when foraging. PMID:26345324
NASA Astrophysics Data System (ADS)
Arntsen, A. E.; Perovich, D. K.; Polashenski, C.; Stwertka, C.
2015-12-01
The amount of light that penetrates the Arctic sea ice cover impacts sea-ice mass balance as well as ecological processes in the upper ocean. The seasonally evolving macro and micro spatial variability of transmitted spectral irradiance observed in the Chukchi Sea from May 18 to June 17, 2014 can be primarily attributed to variations in snow depth, ice thickness, and bottom ice algae concentrations. This study characterizes the interactions among these dominant variables using observed optical properties at each sampling site. We employ a normalized difference index to compute estimates of Chlorophyll a concentrations and analyze the increased attenuation of incident irradiance due to absorption by biomass. On a kilometer spatial scale, the presence of bottom ice algae reduced the maximum transmitted irradiance by about 1.5 orders of magnitude when comparing floes of similar snow and ice thicknesses. On a meter spatial scale, the combined effects of disparities in the depth and distribution of the overlying snow cover along with algae concentrations caused maximum transmittances to vary between 0.0577 and 0.282 at a single site. Temporal variability was also observed as the average integrated transmitted photosynthetically active radiation increased by one order of magnitude to 3.4% for the last eight measurement days compared to the first nine. Results provide insight on how interrelated physical and ecological parameters of sea ice in varying time and space may impact new trends in Arctic sea ice extent and the progression of melt.
Zhao, Yan-jun; Zhang, Hua; Liu, Cheng-lin; Liu, Bao-kun; Ma, Li-chun; Wang, Li-cheng
2014-01-01
Climate changes within Cenozoic extreme climate events, such as the Paleocene–Eocene Thermal Maximum and the First Oligocene Glacial, provide good opportunities to estimate global climate trends relevant to the present and future. However, quantitative paleotemperature data for Cenozoic climatic reconstruction are still lacking, hindering a better understanding of past and future climate conditions. In this contribution, quantitative paleotemperatures were determined from fluid inclusion homogenization temperature (Th) data from continental halite of the first member of the Shahejie Formation (SF1; probably late Eocene to early Oligocene) in the Bohai Bay Basin, North China. The primary textures of the SF1 halite, typified by cumulate and chevron halite, suggest that the halite was deposited in shallow saline water and that halite Th can serve as a temperature proxy. In total, 121 Th measurements from primary, single-phase aqueous fluid inclusions at different depths were acquired by the cooling nucleation method. The results show that all Th range from 17.7°C to 50.7°C, with maximum homogenization temperatures (ThMAX) of 50.5°C at a depth of 3028.04 m and 50.7°C at 3188.61 m. Both ThMAX values presented here are significantly higher than the highest temperature recorded in this region since 1954 and agree with global temperature models for the year 2100 predicted by the Intergovernmental Panel on Climate Change. PMID:25047483
NASA Astrophysics Data System (ADS)
Jessop, David S.; Sol, Christian W. O.; Xiao, Long; Kindness, Stephen J.; Braeuninger-Weimer, Philipp; Lin, Hungyen; Griffiths, Jonathan P.; Ren, Yuan; Kamboj, Varun S.; Hofmann, Stephan; Zeitler, J. Axel; Beere, Harvey E.; Ritchie, David A.; Degl'Innocenti, Riccardo
2016-02-01
The growing interest in terahertz (THz) technologies in recent years has seen a wide range of demonstrated applications, spanning security screening, non-destructive testing, gas sensing, biomedical imaging, and communication. Communication with THz radiation offers the advantage of much higher bandwidths than currently available, in an unallocated spectrum. For this to be realized, optoelectronic components capable of manipulating THz radiation at high speeds and high signal-to-noise ratios must be developed. In this work we demonstrate a room-temperature, frequency-dependent optoelectronic amplitude modulator working at around 2 THz, which incorporates graphene as the tuning medium. The architecture of the modulator is an array of plasmonic dipole antennas surrounded by graphene. By electrostatically doping the graphene via a back-gate electrode, the reflection characteristics of the modulator are modified. The modulator is characterized electrically, to determine the graphene conductivity, and optically, by THz time-domain spectroscopy and a single-mode 2 THz quantum cascade laser, to determine the optical modulation depth and cut-off frequency. A maximum optical modulation depth of ~30% is estimated and is found to be most (least) sensitive when the electrical modulation is centered at the point of maximum (minimum) differential resistivity of the graphene. A 3 dB cut-off frequency > 5 MHz, limited only by the area of graphene on the device, is reported. The results agree well with theoretical calculations and numerical simulations and demonstrate the first steps towards ultra-fast, graphene-based THz optoelectronic devices.
Morphometry and mixing regime of a tropical lake: Lake Nova (Southeastern Brazil).
Gonçalves, Monica A; Garcia, Fábio C; Barroso, Gilberto F
2016-09-01
Lake Nova (15.5 km2) is the second largest lake in the Lower Doce River Valley (Southeastern Brazil). A better understanding of ecosystem structure and functioning requires knowledge of lake morphometry, given that lake basin form influences water column stratification. The present study aims to contribute to the understanding of the relationship between morphometry and mixing patterns of deep tropical lakes in Brazil. Water column profiles of temperature and dissolved oxygen were taken at four sampling sites along the lake's major axis during 2011, 2012 and 2013. The bathymetric survey was carried out in July 2011, along 131.7 km of hydrographic tracks yielding 51,692 depth points. Morphometric features of lake size and form describe a relatively deep, subrectangular, elongated basin with a maximum length of 15.7 km, a shoreline development index of 5.0, a volume of 0.23 km3, a volume development of 1.3, and maximum, mean and relative depths of 33.9 m, 14.7 m and 0.7%, respectively. The deep basin induces a monomictic pattern, with thermal stratification during the wet/warm season associated with anoxic bottom waters (1/3 of lake volume), and mixing during the dry and cool season. Based on in situ measurements of tributary river discharges, the theoretical retention time (RT) has been estimated at 13.4 years. The morphometry of Lake Nova promotes a long water RT and the warm monomictic mixing pattern, which is in accordance with other deep tropical lakes in Brazil.
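The form factors quoted above follow from standard morphometric definitions; the sketch below back-calculates the implied shoreline length and mean outflow from the reported values using the conventional formulas (the formulas are textbook definitions, not equations quoted from the paper).

```python
import math

# Values reported in the abstract
area_km2 = 15.5
volume_km3 = 0.23
z_max_m, z_mean_m = 33.9, 14.7
d_l = 5.0            # shoreline development index
rt_years = 13.4      # theoretical retention time

# Shoreline development: D_L = L / (2*sqrt(pi*A))  ->  implied shoreline length L
shoreline_km = d_l * 2.0 * math.sqrt(math.pi * area_km2)

# Volume development: D_V = 3 * mean depth / max depth
d_v = 3.0 * z_mean_m / z_max_m

# Relative depth (%): z_r = 50 * z_max * sqrt(pi) / sqrt(A), with A in m^2
z_r = 50.0 * z_max_m * math.sqrt(math.pi) / math.sqrt(area_km2 * 1e6)

# Retention time RT = V / Q  ->  implied mean outflow Q
q_m3_s = volume_km3 * 1e9 / (rt_years * 365.25 * 86400)

print(f"implied shoreline ~ {shoreline_km:.0f} km, D_V = {d_v:.2f}, "
      f"relative depth = {z_r:.2f} %, implied outflow ~ {q_m3_s:.2f} m3/s")
```

Running this reproduces the reported volume development (1.3) and relative depth (0.7%), which is a useful internal consistency check on the published morphometry.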
Sedimentary basins reconnaissance using the magnetic Tilt-Depth method
Salem, A.; Williams, S.; Samson, E.; Fairhead, D.; Ravat, D.; Blakely, R.J.
2010-01-01
We compute the depth to the top of magnetic basement using the Tilt-Depth method from the best available magnetic anomaly grids covering the continental USA and Australia. For the USA, the Tilt-Depth estimates were compared with sediment thicknesses based on drilling data and show a correlation of 0.86 between the datasets; if random data are used, the correlation drops to virtually zero. There is little to no lateral offset of the depth of basinal features, although there is a tendency for the Tilt-Depth results to be slightly shallower than the drill depths. We also applied the Tilt-Depth method to a local-scale, relatively high-resolution aeromagnetic survey over the Olympic Peninsula of Washington State. The Tilt-Depth method successfully identified a variety of important tectonic elements known from geological mapping. Of particular interest, the Tilt-Depth method illuminated deep (3 km) contacts within the non-magnetic sedimentary core of the Olympic Mountains, where magnetic anomalies are subdued and low in amplitude. For Australia, the Tilt-Depth estimates also give a good correlation with known areas of shallow basement and sedimentary basins. Our estimates of basement depth are not restricted to regional analysis but work equally well at the micro scale (basin scale), with depth estimates agreeing well with drill-hole and seismic data. We focus on the eastern Officer Basin as an example of basin-scale studies and find a good level of agreement with previously derived basin models. However, our study potentially reveals depocentres not previously mapped due to the sparse distribution of well data. This example thus shows the potential additional advantage of the method in geological interpretation. The success of this study suggests that the Tilt-Depth method is useful in estimating the depth to crystalline basement when appropriate-quality aeromagnetic anomaly data are used (i.e. line spacing on the order of or less than the expected depth to basement). The method is especially valuable as a reconnaissance tool in regions where drillhole or seismic information is scarce, lacking, or ambiguous.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-09
... "maximum water depth" and "average water depth" were rendered incorrect or impossible to read in several.... 1073; Scheerer and McDonald 2003, p. 69). The second paragraph under the heading "Food, Water, Air...
Arp, C.D.; Jones, Benjamin M.; Urban, F.E.; Grosse, G.
2011-01-01
Thermokarst lakes cover > 20% of the landscape throughout much of the Alaskan Arctic Coastal Plain (ACP), with shallow lakes freezing solid (grounded ice) and deeper lakes maintaining perennial liquid water (floating ice). Thus, lake depth relative to maximum ice thickness (1.5–2.0 m) represents an important threshold that impacts permafrost, aquatic habitat, and potentially geomorphic and hydrologic behaviour. We studied coupled hydrogeomorphic processes of 13 lakes representing a depth gradient across this threshold of maximum ice thickness by analysing remotely sensed, water quality, and climatic data over a 35-year period. Shoreline erosion rates due to permafrost degradation varied among the lakes, and long-term records of precipitation minus lake evaporation (P − EL) showed periods of full and nearly dry basins. Shorter-term (2004–2008) specific conductance data indicated a drying pattern across lakes of all depths, consistent with the long-term record for only the shallow lakes. Our analysis suggests that grounded-ice lakes are ice-free on average 37 days longer than floating-ice lakes, resulting in a longer period of evaporative loss and more frequent negative P − EL. These results suggest divergent hydrogeomorphic responses to a changing Arctic climate depending on the threshold created by water depth relative to maximum ice thickness in ACP lakes.
A novel approach to making microstructure measurements in the ice-covered Arctic Ocean.
NASA Astrophysics Data System (ADS)
Guthrie, J.; Morison, J.; Fer, I.
2014-12-01
As part of the 2014 Field Season of the North Pole Environmental Observatory, a 7-day microstructure experiment was performed. A Rockland Scientific Microrider with 2 FP07 fast response thermistors and 2 SBE-7 micro-conductivity probes was attached to a Seabird 911+ Conductivity-Temperature-Depth unit to allow for calibration of the microstructure probes against the highly accurate Seabird temperature and conductivity sensors. From a heated hut, the instrument package was lowered through a 0.75-m hole in the sea ice down to 350 m depth using a lightweight winch powered with a 3-phase, frequency-controlled motor that produced a smooth, controlled lowering speed of 25 cm s-1. Focusing on temperature and conductivity microstructure and using the special winch removed many of the complications involved with the use of free-fall microstructure profilers under the ice. The slow profiling speed permits calculation of Χ, the dissipation of thermal variance, without relying on fits to theoretical spectra to account for the unresolved variance. The dissipation rate of turbulent kinetic energy, ɛ, can then be estimated using the temperature gradient spectrum and the Ruddick et al. [2001] maximum likelihood method. Outside of a few turbulent patches, thermal diffusivity ranged between O(10-7) and O(10-6) m2s-1, resulting in negligible turbulent heat fluxes. Estimated ɛ was often at or below the noise level of most shear-based microstructure profilers. The noise level of Χ is estimated at O(10-11) °C2s-1, revealing the utility and applicability of this technique in future Arctic field work.
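A minimal sketch of the χ estimate described above, assuming the vertical temperature-gradient spectrum has already been computed from the FP07 data and is fully resolved by the slow profiling speed; the molecular thermal diffusivity value and the isotropy factor of 6 are standard assumptions, and the subsequent maximum likelihood fit for ε is not reproduced here.

```python
import numpy as np

KAPPA_T = 1.4e-7  # m^2/s, molecular thermal diffusivity of seawater (assumed)

def chi_from_spectrum(k_cpm, phi_tz):
    """Dissipation of thermal variance, chi = 6 * kappa_T * integral(Phi_Tz dk),
    assuming isotropic turbulence and a fully resolved gradient spectrum.

    k_cpm  : wavenumbers (cycles per metre)
    phi_tz : vertical temperature-gradient spectrum, (degC/m)^2 per cpm
    """
    return 6.0 * KAPPA_T * np.trapz(phi_tz, k_cpm)
```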
Nearshore coastal mapping. [in Lake Michigan and Puerto Rico
NASA Technical Reports Server (NTRS)
Polcyn, F. C.; Lyzenga, D. R.
1975-01-01
Two test sites of different water quality and bottom topography were used to test the maximum water depth penetration of the Skylab S-192 MSS for measurement of nearshore coastal bathymetry. The sites under investigation lie along the Lake Michigan coastline, where littoral transport acts to erode sand bluffs and endangers developments along 1,200 miles of shore, and on the west coast of Puerto Rico, where unreliable shoal location and depth information constitutes a safety hazard to navigation. The S-192 and the S-190A and B provide data on underwater features because of water transparency in the blue/green portion of the spectrum. Depths of 20 meters were measured with the S-192 at the Puerto Rico test site. The S-190B photography, with its improved spatial resolution, clearly delineates the triple sand bar topography at the Lake Michigan test site. Several processing techniques were employed to test for maximum depth measurement with least error. The results are useful for helping to determine an optimum spectral bandwidth for future space sensors that will increase depth measurements for different water attenuation conditions where a bottom reflection is detectable.
NASA Astrophysics Data System (ADS)
Asmat, A.; Jalal, K. A.; Ahmad, N.
2018-02-01
The present study uses the Aerosol Optical Depth (AOD) retrieved from Moderate Resolution Imaging Spectroradiometer (MODIS) data for the period from January 2011 until December 2015 over an urban area in Kuching, Sarawak. The results show that the minimum AOD value retrieved from MODIS is -0.06 and the maximum value is 6.0. High aerosol loading with high AOD values is observed during dry seasons, and low AOD during wet seasons. A multi-plane regression technique was used to retrieve AOD from MODIS (AODMODIS), and the relative absolute error is proposed as the accuracy-assessment statistic for the spatial and temporal averaging approaches. The AODMODIS was then compared with AOD derived from the Aerosol Robotic Network (AERONET) Sun photometer (AODAERONET), and the results show a high correlation coefficient (R2 = 0.93) between AODMODIS and AODAERONET. AODMODIS was then used as an input parameter to the Santa Barbara Discrete Ordinate Radiative Transfer (SBDART) model to estimate urban radiative forcing at Kuching. The observed hourly averaged urban radiative forcing is -0.12 Wm-2 at the top of the atmosphere (TOA), -2.13 Wm-2 at the surface, and 2.00 Wm-2 in the atmosphere. A moderate relationship is observed between the urban radiative forcing calculated using SBDART and that from AERONET, with values of 0.75 at the surface, 0.65 at TOA and 0.56 in the atmosphere. Overall, variation in AOD tends to cause large bias in the estimated urban radiative forcing.
NASA Astrophysics Data System (ADS)
Hamahashi, Mari; Screaton, Elizabeth; Tanikawa, Wataru; Hashimoto, Yoshitaka; Martin, Kylara; Saito, Saneatsu; Kimura, Gaku
2017-07-01
Subduction of the buoyant Cocos Ridge offshore the Osa Peninsula, Costa Rica, substantially affects the upper plate structure through a variety of processes, including outer forearc uplift, erosion, and focused fluid flow. To investigate the nature of a major seismic reflector (MSR) developed between slope sediments (late Pliocene-late Pleistocene silty clay) and underlying higher velocity upper plate materials (late Pliocene-early Pleistocene clayey siltstone), we infer possible mechanisms of sediment removal by examining the consolidation state, microstructure, and zeolite assemblages of sediments recovered from Integrated Ocean Drilling Program Expedition 344 Site U1380. Formation of the Ca-type zeolites laumontite and heulandite, inferred to form in the presence of Ca-rich fluids, has caused porosity reduction. We adjust measured porosity values for these pore-filling zeolites and evaluate the new porosity profile to estimate how much material was removed at the MSR. Based on the composite porosity-depth curve, we infer the past burial depth of the sediments directly below the MSR. The corrected and uncorrected porosity-depth curves yield values of 800 ± 70 m and 900 ± 70 m, respectively. We argue that deposition and removal of this entire estimated thickness in 0.49 Ma would require unrealistically large sedimentation rates and suggest that normal faulting at the MSR must contribute. The porosity offset could be explained with a maximum of 250 ± 70 m of normal fault throw, or 350 ± 70 m if the porosity were not corrected.
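Estimating past burial depth from a porosity-depth curve amounts to inverting a compaction trend. The sketch below assumes an exponential (Athy-type) relation with illustrative surface porosity and e-folding depth, not the composite curve fitted in the study, and the porosity value and present depth in the example are hypothetical.

```python
import math

# Athy-type compaction trend: phi(z) = phi0 * exp(-z / z_star)
PHI0 = 0.65      # surface porosity (assumed)
Z_STAR = 1500.0  # e-folding depth in metres (assumed)

def burial_depth_from_porosity(phi):
    """Invert the compaction trend to get the depth implied by a porosity value."""
    return -Z_STAR * math.log(phi / PHI0)

# If sediment just below the reflector has a (zeolite-corrected) porosity of 0.38
# but sits at only 480 m below seafloor, the implied removal/offset is:
current_depth = 480.0                        # hypothetical present burial depth (m)
implied_depth = burial_depth_from_porosity(0.38)
print(f"implied maximum burial ~ {implied_depth:.0f} m, "
      f"missing section ~ {implied_depth - current_depth:.0f} m")
```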
Does Aspartic Acid Racemization Constrain the Depth Limit of the Subsurface Biosphere?
NASA Technical Reports Server (NTRS)
Onstott, T C.; Magnabosco, C.; Aubrey, A. D.; Burton, A. S.; Dworkin, J. P.; Elsila, J. E.; Grunsfeld, S.; Cao, B. H.; Hein, J. E.; Glavin, D. P.;
2013-01-01
Previous studies of the subsurface biosphere have deduced average cellular doubling times of hundreds to thousands of years based upon geochemical models. We have directly constrained the in situ average cellular protein turnover or doubling times for metabolically active micro-organisms based on cellular amino acid abundances, D/L values of cellular aspartic acid, and the in vivo aspartic acid racemization rate. Application of this method to planktonic microbial communities collected from deep fractures in South Africa yielded maximum cellular amino acid turnover times of approximately 89 years for 1 km depth and 27 °C and 1-2 years for 3 km depth and 54 °C. The latter turnover times are much shorter than previously estimated cellular turnover times based upon geochemical arguments. The aspartic acid racemization rate at higher temperatures yields cellular protein doubling times that are consistent with the survival times of hyperthermophilic strains and predicts that at temperatures of 85 °C, cells must replace proteins every couple of days to maintain enzymatic activity. Such a high maintenance requirement may be the principal limit on the abundance of living micro-organisms in the deep, hot subsurface biosphere, as well as a potential limit on their activity. The measurement of the D/L of aspartic acid in biological samples is a potentially powerful tool for deep, fractured continental and oceanic crustal settings where geochemical models of carbon turnover times are poorly constrained. Experimental observations on the racemization rates of aspartic acid in living thermophiles and hyperthermophiles could test this hypothesis. The development of corrections for cell wall peptides and spores will be required, however, to improve the accuracy of these estimates for environmental samples.
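The turnover-time estimate rests on reversible first-order racemization kinetics: at steady state, the measured D/L of aspartic acid reflects a balance between racemization at the in vivo rate k(T) and replacement of the protein pool. The sketch below uses the simplified relation ln[(1 + D/L)/(1 − D/L)] = 2kt with an Arrhenius rate constant; the Arrhenius parameters and D/L values are illustrative assumptions chosen only to be broadly consistent with the turnover times quoted above, not the calibrated values from the study.

```python
import math

R = 8.314  # J/(mol K)

def racemization_rate(temp_c, a=2.4e19, ea=1.3e5):
    """Arrhenius rate constant k(T) in 1/yr; A and Ea are illustrative assumptions."""
    return a * math.exp(-ea / (R * (temp_c + 273.15)))

def turnover_time_years(d_over_l, temp_c):
    """Apparent turnover time from reversible first-order racemization:
    ln[(1 + D/L) / (1 - D/L)] = 2 * k * t  (equilibrium constant ~1 for Asp)."""
    k = racemization_rate(temp_c)
    return math.log((1 + d_over_l) / (1 - d_over_l)) / (2 * k)

# Hypothetical D/L values; the point is the strong temperature dependence.
for t_c, dl in ((27.0, 0.05), (54.0, 0.05)):
    print(f"{t_c:.0f} degC, D/L={dl}: turnover ~ {turnover_time_years(dl, t_c):.1f} yr")
```

With these assumed parameters the same D/L implies a turnover of roughly 90 years at 27 °C but only about a year at 54 °C, mirroring the temperature sensitivity described in the abstract.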
A hydroclimatological approach to predicting regional landslide probability using Landlab
NASA Astrophysics Data System (ADS)
Strauch, Ronda; Istanbulluoglu, Erkan; Nudurupati, Sai Siddhartha; Bandaragoda, Christina; Gasparini, Nicole M.; Tucker, Gregory E.
2018-02-01
We develop a hydroclimatological approach to the modeling of regional shallow landslide initiation that integrates spatial and temporal dimensions of parameter uncertainty to estimate an annual probability of landslide initiation based on Monte Carlo simulations. The physically based model couples the infinite-slope stability model with a steady-state subsurface flow representation and operates in a digital elevation model. Spatially distributed gridded data for soil properties and vegetation classification are used for parameter estimation of probability distributions that characterize model input uncertainty. Hydrologic forcing to the model is through annual maximum daily recharge to subsurface flow obtained from a macroscale hydrologic model. We demonstrate the model in a steep mountainous region in northern Washington, USA, over 2700 km2. The influence of soil depth on the probability of landslide initiation is investigated through comparisons among model output produced using three different soil depth scenarios reflecting the uncertainty of soil depth and its potential long-term variability. We found elevation-dependent patterns in probability of landslide initiation that showed the stabilizing effects of forests at low elevations, an increased landslide probability with forest decline at mid-elevations (1400 to 2400 m), and soil limitation and steep topographic controls at high alpine elevations and in post-glacial landscapes. These dominant controls manifest themselves in a bimodal distribution of spatial annual landslide probability. Model testing with limited observations revealed similarly moderate model confidence for the three hazard maps, suggesting suitable use as relative hazard products. The model is available as a component in Landlab, an open-source, Python-based landscape earth systems modeling environment, and is designed to be easily reproduced utilizing HydroShare cyberinfrastructure.
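At its core the approach evaluates the infinite-slope factor of safety repeatedly with parameters drawn from probability distributions and reports the fraction of draws that fail. The sketch below is a generic Monte Carlo version of that calculation, not the Landlab component itself; the distributions, densities, and the treatment of recharge as a relative wetness are illustrative assumptions.

```python
import numpy as np

def annual_landslide_probability(n_iter=3000, slope_deg=35.0, soil_depth_m=1.0, seed=0):
    """Fraction of Monte Carlo draws with factor of safety FS < 1 for the
    infinite-slope model with steady-state subsurface flow (relative wetness w).
    All distributions below are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    g, rho_w, rho_s = 9.81, 1000.0, 1800.0          # gravity, water/soil density
    theta = np.radians(slope_deg)

    cohesion = rng.uniform(2e3, 8e3, n_iter)                 # Pa (roots + soil)
    phi = np.radians(rng.uniform(28.0, 38.0, n_iter))        # friction angle
    wetness = np.clip(rng.lognormal(-1.0, 0.6, n_iter), 0.0, 1.0)  # saturated fraction

    fs = (cohesion / (rho_s * g * soil_depth_m * np.sin(theta) * np.cos(theta))
          + (1.0 - wetness * rho_w / rho_s) * np.tan(phi) / np.tan(theta))
    return np.mean(fs < 1.0)

print(annual_landslide_probability())
```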
Does aspartic acid racemization constrain the depth limit of the subsurface biosphere?
Onstott, T C; Magnabosco, C; Aubrey, A D; Burton, A S; Dworkin, J P; Elsila, J E; Grunsfeld, S; Cao, B H; Hein, J E; Glavin, D P; Kieft, T L; Silver, B J; Phelps, T J; van Heerden, E; Opperman, D J; Bada, J L
2014-01-01
Previous studies of the subsurface biosphere have deduced average cellular doubling times of hundreds to thousands of years based upon geochemical models. We have directly constrained the in situ average cellular protein turnover or doubling times for metabolically active micro-organisms based on cellular amino acid abundances, D/L values of cellular aspartic acid, and the in vivo aspartic acid racemization rate. Application of this method to planktonic microbial communities collected from deep fractures in South Africa yielded maximum cellular amino acid turnover times of ~89 years for 1 km depth and 27 °C and 1-2 years for 3 km depth and 54 °C. The latter turnover times are much shorter than previously estimated cellular turnover times based upon geochemical arguments. The aspartic acid racemization rate at higher temperatures yields cellular protein doubling times that are consistent with the survival times of hyperthermophilic strains and predicts that at temperatures of 85 °C, cells must replace proteins every couple of days to maintain enzymatic activity. Such a high maintenance requirement may be the principal limit on the abundance of living micro-organisms in the deep, hot subsurface biosphere, as well as a potential limit on their activity. The measurement of the D/L of aspartic acid in biological samples is a potentially powerful tool for deep, fractured continental and oceanic crustal settings where geochemical models of carbon turnover times are poorly constrained. Experimental observations on the racemization rates of aspartic acid in living thermophiles and hyperthermophiles could test this hypothesis. The development of corrections for cell wall peptides and spores will be required, however, to improve the accuracy of these estimates for environmental samples. © 2013 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Zhang, Hongjuan; Hendricks Franssen, Harrie-Jan; Han, Xujun; Vrugt, Jasper A.; Vereecken, Harry
2017-09-01
Land surface models (LSMs) use a large cohort of parameters and state variables to simulate the water and energy balance at the soil-atmosphere interface. Many of these model parameters cannot be measured directly in the field and require calibration against measured fluxes of carbon dioxide, sensible and/or latent heat, and/or observations of the thermal and/or moisture state of the soil. Here, we evaluate the usefulness and applicability of four different data assimilation methods for joint parameter and state estimation of the Variable Infiltration Capacity model (VIC-3L) and the Community Land Model (CLM), using a 5-month calibration (assimilation) period (March-July 2012) of areal-averaged SPADE soil moisture measurements at 5, 20, and 50 cm depths at the Rollesbroich experimental test site in the Eifel mountain range in western Germany. We used the EnKF with either state augmentation or dual estimation, and the residual resampling PF with either a simple (statistically deficient) or a more sophisticated, MCMC-based parameter resampling method. The performance of the calibrated
LSM models was investigated using SPADE water content measurements of a 5-month evaluation period (August-December 2012). As expected, all DA methods enhance the ability of the VIC and CLM models to describe spatiotemporal patterns of moisture storage within the vadose zone of the Rollesbroich site, particularly if the maximum baseflow velocity (VIC) or fractions of sand, clay, and organic matter of each layer (CLM) are estimated jointly with the model states of each soil layer. The differences between the soil moisture simulations of VIC-3L and CLM are much larger than the discrepancies among the four data assimilation methods. The EnKF with state augmentation or dual estimation yields the best performance of VIC-3L and CLM during the calibration and evaluation period, yet results are in close agreement with the PF using MCMC resampling. Overall, CLM demonstrated the best performance for the Rollesbroich site. The large systematic underestimation of water storage at 50 cm depth by VIC-3L during the first few months of the evaluation period questions, in part, the validity of its fixed water table depth at the bottom of the modeled soil domain.
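For the EnKF with state augmentation mentioned above, the uncertain parameters are appended to the state vector and updated with the same ensemble-derived Kalman gain as the soil moisture states. The sketch below is a generic stochastic-EnKF analysis step, not the implementation used in the study; the array shapes and the observation operator (direct observation of selected layers) are simplifying assumptions.

```python
import numpy as np

def enkf_augmented_update(states, params, obs, obs_err_var, h_indices):
    """One EnKF analysis step with state augmentation.

    states    : (n_ens, n_state) ensemble of model states (e.g. layer soil moisture)
    params    : (n_ens, n_param) ensemble of uncertain parameters updated jointly
    obs       : (n_obs,) observed soil moisture
    h_indices : indices of the state vector that are directly observed
    """
    ens = np.hstack([states, params])              # augmented ensemble
    n_ens = ens.shape[0]
    hx = states[:, h_indices]                      # predicted observations

    a = ens - ens.mean(axis=0)
    hxa = hx - hx.mean(axis=0)
    p_xy = a.T @ hxa / (n_ens - 1)                 # cross covariance
    p_yy = hxa.T @ hxa / (n_ens - 1) + np.eye(len(obs)) * obs_err_var
    gain = p_xy @ np.linalg.inv(p_yy)              # Kalman gain

    perturbed = obs + np.random.normal(0.0, np.sqrt(obs_err_var), (n_ens, len(obs)))
    ens_a = ens + (perturbed - hx) @ gain.T        # stochastic (perturbed-obs) update
    n_state = states.shape[1]
    return ens_a[:, :n_state], ens_a[:, n_state:]
```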
NASA Astrophysics Data System (ADS)
Suzuki, Kazuyoshi; Zupanski, Milija
2018-01-01
In this study, we investigate the uncertainties associated with land surface processes in an ensemble prediction context. Specifically, we compare the uncertainties produced by a coupled atmosphere-land modeling system with two different land surface models, the Noah-MP land surface model (LSM) and the Noah LSM, using the Maximum Likelihood Ensemble Filter (MLEF) data assimilation system as a platform for ensemble prediction. We carried out 24-hour prediction simulations in Siberia with 32 ensemble members beginning at 00:00 UTC on 5 March 2013. We then compared the model prediction uncertainty of snow depth and solid precipitation with observation-based research products and evaluated the standard deviation of the ensemble spread. The prediction skill and ensemble spread exhibited a high positive correlation for both LSMs, indicating a realistic uncertainty estimation. The inclusion of a multi-layer snow model in the Noah-MP LSM was beneficial for reducing the uncertainties of snow depth and snow depth change compared to the Noah LSM, but the uncertainty in daily solid precipitation showed minimal difference between the two LSMs. The impact of LSM choice in reducing temperature uncertainty was limited to the surface layers of the atmosphere. In summary, we found that the more sophisticated Noah-MP LSM reduces uncertainties associated with land surface processes compared to the Noah LSM. Thus, using prediction models with improved skill implies improved predictability and greater certainty of prediction.
Tracer signals of the intermediate layer of the Arabian Sea
NASA Astrophysics Data System (ADS)
Rhein, Monika; Stramma, Lothar; Plähn, Olaf
In 1995, hydrographic and chlorofluorocarbon (CFCs, components F11, F12) measurements were carried out in the Gulf of Aden, in the Gulf of Oman, and in the Arabian Sea. In the Gulf of Oman, the F12 concentrations in the Persian Gulf outflow (PGW) at about 300m depth were significantly higher than in ambient surface water with saturations reaching 270%. These high values could not be caused by air-sea gas exchange. The outflow was probably contaminated with oil, and the lipophilic character of the CFCs could then lead to the observed supersaturations. The intermediate F12 maximum decreased rapidly further east and south. At the Strait of Bab el Mandeb in the Gulf of Aden, the Red Sea outflow (RSW) was saturated with F12 to about 65% at 400m depth, and decreased to 50% while descending to 800m depth. The low saturation is not surprising, because the outflow contains deep and intermediate water masses from the Red Sea which were isolated from the surface for some time. The tracer contributions to the Arabian Sea for Indian Central Water (ICW) and PGW are about equal, while below 500m depth the RSW contribution greatly exceeds ICW. Modeling the CFC budget of the Arabian Sea, the inflow of ICW north of 12°N is estimated to be 1-6 Sv, depending mainly on the strength of the flow of Red Sea Water into the Arabian Sea.
NASA Astrophysics Data System (ADS)
Pedemonte, Stefano; Pierce, Larry; Van Leemput, Koen
2017-11-01
Measuring the depth-of-interaction (DOI) of gamma photons enables increasing the resolution of emission imaging systems. Several design variants of DOI-sensitive detectors have been introduced recently to improve the performance of scanners for positron emission tomography (PET). However, the accurate characterization of the response of DOI detectors, necessary to accurately measure the DOI, remains an unsolved problem. Numerical simulations are, at the state of the art, imprecise, while direct experimental measurement of the characteristics of DOI detectors is hindered by the impossibility of imposing the depth-of-interaction in an experimental set-up. In this article we introduce a machine learning approach for extracting accurate forward models of gamma imaging devices from simple pencil-beam measurements, using a nonlinear dimensionality reduction technique in combination with a finite mixture model. The method is purely data-driven, does not require simulations, and is applicable to a wide range of detector types. The proposed method was evaluated both in a simulation study and with data acquired using a monolithic gamma camera designed for PET (the cMiCE detector), demonstrating accurate recovery of the DOI characteristics. The combination of the proposed calibration technique with maximum a posteriori estimation of the coordinates of interaction provided a depth resolution of ≈1.14 mm for the simulated PET detector and ≈1.74 mm for the cMiCE detector. The software and experimental data are made available at http://occiput.mgh.harvard.edu/depthembedding/.
NASA Technical Reports Server (NTRS)
Schaber, G. G.; Mccauley, J. F.; Breed, C. S.; Olhoeft, G. R.
1986-01-01
Interpretation of Shuttle Imaging Radar-A (SIR-A) images by McCauley et al. (1982) dramatically changed previous concepts of the role that fluvial processes have played over the past 10,000 to 30 million years in shaping this now extremely flat, featureless, and hyperarid landscape. In the present paper, the near-surface stratigraphy, the electrical properties of materials, and the types of radar interfaces found to be responsible for different classes of SIR-A tonal response are summarized. The dominant factors related to efficient microwave signal penetration into the sediment blanket include (1) a favorable distribution of particle sizes, (2) extremely low moisture content, and (3) reduced geometric scattering at the SIR-A frequency (1.3 GHz). The depth of signal penetration that results in a recorded backscatter, here called 'radar imaging depth', was documented in the field to be a maximum of 1.5 m, or 0.25 of the calculated 'skin depth', for the sediment blanket. Radar imaging depth is estimated to be between 2 and 3 m for active sand dune materials. Diverse permittivity interfaces and volume scatterers within the shallow subsurface are responsible for most of the observed backscatter not directly attributable to grazing outcrops. Calcium carbonate nodules and rhizoliths concentrated in sandy alluvium of Pleistocene age south of Safsaf oasis in southern Egypt provide effective contrast in permittivity and thus act as volume scatterers that enhance the SIR-A portrayal of younger inset stream channels.
Gas Chemistry of Submarine Hydrothermal Venting at Maug Caldera, Mariana Arc
NASA Astrophysics Data System (ADS)
Embley, R. W.; Lupton, J. E.; Butterfield, D. A.; Lilley, M. D.; Evans, L. J.; Olson, E. J.; Resing, J. A.; Buck, N.; Larson, B. I.; Young, C.
2014-12-01
Maug volcano consists of 3 islands that define the perimeter of a submerged caldera that was formed by an explosive eruption. The caldera reaches a depth of ~225 meters, and has a prominent central cone or pinnacle that ascends within 20 meters of the sea surface. Our exploration of Maug began in 2003, when a single hydrocast in the caldera detected a strong suspended particle and helium plume reaching a maximum of δ3He = 250% at ~180 meters depth, clearly indicating hydrothermal activity within the caldera. In 2004 we returned armed with the ROPOS ROV, and two ROPOS dives discovered and sampled low temperature (~4 °C) diffuse venting associated with bacterial mats on the NE flank of the central pinnacle at 145 m depth. Samples collected with titanium gas tight bottles were badly diluted with ambient seawater but allowed an estimate of end-member 3He/4He of 7.3 Ra. Four vertical casts lowered into the caldera in 2004 all had a strong 3He signal (δ3He = 190%) at 150-190 meters depth. A recent expedition in 2014 focused on the shallow (~10 m) gas venting along the caldera interior. Scuba divers were able to collect samples of the gas bubbles using evacuated SS bottles fitted with plastic funnels. The gas samples had a consistent ~170 ppm He, 8 ppm Ne, 60% CO2, 40% N2, and 0.8% Ar, and an end-member 3He/4He ratio of 6.9 Ra. This 3He/4He ratio falls within the range for typical arc volcanoes. The rather high atmospheric component (N2, Ar, Ne) in these samples is not contamination but appears to be derived from subsurface exchange between the ascending CO2 bubbles and air saturated seawater. A single vertical cast in 2014 had a maximum δ3He = 55% at 140 m depth, much lower than in 2003 and 2004. This decrease is possibly due to recent flushing of the caldera by a storm event, or may reflect a decrease in the deep hydrothermal activity. This area of shallow CO2 venting in Maug caldera is of particular interest as a natural laboratory for studying the effects of ocean acidification on corals.
Geophysical setting of western Utah and eastern Nevada between latitudes 37°45′ and 40°N
Mankinen, Edward A.; McKee, Edwin H.; Tripp, Bryce; Krahulec, Ken; Jordan, Lucy
2009-01-01
Gravity and aeromagnetic data refine the structural setting for the region of western Utah and eastern Nevada between Snake and Hamlin Valleys on the west and Tule Valley on the east. These data are used here as part of a regional analysis. An isostatic gravity map shows large areas underlain by gravity lows, the most prominent of which is a large semi-circular low associated with the Indian Peak caldera complex in the southwestern part of the study area. Another low underlies the Thomas caldera in the northeast, and linear lows elsewhere indicate low-density basin-fill in all major north-trending graben valleys. Gravity highs reflect pre-Cenozoic rocks mostly exposed in the mountain ranges. In the Confusion Range, however, the gravity high extends about 15 km east of the range front to Coyote Knolls, indicating a broad pediment cut on upper Paleozoic rocks and covered by a thin veneer of alluvium. Aeromagnetic highs sharply delineate Oligocene and Miocene volcanic rocks and intracaldera plutons associated with the Indian Peak caldera complex and the Pioche–Marysvale igneous belt. Jurassic to Eocene plutons and volcanic rocks elsewhere in the study area, however, have much more modest magnetic signatures. Some relatively small magnetic highs in the region are associated with outcrops of volcanic rock, and the continuation of those anomalies indicates that the rocks are probably extensive in the subsurface. A gravity inversion method separating the isostatic gravity anomaly into fields representing pre-Cenozoic basement rocks and Cenozoic basin deposits was used to calculate depth to basement and estimate maximum amounts of alluvial and volcanic fill within the valleys. Maximum depths within the Indian Peak caldera complex average about 2.5 km, locally reaching 3 km. North of the caldera complex, the thickness of valley fill in most graben valleys ranges from 1.5 to 3 km, with Hamlin and Pine Valleys averaging ~3 km. The main basin beneath Tule Valley is relatively shallow (~0.6 km), reaching a maximum depth of ~1 km over a small area northeast of Coyote Knolls. Maximum horizontal gradients were calculated for both long-wavelength gravity and magnetic-potential data, and these were used to constrain major density and magnetic lineaments. These lineaments help delineate deep-seated crustal structures that separate major tectonic domains, potentially localizing Cenozoic tectonic features that may control regional ground-water flow.
Dilbone, Elizabeth; Legleiter, Carl; Alexander, Jason S.; McElroy, Brandon
2018-01-01
Methods for spectrally based mapping of river bathymetry have been developed and tested in clear‐flowing, gravel‐bed channels, with limited application to turbid, sand‐bed rivers. This study used hyperspectral images and field surveys from the dynamic, sandy Niobrara River to evaluate three depth retrieval methods. The first regression‐based approach, optimal band ratio analysis (OBRA), paired in situ depth measurements with image pixel values to estimate depth. The second approach used ground‐based field spectra to calibrate an OBRA relationship. The third technique, image‐to‐depth quantile transformation (IDQT), estimated depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image‐derived variable. OBRA yielded the lowest depth retrieval mean error (0.005 m) and highest observed versus predicted R2 (0.817). Although misalignment between field and image data did not compromise the performance of OBRA in this study, poor georeferencing could limit regression‐based approaches such as OBRA in dynamic, sand‐bedded rivers. Field spectroscopy‐based depth maps exhibited a mean error with a slight shallow bias (0.068 m) but provided reliable estimates for most of the study reach. IDQT had a strong deep bias but provided informative relative depth maps. Overprediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the depth CDF. Although each of the techniques we tested demonstrated potential to provide accurate depth estimates in sand‐bed rivers, each method also was subject to certain constraints and limitations.
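The OBRA calibration step can be illustrated with a short sketch: regress field depths against the log of every band-ratio pair and keep the pair with the highest R². The reflectance and depth arrays below are synthetic stand-ins, not Niobrara data.

```python
import numpy as np

# OBRA-style calibration on synthetic data: depth = b0 + b1 * ln(R_i / R_j)
rng = np.random.default_rng(0)
n_pixels, n_bands = 200, 5
reflectance = rng.uniform(0.05, 0.4, size=(n_pixels, n_bands))  # toy image spectra
depth = rng.uniform(0.2, 1.5, size=n_pixels)                    # toy field depths (m)

best = (None, -np.inf)
for i in range(n_bands):
    for j in range(n_bands):
        if i == j:
            continue
        x = np.log(reflectance[:, i] / reflectance[:, j])
        b1, b0 = np.polyfit(x, depth, 1)            # linear fit: depth = b0 + b1 * x
        r2 = np.corrcoef(x, depth)[0, 1] ** 2
        if r2 > best[1]:
            best = ((i, j, b0, b1), r2)

(i, j, b0, b1), r2 = best
print(f"optimal pair: bands {i}/{j}, depth = {b0:.2f} + {b1:.2f} ln(R{i}/R{j}), R2 = {r2:.2f}")
```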
Bacterial and primary production in the Greenland Sea
NASA Astrophysics Data System (ADS)
Børsheim, Knut Yngve
2017-12-01
Bacterial production rates were measured in water profiles collected in the Greenland Sea and adjacent areas. Hydrography and nutrients throughout the water column were measured along 75°N from 12°W to 10°E at 20 km distance intervals. Net primary production rates from satellite-sensed data were compared with literature values from 14C incubations and used for regional and seasonal comparisons. Maximum bacterial production rates were associated with the region close to the edge of the East Greenland Current, and the rates decreased gradually towards the center of the Greenland Sea central gyre. Integrated over the upper 20 m, the maximum bacterial production rate was 17.9 mmol C m⁻² day⁻¹, and east of the center of the gyre the average integrated rate was 4.6 mmol C m⁻² day⁻¹. It is hypothesized that the high bacterial production rates in the western Greenland Sea were sustained by organic material carried from the Arctic Ocean by the East Greenland Current. The depth profiles of nitrate and phosphate were very similar on both sides of the Arctic front, with 2% higher values between 500 m and 2000 m in the Arctic domain, and a N/P ratio of 13.6. The N/Si ratio varied by depth and region, with increasing silicate depletion from 1500 m depth to the surface. The rate of depletion from 1500 m depth to the surface in the Atlantic domain was twice as high as in the Arctic domain. Net primary production in the area between the edge of the East Greenland Current and the center of the Greenland Sea gyre was 96 mmol C m⁻² day⁻¹ at the time of the expedition in 2006, and 78 mmol C m⁻² day⁻¹ east of the center including the Atlantic domain. Annual net primary production estimated from satellite data in the Greenland Sea increased substantially in the period between 2003 and 2016, and the rate of increase was lowest close to the East Greenland Current.
Estimating plant available water content from remotely sensed evapotranspiration
NASA Astrophysics Data System (ADS)
van Dijk, A. I. J. M.; Warren, G.; Doody, T.
2012-04-01
Plant available water content (PAWC) is an emergent soil property that is a critical variable in hydrological modelling. PAWC determines the active soil water storage and, in water-limited environments, is the main cause of different ecohydrological behaviour between (deep-rooted) perennial vegetation and (shallow-rooted) seasonal vegetation. Conventionally, PAWC is estimated for a combination of soil and vegetation from three variables: maximum rooting depth and the volumetric water content at field capacity and permanent wilting point, respectively. Without elaborate local field observation, large uncertainties in PAWC occur due to the assumptions associated with each of the three variables. We developed an alternative, observation-based method to estimate PAWC from precipitation observations and CSIRO MODIS Reflectance-based Evapotranspiration (CMRSET) estimates. Processing steps include (1) removing residual systematic bias in the CMRSET estimates, (2) making spatially appropriate assumptions about local water inputs and surface runoff losses, (3) using mean seasonal patterns in precipitation and CMRSET to estimate the seasonal pattern in soil water storage changes, (4) from these, calculating the mean seasonal storage range, which can be treated as an estimate of PAWC. We evaluate the resulting PAWC estimates against those determined in field experiments for 180 sites across Australia. We show that the method produces better estimates of PAWC than conventional techniques. In addition, the method provides detailed information with full continental coverage at moderate resolution (250 m) scale. The resulting maps can be used to identify likely groundwater dependent ecosystems and to derive PAWC distributions for each combination of soil and vegetation type.
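A minimal sketch of steps (3)-(4) follows, assuming monthly climatologies of precipitation and actual evapotranspiration are already bias-corrected; the monthly numbers are illustrative, not CMRSET values.

```python
import numpy as np

# Treat the mean seasonal cycle of (P - AET) as a storage change, accumulate it
# through the year, and take the range of the cumulative curve as a PAWC proxy.
precip = np.array([90, 80, 70, 40, 20, 10, 5, 5, 15, 40, 70, 85.0])    # mm/month (toy)
aet    = np.array([60, 65, 70, 60, 45, 30, 20, 20, 30, 50, 60, 60.0])  # mm/month (toy)

storage_change = precip - aet                                   # step (3)
cumulative = np.cumsum(storage_change - storage_change.mean())  # close the annual cycle
pawc_estimate = cumulative.max() - cumulative.min()             # step (4): storage range
print(f"PAWC estimate ~ {pawc_estimate:.0f} mm")
```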
Mapping snow depth within a tundra ecosystem using multiscale observations and Bayesian methods
Wainwright, Haruko M.; Liljedahl, Anna K.; Dafflon, Baptiste; ...
2017-04-03
This paper compares and integrates different strategies to characterize the variability of end-of-winter snow depth and its relationship to topography in ice-wedge polygon tundra of Arctic Alaska. Snow depth was measured using in situ snow depth probes and estimated using ground-penetrating radar (GPR) surveys and the photogrammetric detection and ranging (phodar) technique with an unmanned aerial system (UAS). We found that GPR data provided high-precision estimates of snow depth (RMSE = 2.9 cm), with a spatial sampling of 10 cm along transects. Phodar-based approaches provided snow depth estimates in a less laborious manner compared to GPR and probing, while yielding a high precision (RMSE = 6.0 cm) and a fine spatial sampling (4 cm × 4 cm). We then investigated the spatial variability of snow depth and its correlation to micro- and macrotopography using the snow-free lidar digital elevation map (DEM) and the wavelet approach. We found that the end-of-winter snow depth was highly variable over short (several meter) distances, and the variability was correlated with microtopography. Microtopographic lows (i.e., troughs and centers of low-centered polygons) were filled in with snow, which resulted in a smooth and even snow surface following macrotopography. We developed and implemented a Bayesian approach to integrate the snow-free lidar DEM and multiscale measurements (probe and GPR) as well as the topographic correlation for estimating snow depth over the landscape. Our approach led to high-precision estimates of snow depth (RMSE = 6.0 cm), at 0.5 m resolution and over the lidar domain (750 m × 700 m).
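The data-fusion idea can be sketched with a simple pointwise Gaussian (precision-weighted) update of a DEM-derived prior with GPR observations. This is only a toy version of the paper's hierarchical Bayesian model; the regression coefficients and noise levels are assumptions, with the GPR error set near the 2.9 cm RMSE quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
dem_elev = rng.normal(0.0, 0.3, n)                  # microtopographic deviation (m), toy
prior_mean = 0.6 - 0.8 * dem_elev                   # assumed depth-topography regression
prior_var = 0.05 ** 2 + 0.15 ** 2 * np.ones(n)      # regression + model uncertainty

gpr_depth = prior_mean + rng.normal(0.0, 0.03, n)   # synthetic GPR observations
gpr_var = 0.029 ** 2                                # GPR RMSE ~ 2.9 cm

# Gaussian (precision-weighted) posterior at co-located cells
post_var = 1.0 / (1.0 / prior_var + 1.0 / gpr_var)
post_mean = post_var * (prior_mean / prior_var + gpr_depth / gpr_var)
print(f"mean posterior std ~ {np.sqrt(post_var).mean() * 100:.1f} cm")
```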
Mrochen, Michael; Schelling, Urs; Wuellner, Christian; Donitzky, Christof
2009-02-01
To investigate the effect of temporal and spatial distributions of laser spots (scan sequences) on the corneal surface quality after ablation and on the maximum ablation depth of a given refractive correction after photoablation with a high-repetition-rate scanning-spot laser. IROC AG, Zurich, Switzerland, and WaveLight AG, Erlangen, Germany. Bovine corneas and poly(methyl methacrylate) (PMMA) plates were photoablated using a 1050 Hz excimer laser prototype for corneal laser surgery. Four temporal and spatial spot distributions (scan sequences) with different temporal overlapping factors were created for 3 myopic, 3 hyperopic, and 3 phototherapeutic keratectomy ablation profiles. Surface quality and maximum ablation depth were measured using a surface profiling system. The surface quality factor increased (i.e., surfaces became rougher) as the amount of temporal overlapping in the scan sequence and the amount of correction increased. The rise in surface quality factor was smaller for bovine corneas than for PMMA. The scan sequence might cause systematic substructures at the surface of the ablated material depending on the overlapping factor. The maximum ablation depth varied within the scan sequence. The temporal and spatial distribution of the laser spots (scan sequence) during a corneal laser procedure affected the surface quality and maximum ablation depth of the ablation profile. Corneal laser surgery could theoretically benefit from smaller spot sizes and higher repetition rates. The temporal and spatial spot distributions are relevant to achieving these aims.
NASA Astrophysics Data System (ADS)
Slack, W.; Murdoch, L.
2016-12-01
Hydraulic fractures can be created in shallow soil or bedrock to promote processes that destroy or remove chemical contaminants. The form of the fracture plays an important role in how it is used in such applications. We have created more than 4500 environmental hydraulic fractures at approximately 300 sites since 1990, and we measured surface deformation at many of them. Several of these sites were subsequently excavated to evaluate fracture form in detail. In one recent example, six hydraulic fractures were created at 1.5 m depth while we measured upward displacement and tilt at 15 overlying locations. We excavated in the vicinities of two of the fractures and mapped the exposed fractures. Tilt vectors were initially symmetric about the borehole but radiated from a point that moved southwest with time. Upward displacement of as much as 2.5 cm covered a region 5 m to 6 m across. The maximum displacement was roughly at the center of the deformed region but was 2 m southwest of the borehole, consistent with the tilt data. Excavation revealed an oblong, proppant-filled fracture over 4.2 m in length with a maximum thickness of 1 cm, so the proppant covers a region that is smaller than the uplifted area and the proppant thickness is roughly half of the uplift. The fracture was shaped like a shallow saucer with maximum dips of approximately 15° at the southwestern end. The pattern of tilt and uplift generally reflects the aperture of the underlying pressurized fracture, but the deformation extends beyond the extent of the sand proppant, so a quantitative interpretation requires inversion. Inversion of the tilt data using a simple double-dislocation model underestimates the extent but correctly predicts the depth, orientation, and off-centered location. Inversion of uplift using a model that assumes the overburden deforms like a plate overestimates the extent. Neither can characterize the curved shape. A forward model using FEM analysis capable of representing 3D shapes allows more accurate interpretations of fracture form and extent, but at the cost of more parameters and a greater computational burden compared to the analytical forward models. The best approach is a combination of all three forward models to interpret the deformation data.
Extreme precipitation depths for Texas, excluding the Trans-Pecos region
Lanning-Rush, Jennifer; Asquith, William H.; Slade, Raymond M.
1998-01-01
Storm durations of 1, 2, 3, 4, 5, and 6 days were investigated for this report. The extreme precipitation depth for a particular area is estimated from an “extreme precipitation curve” (an upper limit or envelope curve developed from graphs of extreme precipitation depths for each climatic region). The extreme precipitation curves were determined using precipitation depth-duration information from a subset (24 “extreme” storms) of 213 “notable” storms documented throughout Texas. The extreme precipitation curves can be used to estimate extreme precipitation depth for a particular area. The extreme precipitation depth represents a limiting depth, which can provide useful comparative information for more quantitative analyses.
Using computational modeling of river flow with remotely sensed data to infer channel bathymetry
Nelson, Jonathan M.; McDonald, Richard R.; Kinzel, Paul J.; Shimizu, Y.
2012-01-01
As part of an ongoing investigation into the use of computational river flow and morphodynamic models for the purpose of correcting and extending remotely sensed river datasets, a simple method for inferring channel bathymetry is developed and discussed. The method is based on an inversion of the equations expressing conservation of mass and momentum to develop equations that can be solved for depth given known values of vertically averaged velocity and water-surface elevation. The ultimate goal of this work is to combine imperfect remotely sensed data on river planform, water-surface elevation and water-surface velocity in order to estimate depth and other physical parameters of river channels. In this paper, the technique is examined using synthetic data sets that are developed directly from the application of forward two- and three-dimensional flow models. These data sets are constrained to satisfy conservation of mass and momentum, unlike typical remotely sensed field data sets. This provides a better understanding of the process and also allows assessment of how simple inaccuracies in remotely sensed estimates might propagate into depth estimates. The technique is applied to three simple cases: first, depth is extracted from a synthetic dataset of vertically averaged velocity and water-surface elevation; second, depth is extracted from the same data set but with a normally distributed random error added to the water-surface elevation; third, depth is extracted from a synthetic data set for the same river reach using computed water-surface velocities (in place of depth-integrated values) and water-surface elevations. In each case, the extracted depths are compared to the actual measured depths used to construct the synthetic data sets (with two- and three-dimensional flow models). Even very small errors in water-surface elevation and velocity degrade the depth estimates, and the lost accuracy cannot be recovered. Errors in depth estimates associated with assuming water-surface velocities equal to depth-integrated velocities are substantial, but can be reduced with simple corrections.
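A one-dimensional analogue of the mass-conservation inversion is easy to write down: if the unit discharge is known at one surveyed section, continuity alone returns depth wherever the vertically averaged velocity is observed. The paper inverts the full two-dimensional mass and momentum equations; the sketch below, with a synthetic velocity field, shows only the continuity piece.

```python
import numpy as np

# 1D steady continuity: q = h(x) * u(x), so h(x) = q / u(x).
x = np.linspace(0.0, 500.0, 51)                  # streamwise distance (m)
u = 0.8 + 0.3 * np.sin(2 * np.pi * x / 250.0)    # synthetic mean velocity (m/s)
q = 1.2                                          # unit discharge from a gauged section (m^2/s), assumed

h = q / u                                        # inverted depth (m)
print(f"depth range: {h.min():.2f} - {h.max():.2f} m")
```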
Foulger, G.R.
1995-01-01
Given a uniform lithology and strain rate and a full seismic data set, the maximum depth of earthquakes may be viewed to a first order as an isotherm. These conditions are approached at the Hengill geothermal area, S. Iceland, a dominantly basaltic area. The temperature at which seismic failure ceases for the strain rates likely at the Hengill geothermal area is determined by analogy with oceanic crust, and is about 650 ± 50 °C. The topographies of the top and bottom of the seismogenic layer were mapped using 617 earthquakes. The thickness of the seismogenic layer is roughly constant and about 3 km. A shallow, aseismic, low-velocity volume within the spreading plate boundary that crosses the area occurs above the top of the seismogenic layer and is interpreted as an isolated body of partial melt. The base of the seismogenic layer has a maximum depth of about 6.5 km beneath the spreading axis and deepens to about 7 km beneath a transform zone in the south of the area.
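The isotherm interpretation can be illustrated with a back-of-the-envelope calculation assuming a linear geotherm; the gradient used here is only a plausible value for an active geothermal area, not one quoted in the paper.

```python
# Depth of the ~650 degC brittle-ductile cutoff for an assumed linear geotherm.
T_cutoff = 650.0      # degC, failure-cessation temperature (+/- 50 degC in the paper)
T_surface = 5.0       # degC, assumed
gradient = 100.0      # degC/km, plausible for an active geothermal area (assumption)

z_max = (T_cutoff - T_surface) / gradient
print(f"maximum seismogenic depth ~ {z_max:.1f} km")
```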
Phillips, Jeffrey D.; Grauch, V.J.S.
2004-01-01
In the southern Española basin south of Santa Fe, New Mexico, weakly magnetic Santa Fe Group sediments of Oligocene to Pleistocene age, which represent the primary aquifers for the region, are locally underlain by moderately to strongly magnetic igneous and volcaniclastic rocks of Oligocene age. Where this relationship exists, the thickness of Santa Fe Group sediments, and thus the maximum thickness of the aquifers, can be estimated from quantitative analysis of high-resolution aeromagnetic data. These thickness estimates provide guidance for characterizing the ground-water resources in between scattered water wells in this area of rapid urban development and declining water supplies. This report presents one such analysis based on the two-step extended Euler method for estimating depth to magnetic sources. The results show the general form of a north-trending synclinal basin located between the Cerrillos Hills and Eldorado with northward thickening of Santa Fe Group sediments. The increase in thickness is gradual from the erosional edge on the south to a U-shaped Santa Fe embayment hinge line, north of which sediments thicken much more dramatically. Along the north-south basin axis, Santa Fe Group sediments thicken from 300 feet (91 meters) at the hinge line near latitude 35°32′30″N to 2,000 feet (610 meters) at the Cerrillos Road interchange at Interstate 25, north of latitude 35°36′N. The depth analysis indicates that, superimposed on this general synclinal form, there are many local areas where the Santa Fe Group sediments may be thickened by a few hundred feet, presumably due to erosional relief on the underlying Oligocene volcanic and volcaniclastic rocks. Some larger areas of greater apparent thickening occur where the presence of magnetic rocks directly underlying the Santa Fe Group is uncertain. Where magnetic rocks are absent beneath the Santa Fe Group, the thickness cannot be estimated from the aeromagnetic data.
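A least-squares Euler deconvolution sketch is shown below. It solves the homogeneity equation for source position and background over a data window, using a synthetic field of known structural index so the recovered depth can be checked exactly; the study itself applies a two-step extended Euler variant to gridded aeromagnetic data, which this toy does not reproduce.

```python
import numpy as np

# Euler's homogeneity equation: (x-x0)dT/dx + (y-y0)dT/dy + (z-z0)dT/dz = N(B - T)
# Rearranged into a linear system for (x0, y0, z0, B) and solved by least squares.
N = 2.0                                             # assumed structural index
x0_true, y0_true, z0_true = 250.0, 300.0, 610.0     # synthetic source, depth 610 m
xg, yg = np.meshgrid(np.linspace(0, 500, 26), np.linspace(0, 500, 26))
x, y, z = xg.ravel(), yg.ravel(), np.zeros(xg.size)  # observations at the surface

r = np.sqrt((x - x0_true) ** 2 + (y - y0_true) ** 2 + (z - z0_true) ** 2)
T = 1.0e8 / r ** N                                   # exactly homogeneous test field
dTdx = -N * (x - x0_true) * T / r ** 2
dTdy = -N * (y - y0_true) * T / r ** 2
dTdz = -N * (z - z0_true) * T / r ** 2

A = np.column_stack([dTdx, dTdy, dTdz, N * np.ones_like(T)])
b = x * dTdx + y * dTdy + z * dTdz + N * T
x0, y0, z0, B = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"recovered source depth ~ {z0:.0f} m (true {z0_true:.0f} m)")
```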
NASA Astrophysics Data System (ADS)
Callaghan, K. L.; Wickert, A. D.; Michael, L.; Fan, Y.; Miguez-Macho, G.; Mitrovica, J. X.; Austermann, J.; Ng, G. H. C.
2017-12-01
Groundwater accounts for 1.69% of the globe's water storage - nearly the same amount (1.74%) that is stored in ice caps and glaciers. The volume of water stored in this reservoir has changed over glacial-interglacial cycles as climate warms and cools, sea level rises and falls, ice sheets advance and retreat, surface topography isostatically adjusts, and patterns of moisture transport reorganize. During the last deglaciation, over the past 21000 years, all of these factors contributed to profound hydrologic change in the Americas. In North America, deglaciation generated proglacial lakes and wetlands along the isostatically-depressed margin of the retreating Laurentide Ice Sheet, along with extensive pluvial lakes in the desert southwest. In South America, changing patterns of atmospheric circulation caused regional and time-varying wetting and drying that led to fluctuations in water table levels. Understanding how groundwater levels change in response to these factors can aid our understanding of the effects of modern climate change on groundwater resources. Using a model that incorporates temporally evolving climate, topography (driven by glacial isostatic adjustment), ice extent, sea level, and spatially varying soil properties, we present our estimates of changes in total groundwater storage in the Americas over the past 21000 years. We estimate depth to water table at 500-year intervals and at a 30-arcsecond resolution. This allows a comparative assessment of changing groundwater storage volumes through time. The model has already been applied to the present day and has proven successful in estimating modern groundwater depths at a broad scale (Fan et al., 2013). We also assess changing groundwater-fed lakes, and compare model-estimated lake sizes and locations to paleorecords of these lakes. Our data- and model-integrated look back at the terminal Pleistocene provides an estimate of groundwater variability under extreme climate change. Preliminary results show changes in groundwater storage within the Americas on the order of tens of centimetres in units of equivalent global sea-level change.
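The conversion from a storage change to equivalent global sea level is a one-line calculation; the storage change used below is an arbitrary illustrative number, not a model result.

```python
# Equivalent global sea-level change = storage volume change / ocean area.
ocean_area_m2 = 3.61e14          # approximate global ocean area
storage_change_km3 = 1.0e5       # hypothetical change in groundwater storage

sea_level_equiv_m = storage_change_km3 * 1e9 / ocean_area_m2
print(f"equivalent sea-level change ~ {sea_level_equiv_m * 100:.0f} cm")
```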
NASA Astrophysics Data System (ADS)
Bosse, Anthony; Testor, Pierre; Mortier, Laurent; Beguery, Laurent; Bernardet, Karim; Taillandier, Vincent; d'Ortenzio, Fabrizio; Prieur, Louis; Coppola, Laurent; Bourrin, François
2013-04-01
In the last 5 years, an unprecedented effort in the sampling of the Northern Current (NC) has been carried out using gliders, which collected more than 50,000 profiles down to a maximum of 1000 m along a few repeated sections perpendicular to the French coast. Based on this dataset, this study presents a first quantitative picture of the NC over the 0-1000 m depth range. We show its mean temperature and salinity structure, characterized by the different water masses of the basin (Atlantic Water, Winter Intermediate Water, Levantine Intermediate Water and Western Mediterranean Deep Water), for each season and at different locations. Geostrophic currents are derived from the integration of the thermal-wind balance using the mean glider-estimated current during each dive as a reference. Estimates of the heat, salt, and volume transport are then computed in order to draw a heat and salt budget of the NC. The results show a strong seasonal variability due to the intense surface buoyancy loss in winter, resulting in offshore vertical mixing that deepens the mixed layer to several hundreds of meters in the whole basin and, in a particular area (the deep convection area), down to the sea floor. The horizontal density gradient intensifies in winter, leading to geostrophic currents that are more intense and more confined to the continental slope, and thus to an enhancement of the mesoscale activity (meandering, formation of eddies through baroclinic instability, ...). The mean transport estimate of the NC is found to be about 2-3 Sv greater than previous spurious estimates. The heat budget of the NC also provides an estimate of the mean across-shore heat/salt flux directly impacting the region in the Gulf of Lion where deep ocean convection, a key process in the thermohaline circulation of the Mediterranean Sea, can occur in winter.
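A minimal sketch of the referencing procedure follows, assuming a synthetic cross-stream density gradient: integrate the thermal-wind shear vertically, then shift the profile so its depth average matches the glider-derived depth-averaged current.

```python
import numpy as np

# Thermal wind: dv/dz = -(g / (rho0 * f)) * drho/dx, referenced to the glider DAC.
g, rho0, f = 9.81, 1029.0, 1.0e-4
z = np.linspace(0.0, -1000.0, 101)                 # depth (m), surface to 1000 m
drho_dx = -2.0e-6 * np.exp(z / 300.0)              # synthetic cross-stream gradient (kg m-4)

shear = -(g / (rho0 * f)) * drho_dx                # dv/dz (s^-1)
v_rel = np.concatenate([[0.0], np.cumsum(0.5 * (shear[1:] + shear[:-1]) * np.diff(z))])
v_dac = -0.15                                      # depth-averaged current from the glider (m/s), assumed
v_abs = v_rel + (v_dac - v_rel.mean())             # shift so the depth average equals the DAC
print(f"surface velocity ~ {v_abs[0]:.2f} m/s")
```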
NASA Astrophysics Data System (ADS)
Pering, Tom D.; Tamburello, Giancarlo; McGonigle, Andrew J. S.; Aiuppa, Alessandro; James, Mike R.; Lane, Steve J.; Sciotto, Mariangela; Cannata, Andrea; Patanè, Domenico
2014-05-01
During rapid strombolian activity observed at the Bocca Nuova (BN) crater of Mt Etna on 27 July 2012, ultraviolet cameras were used to measure SO2 emissions from the active vent over ≈30 minutes of activity. This resulted in the first determination of SO2 masses for strombolian activity at Etna, with individual bursts of ≈0.1-14 kg. By combining this with Multi-GAS measurements of gas ratios in the BN plume, we estimate a total gas mass for individual bursts of ≈0.2-165 kg. By calculating the degassing paths of typical H2O and CO2 contents for Etnean magmas and matching these with the measured CO2/SO2 ratio of ≈3, we estimate that gas decouples from the melt at depths of 0.5-6.2 km. Statistical analysis of the repose time between bursts showed an average interval of ≈3-5 s with a maximum of ≈45 s. Plotting the repose time following bursts against their gas masses indicates that larger events were not followed rapidly by a subsequent event; the subsequent event also always had a significant emission speed, i.e., following larger events there was a minimum wait period and a minimum emission speed for the subsequent burst. This could be the result of a number of different processes or effects: 1) bubble coalescence and the consequent faster rise of larger gas masses, 2) the coalescence of ascending Taylor bubbles (slugs), 3) an atmospheric transport effect related to changes in magma level, and 4) the partial collapse of a foam or a form of trap-and-release mechanism. Subsequent analysis of the fluid dynamics was performed using several numerical models, including: Del Bello et al. (2012) to estimate magma and conduit parameters, Seyfried and Freundt (2000) with Llewellin et al. (2012) to estimate where the transition to full slug flow occurs, and Nogueira et al. (2006) for the wake length of slugs. The use of these models in combination with the James et al. (2008) dynamic slug model suggests that coalescence between gas masses, reasonably assumed to be slugs, occurs more frequently in the upper ≈100 m of the conduit, where gas expansion becomes significant. The depth at which the transition to slug flow occurs is similarly shallow. Comparison of individual burst events, over a range of lags, with filtered seismic displacement data from the EBCN seismic station of the INGV network demonstrated no correlation with the maximum peak-to-peak amplitude in the vertical component. However, a tentative (due to the discrete 10 minute period used with omission of anomalous data) correlation of r² = 0.88 exists between the sum of burst mass in a minute and the equivalent RMS when offset by a lag of 2 minutes. Considering the approximate rise speed of gas masses, the location of the gas at the time of correlation is estimated to be at a depth of less than ≈250 m.
Emittance Theory for Cylindrical Fiber Selective Emitter
NASA Technical Reports Server (NTRS)
Chubb, Donald L.
1998-01-01
A fibrous rare earth selective emitter is approximated as an infinitely long cylinder. The spectral emittance, ε_λ, is obtained by solving the radiative transfer equations with appropriate boundary conditions and uniform temperature. For optical depths K_R = α_λR greater than 1, where α_λ is the extinction coefficient and R is the cylinder radius, the spectral emittance is nearly at its maximum value. There is an optimum cylinder radius, R_opt, for maximum emitter efficiency, η_E. Values for R_opt are strongly dependent on the number of emission bands of the material. The optimum radius decreases slowly with increasing emitter temperature, while the maximum efficiency and useful radiated power increase rapidly with increasing temperature.
NASA Astrophysics Data System (ADS)
Kim, R. S.; Durand, M. T.; Li, D.; Baldo, E.; Margulis, S. A.; Dumont, M.; Morin, S.
2017-12-01
This paper presents a newly proposed snow depth retrieval approach for mountainous deep snow using airborne multifrequency passive microwave (PM) radiance observations. In contrast to previous snow depth estimation using satellite PM radiance assimilation, the newly proposed method uses a single-flight observation and deploys snow hydrologic models. This is promising because satellite-based retrieval methods have difficulty estimating snow depth due to their coarse resolution and computational effort. The approach consists of a particle filter using combinations of multiple PM frequencies and a multi-layer snow physical model (i.e., Crocus) to resolve melt-refreeze crusts. The method was applied over the NASA Cold Land Processes Experiment (CLPX) area in Colorado during 2002 and 2003. Results showed a significant improvement over the prior snow depth estimates and a capability to reduce the prior snow depth biases. When applying our snow depth retrieval algorithm using a combination of four PM frequencies (10.7, 18.7, 37.0 and 89.0 GHz), the RMSE values were reduced by 48% at the snow depth transect sites where forest density was less than 5%, despite deep snow conditions. The method displayed a sensitivity to different combinations of frequencies, model stratigraphy (i.e., different numbers of layers in the snow physical model) and estimation methods (particle filter and Kalman filter). The prior RMSE values at the forest-covered areas were reduced by 37-42% even in the presence of forest cover.
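The particle-filter update at the core of such a retrieval can be sketched in a few lines; the linear brightness-temperature "model" below is a toy stand-in for the radiative-transfer and Crocus chain, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
particles = rng.normal(1.5, 0.5, 500)                  # prior ensemble of snow depths (m)

def simulate_tb(depth):
    # Toy brightness-temperature model (K); a real retrieval uses a snow RT model.
    return 260.0 - 25.0 * depth

tb_obs, tb_sigma = 215.0, 3.0                          # observed Tb and its error (K), assumed
misfit = simulate_tb(particles) - tb_obs
weights = np.exp(-0.5 * (misfit / tb_sigma) ** 2)      # Gaussian likelihood
weights /= weights.sum()

posterior_mean = np.sum(weights * particles)
print(f"posterior snow depth ~ {posterior_mean:.2f} m")
```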
Petrologic Constraints on Magma Plumbing Systems Beneath Hawaiian Volcanoes
NASA Astrophysics Data System (ADS)
Li, Y.; Peterman, K. J.; Scott, J. L.; Barton, M.
2016-12-01
We have calculated the pressures of partial crystallization of basaltic magmas from Hawaii using a petrological method. A total of 1576 major oxide analyses of glasses from four volcanoes (Kilauea and the Puna Ridge, Loihi, Mauna Loa, and Mauna Kea, on the Big Island) were compiled and used as input data. Glasses represent quenched liquid compositions and are ideal for calculation of pressures of partial crystallization. The results were filtered to exclude samples that yielded unrealistically high errors associated with the calculated pressure or a negative pressure, and to exclude samples with non-basaltic compositions. Calculated pressures were converted to depths of partial crystallization. The majority (68.2%) of pressures for the shield-stage subaerial volcanoes Kilauea, Mauna Loa, and Mauna Kea fall in the range 0-140 MPa, corresponding to depths of 0-5 km. Glasses from the Puna Ridge yield pressures ranging from 18 to 126 MPa and are virtually identical to pressures determined from glasses from Kilauea (0 to 129 MPa). These results are consistent with the presence of magma reservoirs at depths of 0-5 km beneath the large shield volcanoes. The inferred depth of the magma reservoir beneath the summit of Kilauea (average = 1.8 km, maximum = 5 km) agrees extremely well with depths (2-6 km) estimated from seismic studies. The results for Kilauea and Mauna Kea indicate that significant partial crystallization also occurs beneath the summit reservoirs at depths up to 11 km. These results are consistent with seismic evidence for the presence of a magma reservoir at 8-11 km beneath Kilauea at the base of the volcanic pile. The results for Loihi indicate crystallization at higher average pressures (100-400 MPa) and depths (3-14 km) than the large shield volcanoes, suggesting that the plumbing system is not yet fully developed, and that Hawaiian volcanic plumbing systems evolve over time.
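The pressure-to-depth conversion is straightforward under an assumed uniform crustal density (the study's actual density model may differ):

```python
# Convert a crystallization pressure to depth, assuming lithostatic pressure P = rho * g * z.
rho, g = 2800.0, 9.81            # kg/m^3, m/s^2 (assumed crustal density and gravity)
for p_mpa in (18.0, 126.0, 140.0, 400.0):
    depth_km = p_mpa * 1e6 / (rho * g) / 1e3
    print(f"{p_mpa:6.0f} MPa  ->  {depth_km:4.1f} km")
```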
On the existence of maximum likelihood estimates for presence-only data
Hefley, Trevor J.; Hooten, Mevin B.
2015-01-01
It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.
Microzooplankton biomass distribution in Terra Nova Bay, Ross Sea (Antarctica)
NASA Astrophysics Data System (ADS)
Fonda Umani, S.; Monti, M.; Nuccio, C.
1998-11-01
This work describes the spatial and vertical distribution of microzooplankton (20-200 μm) abundance and biomass in the upper layers (0-100 m), collected during the first Italian oceanographic expedition in Antarctica (1987/1988) in Terra Nova Bay (Ross Sea). Biomass was estimated by using biovolume calculations and literature conversion factors. Sampling was carried out at three depths: surface, 50 and 100 m. The dominant taxa were tintinnid ciliates, ciliates other than tintinnids, larvae of micrometazoa and heterotrophic dinoflagellates. The abundance of the total microplankton fraction had its absolute maximum in the center of Terra Nova Bay at the surface, with 31,042 ind. dm⁻³. The areal and vertical distribution of heterotrophic microplankton biomass differs from that of abundance. On the basis of hydrological conditions, phytoplankton composition and biomass, and microzooplankton biomass and structure, it is possible to identify three groups of stations: (1) northern coastal stations (intermediate chlorophyll maxima, microphytoplankton prevalence, low microzooplankton biomass); (2) central stations (high surface chlorophyll, nanoplankton prevalence, high abundance of microzooplankton); (3) northern stations (deeper pycnocline, nanoplankton prevalence, high microzooplankton biomass at intermediate depths).
Saturn meteorology - A diagnostic assessment of thin-layer configurations for the zonal flow
NASA Technical Reports Server (NTRS)
Allison, M.; Stone, P. H.
1983-01-01
Voyager imaging, infrared, and radio observations of Saturn have recently been interpreted by Smith et al. (1982) as an indication that the jet streams observed at the cloud tops extend to depths greater than the 10,000-bar level. This analysis assumes a maximum latitudinal temperature contrast of a few percent, a mean atmospheric rotation rate at depth given by Saturn's radio period, and no variation with latitude of the bottom pressure level of the zonal flow system. These assumptions are not, however, firmly constrained by observation. Diagnostic analysis of plausible alternative configurations for Saturn's atmospheric structure demonstrates that a thin weather-layer system (confined at mid to high latitudes to levels above 200 bar) cannot be excluded by any of the available observations. A quantitative estimate of the effects of moisture condensation (including the differentiation of mean molecular weight) suggests that these might provide the buoyancy contrasts necessary to support a thin-layer flow, provided that Saturn's outer envelope is enriched approximately 10 times in water abundance relative to a solar-composition atmosphere and strongly differentiated with latitude at the condensation level.
Target-depth estimation in active sonar: Cramer-Rao bounds for a bilinear sound-speed profile.
Mours, Alexis; Ioana, Cornel; Mars, Jérôme I; Josso, Nicolas F; Doisy, Yves
2016-09-01
This paper develops a localization method to estimate the depth of a target in the context of active sonar at long ranges. The target depth is tactical information for both strategy and classification purposes. The Cramer-Rao lower bounds for the target position, in range and depth, are derived for a bilinear sound-speed profile. The influence of sonar parameters on the standard deviations of the target range and depth is studied. A localization method based on ray back-propagation with a probabilistic approach is then investigated. Monte-Carlo simulations applied to a summer Mediterranean sound-speed profile are performed to evaluate the efficiency of the estimator. The method is finally validated on data from an experimental tank.
Erosion of aluminum 6061-T6 under cavitation attack in mineral oil and water
NASA Technical Reports Server (NTRS)
Rao, B. C. S.; Buckley, D. H.
1985-01-01
Studies of the erosion of aluminum 6061-T6 under cavitation attack in distilled water, ordinary tap water and a viscous mineral oil are presented. The mean depth of penetration for the mineral oil was about 40 percent of that for water at the end of a 40 min test. The mean depth of penetration and its rate did not differ significantly for distilled and tap water. The mean depth of penetration rate for both distilled and tap water increased to a maximum and then decreased with test duration, while that for mineral oil had a maximum during the initial period. The ratio h/2a of the pit depth h to the pit diameter 2a varied from 0.04 to 0.13 in water and from 0.06 to 0.20 in mineral oil. Scanning electron microscopy indicates that the pits are initially formed over the grain boundaries and precipitates while the surface grains are deformed under cavitation attack.
Estimation of optimal nasotracheal tube depth in adult patients.
Ji, Sung-Mi
2017-12-01
The aim of this study was to estimate the optimal depth of nasotracheal tube placement. We enrolled 110 patients scheduled to undergo oral and maxillofacial surgery, requiring nasotracheal intubation. After intubation, the depth of tube insertion was measured. The neck circumference and distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch were measured. To estimate optimal tube depth, correlation and regression analyses were performed using clinical and anthropometric parameters. The mean tube depth was 28.9 ± 1.3 cm in men (n = 62), and 26.6 ± 1.5 cm in women (n = 48). Tube depth significantly correlated with height (r = 0.735, P < 0.001). Distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch correlated with depth of the endotracheal tube (r = 0.363, r = 0.362, and r = 0.546, P < 0.05). The tube depth also correlated with the sum of these distances (r = 0.646, P < 0.001). We devised the following formula for estimating tube depth: 19.856 + 0.267 × sum of the three distances (R² = 0.432, P < 0.001). The optimal tube depth for nasotracheally intubated adult patients correlated with height and sum of the distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch. The proposed equation would be a useful guide to determine optimal nasotracheal tube placement.
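The proposed regression is simple to apply; the sketch below encodes the published coefficients, with made-up example distances (in cm):

```python
# Tube depth (cm) = 19.856 + 0.267 * (nares-tragus + tragus-mandible angle + mandible angle-sternal notch)
def estimate_tube_depth_cm(nares_tragus, tragus_angle, angle_sternal):
    return 19.856 + 0.267 * (nares_tragus + tragus_angle + angle_sternal)

# Example distances are hypothetical, not patient data.
print(f"estimated depth: {estimate_tube_depth_cm(11.0, 7.5, 15.0):.1f} cm")
```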
Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.
Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît
2011-01-01
Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point on the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms providing an estimation of the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach is characterized by a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes high-resolution shape description of large macromolecule surfaces possible. Experimental results show that, compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
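The surface-based idea reduces to running Dijkstra's algorithm over the SES vertex graph, seeded from the vertices lying on the convex hull. A toy sketch with a hand-made graph (not a real SES mesh) is shown below.

```python
import heapq

# vertex -> [(neighbour, edge length), ...]; hull vertices start at depth 0.
edges = {
    "hull_a": [("rim", 1.0)],
    "hull_b": [("rim", 1.2)],
    "rim":    [("hull_a", 1.0), ("hull_b", 1.2), ("pocket", 2.0)],
    "pocket": [("rim", 2.0), ("deep", 1.5)],
    "deep":   [("pocket", 1.5)],
}
depth = {v: float("inf") for v in edges}
heap = []
for seed in ("hull_a", "hull_b"):          # convex-hull vertices
    depth[seed] = 0.0
    heapq.heappush(heap, (0.0, seed))

while heap:                                 # standard Dijkstra with a priority queue
    d, v = heapq.heappop(heap)
    if d > depth[v]:
        continue
    for w, length in edges[v]:
        if d + length < depth[w]:
            depth[w] = d + length
            heapq.heappush(heap, (d + length, w))

print(depth)   # 'deep' ends up with travel depth 1.0 + 2.0 + 1.5 = 4.5
```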
Tree-Ring Widths and Snow Cover Depth in High Tauern
NASA Astrophysics Data System (ADS)
Falarz, Malgorzata
2017-12-01
The aim of the study is to examine the correlation between Norway spruce tree-ring widths and snow cover depth in the High Tauern mountains. The average standardized tree-ring width indices for Norway spruce published by Bednarz and Niedzwiedz (2006) were taken into account. Increment cores were collected from 39 Norway spruces growing in the High Tauern near the upper limit of the forest at an altitude of 1700-1800 m, 3 km from the meteorological station at Sonnblick. Moreover, the maximum snow cover depth at Sonnblick (3105 m a.s.l.) for each winter season in the period from 1938/39 to 1994/95 (57 winter seasons) was taken into account. The main results of the research are as follows: (1) tree-ring width in a given year does not reveal a statistically significant dependency on the maximum snow cover depth of the winter season that ended in that year; (2) however, the tested relationship is statistically significant when correlating the tree-ring width in a given year with the maximum snow cover depth of the previous year's season. The correlation coefficient for the entire period of the study is not very high (r = 0.27) but is statistically significant at the 0.05 level; (3) the described relationship is not stable over time. Moving 30-year correlations showed no significant dependencies until 1942 and after 1982 (probably due to the so-called divergence phenomenon). However, during the period 1943-1981 the correlation coefficients for moving 30-year periods are statistically significant and range from 0.37 to 0.45; (4) the correlation coefficient between real and calibrated (on the basis of the regression equation) values of maximum snow cover depth is statistically significant for the calibration period and not significant for the verification period; (5) due to the rather short period of statistically significant correlations and the weak dependencies, a reconstruction of snow cover at Sonnblick for the period before regular measurements does not seem feasible.
On the meridional surface profile of the Gulf Stream at 55 deg W
NASA Technical Reports Server (NTRS)
Hallock, Zachariah R.; Teague, William J.
1995-01-01
Nine-month records from nine inverted echo sounders (IESs) are analyzed to describe the mean baroclinic Gulf Stream at 55 deg W. IES acoustic travel times are converted to thermocline depth, which is optimally interpolated. Kinematic and dynamic parameters (Gulf Stream meridional position, velocity, and vorticity) are calculated. Primary Gulf Stream variability is attributed to meandering and changes in direction. A mean, stream-coordinate (relative to the Gulf Stream's instantaneous position and direction) meridional profile is derived and compared with results presented by other investigators. The mean velocity is estimated at 0.84 m/s directed 14 deg to the right of eastward, and the thermocline (12 C) drops 657 m (north to south), corresponding to a baroclinic rise of the surface of 0.87 m. The effect of Gulf Stream curvature on temporal mean profiles is found to be of minimal importance. The derived downstream current profile is well represented by a Gaussian function and is about 190 km wide where it crosses zero. Surface baroclinic transport is estimated to be 8.5 x 10(exp 4) sq m/s, and the maximum shear (flanking the maximum) is 1.2 x 10(exp -5). Results compare well with other in situ observational results from the same time period. On the other hand, analyses (by others) of concurrent satellite altimetry (Geosat) suggest a considerably narrower, more intense mean Gulf Stream.
The depth estimation of 3D face from single 2D picture based on manifold learning constraints
NASA Astrophysics Data System (ADS)
Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia
2018-04-01
The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE approach based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; selecting the optimal subset to reconstruct the 3D face depth information can greatly reduce the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance between the 83 feature points of the image to be estimated and the feature-point information of each cluster center before dimension reduction is calculated, and the category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimation results; thus the computational complexity is greatly reduced. Compared with the traditional traversal search estimation method, the error rate of the proposed method is reduced by 0.49, and the number of searches decreases with the change of the category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
Spatial statistical network models for stream and river temperature in New England, USA
NASA Astrophysics Data System (ADS)
Detenbeck, Naomi E.; Morrison, Alisa C.; Abele, Ralph W.; Kopp, Darin A.
2016-08-01
Watershed managers are challenged by the need for predictive temperature models with sufficient accuracy and geographic breadth for practical use. We described thermal regimes of New England rivers and streams based on a reduced set of metrics for the May-September growing season (July or August median temperature, diurnal rate of change, and magnitude and timing of growing season maximum) chosen through principal component analysis of 78 candidate metrics. We then developed and assessed spatial statistical models for each of these metrics, incorporating spatial autocorrelation based on both distance along the flow network and Euclidean distance between points. Calculation of spatial autocorrelation based on travel or retention time in place of network distance yielded tighter-fitting Torgegrams with less scatter but did not improve overall model prediction accuracy. We predicted monthly median July or August stream temperatures as a function of median air temperature, estimated urban heat island effect, shaded solar radiation, main channel slope, watershed storage (percent lake and wetland area), percent coarse-grained surficial deposits, and presence or maximum depth of a lake immediately upstream, with an overall root-mean-square prediction error of 1.4 and 1.5°C, respectively. Growing season maximum water temperature varied as a function of air temperature, local channel slope, shaded August solar radiation, imperviousness, and watershed storage. Predictive models for July or August daily range, maximum daily rate of change, and timing of growing season maximum were statistically significant but explained a much lower proportion of variance than the above models (5-14% of total).
Boehme, Lars; Thompson, Dave; Fedak, Mike; Bowen, Don; Hammill, Mike O.; Stenson, Garry B.
2012-01-01
Predicting how marine mammal populations respond to habitat changes will be essential for developing conservation management strategies in the 21st century. Responses to previous environmental change may be informative in the development of predictive models. Here we describe the likely effects of the last ice age on grey seal population size and distribution. We use satellite telemetry data to define grey seal foraging habitat in terms of the temperature and depth ranges exploited by the contemporary populations. We estimate the available extent of such habitat in the North Atlantic at present (between 1.42·10⁶ km² and 2.07·10⁶ km²) and at the last glacial maximum (between 4.74·10⁴ km² and 2.11·10⁵ km²); taking account of glacial and seasonal sea-ice coverage, estimated reductions of sea-level (123 m) and sea surface temperature hind-casts. Most of the extensive continental shelf waters (North Sea, Baltic Sea and Scotian Shelf), currently supporting >95% of grey seals, were unavailable during the last glacial maximum. A combination of lower sea-level and extensive ice-sheets, massively increased seasonal sea-ice coverage and southerly extent of cold water would have pushed grey seals into areas with no significant shelf waters. The habitat during the last glacial maximum might have been as small as 3% of today's extent and grey seal populations may have fallen to similarly low numbers. An alternative scenario involving a major change to a pelagic or bathy-pelagic foraging niche cannot be discounted. However, hooded seals currently dominate that niche and may have excluded grey seals from such habitat. If as seems likely, the grey seal population fell to very low levels it would have remained low for several thousand years before expanding into current habitats over the past 12,000 years or so. PMID:23300843
NASA Astrophysics Data System (ADS)
Harbitz, C. B.; Glimsdal, S.; Løvholt, F.; Orefice, S.; Romano, F.; Brizuela, B.; Lorito, S.; Hoechner, A.; Babeyko, A. Y.
2016-12-01
The standard way of estimating tsunami inundation is by applying numerical depth-averaged shallow-water run-up models. However, for a regional Probabilistic Tsunami Hazard Assessment (PTHA), applying such inundation models may be too time-consuming. A faster, yet less accurate procedure, is to relate the near-shore surface elevations at offshore points to maximum shoreline water levels by using a set of amplification factors based on the characteristics of the incident wave and the bathymetric slope. The surface elevation at the shoreline then acts as a rough approximation for the maximum inundation height or run-up height along the shoreline. An amplification-factor procedure based on a limited set of idealized broken shoreline segments has previously been applied to estimate the maximum inundation heights globally. Here, we present a study where this technique is developed further, by taking into account the local bathymetric profiles. We extract a large number of local bathymetric transects over a significant part of the North East Atlantic, the Mediterranean and connected seas (NEAM) region. For each bathymetric transect, we compute the wave amplification from an offshore control point to points close to the shoreline using a linear shallow-water model for waves of different period and polarity with a sinusoidal pulse wave as input. The amplification factors are then tabulated. We present maximum water levels from the amplification factor method, and compare these with results from conventional inundation models. Finally, we demonstrate how the amplification factor method can be convolved with PTHA results to provide regional tsunami hazard maps. This work has been supported by the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement 603839 (Project ASTARTE), and the TSUMAPS-NEAM Project (http://www.tsumapsneam.eu/), co-financed by the European Union Civil Protection Mechanism, Agreement Number: ECHO/SUB/2015/718568/PREV26.
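Conceptually, convolving offshore PTHA output with the tabulated factors amounts to rescaling the offshore-amplitude axis of a hazard curve, which is a simplification of the full scenario-by-scenario convolution; all numbers in the sketch below are illustrative.

```python
import numpy as np

# Offshore exceedance curve at a control point, and a tabulated amplification
# factor for the local transect and wave period (both hypothetical).
offshore_amp = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])        # m
annual_rate  = np.array([1e-1, 5e-2, 1e-2, 3e-3, 5e-4, 5e-5])  # exceedances / yr
amp_factor = 2.6

shoreline_level = amp_factor * offshore_amp                    # approx. max water level at the shore
for h, r in zip(shoreline_level, annual_rate):
    print(f"max water level >= {h:4.1f} m : {r:.1e} / yr")
```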
NASA Astrophysics Data System (ADS)
Glimsdal, Sylfest; Løvholt, Finn; Bonnevie Harbitz, Carl; Orefice, Simone; Romano, Fabrizio; Brizuela, Beatriz; Lorito, Stefano; Hoechner, Andreas; Babeyko, Andrey
2017-04-01
The standard way of estimating tsunami inundation is by applying numerical depth-averaged shallow-water run-up models. However, for a regional Probabilistic Tsunami Hazard Assessment (PTHA), applying such inundation models may be too time-consuming. A faster, yet less accurate procedure, is to relate the near-shore surface elevations at offshore points to maximum shoreline water levels by using a set of amplification factors based on the characteristics of the incident wave and the bathymetric slope. The surface elevation at the shoreline then acts as a rough approximation for the maximum inundation height or run-up height along the shoreline. An amplification-factor procedure based on a limited set of idealized broken shoreline segments has previously been applied to estimate the maximum inundation heights globally. Here, we present a study where this technique is developed further, by taking into account the local bathymetric profiles. We extract a large number of local bathymetric transects over a significant part of the North East Atlantic, the Mediterranean and connected seas (NEAM region). For each bathymetric transect, we compute the wave amplification from an offshore control point to points close to the shoreline using a linear shallow-water model for waves of different period and polarity with a sinusoidal pulse wave as input. The amplification factors are then tabulated. We present maximum water levels from the amplification factor method, and compare these with results from conventional inundation models. Finally, we demonstrate how the amplification factor method can be convolved with PTHA results to provide regional tsunami hazard maps. This work has been supported by the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement 603839 (Project ASTARTE), and the TSUMAPS-NEAM Project (http://www.tsumapsneam.eu/), co-financed by the European Union Civil Protection Mechanism, Agreement Number: ECHO/SUB/2015/718568/PREV26.
Global distribution of plant-extractable water capacity of soil
Dunne, K.A.; Willmott, C.J.
1996-01-01
Plant-extractable water capacity of soil is the amount of water that can be extracted from the soil to fulfill evapotranspiration demands. It is often assumed to be spatially invariant in large-scale computations of the soil-water balance. Empirical evidence, however, suggests that this assumption is incorrect. In this paper, we estimate the global distribution of the plant-extractable water capacity of soil. A representative soil profile, characterized by horizon (layer) particle size data and thickness, was created for each soil unit mapped by FAO (Food and Agriculture Organization of the United Nations)/Unesco. Soil organic matter was estimated empirically from climate data. Plant rooting depths and ground coverages were obtained from a vegetation characteristic data set. At each 0.5° × 0.5° grid cell where vegetation is present, unit available water capacity (cm water per cm soil) was estimated from the sand, clay, and organic content of each profile horizon, and integrated over horizon thickness. Summation of the integrated values over the lesser of profile depth and root depth produced an estimate of the plant-extractable water capacity of soil. The global average of the estimated plant-extractable water capacities of soil is 8.6 cm (Greenland, Antarctica and bare soil areas excluded). Estimates are less than 5, 10, and 15 cm over approximately 30, 60, and 89 per cent of the area, respectively. Estimates reflect the combined effects of soil texture, soil organic content, and plant root depth or profile depth. The most influential and uncertain parameter is the depth over which the plant-extractable water capacity of soil is computed, which is usually limited by root depth. Soil texture exerts a lesser, but still substantial, influence. Organic content, except where concentrations are very high, has relatively little effect.
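The horizon integration can be sketched as follows, with invented horizon properties; available water is accumulated only down to the lesser of root depth and profile depth.

```python
# Each horizon: (thickness in cm, unit available water capacity in cm water per cm soil).
horizons = [
    (20.0, 0.12),
    (30.0, 0.10),
    (50.0, 0.08),
    (100.0, 0.06),
]
root_depth_cm = 120.0     # hypothetical rooting depth for this grid cell

pawc_cm, depth_so_far = 0.0, 0.0
for thickness, unit_awc in horizons:
    usable = max(0.0, min(thickness, root_depth_cm - depth_so_far))  # clip at root depth
    pawc_cm += usable * unit_awc
    depth_so_far += thickness

print(f"plant-extractable water capacity ~ {pawc_cm:.1f} cm")
```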
EAS development curve at energies of 10^16-10^18 eV measured by optical Cerenkov light
NASA Technical Reports Server (NTRS)
Hara, T.; Daigo, M.; Honda, M.; Kamata, K.; Kifune, T.; Mizumoto, Y.; Nagano, M.; Ohno, Y.; Tanahasni, G.
1985-01-01
The data on optical Cerenkov light from extensive air showers observed at core distances greater than 1 km at Akeno are reexamined. Applying the new simulation results, shower development curves were constructed for individual events. For showers of 10^17 eV, the average depth of shower maximum is determined to be 660 ± 40 g/cm². The average development curve is found to be well described by a Gaisser-Hillas shower development function with the above depth of shower maximum.
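For reference, the Gaisser-Hillas longitudinal profile mentioned above is usually written in the following standard form (the textbook parameterization; the exact form and parameter values used by the authors are not given in the abstract):

\[
N(X) = N_{\max}\left(\frac{X - X_0}{X_{\max} - X_0}\right)^{(X_{\max} - X_0)/\lambda} \exp\!\left(\frac{X_{\max} - X}{\lambda}\right),
\]

where X is the atmospheric slant depth in g/cm², X_max is the depth of shower maximum (about 660 g/cm² here), N_max is the shower size at maximum, and X_0 and λ are shape parameters, so that N(X_max) = N_max.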
The Vertical Dust Profile over Gale Crater
NASA Astrophysics Data System (ADS)
Guzewich, S.; Newman, C. E.; Smith, M. D.; Moores, J.; Smith, C. L.; Moore, C.; Richardson, M. I.; Kass, D. M.; Kleinboehl, A.; Martin-Torres, F. J.; Zorzano, M. P.; Battalio, J. M.
2017-12-01
Regular joint observations of the atmosphere over Gale Crater from the orbiting Mars Reconnaissance Orbiter/Mars Climate Sounder (MCS) and Mars Science Laboratory (MSL) Curiosity rover allow us to create a coarse, but complete, vertical profile of dust mixing ratio from the surface to the upper atmosphere. We split the atmospheric column into three regions: the planetary boundary layer (PBL) within Gale Crater that is directly sampled by MSL (typically extending from the surface to 2-6 km in height), the region of atmosphere sampled by MCS profiles (typically 25-80 km above the surface), and the region of atmosphere between these two layers. Using atmospheric optical depth measurements from the Rover Environmental Monitoring System (REMS) ultraviolet photodiodes (in conjunction with MSL Mast Camera solar imaging), line-of-sight opacity measurements with the MSL Navigation Cameras (NavCam), and an estimate of the PBL depth from the MarsWRF general circulation model, we can directly calculate the dust mixing ratio within the Gale Crater PBL and then solve for the dust mixing ratio in the middle layer above Gale Crater but below the atmosphere sampled by MCS. Each atmospheric layer has a unique seasonal cycle of dust opacity, with Gale Crater's PBL reaching a maximum in dust mixing ratio near Ls = 270° and a minimum near Ls = 90°. The layer above Gale Crater, however, has a seasonal cycle that closely follows the global opacity cycle and reaches a maximum near Ls = 240° and exhibits a local minimum (associated with the "solsticial pauses") near Ls = 270°. Knowing the complete vertical profile also allows us to determine the frequency of high-altitude dust layers above Gale, and whether such layers truly exhibit the maximum dust mixing ratio within the entire vertical column. We find that 20% of MCS profiles contain an "absolute" high-altitude dust layer, i.e., one in which the dust mixing ratio within the high-altitude dust layer is the maximum dust mixing ratio in the vertical column of atmosphere over Gale Crater.
Twenty years of balloon-borne tropospheric aerosol measurements at Laramie, Wyoming
NASA Technical Reports Server (NTRS)
Hofmann, David J.
1993-01-01
The paper examines the tropospheric aerosol record obtained over the period 1971 to 1990, during which high-altitude balloons with optical particle counters were launched at Laramie, Wyoming, in a long-term study of the stratospheric sulfate aerosol layer. All aerosol particle size ranges display pronounced seasonal variations, with the condensation nuclei concentration and the optically active component showing a summer maximum throughout the troposphere. Mass estimates, assuming spherical sulfate particles, indicate an average column mass between altitudes of 2.5 and 10 km of about 4 and 16 mg/sq m in winter and summer, respectively. Calculated optical depths vary between 0.01 and 0.04 from winter to summer; the estimated mass scattering cross section is about 3 sq m/g throughout the troposphere. There is evidence for a decreasing trend of 1.6-1.8 percent/yr in the optically active tropospheric aerosol over the past 20 yr, which may be related to a similar reduction in SO2 emission in the U.S. over this period.
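As a rough consistency check (an editorial back-of-envelope calculation, not part of the original study), the reported optical depths follow from the column mass M_col and the mass scattering cross section α quoted above, since τ ≈ α·M_col:

\[
\tau_{\mathrm{winter}} \approx 3\ \mathrm{m^2\,g^{-1}} \times 4\times10^{-3}\ \mathrm{g\,m^{-2}} \approx 0.012,
\qquad
\tau_{\mathrm{summer}} \approx 3\ \mathrm{m^2\,g^{-1}} \times 16\times10^{-3}\ \mathrm{g\,m^{-2}} \approx 0.048,
\]

which roughly brackets the reported 0.01-0.04 range.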
Smoothing-Based Relative Navigation and Coded Aperture Imaging
NASA Technical Reports Server (NTRS)
Saenz-Otero, Alvar; Liebe, Carl Christian; Hunter, Roger C.; Baker, Christopher
2017-01-01
This project will develop efficient smoothing software for incremental estimation of the relative poses and velocities between multiple small spacecraft in a formation, and a small, long-range depth sensor based on coded aperture imaging that is capable of identifying other spacecraft in the formation. The smoothing algorithm will obtain the maximum a posteriori estimate of the relative poses between the spacecraft by using all available sensor information in the spacecraft formation. This algorithm will be portable between different satellite platforms that possess different sensor suites and computational capabilities, and will be adaptable in the case that one or more satellites in the formation become inoperable. It will obtain a solution that approaches an exact solution, as opposed to the linearized approximations typical of filtering algorithms. Thus, the algorithms developed and demonstrated as part of this program will enhance the applicability of small spacecraft to multi-platform operations, such as precisely aligned constellations and fractionated satellite systems.
A COMPARISON OF AEROSOL OPTICAL DEPTH SIMULATED USING CMAQ WITH SATELLITE ESTIMATES
Satellite data provide new opportunities to study the regional distribution of particulate matter. The aerosol optical depth (AOD), an estimate derived from the satellite-measured radiance, can be compared against model-derived estimates to provide an evaluation of the columnar ...
ERIC Educational Resources Information Center
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.
2016-01-01
The aim of this study is to determine variance difference between maximum likelihood and expected A posteriori estimation methods viewed from number of test items of aptitude test. The variance presents an accuracy generated by both maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…
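As a hedged illustration of the two estimators being compared (the 2PL model, item parameters, and standard normal prior below are assumptions for the sketch, not values from the study), maximum likelihood and expected a posteriori ability estimates for one examinee on a 40-item subtest can be computed as follows:

```python
# A minimal sketch of ML versus EAP ability estimation under a 2PL IRT model.
# Item parameters and the standard normal prior are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_log_likelihood(theta, responses, a, b):
    p = p_correct(theta, a, b)
    return -np.sum(responses * np.log(p) + (1.0 - responses) * np.log(1.0 - p))

def ml_estimate(responses, a, b):
    """Maximum likelihood ability estimate (bounded search over theta)."""
    return minimize_scalar(neg_log_likelihood, bounds=(-4.0, 4.0),
                           args=(responses, a, b), method="bounded").x

def eap_estimate(responses, a, b, grid=np.linspace(-4.0, 4.0, 161)):
    """Expected a posteriori ability estimate under a N(0, 1) prior."""
    log_like = np.array([-neg_log_likelihood(t, responses, a, b) for t in grid])
    post = np.exp(log_like - log_like.max()) * np.exp(-grid**2 / 2.0)
    post /= post.sum()
    return float(np.sum(grid * post))

rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, 40)      # discriminations for a 40-item subtest
b = rng.uniform(-2.0, 2.0, 40)     # difficulties
responses = (rng.random(40) < p_correct(0.5, a, b)).astype(float)  # simulee at theta = 0.5
print(ml_estimate(responses, a, b), eap_estimate(responses, a, b))
```

The posterior variance of the EAP estimate (np.sum((grid - eap)**2 * post)) is the kind of accuracy measure that studies of this sort contrast with the sampling variance of the ML estimate as test length changes.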
Evaluation of thermobarometry for spinel lherzolite fragments in alkali basalts
NASA Astrophysics Data System (ADS)
Ozawa, Kazuhito; Youbi, Nasrrddine; Boumehdi, Moulay Ahmed; McKenzie, Dan; Nagahara, Hiroko
2017-04-01
Geothermobarometry of solid fragments in kimberlite and alkali basalts, generally called "xenoliths", provides information on the thermal and chemical structure of the lithospheric and asthenospheric mantle, on the basis of which various chemical, thermal, and rheological models of the lithosphere have been constructed (e.g., Griffin et al., 2003; McKenzie et al., 2005; Ave Lallemant et al., 1980). Geothermobarometry of spinel-bearing peridotite fragments, which are frequently sampled from Phanerozoic provinces in various tectonic environments (Nixon and Davies, 1987), presents essential difficulties, and it is usually believed that appropriate barometers do not exist for them (O'Reilly et al., 1997; Medaris et al., 1999). Ozawa et al. (2016; EGU) proposed a method of geothermobarometry for spinel lherzolite fragments. They applied the method to mantle fragments in alkali basalts from the Bou Ibalhatene maars in the Middle Atlas, Morocco (Raffone et al., 2009; El Azzouzi et al., 2010; Witting et al., 2010; El Messbahi et al., 2015), and obtained a 0.5 GPa pressure difference (1.5-2.0 GPa) over a 100°C variation in temperature (950-1050°C). However, it is imperative to verify these results on the basis of completely independent data. Three types of independent information are available: (1) the time scale of solid fragment extraction, which may be provided by the kinetics of reactions induced by heating and/or decompression during entrapment in the host magma and transport to the Earth's surface (Smith, 1999), (2) the depth of host basalt formation, which may be provided by petrological and geochemical studies of the host basalts, and (3) the lithosphere-asthenosphere boundary depth, which may be estimated from geophysical observations. Of these, (3) has already been shown to be consistent with the results of Ozawa et al. (2016). Here we show that the estimated thermal structure just before fragment extraction is also fully supported by information of types (1) and (2). Spera (1984) reviewed various methods for estimating the ascent rate of mantle fragments in kimberlite and alkali basalt: one based on the fluid dynamics of transport of entrapped fragments, which uses the maximum fragment size and the magma viscosity to give a minimum estimate (Spera, 1980), and another that couples the residence depth of a fragment before entrapment in a magma with the time scale of heating by the magma. The depth of entrapment, however, is the least known parameter for spinel lherzolite. Because magmas loaded with solid fragments ascend nearly adiabatically, all fragments undergo the same heating and decompression history, differing only in entrapment depth and thus heating duration; from this, the residence depth just before extraction may be estimated if the ascent rate is known. Therefore, the extent of chemical and textural modification induced by heating and decompression provides an independent test of the pressure estimates. We have used several reactions for this purpose: (1) Mg-Fe exchange between spinel and olivine (Ozawa, 1983; 1984), (2) Ca zoning in olivine (Takahashi, 1980), (3) partial dissolution of clinopyroxene, (4) partial dissolution of spinel, and (5) formation of melt frozen as glass, which is related to (3) and (4). The depth of melt generation is constrained to be deeper than 70 km by modeling the trace element compositions of the host magmas using the methods of McKenzie and O'Nions (1991) and data from El Azzouzi et al. (2010).
The host magmas can be produced by melting the convecting upper mantle without requiring any input from the continental lithosphere. This is consistent with the positive gravity anomalies in NW Africa, which indicate shallow upwelling in this region that allows decompression melting owing to the thinner lithosphere beneath the Middle Atlas.
[Effect of gap size between tooth and restorative materials on microbiolism based caries in vitro].
Lu, Wen-bin; Li, Yun
2012-05-01
To evaluate the effect of gap size between tooth and restorative materials on microbiolism based caries in vitro. Tooth blocks made from caries-free human molars and composite resin blocks of the same size were selected and prepared. Tooth-resin pairs were mounted on a resin base with gap sizes of 0, 25, 50, 100, 190, and 250 µm, and a control group was treated with an adhesive system. Six experimental groups and one control group were included, with 8 samples per group and 56 samples in total. The samples were cultured using a 14-day sequential batch culture technique. The development of outer surface lesions and wall lesions was assessed with a confocal laser scanning microscope (CLSM) by measuring the maximum lesion depth, fluorescence area, and average fluorescence value, and the data were collected and statistically analyzed. The deposits at the tooth-restoration interface and the development of the carious lesions were observed by scanning electron microscopy (SEM). Most groups showed outer surface lesions and wall lesions under CLSM and SEM, except for 2 samples in the control group. There was no significant difference in the outer surface lesions (P > 0.05). The maximum lesion depth [(1145.37 ± 198.98) and (1190.12 ± 290.80) µm, respectively], maximum lesion length, fluorescence area, and average fluorescence value of the wall lesions in the 190 and 250 µm groups were significantly higher than in the 0, 25, 50, and 100 µm groups [maximum lesion depth (205.25 ± 122.61), (303.87 ± 118.80), (437.75 ± 154.88), and (602.87 ± 269.13) µm, respectively], P < 0.01. Demineralization became more severe as the gap size increased, whereas the maximum lesion depth, maximum lesion length, and fluorescence area of the wall lesions in the 0, 25, and 50 µm groups showed no significant differences. There was a close relationship between gap size and wall lesions when the gap at the tooth-composite resin interface exceeded 100 µm. The existence of a gap was the main factor influencing the development of microbiolism based caries lesions.
Flood damage curves for consistent global risk assessments
NASA Astrophysics Data System (ADS)
de Moel, Hans; Huizinga, Jan; Szewczyk, Wojtek
2016-04-01
Assessing potential damage of flood events is an important component in flood risk management. Determining direct flood damage is commonly done using depth-damage curves, which denote the flood damage that would occur at specific water depths per asset or land-use class. Many countries around the world have developed flood damage models using such curves, which are based on analysis of past flood events and/or on expert judgement. However, such damage curves are not available for all regions, which hampers damage assessments in those regions. Moreover, due to the different methodologies employed for the various damage models in different countries, damage assessments cannot be directly compared with each other, also obstructing supra-national flood damage assessments. To address these problems, a globally consistent dataset of depth-damage curves has been developed. This dataset contains damage curves depicting the percentage of damage as a function of water depth as well as maximum damage values for a variety of assets and land-use classes (i.e. residential, commercial, agriculture). Based on an extensive literature survey, concave damage curves have been developed for each continent, while differentiation in flood damage between countries is established by determining maximum damage values at the country scale. These maximum damage values are based on construction cost surveys from multinational construction companies, which provide a coherent set of detailed building cost data across dozens of countries. A consistent set of maximum flood damage values for all countries was computed using statistical regressions with socio-economic World Development Indicators from the World Bank. Further, based on insights from the literature survey, guidance is also given on how the damage curves and maximum damage values can be adjusted for specific local circumstances, such as urban vs. rural locations, use of specific building materials, etc. This dataset can be used for consistent supra-national scale flood damage assessments, and to guide assessments in countries where no damage model is currently available.
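To make the intended use concrete, the sketch below shows how such a depth-damage curve is typically applied: interpolate the damage fraction at the local water depth and scale it by the country- and class-specific maximum damage value. The curve points, maximum damage value, and exposed area are illustrative, not values from the dataset.

```python
# Hedged illustration of applying a depth-damage curve; all numbers below are made up.
import numpy as np

# Water depth (m) versus fraction of the maximum damage for one land-use class (concave).
curve_depth_m  = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0])
curve_fraction = np.array([0.00, 0.25, 0.40, 0.60, 0.75, 1.00])

def flood_damage(water_depth_m, max_damage_per_m2, area_m2):
    """Direct damage = damage fraction at this depth x maximum damage value x exposed area."""
    fraction = np.interp(water_depth_m, curve_depth_m, curve_fraction)
    return fraction * max_damage_per_m2 * area_m2

# Example: 1.5 m of water over 200 m2 of residential floor space valued at 600 per m2.
print(flood_damage(1.5, 600.0, 200.0))  # 0.5 * 600 * 200 = 60000
```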
van Tulder, Raphael; Laggner, Roberta; Kienbacher, Calvin; Schmid, Bernhard; Zajicek, Andreas; Haidvogel, Jochen; Sebald, Dieter; Laggner, Anton N; Herkner, Harald; Sterz, Fritz; Eisenburger, Philip
2015-04-01
In CPR, sufficient compression depth is essential. The American Heart Association ("at least 5 cm", AHA-R) and the European Resuscitation Council ("at least 5 cm, but not to exceed 6 cm", ERC-R) recommendations differ, and both are rarely achieved in practice. This study aims to investigate the effects of differing target depth instructions on the compression depth performance of professional and lay rescuers. 110 professional rescuers and 110 lay rescuers were randomized (1:1, 4 groups) to estimate the AHA-R or ERC-R depth on a paper sheet (on a given horizontal axis) using a pencil and to perform chest compressions according to AHA-R or ERC-R on a manikin. Distance estimation and compression depth were the outcome variables. Professional rescuers estimated the distance correctly according to AHA-R in 19/55 (34.5%) and to ERC-R in 20/55 (36.4%) cases (p=0.84). Professional rescuers achieved correct compression depth according to AHA-R in 39/55 (70.9%) and to ERC-R in 36/55 (65.4%) cases (p=0.97). Lay rescuers estimated the distance correctly according to AHA-R in 18/55 (32.7%) and to ERC-R in 20/55 (36.4%) cases (p=0.59). Lay rescuers achieved correct compression depth according to AHA-R in 39/55 (70.9%) and to ERC-R in 26/55 (47.3%) cases (p=0.02). Professional and lay rescuers have severe difficulties in correctly estimating distance on a sheet of paper. Professional rescuers achieve the AHA-R and ERC-R targets equally well. In lay rescuers, AHA-R was associated with significantly higher success rates. The inability to estimate distance could explain the failure to appropriately perform chest compressions. For teaching lay rescuers, the AHA-R with no upper limit of compression depth might be preferable. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Vertical Movement Patterns and Ontogenetic Niche Expansion in the Tiger Shark, Galeocerdo cuvier
Afonso, André S.; Hazin, Fábio H. V.
2015-01-01
Sharks are top predators in many marine ecosystems and can impact community dynamics, yet many shark populations are undergoing severe declines primarily due to overfishing. Obtaining species-specific knowledge on shark spatial ecology is important to implement adequate management strategies for the effective conservation of these taxa. This is particularly relevant concerning highly-mobile species that use wide home ranges comprising coastal and oceanic habitats, such as tiger sharks, Galeocerdo cuvier. We deployed satellite tags in 20 juvenile tiger sharks off northeastern Brazil to assess the effect of intrinsic and extrinsic factors on depth and temperature usage. Sharks were tracked for a total of 1184 d and used waters up to 1112 m in depth. The minimum temperature recorded equaled 4°C. All sharks had a clear preference for surface (< 5 m) waters but variability in depth usage was observed as some sharks used mostly shallow (< 60 m) waters whereas others made frequent incursions into greater depths. A diel behavioral shift was detected, with sharks spending considerably more time in surface (< 10 m) waters during the night. Moreover, a clear ontogenetic expansion in the vertical range of tiger shark habitat was observed, with generalized linear models estimating a ~4-fold increase in maximum diving depth from 150- to 300-cm size-classes. The time spent in the upper 5 m of the water column did not vary ontogenetically but shark size was the most important factor explaining the utilization of deeper water layers. Young-of-the-year tiger sharks seem to associate with shallow, neritic habitats but they progressively move into deeper oceanic habitats as they grow larger. Such an early plasticity in habitat use could endow tiger sharks with access to previously unavailable prey, thus contributing to a wider ecological niche. PMID:25629732
NASA Astrophysics Data System (ADS)
Winston, M. S.; Taylor, B. M.; Franklin, E. C.
2017-06-01
Mesophotic coral ecosystems (MCEs) represent the lowest depth distribution inhabited by many coral reef-associated organisms. Research on fishes associated with MCEs is sparse, leading to a critical lack of knowledge of how reef fish found at mesophotic depths may vary from their shallow reef conspecifics. We investigated intraspecific variability in body condition and growth of three Hawaiian endemics collected from shallow, photic reefs (5-33 m deep) and MCEs (40-75 m) throughout the Hawaiian Archipelago and Johnston Atoll: the detritivorous goldring surgeonfish, Ctenochaetus strigosus, and the planktivorous threespot chromis, Chromis verater, and Hawaiian dascyllus, Dascyllus albisella. Estimates of body condition and size-at-age varied between shallow and mesophotic depths; however, these demographic differences were outweighed by the magnitude of variability found across the latitudinal gradient of locations sampled within the Central Pacific. Body condition and maximum body size were lowest in samples collected from shallow and mesophotic Johnston Atoll sites, with no difference occurring between depths. Samples from the Northwestern Hawaiian Islands tended to have the highest body condition and reached the largest body sizes, with differences between shallow and mesophotic sites highly variable among species. The findings of this study support newly emerging research demonstrating intraspecific variability in the life history of coral-reef fish species whose distributions span shallow and mesophotic reefs. This suggests not only that the conservation and fisheries management should take into consideration differences in the life histories of reef-fish populations across spatial scales, but also that information derived from studies of shallow fishes be applied with caution to conspecific populations in mesophotic coral environments.
NASA Astrophysics Data System (ADS)
Xie, Zhipeng; Hu, Zeyong; Xie, Zhenghui; Jia, Binghao; Sun, Genhou; Du, Yizhen; Song, Haiqing
2018-02-01
This paper presents the impact of two snow cover schemes (NY07 and SL12) in the Community Land Model version 4.5 (CLM4.5) on the snow distribution and surface energy budget over the Tibetan Plateau. The simulated snow cover fraction (SCF), snow depth, and snow cover days were evaluated against in situ snow depth observations and a satellite-based snow cover product and snow depth dataset. The results show that the SL12 scheme, which considers snow accumulation and snowmelt processes separately, has a higher overall accuracy (81.8%) than NY07 (75.8%). However, SL12 tends to underestimate SCF (15.1% underestimation rate), whereas NY07 tends to overestimate it (15.2% overestimation rate). Both schemes capture the distribution of the maximum snow depth well but show large positive biases in the average value throughout all periods (3.37, 3.15, and 1.48 cm for NY07; 3.91, 3.52, and 1.17 cm for SL12) and overestimate snow cover days compared with the satellite-based product and in situ observations. Higher altitudes show larger root-mean-square errors (RMSEs) in the simulated snow depth and snow cover days during the snow-free period. Moreover, the surface energy flux estimates from the SL12 scheme are generally superior to those from NY07 when evaluated against ground-based observations, in particular for net radiation and sensible heat flux. This study has important implications for further improving the representation of subgrid-scale snow variations over the Tibetan Plateau.
NASA Astrophysics Data System (ADS)
Xu, Jianhui; Shu, Hong
2014-09-01
This study assesses the analysis performance of assimilating the Moderate Resolution Imaging Spectroradiometer (MODIS)-based albedo and snow cover fraction (SCF) separately or jointly into the physically based Common Land Model (CoLM). A direct insertion method (DI) is proposed to assimilate the black and white-sky albedos into the CoLM. The MODIS-based albedo is calculated with the MODIS bidirectional reflectance distribution function (BRDF) model parameters product (MCD43B1) and the solar zenith angle as estimated in the CoLM for each time step. Meanwhile, the MODIS SCF (MOD10A1) is assimilated into the CoLM using the deterministic ensemble Kalman filter (DEnKF) method. A new DEnKF-albedo assimilation scheme for integrating the DI and DEnKF assimilation schemes is proposed. Our assimilation results are validated against in situ snow depth observations from November 2008 to March 2009 at five sites in the Altay region of China. The experimental results show that all three data assimilation schemes can improve snow depth simulations. But overall, the DEnKF-albedo assimilation shows the best analysis performance as it significantly reduces the bias and root-mean-square error (RMSE) during the snow accumulation and ablation periods at all sites except for the Fuyun site. The SCF assimilation via DEnKF produces better results than the albedo assimilation via DI, implying that the albedo assimilation that indirectly updates the snow depth state variable is less efficient than the direct SCF assimilation. For the Fuyun site, the DEnKF-albedo scheme tends to overestimate the snow depth accumulation with the maximum bias and RMSE values because of the large positive innovation (observation minus forecast).
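For readers unfamiliar with the deterministic ensemble Kalman filter used above, the sketch below shows a single DEnKF-style update (mean updated with the full gain, ensemble anomalies with half the gain, in the sense of Sakov and Oke) of an ensemble of snow depths given one SCF observation. The SCF observation operator, ensemble values, and observation error variance are illustrative assumptions, not those of CoLM or the study.

```python
# Hedged sketch of a deterministic ensemble Kalman filter (DEnKF) update of ensemble snow
# depths given one snow cover fraction (SCF) observation: the mean is updated with the full
# Kalman gain and the anomalies with half the gain. The SCF operator below is illustrative.
import numpy as np

def scf_operator(snow_depth_m):
    """Toy SCF(depth) relation saturating toward 1 (not CoLM's formulation)."""
    return np.tanh(snow_depth_m / 0.05)

def denkf_update(ensemble_depth_m, scf_obs, obs_error_var=0.01):
    x = np.asarray(ensemble_depth_m, dtype=float)
    hx = scf_operator(x)                              # predicted observations per member
    x_mean, hx_mean = x.mean(), hx.mean()
    x_anom, hx_anom = x - x_mean, hx - hx_mean
    n = x.size
    cov_xh = np.sum(x_anom * hx_anom) / (n - 1)
    var_hh = np.sum(hx_anom * hx_anom) / (n - 1)
    gain = cov_xh / (var_hh + obs_error_var)          # ensemble Kalman gain
    x_mean_a = x_mean + gain * (scf_obs - hx_mean)    # mean update: full gain
    x_anom_a = x_anom - 0.5 * gain * hx_anom          # anomaly update: half gain (DEnKF)
    return np.maximum(x_mean_a + x_anom_a, 0.0)       # analysis ensemble, snow depth >= 0

forecast_ensemble = np.array([0.02, 0.05, 0.08, 0.12, 0.20])   # forecast snow depths (m)
print(denkf_update(forecast_ensemble, scf_obs=0.9))
```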
Fairchild, J.F.; Feltz, K.P.; Sappington, L.C.; Allert, A.L.; Nelson, K.J.; Valle, J.
2009-01-01
We conducted acute and chronic toxicity studies of the effects of picloram acid on the threatened bull trout (Salvelinus confluentus) and the standard coldwater surrogate rainbow trout (Oncorhynchus mykiss). Juvenile fish were chronically exposed for 30 days in a proportional flow-through diluter to measured concentrations of 0, 0.30, 0.60, 1.18, 2.37, and 4.75 mg/L picloram. No mortality of either species was observed at the highest concentration. Bull trout were twofold more sensitive to picloram (30-day maximum acceptable toxic concentration of 0.80 mg/L) compared to rainbow trout (30-day maximum acceptable toxic concentration of 1.67 mg/L) based on the endpoint of growth. Picloram was acutely toxic to rainbow trout at 36 mg/L (96-h ALC50). The acute:chronic ratio for rainbow trout exposed to picloram was 22. The chronic toxicity of picloram was compared to modeled and measured environmental exposure concentrations (EECs) using a four-tiered system. The Tier 1, worst-case exposure estimate, based on a direct application of the current maximum use rate (1.1 kg/ha picloram) to a standardized aquatic ecosystem (water body of 1-ha area and 1-m depth), resulted in an EEC of 0.73 mg/L picloram and chronic risk quotients of 0.91 and 0.44 for bull trout and rainbow trout, respectively. Higher-tiered exposure estimates reduced chronic risk quotients 10-fold. Results of this study indicate that picloram, if properly applied according to the manufacturer's label, poses little risk to the threatened bull trout or rainbow trout in northwestern rangeland environments on either an acute or a chronic basis. © 2008 Springer Science+Business Media, LLC.
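As an editorial note on the arithmetic, the chronic risk quotients quoted above follow from dividing the Tier 1 EEC by each species' 30-day maximum acceptable toxic concentration:

\[
RQ = \frac{\mathrm{EEC}}{\mathrm{MATC}}, \qquad
RQ_{\text{bull trout}} = \frac{0.73\ \mathrm{mg/L}}{0.80\ \mathrm{mg/L}} \approx 0.91, \qquad
RQ_{\text{rainbow trout}} = \frac{0.73\ \mathrm{mg/L}}{1.67\ \mathrm{mg/L}} \approx 0.44 .
\]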
A Simulation of Biological Processes in the Equatorial Pacific Warm Pool at 165 deg E
NASA Technical Reports Server (NTRS)
McClain, Charles R.; Murtugudde, Ragu; Signorini, Sergio
1998-01-01
A nine-year simulation (1984-1992) of biological processes in the equatorial Pacific Warm Pool is presented. A modified version of the 4-component (phytoplankton, zooplankton, nitrate and ammonium) ecosystem model by McClain et al. (1996) is used. Modifications include use of a spectral model for computation of PAR and inclusion of fecal pellet remineralization and ammonium nitrification. The physical parameters (horizontal and vertical velocities and temperature) required by the ecosystem model were derived from an improved version of the Gent and Cane (1990) ocean general circulation model (Murtugudde and Busalacchi, 1997). Surface downwelling spectral irradiance was estimated using the clear-sky models of Frouin et al. (1989) and Gregg and Carder (1990) and cloud cover information from the International Satellite Cloud Climatology Project (ISCCP). The simulations indicate considerable variability on interannual time scales in all four ecosystem components. In particular, surface chlorophyll concentrations varied by an order of magnitude with maximum values exceeding 0.30 mg/cu m in 1988, 1989, and 1990, and pronounced minimums during 1987 and 1992. The deep chlorophyll maximum ranged between 75 and 125 meters with values occasionally exceeding 0.40 mg/cu m. With the exception of the last half of 1988, surface nitrate was always near depletion. Ammonium exhibited a subsurface maximum just below the DCM with concentrations as high as 0.5 mg-atN/cu m . Total integrated annual primary production varied between 40 and 250 gC/sq m/yr with an annual average of 140 gC/sq m/yr. Finally, the model is used to estimate the mean irradiance at the base of the mixed layer, i.e., the penetration irradiance, which was 18 Watts/sq m over the nine year period. The average mixed layer depth was 42 m.
Lahar hazard zones for eruption-generated lahars in the Lassen Volcanic Center, California
Robinson, Joel E.; Clynne, Michael A.
2012-01-01
Lahar deposits are found in drainages that head on or near Lassen Peak in northern California, demonstrating that these valleys are susceptible to future lahars. In general, lahars are uncommon in the Lassen region. Lassen Peak's lack of large perennial snowfields and glaciers limits its potential for lahar development, with the winter snowpack being the largest source of water for lahar generation. The most extensive lahar deposits are related to the May 1915 eruption of Lassen Peak, and evidence for pre-1915 lahars is sparse and spatially limited. The May 1915 eruption of Lassen Peak was a small-volume eruption that generated a snow and hot-rock avalanche, a pyroclastic flow, and two large and four smaller lahars. The two large lahars were generated on May 19 and 22 and inundated sections of Lost and Hat Creeks. We use 80 years of snow depth measurements from Lassen Peak to calculate average and maximum liquid water depths, 2.02 meters (m) and 3.90 m, respectively, for the month of May as estimates of the 1915 lahars. These depths are multiplied by the areal extents of the eruptive deposits to calculate a water volume range, 7.05-13.6×10⁶ cubic meters (m³). We assume the lahars were a 50/50 mix of water and sediment and double the water volumes to provide an estimate of the 1915 lahars, 13.2-19.8×10⁶ m³. We use a representative volume of 15×10⁶ m³ in the software program LAHARZ to calculate cross-sectional and planimetric areas for the 1915 lahars. The resultant lahar inundation zone reasonably portrays both of the May 1915 lahars. We use this same technique to calculate the potential for future lahars in basins that head on or near Lassen Peak. LAHARZ assumes that the total lahar volume does not change after leaving the potential energy, H/L, cone (the height of the edifice, H, down to the approximate break in slope at its base, L); therefore, all water available to initiate a lahar is contained inside this cone. Because snow is the primary source of water for lahar generation, we assume that the maximum historical water equivalent, 3.90 m, covers the entire basin area inside the H/L cone. The product of the planimetric area of each basin inside the H/L cone and the maximum historical water equivalent yields the maximum water volume available to generate a lahar. We then double the water volumes to approximate maximum lahar volumes. The maximum lahar volumes and an understanding of the statistical uncertainties inherent to the LAHARZ calculations guided our selection of six hypothetical volumes, 1, 3, 10, 30, 60, and 90×10⁶ m³, to delineate concentric lahar inundation zones. The lahar inundation zones extend, in general, tens of kilometers away from Lassen Peak. The small, more-frequent lahar inundation zones (1 and 3×10⁶ m³) are, on average, 10 km long. The exceptions are the zones in Warner Creek and Mill Creek, which extend much further. All but one of the small, more-frequent lahar inundation zones reach outside of the Lassen Volcanic National Park boundary, and the zone in Mill Creek extends well past the park boundary. All of the medium, moderately frequent lahar inundation zones (10 and 30×10⁶ m³) extend past the park boundary and could potentially impact the communities of Viola and Old Station and State Highways 36 and 44, both north and west of Lassen Peak. The large, less-frequent lahar inundation zones (60 and 90×10⁶ m³), approximately 27 km long on average, represent worst-case lahar scenarios that are unlikely to occur.
Flood hazards continue downstream from the toes of the lahars, potentially affecting communities in the Sacramento River Valley.
Alaska North Slope Tundra Travel Model and Validation Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harry R. Bader; Jacynthe Guimond
2006-03-01
The Alaska Department of Natural Resources (DNR), Division of Mining, Land, and Water manages cross-country travel, typically associated with hydrocarbon exploration and development, on Alaska's arctic North Slope. This project is intended to provide natural resource managers with objective, quantitative data to assist decision making regarding opening of the tundra to cross-country travel. DNR designed standardized, controlled field trials, with baseline data, to investigate the relationships present between winter exploration vehicle treatments and the independent variables of ground hardness, snow depth, and snow slab thickness, as they relate to the dependent variables of active layer depth, soil moisture, and photosynthetically active radiation (a proxy for plant disturbance). Changes in the dependent variables were used as indicators of tundra disturbance. Two main tundra community types were studied: Coastal Plain (wet graminoid/moist sedge shrub) and Foothills (tussock). DNR constructed four models to address physical soil properties: two models for each main community type, one predicting change in depth of active layer and a second predicting change in soil moisture. DNR also investigated the limited potential management utility in using soil temperature, the amount of photosynthetically active radiation (PAR) absorbed by plants, and changes in microphotography as tools for the identification of disturbance in the field. DNR operated under the assumption that changes in the abiotic factors of active layer depth and soil moisture drive alteration in tundra vegetation structure and composition. Statistically significant differences in depth of active layer, soil moisture at a 15 cm depth, soil temperature at a 15 cm depth, and the absorption of photosynthetically active radiation were found among treatment cells and among treatment types. The models were unable to thoroughly investigate the interacting role between snow depth and disturbance due to a lack of variability in snow depth cover throughout the period of field experimentation. The amount of change in disturbance indicators was greater in the tundra communities of the Foothills than in those of the Coastal Plain. However, the overall level of change in both community types was less than expected. In Coastal Plain communities, ground hardness and snow slab thickness were found to play an important role in change in active layer depth and soil moisture as a result of treatment. In the Foothills communities, snow cover had the most influence on active layer depth and soil moisture as a result of treatment. Once certain minimum thresholds for ground hardness, snow slab thickness, and snow depth were attained, it appeared that little or no additive effect was realized regarding increased resistance to disturbance in the tundra communities studied. DNR used the results of this modeling project to set a standard for maximum permissible disturbance of cross-country tundra travel, with the threshold set below the widely accepted standard of Low Disturbance levels (as determined by the U.S. Fish and Wildlife Service). DNR followed the modeling project with a validation study, which seemed to support the field trial conclusions and indicated that the standard set for maximum permissible disturbance exhibits a conservative bias in favor of environmental protection. Finally, DNR established a quick and efficient tool for visual estimations of disturbance to determine when investment in field measurements is warranted.
This Visual Assessment System (VAS) seemed to support the plot disturbance measurements taken during the modeling and validation phases of this project.
NASA Astrophysics Data System (ADS)
Sawazaki, K.; Saito, T.; Ueno, T.; Shiomi, K.
2015-12-01
In this study, utilizing the depth sensitivity of interferometric waveforms recorded by co-located Hi-net and KiK-net sensors, we constrain the depth range responsible for the seismic velocity change associated with the M6.3 earthquake that occurred on November 22, 2014, in central Japan. The Hi-net station N.MKGH is located about 20 km northeast of the epicenter, with its seismometer installed at 150 m depth. At the same site, KiK-net has two strong-motion seismometers installed at depths of 0 and 150 m. To estimate the average velocity change around the N.MKGH station, we apply the stretching technique to the auto-correlation function (ACF) of ambient noise recorded by the Hi-net sensor. To evaluate the sensitivity of the Hi-net ACF to velocity changes above and below 150 m depth, we perform a numerical wave propagation simulation using a 2-D FDM. To obtain the velocity change above 150 m depth, we measure the response waveform from 150 m depth to the surface by computing the deconvolution function (DCF) of earthquake records obtained by the two KiK-net vertical-array sensors. The background annual velocity variation is subtracted from the detected velocity change. From the KiK-net DCF records, the velocity reduction ratio above 150 m depth is estimated to be 4.2% and 3.1% in the periods of 1-7 days and 7 days to 4 months after the mainshock, respectively. From the Hi-net ACF records, the velocity reduction ratio is estimated to be 2.2% and 1.8% in the same time periods, respectively. This difference in the estimated velocity reduction ratio is attributed to the depth dependence of the velocity change. Using the depth sensitivity obtained from the numerical simulation, we estimate the velocity reduction ratio below 150 m depth to be lower than 1.0% for both time periods. Thus, significant velocity reduction and recovery are observed only above 150 m depth, which may be caused by the strong ground motion of the mainshock and subsequent healing in the shallow ground.
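The stretching measurement referred to above can be illustrated with a short sketch: the current ACF is compared with time-stretched copies of a reference ACF, and the stretch factor that maximizes the correlation gives the relative velocity change dv/v = -epsilon. The waveforms, window, and search grid below are synthetic illustrations, not the station data.

```python
# Hedged sketch (not the authors' code) of the stretching technique for dv/v estimation.
import numpy as np

def stretching_dv_over_v(reference, current, t, eps_grid=np.linspace(-0.05, 0.05, 1001)):
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        # Reference stretched in time by (1 + eps), i.e. evaluated at t / (1 + eps).
        stretched = np.interp(t, t * (1.0 + eps), reference)
        cc = np.corrcoef(stretched, current)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return -best_eps, best_cc   # a velocity drop delays the coda, so eps > 0 -> dv/v < 0

# Synthetic check: a 2% velocity reduction stretches the waveform by a factor of 1.02.
t = np.linspace(0.0, 10.0, 4001)
reference = np.exp(-0.3 * t) * np.sin(2.0 * np.pi * 2.0 * t)
current = np.interp(t, t * 1.02, reference)
print(stretching_dv_over_v(reference, current, t))   # approximately (-0.02, ~1.0)
```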
Comparison of Climatological Planetary Boundary Layer Depth Estimates Using the GEOS-5 AGCM
NASA Technical Reports Server (NTRS)
Mcgrath-Spangler, Erica Lynn; Molod, Andrea M.
2014-01-01
Planetary boundary layer (PBL) processes, including those influencing the PBL depth, control many aspects of weather and climate, and accurate models of these processes are important for forecasting changes in the future. However, evaluation of model estimates of PBL depth is difficult because no consensus on the definition of PBL depth currently exists and various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observing System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to produce PBL depth climatologies and are evaluated and compared here. All seven methods evaluate the same atmosphere, so all differences are related solely to the definition chosen. These methods depend on the scalar diffusivity, bulk and local Richardson numbers, and the diagnosed horizontal turbulent kinetic energy (TKE). Results are aggregated by climate class in order to allow broad generalizations. The various PBL depth estimates give similar midday results, with some exceptions. One method based on horizontal turbulent kinetic energy produces deeper PBL depths in the winter associated with winter storms. In warm, moist conditions, the method based on a bulk Richardson number gives results that are shallower than those given by the methods based on the scalar diffusivity. The impact of turbulence driven by radiative cooling at cloud top is most significant during the evening transition and in several regions across the oceans, and methods sensitive to this cooling produce deeper PBL depths where it is most active. Additionally, Richardson number-based methods collapse better at night than methods that depend on the scalar diffusivity. This feature potentially affects tracer transport.
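As an example of one of the definitions listed above, the sketch below diagnoses the PBL top as the first level where the bulk Richardson number exceeds a critical value (0.25 here); the threshold, the surface-level convention, and the illustrative profile are assumptions, not the GEOS-5 implementation.

```python
# Hedged sketch of a bulk-Richardson-number PBL depth diagnostic (critical value 0.25).
import numpy as np

G = 9.81  # m s-2

def bulk_richardson(z_m, theta_v_k, u_ms, v_ms):
    """Bulk Richardson number at each level relative to the lowest level."""
    dz = z_m - z_m[0]
    dtheta = theta_v_k - theta_v_k[0]
    wind2 = np.maximum(u_ms**2 + v_ms**2, 0.01)      # avoid division by zero in calm air
    return G * dtheta * dz / (theta_v_k[0] * wind2)

def pbl_depth_rib(z_m, theta_v_k, u_ms, v_ms, ri_crit=0.25):
    rib = bulk_richardson(z_m, theta_v_k, u_ms, v_ms)
    above = np.where(rib > ri_crit)[0]
    if above.size == 0:
        return z_m[-1] - z_m[0]                      # no level exceeds the threshold
    k = above[0]
    # Linear interpolation between the bracketing levels for a smoother estimate.
    frac = (ri_crit - rib[k - 1]) / (rib[k] - rib[k - 1]) if k > 0 else 0.0
    return (z_m[k - 1] + frac * (z_m[k] - z_m[k - 1])) - z_m[0]

# Illustrative profile: a well-mixed layer capped by an inversion near 800-900 m.
z = np.array([10.0, 100.0, 300.0, 500.0, 700.0, 900.0, 1100.0])
theta_v = np.array([300.0, 300.1, 300.1, 300.2, 300.3, 302.5, 304.0])
u = np.array([2.0, 4.0, 5.0, 6.0, 6.5, 7.0, 8.0])
v = np.zeros_like(u)
print(pbl_depth_rib(z, theta_v, u, v))   # about 700 m, just below the capping inversion
```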
Morrow, Carolyn A.; Lockner, David A.; Moore, Diane E.; Hickman, Stephen H.
2014-01-01
The San Andreas Fault Observatory at Depth (SAFOD) scientific borehole near Parkfield, California crosses two actively creeping shear zones at a depth of 2.7 km. Core samples retrieved from these active strands consist of a foliated, Mg-clay-rich gouge containing porphyroclasts of serpentinite and sedimentary rock. The adjacent damage zone and country rocks are comprised of variably deformed, fine-grained sandstones, siltstones, and mudstones. We conducted laboratory tests to measure the permeability of representative samples from each structural unit at effective confining pressures, Pe up to the maximum estimated in situ Pe of 120 MPa. Permeability values of intact samples adjacent to the creeping strands ranged from 10−18 to 10−21 m2 at Pe = 10 MPa and decreased with applied confining pressure to 10−20–10−22 m2 at 120 MPa. Values for intact foliated gouge samples (10−21–6 × 10−23 m2 over the same pressure range) were distinctly lower than those for the surrounding rocks due to their fine-grained, clay-rich character. Permeability of both intact and crushed-and-sieved foliated gouge measured during shearing at Pe ≥ 70 MPa ranged from 2 to 4 × 10−22 m2 in the direction perpendicular to shearing and was largely insensitive to shear displacement out to a maximum displacement of 10 mm. The weak, actively-deforming foliated gouge zones have ultra-low permeability, making the active strands of the San Andreas Fault effective barriers to cross-fault fluid flow. The low matrix permeability of the San Andreas Fault creeping zones and adjacent rock combined with observations of abundant fractures in the core over a range of scales suggests that fluid flow outside of the actively-deforming gouge zones is probably fracture dominated.
NASA Astrophysics Data System (ADS)
Kutser, Tiit; Vahtmäe, Ele; Martin, Georg
2006-04-01
One of the objectives of monitoring benthic algal cover is to observe short- and long-term changes in species distribution and structure of coastal benthic habitats as indicators of ecological state. Mapping benthic algal cover with conventional methods (diving) provides great accuracy and high resolution, yet is very expensive and is limited by the time and manpower necessary. We measured reflectance spectra of three indicator species for the Baltic Sea: Cladophora glomerata (green macroalgae), Furcellaria lumbricalis (red macroalgae), and Fucus vesiculosus (brown macroalgae), and used a bio-optical model in an attempt to estimate whether these algae are separable from each other and from sandy bottom or deep water by means of satellite remote sensing. Our modelling results indicate that to some extent it is possible to map the studied species with multispectral satellite sensors in turbid waters. However, the depths where the macroalgae can be detected are often shallower than the maximum depths where the studied species usually grow. In waters deeper than just a few meters, the differences between the studied bottom types are seen only in band 2 (green) of the multispectral sensors under investigation. This means that the multispectral sensors can detect differences in brightness in only one band, which is insufficient for recognizing different bottom types in waters where no or few in situ data are available. The configuration of MERIS spectral bands allows the recognition of red, green and brown macroalgae based on their spectral signatures, provided the algal belts are wider than the MERIS spatial resolution. The commercial stock of F. lumbricalis in the West-Estonian Archipelago covers an area where the 300 m spatial resolution of MERIS is adequate. However, the strong attenuation of light in the water column and the signal-to-noise ratio of the sensor do not allow mapping of Furcellaria down to the maximum depths where it occurs.
NASA Astrophysics Data System (ADS)
Wu, Xian-Qian; Wang, Xi; Wei, Yan-Peng; Song, Hong-Wei; Huang, Chen-Guang
2012-06-01
Shot peening is a widely used surface treatment method that generates compressive residual stress near the surface of metallic materials to increase fatigue life and resistance to corrosion fatigue, cracking, etc. Compressive residual stress and the dent profile are important factors for evaluating the effectiveness of the shot peening process. In this paper, the influence of dimensionless parameters on the maximum compressive residual stress and the maximum depth of the dent was investigated. First, dimensionless relations among the processing parameters that affect the maximum compressive residual stress and the maximum dent depth were deduced by the dimensional analysis method. Second, the influence of each dimensionless parameter on the dimensionless variables was investigated by the finite element method. Furthermore, related empirical formulas were given for each dimensionless parameter based on the simulation results. Finally, a comparison showed good agreement between the simulation results and the empirical formulas, indicating that the paper provides a useful approach for analyzing the influence of each individual parameter.
Washington Play Fairway Analysis Geothermal GIS Data
Corina Forson
2015-12-15
This file contains file geodatabases of the Mount St. Helens seismic zone (MSHSZ), Wind River valley (WRV) and Mount Baker (MB) geothermal play-fairway sites in the Washington Cascades. The geodatabases include input data (feature classes) and output rasters (generated from modeling and interpolation) from the geothermal play-fairway in Washington State, USA. These data were gathered and modeled to provide an estimate of the heat and permeability potential within the play-fairways based on: mapped volcanic vents, hot springs and fumaroles, geothermometry, intrusive rocks, temperature-gradient wells, slip tendency, dilation tendency, displacement, displacement gradient, max coulomb shear stress, sigma 3, maximum shear strain rate, and dilational strain rate at 200m and 3 km depth. In addition this file contains layer files for each of the output rasters. For details on the areas of interest please see the 'WA_State_Play_Fairway_Phase_1_Technical_Report' in the download package. This submission also includes a file with the geothermal favorability of the Washington Cascade Range based off of an earlier statewide assessment. Additionally, within this file there are the maximum shear and dilational strain rate rasters for all of Washington State.
Del Castillo, Luis F.; da Silva, Ana R. Ferreira; Hernández, Saul I.; Aguilella, M.; Andrio, Andreu; Mollá, Sergio; Compañ, Vicente
2014-01-01
Purpose: We present an analysis of the corneal oxygen consumption Qc from non-linear models, using data on oxygen partial pressure or tension (pO2) obtained from in vivo estimations previously reported by other authors. Methods: Assuming that the cornea is a single homogeneous layer, the oxygen permeability through the cornea will be the same regardless of the type of lens placed on it. Obtaining the true value of the maximum oxygen consumption rate Qc,max is very important because this parameter is directly related to the pressure gradient profile within the cornea; moreover, the real corneal oxygen consumption is influenced by both anterior and posterior oxygen fluxes. Results: Our calculations give different values for the maximum oxygen consumption rate Qc,max when different oxygen pressure values (high and low pO2) are considered at the cornea-tear film interface. Conclusion: The present results are relevant for calculating the partial pressure of oxygen available at different depths within the corneal tissue behind contact lenses of different oxygen transmissibility. PMID:25649636
Radiocarbon constraints on the glacial ocean circulation and its impact on atmospheric CO2
Skinner, L. C.; Primeau, F.; Freeman, E.; de la Fuente, M.; Goodwin, P. A.; Gottschalk, J.; Huang, E.; McCave, I. N.; Noble, T. L.; Scrivner, A. E.
2017-01-01
While the ocean’s large-scale overturning circulation is thought to have been significantly different under the climatic conditions of the Last Glacial Maximum (LGM), the exact nature of the glacial circulation and its implications for global carbon cycling continue to be debated. Here we use a global array of ocean–atmosphere radiocarbon disequilibrium estimates to demonstrate a ∼689±53 14C-yr increase in the average residence time of carbon in the deep ocean at the LGM. A predominantly southern-sourced abyssal overturning limb that was more isolated from its shallower northern counterparts is interpreted to have extended from the Southern Ocean, producing a widespread radiocarbon age maximum at mid-depths and depriving the deep ocean of a fast escape route for accumulating respired carbon. While the exact magnitude of the resulting carbon cycle impacts remains to be confirmed, the radiocarbon data suggest an increase in the efficiency of the biological carbon pump that could have accounted for as much as half of the glacial–interglacial CO2 change. PMID:28703126
Theory of the synchronous motion of an array of floating flap gates oscillating wave surge converter
NASA Astrophysics Data System (ADS)
Michele, Simone; Sammarco, Paolo; d'Errico, Michele
2016-08-01
We consider a finite array of floating flap gates oscillating wave surge converter (OWSC) in water of constant depth. The diffraction and radiation potentials are solved in terms of elliptical coordinates and Mathieu functions. Generated power and capture width ratio of a single gate excited by incoming waves are given in terms of the radiated wave amplitude in the far field. Similar to the case of axially symmetric absorbers, the maximum power extracted is shown to be directly proportional to the incident wave characteristics: energy flux, angle of incidence and wavelength. Accordingly, the capture width ratio is directly proportional to the wavelength, thus giving a design estimate of the maximum efficiency of the system. We then compare the array and the single gate in terms of energy production. For regular waves, we show that excitation of the out-of-phase natural modes of the array increases the power output, while in the case of random seas we show that the array and the single gate achieve the same efficiency.
Prediction of lake depth across a 17-state region in the United States
Oliver, Samantha K.; Soranno, Patricia A.; Fergus, C. Emi; Wagner, Tyler; Winslow, Luke A.; Scott, Caren E.; Webster, Katherine E.; Downing, John A.; Stanley, Emily H.
2016-01-01
Lake depth is an important characteristic for understanding many lake processes, yet it is unknown for the vast majority of lakes globally. Our objective was to develop a model that predicts lake depth using map-derived metrics of lake and terrestrial geomorphic features. Building on previous models that use local topography to predict lake depth, we hypothesized that regional differences in topography, lake shape, or sedimentation processes could lead to region-specific relationships between lake depth and the mapped features. We therefore used a mixed modeling approach that included region-specific model parameters. We built models using lake and map data from LAGOS, which includes 8164 lakes with maximum depth (Zmax) observations. The model was used to predict depth for all lakes ≥4 ha (n = 42 443) in the study extent. Lake surface area and maximum slope in a 100 m buffer were the best predictors of Zmax. Interactions between surface area and topography occurred at both the local and regional scale; surface area had a larger effect in steep terrain, so large lakes embedded in steep terrain were much deeper than those in flat terrain. Despite a large sample size and inclusion of regional variability, model performance (R2 = 0.29, RMSE = 7.1 m) was similar to other published models. The relative error varied by region, however, highlighting the importance of taking a regional approach to lake depth modeling. Additionally, we provide the largest known collection of observed and predicted lake depth values in the United States.
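A hedged sketch of the kind of mixed model described above (region-specific parameters treated as random effects) is shown below; the column names, the use of only a random intercept, and the statsmodels backend are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, not the authors' code: log(maximum depth) modeled from lake area and
# near-lake slope with region-specific random intercepts. Column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_depth_model(df: pd.DataFrame):
    """df is assumed to have columns: zmax_m, area_ha, max_slope_buffer, region."""
    df = df.assign(log_zmax=np.log(df["zmax_m"]), log_area=np.log(df["area_ha"]))
    model = smf.mixedlm("log_zmax ~ log_area * max_slope_buffer",
                        data=df, groups=df["region"])
    return model.fit()

# Usage with a LAGOS-like table (hypothetical variable names):
#   result = fit_depth_model(lakes_df)
#   print(result.summary())
#   predicted_zmax_m = np.exp(result.predict(
#       new_lakes_df.assign(log_area=np.log(new_lakes_df["area_ha"]))))
```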
USDA-ARS's Scientific Manuscript database
The estimation of parameters of a flow-depth dependent furrow infiltration model and of hydraulic resistance, using irrigation evaluation data, was investigated. The estimated infiltration parameters are the saturated hydraulic conductivity and the macropore volume per unit area. Infiltration throu...
NASA Astrophysics Data System (ADS)
Yildirim, Murat; Ferhanoglu, Onur; Kobler, James B.; Zeitels, Steven M.; Ben-Yakar, Adela
2013-02-01
Vocal fold scarring is one of the major causes of voice disorders and may arise from overuse or post-surgical wound healing. One promising treatment utilizes the injection of soft biomaterials aimed at restoring viscoelasticity of the outermost vibratory layer of the vocal fold, superficial lamina propria (SLP). However, the density of the tissue and the required injection pressure impair proper localization of the injected biomaterial in SLP. To enhance treatment effectiveness, we are investigating a technique to image and ablate sub-epithelial planar voids in vocal folds using ultrafast laser pulses to better localize the injected biomaterial. It is challenging to optimize the excitation wavelength to perform imaging and ablation at depths suitable for clinical use. Here, we compare maximum imaging depth using two photon autofluorescence and second harmonic generation with third-harmonic generation imaging modalities for healthy porcine vocal folds. We used a home-built inverted nonlinear scanning microscope together with a high repetition rate (2 MHz) ultrafast fiber laser (Raydiance Inc.). We acquired both two-photon autofluorescence and second harmonic generation signals using 776 nm wavelength and third harmonic generation signals using 1552 nm excitation wavelength. We observed that maximum imaging depth with 776 nm wavelength is significantly improved from 114 μm to 205 μm when third harmonic generation is employed using 1552 nm wavelength, without any observable damage in the tissue.
Shallow-Water Nitrox Diving, the NASA Experience
NASA Technical Reports Server (NTRS)
Fitzpatrick, Daniel T.
2009-01-01
NASA's Neutral Buoyancy Laboratory (NBL) contains a 6.2-million-gallon, 12-meter-deep pool where astronauts prepare for space missions involving space walks (extravehicular activity, EVA). Training is conducted in a space suit (extravehicular mobility unit, EMU) pressurized to 4.0-4.3 psi for up to 6.5 hours while breathing a 46% nitrox mix. Since the facility opened in 1997, over 30,000 hours of suited training have been completed with no occurrence of decompression sickness (DCS) or oxygen toxicity. This study examines the last 5 years of astronaut suited training runs. All suited runs are computer monitored, and data are recorded in the Environmental Control System (ECS) database. Astronaut training runs from 2004-2008 were reviewed, and specific data including total run time, maximum depth, and average depth were analyzed. One hundred twenty-seven astronauts and cosmonauts completed 2,231 training runs totaling 12,880 exposure hours. Data were available for 96% of the runs. It was revealed that the suit configuration produces a maximum equivalent air depth of 7 meters, essentially eliminating the risk of DCS. Based on average run depth and time, approximately 17% of the training runs exceeded the NOAA oxygen maximum single exposure limits, with no resulting oxygen toxicity. The NBL suited training protocols are safe and time tested. Consideration should be given to reevaluating the NOAA oxygen exposure limits for PO2 levels at or below 1 ATA.
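For context (an editorial note, not part of the study), the standard open-water equivalent air depth formula for nitrox in metres of seawater is

\[
\mathrm{EAD} = (d + 10)\,\frac{1 - F_{\mathrm{O_2}}}{0.79} - 10 ,
\]

which for a 46% O2 mix at the 12 m pool bottom gives (12 + 10) × 0.54/0.79 − 10 ≈ 5 m; if the roughly 4.1 psi (about 2.9 msw) suit overpressure is treated as added depth, the figure rises to about (12 + 2.9 + 10) × 0.54/0.79 − 10 ≈ 7 m, consistent with the maximum equivalent air depth quoted above. This reading of the 7 m figure is an assumption, not a statement from the study.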
A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.
ERIC Educational Resources Information Center
McKinley, Robert L.; Reckase, Mark D.
A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…
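The abstract is truncated, but the contrast it draws is concrete enough to illustrate. The sketch below is hypothetical and not the authors' implementation: under a two-parameter logistic IRT model, a maximum-likelihood-style rule picks the item with the most Fisher information at the current ability estimate, while a Bayesian rule picks the item that minimizes the expected posterior variance of ability on a discretized prior.

```python
# Illustrative item-selection rules under a 2PL IRT model (hypothetical sketch,
# not the procedure from the study described above).
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def select_max_info(theta_hat, items):
    """ML-style selection: item with highest information at the current estimate."""
    return max(items, key=lambda ab: fisher_information(theta_hat, *ab))

def select_min_posterior_variance(prior, grid, items):
    """Bayesian selection: item with smallest expected posterior variance of theta."""
    best_item, best_var = None, np.inf
    for a, b in items:
        p = p_correct(grid, a, b)
        exp_var = 0.0
        for resp_prob in (p, 1.0 - p):              # correct / incorrect outcomes
            w = float((prior * resp_prob).sum())    # marginal probability of outcome
            post = prior * resp_prob / w
            mean = float((grid * post).sum())
            exp_var += w * float(((grid - mean) ** 2 * post).sum())
        if exp_var < best_var:
            best_item, best_var = (a, b), exp_var
    return best_item

if __name__ == "__main__":
    grid = np.linspace(-3, 3, 121)
    prior = np.exp(-0.5 * grid**2); prior /= prior.sum()   # discretized N(0,1) prior
    items = [(0.8, -1.0), (1.2, 0.0), (1.5, 0.5), (0.9, 1.2)]
    print(select_max_info(0.3, items))
    print(select_min_posterior_variance(prior, grid, items))
```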
Sadek, H.S.; Rashad, S.M.; Blank, H.R.
1984-01-01
If proper account is taken of the constraints of the method, it is capable of providing depth estimates to within an accuracy of about 10 percent under suitable circumstances. The estimates are unaffected by source magnetization and are relatively insensitive to assumptions as to source shape or distribution. The validity of the method is demonstrated by analyses of synthetic profiles and profiles recorded over Harrat Rahat, Saudi Arabia, and Diyur, Egypt, where source depths have been proved by drilling.
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
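As a concrete illustration of the maximum entropy limit the abstract describes, the sketch below fits a one-dimensional maximum entropy density whose first few moments match the sample moments, by minimizing the convex dual objective log Z(λ) − λ·(sample moments) on a bounded grid. It is an illustrative re-derivation, not the software released with the paper.

```python
# Minimal 1-D maximum entropy density estimate on a bounded grid (illustrative,
# not the paper's field-theory code): exponential-family density exp(lambda.f(x))/Z
# whose moments match the sample moments.
import numpy as np
from scipy.optimize import minimize

def maxent_density(samples, grid, n_moments=3):
    feats = np.vstack([grid**k for k in range(1, n_moments + 1)])        # (K, G)
    target = np.array([np.mean(samples**k) for k in range(1, n_moments + 1)])
    dx = grid[1] - grid[0]

    def dual(lam):
        logq = lam @ feats
        m = logq.max()                                                   # log-sum-exp trick
        log_z = m + np.log(np.sum(np.exp(logq - m)) * dx)
        return log_z - lam @ target                                      # convex dual objective

    lam_hat = minimize(dual, np.zeros(n_moments), method="BFGS").x
    logq = lam_hat @ feats
    q = np.exp(logq - logq.max())
    return q / (q.sum() * dx)                                            # normalized density

# Example: three-moment maximum entropy fit to bimodal samples
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-1.0, 0.5, 500), rng.normal(1.5, 0.7, 500)])
grid = np.linspace(-4.0, 4.0, 400)
density = maxent_density(samples, grid)
```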
Estimation of global snow cover using passive microwave data
NASA Astrophysics Data System (ADS)
Chang, Alfred T. C.; Kelly, Richard E.; Foster, James L.; Hall, Dorothy K.
2003-04-01
This paper describes an approach to estimate global snow cover using satellite passive microwave data. Snow cover is detected from the high-frequency scattering signal in the natural microwave radiation observed by passive microwave instruments. Developed for the retrieval of global snow depth and snow water equivalent from the Advanced Microwave Scanning Radiometer EOS (AMSR-E), the algorithm uses passive microwave radiation along with a microwave emission model and a snow grain growth model to estimate snow depth. The microwave emission model is based on the Dense Media Radiative Transfer (DMRT) model, which uses the quasi-crystalline approach and sticky particle theory to predict the brightness temperature of a single-layered snowpack. The grain growth model is a generic single-layer model based on an empirical approach to predict snow grain size evolution with time. Gridded to the 25 km EASE-Grid projection, a daily record of Special Sensor Microwave Imager (SSM/I) snow depth estimates was generated for December 2000 to March 2001. The estimates are tested against ground measurements from two continental-scale river catchments (the Nelson River in Canada and the Ob River in Russia). This regional-scale testing of the algorithm shows that the average daily standard error between estimated and measured snow depths ranges from 0 to 40 cm relative to point observations, with bias characteristics that differ between the two basins. A fraction of the error is related to uncertainties about the grain growth initialization state and about grain size changes through the winter season, which directly affect the parameterization of snow depth estimation in the DMRT model. The algorithm also does not include a correction for forest cover, and this effect is clearly observed in the retrievals. Finally, error is related to scale differences between in situ ground measurements and area-integrated satellite estimates. With AMSR-E data, improvements to snow depth and water equivalent estimates are expected, since AMSR-E will have twice the spatial resolution of the SSM/I and will be able to better characterize the subnivean snow environment using an expanded range of microwave frequencies.
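The full retrieval described above couples a DMRT emission model with a grain-growth model; the sketch below shows only the classic spectral-difference form on which such passive microwave algorithms build, where snow depth scales with the 19H minus 37H brightness-temperature difference. The 1.59 cm/K coefficient is the commonly cited static value and is an assumption here, not the coefficient used in the AMSR-E algorithm described above.

```python
# Simple spectral-difference snow depth estimate (illustrative only; the paper's
# algorithm uses a DMRT emission model and a grain-growth model instead of a
# static coefficient).
import numpy as np

def snow_depth_cm(tb19h_k: np.ndarray, tb37h_k: np.ndarray,
                  coeff_cm_per_k: float = 1.59) -> np.ndarray:
    """Snow depth (cm) from the 19H - 37H brightness-temperature difference."""
    diff = tb19h_k - tb37h_k
    return np.where(diff > 0.0, coeff_cm_per_k * diff, 0.0)   # no scattering signal -> no snow

# Example with two illustrative SSM/I-like pixels
tb19h = np.array([252.0, 260.0])
tb37h = np.array([230.0, 258.0])
print(snow_depth_cm(tb19h, tb37h))   # deeper snow where the 37 GHz channel is more depressed
```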
Goff, M.G.; Slyfield, C.R.; Kummari, S.R.; Tkachenko, E.V.; Fischer, S. E.; Yi, Y.H.; Jekir, M.; Keaveny, T.M.; Hernandez, C.J.
2012-01-01
The number and size of resorption cavities in cancellous bone are believed to influence rates of bone loss, local tissue stress and strain, and potentially whole bone strength. Traditional two-dimensional approaches to measuring resorption cavities in cancellous bone report the percent of the bone surface covered by cavities or osteoclasts, but cannot measure cavity number or size. Here we use three-dimensional imaging (voxel size 0.7 × 0.7 × 5.0 μm) to characterize resorption cavity location, number and size in human vertebral cancellous bone from nine elderly donors (7 male, 2 female, ages 47–80 years). Cavities were 30.10 ± 8.56 μm in maximum depth, 80.60 ± 22.23 × 10³ μm² in surface area and 614.16 ± 311.93 × 10³ μm³ in volume (mean ± SD). The average number of cavities per unit tissue volume (N.Cv/TV) was 1.25 ± 0.77 mm⁻³. The ratio of maximum cavity depth to local trabecular thickness was 30.46 ± 7.03%, and maximum cavity depth was greater on thicker trabeculae (p < 0.05, r² = 0.14). Half of the resorption cavities were located entirely on nodes (the intersection of two or more trabeculae) within the trabecular structure. Cavities that were not entirely on nodes were predominately on plate-like trabeculae oriented in the cranial-caudal (longitudinal) direction. Cavities on plate-like trabeculae were larger in maximum cavity depth, cavity surface area and cavity volume than cavities on rod-like trabeculae (p < 0.05). We conclude from these findings that cavity size and location are related to local trabecular microarchitecture. PMID:22507299