Sample records for depth-service volume model

  1. Preliminary research on eddy current bobbin quantitative test for heat exchange tube in nuclear power plant

    NASA Astrophysics Data System (ADS)

    Qi, Pan; Shao, Wenbin; Liao, Shusheng

    2016-02-01

    For quantitative defect-detection research on heat-transfer tubes in nuclear power plants (NPP), two lines of work are carried out, with cracks as the main research object. (1) Production optimization of calibration tubes. First, ASME, RSEM and in-house crack calibration tubes are applied to quantify the depth of defects on other designed crack test tubes, and a judgment is made as to which calibration tube yields the more accurate quantitative results. Based on that, a weight analysis of the factors influencing quantitative crack-depth testing, such as crack orientation, length and volume, can be undertaken, which will help optimize the manufacturing technology of calibration tubes. (2) Quantitative optimization of crack depth. A neural-network model with multiple calibration curves, adopted to optimize the measured depth of natural cracks generated in in-service tubes, shows a preliminary ability to improve quantitative accuracy.

  2. Heating of solid targets with laser pulses

    NASA Technical Reports Server (NTRS)

    Bechtel, J. H.

    1975-01-01

    Analytical and numerical solutions to the heat-conduction equation are obtained for the heating of absorbing media with pulsed lasers. The spatial and temporal form of the temperature is determined using several different models of the laser irradiance. Both surface and volume generation of heat are discussed. It is found that if the depth of thermal diffusion for the laser-pulse duration is large compared to the optical-attenuation depth, the surface- and volume-generation models give nearly identical results. However, if the thermal-diffusion depth for the laser-pulse duration is comparable to or less than the optical-attenuation depth, the surface-generation model can give significantly different results compared to the volume-generation model. Specific numerical results are given for a tungsten target irradiated by pulses of different temporal durations and the implications of the results are discussed with respect to the heating of metals by picosecond laser pulses.

  3. Estimating the volume of supra-glacial melt lakes across Greenland: A study of uncertainties derived from multi-platform water-reflectance models

    NASA Astrophysics Data System (ADS)

    Cordero-Llana, L.; Selmes, N.; Murray, T.; Scharrer, K.; Booth, A. D.

    2012-12-01

    Large volumes of water are necessary to propagate cracks to the glacial bed via hydrofractures. Hydrological models have shown that lakes above a critical volume can supply the necessary water for this process, so the ability to measure water depth in lakes remotely is important for studying these processes. Previously, water depth has been derived from the optical properties of water using data from high-resolution optical satellite sensors such as ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), IKONOS and Landsat. These studies used water-reflectance models based on the Bouguer-Lambert-Beer law and lacked any estimation of model uncertainties. We propose an optimized model based on Sneed and Hamilton's (2007) approach to estimate water depths in supraglacial lakes and undertake a robust analysis of the errors for the first time. We used atmospherically corrected ASTER and MODIS data as input to the water-reflectance model. Three physical parameters are needed: bed albedo, the water attenuation coefficient and the reflectance of optically deep water. These parameters were derived for each wavelength using standard calibrations. As a reference dataset, we obtained lake geometries using ICESat measurements over empty lakes. Differences between modeled and reference depths are used in a minimization model to obtain parameters for the water-reflectance model, yielding optimized lake depth estimates. Our key contribution is the development of a Monte Carlo simulation to run the water-reflectance model, which allows us to quantify the uncertainties in water depth and hence water volume. This robust statistical analysis provides a better understanding of the sensitivity of the water-reflectance model to the choice of input parameters, which should contribute to the understanding of the influence of surface-derived meltwater on ice-sheet dynamics. Sneed, W.A. and Hamilton, G.S., 2007: Evolution of melt pond volume on the surface of the Greenland Ice Sheet. Geophysical Research Letters, 34, 1-4.
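
    As a rough illustration of the physically based retrieval this record builds on, the Sneed and Hamilton (2007) relation inverts pixel reflectance for water depth from the three parameters named in the abstract (bed albedo, attenuation coefficient, reflectance of optically deep water). The sketch below is a minimal Python implementation with invented example values; the Monte Carlo perturbation at the end only gestures at the kind of uncertainty propagation described, not the study's actual configuration.

    ```python
    import numpy as np

    def lake_depth(R_pix, A_d, R_inf, g):
        """Single-band depth retrieval after Sneed & Hamilton (2007).

        R_pix : observed water-leaving reflectance of a lake pixel
        A_d   : bottom (bed) albedo
        R_inf : reflectance of optically deep water
        g     : effective two-way attenuation coefficient of the water column (1/m)
        """
        return (np.log(A_d - R_inf) - np.log(R_pix - R_inf)) / g

    # Illustrative parameter values only (not taken from the study):
    print(lake_depth(R_pix=0.25, A_d=0.45, R_inf=0.04, g=0.80))

    # Crude Monte Carlo propagation of assumed parameter uncertainties:
    rng = np.random.default_rng(0)
    z = lake_depth(0.25,
                   rng.normal(0.45, 0.02, 10_000),
                   rng.normal(0.04, 0.004, 10_000),
                   rng.normal(0.80, 0.05, 10_000))
    print(z.mean(), z.std())
    ```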

  4. Process cost and facility considerations in the selection of primary cell culture clarification technology.

    PubMed

    Felo, Michael; Christensen, Brandon; Higgins, John

    2013-01-01

    The bioreactor volume delineating the selection of primary clarification technology is not always easily defined. Development of a commercial scale process for the manufacture of therapeutic proteins requires scale-up from a few liters to thousands of liters. While the separation techniques used for protein purification are largely conserved across scales, the separation techniques for primary cell culture clarification vary with scale. Process models were developed to compare monoclonal antibody production costs using two cell culture clarification technologies. One process model was created for cell culture clarification by disc stack centrifugation with depth filtration. A second process model was created for clarification by multi-stage depth filtration. Analyses were performed to examine the influence of bioreactor volume, product titer, depth filter capacity, and facility utilization on overall operating costs. At bioreactor volumes <1,000 L, clarification using multi-stage depth filtration offers cost savings compared to clarification using centrifugation. For bioreactor volumes >5,000 L, clarification using centrifugation followed by depth filtration offers significant cost savings. For bioreactor volumes of ∼ 2,000 L, clarification costs are similar between depth filtration and centrifugation. At this scale, factors including facility utilization, available capital, ease of process development, implementation timelines, and process performance characterization play an important role in clarification technology selection. In the case study presented, a multi-product facility selected multi-stage depth filtration for cell culture clarification at the 500 and 2,000 L scales of operation. Facility implementation timelines, process development activities, equipment commissioning and validation, scale-up effects, and process robustness are examined. © 2013 American Institute of Chemical Engineers.

  5. Dynamic Response and Residual Helmet Liner Crush Using Cadaver Heads and Standard Headforms.

    PubMed

    Bonin, S J; Luck, J F; Bass, C R; Gardiner, J C; Onar-Thomas, A; Asfour, S S; Siegmund, G P

    2017-03-01

    Biomechanical headforms are used for helmet certification testing and reconstructing helmeted head impacts; however, their biofidelity and direct applicability to human head and helmet responses remain unclear. Dynamic responses of cadaver heads and three headforms, and residual foam liner deformations, were compared during motorcycle helmet impacts. Instrumented, helmeted heads/headforms were dropped onto the forehead region against an instrumented flat anvil at 75, 150, and 195 J. Helmets were CT scanned to quantify maximum liner crush depth and crush volume. General linear models were used to quantify the effect of head type and impact energy on linear acceleration, head injury criterion (HIC), force, maximum liner crush depth, and liner crush volume; regression models were used to quantify the relationship between acceleration and both maximum crush depth and crush volume. The cadaver heads generated larger peak accelerations than all three headforms, larger HICs than the International Organization for Standardization (ISO) headform, larger forces than the Hybrid III and ISO headforms, larger maximum crush depths than the ISO headform, and larger crush volumes than the DOT headform. These significant differences between the cadaver heads and headforms need to be accounted for when attempting to estimate an impact exposure using a helmet's residual crush depth or volume.

  6. Lake and wetland ecosystem services measuring water storage and local climate regulation

    NASA Astrophysics Data System (ADS)

    Wong, Christina P.; Jiang, Bo; Bohn, Theodore J.; Lee, Kai N.; Lettenmaier, Dennis P.; Ma, Dongchun; Ouyang, Zhiyun

    2017-04-01

    Developing interdisciplinary methods to measure ecosystem services is a scientific priority; however, progress remains slow, in part because we lack ecological production functions (EPFs) to quantitatively link ecohydrological processes to human benefits. In this study, we tested a new approach, combining a process-based model with regression models, to create EPFs to evaluate water storage and local climate regulation from a green infrastructure project on the Yongding River in Beijing, China. Seven artificial lakes and wetlands were established to improve local water storage and human comfort; evapotranspiration (ET) regulates both services. Managers want to minimize the trade-off between water losses and cooling to sustain water supplies while lowering the heat index (HI) to improve human comfort. We selected human benefit indicators using water storage targets and Beijing's HI, and used the Variable Infiltration Capacity model to determine the change in ET from the new ecosystems. We created EPFs to quantify the ecosystem services as marginal values [Δfinal ecosystem service/Δecohydrological process]: (1) Δwater loss (lake evaporation/volume)/Δdepth and (2) Δsummer HI/ΔET. We estimate the new ecosystems increased local ET by 0.7 mm/d (20.3 W/m2) on the Yongding River. However, ET rates are causing water storage shortfalls while producing no improvements in human comfort. The shallow lakes/wetlands are vulnerable to drying when inflow rates fluctuate, and low depths lead to higher evaporative losses, causing water storage shortfalls with minimal cooling effects. We recommend managers make the lakes deeper to increase water storage, and plant shade trees to improve human comfort in the parks.

  7. Velocity gradients and reservoir volumes lessons in computational sensitivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, P.W.

    1995-12-31

    The sensitivity of reservoir volume estimates derived from depth-converted geophysical time maps to the velocity gradients employed is investigated through a simple model study. The computed volumes are disconcertingly sensitive to gradients, both horizontal and vertical. The need for an accurate method of time-to-depth conversion is well demonstrated by the model study, in which errors in velocity are magnified 40-fold in the computation of the volume. Thus, if +/- 10% accuracy in the volume is desired, we must be able to estimate the velocity at the water contact with 0.25% accuracy. Put another way, if the velocity is 8000 feet per second at the well, then we have only +/- 20 feet per second of leeway in estimating the velocity at the water contact. Very moderate horizontal and vertical gradients would typically indicate a velocity change of a few hundred feet per second if they are in the same direction. Clearly the interpreter needs to be very careful. A methodology is demonstrated which takes into account all the information that is available: velocities, tops, depositional and lithologic spatial patterns, and common sense. It is assumed that, through appropriate use of check shot and other time-depth information, the interpreter has correctly tied the reflection picks to the well tops. Such ties are ordinarily too soft for direct time-depth conversion to give adequate depth ties. The proposed method uses a common compaction law as its basis and incorporates time picks, tops and stratigraphic maps into the depth conversion process. The resulting depth map ties the known well tops in an optimum fashion.
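
    The sensitivity figures quoted above reduce to a one-line calculation; the snippet below simply restates the abstract's numbers (40-fold magnification, ±10% volume target, 8000 ft/s well velocity) rather than re-deriving them from a model.

    ```python
    magnification = 40.0        # volume error / velocity error, as quoted in the abstract

    volume_tolerance = 0.10                                # desired +/-10% volume accuracy
    velocity_tolerance = volume_tolerance / magnification  # -> +/-0.25% velocity accuracy
    v_well = 8000.0                                        # ft/s measured at the well

    print(f"required velocity accuracy: +/-{velocity_tolerance:.2%}")
    print(f"allowable leeway at the contact: +/-{velocity_tolerance * v_well:.0f} ft/s")
    ```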

  8. Magnetotelluric Detection Thresholds as a Function of Leakage Plume Depth, TDS and Volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X.; Buscheck, T. A.; Mansoor, K.

    We conducted a synthetic magnetotelluric (MT) data analysis to establish a set of specific thresholds of plume depth, TDS concentration and volume for detection of brine and CO2 leakage from legacy wells into shallow aquifers, in support of Strategic Monitoring Subtask 4.1 of the US DOE National Risk Assessment Partnership (NRAP Phase II), which is to develop geophysical forward modeling tools. A total of 900 synthetic MT data sets span 9 plume depths, 10 TDS concentrations and 10 plume volumes. The monitoring protocol consisted of 10 MT stations in a 2×5 grid laid out along the flow direction. We model the MT response in the audio frequency range of 1 Hz to 10 kHz with a 50 Ωm baseline resistivity and a maximum depth of 2000 m. Scatter plots show the MT detection thresholds for a trio of plume depth, TDS concentration and volume. Plumes with a large volume and high TDS located at a shallow depth produce a strong MT signal. We demonstrate that the MT method with surface-based sensors can detect a brine and CO2 plume so long as the plume depth, TDS concentration and volume are above the thresholds. However, it is unlikely to detect a plume at a depth greater than 1000 m with a change in TDS concentration smaller than 10%. Simulated aquifer impact data based on the Kimberlina site provide a more realistic view of the leakage plume distribution than the rectangular synthetic plumes in this sensitivity study, and they will be used to estimate MT responses over simulated brine and CO2 plumes and to evaluate the leakage detectability. Integration of the simulated aquifer impact data and the MT method into the NRAP DREAM tool may provide an optimized MT survey configuration for MT data collection. This study presents a viable approach for sensitivity studies of geophysical monitoring methods for leakage detection, and the results allow rapid assessment of leakage detectability.

  9. Spatial characteristics of observed precipitation fields: A catalog of summer storms in Arizona, Volume 1

    NASA Technical Reports Server (NTRS)

    Fennessey, N. M.; Eagleson, P. S.; Qinliang, W.; Rodrigues-Iturbe, I.

    1986-01-01

    Eight years of summer raingage observations are analyzed for a dense, 93-gage network operated by the U.S. Department of Agriculture, Agricultural Research Service, in their 150 sq km Walnut Gulch catchment near Tucson, Arizona. Storms are defined by the total depths collected at each raingage during any noon-to-noon period for which depth was recorded at any of the gages. For each of the resulting 428 storms, the 93 gage depths are interpolated onto a dense grid and the resulting random field is analyzed. Presented are: storm depth isohyets at 2 mm contour intervals, the first three moments of point storm depth, the spatial correlation function, the spatial variance function, and the spatial distribution of total rainstorm depth.

  10. Manned systems utilization analysis (study 2.1). Volume 3: LOVES computer simulations, results, and analyses

    NASA Technical Reports Server (NTRS)

    Stricker, L. T.

    1975-01-01

    The LOVES computer program was employed to analyze the geosynchronous portion of NASA's 1973 automated satellite mission model from 1980 to 1990. The objectives of the analyses were: (1) to demonstrate the capability of the LOVES code to provide the depth and accuracy of data required to support the analyses; and (2) to trade off the concept of space servicing automated satellites composed of replaceable modules against the concept of replacing expendable satellites upon failure. The computer code proved to be an invaluable tool in analyzing the logistic requirements of the various test cases required in the tradeoff. It is indicated that the concept of space servicing offers the potential for substantial savings in the cost of operating automated satellite systems.

  11. An order insertion scheduling model of logistics service supply chain considering capacity and time factors.

    PubMed

    Liu, Weihua; Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

    Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), which disturbs normal time scheduling, especially in the environment of mass-customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process, and then establishes an order insertion scheduling model of the LSSC with service capacity and time factors considered. This model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. In order to verify the viability and effectiveness of our model, a specific example is numerically analyzed. Some interesting conclusions are obtained. First, as the completion time delay coefficient permitted by customers increases, the possible inserted order volume first increases and then tends to stabilize. Second, supply chain performance is best when the volume of the inserted order is equal to the surplus volume of the normal operation capacity in the mass service process. Third, the larger the normal operation capacity in the mass service process is, the bigger the possible inserted order's volume will be. Moreover, compared to increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful.

  12. Prediction of Cavitation Depth in an Al-Cu Alloy Melt with Bubble Characteristics Based on Synchrotron X-ray Radiography

    NASA Astrophysics Data System (ADS)

    Huang, Haijun; Shu, Da; Fu, Yanan; Zhu, Guoliang; Wang, Donghong; Dong, Anping; Sun, Baode

    2018-06-01

    The size of the cavitation region is a key parameter for estimating the metallurgical effect of ultrasonic melt treatment (UST) on preferential structure refinement. We present a simple numerical model to predict the characteristic length of the cavitation region, termed the cavitation depth, in a metal melt. The model is based on wave propagation with acoustic attenuation caused by cavitation bubbles, which depends on bubble characteristics and ultrasonic intensity. In situ synchrotron X-ray imaging of cavitation bubbles has been used to quantitatively measure the size of the cavitation region and the volume fraction and size distribution of cavitation bubbles in an Al-Cu melt. The results show that cavitation bubbles maintain a log-normal size distribution, and the volume fraction of cavitation bubbles obeys a tanh function of the applied ultrasonic intensity. Using the experimental values of bubble characteristics as input, the predicted cavitation depth agrees well with observations except for a slight deviation at higher acoustic intensities. Further analysis shows that increases in bubble volume fraction and bubble size both lead to higher attenuation by cavitation bubbles and, hence, a smaller cavitation depth. The current model offers a guideline for implementing UST, especially for structural refinement.
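
    A hedged sketch of the kind of attenuation balance the abstract describes: the bubble volume fraction follows a tanh of the local intensity, the attenuation coefficient grows with bubble content, and the cavitation depth is read off where the intensity falls below a threshold. All constants below (threshold, tanh scale, attenuation proportionality) are placeholders, not values from the study.

    ```python
    import numpy as np

    def cavitation_depth(I0, I_th=1.0e4, f_max=0.01, I_ref=5.0e4,
                         k_atten=2.0e3, dx=1.0e-4, x_max=0.2):
        """March the wave into the melt; return depth (m) where intensity < I_th."""
        I, x = I0, 0.0
        while x < x_max and I > I_th:
            f_v = f_max * np.tanh(I / I_ref)   # bubble volume fraction ~ tanh(intensity)
            alpha = k_atten * f_v              # attenuation grows with bubble content (assumed linear)
            I *= np.exp(-2.0 * alpha * dx)     # intensity decay over one spatial step
            x += dx
        return x

    for I0 in (2e4, 5e4, 1e5, 2e5):            # illustrative source intensities, W/m^2
        print(f"I0 = {I0:.0e} W/m^2 -> cavitation depth ~ {cavitation_depth(I0)*1e3:.0f} mm")
    ```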

  13. Prediction of Cavitation Depth in an Al-Cu Alloy Melt with Bubble Characteristics Based on Synchrotron X-ray Radiography

    NASA Astrophysics Data System (ADS)

    Huang, Haijun; Shu, Da; Fu, Yanan; Zhu, Guoliang; Wang, Donghong; Dong, Anping; Sun, Baode

    2018-04-01

    The size of the cavitation region is a key parameter for estimating the metallurgical effect of ultrasonic melt treatment (UST) on preferential structure refinement. We present a simple numerical model to predict the characteristic length of the cavitation region, termed the cavitation depth, in a metal melt. The model is based on wave propagation with acoustic attenuation caused by cavitation bubbles, which depends on bubble characteristics and ultrasonic intensity. In situ synchrotron X-ray imaging of cavitation bubbles has been used to quantitatively measure the size of the cavitation region and the volume fraction and size distribution of cavitation bubbles in an Al-Cu melt. The results show that cavitation bubbles maintain a log-normal size distribution, and the volume fraction of cavitation bubbles obeys a tanh function of the applied ultrasonic intensity. Using the experimental values of bubble characteristics as input, the predicted cavitation depth agrees well with observations except for a slight deviation at higher acoustic intensities. Further analysis shows that increases in bubble volume fraction and bubble size both lead to higher attenuation by cavitation bubbles and, hence, a smaller cavitation depth. The current model offers a guideline for implementing UST, especially for structural refinement.

  14. Salient regions detection using convolutional neural networks and color volume

    NASA Astrophysics Data System (ADS)

    Liu, Guang-Hai; Hou, Yingkun

    2018-03-01

    Convolutional neural networks are an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on the LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues, and then depth cues and color volume are integrated for saliency detection following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on the MSRA1000 and ECSSD datasets.

  15. Magma ocean formation due to giant impacts

    NASA Technical Reports Server (NTRS)

    Tonks, W. B.; Melosh, H. J.

    1993-01-01

    The thermal effects of giant impacts are studied by estimating the melt volume generated by the initial shock wave and corresponding magma ocean depths. Additionally, the effects of the planet's initial temperature on the generated melt volume are examined. The shock pressure required to completely melt the material is determined using the Hugoniot curve plotted in pressure-entropy space. Once the melting pressure is known, an impact melting model is used to estimate the radial distance melting occurred from the impact site. The melt region's geometry then determines the associated melt volume. The model is also used to estimate the partial melt volume. Magma ocean depths resulting from both excavated and retained melt are calculated, and the melt fraction not excavated during the formation of the crater is estimated. The fraction of a planet melted by the initial shock wave is also estimated using the model.

  16. Derivation and Validation of Supraglacial Lake Volumes on the Greenland Ice Sheet from High-Resolution Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Moussavi, Mahsa S.; Abdalati, Waleed; Pope, Allen; Scambos, Ted; Tedesco, Marco; MacFerrin, Michael; Grigsby, Shane

    2016-01-01

    Supraglacial meltwater lakes on the western Greenland Ice Sheet (GrIS) are critical components of its surface hydrology and surface mass balance, and they also affect its ice dynamics. Estimates of lake volume, however, are limited by the availability of in situ measurements of water depth, which in turn also limits the assessment of remotely sensed lake depths. Given the logistical difficulty of collecting physical bathymetric measurements, methods relying upon in situ data are generally restricted to small areas and thus their application to large-scale studies is difficult to validate. Here, we produce and validate spaceborne estimates of supraglacial lake volumes across a relatively large area (1250 km^2) of west Greenland's ablation region using data acquired by the WorldView-2 (WV-2) sensor, making use of both its stereo-imaging capability and its meter-scale resolution. We employ spectrally derived depth retrieval models, which are based either on absolute reflectance (single-channel model) or on a ratio of spectral reflectances in two bands (dual-channel model). These models are calibrated using WV-2 multispectral imagery acquired early in the melt season and depth measurements from a high-resolution WV-2 DEM over the same lake basins when devoid of water. The calibrated models are then validated with different lakes in the area, for which we determined depths. Lake depth estimates based on measurements recorded in WV-2's blue (450-510 nm), green (510-580 nm), and red (630-690 nm) bands and dual-channel modes (blue/green, blue/red, and green/red band combinations) had near-zero bias, an average root-mean-squared deviation of 0.4 m (relative to post-drainage DEMs), and an average volumetric error of <1%. The approach outlined in this study - image-based calibration of depth-retrieval models - significantly improves spaceborne supraglacial bathymetry retrievals, which are completely independent from in situ measurements.
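
    A minimal sketch of the image-based calibration step described above, with synthetic stand-in data: per-pixel reflectance in two bands is regressed against depths taken from a post-drainage DEM of the same basin, using the dual-channel (band-ratio) form. The coefficients and noise levels are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic calibration set: "true" pixel depths from an empty-basin DEM plus
    # reflectances in two bands that decay with depth (made-up optical constants).
    depth_dem = rng.uniform(0.5, 8.0, 200)                                   # m
    R_blue  = 0.05 + 0.40 * np.exp(-0.25 * depth_dem) + rng.normal(0, 0.005, 200)
    R_green = 0.05 + 0.40 * np.exp(-0.45 * depth_dem) + rng.normal(0, 0.005, 200)

    # Dual-channel model: depth ~ m1 * ln(R_blue / R_green) + m0
    x = np.log(R_blue / R_green)
    m1, m0 = np.polyfit(x, depth_dem, 1)

    depth_hat = m1 * x + m0
    rmsd = np.sqrt(np.mean((depth_hat - depth_dem) ** 2))
    print(f"m1 = {m1:.2f}, m0 = {m0:.2f}, RMSD = {rmsd:.2f} m")
    ```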

  17. Geodetic imaging: Reservoir monitoring using satellite interferometry

    USGS Publications Warehouse

    Vasco, D.W.; Wicks, C.; Karasaki, K.; Marques, O.

    2002-01-01

    Fluid fluxes within subsurface reservoirs give rise to surface displacements, particularly over periods of a year or more. Observations of such deformation provide a powerful tool for mapping fluid migration within the Earth, providing new insights into reservoir dynamics. In this paper we use Interferometric Synthetic Aperture Radar (InSAR) range changes to infer subsurface fluid volume strain at the Coso geothermal field. Furthermore, we conduct a complete model assessment, using an iterative approach to compute model parameter resolution and covariance matrices. The method is a generalization of a Lanczos-based technique which allows us to include fairly general regularization, such as roughness penalties. We find that we can resolve quite detailed lateral variations in volume strain both within the reservoir depth range (0.4-2.5 km) and below the geothermal production zone (2.5-5.0 km). The fractional volume change in all three layers of the model exceeds the estimated model parameter uncertainty by a factor of two or more. In the reservoir depth interval (0.4-2.5 km), the predominant volume change is associated with northerly and westerly oriented faults and their intersections. However, below the geothermal production zone proper (the depth range 2.5-5.0 km), there is the suggestion that both north- and northeast-trending faults may act as conduits for fluid flow.

  18. Verification and transfer of thermal pollution model. Volume 6: User's manual for 1-dimensional numerical model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Nwadike, E. V.

    1982-01-01

    The six-volume report describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth, e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter.

  19. Elements of an improved model of debris-flow motion

    USGS Publications Warehouse

    Iverson, R.M.

    2009-01-01

    A new depth-averaged model of debris-flow motion describes simultaneous evolution of flow velocity and depth, solid and fluid volume fractions, and pore-fluid pressure. Non-hydrostatic pore-fluid pressure is produced by dilatancy, a state-dependent property that links the depth-averaged shear rate and volumetric strain rate of the granular phase. Pore-pressure changes caused by shearing allow the model to exhibit rate-dependent flow resistance, despite the fact that the basal shear traction involves only rate-independent Coulomb friction. An analytical solution of simplified model equations shows that the onset of downslope motion can be accelerated or retarded by pore-pressure change, contingent on whether dilatancy is positive or negative. A different analytical solution shows that such effects will likely be muted if downslope motion continues long enough, because dilatancy then evolves toward zero, and volume fractions and pore pressure concurrently evolve toward steady states. © 2009 American Institute of Physics.

  20. Morphometric analysis of Russian Plain's small lakes on the base of accurate digital bathymetric models

    NASA Astrophysics Data System (ADS)

    Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana

    2016-04-01

    Lake morphometry refers to the physical factors (shape, size, structure, etc.) that characterize a lake depression. Morphology has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata and shoreline characteristics, are often critical to the investigation of biological, chemical and physical properties of fresh waters, as well as theoretical retention time. Management techniques such as loading capacity for effluents and selective removal of undesirable components of the biota are also dependent on detailed knowledge of the morphometry and flow characteristics. In recent years, lake bathymetric surveys were carried out using an echo sounder with high bottom-depth resolution and GPS coordinate determination. Digital bathymetric models with a 10*10 m spatial grid have been created for several small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated through a combination of various satellite images. We discuss the methodological aspects of creating morphometric models of depths and slopes of small lakes, as well as the advantages of digital models over traditional methods.
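
    The level-surface-volume relation mentioned above follows directly from a gridded bathymetric model; the sketch below computes the remaining area and volume for a few drawdown levels on a synthetic depth grid (the grid itself is random filler, not survey data).

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    depth = np.clip(rng.normal(6.0, 3.0, size=(200, 300)), 0.0, None)  # m, positive down
    cell_area = 10.0 * 10.0                                            # m^2 per grid cell

    def level_area_volume(depth, cell_area, drawdown):
        """Remaining water area and volume when the surface is lowered by `drawdown` m."""
        wet = depth > drawdown
        area = wet.sum() * cell_area
        volume = np.sum(depth[wet] - drawdown) * cell_area
        return area, volume

    for drawdown in (0.0, 2.0, 4.0):
        a, v = level_area_volume(depth, cell_area, drawdown)
        print(f"drawdown {drawdown:3.1f} m: area {a/1e6:.3f} km^2, volume {v/1e6:.2f} x 10^6 m^3")
    ```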

  1. An Order Insertion Scheduling Model of Logistics Service Supply Chain Considering Capacity and Time Factors

    PubMed Central

    Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

    Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), which disturbs normal time scheduling, especially in the environment of mass-customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process, and then establishes an order insertion scheduling model of the LSSC with service capacity and time factors considered. This model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. In order to verify the viability and effectiveness of our model, a specific example is numerically analyzed. Some interesting conclusions are obtained. First, as the completion time delay coefficient permitted by customers increases, the possible inserted order volume first increases and then tends to stabilize. Second, supply chain performance is best when the volume of the inserted order is equal to the surplus volume of the normal operation capacity in the mass service process. Third, the larger the normal operation capacity in the mass service process is, the bigger the possible inserted order's volume will be. Moreover, compared to increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful. PMID:25276851

  2. Evaluation of a near-infrared-type contrast medium extravasation detection system using a swine model.

    PubMed

    Ishihara, Toshihiro; Kobayashi, Tatsushi; Ikeno, Naoya; Hayashi, Takayuki; Sakakibara, Masahiro; Niki, Noboru; Satake, Mitsuo; Moriyama, Noriyuki

    2014-01-01

    To refine the development of and evaluate a near-infrared (NIR) extravasation detection system and its ability to detect extravasation during a contrast-enhanced computed tomography (CT) examination. The NIR extravasation detection system projects NIR light through the surface of the skin and then, using its sensory system, monitors changes in the amount of NIR light reflected, which varies based on absorption properties. Seven female pigs were used to evaluate the contrast media extravasation detection system, using a 20-gauge intravenous catheter injected at a rate of 1 mL/s into 4 different locations just under the skin in the thigh section. Using 3-dimensional CT images, we evaluated the extravasations in terms of time and volume, depth and volume, and finally depth and time to detection. We confirmed that the NIR light, at a 950-nm wavelength, used by the extravasation detection system is well absorbed by contrast media, making changes easy to detect. The average time to detect an extravasation was 2.05 seconds at a depth of 2.0 mm below the skin with a volume of 1.3 mL, 2.57 seconds at a depth between 2.1 and 5 mm below the skin with a volume of 3.47 mL, and 10.5 seconds for depths greater than 5.1 mm with a volume of 11.1 mL. The detection accuracy deteriorated significantly when the depth exceeded 5.0 mm (Tukey-Kramer, P < 0.05). CONCLUSIONS: The extravasation detection system using NIR has a high level of detection sensitivity. The sensitivity enables the system to detect extravasation at depths less than 2 mm with a volume of 1.5 mL and at depths less than 5 mm with a volume of 3.5 mL. The extravasation detection system could be suitable for use during examinations.

  3. Verification and transfer of thermal pollution model. Volume 2: User's manual for 3-dimensional free-surface model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Tuann, S. Y.; Lee, C. R.

    1982-01-01

    The six-volume report describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth, e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions.

  4. Cryomagma Ascent on Europa

    NASA Astrophysics Data System (ADS)

    Lesage, E.; Massol, H.; Schmidt, F.

    2018-06-01

    This study aims at modeling the cryomagma ascent from a liquid reservoir to Europa's surface. We obtain the eruption duration and emitted cryomagma volume at the surface for different chamber depths and volumes and different cryomagma compositions.

  5. Modeling of electrical impedance tomography to detect breast cancer by finite volume methods

    NASA Astrophysics Data System (ADS)

    Ain, K.; Wibowo, R. A.; Soelistiono, S.

    2017-05-01

    The electrical impedance properties of tissue are an interesting subject of study, because changes in the electrical impedance of organs are related to physiological and pathological conditions. Both physiological and pathological properties are strongly associated with disease information. Several experiments have shown that breast cancer has a lower impedance than normal breast tissue. Thus, impedance-based imaging can be used as alternative equipment to detect breast cancer. This research is carried out by modelling Electrical Impedance Tomography to detect breast cancer using finite volume methods. The research includes development of a mathematical model of the electric potential field by a 2D Finite Volume Method, solving the forward problem, and solving the inverse problem by a linear reconstruction method. The scanning is done with a 16-channel electrode array using the neighboring method to collect data. The scanning is performed at frequencies of 10 kHz and 100 kHz with three numerical objects: an anomaly at the surface, an anomaly at depth, and anomalies at both the surface and at depth. The simulation successfully reconstructed images of functional anomalies of breast cancer at the surface position, at depth, or at a combination of surface and depth.
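
    For context, the neighboring (adjacent) protocol mentioned above pairs each adjacent-electrode current injection with voltage readings from every other adjacent pair; for 16 electrodes this yields 16 x 13 = 208 measurements per frame. The sketch below only enumerates that measurement pattern, not the finite volume forward solver itself.

    ```python
    # Adjacent ("neighboring") measurement pattern for a 16-electrode EIT ring.
    N = 16
    frames = []
    for drive in range(N):
        inject = (drive, (drive + 1) % N)                 # adjacent current pair
        for m in range(N):
            measure = (m, (m + 1) % N)                    # adjacent voltage pair
            if set(measure) & set(inject):
                continue                                  # skip pairs touching a drive electrode
            frames.append((inject, measure))

    print(len(frames))                                    # 208 measurements
    ```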

  6. Numerical investigation of hydrodynamic flow over an AUV moving in the water-surface vicinity considering the laminar-turbulent transition

    NASA Astrophysics Data System (ADS)

    Salari, Mahmoud; Rava, Amin

    2017-09-01

    Nowadays, Autonomous Underwater Vehicles (AUVs) are frequently used for exploring the oceans. The hydrodynamics of AUVs moving in the vicinity of the water surface are significantly different from those at greater depths. In this paper, the hydrodynamic coefficients of an AUV moving close to the free surface are obtained at non-dimensional depths of 0.75, 1, 1.5, 2, and 4D. The Reynolds-Averaged Navier-Stokes Equations (RANS) are discretized using the finite volume approach, and the water-surface effects are modeled using the Volume of Fluid (VOF) method. As the operating speeds of AUVs are usually low, the boundary layer over them is not fully laminar or fully turbulent, so the effect of boundary layer transition from laminar to turbulent flow was considered in the simulations. Two different turbulence/transition models were used: 1) a full-turbulence model, the k-ɛ model, and 2) a turbulence/transition model, Menter's Transition-SST model. The results show that Menter's Transition-SST model has better consistency with experimental results. In addition, the wave-making effects of these bodies are studied at different immersion depths in the sea-surface vicinity and at finite depths. It is observed that the relevant pitch moments and lift coefficients are non-zero for these axisymmetric bodies when they move close to the sea surface. This is not expected at greater depths.

  7. ArcGIS Framework for Scientific Data Analysis and Serving

    NASA Astrophysics Data System (ADS)

    Xu, H.; Ju, W.; Zhang, J.

    2015-12-01

    ArcGIS is a platform for managing, visualizing, analyzing, and serving geospatial data. Scientific data, as part of geospatial data, feature multiple dimensions (X, Y, time, and depth) and large volume. The multidimensional mosaic dataset (MDMD), a newly enhanced data model in ArcGIS, models multidimensional gridded data (e.g. raster or image) as a hypercube and enables ArcGIS to handle large-volume and near-real-time scientific data. Built on top of the geodatabase, the MDMD stores the dimension values and the variables (2D arrays) in a geodatabase table, which allows accessing a slice or slices of the hypercube through a simple query and supports animating changes along the time or vertical dimension using ArcGIS desktop or web clients. Through raster types, the MDMD can manage not only netCDF, GRIB, and HDF formats but also many other formats and satellite data. It is scalable and can handle large data volumes. The parallel geo-processing engine makes data ingestion fast and easy. A raster function, the definition of a raster processing algorithm, is a very important component of the ArcGIS platform for on-demand raster processing and analysis. Scientific data analytics is achieved through the MDMD and raster function templates, which perform on-demand scientific computation with variables ingested in the MDMD; for example, aggregating monthly averages from daily data, computing the total rainfall of a year, calculating the heat index for forecast data, and identifying fishing habitat zones. Additionally, the MDMD with the associated raster function templates can be served through ArcGIS Server as image services, which provide a framework for on-demand server-side computation and analysis, and the published services can be accessed by multiple clients such as ArcMap, ArcGIS Online, JavaScript, REST, WCS, and WMS. This presentation will focus on the MDMD model and raster processing templates. In addition, MODIS land cover, NDFD weather services, and the HYCOM ocean model will be used to illustrate how the ArcGIS platform and the MDMD model can facilitate scientific data visualization and analytics, and how the analysis results can be shared with a wider audience through ArcGIS Online and Portal.

  8. The Volume of Earth's Lakes

    NASA Astrophysics Data System (ADS)

    Cael, B. B.

    How much water do lakes on Earth hold? Global lake volume estimates are scarce, highly variable, and poorly documented. We develop a mechanistic null model for estimating global lake mean depth and volume based on a statistical topographic approach to Earth's surface. The volume-area scaling prediction is accurate and consistent within and across lake datasets spanning diverse regions. We applied these relationships to a global lake area census to estimate global lake volume and depth. The volume of Earth's lakes is 199,000 km3 (95% confidence interval 196,000-202,000 km3). This volume is in the range of historical estimates (166,000-280,000 km3), but the overall mean depth of 41.8 m (95% CI 41.2-42.4 m) is significantly lower than previous estimates (62-151 m). These results highlight and constrain the relative scarcity of lake waters in the hydrosphere and have implications for the role of lakes in global biogeochemical cycles. We also evaluate the size (area) distribution of lakes on Earth compared to expectations from percolation theory. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 2388357.
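
    The lake-by-lake application of a volume-area scaling law reduces to a short computation; the version below uses placeholder coefficients and a synthetic heavy-tailed area census, not the study's fitted relationship or the real global census it was applied to.

    ```python
    import numpy as np

    c, gamma = 0.03, 1.20                    # assumed power law V = c * A**gamma (V in km^3, A in km^2)

    rng = np.random.default_rng(3)
    areas_km2 = rng.pareto(1.05, size=100_000) * 0.01 + 0.001   # synthetic lake-area census

    volumes_km3 = c * areas_km2 ** gamma
    total_volume = volumes_km3.sum()
    mean_depth_m = 1000.0 * total_volume / areas_km2.sum()      # km -> m
    print(f"total volume ~ {total_volume:,.0f} km^3, mean depth ~ {mean_depth_m:.1f} m")
    ```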

  9. Seismic Structures in the Earth's Inner Core Below Southeastern Asia

    NASA Astrophysics Data System (ADS)

    Krasnoshchekov, Dmitry; Kaazik, Petr; Kozlovskaya, Elena; Ovtchinnikov, Vladimir

    2016-05-01

    Documenting seismic heterogeneities in the Earth's inner core (IC) is important for gaining insight into its history and dynamics. A valuable means for studying the properties and spatial structure of such heterogeneities is provided by measurements of body waves refracted in the vicinity of the inner core boundary (ICB). Here, we investigate the eastern hemisphere of the solid core by means of PKPBC-PKPDF differential travel times that sample depths from 140 to 360 km below its boundary. We study 292 polar and 133 equatorial residuals measured over traces that probe roughly the same volume of the IC in both planes. Equatorial residuals show slight spatial variations in the sampled IC volume, mostly below the level of 0.5%, whereas polar residuals are up to three times as large, direction dependent, and can exhibit higher local variations. The measurements reveal fast changes in seismic velocity within a restricted volume of the IC. We interpret the observations in terms of anisotropy and check them against several anisotropy models, a few of which have been found capable of fitting the residual scatter. In particular, we quantify a model in which a dipping discontinuity separates the fully isotropic roof of the IC from its anisotropic body, with the depth of the isotropy-anisotropy transition increasing in the southeast direction from 190 km below Southeastern Asia (off the coast of China) to 350 km beneath Australia. Another acceptable model, cast in terms of localized anisotropic heterogeneities, is valid if the 33 largest polar measurements over rays sampling a small volume below Southeastern Asia and the rest of the polar data are treated separately. This model envisages an almost isotropic eastern hemisphere of the IC at least down to a depth of 360 km below the ICB and constrains the anisotropic volume to the range of north latitudes from 18° to 23°, east longitudes from 125° to 135°, and depths exceeding 170 km. The anisotropy strength in either model is about 2%. Further effective pursuit of the models presents challenges in terms of resolution and coverage and basically requires a significant dataset extension.

  10. Verification and transfer of thermal pollution model. Volume 3: Verification of 3-dimensional rigid-lid model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Nwadike, E. V.; Sinha, S. K.

    1982-01-01

    The six-volume report describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth, e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions. The free-surface model also provides surface height variations with time.

  11. Demonstration of a full volume 3D pre-stack depth migration in the Garden Banks area using massively parallel processor (MPP) technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solano, M.; Chang, H.; VanDyke, J.

    1996-12-31

    This paper describes the implementation and results of portable, production-scale 3D pre-stack Kirchhoff depth migration software. Full-volume pre-stack imaging was applied to a six-million-trace (46.9 Gigabyte) data set from a subsalt play in the Garden Banks area in the Gulf of Mexico. The velocity model building and updating were performed using image depth gathers and an image-driven strategy. After three velocity iterations, depth-migrated sections revealed drilling targets that were not visible in the conventional 3D post-stack time-migrated data set. As expected from the implementation of the migration algorithm, it was found that amplitudes are well preserved and anomalies associated with known reservoirs conform to petrophysical predictions. Image gathers for velocity analysis and the final depth-migrated volume were generated on an 1824-node Intel Paragon at Sandia National Laboratories. The code has been successfully ported to a CRAY T3D and to Unix workstation Parallel Virtual Machine (PVM) environments.

  12. Mars: Crustal pore volume, cryospheric depth, and the global occurrence of groundwater

    NASA Technical Reports Server (NTRS)

    Clifford, Stephen M.

    1987-01-01

    It is argued that most of the Martian hydrosphere resides in a porous outer layer of crust that, based on a lunar analogy, appears to extend to a depth of about 10 km. The total pore volume of this layer is sufficient to store the equivalent of a global ocean of water some 500 to 1500 m deep. Thermal modeling suggests that about 300 to 500 m of water could be stored as ice within the crust. Any excess must exist as groundwater.

  13. Verification and transfer of thermal pollution model. Volume 5: Verification of 2-dimensional numerical model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Nwadike, E. V.

    1982-01-01

    The six-volume report describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth, e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions.

  14. Buoyancy under Control: Underwater Locomotor Performance in a Deep Diving Seabird Suggests Respiratory Strategies for Reducing Foraging Effort

    PubMed Central

    Cook, Timothée R.; Kato, Akiko; Tanaka, Hideji; Ropert-Coudert, Yan; Bost, Charles-André

    2010-01-01

    Background: Because they have air stored in many body compartments, diving seabirds are expected to exhibit efficient behavioural strategies for reducing costs related to buoyancy control. We study the underwater locomotor activity of a deep-diving species from the cormorant family (Kerguelen shag) and report locomotor adjustments to the change of buoyancy with depth. Methodology/Principal Findings: Using accelerometers, we show that during both the descent and ascent phases of dives, shags modelled their acceleration and stroking activity on the natural variation of buoyancy with depth. For example, during the descent phase, birds increased swim speed with depth. But in parallel, and with a decay constant similar to the one in the equation describing the decrease of buoyancy with depth, they decreased foot-stroke frequency exponentially, a behaviour that enables birds to reduce oxygen consumption. During ascent, birds also reduced locomotor cost by ascending passively. We considered the depth at which they started gliding as a proxy for their depth of neutral buoyancy. This depth increased with maximum dive depth. As an explanation for this, we propose that shags adjust their buoyancy to depth by varying the amount of respiratory air they dive with. Conclusions/Significance: Calculations based on known values of stored body oxygen volumes and on deep-diving metabolic rates in avian divers suggest that the variations in the volume of respiratory oxygen associated with respiration-mediated buoyancy control only influence aerobic dive duration moderately. Therefore, we propose that an advantage in cormorants - as in other families of diving seabirds - of respiratory air volume adjustment upon diving could be related less to increasing time of submergence, through an increased volume of body oxygen stores, than to reducing the locomotor costs of buoyancy control. PMID:20352122

  15. An expert system model for mapping tropical wetlands and peatlands reveals South America as the largest contributor.

    PubMed

    Gumbricht, Thomas; Roman-Cuesta, Rosa Maria; Verchot, Louis; Herold, Martin; Wittmann, Florian; Householder, Ethan; Herold, Nadine; Murdiyarso, Daniel

    2017-09-01

    Wetlands are important providers of ecosystem services and key regulators of climate change. They contribute positively to global warming through their greenhouse gas emissions, and negatively through the accumulation of organic material in histosols, particularly in peatlands. Our understanding of wetlands' services is currently constrained by limited knowledge of their distribution, extent, volume, interannual flood variability and disturbance levels. We present an expert system approach to estimate wetland and peatland areas, depths and volumes, which relies on three biophysical indices related to wetland and peat formation: (1) long-term water supply exceeding atmospheric water demand; (2) annually or seasonally water-logged soils; and (3) a geomorphological position where water is supplied and retained. Tropical and subtropical wetland estimates reach 4.7 million km2 (Mkm2). In line with current understanding, the American continent is the major contributor (45%), and Brazil, with its Amazonian interfluvial region, contains the largest tropical wetland area (800,720 km2). Our model suggests, however, unprecedented extents and volumes of peatland in the tropics (1.7 Mkm2 and 7,268 (6,076-7,368) km3), more than threefold current estimates. Unlike current understanding, our estimates suggest that South America and not Asia contributes the most to tropical peatland area and volume (ca. 44% for both), partly related to some as yet unaccounted extended deep deposits but mainly to extended but shallow peat in the Amazon Basin. Brazil leads the peatland area and volume contribution. Asia hosts 38% of both tropical peat area and volume, with Indonesia as the main regional contributor and still the holder of the deepest and most extended peat areas in the tropics. Africa hosts more peat than previously reported, but climatic and topographic contexts leave it as the least peat-forming continent. Our results suggest large biases in our current understanding of the distribution, area and volumes of tropical peat and their continental contributions. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.

  16. Estimating the effect of lung collapse and pulmonary shunt on gas exchange during breath-hold diving: the Scholander and Kooyman legacy.

    PubMed

    Fahlman, A; Hooker, S K; Olszowka, A; Bostrom, B L; Jones, D R

    2009-01-01

    We developed a mathematical model to investigate the effect of lung compression and collapse (pulmonary shunt) on the uptake and removal of O2, CO2 and N2 in blood and tissue of breath-hold diving mammals. We investigated the consequences of pressure (diving depth) and respiratory volume on pulmonary shunt and gas exchange as pressure compressed the alveoli. The model showed good agreement with previous studies of measured arterial O2 tensions (PaO2) from freely diving Weddell seals and measured arterial and venous N2 tensions from captive elephant seals compressed in a hyperbaric chamber. Pulmonary compression resulted in a rapid spike in PaO2 and arterial CO2 tension, followed by cyclical variation with a periodicity determined by Qtot. The model showed that changes in diving lung volume are an efficient behavioural means to adjust the extent of gas exchange with depth. Differing models of lung compression and collapse depth caused major differences in blood and tissue N2 estimates. Our integrated modelling approach contradicted predictions from simple models, and emphasised the complex nature of physiological interactions between circulation, lung compression and gas exchange. Overall, our work suggests the need for caution in interpretation of previous model results based on assumed collapse depths and all-or-nothing lung collapse models.
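
    A toy version of the lung-compression element of such a model: Boyle's-law compression of the diving lung volume, with the rigid airways (dead space) filling preferentially so that alveolar gas, and hence gas exchange, vanishes below some depth. The dead-space fraction and diving lung volume here are illustrative assumptions, not the parameter values of the published model.

    ```python
    def alveolar_fraction(depth_m, dive_lung_volume=1.0, dead_space=0.07):
        """Fraction of the surface lung gas volume left in the alveoli at depth.

        Assumes Boyle's-law compression and preferential filling of rigid airways;
        a return value of 0 corresponds to full lung collapse (complete shunt).
        """
        pressure_atm = 1.0 + depth_m / 10.0            # hydrostatic pressure in atmospheres
        total_gas = dive_lung_volume / pressure_atm    # Boyle's law
        return max(total_gas - dead_space, 0.0)

    for d in (0, 20, 50, 100, 200):
        f = alveolar_fraction(d)
        note = "  (collapsed -> full pulmonary shunt)" if f == 0.0 else ""
        print(f"{d:4d} m: alveolar gas = {f:.3f} of surface volume{note}")
    ```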

  17. Progress Towards a Thermo-Mechanical Magma Chamber Forward Model for Eruption Cycles, Applied to the Columbia River Flood Basalts

    NASA Astrophysics Data System (ADS)

    Karlstrom, L.; Ozimek, C.

    2016-12-01

    Magma chamber modeling has advanced to the stage where it is now possible to develop self-consistent, predictive models that consider mechanical, thermal, and compositional magma evolution through multiple eruptive cycles. We have developed such a thermo-mechanical-chemical model for a laterally extensive sill-like chamber beneath a free surface, to understand physical controls on eruptive products through time at long-lived magmatic centers. This model predicts the relative importance of recharge, eruption, assimilation and fractional crystallization (REAFC, Lee et al., 2013) on evolving chemical composition as a function of mechanical magma chamber stability regimes. We solve for the time evolution of chamber pressure, temperature, gas volume fraction, volume, elemental concentration in the melt, and the crustal temperature field, accounting for moving boundary conditions associated with chamber inflation (and the possibility of coupled chambers at different depths). The density, volume fractions of melt and crystals, crustal assimilation, and the changing viscosity and crustal properties of the wall rock are also tracked, along with the joint solubility of water and CO2. The eventual goal is to develop an efficient forward model to invert for eruptive records at long-lived eruptive centers, where multiple types of data for eruptions are available. As a first step, we apply this model to a new compilation of eruptive data from the Columbia River Flood Basalts (CRFB), which erupted 210,000 km3 from feeder dikes in Washington, Oregon and Idaho between 16.9 and 6 Ma. Data include volumes, timing and geochemical composition of eruptive units, along with seismic surveys and clinopyroxene geobarometry that constrain depth of storage through time. We are performing a suite of simulations varying model input parameters such as mantle melt rate, emplacement depth, wall rock composition and rheology, and volatile content to explain the volume, eruption timescales, and chemical characteristics of CRFB eruptions. We are particularly interested in whether the large-volume eruptions of the main-phase Grande Ronde basalts were made possible by the development of shallow crustal storage.

  18. Estimation of infiltration and hydraulic resistance in furrow irrigation, with infiltration dependent on flow depth

    USDA-ARS?s Scientific Manuscript database

    The estimation of parameters of a flow-depth dependent furrow infiltration model and of hydraulic resistance, using irrigation evaluation data, was investigated. The estimated infiltration parameters are the saturated hydraulic conductivity and the macropore volume per unit area. Infiltration throu...

  19. Numerical Investigation of Influence of Electrode Immersion Depth on Heat Transfer and Fluid Flow in Electroslag Remelting Process

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Cai, Hui; Pan, Liping; He, Zhu; Liu, Shuang; Li, Baokuan

    2016-12-01

    The influence of the electrode immersion depth on the electromagnetic, flow and temperature fields, as well as the solidification progress in an electroslag remelting furnace, has been studied by a transient three-dimensional coupled mathematical model. Maxwell's equations were solved by the electrical potential approach. The Lorentz force and Joule heating were added into the momentum and energy conservation equations as source terms, respectively, and were updated at each time step. The volume of fluid method was invoked to track the motion of the metal droplet and slag-metal interface. The solidification was modeled by an enthalpy-porosity formulation. An experiment was carried out to validate the model. The total amount of Joule heating decreases from 2.13 × 10⁵ W to 1.86 × 10⁵ W when the electrode immersion depth increases from 0.01 m to 0.03 m. The variation of the slag temperature differs from that of the Joule heating. The volume average temperature rises from 1856 K to 1880 K when the immersion depth increases from 0.01 m to 0.02 m, and then drops to 1869 K as the immersion depth increases further to 0.03 m. As a result, the deepest metal pool, which is around 0.03 m, is formed when the immersion depth is 0.02 m.

  20. Synthetic resistivity calculations for the canonical depth-to-bedrock problem: A critical examination of the thin interbed problem and electrical equivalence theories

    NASA Astrophysics Data System (ADS)

    Weiss, C. J.; Knight, R.

    2009-05-01

    One of the key factors in the sensible inference of subsurface geologic properties from both field and laboratory experiments is the ability to quantify the linkages between the inherently fine-scale structures, such as bedding planes and fracture sets, and their macroscopic expression through geophysical interrogation. Central to this idea is the concept of a "minimal sampling volume" over which a given geophysical method responds to an effective medium property whose value is dictated by the geometry and distribution of sub-volume heterogeneities as well as the experiment design. In this contribution we explore the concept of effective resistivity volumes for the canonical depth-to-bedrock problem subject to industry-standard DC resistivity survey designs. Four models representing a sedimentary overburden and flat bedrock interface were analyzed through numerical experiments of six different resistivity arrays. In each of the four models, the sedimentary overburden consists of thinly interbedded resistive and conductive laminations, with equivalent volume-averaged resistivity but differing lamination thickness, geometry, and layering sequence. The numerical experiments show striking differences in the apparent resistivity pseudo-sections which belie the volume-averaged equivalence of the models. These models constitute the synthetic data set offered for inversion in this Back to Basics Resistivity Modeling session and offer the promise to further our understanding of how the sampling volume, as affected by survey design, can be constrained by joint-array inversion of resistivity data.

  1. Gas hydrate volume estimations on the South Shetland continental margin, Antarctic Peninsula

    USGS Publications Warehouse

    Jin, Y.K.; Lee, M.W.; Kim, Y.; Nam, S.H.; Kim, K.J.

    2003-01-01

    Multi-channel seismic data acquired on the South Shetland margin, northern Antarctic Peninsula, show that Bottom Simulating Reflectors (BSRs) are widespread in the area, implying large volumes of gas hydrates. In order to estimate the volume of gas hydrate in the area, interval velocities were determined using a 1-D velocity inversion method and porosities were deduced from their relationship with sub-bottom depth for terrigenous sediments. Because data such as well logs are not available, we made two baseline models for the velocities and porosities of non-gas hydrate-bearing sediments in the area, considering the velocity jump observed at the shallow sub-bottom depth due to joint contributions of gas hydrate and a shallow unconformity. The difference between the results of the two models is not significant. The parameters used to estimate the total volume of gas hydrate in the study area were a total BSR length of 145 km identified on seismic profiles, a 350 m thickness and 15 km width of gas hydrate-bearing sediments, and an average volumetric gas hydrate concentration of 6.3% (based on the second baseline model). Assuming that gas hydrates exist only where BSRs are observed, the total volume of gas hydrates along the seismic profiles in the area is about 4.8 × 10¹⁰ m³ (7.7 × 10¹² m³ of methane at standard temperature and pressure).
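
    The quoted figures can be checked with simple arithmetic, as sketched below. The ~160:1 gas expansion factor used to recover the methane volume at STP is an assumption commonly applied to structure-I methane hydrate and is not stated in the record.

        # Back-of-envelope check of the volume estimate quoted above.
        # The ~160 m^3(STP methane)/m^3(hydrate) expansion factor is an assumption
        # commonly used for structure-I methane hydrate, not stated in the record.

        bsr_length_m  = 145e3   # total length of BSRs along seismic profiles
        width_m       = 15e3    # assumed width of gas hydrate-bearing sediments
        thickness_m   = 350.0   # thickness of gas hydrate-bearing sediments
        concentration = 0.063   # average volumetric hydrate concentration (6.3%)

        hydrate_volume = bsr_length_m * width_m * thickness_m * concentration
        methane_stp    = hydrate_volume * 160.0   # assumed expansion factor

        print(f"hydrate volume ~ {hydrate_volume:.1e} m^3")   # ~4.8e10 m^3
        print(f"methane at STP ~ {methane_stp:.1e} m^3")      # ~7.7e12 m^3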

  2. Knowledge service decision making in business incubators based on the supernetwork model

    NASA Astrophysics Data System (ADS)

    Zhao, Liming; Zhang, Haihong; Wu, Wenqing

    2017-08-01

    As valuable resources for incubating firms, knowledge resources have received gradually increasing attention from all types of business incubators, and business incubators use a variety of knowledge services to stimulate rapid growth in incubating firms. Based on previous research, we identify knowledge transfer and knowledge networking services as the two main forms of knowledge services and further divide knowledge transfer services into knowledge depth services and knowledge breadth services. Then, we construct the business incubators' knowledge supernetwork model, describe the evolution mechanism among heterogeneous agents and utilize a simulation to explore the performance variance of different business incubators' knowledge services. The simulation results show that knowledge stock increases faster when business incubators are able to provide knowledge services to more incubating firms and that the degree of discrepancy in the knowledge stock increases during the process of knowledge growth. Further, knowledge transfer services lead to greater differences in the knowledge structure, while knowledge networking services lead to smaller differences. Regarding the two types of knowledge transfer services, knowledge depth services are more conducive to knowledge growth than knowledge breadth services, but knowledge depth services lead to greater gaps in knowledge stocks and greater differences in knowledge structures. Overall, it is optimal for business incubators to select a single knowledge service or portfolio strategy based on the amount of time and energy expended on the two types of knowledge services.

  3. Mixed sand and gravel beaches: accurate measurement of active layer depth and sediment transport volumes using PIT tagged tracer pebbles

    NASA Astrophysics Data System (ADS)

    Holland, A.; Moses, C.; Sear, D. A.; Cope, S.

    2016-12-01

    As sediments containing significant gravel portions are increasingly used for beach replenishment projects globally, the total number of beaches classified as `mixed sand and gravel' (MSG) increases. Calculations for required replenishment sediment volumes usually assume a uniform layer of sediment transport across and along the beach, but research into active layer (AL) depth has shown variations both across shore and according to sediment size distribution. This study addresses the need for more accurate calculations of sediment transport volumes on MSG beaches by using more precise measurements of AL depth and width, and virtual velocity of tracer pebbles. Variations in AL depth were measured along three main profile lines (from MHWS to MLWN) at Eastoke, Hayling Island (Hampshire, UK). Passive Integrated Transponder (PIT) tagged pebbles were deployed in columns, and their new locations repeatedly surveyed with RFID technology. These data were combined with daily dGPS beach profiles and sediment sampling for detailed analysis of the influence of beach morphodynamics on sediment transport volumes. Data were collected over two consecutive winter seasons: 2014-15 (relatively calm, average wave height <1 m) and 2015-16 (prolonged periods of moderate storminess, wave heights of 1-2 m). The active layer was, on average, 22% of wave height where beach slope (tanβ) is 0.1, with variations noted according to slope angle, sediment distribution, and beach groundwater level. High groundwater levels and a change in sediment proportions in the sandy lower foreshore reduced the AL to 10% of wave height in this area. The disparity in AL depth across the beach profile indicates that traditional models are not accurately representing bulk sediment transport on MSG beaches. It is anticipated that by improving model inputs, beach managers will be better able to predict necessary volumes and sediment grain size proportions of replenishment material for effective management of MSG beaches.
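
    A hedged sketch of the tracer-based bulk-transport estimate implied above, using the standard virtual-velocity approach (transport volume = active layer depth × active width × tracer advection distance); the wave height, active width and virtual velocity below are hypothetical, and the calculation is not necessarily the authors' exact procedure.

        # Sketch of the tracer-based bulk transport estimate implied above
        # (standard virtual-velocity approach; not necessarily the authors' exact
        # calculation). All numbers are hypothetical.

        def active_layer_depth(wave_height_m, fraction=0.22):
            """AL depth as a fraction of wave height (0.22 found for tan(beta)=0.1,
            dropping to ~0.10 on the sandy lower foreshore)."""
            return fraction * wave_height_m

        def transport_volume(al_depth_m, active_width_m, virtual_velocity_m_per_day, days):
            """Bulk volumetric transport = AL depth x active width x tracer advection."""
            return al_depth_m * active_width_m * virtual_velocity_m_per_day * days

        H = 1.5                                    # significant wave height [m]
        d = active_layer_depth(H)                  # ~0.33 m
        Q = transport_volume(d, active_width_m=40.0,
                             virtual_velocity_m_per_day=2.5, days=30)
        print(f"AL depth ~ {d:.2f} m, monthly transport ~ {Q:.0f} m^3")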

  4. Care Models of eHealth Services: A Case Study on the Design of a Business Model for an Online Precare Service.

    PubMed

    van Meeuwen, Dorine Pd; van Walt Meijer, Quirine J; Simonse, Lianne Wl

    2015-03-24

    With a growing population of health care clients in the future, the organization of high-quality and cost-effective service provision becomes an increasing challenge. New online eHealth services are proposed as innovative options for the future. Yet, a major barrier to these services appears to be the lack of new business model designs. Although design efforts generally result in visual models, no such artifacts have been found in the literature on business model design. This paper investigates business model design in eHealth service practices from a design perspective. It adopts a research by design approach and seeks to unravel what characteristics of business models determine an online service and what the important value exchanges are between health professionals and clients. The objective of the study was to analyze the construction of care models in-depth, framing the essential elements of a business model, and design a new care model that structures these elements for the particular context of an online pre-care service in practice. This research employs a qualitative method of an in-depth case study in which different perspectives on constructing a care model are investigated. Data are collected by using the visual business modeling toolkit, designed to cocreate and visualize the business model. The cocreated models are transcribed and analyzed per actor perspective, transactions, and value attributes. We revealed eight new actors in the business model for providing the service. Essential actors are: the intermediary network coordinator connecting companies, the service dedicated information technology specialists, and the service dedicated health specialist. In the transactions for each service provided, we found a certain type of contract, such as a license contract and service contracts for precare services and software products. In addition to efficiency, quality, and convenience, important value attributes appeared to be: timelines, privacy and credibility, availability, pleasantness, and social interaction. Based on the in-depth insights from the actor perspectives, the business model for online precare services is modeled with a visual design. A new care model of the online precare service is designed and compiled from building blocks of the business model. For the construction of a care model, actors, transactions, and value attributes are essential elements. The design of a care model structures these elements in a visual way. Guided by the business modeling toolkit, the care model design artifact is visualized in the context of an online precare service. Important building blocks include: provision of an online flow of information with regular interactions with the client to stimulate self-management of personal health, and a service-dedicated health expert to ensure an increase in the perceived quality of the eHealth service.

  5. Care Models of eHealth Services: A Case Study on the Design of a Business Model for an Online Precare Service

    PubMed Central

    2015-01-01

    Background With a growing population of health care clients in the future, the organization of high-quality and cost-effective service provision becomes an increasing challenge. New online eHealth services are proposed as innovative options for the future. Yet, a major barrier to these services appears to be the lack of new business model designs. Although design efforts generally result in visual models, no such artifacts have been found in the literature on business model design. This paper investigates business model design in eHealth service practices from a design perspective. It adopts a research by design approach and seeks to unravel what characteristics of business models determine an online service and what the important value exchanges are between health professionals and clients. Objective The objective of the study was to analyze the construction of care models in-depth, framing the essential elements of a business model, and design a new care model that structures these elements for the particular context of an online pre-care service in practice. Methods This research employs a qualitative method of an in-depth case study in which different perspectives on constructing a care model are investigated. Data are collected by using the visual business modeling toolkit, designed to cocreate and visualize the business model. The cocreated models are transcribed and analyzed per actor perspective, transactions, and value attributes. Results We revealed eight new actors in the business model for providing the service. Essential actors are: the intermediary network coordinator connecting companies, the service dedicated information technology specialists, and the service dedicated health specialist. In the transactions for each service provided, we found a certain type of contract, such as a license contract and service contracts for precare services and software products. In addition to efficiency, quality, and convenience, important value attributes appeared to be: timelines, privacy and credibility, availability, pleasantness, and social interaction. Based on the in-depth insights from the actor perspectives, the business model for online precare services is modeled with a visual design. A new care model of the online precare service is designed and compiled from building blocks of the business model. Conclusions For the construction of a care model, actors, transactions, and value attributes are essential elements. The design of a care model structures these elements in a visual way. Guided by the business modeling toolkit, the care model design artifact is visualized in the context of an online precare service. Important building blocks include: provision of an online flow of information with regular interactions with the client to stimulate self-management of personal health, and a service-dedicated health expert to ensure an increase in the perceived quality of the eHealth service. PMID:25831094

  6. Intraoperative laser speckle contrast imaging improves the stability of rodent middle cerebral artery occlusion model

    NASA Astrophysics Data System (ADS)

    Yuan, Lu; Li, Yao; Li, Hangdao; Lu, Hongyang; Tong, Shanbao

    2015-09-01

    Rodent middle cerebral artery occlusion (MCAO) model is commonly used in stroke research. Creating a stable infarct volume has always been challenging for technicians due to the variances of animal anatomy and surgical operations. The depth of filament suture advancement strongly influences the infarct volume as well. We investigated the cerebral blood flow (CBF) changes in the affected cortex using laser speckle contrast imaging when advancing suture during MCAO surgery. The relative CBF drop area (CBF50, i.e., the percentage area with CBF less than 50% of the baseline) showed an increase from 20.9% to 69.1% when the insertion depth increased from 1.6 to 1.8 cm. Using the real-time CBF50 marker to guide suture insertion during the surgery, our animal experiments showed that intraoperative CBF-guided surgery could significantly improve the stability of MCAO with a more consistent infarct volume and less mortality.
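
    A minimal numpy sketch of the CBF50 marker described above, i.e. the percentage of the imaged cortex whose flow index falls below 50% of its pre-occlusion baseline; the flow maps here are synthetic placeholders rather than laser speckle data.

        # Minimal sketch of the CBF50 marker described above: the percentage of the
        # imaged cortex whose laser-speckle-derived flow index falls below 50% of
        # its pre-occlusion baseline. Arrays here are synthetic placeholders.

        import numpy as np

        def cbf50(baseline_flow, current_flow, threshold=0.5):
            """Percentage of pixels with relative CBF below `threshold` of baseline."""
            rel = current_flow / np.maximum(baseline_flow, 1e-9)   # avoid divide-by-zero
            return 100.0 * np.mean(rel < threshold)

        rng = np.random.default_rng(0)
        baseline = rng.normal(1.0, 0.05, size=(256, 256))          # baseline flow map
        ischemic = baseline.copy()
        ischemic[:, :150] *= 0.35                                  # simulated MCA territory drop
        print(f"CBF50 = {cbf50(baseline, ischemic):.1f}% of imaged area")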

  7. An appraisal of Indonesia's immense peat carbon stock using national peatland maps: uncertainties and potential losses from conversion

    Treesearch

    Matthew Warren; Kristell Hergoualc' h; J. Boone Kauffman; Daniel Murdiyarso; Randall Kolka

    2017-01-01

    Background: A large proportion of the world's tropical peatlands occur in Indonesia where rapid conversion and associated losses of carbon, biodiversity and ecosystem services have brought peatland management to the forefront of Indonesia's climate mitigation efforts. We evaluated peat volume from two commonly referenced maps of peat distribution and depth...

  8. Collection development at the NOAA Central Library

    NASA Technical Reports Server (NTRS)

    Quillen, Steve R.

    1994-01-01

    The National Oceanic and Atmospheric Administration (NOAA) Central Library collection, approximately one million volumes, incorporates the holdings of its predecessor agencies. Within the library, the collections are filed separately, based on their source and/or classification schemes. The NOAA Central Library provides a variety of services to users, ranging from quick reference and interlibrary loan to in-depth research and online data bases.

  9. Storm Water Management Model Reference Manual Volume II ...

    EPA Pesticide Factsheets

    SWMM is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and generate runoff and pollutant loads. The routing portion of SWMM transports this runoff through a system of pipes, channels, storage/treatment devices, pumps, and regulators. SWMM tracks the quantity and quality of runoff generated within each subcatchment, and the flow rate, flow depth, and quality of water in each pipe and channel during a simulation period comprised of multiple time steps. The reference manual for this edition of SWMM is comprised of three volumes. Volume I describes SWMM’s hydrologic models, Volume II its hydraulic models, and Volume III its water quality and low impact development models. This document provides the underlying mathematics for the hydraulic calculations of the Storm Water Management Model (SWMM).

  10. The Acoustic Model Evaluation Committee (AMEC) Reports. Volume 3. Evaluation of the RAYMODE X Propagation Loss Model. Book 1

    DTIC Science & Technology

    1982-09-01

    and run on the UNIVAC 1108 computer. Other RAYMODE versions were not... single sound speed profile. This model is in extensive fleet usage, supporting... sought for significant disparities. (U) In addition to a sound speed versus depth or temperature versus depth plus a constant salinity value, the program can access historical sound speed data. (U) Taken together, the two accuracy assessment techniques, the Difference and FOM techniques, lead to

  11. Satellite services system analysis study. Volume 2: Satellite and services user model

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Satellite services needs are analyzed. Topics include methodology: a satellite user model; representative servicing scenarios; potential service needs; manned, remote, and automated involvement; and inactive satellites/debris. Satellite and services user model development is considered. Groundrules and assumptions, servicing, events, and sensitivity analysis are included. Selection of reference satellites is also discussed.

  12. Improving health systems performance in low- and middle-income countries: a system dynamics model of the pay-for-performance initiative in Afghanistan.

    PubMed

    Alonge, O; Lin, S; Igusa, T; Peters, D H

    2017-12-01

    System dynamics methods were used to explore effective implementation pathways for improving health systems performance through pay-for-performance (P4P) schemes. A causal loop diagram was developed to delineate primary causal relationships for service delivery within primary health facilities. A quantitative stock-and-flow model was developed next. The stock-and-flow model was then used to simulate the impact of various P4P implementation scenarios on quality and volume of services. Data from the Afghanistan national facility survey in 2012 was used to calibrate the model. The models show that P4P bonuses could increase health workers' motivation leading to higher levels of quality and volume of services. Gaming could reduce or even reverse this desired effect, leading to levels of quality and volume of services that are below baseline levels. Implementation issues, such as delays in the disbursement of P4P bonuses and low levels of P4P bonuses, also reduce the desired effect of P4P on quality and volume, but they do not cause the outputs to fall below baseline levels. Optimal effect of P4P on quality and volume of services is obtained when P4P bonuses are distributed per the health workers' contributions to the services that triggered the payments. Other distribution algorithms such as equal allocation or allocations proportionate to salaries resulted in quality and volume levels that were substantially lower, sometimes below baseline. The system dynamics models served to inform, with quantitative results, the theory of change underlying P4P intervention. Specific implementation strategies, such as prompt disbursement of adequate levels of performance bonus distributed per health workers' contribution to service, increase the likelihood of P4P success. Poorly designed P4P schemes, such as those without an optimal algorithm for distributing performance bonuses and adequate safeguards for gaming, can have a negative overall impact on health service delivery systems. © The Author 2017. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine.
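
    A heavily simplified stock-and-flow sketch (Euler integration) of the causal structure described above, in which bonuses feed a motivation stock that drives service volume and quality while gaming erodes the benefit; this is not the calibrated Afghanistan model, and every parameter below is hypothetical.

        # Illustrative stock-and-flow sketch (Euler integration) of the causal
        # structure described above: P4P bonuses feed a "motivation" stock, which
        # drives service volume and quality; gaming erodes the benefit. This is NOT
        # the calibrated Afghanistan model; all parameters are hypothetical.

        def simulate(months=36, bonus=1.0, delay_months=0, gaming=0.0, dt=1.0):
            motivation, volume, quality = 1.0, 1.0, 1.0   # stocks, baseline = 1.0
            for m in range(months):
                paid = bonus if m >= delay_months else 0.0
                inflow = 0.05 * paid                      # bonus builds motivation
                decay = 0.02 * (motivation - 1.0)         # motivation relaxes to baseline
                motivation += (inflow - decay) * dt
                volume = motivation * (1.0 - 0.5 * gaming)
                quality = motivation * (1.0 - 0.8 * gaming)
            return volume, quality

        for label, kw in [("timely bonus", {}),
                          ("6-month delay", {"delay_months": 6}),
                          ("heavy gaming", {"gaming": 0.9})]:
            v, q = simulate(**kw)
            print(f"{label:>14}: volume={v:.2f} quality={q:.2f} (baseline 1.00)")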

  13. The effect of motorcycle helmet fit on estimating head impact kinematics from residual liner crush.

    PubMed

    Bonin, Stephanie J; Gardiner, John C; Onar-Thomas, Arzu; Asfour, Shihab S; Siegmund, Gunter P

    2017-09-01

    Proper helmet fit is important for optimizing head protection during an impact, yet many motorcyclists wear helmets that do not properly fit their heads. The goals of this study are i) to quantify how a mismatch in headform size and motorcycle helmet size affects headform peak acceleration and head injury criteria (HIC), and ii) to determine if peak acceleration, HIC, and impact speed can be estimated from the foam liner's maximum residual crush depth or residual crush volume. Shorty-style helmets (4 sizes of a single model) were tested on instrumented headforms (4 sizes) during linear impacts between 2.0 and 10.5 m/s to the forehead region. Helmets were CT scanned to quantify residual crush depth and volume. Separate linear regression models were used to quantify how the response variables (peak acceleration (g), HIC, and impact speed (m/s)) were related to the predictor variables (maximum crush depth (mm), crush volume (cm³), and the difference in circumference between the helmet and headform (cm)). Overall, we found that increasingly oversized helmets reduced peak headform acceleration and HIC for a given impact speed for maximum residual crush depths less than 7.9 mm and residual crush volume less than 40 cm³. Below these levels of residual crush, we found that peak headform acceleration, HIC, and impact speed can be estimated from a helmet's residual crush. Above these crush thresholds, large variations in headform kinematics are present, possibly related to densification of the foam liner during the impact. Copyright © 2017 Elsevier Ltd. All rights reserved.
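
    A sketch of the kind of linear model the study describes, regressing peak headform acceleration on maximum residual crush depth and the helmet-headform circumference mismatch with ordinary least squares; the data and coefficients below are synthetic, not the study's measurements.

        # Sketch of the kind of linear model described above: peak headform
        # acceleration regressed on maximum residual crush depth and the
        # helmet-headform circumference mismatch. Data are synthetic, not the
        # study's measurements.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 40
        crush_depth = rng.uniform(1.0, 7.9, n)          # mm (below the 7.9 mm threshold)
        fit_mismatch = rng.uniform(0.0, 6.0, n)         # cm of circumference difference
        peak_g = 30.0 + 22.0 * crush_depth - 6.0 * fit_mismatch + rng.normal(0, 8, n)

        X = np.column_stack([np.ones(n), crush_depth, fit_mismatch])
        coef, *_ = np.linalg.lstsq(X, peak_g, rcond=None)
        pred = X @ coef
        r2 = 1 - np.sum((peak_g - pred) ** 2) / np.sum((peak_g - peak_g.mean()) ** 2)
        print(f"intercept={coef[0]:.1f} g, {coef[1]:.1f} g/mm crush, "
              f"{coef[2]:.1f} g/cm mismatch, R^2={r2:.2f}")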

  14. ENHANCING HSPF MODEL CHANNEL HYDRAULIC REPRESENTATION

    EPA Science Inventory

    The Hydrological Simulation Program - FORTRAN (HSPF) is a comprehensive watershed model, which employs depth-area-volume-flow relationships known as hydraulic function table (FTABLE) to represent stream channel cross-sections and reservoirs. An accurate FTABLE determination for a...
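
    A hedged sketch of what an FTABLE row contains, built for a hypothetical trapezoidal reach with Manning's equation; the geometry, roughness and slope are assumptions, and this is not HSPF's own routine.

        # Sketch of building an HSPF-style FTABLE (depth-area-volume-flow rows) for
        # a simple trapezoidal reach using Manning's equation. Geometry, roughness
        # and slope are hypothetical; this is not HSPF's own routine.

        def ftable_row(depth, bottom_w=5.0, side_slope=2.0, length=1000.0,
                       n_manning=0.035, bed_slope=0.001):
            top_w = bottom_w + 2.0 * side_slope * depth
            area_xs = (bottom_w + side_slope * depth) * depth        # cross-section [m^2]
            wetted_p = bottom_w + 2.0 * depth * (1.0 + side_slope**2) ** 0.5
            surface_area = top_w * length / 1e4                      # hectares
            volume = area_xs * length / 1e3                          # thousands of m^3
            if depth == 0.0:
                return depth, surface_area, volume, 0.0
            radius = area_xs / wetted_p
            q = (1.0 / n_manning) * area_xs * radius ** (2.0 / 3.0) * bed_slope ** 0.5
            return depth, surface_area, volume, q                    # q in m^3/s

        print(f"{'depth':>6}{'area(ha)':>10}{'vol(10^3 m^3)':>15}{'Q(m^3/s)':>10}")
        for d in (0.0, 0.5, 1.0, 2.0, 3.0):
            row = ftable_row(d)
            print(f"{row[0]:6.1f}{row[1]:10.2f}{row[2]:15.1f}{row[3]:10.1f}")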

  15. Projects That Matter: Concepts and Models for Service-Learning in Engineering. AAHE's Series on Service-Learning in the Disciplines.

    ERIC Educational Resources Information Center

    Tsang, Edmund, Ed.

    This volume, the 14th in a series of monographs on service learning and academic disciplinary areas, is designed as a practical guide for faculty seeking to integrate service learning into an engineering course. The volume also deals with larger issues in engineering education and provides case studies of service-learning courses. The articles…

  16. Acting Locally: Concepts and Models for Service-Learning in Environmental Studies. AAHE's Series on Service-Learning in the Disciplines.

    ERIC Educational Resources Information Center

    Ward, Harold, Ed.

    This volume is part of a series of 18 monographs on service learning and the academic disciplines. The essays in this volume focus on service-learning in a wide range of environmental studies. The Introduction, "Why is Service-Learning So Pervasive in Environmental Studies Programs?" was written by Harold Ward. The chapters in Part 1…

  17. Joint application of local earthquake tomography and Curie depth point analysis give evidence of magma presence below the geothermal field of Central Greece.

    NASA Astrophysics Data System (ADS)

    Karastathis, Vassilios; Papoulia, Joanna; di Fiore, Boris; Makris, Jannis; Tsambas, Anestis; Stampolidis, Alexandros; Papadopoulos, Gerassimos

    2010-05-01

    Along the coast of the North Evian Gulf, Central Greece, there are significant geothermal sites and thermal springs such as Aedipsos, Yaltra, Lichades, Ilia, Kamena Vourla and Thermopylae, but also volcanoes of Quaternary-Pleistocene age such as Lichades and Vromolimni. Since the deep origin of these local volcanoes and geothermal fields, and their relation to those of the wider region, have not yet been clarified in detail, we attempted a deep structure investigation by conducting a 3D local earthquake tomography study in combination with Curie depth analysis of aeromagnetic data. A seismographic network of 23 portable land stations and 7 OBS was deployed in the area of the North Evian Gulf to record the microseismic activity for a 4-month period. Two thousand events were located with ML 0.7 to 4.5. To build the 3D seismic velocity structure for the investigation area, we implemented traveltime inversion with the algorithm SIMULPS14 on the 540 best located events. The code performed simultaneous inversion of the model parameters Vp, Vp/Vs and hypocenter locations. In order to select a reliable 1D starting model for the tomography inversion, the seismic arrivals were first inverted with the algorithm VELEST (minimum 1D velocity model). The values of the damping factor were chosen with the aid of the trade-off curve between model variance and data variance. Six horizontal slices of the 3D P-wave velocity model and the corresponding slices of the Poisson ratio were constructed. We also set a reliability limit on the sections based on a comparison between the graphical representations of the diagonal elements of the resolution matrix (RDE) and the recovery ability of "checkerboard" models. To estimate the Curie depth point we followed the centroid procedure: the filtered residual dataset of the area was subdivided into 5 square subregions, named C1 to C5, sized 90 × 90 km² and overlapping each other by 70%. In each subregion the radially averaged power spectrum was computed. The slope of the longest-wavelength part of the spectrum for each subregion yields the centroid depth, zo, of the deepest layer of magnetic sources, while the slope of the second longest-wavelength spectral segment yields the depth to the top, zt, of the same layer. Using the formula zb = 2zo - zt, the Curie depth estimate was derived for each subregion C and assigned at its centre. The estimated depths are between 7 and 8.1 km below sea level. The results showed the existence of a low seismic velocity volume with high Poisson ratio at depths greater than 8 km. Since the Curie depth point analysis indicated demagnetization of the material due to high temperatures at the top of this volume, we were led to consider that this volume is related to the presence of a magma chamber. Below the sites of the Quaternary volcanoes of Lichades, Vromolimni and Ag. Ioannis there is a local increase of the seismic velocity above the low velocity anomaly, attributed to a crystallized magma volume below the volcanoes. The coincidence of the spatial distribution of surface geothermal sites and volcanoes with the deep low velocity anomaly strengthens the case for magma presence at this anomaly. The seismic slices at 4 km depth showed that the supply of the thermal springs at the surface is related to the main fault zones of the area.
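
    A tiny worked example of the centroid relation quoted above, zb = 2zo - zt, applied per subregion; the zo and zt values below are hypothetical rather than the study's fitted spectral slopes.

        # Tiny worked example of the centroid relation quoted above, z_b = 2*z_o - z_t,
        # applied per subregion. The z_o (centroid) and z_t (top) values here are
        # hypothetical, not the study's fitted spectral slopes.

        subregions = {                 # name: (centroid depth z_o, top depth z_t), km
            "C1": (4.6, 1.5),
            "C2": (4.9, 1.8),
            "C3": (4.4, 1.2),
        }
        for name, (z_o, z_t) in subregions.items():
            z_b = 2.0 * z_o - z_t      # depth to the bottom of magnetic sources
            print(f"{name}: Curie depth z_b = {z_b:.1f} km below sea level")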

  18. Assessing Learning in Service-Learning Courses through Critical Reflection

    ERIC Educational Resources Information Center

    Molee, Lenore M.; Henry, Mary E.; Sessa, Valerie I.; McKinney-Prupis, Erin R.

    2010-01-01

    The purpose of this study was to describe and examine a model for assessing student learning through reflection in service-learning courses. This model utilized a course-embedded process to frame, facilitate, support, and assess students' depth of learning and critical thinking. Student reflection products in two service-learning courses (a…

  19. ENHANCING HYDROLOGICAL SIMULATION PROGRAM - FORTRAN MODEL CHANNEL HYDRAULIC REPRESENTATION

    EPA Science Inventory

    The Hydrological Simulation Program - FORTRAN (HSPF) is a comprehensive watershed model that employs depth-area-volume-flow relationships known as the hydraulic function table (FTABLE) to represent the hydraulic characteristics of stream channel cross-sections and reservoirs. ...

  20. Establishing a clinical service for the management of sports-related concussions.

    PubMed

    Reynolds, Erin; Collins, Michael W; Mucha, Anne; Troutman-Ensecki, Cara

    2014-10-01

    The clinical management of sports-related concussions is a specialized area of interest with a lack of empirical findings regarding best practice approaches. The University of Pittsburgh Medical Center Sports Concussion Program was the first of its kind; 13 years after its inception, it remains a leader in the clinical management and research of sports-related concussions. This article outlines the essential components of a successful clinical service for the management of sports-related concussions, using the University of Pittsburgh Medical Center Sports Concussion Program as a case example. Drawing on both empirical evidence and anecdotal conclusions from this high-volume clinical practice, this article provides a detailed account of the inner workings of a multidisciplinary concussion clinic with a comprehensive approach to the management of sports-related concussions. A detailed description of the evaluation process and an in-depth analysis of targeted clinical pathways and subtypes of sports-related concussions effectively set the stage for a comprehensive understanding of the assessment, treatment, and rehabilitation model used in Pittsburgh today.

  1. High volume data storage architecture analysis

    NASA Technical Reports Server (NTRS)

    Malik, James M.

    1990-01-01

    A High Volume Data Storage Architecture Analysis was conducted. The results, presented in this report, will be applied to problems of high volume data requirements such as those anticipated for the Space Station Control Center. High volume data storage systems at several different sites were analyzed for archive capacity, storage hierarchy and migration philosophy, and retrieval capabilities. Proposed architectures were solicited from the sites selected for in-depth analysis. Model architectures for a hypothetical data archiving system, for a high speed file server, and for high volume data storage are attached.

  2. Column study of chromium(VI) adsorption from electroplating industry by coconut coir pith.

    PubMed

    Suksabye, Parinda; Thiravetyan, Paitip; Nakbanpote, Woranan

    2008-12-15

    The removal of Cr(VI) from electroplating wastewater by coir pith was investigated in a fixed-bed column. The experiments were conducted to study the effect of important parameters such as bed depth (40-60 cm) and flow rate (10-30 ml min⁻¹). At Ct/C0 = 0.05, the breakthrough volume increased as the flow rate decreased or the bed depth increased, owing to an increase in empty bed contact time (EBCT). The bed depth service time (BDST) model fit the experimental data well in the initial region of the breakthrough curve, while simulation of the whole curve by non-linear regression was effective using the Thomas model. The adsorption capacity estimated from the BDST model decreased with increasing flow rate, from 16.40 mg cm⁻³ (137.91 mg Cr(VI) g⁻¹ coir pith) at a flow rate of 10 ml min⁻¹ to 14.05 mg cm⁻³ (118.20 mg Cr(VI) g⁻¹ coir pith) at a flow rate of 30 ml min⁻¹. At the highest bed depth (60 cm) and the lowest flow rate (10 ml min⁻¹), the maximum adsorption reached 201.47 mg Cr(VI) g⁻¹ adsorbent according to the Thomas model. The column was regenerated by eluting chromium using 2 M HNO3 after the adsorption studies. The desorption of Cr(III) in each of three cycles was about 67-70%. Desorption did not reach 100% in any cycle because Cr(V), formed through the reduction of Cr(VI), remained in the coir pith, possibly bound to glucose in the cellulose fraction; this Cr(V) complex cannot be desorbed into solution. Evidence of a Cr(V) signal was observed by electron spin resonance (ESR) in coir pith, alpha-cellulose and holocellulose extracted from coir pith.
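
    A sketch of fitting the Thomas model to a breakthrough curve by non-linear regression, as mentioned above; the breakthrough data are synthetic, and the units and parameter values are assumptions, not the paper's measurements.

        # Sketch of fitting the Thomas model to a breakthrough curve with non-linear
        # regression, as described above. Data points are synthetic; units assumed:
        # C in mg/L, Q in mL/min, m in g, q0 in mg/g, k_Th in mL/(min*mg).

        import numpy as np
        from scipy.optimize import curve_fit

        C0, Q, m = 100.0, 10.0, 30.0          # inlet conc., flow rate, adsorbent mass

        def thomas(t_min, k_th, q0):
            """Ct/C0 = 1 / (1 + exp(k_th*(q0*m - C0*Q*t/1000)/Q)); volume converted to L."""
            return 1.0 / (1.0 + np.exp(k_th * (q0 * m - C0 * Q * t_min / 1000.0) / Q))

        # synthetic "observed" breakthrough generated from known parameters plus noise
        t_obs = np.linspace(0, 9000, 60)
        rng = np.random.default_rng(2)
        y_obs = thomas(t_obs, 0.05, 150.0) + rng.normal(0, 0.01, t_obs.size)

        (k_fit, q0_fit), _ = curve_fit(thomas, t_obs, y_obs, p0=[0.01, 100.0],
                                       bounds=([1e-4, 10.0], [1.0, 500.0]))
        print(f"fitted k_Th = {k_fit:.3f} mL/(min*mg), q0 = {q0_fit:.1f} mg Cr(VI)/g")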

  3. Design of transcranial magnetic stimulation coils with optimal trade-off between depth, focality, and energy.

    PubMed

    Gomez, Luis J; Goetz, Stefan M; Peterchev, Angel V

    2018-08-01

    Transcranial magnetic stimulation (TMS) is a noninvasive brain stimulation technique used for research and clinical applications. Existent TMS coils are limited in their precision of spatial targeting (focality), especially for deeper targets. This paper presents a methodology for designing TMS coils to achieve optimal trade-off between the depth and focality of the induced electric field (E-field), as well as the energy required by the coil. A multi-objective optimization technique is used for computationally designing TMS coils that achieve optimal trade-offs between E-field focality, depth, and energy (fdTMS coils). The fdTMS coil winding(s) maximize focality (minimize the volume of the brain region with E-field above a given threshold) while reaching a target at a specified depth and not exceeding predefined peak E-field strength and required coil energy. Spherical and MRI-derived head models are used to compute the fundamental depth-focality trade-off as well as focality-energy trade-offs for specific target depths. Across stimulation target depths of 1.0-3.4 cm from the brain surface, the suprathreshold volume can be theoretically decreased by 42%-55% compared to existing TMS coil designs. The suprathreshold volume of a figure-8 coil can be decreased by 36%, 44%, or 46%, for matched, doubled, or quadrupled energy. For matched focality and energy, the depth of a figure-8 coil can be increased by 22%. Computational design of TMS coils could enable more selective targeting of the induced E-field. The presented results appear to be the first significant advancement in the depth-focality trade-off of TMS coils since the introduction of the figure-8 coil three decades ago, and likely represent the fundamental physical limit.

  4. Are PCI Service Volumes Associated with 30-Day Mortality? A Population-Based Study from Taiwan.

    PubMed

    Yu, Tsung-Hsien; Chou, Ying-Yi; Wei, Chung-Jen; Tung, Yu-Chi

    2017-11-09

    The volume-outcome relationship has been discussed for over 30 years; however, the findings are inconsistent. This might be due to the heterogeneity of service volume definitions and categorization methods. This study takes percutaneous coronary intervention (PCI) as an example to examine whether the service volume was associated with PCI 30-day mortality, given different service volume definitions and categorization methods. A population-based, cross-sectional multilevel study was conducted. Two definitions of physician and hospital volume were used: (1) the cumulative PCI volume in a previous year before each PCI; (2) the cumulative PCI volume within the study period. The volume was further treated in three ways: (1) a categorical variable based on the American Heart Association's recommendation; (2) a semi-data-driven categorical variable based on k-means clustering algorithm; and (3) a data-driven categorical variable based on the Generalized Additive Model. The results showed that, after adjusting the patient-, physician-, and hospital-level covariates, physician volume was associated inversely with PCI 30-day mortality, but hospital volume was not, no matter which definitions and categorization methods of service volume were applied. Physician volume is negatively associated with PCI 30-day mortality, but the results might vary because of definition and categorization method.
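
    A sketch of the semi-data-driven categorization mentioned above: k-means clustering of annual physician PCI volumes into low, medium and high groups with scikit-learn; the volume distribution and the choice of three clusters are assumptions.

        # Sketch of the semi-data-driven volume categorization mentioned above:
        # k-means clustering of annual physician PCI volumes into low/medium/high
        # groups. Volumes are synthetic and k = 3 is an assumption.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(3)
        volumes = np.concatenate([rng.poisson(15, 300),    # low-volume operators
                                  rng.poisson(60, 200),    # mid-volume operators
                                  rng.poisson(150, 100)])  # high-volume operators

        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(volumes.reshape(-1, 1))
        order = np.argsort(km.cluster_centers_.ravel())    # relabel as low < mid < high
        labels = np.argsort(order)[km.labels_]
        for g, name in enumerate(["low", "medium", "high"]):
            grp = volumes[labels == g]
            print(f"{name:>6}: n={grp.size:3d}, mean annual volume={grp.mean():5.1f}")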

  5. A Gaia DR2 Mock Stellar Catalog

    NASA Astrophysics Data System (ADS)

    Rybizki, Jan; Demleitner, Markus; Fouesneau, Morgan; Bailer-Jones, Coryn; Rix, Hans-Walter; Andrae, René

    2018-07-01

    We present a mock catalog of Milky Way stars, matching in volume and depth the content of the Gaia data release 2 (GDR2). We generated our catalog using Galaxia, a tool to sample stars from a Besançon Galactic model, together with a realistic 3D dust extinction map. The catalog mimics the complete GDR2 data model and contains most of the entries in the Gaia source catalog: five-parameter astrometry, three-band photometry, radial velocities, stellar parameters, and associated scaled nominal uncertainty estimates. In addition, we supplemented the catalog with extinctions and photometry for non-Gaia bands. This catalog can be used to prepare GDR2 queries in a realistic runtime environment, and it can serve as a Galactic model against which to compare the actual GDR2 data in the space of observables. The catalog is hosted through the virtual observatory GAVO’s Heidelberg data center (http://dc.g-vo.org/tableinfo/gdr2mock.main) service, and thus can be queried using ADQL as for GDR2 data.
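
    A hedged sketch of querying the mock catalog with ADQL through a TAP client; the TAP endpoint URL and the column names are assumptions (the endpoint is presumed to be GAVO's Heidelberg data centre TAP service, and the columns follow the GDR2 data model the catalog is said to mimic).

        # Sketch of querying the mock catalog through GAVO's TAP service with ADQL.
        # The TAP endpoint URL and column names are assumptions: the endpoint is
        # presumed to be GAVO's Heidelberg data centre TAP service, and the columns
        # follow the GDR2 data model the record says the catalog mimics.

        import pyvo

        tap = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
        adql = """
            SELECT TOP 10 source_id, ra, dec, parallax, phot_g_mean_mag
            FROM gdr2mock.main
            WHERE phot_g_mean_mag < 12
            ORDER BY phot_g_mean_mag
        """
        result = tap.search(adql)
        print(result.to_table())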

  6. Miniature microwave applicator for murine bladder hyperthermia studies.

    PubMed

    Salahi, Sara; Maccarini, Paolo F; Rodrigues, Dario B; Etienne, Wiguins; Landon, Chelsea D; Inman, Brant A; Dewhirst, Mark W; Stauffer, Paul R

    2012-01-01

    Novel combinations of heat with chemotherapeutic agents are often studied in murine tumour models. Currently, no device exists to selectively heat small tumours at depth in mice. In this project we modelled, built and tested a miniature microwave heat applicator, the physical dimensions of which can be scaled to adjust the volume and depth of heating to focus on the tumour volume. Of particular interest is a device that can selectively heat murine bladder. Using Avizo(®) segmentation software, we created a numerical mouse model based on micro-MRI scan data. The model was imported into HFSS™ (Ansys) simulation software and parametric studies were performed to optimise the dimensions of a water-loaded circular waveguide for selective power deposition inside a 0.15 mL bladder. A working prototype was constructed operating at 2.45 GHz. Heating performance was characterised by mapping fibre-optic temperature sensors along catheters inserted at depths of 0-1 mm (subcutaneous), 2-3 mm (vaginal), and 4-5 mm (rectal) below the abdominal wall, with the mid depth catheter adjacent to the bladder. Core temperature was monitored orally. Thermal measurements confirm the simulations which demonstrate that this applicator can provide local heating at depth in small animals. Measured temperatures in murine pelvis show well-localised bladder heating to 42-43°C while maintaining normothermic skin and core temperatures. Simulation techniques facilitate the design optimisation of microwave antennas for use in pre-clinical applications such as localised tumour heating in small animals. Laboratory measurements demonstrate the effectiveness of a new miniature water-coupled microwave applicator for localised heating of murine bladder.

  7. Trends in Medicare Service Volume for Cataract Surgery and the Impact of the Medicare Physician Fee Schedule.

    PubMed

    Gong, Dan; Jun, Lin; Tsai, James C

    2017-08-01

    To calculate the associations between Medicare payment and service volume for complex and noncomplex cataract surgeries. The 2005-2009 CMS Part B National Summary Data Files, CMS Part B Carrier Summary Data Files, and the Medicare Physician Fee Schedule. Conducting a retrospective, longitudinal analysis using a fixed-effects model of Medicare Part B carriers representing all 50 states and the District of Columbia from 2005 to 2009, we calculated the Medicare payment-service volume elasticities for noncomplex (CPT 66984) and complex (CPT 66982) cataract surgeries. Service volume data were extracted from the CMS Part B National Summary and Carrier Summary Data Files. Payment data were extracted from the Medicare Physician Fee Schedule. From 2005 to 2009, the proportion of total cataract services billed as complex increased from 3.2 to 6.7 percent. Every 1 percent decrease in Medicare payment was associated with a nonsignificant change in noncomplex cataract service volume (elasticity = 0.15, 95 percent CI [-0.09, 0.38]) but a statistically significant increase in complex cataract service volume (elasticity = -1.12, 95 percent CI [-1.60, -0.63]). Reduced Medicare payment was associated with a significant increase in complex cataract service volume but not in noncomplex cataract service volume, resulting in a shift toward performing a greater proportion of complex cataract surgeries from 2005 to 2009. © Health Research and Educational Trust.
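
    A simplified illustration of what a payment-volume elasticity is: the slope of log(service volume) on log(payment); the actual study adds carrier fixed effects and covariates, and the data below are synthetic.

        # Simplified illustration of a payment-volume elasticity: the slope of
        # log(service volume) on log(Medicare payment). The actual study adds
        # carrier fixed effects and covariates; data here are synthetic.

        import numpy as np

        rng = np.random.default_rng(4)
        payment = rng.uniform(600.0, 900.0, 250)                    # $ per procedure
        true_elasticity = -1.1                                      # hypothetical
        volume = 5000.0 * (payment / 750.0) ** true_elasticity
        volume *= np.exp(rng.normal(0, 0.05, payment.size))         # noise

        slope, intercept = np.polyfit(np.log(payment), np.log(volume), 1)
        print(f"estimated elasticity ~ {slope:.2f} "
              f"(a 1% payment cut raises volume by about {-slope:.2f}%)")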

  8. Computational Fluid Dynamics-Population Balance Model Simulation of Effects of Cell Design and Operating Parameters on Gas-Liquid Two-Phase Flows and Bubble Distribution Characteristics in Aluminum Electrolysis Cells

    NASA Astrophysics Data System (ADS)

    Zhan, Shuiqing; Wang, Junfeng; Wang, Zhentao; Yang, Jianhong

    2018-02-01

    The effects of different cell design and operating parameters on the gas-liquid two-phase flows and bubble distribution characteristics under the anode bottom regions in aluminum electrolysis cells were analyzed using a three-dimensional computational fluid dynamics-population balance model. These parameters include inter-anode channel width, anode-cathode distance (ACD), anode width and length, current density, and electrolyte depth. The simulations results show that the inter-anode channel width has no significant effect on the gas volume fraction, electrolyte velocity, and bubble size. With increasing ACD, the above values decrease and more uniform bubbles can be obtained. Different effects of the anode width and length can be concluded in different cell regions. With increasing current density, the gas volume fraction and electrolyte velocity increase, but the bubble size keeps nearly the same. Increasing electrolyte depth decreased the gas volume fraction and bubble size in particular areas and the electrolyte velocity increased.

  9. The Acoustic Model Evaluation Committee (AMEC) Reports. Volume 1A. Summary of Range Independent Environment Acoustic Propagation Data Sets

    DTIC Science & Technology

    1982-09-01

    experiment were: isothermal layer depth 36 ft, depressed channel axis 66 ft, surface water temperature 59.4°F, sea state 2. Discussion: The propagation loss... experiments were: isothermal layer depth 56 ft, surface water temperature 59.7°F, sea state 1. Discussion: The propagation loss measurements are summarized... number of observations 1854, isothermal layer depth 33 ft, surface water temperature 59.9°F, sea state 2. Discussion: The propagation loss measurements

  10. A new method for depth profiling reconstruction in confocal microscopy

    NASA Astrophysics Data System (ADS)

    Esposito, Rosario; Scherillo, Giuseppe; Mensitieri, Giuseppe

    2018-05-01

    Confocal microscopy is commonly used to reconstruct depth profiles of chemical species in multicomponent systems and to image nuclear and cellular details in human tissues via image intensity measurements of optical sections. However, the performance of this technique is reduced by inherent effects related to wave diffraction phenomena, refractive index mismatch and finite beam spot size. All these effects distort the optical wave and cause an image to be captured of a small volume around the desired illuminated focal point within the specimen rather than an image of the focal point itself. The size of this small volume increases with depth, thus causing a further loss of resolution and distortion of the profile. Recently, we proposed a theoretical model that accounts for the above wave distortion and allows for a correct reconstruction of the depth profiles for homogeneous samples. In this paper, this theoretical approach has been adapted for describing the profiles measured from non-homogeneous distributions of emitters inside the investigated samples. The intensity image is built by summing the intensities collected from each of the emitter planes belonging to the illuminated volume, weighted by the emitter concentration. The true distribution of the emitter concentration is recovered by a new approach that implements this theoretical model in a numerical algorithm based on the Maximum Entropy Method. Comparisons with experimental data and numerical simulations show that this new approach is able to recover the real unknown concentration distribution from experimental profiles with an accuracy better than 3%.

  11. Estimating the volume of Alpine glacial lakes

    NASA Astrophysics Data System (ADS)

    Cook, S. J.; Quincey, D. J.

    2015-12-01

    Supraglacial, moraine-dammed and ice-dammed lakes represent a potential glacial lake outburst flood (GLOF) threat to downstream communities in many mountain regions. This has motivated the development of empirical relationships to predict lake volume given a measurement of lake surface area obtained from satellite imagery. Such relationships are based on the notion that lake depth, area and volume scale predictably. We critically evaluate the performance of these existing empirical relationships by examining a global database of glacial lake depths, areas and volumes. Results show that lake area and depth are not always well correlated (r2 = 0.38) and that although lake volume and area are well correlated (r2 = 0.91), and indeed are auto-correlated, there are distinct outliers in the data set. These outliers represent situations where it may not be appropriate to apply existing empirical relationships to predict lake volume and include growing supraglacial lakes, glaciers that recede into basins with complex overdeepened morphologies or that have been deepened by intense erosion and lakes formed where glaciers advance across and block a main trunk valley. We use the compiled data set to develop a conceptual model of how the volumes of supraglacial ponds and lakes, moraine-dammed lakes and ice-dammed lakes should be expected to evolve with increasing area. Although a large amount of bathymetric data exist for moraine-dammed and ice-dammed lakes, we suggest that further measurements of growing supraglacial ponds and lakes are needed to better understand their development.
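
    A sketch of how an empirical area-volume scaling relation of the kind evaluated above is fitted, estimating V = c·A^b in log-log space; the lake areas and volumes are synthetic rather than the compiled database, and the assumed depth-area scaling is for illustration only.

        # Sketch of how an empirical area-volume scaling relation of the kind
        # evaluated above is fitted: V = c * A**b estimated in log-log space.
        # The lake areas/volumes here are synthetic, not the compiled database.

        import numpy as np

        rng = np.random.default_rng(5)
        area_km2 = 10 ** rng.uniform(-3, 1, 80)                  # 0.001-10 km^2
        mean_depth_m = 20.0 * area_km2 ** 0.25                   # assumed depth-area scaling
        volume_km3 = (mean_depth_m / 1000.0) * area_km2
        volume_km3 *= np.exp(rng.normal(0, 0.4, area_km2.size))  # scatter

        b, log_c = np.polyfit(np.log(area_km2), np.log(volume_km3), 1)
        c = np.exp(log_c)
        print(f"V ~ {c:.3f} * A^{b:.2f}  (V in km^3, A in km^2)")
        print(f"predicted volume for a 1 km^2 lake: {c * 1.0 ** b:.3f} km^3")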

  12. Estimating the volume of Alpine glacial lakes

    NASA Astrophysics Data System (ADS)

    Cook, S. J.; Quincey, D. J.

    2015-09-01

    Supraglacial, moraine-dammed and ice-dammed lakes represent a potential glacial lake outburst flood (GLOF) threat to downstream communities in many mountain regions. This has motivated the development of empirical relationships to predict lake volume given a measurement of lake surface area obtained from satellite imagery. Such relationships are based on the notion that lake depth, area and volume scale predictably. We critically evaluate the performance of these existing empirical relationships by examining a global database of measured glacial lake depths, areas and volumes. Results show that lake area and depth are not always well correlated (r2 = 0.38), and that although lake volume and area are well correlated (r2 = 0.91), there are distinct outliers in the dataset. These outliers represent situations where it may not be appropriate to apply existing empirical relationships to predict lake volume, and include growing supraglacial lakes, glaciers that recede into basins with complex overdeepened morphologies or that have been deepened by intense erosion, and lakes formed where glaciers advance across and block a main trunk valley. We use the compiled dataset to develop a conceptual model of how the volumes of supraglacial ponds and lakes, moraine-dammed lakes and ice-dammed lakes should be expected to evolve with increasing area. Although a large amount of bathymetric data exist for moraine-dammed and ice-dammed lakes, we suggest that further measurements of growing supraglacial ponds and lakes are needed to better understand their development.

  13. Velocity and Density Models Incorporating the Cascadia Subduction Zone for 3D Earthquake Ground Motion Simulations

    USGS Publications Warehouse

    Stephenson, William J.

    2007-01-01

    In support of earthquake hazards and ground motion studies in the Pacific Northwest, three-dimensional P- and S-wave velocity (3D Vp and Vs) and density (3D rho) models incorporating the Cascadia subduction zone have been developed for the region encompassed from about 40.2°N to 50°N latitude, and from about -122°W to -129°W longitude. The model volume includes elevations from 0 km to 60 km (elevation is opposite of depth in model coordinates). Stephenson and Frankel (2003) presented preliminary ground motion simulations valid up to 0.1 Hz using an earlier version of these models. The version of the model volume described here includes more structural and geophysical detail, particularly in the Puget Lowland as required for scenario earthquake simulations in the development of the Seattle Urban Hazards Maps (Frankel and others, 2007). Olsen and others (in press) used the model volume discussed here to perform a Cascadia simulation up to 0.5 Hz using a Sumatra-Andaman Islands rupture history. As research from the EarthScope Program (http://www.earthscope.org) is published, a wealth of important detail can be added to these model volumes, particularly to depths of the upper-mantle. However, at the time of development for this model version, no EarthScope-specific results were incorporated. This report is intended to be a reference for colleagues and associates who have used or are planning to use this preliminary model in their research. To this end, it is intended that these models will be considered a beginning template for a community velocity model of the Cascadia region as more data and results become available.

  14. Models of railroad passenger-car requirements in the northeast corridor : volume 1. formulation and results.

    DOT National Transportation Integrated Search

    1976-09-30

    Models and techniques for determining passenger-car requirements in railroad service were developed and applied by a research project of which this is the final report. The report is published in two volumes. This volume considers a general problem o...

  15. Experimental Investigation of Relative Permeability Upscaling from the Micro-Scale to the Macro-Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pyrak-Nolte, Laura J.; Cheng, JiangTao; Yu, Ping

    2003-01-29

    During this reporting period, we have shown experimentally that the optical coherence imaging system can acquire information on grain interfaces and void shape for a maximum depth of half a millimeter into sandstone. The measurement of interfacial area per volume (IAV), capillary pressure and saturation in two-dimensional micro-model structures has shown the existence of a unique relationship among these hydraulic parameters for different pore geometries. The measurement of interfacial area per volume on a three-dimensional natural sample, i.e., sandstone, has shown the homogeneity of IAV with depth in a sample when the fluids are in equilibrium.

  16. Contraction rate, flow modification and bed layering impact on scour at the elliptical guide banks

    NASA Astrophysics Data System (ADS)

    Gjunsburgs, B.; Jaudzems, G.; Bizane, M.; Bulankina, V.

    2017-10-01

    Flow contraction by bridge crossing structures, intakes, embankments, piers, abutments and guide banks leads to general scour and to local scour in the vicinity of the structures. Local scour depends on flow, river bed and structure parameters, and a correct understanding of the impact of each parameter can reduce the possibility of structural failure. The paper explores the effect of hydraulic contraction, the discharge redistribution between channel and floodplain during the flood, local flow modification and river bed layering on the depth, width and volume of the scour hole near elliptical guide banks on lowland rivers. Experiments in a flume, our method for scour calculation and computer modelling results confirm a considerable impact of the contraction rate of the flow, the discharge redistribution between channel and floodplain, the local velocity, backwater and river bed layering on the depth, width, and volume of the scour hole in steady and unsteady flow under clear-water conditions. With increasing contraction rate of the flow, discharge redistribution between channel and floodplain, local velocity and backwater values, the scour depth increases. At the same contraction rate but a different Fr number, the scour depth is different: with increasing Fr number, the local velocity, backwater, and scour depth, width and volume increase. Basing scour depth calculations, as is currently accepted, on the geometrical contraction of the flow, the approach velocity and the top sand layer of the river bed may be a cause of structural failure and loss of human life.

  17. Teaching for Justice: Concepts and Models for Service-Learning in Peace Studies. AAHE's Series on Service-Learning in the Disciplines.

    ERIC Educational Resources Information Center

    Crews, Robin J., Ed.; Weigert, Kathleen Maas, Ed.; Crews, Robin J., Ed.

    This volume is part of a series of 18 monographs on service learning and the academic disciplines. This volume offers a collection of essays on the integration of service learning in the field of peace studies. After a Preface by Elise Boulding and an Introduction by Kathleen Maas Weigert and Robin J. Crews, titles in Part 1, "Conceptual…

  18. Precipitation Modeling in Nitriding in Fe-M Binary System

    NASA Astrophysics Data System (ADS)

    Tomio, Yusaku; Miyamoto, Goro; Furuhara, Tadashi

    2016-10-01

    Precipitation of fine alloy nitrides near the specimen surface results in significant surface hardening in nitriding of alloyed steels. In this study, a simulation model of alloy nitride precipitation during nitriding is developed for the Fe-M binary system based upon the Kampmann-Wagner numerical model in order to predict variations in the distribution of precipitates with depth. The model can predict the number density, average radius, and volume fraction of alloy nitrides as a function of depth from the surface and nitriding time. By comparison with experimental observations in a nitrided Fe-Cr alloy, it was found that the model can successfully predict the observed particle distribution from the surface down to depth when appropriate values of the CrN solubility, the CrN/α interfacial energy, and the surface nitrogen flux are selected.

  19. State-of-the-Art Resources (SOAR) for Software Vulnerability Detection, Test, and Evaluation

    DTIC Science & Technology

    2014-07-01

    preclude in-depth analysis, and widespread use of a Software-as-a-Service (SaaS) model that limits data availability and application to DoD systems...provide mobile application analysis using a Software-as-a-Service (SaaS) model. In this case, any software to be analyzed must be sent to the...tools are only available through a SaaS model. The widespread use of a Software-as-a-Service (SaaS) model as a sole evaluation model limits data

  20. A Model for coupled heat and moisture transfer in permafrost regions of three rivers source areas, Qinghai, China

    NASA Astrophysics Data System (ADS)

    Wu, X. L.; Xiang, X. H.; Wang, C. H.; Shao, Q. Q.

    2012-04-01

    Soil freezing occurs in winter in many parts of the world. The transfer of heat and moisture in freezing and thawing soil is interrelated, and this coupled transport plays an important role in the hydrological activity of seasonally frozen regions, especially the Three Rivers source area of China. Soil freezing depth and the ice content of the frozen zone significantly influence runoff and groundwater recharge. The purpose of this research is to develop a numerical model to simulate water and heat movement in the soil under freezing and thawing conditions. The basic elements of the model are the heat and water flow equations, namely the heat conduction equation and the unsaturated-soil fluid mass conservation equation. A fully implicit finite volume scheme is used to solve the coupled equations in space. The model is calibrated and verified against the observed soil moisture and temperature during the freezing and thawing periods from 2005 to 2007. Different characteristics of heat and moisture transfer are examined, such as frozen depth, the temperature field at 40 cm depth and topsoil moisture content. The agreement between simulated and observed values indicates that the new model can successfully simulate the coupled heat and mass transfer process in permafrost regions. By simulating the runoff generation process and the drivers of seasonal change, the agreement also shows that the coupled model can describe local hydrologic phenomena and provide support to local ecosystem services. This research was supported by the National Natural Science Foundation of China (No. 51009045; 40930635; 41001011; 41101018; 51079038), the National Key Program for Developing Basic Science (No. 2009CB421105), the Fundamental Research Funds for the Central Universities (No. 2009B06614; 2010B00414), the National Non Profit Research Program of China (No. 200905013-8; 201101024; 20101224).
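
    As a minimal illustration of the numerical machinery, the sketch below advances a fully implicit finite-volume discretization of one-dimensional soil heat conduction alone; the moisture coupling and latent heat of freezing that are central to the paper's model are omitted, and the thermal properties and boundary temperatures are assumed values.

      # Fully implicit finite-volume stepping for 1D soil heat conduction only
      # (no moisture coupling or phase change, unlike the paper's model).
      import numpy as np

      nz, depth = 50, 5.0                 # cells, soil column depth in m (assumed)
      dz = depth / nz
      k = 1.5                             # thermal conductivity, W/(m K) (assumed)
      C = 2.0e6                           # volumetric heat capacity, J/(m^3 K) (assumed)
      dt = 3600.0                         # time step, s

      T = np.full(nz, 2.0)                # initial soil temperature, deg C (assumed)
      T_surface, T_bottom = -15.0, 2.0    # Dirichlet boundaries (assumed winter forcing)

      r = k * dt / (C * dz**2)
      # Assemble the tridiagonal implicit matrix; known boundary temperatures are
      # moved to the right-hand side each step.
      A = np.zeros((nz, nz))
      for i in range(nz):
          A[i, i] = 1.0 + 2.0 * r
          if i > 0:
              A[i, i - 1] = -r
          if i < nz - 1:
              A[i, i + 1] = -r

      for step in range(24 * 30):                       # simulate one month
          b = T.copy()
          b[0] += r * T_surface                         # boundary contributions
          b[-1] += r * T_bottom
          T = np.linalg.solve(A, b)

      frozen = np.where(T < 0.0)[0]
      frost_depth = (frozen[-1] + 1) * dz if frozen.size else 0.0
      # Without latent heat this overestimates frost penetration.
      print(f"approximate frost depth after 30 days: {frost_depth:.2f} m")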

  1. Feasibility of imaging epileptic seizure onset with EIT and depth electrodes.

    PubMed

    Witkowska-Wrobel, Anna; Aristovich, Kirill; Faulkner, Mayo; Avery, James; Holder, David

    2018-06-01

    Imaging ictal and interictal activity with Electrical Impedance Tomography (EIT) using intracranial electrode mats has been demonstrated in animal models of epilepsy. In human epilepsy subjects undergoing presurgical evaluation, depth electrodes are often preferred. The purpose of this work was to evaluate the feasibility of using EIT to localise epileptogenic areas with intracranial electrodes in humans. The accuracy of localisation of the ictal onset zone was evaluated in computer simulations using 9M-element FEM models derived from three subjects. Perturbations of 5 mm radius imitating a single seizure onset event were placed in several locations forming two groups: under depth electrode coverage and in the contralateral hemisphere. Simulations were made for impedance changes of 1% expected for neuronal depolarisation over milliseconds and 10% for cell swelling over seconds. Reconstructions were compared with EEG source modelling for a radially orientated dipole with respect to the closest EEG recording contact. The best accuracy of EIT was obtained using all depth and 32 scalp electrodes, greater than the equivalent accuracy with EEG inverse source modelling. The localisation error was 5.2 ± 1.8, 4.3 ± 0 and 46.2 ± 25.8 mm for perturbations within the volume enclosed by depth electrodes and 29.6 ± 38.7, 26.1 ± 36.2, 54.0 ± 26.2 mm for those without (EIT 1%, 10% change, EEG source modelling, n = 15 in 3 subjects, p < 0.01). As EIT was insensitive to source dipole orientation, all 15 perturbations within the volume enclosed by depth electrodes were localised, whereas the standard clinical method of visual inspection of EEG voltages only localised 8 out of 15 cases. This suggests that adding EIT to SEEG measurements could be beneficial in localising the onset of seizures. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Temporal analysis of floodwater volumes in New Orleans after Hurricane Katrina: Chapter 3H in Science and the storms-the USGS response to the hurricanes of 2005

    USGS Publications Warehouse

    Smith, Jodie; Rowland, James

    2007-01-01

    Satellite images from multiple sensors and dates were analyzed to measure the extent of flooding caused by Hurricane Katrina in the New Orleans, La., area. The flood polygons were combined with a high-resolution digital elevation model to estimate water depths and volumes in designated areas. The multiple satellite acquisitions enabled monitoring of the floodwater volume and extent through time.
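
    The depth-and-volume arithmetic behind such an analysis can be sketched as follows; the flood mask, elevation model and water-surface elevation below are synthetic stand-ins for the satellite-derived flood extents and high-resolution DEM used in the study.

      # Sketch of the depth/volume arithmetic: subtract ground elevation from an
      # estimated water-surface elevation inside the mapped flood extent and sum
      # positive depths over the cell area. Arrays here are synthetic stand-ins.
      import numpy as np

      cell_area = 10.0 * 10.0                            # m^2 per DEM cell (assumed)
      dem = np.random.uniform(-2.0, 1.0, (500, 500))     # ground elevation, m (synthetic)
      flood_mask = np.random.rand(500, 500) < 0.6        # flooded cells (synthetic)
      water_surface = 0.5                                # estimated water level, m (assumed)

      depth = np.where(flood_mask, water_surface - dem, 0.0)
      depth = np.clip(depth, 0.0, None)                  # ignore cells above the water surface

      flood_volume_m3 = depth.sum() * cell_area
      flooded_area_km2 = (depth > 0).sum() * cell_area / 1e6
      print(f"flooded area {flooded_area_km2:.2f} km^2, volume {flood_volume_m3:,.0f} m^3")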

  3. Where Children Live: Solutions for Serving Young Children and Their Families. Advances in Applied Developmental Psychology, Volume 17.

    ERIC Educational Resources Information Center

    Roberts, Richard N., Ed.; Magrab, Phyllis R., Ed.

    The changing nature of communities necessitates a comprehensive theoretical model for effective delivery of child and family services. This edited volume provides a context for and examples of an emerging paradigm shift in human services toward one in which services for families with young children are provided in their communities. Section 1 of…

  4. The Center for In-Service Education. Final Evaluation Report. Volume I. Part 1.

    ERIC Educational Resources Information Center

    Tennessee State Dept. of Education, Nashville.

    The primary objectives of the Center for In-Service Education in implementing a model for in-service education were to a) implement and demonstrate the comprehensive in-service model developed during the planning phase, b) provide coordinated planning of in-service education for all participating school systems, c) directly assist regional…

  5. The exhumation of the (U)HP rocks of the Central and Western Penninic Alps: comparison study between thermo-mechanical models and field data

    NASA Astrophysics Data System (ADS)

    Schenker, Filippo Luca; Schmalholz, Stefan M.; Baumgartner, Lukas P.; Pleuger, Jan

    2015-04-01

    The Central and Western Penninic (CWP) Alps form an orogenic wedge of imbricate tectonic nappes. Orogenic wedges typically form at depths < 60 km. Nevertheless, a few nappes and massifs (i.e. Adula/Cima Lunga, Dora-Maira, Monte Rosa, Gran Paradiso, Zermatt-Saas) exhibit High- and Ultra-High-Pressure (U)HP metamorphic rocks, suggesting that they were buried by subduction to depths >60 km and subsequently exhumed into the accretionary wedge. Mechanically, the exhumation of the (U)HP rocks from mantle depths can be explained by two contrasting buoyancy-driven models: (1) overall return flow of rocks in a subduction channel and (2) upward flow of individual, lighter rock units within a heavier material (Stokes flow). In this study we compare published numerical exhumation models of (1) and (2) with structural and metamorphic data of the CWP Alps. Model (1) predicts the exhumation of large volumes of (U)HP rocks within a viscous channel (1100-500 km2 in a 2D cross-section through the subduction zone). The moderate volume (e.g. ~7 km2 in a geological cross-section of the UHP unit of the Dora-Maira) and the coherent architecture of the (U)HP nappes suggest that exhumation through (1) is unlikely for (U)HP nappes of the CWP Alps. Model (2) predicts the exhumation of appropriate volumes of (U)HP rocks, but generally the (U)HP rocks exhume vertically in the overriding plate and are not incorporated into the orogenic wedge. Nevertheless, exhumation through (2) is feasible either with a vertical subduction channel or with an extremely viscous and dense one. Whether these characteristics are applicable to the CWP UHP nappes will be discussed in light of field observations.

  6. Miniature Microwave Applicator for Murine Bladder Hyperthermia Studies

    PubMed Central

    Salahi, Sara; Maccarini, Paolo F.; Rodrigues, Dario B.; Etienne, Wiguins; Landon, Chelsea D.; Inman, Brant A.; Dewhirst, Mark W.; Stauffer, Paul R.

    2012-01-01

    Purpose Novel combinations of heat with chemotherapeutic agents are often studied in murine tumor models. Currently, no device exists to selectively heat small tumors at depth in mice. In this project, we modelled, built and tested a miniature microwave heat applicator, the physical dimensions of which can be scaled to adjust the volume and depth of heating to focus on the tumor volume. Of particular interest is a device that can selectively heat murine bladder. Materials and Methods Using Avizo® segmentation software, we created a numerical mouse model based on micro-MRI scan data. The model was imported into HFSS™ simulation software and parametric studies were performed to optimize the dimensions of a water-loaded circular waveguide for selective power deposition inside a 0.15ml bladder. A working prototype was constructed operating at 2.45GHz. Heating performance was characterized by mapping fiber-optic temperature sensors along catheters inserted at depths of 0-1mm (subcutaneous), 2-3mm (vaginal), and 4-5mm (rectal) below the abdominal wall, with the mid-depth catheter adjacent to the bladder. Core temperature was monitored orally. Results Thermal measurements confirm the simulations which demonstrate that this applicator can provide local heating at depth in small animals. Measured temperatures in murine pelvis show well-localized bladder heating to 42-43°C while maintaining normothermic skin and core temperatures. Conclusions Simulation techniques facilitate the design optimization of microwave antennas for use in pre-clinical applications such as localized tumor heating in small animals. Laboratory measurements demonstrate the effectiveness of a new miniature water-coupled microwave applicator for localized heating of murine bladder. PMID:22690856

  7. Alternative Models of Service, Centralized Machine Operations. Phase II Report. Volume II.

    ERIC Educational Resources Information Center

    Technology Management Corp., Alexandria, VA.

    A study was conducted to determine if the centralization of playback machine operations for the national free library program would be feasible, economical, and desirable. An alternative model of playback machine services was constructed and compared with existing network operations considering both cost and service. The alternative model was…

  8. The volume and mean depth of Earth's lakes

    NASA Astrophysics Data System (ADS)

    Cael, B. B.; Heathcote, A. J.; Seekell, D. A.

    2017-01-01

    Global lake volume estimates are scarce, highly variable, and poorly documented. We developed a rigorous method for estimating global lake depth and volume based on the Hurst coefficient of Earth's surface, which provides a mechanistic connection between lake area and volume. Volume-area scaling based on the Hurst coefficient is accurate and consistent when applied to lake data sets spanning diverse regions. We applied these relationships to a global lake area census to estimate global lake volume and depth. The volume of Earth's lakes is 199,000 km3 (95% confidence interval 196,000-202,000 km3). This volume is in the range of historical estimates (166,000-280,000 km3), but the overall mean depth of 41.8 m (95% CI 41.2-42.4 m) is significantly lower than previous estimates (62-151 m). These results highlight and constrain the relative scarcity of lake waters in the hydrosphere and have implications for the role of lakes in global biogeochemical cycles.
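
    The study derives its volume-area scaling exponent from the Hurst coefficient of Earth's surface; the sketch below illustrates only the general workflow of fitting a power-law volume-area relationship to surveyed lakes and applying it to an area census, using synthetic data and a plain regression rather than the paper's method.

      # Generic volume-area scaling sketch: fit log10(V) = a + b*log10(A) on lakes
      # with known bathymetry and apply the relationship to an area census.
      # All data below are synthetic.
      import numpy as np

      rng = np.random.default_rng(0)
      area_km2 = 10 ** rng.uniform(-2, 3, 400)                 # synthetic surveyed lakes
      volume_km3 = 5e-3 * area_km2 ** 1.2 * 10 ** rng.normal(0, 0.2, 400)

      b, a = np.polyfit(np.log10(area_km2), np.log10(volume_km3), 1)
      print(f"fitted scaling: V ~ 10^{a:.2f} * A^{b:.2f}")

      census_area_km2 = 10 ** rng.uniform(-3, 2, 100_000)      # stand-in for a global census
      census_volume_km3 = 10 ** a * census_area_km2 ** b
      total_volume = census_volume_km3.sum()
      mean_depth_m = total_volume / census_area_km2.sum() * 1e3   # km -> m
      print(f"total volume {total_volume:,.0f} km^3, mean depth {mean_depth_m:.1f} m")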

  9. Strategic outsourcing of clinical services: a model for volume-stressed academic medical centers.

    PubMed

    Billi, John E; Pai, Chih-Wen; Spahlinger, David A

    2004-01-01

    Many academic medical centers have significant capacity constraints and limited ability to expand services to meet demand. Health care management should employ strategic thinking to deal with service demands. This article uses three organizational models to develop a theoretical framework to guide the selection of clinical services for outsourcing.

  10. Modeling of fixed-bed column studies for the adsorption of cadmium onto novel polymer-clay composite adsorbent.

    PubMed

    Unuabonah, Emmanuel I; Olu-Owolabi, Bamidele I; Fasuyi, Esther I; Adebowale, Kayode O

    2010-07-15

    Kaolinite clay was treated with polyvinyl alcohol to produce a novel water-stable composite called polymer-clay composite adsorbent. The modified adsorbent was found to have a maximum adsorption capacity of 20,400 ± 13 mg/L (1236 mg/g) and a maximum adsorption rate constant of approximately (7.45 ± 0.2) x 10^-3 L/(min mg) at 50% breakthrough. Increase in bed height increased both the breakpoint and exhaustion point of the polymer-clay composite adsorbent. The time for the movement of the Mass Transfer Zone (delta) down the column was found to increase with increasing bed height. The presence of preadsorbed electrolyte and regeneration were found to reduce this time. Increased initial Cd(2+) concentration, presence of preadsorbed electrolyte, and regeneration of polymer-clay composite adsorbent reduced the volume of effluent treated. Premodification of polymer-clay composite adsorbent with Ca- and Na-electrolytes reduced the rate of adsorption of Cd(2+) onto polymer-clay composite and lowered the breakthrough time of the adsorbent. Regeneration and re-adsorption studies on the polymer-clay composite adsorbent presented a decrease in the bed volume treated at both the breakpoint and exhaustion points of the regenerated bed. Experimental data were observed to show stronger fits to the Bed Depth Service Time (BDST) model than the Thomas model. 2010 Elsevier B.V. All rights reserved.
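
    The BDST model, as commonly written, predicts a linear relation between column service time and bed depth, so its parameters follow from a straight-line fit; the sketch below uses illustrative operating values and service times, not the paper's data.

      # Bed Depth Service Time (BDST) sketch: the model relates service time t to
      # bed depth Z by t = (N0/(C0*u))*Z - (1/(Ka*C0))*ln(C0/Cb - 1), so the bed
      # capacity N0 and rate constant Ka follow from a linear fit of t against Z.
      # All numbers below are illustrative placeholders.
      import numpy as np

      C0 = 50.0            # inlet Cd(II) concentration, mg/L (assumed)
      Cb = 5.0             # breakthrough concentration (10% of inlet), mg/L (assumed)
      u  = 1.2             # linear flow velocity through the bed, cm/min (assumed)

      Z = np.array([3.0, 6.0, 9.0])          # bed depths, cm (assumed)
      t = np.array([95.0, 210.0, 330.0])     # observed service times, min (illustrative)

      slope, intercept = np.polyfit(Z, t, 1)          # intercept is expected to be negative here
      N0 = slope * C0 * u                             # adsorption capacity per bed volume, mg/L
      Ka = np.log(C0 / Cb - 1.0) / (-intercept * C0)  # rate constant, L/(mg min)
      print(f"slope {slope:.1f} min/cm, N0 {N0:.0f} mg/L, Ka {Ka:.2e} L/(mg min)")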

  11. Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed

    USGS Publications Warehouse

    Balk, B.; Elder, K.; Baron, Jill S.

    1998-01-01

    Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km2), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates with minimum variance. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.
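
    A compact sketch of the SWE workflow is given below, with a Gaussian-process regressor standing in for ordinary kriging (the two are closely related) and synthetic survey points and terrain in place of the Loch Vale data; masking by snow-covered area is omitted.

      # SWE sketch: spatially interpolate snow depth (kriging-like Gaussian process),
      # regress density on elevation, and combine the two on a grid.
      # All inputs are synthetic placeholders.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      xy = rng.uniform(0, 2500, (150, 2))                            # survey locations, m
      depth = 1.0 + 0.0004 * xy[:, 0] + rng.normal(0, 0.2, 150)      # snow depth, m
      elev = 3000 + 0.1 * xy[:, 1] + rng.normal(0, 20, 150)          # elevation, m
      density = 350 + 0.05 * (elev - 3000) + rng.normal(0, 15, 150)  # kg/m^3

      gp = GaussianProcessRegressor(RBF(length_scale=500.0) + WhiteKernel(0.04))
      gp.fit(xy, depth)
      dens_model = LinearRegression().fit(elev.reshape(-1, 1), density)

      # Evaluate on a coarse grid covering the watershed and combine into SWE.
      gx, gy = np.meshgrid(np.linspace(0, 2500, 50), np.linspace(0, 2500, 50))
      grid = np.column_stack([gx.ravel(), gy.ravel()])
      grid_elev = 3000 + 0.1 * grid[:, 1]
      d_hat = np.clip(gp.predict(grid), 0, None)
      rho_hat = dens_model.predict(grid_elev.reshape(-1, 1))
      swe_m = d_hat * rho_hat / 1000.0                               # metres of water equivalent
      cell_area = (2500 / 49) ** 2
      print(f"mean SWE {swe_m.mean():.2f} m, total {(swe_m * cell_area).sum():,.0f} m^3")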

  12. Volume measurement of the leg with the depth camera for quantitative evaluation of edema

    NASA Astrophysics Data System (ADS)

    Kiyomitsu, Kaoru; Kakinuma, Akihiro; Takahashi, Hiroshi; Kamijo, Naohiro; Ogawa, Keiko; Tsumura, Norimichi

    2017-02-01

    Volume measurement of the leg is important in the evaluation of leg edema. Recently, methods using a depth camera have been proposed for this measurement; however, many depth cameras are expensive. We therefore propose a method using the Microsoft Kinect. We obtain a point cloud of the leg with the Kinect Fusion technique and calculate the volume from it. We measured the leg volume of three healthy students over three days. In each measurement, an increase in volume from morning to evening was confirmed. It is known that leg volume increases during a day of office work, and our experimental results meet this expectation.
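
    One simple way to turn such a point cloud into a volume is to slice it along the leg axis and sum cross-sectional hull areas times slice thickness, as sketched below on a synthetic tapered-cylinder cloud standing in for Kinect Fusion output; real data would first need alignment and segmentation.

      # Slice-based volume estimate from a leg point cloud (synthetic data).
      import numpy as np
      from scipy.spatial import ConvexHull

      rng = np.random.default_rng(2)
      z = rng.uniform(0.0, 0.4, 20000)                    # 40 cm of lower leg, m
      radius = 0.055 - 0.05 * z                           # tapering radius, m
      theta = rng.uniform(0, 2 * np.pi, z.size)
      points = np.column_stack([radius * np.cos(theta), radius * np.sin(theta), z])

      slice_h = 0.01                                      # 1 cm slices
      volume = 0.0
      for z0 in np.arange(0.0, 0.4, slice_h):
          sl = points[(points[:, 2] >= z0) & (points[:, 2] < z0 + slice_h), :2]
          if len(sl) >= 3:
              volume += ConvexHull(sl).volume * slice_h   # 2D hull "volume" is the area
      print(f"estimated leg volume: {volume * 1e3:.2f} litres")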

  13. A two-phase debris-flow model that includes coupled evolution of volume fractions, granular dilatancy, and pore-fluid pressure

    USGS Publications Warehouse

    George, David L.; Iverson, Richard M.

    2011-01-01

    Pore-fluid pressure plays a crucial role in debris flows because it counteracts normal stresses at grain contacts and thereby reduces intergranular friction. Pore-pressure feedback accompanying debris deformation is particularly important during the onset of debris-flow motion, when it can dramatically influence the balance of forces governing downslope acceleration. We consider further effects of this feedback by formulating a new, depth-averaged mathematical model that simulates coupled evolution of granular dilatancy, solid and fluid volume fractions, pore-fluid pressure, and flow depth and velocity during all stages of debris-flow motion. To illustrate implications of the model, we use a finite-volume method to compute one-dimensional motion of a debris flow descending a rigid, uniformly inclined slope, and we compare model predictions with data obtained in large-scale experiments at the USGS debris-flow flume. Predictions for the first 1 s of motion show that increasing pore pressures (due to debris contraction) cause liquefaction that enhances flow acceleration. As acceleration continues, however, debris dilation causes dissipation of pore pressures, and this dissipation helps stabilize debris-flow motion. Our numerical predictions of this process match experimental data reasonably well, but predictions might be improved by accounting for the effects of grain-size segregation.
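
    The sketch below shows only the finite-volume bookkeeping on a toy single-phase kinematic wave; the coupled evolution of volume fractions, dilatancy and pore pressure that defines the paper's model is not represented, and all parameter values are assumed.

      # Toy 1D finite-volume update for a depth-averaged flow: single-phase
      # kinematic wave with a Chezy-type velocity, first-order upwind fluxes,
      # CFL-limited time step. Illustrative only.
      import numpy as np

      nx, L = 400, 100.0                # cells, flume length in m (assumed)
      dx = L / nx
      slope, chezy = 0.3, 8.0           # slope (tan) and Chezy coefficient (assumed)

      h = np.zeros(nx)
      h[:40] = 0.2                      # initial 20 cm deep release over the first 10 m

      t, t_end = 0.0, 10.0
      while t < t_end:
          u = chezy * np.sqrt(slope * np.maximum(h, 0.0))      # kinematic velocity
          dt = 0.4 * dx / max(u.max(), 1e-6)                   # CFL condition
          flux = h * u                                         # downslope discharge per unit width
          # Upwind finite-volume update: cell i loses flux[i], gains flux[i-1].
          h[1:] += dt / dx * (flux[:-1] - flux[1:])
          h[0]  += dt / dx * (0.0 - flux[0])
          t += dt

      front = dx * (np.nonzero(h > 1e-3)[0].max() + 1)
      print(f"flow front after {t_end:.0f} s: {front:.1f} m, max depth {h.max():.3f} m")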

  14. Quantifying the Restorable Water Volume of California's Sierra Nevada Meadows

    NASA Astrophysics Data System (ADS)

    Emmons, J. D.; Yarnell, S. M.; Fryjoff-Hung, A.; Viers, J.

    2013-12-01

    The Sierra Nevada is estimated to provide over 66% of California's water supply, which is largely derived from snowmelt. Global climate warming is expected to result in a decrease in snowpack and an increase in melting rate, making the attenuation of snowmelt by any means an important ecosystem service for ensuring water availability. Montane meadows are dispersed throughout the mountain range and can act like natural reservoirs, and also provide wildlife habitat, water filtration, and water storage. Despite the important role of meadows in the Sierra Nevada, a large proportion is degraded by stream incision, which increases volume outflows and reduces overbank flooding, thus reducing infiltration and potential water storage. Restoration of meadow stream channels would therefore improve hydrological functioning, including increased water storage. The potential water holding capacity of restored meadows has yet to be quantified; this research therefore seeks to address that knowledge gap by estimating the restorable water volume lost to stream incision. More than 17,000 meadows were analyzed by categorizing their erosion potential using channel slope and soil texture, ultimately resulting in six general erodibility types. Field measurements of over 100 meadows, stratified by latitude, elevation, and geologic substrate, were then taken and analyzed for each erodibility type to determine average depth of incision. Restorable water volume was then quantified as a function of the water holding capacity of the soil, meadow area and incised depth. Total restorable water volume was found to be 120 x 10^6 m3, or approximately 97,000 acre-feet. Using 95% confidence intervals for incised depth, the upper and lower bounds of the total restorable water volume were found to be 107 - 140 x 10^6 m3. Though this estimate of restorable water volume is small relative to the storage capacity of typical California reservoirs, restoration of Sierra Nevada meadows remains an important objective. Storage of water in meadows benefits California wildlife, potentially attenuates floods, and elevates base flows, which can ease effects on the spring recession curve from the expected decline in Sierran snowpack with atmospheric warming.
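
    The volume calculation itself is simple, as the hypothetical worked example below shows; the three meadows and their properties are illustrative, not entries from the study's inventory.

      # Worked sketch: restorable water volume = meadow area x mean incision depth
      # x soil water-holding capacity (effective porosity). Hypothetical meadows.
      meadows = [
          # (area in m^2, mean incised depth in m, water-holding capacity, dimensionless)
          (250_000, 0.8, 0.35),
          (1_200_000, 1.2, 0.30),
          (90_000, 0.5, 0.40),
      ]

      restorable_m3 = sum(area * depth * capacity for area, depth, capacity in meadows)
      restorable_acre_feet = restorable_m3 / 1233.48      # 1 acre-foot = 1233.48 m^3
      print(f"restorable volume: {restorable_m3:,.0f} m^3 "
            f"({restorable_acre_feet:,.0f} acre-feet)")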

  15. Edge gradients evaluation for 2D hybrid finite volume method model

    USDA-ARS?s Scientific Manuscript database

    In this study, a two-dimensional depth-integrated hydrodynamic model was developed using FVM on a hybrid unstructured collocated mesh system. To alleviate the negative effects of mesh irregularity and non-uniformity, a conservative evaluation method for edge gradients based on the second-order Tayl...

  16. Determination of depth-viewing volumes for stereo three-dimensional graphic displays

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Williams, Steven P.

    1990-01-01

    Real-world, 3-D, pictorial displays incorporating true depth cues via stereopsis techniques offer a potential means of displaying complex information in a natural way to prevent loss of situational awareness and provide increases in pilot/vehicle performance in advanced flight display concepts. Optimal use of stereopsis requires an understanding of the depth viewing volume available to the display designer. Suggested guidelines are presented for the depth viewing volume from an empirical determination of the effective region of stereopsis cueing (at several viewer-CRT screen distances) for a time multiplexed stereopsis display system. The results provide the display designer with information that will allow more effective placement of depth information to enable the full exploitation of stereopsis cueing. Increasing viewer-CRT screen distances provides increasing amounts of usable depth, but with decreasing fields-of-view. A stereopsis hardware system that permits an increased viewer-screen distance by incorporating larger screen sizes or collimation optics to maintain the field-of-view at required levels would provide a much larger stereo depth-viewing volume.

  17. California Integrated Service Delivery Evaluation Report. Phase I

    ERIC Educational Resources Information Center

    Moore, Richard W.; Rossy, Gerard; Roberts, William; Chapman, Kenneth; Sanchez, Urte; Hanley, Chris

    2010-01-01

    This study is a formative evaluation of the OneStop Career Center Integrated Service Delivery (ISD) Model within the California Workforce System. The study was sponsored by the California Workforce Investment Board. The study completed four in-depth case studies of California OneStops to describe how they implemented the ISD model which brings…

  18. Periodontitis is related to lung volumes and airflow limitation: a cross-sectional study.

    PubMed

    Holtfreter, Birte; Richter, Stefanie; Kocher, Thomas; Dörr, Marcus; Völzke, Henry; Ittermann, Till; Obst, Anne; Schäper, Christoph; John, Ulrich; Meisel, Peter; Grotevendt, Anne; Felix, Stephan B; Ewert, Ralf; Gläser, Sven

    2013-12-01

    This study aimed to assess the potential association of periodontal diseases with lung volumes and airflow limitation in a general adult population. Based on a representative population sample of the Study of Health in Pomerania (SHIP), 1463 subjects aged 25-86 years were included. Periodontal status was assessed by clinical attachment loss (CAL), probing depth and number of missing teeth. Lung function was measured using spirometry, body plethysmography and diffusing capacity of the lung for carbon monoxide. Linear regression models using fractional polynomials were used to assess associations between periodontal disease and lung function. Fibrinogen and high-sensitivity C-reactive protein (hs-CRP) were evaluated as potential intermediate factors. After full adjustment for potential confounders mean CAL was significantly associated with variables of mobile dynamic and static lung volumes, airflow limitation and hyperinflation (p<0.05). Including fibrinogen and hs-CRP did not change coefficients of mean CAL; associations remained statistically significant. Mean CAL was not associated with total lung capacity and diffusing capacity of the lung for carbon monoxide. Associations were confirmed for mean probing depth, extent measures of CAL/probing depth and number of missing teeth. Periodontal disease was significantly associated with reduced lung volumes and airflow limitation in this general adult population sample. Systemic inflammation did not provide a mechanism linking both diseases.

  19. New models to predict depth of infiltration in endometrial carcinoma based on transvaginal sonography.

    PubMed

    De Smet, F; De Brabanter, J; Van den Bosch, T; Pochet, N; Amant, F; Van Holsbeke, C; Moerman, P; De Moor, B; Vergote, I; Timmerman, D

    2006-06-01

    Preoperative knowledge of the depth of myometrial infiltration is important in patients with endometrial carcinoma. This study aimed at assessing the value of histopathological parameters obtained from an endometrial biopsy (Pipelle de Cornier; results available preoperatively) and ultrasound measurements obtained after transvaginal sonography with color Doppler imaging in the preoperative prediction of the depth of myometrial invasion, as determined by the final histopathological examination of the hysterectomy specimen (the gold standard). We first collected ultrasound and histopathological data from 97 consecutive women with endometrial carcinoma and divided them into two groups according to surgical stage (Stages Ia and Ib vs. Stages Ic and higher). The areas (AUC) under the receiver-operating characteristics curves of the subjective assessment of depth of invasion by an experienced gynecologist and of the individual ultrasound parameters were calculated. Subsequently, we used these variables to train a logistic regression model and least squares support vector machines (LS-SVM) with linear and RBF (radial basis function) kernels. Finally, these models were validated prospectively on data from 76 new patients in order to make a preoperative prediction of the depth of invasion. Of all ultrasound parameters, the ratio of the endometrial and uterine volumes had the largest AUC (78%), while that of the subjective assessment was 79%. The AUCs of the blood flow indices were low (range, 51-64%). Stepwise logistic regression selected the degree of differentiation, the number of fibroids, the endometrial thickness and the volume of the tumor. Compared with the AUC of the subjective assessment (72%), prospective evaluation of the mathematical models resulted in a higher AUC for the LS-SVM model with an RBF kernel (77%), but this difference was not significant. Single morphological parameters do not improve the predictive power when compared with the subjective assessment of depth of myometrial invasion of endometrial cancer, and blood flow indices do not contribute to the prediction of stage. In this study an LS-SVM model with an RBF kernel gave the best prediction; while this might be more reliable than subjective assessment, confirmation by larger prospective studies is required. Copyright 2006 ISUOG. Published by John Wiley & Sons, Ltd.
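
    The model-comparison workflow can be sketched as below, with scikit-learn's RBF-kernel SVC standing in for the LS-SVM used in the study and synthetic predictors in place of the clinical and ultrasound variables; the sample size is only roughly matched to the two cohorts combined.

      # Sketch: logistic regression versus an RBF-kernel SVM (stand-in for LS-SVM),
      # evaluated by ROC AUC on a held-out set. Data are synthetic placeholders.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(3)
      n = 173                                      # roughly the two cohorts combined
      X = rng.normal(size=(n, 4))                  # synthetic predictors
      y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 1.2, n) > 0).astype(int)  # deep invasion

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

      logit = make_pipeline(StandardScaler(), LogisticRegression())
      svm_rbf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))

      for name, model in [("logistic regression", logit), ("RBF SVM", svm_rbf)]:
          model.fit(X_tr, y_tr)
          auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
          print(f"{name}: AUC = {auc:.2f}")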

  20. Estimating terrestrial snow depth with the Topex-Poseidon altimeter and radiometer

    USGS Publications Warehouse

    Papa, F.; Legresy, B.; Mognard, N.M.; Josberger, E.G.; Remy, F.

    2002-01-01

    Active and passive microwave measurements obtained by the dual-frequency Topex-Poseidon radar altimeter from the Northern Great Plains of the United States are used to develop a snow pack radar backscatter model. The model results are compared with daily time series of surface snow observations made by the U.S. National Weather Service. The model results show that Ku-band provides more accurate snow depth determinations than does C-band. Comparing the snow depth determinations derived from the Topex-Poseidon nadir-looking passive microwave radiometers with the oblique-looking Satellite Sensor Microwave Imager (SSM/I) passive microwave observations and surface observations shows that both instruments accurately portray the temporal characteristics of the snow depth time series. While both retrievals consistently underestimate the actual snow depths, the Topex-Poseidon results are more accurate.

  1. Methods of Measuring and Mapping of Landslide Areas

    NASA Astrophysics Data System (ADS)

    Skrzypczak, Izabela; Kokoszka, Wanda; Kogut, Janusz; Oleniacz, Grzegorz

    2017-12-01

    The demand for new investment areas and the limitations of current zoning explain why building on landslide-prone areas cannot be ruled out completely. Monitoring of areas at risk of landslides therefore becomes an important issue. Only appropriate monitoring, and the proper processing of the measurements into maps of areas at risk of landslides, enables the risk to be estimated and the relevant economic calculations to be made for a planned investment in such areas. The results of surface and in-depth monitoring of the landslides are supplemented with continuous observation of precipitation. Previous analyses and monitoring of landslides show that some of them are continuously active. GPS measurements, and especially laser scanning, provide unique activity data acquired on the surface of each individual landslide. The development of high-resolution numerical terrain models and the creation of differential models from successive measurements inform us about the size of deformation, both in units of distance (displacements) and of volume. Combining these data with information from in-depth monitoring allows the generation of a very reliable in-depth model of the landslide and, as a result, a proper calculation of the volume of colluvium. The programs presented in the article are a very effective tool for generating such in-depth landslide models. In Poland, the steps taken under the SOPO project, i.e. the monitoring and description of landslides, are absolutely necessary for social and economic reasons, and they may have a significant impact on the economy and finances of individual municipalities and of the country as a whole.
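
    The differential-model step described above reduces to subtracting two gridded surfaces and converting elevation change to volume per cell, as sketched below on synthetic rasters standing in for successive survey-derived terrain models.

      # Differential DEM sketch: elevation change between two surveys converted to
      # displaced volume per cell. Both rasters are synthetic.
      import numpy as np

      cell = 0.5                                         # grid resolution, m (assumed)
      rng = np.random.default_rng(4)
      dem_t0 = 100.0 + rng.normal(0, 0.02, (400, 400))   # first survey (synthetic)
      dem_t1 = dem_t0.copy()
      dem_t1[100:200, 150:300] -= 0.8                    # synthetic depletion zone
      dem_t1[250:320, 150:300] += 0.6                    # synthetic accumulation zone

      dz = dem_t1 - dem_t0
      # A 5 cm threshold filters survey noise before integrating volumes.
      loss = -dz[dz < -0.05].sum() * cell**2             # eroded volume, m^3
      gain = dz[dz > 0.05].sum() * cell**2               # deposited volume, m^3
      print(f"depletion {loss:,.0f} m^3, accumulation {gain:,.0f} m^3, "
            f"net {gain - loss:,.0f} m^3")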

  2. Verification and transfer of thermal pollution model. Volume 4: User's manual for three-dimensional rigid-lid model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Nwadike, E. V.; Sinha, S. E.

    1982-01-01

    The theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model is described. Model verification at two sites and a separate user's manual for each model are included. The 3-D model has two forms: free surface and rigid lid. The former allows a free air/water interface and is suited for cases where surface wave heights are significant compared to the mean water depth, such as estuaries and coastal regions. The latter is suited for surface wave heights that are small compared to the depth, because surface elevation is removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions. The free surface model also provides surface height variations with time.

  3. Breast cancer screening services: trade-offs in quality, capacity, outreach, and centralization.

    PubMed

    Güneş, Evrim D; Chick, Stephen E; Akşin, O Zeynep

    2004-11-01

    This work combines and extends previous work on breast cancer screening models by explicitly incorporating, for the first time, aspects of the dynamics of health care states, program outreach, and the screening volume-quality relationship in a service system model to examine the effect of public health policy and service capacity decisions on public health outcomes. We consider the impact of increasing standards for minimum reading volume to improve quality, expanding outreach with or without decentralization of service facilities, and the potential of queueing due to stochastic effects and limited capacity. The results indicate a strong relation between screening quality and the cost of screening and treatment, and emphasize the importance of accounting for service dynamics when assessing the performance of health care interventions. For breast cancer screening, increasing outreach without improving quality and maintaining capacity results in less benefit than predicted by standard models.

  4. Modeling of depth to base of Last Glacial Maximum and seafloor sediment thickness for the California State Waters Map Series, eastern Santa Barbara Channel, California

    USGS Publications Warehouse

    Wong, Florence L.; Phillips, Eleyne L.; Johnson, Samuel Y.; Sliter, Ray W.

    2012-01-01

    Models of the depth to the base of Last Glacial Maximum and sediment thickness over the base of Last Glacial Maximum for the eastern Santa Barbara Channel are a key part of the maps of shallow subsurface geology and structure for offshore Refugio to Hueneme Canyon, California, in the California State Waters Map Series. A satisfactory interpolation of the two datasets that accounted for regional geologic structure was developed using geographic information systems modeling and graphics software tools. Regional sediment volumes were determined from the model. Source data files suitable for geographic information systems mapping applications are provided.

  5. Modelling the effects of on-site greywater reuse and low flush toilets on municipal sewer systems.

    PubMed

    Penn, R; Schütze, M; Friedler, E

    2013-01-15

    On-site greywater reuse (GWR) and installation of water-efficient toilets (WET) reduce urban freshwater demand. Research on GWR and WET has generally overlooked the effects that GWR may have on municipal sewer systems. This paper discusses and quantifies these effects. The effects of GWR and WET, positive and negative, were studied by modelling a representative urban sewer system. GWR scenarios were modelled and analysed using the SIMBA simulation system. The results show that, as expected, the flow, velocity and proportional depth decrease as GWR increases. Nevertheless, the reduction is not evenly distributed throughout the day but mainly occurs during the morning and evening peaks. Examination of the effects of reduced toilet flush volumes revealed that in some of the GWR scenarios flows, velocities and proportional depths in the sewer were reduced, while in other GWR scenarios discharge volumes, velocities and proportional depths did not change. Further, it is indicated that as a result of GWR and installation of WET, sewer blockage rates are not expected to increase significantly. The results support the option to construct new sewer systems with smaller pipe diameters. The analysis shows that as the penetration of GWR systems increase, and with the installation of WET, concentrations of pollutants also increase. In GWR scenarios (when toilet flush volume is not reduced) the increase in pollutant concentrations is lower than the proportional reduction of sewage flow. Moreover, the results show that the spatial distribution of houses reusing GW does not significantly affect the parameters examined. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. System for the Analysis of Global Energy Markets - Vol. I, Model Documentation

    EIA Publications

    2003-01-01

    Documents the objectives and the conceptual and methodological approach used in the development of projections for the International Energy Outlook. The first volume of this report describes the System for the Analysis of Global Energy Markets (SAGE) methodology and provides an in-depth explanation of the equations of the model.

  7. Evaluating the accuracy of wear formulae for acetabular cup liners.

    PubMed

    Wu, James Shih-Shyn; Hsu, Shu-Ling; Chen, Jian-Horng

    2010-02-01

    This study proposes two methods for exploring the wear volume of a worn liner. The first method is a numerical method, in which SolidWorks software is used to create models of the worn out regions of liners at various wear directions and depths. The second method is an experimental one, in which a machining center is used to mill polyoxymethylene to manufacture worn and unworn liner models, then the volumes of the models are measured. The results show that the SolidWorks software is a good tool for presenting the wear pattern and volume of a worn liner. The formula provided by Ilchmann is the most suitable for computing liner volume loss, but is not accurate enough. This study suggests that a more accurate wear formula is required. This is crucial for accurate evaluation of the performance of hip components implanted in patients, as well as for designing new hip components.

  8. A physically-based method for predicting peak discharge of floods caused by failure of natural and constructed earthen dams

    USGS Publications Warehouse

    Walder, J.S.; O'Connor, J. E.; Costa, J.E.; ,

    1997-01-01

    We analyse a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope of the breach, and the parameter η = (V/D^3)(k/√(gD)), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape, r, and the side slope, but strongly on η, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(η) takes asymptotically distinct forms depending on whether η ≪ 1 or η ≫ 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
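
    A worked example of the dimensionless parameter defined above is given below for a hypothetical lake and breach; the curves that map η to peak discharge are in the paper and are not reproduced here.

      # Worked example of the dimensionless breach parameter
      # eta = (V / D^3) * (k / sqrt(g * D)) for a hypothetical lake.
      import math

      V = 5.0e6        # lake volume, m^3 (hypothetical)
      D = 20.0         # lake depth at the dam, m (hypothetical)
      k = 10.0 / 3600  # mean breach downcutting rate, m/s (10 m/h, hypothetical)
      g = 9.81

      eta = (V / D**3) * (k / math.sqrt(g * D))
      print(f"dimensionless lake volume V/D^3 = {V / D**3:.1f}")
      print(f"dimensionless erosion rate k/sqrt(gD) = {k / math.sqrt(g * D):.2e}")
      print(f"eta = {eta:.3f}; the asymptotic regimes discussed above are eta << 1 and eta >> 1")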

  9. Time-dependent source model of the Lusi mud volcano

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Rudolph, M. L.; Manga, M.

    2014-12-01

    The Lusi mud eruption, near Sidoarjo, East Java, Indonesia, began erupting in May 2006 and continues to erupt today. Previous analyses of surface deformation data suggested an exponential decay of the pressure in the mud source, but did not constrain the geometry and evolution of the source(s) from which the erupting mud and fluids ascend. To understand the spatiotemporal evolution of the mud and fluid sources, we apply a time-dependent inversion scheme to a densely populated InSAR time series of the surface deformation at Lusi. The SAR data set includes 50 images acquired on 3 overlapping tracks of the ALOS L-band satellite between May 2006 and April 2011. Following multitemporal analysis of this data set, the obtained surface deformation time series is inverted in a time-dependent framework to solve for the volume changes of distributed point sources in the subsurface. The volume change distribution resulting from this modeling scheme shows two zones of high volume change underneath Lusi at 0.5-1.5 km and 4-5.5km depth as well as another shallow zone, 7 km to the west of Lusi and underneath the Wunut gas field. The cumulative volume change within the shallow source beneath Lusi is ~2-4 times larger than that of the deep source, whilst the ratio of the Lusi shallow source volume change to that of Wunut gas field is ~1. This observation and model suggest that the Lusi shallow source played a key role in eruption process and mud supply, but that additional fluids do ascend from depths >4 km on eruptive timescales.

  10. Interpreting DNAPL saturations in a laboratory-scale injection using one- and two-dimensional modeling of GPR Data

    USGS Publications Warehouse

    Johnson, R.H.; Poeter, E.P.

    2005-01-01

    Ground-penetrating radar (GPR) is used to track a dense non-aqueous phase liquid (DNAPL) injection in a laboratory sand tank. Before modeling, the GPR data provide a qualitative image of DNAPL saturation and movement. One-dimensional (1D) GPR modeling provides a quantitative interpretation of DNAPL volume within a given thickness during and after the injection. DNAPL saturation in sublayers of a specified thickness could not be quantified because calibration of the 1D GPR model is nonunique when both permittivity and depth of multiple layers are unknown. One-dimensional GPR modeling of the sand tank indicates geometric interferences in a small portion of the tank. These influences are removed from the interpretation using an alternate matching target. Two-dimensional (2D) GPR modeling provides a qualitative interpretation of the DNAPL distribution through pattern matching and tests for possible 2D influences that are not accounted for in the 1D GPR modeling. Accurate quantitative interpretation of DNAPL volumes using GPR modeling requires (1) identification of a suitable target that produces a strong reflection and is not subject to any geometric interference; (2) knowledge of the exact depth of that target; and (3) use of two-way radar-wave travel times through the medium to the target to determine the permittivity of the intervening material, which eliminates reliance on signal amplitude. With geologic conditions that are suitable for GPR surveys (i.e., shallow depths, low electrical conductivities, and a known reflective target), the procedures in this laboratory study can be adapted to a field site to delineate shallow DNAPL source zones.

  11. Integrated orbital servicing and payloads study. Volume 2: Technical and cost analysis

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The details and background used in the investigation of orbital servicing and payloads are presented. Topics discussed include review of previous models, application of servicing to communications satellites, assessment of spacecraft servicing, cost of servicing, and launch vehicle effects on spacecraft.

  12. Irradiance attenuation coefficient in a stratified ocean - A local property of the medium

    NASA Technical Reports Server (NTRS)

    Gordon, H. R.

    1980-01-01

    The influence of optically important constituents of water on the absorption (a) and scattering (b) coefficients and the backscattering probability is considered, with emphasis placed on measuring the volume scattering function β(θ). Two stratification models are examined; one in which the phase function β(θ)/b is depth independent and only b/c is allowed to vary with optical depth, and the other in which both b/c and the phase function depend on depth. The results demonstrate that Gordon's (1977) technique of estimating a and b is applicable without change to a stratified ocean.

  13. Mechanics of airway and alveolar collapse in human breath-hold diving.

    PubMed

    Fitz-Clarke, John R

    2007-11-15

    A computational model of the human respiratory tract was developed to study airway and alveolar compression and re-expansion during deep breath-hold dives. The model incorporates the chest wall, supraglottic airway, trachea, branched airway tree, and elastic alveoli assigned time-dependent surfactant properties. Total lung collapse with degassing of all alveoli is predicted to occur around 235 m, much deeper than estimates for aquatic mammals. Hysteresis of the pressure-volume loop increases with maximum diving depth due to progressive alveolar collapse. Reopening of alveoli occurs stochastically as airway pressure overcomes adhesive and compressive forces on ascent. Surface area for gas exchange vanishes at collapse depth, implying that the risk of decompression sickness should reach a plateau beyond this depth. Pulmonary capillary transmural stresses cannot increase after local alveolar collapse. Consolidation of lung parenchyma might provide protection from capillary injury or leakage caused by vascular engorgement due to outward chest wall recoil at extreme depths.

  14. Associations of olfactory bulb and depth of olfactory sulcus with basal ganglia and hippocampus in patients with Parkinson's disease.

    PubMed

    Tanik, Nermin; Serin, Halil Ibrahim; Celikbilek, Asuman; Inan, Levent Ertugrul; Gundogdu, Fatma

    2016-05-04

    Parkinson's disease (PD) is a neurodegenerative disorder characterized by hyposmia in the preclinical stages. We investigated the relationships of olfactory bulb (OB) volume and olfactory sulcus (OS) depth with basal ganglia and hippocampal volumes. The study included 25 patients with PD and 40 age- and sex-matched control subjects. Idiopathic PD was diagnosed according to published diagnostic criteria. The Hoehn and Yahr (HY) scale, the motor subscale of the Unified Parkinson's Disease Rating Scale (UPDRS III), and the Mini-Mental State Examination (MMSE) were administered to participants. Volumetric measurements of olfactory structures, the basal ganglia, and hippocampus were performed using magnetic resonance imaging (MRI). OB volume and OS depth were significantly reduced in PD patients compared to healthy control subjects (p<0.001 and p<0.001, respectively). The OB and left putamen volumes were significantly correlated (p=0.048), and the depth of the right OS was significantly correlated with right hippocampal volume (p=0.018). We found significant correlations between OB and putamen volumes and OS depth and hippocampal volume. Our study is the first to demonstrate associations of olfactory structures with the putamen and hippocampus using MRI volumetric measurements. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Improved biovolume estimation of Microcystis aeruginosa colonies: A statistical approach.

    PubMed

    Alcántara, I; Piccini, C; Segura, A M; Deus, S; González, C; Martínez de la Escalera, G; Kruk, C

    2018-05-27

    The Microcystis aeruginosa complex (MAC) clusters many of the most common freshwater and brackish bloom-forming cyanobacteria. In monitoring protocols, biovolume estimation is a common approach to determining the biomass of MAC colonies and is useful for prediction purposes. Biovolume (μm3/mL) is calculated by multiplying organism abundance (org/L) by colonial volume (μm3/org). Colonial volume is estimated from geometric shapes and requires accurate measurements of dimensions using optical microscopy. A trade-off is posed between easy-to-measure but low-accuracy simple shapes (e.g. sphere) and time-costly but high-accuracy complex shapes (e.g. ellipsoid). The effects of overestimation on ecological studies and on management decisions associated with harmful blooms are significant because of the large sizes of MAC colonies. In this work, we aimed to increase the precision of MAC biovolume estimates by developing a statistical model based on two easy-to-measure dimensions. We analyzed field data from a wide environmental gradient (800 km) spanning freshwater to estuarine and seawater. We measured length, width and depth of ca. 5700 colonies under an inverted microscope and estimated colonial volume using three recommended geometrical shapes (sphere, prolate spheroid and ellipsoid). Because of the non-spherical shape of MAC, the ellipsoid gave the most accurate approximation, whereas the sphere overestimated colonial volume (by a factor of 3-80), especially for large colonies (MLD higher than 300 μm). The ellipsoid requires measuring three dimensions and is time-consuming. Therefore, we constructed different statistical models to predict organism depth from length and width. Splitting the data into training (2/3) and test (1/3) sets, all models resulted in low average training (1.41-1.44%) and testing (1.3-2.0%) errors. The models were also evaluated using three other independent datasets. A multiple linear model was finally selected to calculate MAC volume as an ellipsoid based on length and width. This work contributes to a better estimation of MAC volume, applicable to monitoring programs as well as to ecological research. Copyright © 2017. Published by Elsevier B.V.
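
    The approach can be sketched as below: a multiple linear regression predicts the hard-to-measure colony depth from length and width, and colonial volume is then computed as an ellipsoid, V = (π/6) L W D; the colony dimensions used here are synthetic placeholders, not the study's measurements.

      # Sketch: predict colony depth from length and width, then compute ellipsoid
      # volume and compare with the sphere approximation. Synthetic dimensions.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(5)
      L = rng.uniform(50, 600, 800)                    # colony length, um (synthetic)
      W = L * rng.uniform(0.5, 0.9, 800)               # width, um (synthetic)
      D = 0.2 * L + 0.5 * W + rng.normal(0, 15, 800)   # depth, um (synthetic relation)

      model = LinearRegression().fit(np.column_stack([L, W]), D)
      D_hat = model.predict(np.column_stack([L, W]))

      vol_ellipsoid = np.pi / 6 * L * W * D_hat        # um^3 per colony
      vol_sphere = np.pi / 6 * L**3                    # sphere of diameter L
      print(f"depth model: D = {model.intercept_:.1f} "
            f"+ {model.coef_[0]:.2f} L + {model.coef_[1]:.2f} W")
      print(f"median sphere/ellipsoid volume ratio: "
            f"{np.median(vol_sphere / vol_ellipsoid):.1f}")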

  16. On-line applications of numerical models in the Black Sea GIS

    NASA Astrophysics Data System (ADS)

    Zhuk, E.; Khaliulin, A.; Zodiatis, G.; Nikolaidis, A.; Nikolaidis, M.; Stylianou, Stavros

    2017-09-01

    The Black Sea Geographical Information System (GIS) is developed based on cutting-edge information technologies and provides automated data processing and visualization on-line. MapServer is used as the mapping service; the data are stored in a MySQL DBMS; PHP and Python modules are utilized for data access, processing, and exchange. New numerical models can be incorporated in the GIS environment as individual software modules, compiled for a server-based operating system, providing interaction with the GIS. A common interface allows setting the input parameters; the model then calculates the output data in specifically predefined files and formats. The calculation results are then passed to the GIS for visualization. Initially, a test scenario of integrating a numerical model into the GIS was performed, using software developed to describe two-dimensional tsunami propagation over variable basin depth, based on a linear long surface wave model valid for depths greater than 5 m. Furthermore, the well-established oil spill and trajectory 3-D model MEDSLIK (http://www.oceanography.ucy.ac.cy/medslik/) was integrated into the GIS with more advanced GIS functionality and capabilities. MEDSLIK is able to forecast and hindcast the trajectories of oil pollution and floating objects, using meteo-ocean data and the state of the oil spill. The MEDSLIK module interface allows a user to enter all the necessary oil spill parameters, i.e. date and time, rate of spill or spill volume, forecasting time, coordinates, oil spill type, currents, wind, and waves, as well as the specification of the output parameters. The entered data are passed on to MEDSLIK; the oil pollution characteristics are then calculated for pre-defined time steps. The results of the forecast or hindcast are then visualized on a map.

  17. Controls on Martian Hydrothermal Systems: Application to Valley Network and Magnetic Anomaly Formation

    NASA Technical Reports Server (NTRS)

    Harrison, Keith P.; Grimm, Robert E.

    2002-01-01

    Models of hydrothermal groundwater circulation can quantify limits to the role of hydrothermal activity in Martian crustal processes. We present here the results of numerical simulations of convection in a porous medium due to the presence of a hot intruded magma chamber. The parameter space includes magma chamber depth, volume, aspect ratio, and host rock permeability and porosity. A primary goal of the models is the computation of surface discharge. Discharge increases approximately linearly with chamber volume, decreases weakly with depth (at low geothermal gradients), and is maximized for equant-shaped chambers. Discharge increases linearly with permeability until limited by the energy available from the intrusion. Changes in the average porosity are balanced by changes in flow velocity and therefore have little effect. Water/rock ratios of approximately 0.1, obtained by other workers from models based on the mineralogy of the Shergotty meteorite, imply minimum permeabilities of 10^-16 m2 during hydrothermal alteration. If substantial vapor volumes are required for soil alteration, the permeability must exceed 10^-15 m2. The principal application of our model is to test the viability of hydrothermal circulation as the primary process responsible for the broad spatial correlation of Martian valley networks with magnetic anomalies. For host rock permeabilities as low as 10^-17 m2 and intrusion volumes as low as 50 km3, the total discharge due to intrusions building that part of the southern highlands crust associated with magnetic anomalies spans a range comparable to the inferred discharge from the overlying valley networks.

  18. Oklahoma's induced seismicity strongly linked to wastewater injection depth

    NASA Astrophysics Data System (ADS)

    Hincks, Thea; Aspinall, Willy; Cooke, Roger; Gernon, Thomas

    2018-03-01

    The sharp rise in Oklahoma seismicity since 2009 is due to wastewater injection. The role of injection depth is an open, complex issue, yet critical for hazard assessment and regulation. We developed an advanced Bayesian network to model joint conditional dependencies between spatial, operational, and seismicity parameters. We found that injection depth relative to crystalline basement most strongly correlates with seismic moment release. The joint effects of depth and volume are critical, as injection rate becomes more influential near the basement interface. Restricting injection depths to 200 to 500 meters above basement could reduce annual seismic moment release by a factor of 1.4 to 2.8. Our approach enables identification of subregions where targeted regulation may mitigate effects of induced earthquakes, aiding operators and regulators in wastewater disposal regions.

  19. Earth's Constant Mean Elevation: Implication for Long-Term Sea Level and Controlled by Ocean Lithosphere Dynamics in a Pitman World

    NASA Astrophysics Data System (ADS)

    Rowley, David

    2017-04-01

    On a spherical Earth, the mean elevation (∼-2440 m) would be everywhere at a mean Earth radius from the center. This directly links an elevation at the surface to physical dimensions of the Earth, including surface area and volume that are at most very slowly evolving components of the Earth system. Earth's mean elevation thus provides a framework within which to consider changes in heights of Earth's solid surface as a function of time. In this paper the focus will be on long-term, non-glacially controlled sea level. Long-term sea level has long been argued to be largely controlled by changes in ocean basin volume related to changes in the area-age distribution of oceanic lithosphere. As generally modeled by Pitman (1978) and subsequent workers, the age-depth relationship of oceanic lithosphere, including both the ridge depth and the coefficients describing the age-depth relationship, is assumed constant. This paper examines the consequences of adhering to these assumptions when placed within the larger framework of maintaining a constant mean radius of the Earth. Self-consistent estimates of long-term sea level height and changes in mean depth of the oceanic crust are derived from the assumption that the mean elevation and corresponding mean radius are unchanging aspects of Earth's shorter-term evolution. Within this context, changes in mean depth of the oceanic crust, corresponding with changes in mean age of the oceanic lithosphere, acting over the area of the oceanic crust represent a volume change that is required to be balanced by a compensating equal but opposite volume change under the area of the continental crust. Models of paleo-cumulative hypsometry derived from a starting glacial isostatic adjustment (GIA)-corrected ice-free hypsometry that conserve mean elevation provide a basis for understanding how these compensating changes impact global hypsometry and particularly estimates of global mean shoreline height. Paleo-shoreline height and areal extent of flooding can be defined as the height and corresponding cumulative area of the solid surface of the Earth at which the integral of area as a function of elevation, from the maximum depth upwards, equals the volume of ocean water filling it with respect to cumulative paleo-hypsometry. Present height of the paleo-shoreline is the height on the GIA-corrected cumulative hypsometry at an area equal to the areal extent of flooding. Paleogeographic estimates of the global extent of ocean flooding from the Middle Jurassic to end Eocene, when combined with conservation of mean elevation and ocean water volume, allow an explicit estimate of the paleo-height and present height of the paleo-shoreline. The best-fitting estimate of present height of the paleo-shoreline, equivalent to a long-term "eustatic" sea level curve, implies very modest (25±22 m) changes in long-term sea level above the ice-free sea level height of +40 m. These, in turn, imply quite limited changes in mean depth of the oceanic crust (15±11 m) and mean age of the oceanic lithosphere (∼62.1±2.4 My) since the Middle Jurassic.

  20. Detecting both melanoma depth and volume in vivo with a handheld photoacoustic probe

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Li, Guo; Zhu, Liren; Li, Chiye; Cornelius, Lynn A.; Wang, Lihong V.

    2016-03-01

    We applied a linear-array-based photoacoustic probe to detect the tumor depth and volume of melanin-containing melanoma in nude mice in vivo. We demonstrated the ability of this linear-array-based system to measure both the depth and volume of melanoma through phantom, ex vivo, and in vivo experiments. The volume detection ability also enables us to accurately calculate the rate of growth of the tumor, which is important in quantifying tumor activity. Our results show that this system can be used for clinical melanoma diagnosis and treatment at the bedside.

  1. Mathematical modeling of the heat transfer for determining the depth of thawing basin buildings with long service life

    NASA Astrophysics Data System (ADS)

    Sirditov, Ivan; Stepanov, Sergei

    2017-11-01

    In this paper, a numerical study of the problem of determining the thawing basin in permafrost soil beneath buildings with a long service life is carried out using two methods: the formulas of the set of rules 25.13330.2012, "Soil bases and foundations on permafrost soils," and a mathematical model of heat transfer.

  2. Air-gas exchange reevaluated: clinically important results of a computer simulation.

    PubMed

    Shunmugam, Manoharan; Shunmugam, Sudhakaran; Williamson, Tom H; Laidlaw, D Alistair

    2011-10-21

    The primary aim of this study was to evaluate the efficiency of air-gas exchange techniques and the factors that influence the final concentration of an intraocular gas tamponade. Parameters were varied to find the optimum method of performing an air-gas exchange in ideal circumstances. A computer model of the eye was designed using 3D software with fluid flow analysis capabilities. Factors such as the angular distance between ports, gas infusion gauge, and exhaust vent gauge and depth were varied in the model. Flow rate and axial length were also modulated to simulate faster injections and more myopic eyes, respectively. The flush volumes of gas required to achieve a 97% intraocular gas fraction were compared. Modulating individual factors did not reveal any clinically significant differences for the angular distance between ports, exhaust vent size and depth, or rate of gas injection. In combination, however, there was a 28% increase in air-gas exchange efficiency comparing the most efficient with the least efficient studied parameters in this model. The gas flush volume required to achieve a 97% gas fill also increased proportionately, at a ratio of 5.5 to 6.2 times the volume of the eye. A 35-mL flush is adequate for eyes up to 25 mm in axial length; however, longer eyes would require a much greater flush volume, and surgeons should consider using two separate 50-mL gas syringes to ensure an optimal gas concentration for eyes greater than 25 mm in axial length.
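
    As a point of comparison for the flush-volume ratios reported above, the sketch below works through an idealized, perfectly mixed washout calculation; the 97% target comes from the abstract, while the assumed eye volume and the perfect-mixing assumption are ours, so this is a lower bound rather than a reproduction of the study's fluid-flow model.

    ```python
    import math

    def flush_volume_for_fill(target_fraction, eye_volume_ml):
        """Idealized well-mixed washout: the gas fraction after flushing a volume V
        through a chamber of volume V_e is 1 - exp(-V / V_e)."""
        return -math.log(1.0 - target_fraction) * eye_volume_ml

    eye_volume_ml = 7.2  # assumed volume of an average adult eye, ml
    v_needed = flush_volume_for_fill(0.97, eye_volume_ml)
    print(f"perfect mixing: {v_needed:.1f} ml (~{v_needed / eye_volume_ml:.1f}x eye volume)")
    # The simulation above reports 5.5-6.2x the eye volume, i.e. mixing during a
    # real air-gas exchange is far from perfect, which is why a 35 mL flush (or
    # two 50 mL syringes for long eyes) is recommended rather than the ~3.5x ideal.
    ```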

  3. Models of railroad passenger-car requirements in the northeast corridor : volume II user's guide

    DOT National Transportation Integrated Search

    1976-09-30

    Models and techniques for determining passenger-car requirements in railroad service were developed and applied by a research project of which this is the final report. The report is published in two volumes. The solution and analysis of the Northeas...

  4. SToRM: A Model for Unsteady Surface Hydraulics Over Complex Terrain

    USGS Publications Warehouse

    Simoes, Francisco J.

    2014-01-01

    A two-dimensional (depth-averaged) finite volume Godunov-type shallow water model developed for flow over complex topography is presented. The model is based on an unstructured cell-centered finite volume formulation and a nonlinear strong stability preserving Runge-Kutta time stepping scheme. The numerical discretization is founded on the classical and well established shallow water equations in hyperbolic conservative form, but the convective fluxes are calculated using auto-switching Riemann and diffusive numerical fluxes. The model's implementation within a graphical user interface is discussed. Field application of the model is illustrated by utilizing it to estimate peak flow discharges in a flooding event of historic significance in Colorado, U.S.A., in 2013.
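
    To illustrate the kind of discretization this abstract describes, here is a minimal one-dimensional sketch of a Godunov-type finite volume update for the shallow water equations. It uses a simple Rusanov (local Lax-Friedrichs) flux and forward-Euler time stepping rather than SToRM's unstructured 2D mesh, auto-switching fluxes, and strong-stability-preserving Runge-Kutta scheme; the grid and dam-break initial condition are illustrative.

    ```python
    import numpy as np

    g = 9.81

    def flux(h, hu):
        """Physical flux of the 1D shallow water equations."""
        u = np.where(h > 1e-8, hu / np.maximum(h, 1e-8), 0.0)
        return np.array([hu, hu * u + 0.5 * g * h * h])

    def rusanov(hL, huL, hR, huR):
        """Rusanov (local Lax-Friedrichs) numerical flux at one cell interface."""
        uL = huL / hL if hL > 1e-8 else 0.0
        uR = huR / hR if hR > 1e-8 else 0.0
        smax = max(abs(uL) + np.sqrt(g * hL), abs(uR) + np.sqrt(g * hR))
        FL, FR = flux(hL, huL), flux(hR, huR)
        return 0.5 * (FL + FR) - 0.5 * smax * np.array([hR - hL, huR - huL])

    # Dam-break test: deep water on the left, shallow on the right.
    nx, dx, t_end = 200, 1.0, 10.0
    h = np.where(np.arange(nx) < nx // 2, 2.0, 1.0)
    hu = np.zeros(nx)

    t = 0.0
    while t < t_end:
        c = np.abs(hu / np.maximum(h, 1e-8)) + np.sqrt(g * h)
        dt = 0.4 * dx / c.max()              # CFL-limited time step
        F = np.array([rusanov(h[i], hu[i], h[i + 1], hu[i + 1])
                      for i in range(nx - 1)]).T
        # Update interior cells only; the two end cells are held fixed as
        # crude boundary conditions (the wave never reaches them here).
        h[1:-1]  -= dt / dx * (F[0, 1:] - F[0, :-1])
        hu[1:-1] -= dt / dx * (F[1, 1:] - F[1, :-1])
        t += dt

    print("depth range after dam break:", h.min(), h.max())
    ```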

  5. Limited role for thermal erosion by turbulent lava in proximal Athabasca Valles, Mars

    PubMed Central

    Cataldo, Vincenzo; Williams, David A.; Dundas, Colin M.; Keszthelyi, Laszlo P.

    2017-01-01

    The Athabasca Valles flood lava is among the most recent (<50 Ma) and best preserved effusive lava flows on Mars and was probably emplaced turbulently. The Williams et al. [2005] model of thermal erosion by lava has been applied to what we term “proximal Athabasca,” the 75 km long upstream portion of Athabasca Valles. For emplacement volumes of 5000 and 7500 km3 and average flow thicknesses of 20 and 30 m, the duration of the eruption varies between ~11 and ~37 days. The erosion of the lava flow substrate is investigated for three eruption temperatures (1270°C, 1260°C, and 1250°C), and volatile contents equivalent to 0–65 vol% bubbles. The largest erosion depths of ~3.8–7.5 m are at the lava source, for 20 m thick and bubble-free flows that erupted at their liquidus temperature (1270°C). A substrate containing 25 vol% ice leads to maximum erosion. A lava temperature 20°C below liquidus reduces erosion depths by a factor of ~2.2. If flow viscosity increases with increasing bubble content in the lava, the presence of 30–50 vol % bubbles leads to erosion depths lower than those relative to bubble-free lava by a factor of ~2.4. The presence of 25 vol % ice in the substrate increases erosion depths by a factor of 1.3. Nevertheless, modeled erosion depths, consistent with the emplacement volume and flow duration constraints, are far less than the depth of the channel (~35–100 m). We conclude that thermal erosion does not appear to have had a major role in excavating Athabasca Valles. PMID:29082120
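
    The Williams et al. model couples turbulent heat transfer, lava rheology, and substrate melting; the fragment below is only a back-of-the-envelope sketch of the core energy balance, in which the melting (erosion) rate is the convective heat flux from the lava divided by the energy needed to heat and melt a unit volume of substrate. Every parameter value is an assumed placeholder (chosen so the output lands in the metres range quoted above), not a value from the study.

    ```python
    # Back-of-the-envelope thermal erosion rate:
    #   u = h_T * (T_lava - T_melt) / E_sub
    # where E_sub is the energy per unit volume needed to heat the substrate to
    # its melting point and then melt it. All values below are assumed.

    h_T    = 50.0      # convective heat transfer coefficient, W/(m^2 K) (assumed)
    T_lava = 1270.0    # lava temperature, deg C (liquidus case from the abstract)
    T_melt = 1100.0    # substrate melting temperature, deg C (assumed)
    T_0    = -60.0     # initial substrate temperature, deg C (assumed cold Mars surface)
    rho_s  = 2000.0    # substrate bulk density, kg/m^3 (assumed)
    c_s    = 800.0     # substrate specific heat, J/(kg K) (assumed)
    L_s    = 4.0e5     # latent heat of fusion of the substrate, J/kg (assumed)

    E_sub = rho_s * (c_s * (T_melt - T_0) + L_s)   # J/m^3
    u = h_T * (T_lava - T_melt) / E_sub            # erosion rate, m/s

    for days in (11, 37):                          # eruption durations from the abstract
        print(f"{days:2d} days -> erosion depth ~ {u * days * 86400:.1f} m")
    ```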

  6. Limited role for thermal erosion by turbulent lava in proximal Athabasca Valles, Mars

    USGS Publications Warehouse

    Cataldo, Vincenzo; Williams, David A.; Dundas, Colin M.; Kestay, Laszlo P.

    2015-01-01

    The Athabasca Valles flood lava is among the most recent (<50 Ma) and best preserved effusive lava flows on Mars and was probably emplaced turbulently. The Williams et al. (2005) model of thermal erosion by lava has been applied to what we term “proximal Athabasca,” the 75 km long upstream portion of Athabasca Valles. For emplacement volumes of 5000 and 7500 km3and average flow thicknesses of 20 and 30 m, the duration of the eruption varies between ~11 and ~37 days. The erosion of the lava flow substrate is investigated for three eruption temperatures (1270°C, 1260°C, and 1250°C), and volatile contents equivalent to 0–65 vol % bubbles. The largest erosion depths of ~3.8–7.5 m are at the lava source, for 20 m thick and bubble-free flows that erupted at their liquidus temperature (1270°C). A substrate containing 25 vol % ice leads to maximum erosion. A lava temperature 20°C below liquidus reduces erosion depths by a factor of ~2.2. If flow viscosity increases with increasing bubble content in the lava, the presence of 30–50 vol % bubbles leads to erosion depths lower than those relative to bubble-free lava by a factor of ~2.4. The presence of 25 vol % ice in the substrate increases erosion depths by a factor of 1.3. Nevertheless, modeled erosion depths, consistent with the emplacement volume and flow duration constraints, are far less than the depth of the channel (~35–100 m). We conclude that thermal erosion does not appear to have had a major role in excavating Athabasca Valles.

  7. Estimated probabilities, volumes, and inundation-area depths of potential postwildfire debris flows from Carbonate, Slate, Raspberry, and Milton Creeks, near Marble, Gunnison County, Colorado

    USGS Publications Warehouse

    Stevens, Michael R.; Flynn, Jennifer L.; Stephens, Verlin C.; Verdin, Kristine L.

    2011-01-01

    During 2009, the U.S. Geological Survey, in cooperation with Gunnison County, initiated a study to estimate the potential for postwildfire debris flows to occur in the drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble, Colorado. Currently (2010), these drainage basins are unburned but could be burned by a future wildfire. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of postwildfire debris-flow occurrence and debris-flow volumes for drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble. Data for the postwildfire debris-flow models included drainage basin area; area burned and burn severity; percentage of burned area; soil properties; rainfall total and intensity for the 5- and 25-year-recurrence, 1-hour-duration-rainfall; and topographic and soil property characteristics of the drainage basins occupied by the four creeks. A quasi-two-dimensional floodplain computer model (FLO-2D) was used to estimate the spatial distribution and the maximum instantaneous depth of the postwildfire debris-flow material during debris flow on the existing debris-flow fans that issue from the outlets of the four major drainage basins. The postwildfire debris-flow probabilities at the outlet of each drainage basin range from 1 to 19 percent for the 5-year-recurrence, 1-hour-duration rainfall, and from 3 to 35 percent for 25-year-recurrence, 1-hour-duration rainfall. The largest probabilities for postwildfire debris flow are estimated for Raspberry Creek (19 and 35 percent), whereas estimated debris-flow probabilities for the three other creeks range from 1 to 6 percent. The estimated postwildfire debris-flow volumes at the outlet of each creek range from 7,500 to 101,000 cubic meters for the 5-year-recurrence, 1-hour-duration rainfall, and from 9,400 to 126,000 cubic meters for the 25-year-recurrence, 1-hour-duration rainfall. The largest postwildfire debris-flow volumes were estimated for Carbonate Creek and Milton Creek drainage basins, for both the 5- and 25-year-recurrence, 1-hour-duration rainfalls. Results from FLO-2D modeling of the 5-year and 25-year recurrence, 1-hour rainfalls indicate that the debris flows from the four drainage basins would reach or nearly reach the Crystal River. The model estimates maximum instantaneous depths of debris-flow material during postwildfire debris flows that exceeded 5 meters in some areas, but the differences in model results between the 5-year and 25-year recurrence, 1-hour rainfalls are small. Existing stream channels or topographic flow paths likely control the distribution of debris-flow material, and the difference in estimated debris-flow volume (about 25 percent more volume for the 25-year-recurrence, 1-hour-duration rainfall compared to the 5-year-recurrence, 1-hour-duration rainfall) does not seem to substantially affect the estimated spatial distribution of debris-flow material. Historically, the Marble area has experienced periodic debris flows in the absence of wildfire. This report estimates the probability and volume of debris flow and maximum instantaneous inundation area depths after hypothetical wildfire and rainfall. This postwildfire debris-flow report does not address the current (2010) prewildfire debris-flow hazards that exist near Marble.
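
    The empirical postwildfire models referenced above take a logistic-regression form for debris-flow probability and a regression on the logarithm of volume. The sketch below shows only that general structure with made-up coefficients and basin values, purely to illustrate how burned area, soils, and storm rainfall enter such models; it does not reproduce the published USGS equations or the results for these basins.

    ```python
    import math

    def debris_flow_probability(x):
        """Logistic form used by empirical postwildfire debris-flow models:
        P = exp(x) / (1 + exp(x)), where x is a linear combination of basin,
        burn, soil, and rainfall predictors."""
        return math.exp(x) / (1.0 + math.exp(x))

    # Hypothetical coefficients and predictor values (NOT the published model).
    b0, b_burn, b_soil, b_rain = -3.0, 0.04, 0.6, 0.5
    pct_burned_high_severity = 40.0   # percent of basin burned at high severity
    clay_content = 12.0               # representative soil clay content, percent
    storm_intensity = 25.0            # 1-hour rainfall intensity, mm/h

    x = b0 + b_burn * pct_burned_high_severity - b_soil * (clay_content / 10.0) \
        + b_rain * (storm_intensity / 10.0)
    p = debris_flow_probability(x)

    # Volume models are typically fit as ln(V); again, hypothetical coefficients.
    a0, a_area, a_relief, a_rain = 7.0, 0.5, 0.3, 0.4
    basin_area_km2, basin_relief_km = 5.0, 0.8
    ln_v = a0 + a_area * math.log(basin_area_km2) + a_relief * basin_relief_km \
           + a_rain * math.sqrt(storm_intensity)
    print(f"probability ~ {p:.2f}, volume ~ {math.exp(ln_v):,.0f} m^3")
    ```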

  8. Renal Artery Embolization Combined With Radiofrequency Ablation in a Porcine Kidney Model: Effect of Small and Narrowly Calibrated Microparticles as Embolization Material on Coagulation Diameter, Volume, and Shape

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sommer, C. M., E-mail: christof.sommer@med.uni-heidelberg.de; Kortes, N.; Zelzer, S.

    2011-02-15

    The purpose of this study was to evaluate the effect of renal artery embolization with small and narrowly calibrated microparticles on the coagulation diameter, volume, and shape of radiofrequency ablations (RFAs) in porcine kidneys. Forty-eight RFAs were performed in 24 kidneys of 12 pigs. In 6 animals, bilateral renal artery embolization was performed with small and narrowly calibrated microparticles. Upper and lower kidney poles were ablated with identical system parameters. Applying three-dimensional segmentation software, RFAs were segmented on registered 2 mm-thin macroscopic slices. Length, depth, width, volume_segmented, and volume_calculated were determined to describe the size of the RFAs. To evaluate the shape of the RFAs, depth-to-width ratio (perfect symmetry-to-lesion length was indicated by a ratio of 1), sphericity ratio (a perfect sphere was indicated by a sphericity ratio of 1), eccentricity (a perfect sphere was indicated by an eccentricity of 0), and circularity (a perfect circle was indicated by a circularity of 1) were determined. Embolized compared with nonembolized RFAs showed significantly greater depth (23.4 ± 3.6 vs. 17.2 ± 1.8 mm; p < 0.001) and width (20.1 ± 2.9 vs. 12.6 ± 3.7 mm; p < 0.001); significantly larger volume_segmented (8.6 ± 3.2 vs. 3.0 ± 0.7 ml; p < 0.001) and volume_calculated (8.4 ± 3.0 ml vs. 3.3 ± 1.1 ml; p < 0.001); significantly lower depth-to-width (1.17 ± 0.10 vs. 1.48 ± 0.44; p < 0.05), sphericity (1.55 ± 0.44 vs. 1.96 ± 0.43; p < 0.01), and eccentricity (0.84 ± 0.61 vs. 1.73 ± 0.91; p < 0.01) ratios; and significantly greater circularity (0.62 ± 0.14 vs. 0.45 ± 0.16; p < 0.01). Renal artery embolization with small and narrowly calibrated microparticles affected the coagulation diameter, volume, and shape of RFAs in porcine kidneys. Embolized RFAs were significantly larger and more spherical compared with nonembolized RFAs.

  9. Modeling the Structure of Partnership between Researchers and Front-Line Service Providers: Strengthening Collaborative Public Health Research

    ERIC Educational Resources Information Center

    Pinto, Rogério M.; Wall, Melanie M.; Spector, Anya Y.

    2014-01-01

    Partnerships between HIV researchers and service providers are essential for reducing the gap between research and practice. Community-Based Participatory Research principles guided this cross-sectional study, combining 40 in-depth interviews with surveys of 141 providers in 24 social service agencies in New York City. We generated the…

  10. Improving sand and gravel utilization and land-use planning. - 3D-modelling gravel resources with geospatial data.

    NASA Astrophysics Data System (ADS)

    Rolstad Libach, Lars; Wolden, Knut; Dagestad, Atle; Eskil Larsen, Bjørn

    2017-04-01

    The Norwegian aggregate industry produces approximately 14 million tons of sand and gravel aggregates annually, to a value of approximately 100 million Euros. Utilization of aggregates is often linked to land-use conflicts and complex environmental impacts at the extraction site. These topics are managed at the local municipal level in Norway. The Geological Survey of Norway has a database and a web map service with information about sand and gravel deposits with considerable volumes, together with an evaluation of their importance. Some of the deposits cover large areas where the land-use conflicts are high. To ease and improve land-use planning, safeguard other important resources such as groundwater, and support sustainable utilization of sand and gravel resources, there is a need for more detailed information on already mapped important resources. Detailed 3D models of gravel deposits are a tool for better land-use and resource management. By combining seismic, GPR and resistivity geophysical profile data, borehole data, Quaternary maps and lidar surface data, it has been possible to make 3D models of deposits and to further investigate the possibilities for distinguishing different qualities and volumes. Good datasets and a detailed resource map are prerequisites for assessing geological resources for planners, extractors and neighbours. Future challenges lie in the use, and combination, of often old geophysical data. What kind of information can be extracted from depth data to argue for a more detailed delineation of resources?

  11. Design Recommendations for Concrete Tunnel Linings : Volume II. Summary of Research and Proposed Recommendations.

    DOT National Transportation Integrated Search

    1983-11-01

    The report presents design recommendations for concrete tunnel linings for transportation tunnels. The recommendations were developed as a result of in-depth analysis and model testing of the behavior of concrete tunnel linings. The research addressed pro...

  12. Estimating merchantable tree volume in Oregon and Washington using stem profile models

    Treesearch

    Raymond L. Czaplewski; Amy S. Brown; Dale G. Guenther

    1989-01-01

    The profile model of Max and Burkhart was fit to eight tree species in the Pacific Northwest Region (Oregon and Washington) of the Forest Service. Most estimates of merchantable volume had an average error less than 10% when applied to independent test data for three national forests.
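
    Profile (taper) models such as the Max and Burkhart form predict stem diameter as a function of height; merchantable volume then follows by integrating cross-sectional area from stump height up to the height at which the stem tapers to the merchantable top diameter. The sketch below uses a deliberately simple, hypothetical taper function in place of the fitted segmented-polynomial coefficients from the study.

    ```python
    import numpy as np

    def taper_diameter(h, total_height=30.0, dbh=40.0):
        """Hypothetical taper curve: diameter (cm) shrinks from roughly DBH near
        the base to zero at the tip. Stands in for a fitted Max-Burkhart profile."""
        rel = 1.0 - h / total_height
        return dbh * rel ** 0.8

    def merchantable_volume(top_diameter_cm, stump_height=0.3,
                            total_height=30.0, dbh=40.0, n=2000):
        """Sum cross-sectional area (m^2) over height up to the point where the
        taper curve reaches the merchantable top diameter."""
        h = np.linspace(stump_height, total_height, n)
        d = taper_diameter(h, total_height, dbh)
        area = np.pi * (d / 100.0) ** 2 / 4.0            # cm -> m, area in m^2
        merchantable = np.where(d >= top_diameter_cm, area, 0.0)
        return float(np.sum(merchantable) * (h[1] - h[0]))

    for top in (7.5, 10.0, 15.0):
        print(f"top diameter {top:4.1f} cm -> stem volume {merchantable_volume(top):.2f} m^3")
    ```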

  13. The Incredible Shrinking Cup Lab: An Investigation of the Effect of Depth and Water Pressure on Polystyrene

    EPA Science Inventory

    This activity familiarizes students with the effects of increased depth on pressure and volume. Students will determine the volume of polystyrene cups before and after they are submerged to differing depths in the ocean and the Laurentian Great Lakes. Students will also calculate...
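
    The pressure-volume reasoning behind the activity can be shown with a short Boyle's-law calculation: absolute pressure grows by roughly one atmosphere per 10 m of water depth, so the gas trapped in the foam shrinks in inverse proportion. This is a sketch of the ideal-gas approximation only; real polystyrene compaction is permanent and not purely Boyle's-law behavior.

    ```python
    RHO_WATER = 1025.0   # kg/m^3, seawater (use ~1000 for the Great Lakes)
    G = 9.81             # m/s^2
    P_ATM = 101325.0     # Pa

    def relative_gas_volume(depth_m, rho=RHO_WATER):
        """Boyle's law: V/V0 = P0 / (P0 + rho*g*depth) for trapped gas."""
        return P_ATM / (P_ATM + rho * G * depth_m)

    for depth in (10, 100, 500, 1000, 2000):
        print(f"{depth:5d} m: gas volume ~ {relative_gas_volume(depth)*100:5.1f}% of surface volume")
    ```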

  14. Developing a Model for Predicting Snowpack Parameters Affecting Vehicle Mobility,

    DTIC Science & Technology

    1983-05-01

    National Weather Service River Forecast System - snow accumulation and ablation model. NOAA Technical Memorandum NWS HYDRO-17, National Weather Service, Silver Spring... This model indexes each physical process that occurs in the snowpack to the air temperature. Although this results in a signifi... Symbols: P pressure, p probability, Q energy, q specific humidity, R precipitation, S snowfall depth, T air temperature, t time, U wind speed, V water vapor.

  15. Structural and functional correlates of hypnotic depth and suggestibility.

    PubMed

    McGeown, William Jonathan; Mazzoni, Giuliana; Vannucci, Manila; Venneri, Annalena

    2015-02-28

    This study explores whether self-reported depth of hypnosis and hypnotic suggestibility are associated with individual differences in neuroanatomy and/or levels of functional connectivity. Twenty-nine people varying in suggestibility were recruited and underwent structural, and after a hypnotic induction, functional magnetic resonance imaging at rest. We used voxel-based morphometry to assess the correlation of grey matter (GM) and white matter (WM) against the independent variables: depth of hypnosis, level of relaxation and hypnotic suggestibility. Functional networks identified with independent components analysis were regressed with the independent variables. Hypnotic depth ratings were positively correlated with GM volume in the frontal cortex and the anterior cingulate cortex (ACC). Hypnotic suggestibility was positively correlated with GM volume in the left temporal-occipital cortex. Relaxation ratings did not correlate significantly with GM volume and none of the independent variables correlated with regional WM volume measures. Self-reported deeper levels of hypnosis were associated with less connectivity within the anterior default mode network. Taken together, the results suggest that the greater GM volume in the medial frontal cortex and ACC, and lower connectivity in the DMN during hypnosis facilitate experiences of greater hypnotic depth. The patterns of results suggest that hypnotic depth and hypnotic suggestibility should not be considered synonyms. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  16. Endoluminal ultrasound applicator configurations utilizing deployable arrays, reflectors and lenses to augment and dynamically adjust treatment volume, gain, and depth

    NASA Astrophysics Data System (ADS)

    Adams, Matthew S.; Salgaonkar, Vasant A.; Sommer, Graham; Diederich, Chris J.

    2017-02-01

    Endoluminal high-intensity ultrasound offers spatially-precise thermal ablation of tissues adjacent to body lumens, but is constrained in treatment volume and penetration depth by the effective aperture of integrated transducers, which are limited in size to enable delivery through anatomical passages, endoscopic instrumentation, or laparoscopic ports. This study introduced and investigated three distinct endoluminal ultrasound applicator designs that can be delivered in a compact state then deployed or expanded at the target luminal site to increase the effective therapeutic aperture. The first design incorporated an array of planar transducers which could be unfolded at specific angles of convergence between the transducers. Two alternative designs consisted of fixed transducer sources surrounded by an expandable multicompartment balloon that contained acoustic reflector and dynamically-adjustable fluid lenses compartments. Parametric studies of acoustic output were performed across device design parameters via the rectangular radiator and secondary sources methods. Biothermal models were used to simulate resulting temperature distributions in three-dimensional heterogeneous tissue models. Simulations indicate that a deployable transducer array can increase volumetric coverage and penetration depth by 80% and 20%, respectively, while permitting more conformal thermal lesion shapes based on the degree of convergence between the transducers. The applicator designs incorporating reflector and fluid lenses demonstrated enhanced focal gain and penetration depth that increased with the diameter of the expanded reflector-lens balloon. Thermal simulations of assemblies with 12 mm compact profiles and 50 mm expanded balloon diameters demonstrated generation of localized thermal lesions at depths up to 10 cm in liver tissue.

  17. Customer premises services market demand assessment 1980 - 2000: Volume 2

    NASA Technical Reports Server (NTRS)

    Gamble, R. B.; Saporta, L.; Heidenrich, G. A.

    1983-01-01

    Potential customer premises service (CPS), telecommunication services, potential CPS user classes, a primary research survey, comparative economics, market demand forecasts, distance distribution of traffic, segmentation of market demand, and a nationwide traffic distribution model are discussed.

  18. Crustal P-Wave Speed Structure Under Mount St. Helens From Local Earthquake Tomography

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Moran, S. C.

    2006-12-01

    We used local earthquake data to model the P-wave speed structure of Mount St. Helens with the aim of improving our understanding of the active magmatic system. Our study used new data recorded by a dense array of 19 broadband seismographs that were deployed during the current eruption together with permanent network data recorded since the May 18, 1980 eruption. Most earthquakes around Mount St. Helens during the last 25 years were clustered in a narrow vertical column beneath the volcano from the surface to a depth of about 10 km. Earthquakes also occurred in a well-defined zone extending to the NNW from the volcano known as the St. Helens Seismic Zone (SHZ). During the current eruption, earthquakes have been confined to within 3 km of the surface beneath the crater floor. These earthquakes apparently radiate little shear-wave energy and the shear arrivals are usually contaminated by surface waves. Thus, we focused on developing an improved P- wave speed model. We used two data sources: (1) the short-period, vertical-component Pacific Northwest Seismograph Network and (2) new data recorded on a temporary array between June 2005 and February 2006. We first solved for a minimum one-dimensional model, incorporating the Moho depth found during an earlier wide-aperture refraction study. The three-dimensional model was solved simultaneously with hypocenter locations using the computer code SIMULPS14, extended for full three-dimensional ray shooting. We modified the code to force raypaths to remain below the ground surface. We began with large grid spacing and progressed to smaller grid spacing where the earthquakes and stations were denser. In this way we achieve a 40 km by 40 km regional model as well as a 10 km by 10 km fine-scale model directly beneath Mount St. Helens. The large-scale model is consistent with mapped geology and other geophysical data in the vicinity of Mount St. Helens. For example, there is a zone of relatively low velocities (-2% to -5% lower than background model) from 3 to at least 10 km depth extending NNW from the volcano parallel to the SHZ. The low-wave- speed zone coincides with a linear magnetic low, the western edge of a magnetotelluric conductive anomaly, and a localized gravity low. The coincidence of the volcano and these anomalies indicates this preexisting zone of weakness may control the location of Mount St. Helens, as has been suggested by previous investigators. Prominent high-wave-speed anomalies (+3% to +6% relative to background) on either side of this zone are due to plutons, which are also imaged with other geophysical data. Fine-scale modeling of the upper crust directly beneath Mount St. Helens reveals subtle structures not seen in the larger-scale model. The key structure is a cylindrical volume with speeds almost 10% slower than the background model extending from 6 to at least 10 km depth. The vertical, cylindrical volume of earthquakes, which reaches from the surface to more than 10 km depth, splits around this low-wave-speed volume creating an aseismic zone coincident with the low P-wave speeds. We interpret this volume as a melt-rich reservoir surrounded by hot rock.

  19. Summary of the Georgia Agricultural Water Conservation and Metering Program and evaluation of methods used to collect and analyze irrigation data in the middle and lower Chattahoochee and Flint River basins, 2004-2010

    USGS Publications Warehouse

    Torak, Lynn J.; Painter, Jaime A.

    2011-01-01

    Since receiving jurisdiction from the State Legislature in June 2003 to implement the Georgia Agricultural Water Conservation and Metering Program, the Georgia Soil and Water Conservation Commission (Commission) by year-end 2010 installed more than 10,000 annually read water meters and nearly 200 daily reporting, satellite-transmitted, telemetry sites on irrigation systems located primarily in southern Georgia. More than 3,000 annually reported meters and 50 telemetry sites were installed during 2010 alone. The Commission monitored rates and volumes of agricultural irrigation supplied by groundwater, surface-water, and well-to-pond sources to inform water managers on the patterns and amounts of such water use and to determine effective and efficient resource utilization. Summary analyses of 4 complete years of irrigation data collected from annually read water meters in the middle and lower Chattahoochee and Flint River basins during 2007-2010 indicated that groundwater-supplied fields received slightly more irrigation depth per acre than surface-water-supplied fields. Year 2007 yielded the largest disparity between irrigation depth supplied by groundwater and surface-water sources as farmers responded to severe-to-exceptional drought conditions with increased irrigation. Groundwater sources (wells and well-to-pond systems) outnumbered surface-water sources by a factor of five; each groundwater source applied a third more irrigation volume than surface water; and, total irrigation volume from groundwater exceeded that of surface water by a factor of 6.7. Metered irrigation volume indicated a pattern of low-to-high water use from northwest to southeast that could point to relations between agricultural water use, water-resource potential and availability, soil type, and crop patterns. Normalizing metered irrigation-volume data by factoring out irrigated acres allowed irrigation water use to be expressed as an irrigation depth and nearly eliminated the disparity between volumes of applied irrigation derived from groundwater and surface water. Analysis of per-acre irrigation depths provided a commonality for comparing irrigation practices across the entire range of field sizes in southern Georgia and indicated underreporting of irrigated acres for some systems. Well-to-pond systems supplied irrigation at depths similar to groundwater and can be combined with groundwater irrigation data for subsequent analyses. Average irrigation depths during 2010 indicated an increase from average irrigation depths during 2008 and 2009, most likely the result of relatively dry conditions during 2010 compared to conditions in 2008 and 2009. Geostatistical models facilitated estimation of irrigation water use for unmetered systems and demonstrated usefulness in redesigning the telemetry network. Geospatial analysis evaluated the ability of the telemetry network to represent annually reported water-meter data and presented an objective, unbiased method for revising the network.
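
    The normalization step described above, factoring irrigated acres out of metered volume so that systems of different sizes can be compared as an applied depth, is simple unit arithmetic; the meter readings below are illustrative rather than Commission data.

    ```python
    ACRE_M2 = 4046.86          # square metres per acre
    MM_PER_INCH = 25.4

    def irrigation_depth_inches(volume_m3, irrigated_acres):
        """Convert a metered irrigation volume into an applied depth."""
        depth_m = volume_m3 / (irrigated_acres * ACRE_M2)
        return depth_m * 1000.0 / MM_PER_INCH

    # Illustrative systems: a small surface-water field and a large groundwater field.
    systems = [("surface-water pivot",  90_000.0,  60.0),
               ("groundwater pivot",   300_000.0, 200.0)]
    for name, vol_m3, acres in systems:
        print(f"{name:20s}: {irrigation_depth_inches(vol_m3, acres):4.1f} inches applied")
    # Expressed as a depth, the two very different total volumes turn out to be
    # nearly identical applications per acre, which is the point of normalizing.
    ```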

  20. Economic valuation of flood mitigation services: A case study from the Otter Creek, VT.

    NASA Astrophysics Data System (ADS)

    Galford, G. L.; Ricketts, T.; Bryan, K. L.; ONeil-Dunne, J.; Polasky, S.

    2014-12-01

    The ecosystem services provided by wetlands are widely recognized but difficult to quantify. In particular, estimating the effect of landcover and land use on downstream flood outcomes remains challenging, but is increasingly important in light of climate change predictions of increased precipitation in many areas. Economic valuation can help incorporate ecosystem services into decisions and enable communities to plan for climate and flood resiliency. Here we estimate the economic value of Otter Creek wetlands for Middlebury, VT in mitigating the flood that followed Tropical Storm Irene, as well as for ten historic floods. Observationally, hydrographs above and below the wetlands in the case of each storm indicated the wetlands functioned as a temporary reservoir, slowing the delivery of water to Middlebury. We compare observed floods, based on Middlebury's hydrograph, with simulated floods for scenarios without wetlands. To simulate these "without wetlands" scenarios, we assume the same volume of water was delivered to Middlebury, but in a shorter time pulse similar to a hydrograph upstream of the wetlands. For scenarios with and without wetlands, we map the spatial extent of flooding using LiDAR digital elevation data. We then estimate flood depth at each affected building, and calculate monetary losses as a function of the flood depth and house value using established depth-damage relationships. For example, we expect damages equal to 20% of the house's value for a flood depth of two feet in a two-story home with a basement. We define the value of flood mitigation services as the difference in damages between the with- and without-wetlands scenarios, and find that the Otter Creek wetlands reduced flood damage in Middlebury by 88% following Hurricane Irene. Using the 10 additional historic floods, we estimate an ongoing mean value of $400,000 in avoided damages per year. Economic impacts of this magnitude stress the importance of wetland conservation and warrant the consideration of ecosystem services in land use decisions. Our study indicates that here and elsewhere, green infrastructure may have the potential to increase the resilience of communities to projected changes in climate.
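
    A sketch of the depth-damage step described above: flood depth at each building is converted to a damage fraction with a depth-damage curve (the 20%-at-two-feet figure is one point on such a curve) and multiplied by the building value, and the avoided damages are the difference between the without-wetlands and with-wetlands scenarios. The curve points and building inventory below are illustrative, not the study's data.

    ```python
    import numpy as np

    # Illustrative depth-damage curve for a two-story house with basement:
    # flood depth (ft) -> fraction of house value damaged.
    curve_depth_ft = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 10.0])
    curve_fraction = np.array([0.0, 0.12, 0.20, 0.33, 0.43, 0.60])

    def damage(depth_ft, value):
        """Interpolate the depth-damage curve and apply it to the house value."""
        frac = np.interp(depth_ft, curve_depth_ft, curve_fraction)
        return frac * value

    # Hypothetical buildings: (value, depth with wetlands, depth without wetlands).
    buildings = [(250_000, 0.0, 2.5), (180_000, 0.5, 3.0), (320_000, 0.0, 1.5)]
    with_w    = sum(damage(dw, v) for v, dw, dwo in buildings)
    without_w = sum(damage(dwo, v) for v, dw, dwo in buildings)
    print(f"damages with wetlands:    ${with_w:,.0f}")
    print(f"damages without wetlands: ${without_w:,.0f}")
    print(f"flood mitigation value:   ${without_w - with_w:,.0f} "
          f"({(1 - with_w / without_w) * 100:.0f}% reduction)")
    ```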

  1. Effects of injection pressure variation on mixing in a cold supersonic combustor with kerosene fuel

    NASA Astrophysics Data System (ADS)

    Liu, Wei-Lai; Zhu, Lin; Qi, Yin-Yin; Ge, Jia-Ru; Luo, Feng; Zou, Hao-Ran; Wei, Min; Jen, Tien-Chien

    2017-10-01

    Spray jets in a cold kerosene-fueled supersonic flow have been characterized under different injection pressures to assess the effects of pressure variation on the mixing between the incident shock wave and the transverse cavity injection. Based on a real scramjet combustor, a detailed computational fluid dynamics model is developed. The injection pressures are specified as 0.5, 1.0, 2.0, 3.0 and 4.0 MPa, with the other operating parameters (such as the injection diameter, angle and velocity) held constant. A three-dimensional Coupled Level Set & Volume of Fluid approach incorporating an improved Kelvin-Helmholtz & Rayleigh-Taylor model is used to investigate the interaction between kerosene and supersonic air. The numerical simulations primarily concentrate on penetration depth, span expansion area, shock wave angle, and Sauter mean diameter distribution of the kerosene droplets with/without evaporation. Validation was carried out by comparing the calculated results against measurements reported in the literature, with good qualitative agreement. Results show that the penetration depth, span-wise angle and expansion area of the transverse cavity jet all increase with injection pressure; however, as the injection pressure is raised further, the gains in penetration depth and expansion area diminish. This study demonstrates the feasibility and effectiveness of combining the Coupled Level Set & Volume of Fluid approach with an improved Kelvin-Helmholtz & Rayleigh-Taylor model, in turn providing insights into scramjet design improvement.

  2. A Quantitative Analysis of the Relationship between Medicare Payment and Service Volume for Glaucoma Procedures from 2005 through 2009.

    PubMed

    Gong, Dan; Jun, Lin; Tsai, James C

    2015-05-01

    To calculate the association between Medicare payment and service volume for 6 commonly performed glaucoma procedures. Retrospective, longitudinal database study. A 100% dataset of all glaucoma procedures performed on Medicare Part B beneficiaries within the United States from 2005 to 2009. Fixed-effects regression model using Medicare Part B carrier data for all 50 states and the District of Columbia, controlling for time-invariant carrier-specific characteristics, national trends in glaucoma service volume, Medicare beneficiary population, number of ophthalmologists, and income per capita. Payment-volume elasticities, defined as the percent change in service volume per 1% change in Medicare payment, for laser trabeculoplasty (Current Procedural Terminology [CPT] code 65855), trabeculectomy without previous surgery (CPT code 66170), trabeculectomy with previous surgery (CPT code 66172), aqueous shunt to reservoir (CPT code 66180), laser iridotomy (CPT code 66761), and scleral reinforcement with graft (CPT code 67255). The payment-volume elasticity was nonsignificant for 4 of 6 procedures studied: laser trabeculoplasty (elasticity, -0.27; 95% confidence interval [CI], -1.31 to 0.77; P = 0.61), trabeculectomy without previous surgery (elasticity, -0.42; 95% CI, -0.85 to 0.01; P = 0.053), trabeculectomy with previous surgery (elasticity, -0.28; 95% CI, -0.83 to 0.28; P = 0.32), and aqueous shunt to reservoir (elasticity, -0.47; 95% CI, -3.32 to 2.37; P = 0.74). Two procedures yielded significant associations between Medicare payment and service volume. For laser iridotomy, the payment-volume elasticity was -1.06 (95% CI, -1.39 to -0.72; P < 0.001): for every 1% decrease in CPT code 66761 payment, laser iridotomy service volume increased by 1.06%. For scleral reinforcement with graft, the payment-volume elasticity was -2.92 (95% CI, -5.72 to -0.12; P = 0.041): for every 1% decrease in CPT code 67255 payment, scleral reinforcement with graft service volume increased by 2.92%. This study calculated the association between Medicare payment and service volume for 6 commonly performed glaucoma procedures and found varying magnitudes of payment-volume elasticities, suggesting that the volume response to changes in Medicare payments, if present, is not uniform across all Medicare procedures. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
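
    Payment-volume elasticity is the coefficient on log payment in a regression of log service volume, i.e. the percent change in volume per 1% change in payment. The sketch below fits that log-log slope by ordinary least squares on synthetic carrier-year data; the study itself used a fixed-effects panel specification with additional controls, so this is only the general idea.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic carrier-year data: volume responds to payment with a "true"
    # elasticity of -1.0 plus noise (loosely echoing the laser iridotomy result).
    payment = rng.uniform(200.0, 400.0, size=200)           # Medicare payment, $
    volume = 5_000.0 * payment ** -1.0 * rng.lognormal(0.0, 0.10, size=200)

    # Elasticity = slope of log(volume) on log(payment).
    slope, intercept = np.polyfit(np.log(payment), np.log(volume), 1)
    print(f"estimated payment-volume elasticity: {slope:.2f}")
    # A value near -1 means a 1% payment cut is associated with ~1% more volume.
    ```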

  3. 30 CFR 260.114 - How does MMS assign and monitor royalty suspension volumes for eligible leases?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... have specified the water depth category for each eligible lease in the final Notice of OCS Lease Sale... OCS Lease Sale Package is available on the MMS Web site. Our determination of water depth for each... royalty suspension volume applicable to each water depth. The following table shows the royalty suspension...

  4. GPU-accelerated depth map generation for X-ray simulations of complex CAD geometries

    NASA Astrophysics Data System (ADS)

    Grandin, Robert J.; Young, Gavin; Holland, Stephen D.; Krishnamurthy, Adarsh

    2018-04-01

    Interactive x-ray simulations of complex computer-aided design (CAD) models can provide valuable insights for better interpretation of defect signatures, such as porosity, in x-ray CT images. Generating the depth map along a particular direction for the given CAD geometry is the most compute-intensive step in x-ray simulations. We have developed a GPU-accelerated method for real-time generation of depth maps of complex CAD geometries. We preprocess complex components designed using commercial CAD systems using a custom CAD module and convert them into a fine user-defined surface tessellation. Our CAD module can be used by different simulators as well as handle complex geometries, including those that arise from complex castings and composite structures. We then make use of a parallel algorithm that runs on a graphics processing unit (GPU) to convert the finely-tessellated CAD model to a voxelized representation. The voxelized representation can enable heterogeneous modeling of the volume enclosed by the CAD model by assigning heterogeneous material properties in specific regions. The depth maps are generated from this voxelized representation with the help of a GPU-accelerated ray-casting algorithm. The GPU-accelerated ray-casting method enables interactive (> 60 frames-per-second) generation of the depth maps of complex CAD geometries. This enables arbitrary rotation and slicing of the CAD model, leading to better interpretation of the x-ray images by the user. In addition, the depth maps can be used to aid directly in CT reconstruction algorithms.
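
    A minimal CPU sketch of the depth-map step described above: given a boolean voxelization of the part, cast parallel rays along one axis and record the distance to the first occupied voxel. The paper's GPU implementation does this per frame for arbitrary view directions over real CAD tessellations; here a fixed +z direction, a toy sphere voxelization, and numpy's argmax stand in for that machinery.

    ```python
    import numpy as np

    def voxelize_sphere(n=64, radius=0.3):
        """Toy stand-in for the CAD voxelization stage: a sphere of given radius
        (in normalized coordinates) inside an n^3 boolean grid."""
        axis = np.linspace(-0.5, 0.5, n)
        x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
        return x**2 + y**2 + z**2 <= radius**2

    def depth_map_along_z(voxels, voxel_size=1.0):
        """Distance (in world units) from the z=0 face to the first occupied
        voxel along each (x, y) ray; rays that miss the part get inf."""
        hit_any = voxels.any(axis=2)
        first_hit = np.argmax(voxels, axis=2)      # index of first True along z
        depth = first_hit.astype(float) * voxel_size
        depth[~hit_any] = np.inf
        return depth

    voxels = voxelize_sphere()
    dmap = depth_map_along_z(voxels, voxel_size=1.0 / 64)
    print("closest surface distance:",
          np.nanmin(np.where(np.isinf(dmap), np.nan, dmap)))
    ```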

  5. Does Categorization Method Matter in Exploring Volume-Outcome Relation? A Multiple Categorization Methods Comparison in Coronary Artery Bypass Graft Surgery Surgical Site Infection.

    PubMed

    Yu, Tsung-Hsien; Tung, Yu-Chi; Chung, Kuo-Piao

    2015-08-01

    Volume-infection relation studies have been published for high-risk surgical procedures, although the conclusions remain controversial. Inconsistent results may be caused by inconsistent categorization methods, the definitions of service volume, and different statistical approaches. The purpose of this study was to examine whether a relation exists between provider volume and coronary artery bypass graft (CABG) surgical site infection (SSI) using different categorization methods. A population-based cross-sectional multi-level study was conducted. A total of 10,405 patients who received CABG surgery between 2006 and 2008 in Taiwan were recruited. The outcome of interest was surgical site infection for CABG surgery. The associations among several patient, surgeon, and hospital characteristics were examined. Surgeons' and hospitals' service volumes were defined as the cumulative CABG service volume in the previous year for each CABG operation and were categorized using three types of approaches: continuous, quartile, and k-means clustering. The results of multi-level mixed effects modeling showed that hospital volume had no association with SSI. Although the relation between surgeon volume and surgical site infection was negative, it was inconsistent among the different categorization methods. Categorization of service volume is an important issue in volume-infection studies. The findings of the current study suggest that different categorization methods might influence the relation between volume and SSI. The selection of an optimal cutoff point should be taken into account in future research.
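
    The three categorization strategies compared in the study can be sketched in a few lines: keep service volume continuous, cut it at quartiles, or cluster it (a one-dimensional Lloyd's k-means). The synthetic provider volumes below are illustrative; the point is only that the same data can land in different volume groups depending on the method.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    volumes = rng.gamma(shape=2.0, scale=15.0, size=300).round()   # synthetic annual CABG volumes

    # (1) Continuous: use `volumes` directly as a covariate in the outcome model.

    # (2) Quartile categories.
    quartile_edges = np.percentile(volumes, [25, 50, 75])
    quartile_group = np.digitize(volumes, quartile_edges)          # groups 0..3

    # (3) Simple 1-D k-means (Lloyd's algorithm) with k = 4.
    def kmeans_1d(x, k=4, iters=50):
        centers = np.percentile(x, np.linspace(10, 90, k))          # spread-out start
        for _ in range(iters):
            labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = x[labels == j].mean()
        return labels, centers

    kmeans_group, centers = kmeans_1d(volumes)
    top = int(np.argmax(centers))                                   # highest-volume cluster

    print("quartile cut points:", quartile_edges)
    print("k-means centers:    ", np.sort(centers).round(1))
    print(f"share of providers in the top group: quartile {np.mean(quartile_group == 3):.2f}, "
          f"k-means {np.mean(kmeans_group == top):.2f}")
    ```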

  6. Monitoring massive fracture growth at 2-km depths using surface tiltmeter arrays

    USGS Publications Warehouse

    Wood, M.D.

    1979-01-01

    Tilt due to massive hydraulic fractures induced in sedimentary rocks at depths of up to 2.2 km has been recorded by surface tiltmeters. Injection of fluid volumes up to 4 × 10⁵ liters and masses of propping agent up to 5 × 10⁵ kg is designed to produce fractures approximately 1 km long, 50-100 m high and about 1 cm wide. The surface tilt data adequately fit a dislocation model of a tensional fault in a half-space. Theoretical and observational results indicate that maximum tilt occurs at a distance off the strike of the fracture equivalent to 0.4 of the depth to the fracture. Azimuth and extent of the fracture deduced from the geometry of the tilt field agree with other kinds of geophysical measurements. Detailed correlation of the tilt signatures with pumping parameters (pressure, rate, volume, mass) has provided details on asymmetry in geometry and growth rate. Whereas amplitude variations in tilt vary inversely with the square of the depth, changes in flow rate or pressure gradient can produce a cubic change in width. These studies offer a large-scale experimental approach to the study of problems involving fracturing, mass transport, and dilatancy processes. © 1979.

  7. High frequency sonar variability in littoral environments: Irregular particles and bubbles

    NASA Astrophysics Data System (ADS)

    Richards, Simon D.; Leighton, Timothy G.; White, Paul R.

    2002-11-01

    Littoral environments may be characterized by high concentrations of suspended particles. Such suspensions contribute to attenuation through visco-inertial absorption and scattering and may therefore be partially responsible for the observed variability in high frequency sonar performance in littoral environments. Microbubbles which are prevalent in littoral waters also contribute to volume attenuation through radiation, viscous and thermal damping and cause dispersion. The attenuation due to a polydisperse suspension of particles with depth-dependent concentration has been included in a sonar model. The effects of a depth-dependent, polydisperse population of microbubbles on attenuation, sound speed and volume reverberation are also included. Marine suspensions are characterized by nonspherical particles, often plate-like clay particles. Measurements of absorption in dilute suspensions of nonspherical particles have shown disagreement with predictions of spherical particle models. These measurements have been reanalyzed using three techniques for particle sizing: laser diffraction, gravitational sedimentation, and centrifugal sedimentation, highlighting the difficulty of characterizing polydisperse suspensions of irregular particles. The measurements have been compared with predictions of a model for suspensions of oblate spheroids. Excellent agreement is obtained between this model and the measurements for kaolin particles, without requiring any a priori knowledge of the measurements.

  8. User Services Offered By Medical School Libraries in 1968: Results of a National Survey Employing New Methodology *

    PubMed Central

    Orr, Richard H.; Bloomquist, Harold; Cruzat, Gwendolyn S.; Schless, Arthur P.

    1970-01-01

    The breadth and depth of services that ninety-two medical school libraries offer to individual users were ascertained by interviewing the heads of these libraries, employing a standardized inventory procedure developed earlier (Bulletin 56:380-403, Oct. 1968). Selected aspects of the descriptive data obtained on services to faculty and to medical students are presented and commented upon. Comparisons with the findings of earlier surveys suggest that increases in the staffs and budgets of medical school libraries over the past two decades have gone largely to supporting a rapidly increasing volume of service, rather than to any striking increase in the breadth and depth of services. To facilitate summarization and comparisons among libraries, the descriptive data were weighted and converted to quantitative measures; the weighting scheme was established by a group of five academic medical librarians to reflect the relative values the group assigned to different services. One of these quantitative measures, the percentage score for overall services relative to the optimal library, summarizes a library's services in a single figure. On this measure, medical school libraries ranged from 38 percent to 87 percent; the median overall score was 63 percent. Results of some exploratory analyses are described; these analyses attempted to find explanations for the observed differences among libraries and among geographical regions on the quantitative measures. Present and potential uses of the survey data for managerial and research purposes are discussed. One of the most important of these uses is in establishing and implementing standards—activities which should be carried out by the library profession itself—and recommendations are made for a program of such activities that is appropriate for the Medical Library Association. PMID:5496234

  9. Explosive change in crater properties during high power nanosecond laser ablation of silicon

    NASA Astrophysics Data System (ADS)

    Yoo, J. H.; Jeong, S. H.; Greif, R.; Russo, R. E.

    2000-08-01

    Mass removed from single-crystal silicon samples by high-irradiance (1×10⁹ to 1×10¹¹ W/cm²) single-pulse laser ablation was studied by measuring the resulting crater morphology with a white-light interferometric microscope. The craters show a strong nonlinear change in both volume and depth depending on whether the laser irradiance is below or above ≈2.2×10¹⁰ W/cm². Time-resolved shadowgraph images of the ablated silicon plume were obtained over this irradiance range. The images show that the increase in crater volume and depth at the threshold of 2.2×10¹⁰ W/cm² is accompanied by large droplets leaving the silicon surface with a time delay of ~300 ns. A numerical model was used to estimate the thickness of the layer heated to approximately the critical temperature. The model includes transformation of the liquid metal into a liquid dielectric near the critical state (i.e., induced transparency). In this case, the estimated thickness of the superheated layer at a delay time of 200-300 ns shows close agreement with the measured crater depths. Induced transparency is demonstrated to play an important role in the formation of a deep superheated liquid layer, with subsequent explosive boiling responsible for large-particulate ejection.

  10. Report on Pilot Test Of Impact and In-Depth Measures. Child and Family Mental Health Project.

    ERIC Educational Resources Information Center

    Condry, Sandra; Hayes, William A.

    This document reports the pilot test of the two components of the Child and Family Mental Health (CFMH) Evaluation Project -- the impact evaluation component and the in-depth evaluation component. (The impact evaluation is designed to determine the effects of the two primary prevention models of service and activities on the CFMH Head Start…

  11. Sniffin' Sticks and olfactory system imaging in patients with Kallmann syndrome.

    PubMed

    Ottaviano, Giancarlo; Cantone, Elena; D'Errico, Arianna; Salvalaggio, Alessandro; Citton, Valentina; Scarpa, Bruno; Favaro, Angela; Sinisi, Antonio Agostino; Liuzzi, Raffaele; Bonanni, Guglielmo; Di Salle, Francesco; Elefante, Andrea; Manara, Renzo; Staffieri, Alberto; Martini, Alessandro; Brunetti, Arturo

    2015-09-01

    The relationship between olfactory function, rhinencephalon and forebrain changes in Kallmann syndrome (KS) have not been adequately investigated. We evaluated a large cohort of male KS patients using Sniffin' Sticks and MRI in order to study olfactory bulb (OB) volume, olfactory sulcus (OS) depth, cortical thickness close to the OS, and olfactory phenotype. Olfaction was assessed administering Sniffin' Sticks®, in 38 KS patients and 17 controls (by means of Screening 12 test®). All subjects underwent magnetic resonance imaging (MRI) to study OB volume, sulcus depth, and cortical thickness. Compared to controls, KS patients showed smaller OB volume (p<0.0001), reduced sulcus depth (p<0.0001), and thicker cortex in the region close to the OS (p<0.0001). Anosmic KS patients had smaller OB than controls and hyposmic KS patients; there was no difference between hyposmic KS patients and controls. OB volume correlated with Sniffin' Sticks score (r = 0.64; p < 0.001), OS depth (p<0.0001) and, inversely, with cortical thickness changes (p<0.0001). Sniffin' Sticks showed an inverse correlation with cortical thickness (r = -0.5; p<0.0001) and a trend toward a statistically significant correlation with OS depth. The present study provides further evidence of the strict relationship between olfaction and OB volume. The strong correlation between OB volume and the overlying cortical changes highlights the key role of rhinencephalon in forebrain embryogenesis. © 2015 ARS-AAOA, LLC.

  12. Development and application of a spatial hydrology model of Okefenokee Swamp, Georgia

    USGS Publications Warehouse

    Loftin, C.S.; Kitchens, W.M.; Ansay, N.

    2001-01-01

    The model described herein was used to assess effects of the Suwannee River sill (a low earthen dam constructed to impound the Suwannee River within the Okefenokee National Wildlife Refuge to eliminate wildfires) on the hydrologic environment of Okefenokee Swamp, Georgia. Developed with Arc/Info Macro Language routines in the GRID environment, the model distributes water in the swamp landscape using precipitation, inflow, evapotranspiration, outflow, and standing water. Water movement direction and rate are determined by the neighborhood topographic gradient, determined using survey grade Global Positioning Systems technology. Model data include flow rates from USGS monitored gauges, precipitation volumes and water levels measured within the swamp, and estimated evapotranspiration volumes spatially modified by vegetation type. Model output in semi-monthly time steps includes water depth, water surface elevation above mean sea level, and movement direction and volume. Model simulations indicate the sill impoundment affects 18 percent of the swamp during high water conditions when wildfires are scarce and has minimal spatial effect (increasing hydroperiods in less than 5 percent of the swamp) during low water and drought conditions when fire occurrence is high but precipitation and inflow volumes are limited.
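
    A heavily simplified sketch of the water-budget bookkeeping such a model performs at each semi-monthly time step: add precipitation and inflow, subtract evapotranspiration, then move water from each cell toward the neighbouring cell with the lowest water-surface elevation. The grid, rates, and single-neighbour routing rule are all illustrative; the actual model uses surveyed topography, gauged inflows, and vegetation-dependent evapotranspiration in the Arc/Info GRID environment.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 20
    ground = np.linspace(37.0, 36.0, n)[None, :] + rng.normal(0, 0.05, (n, n))  # m asl, gently tilted
    depth = np.full((n, n), 0.3)                     # initial standing water, m

    def step(depth, precip=0.06, et=0.05, inflow=0.02, k=0.5):
        """One semi-monthly water-balance step (all terms in metres of water)."""
        depth = np.clip(depth + precip - et, 0.0, None)
        depth[:, 0] += inflow                        # inflow enters along one edge
        stage = ground + depth                       # water-surface elevation
        new = depth.copy()
        for i in range(n):
            for j in range(n):
                if depth[i, j] <= 0.0:
                    continue
                # neighbouring cell with the lowest water-surface elevation
                nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= i + di < n and 0 <= j + dj < n]
                lo = min(nbrs, key=lambda c: stage[c])
                drop = stage[i, j] - stage[lo]
                if drop > 0.0:
                    move = min(k * drop, depth[i, j])   # move a fraction of the head difference
                    new[i, j] -= move
                    new[lo] += move
        return new

    for _ in range(12):                              # roughly half a year of semi-monthly steps
        depth = step(depth)
    print(f"mean depth {depth.mean():.2f} m, max depth {depth.max():.2f} m")
    ```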

  13. Steady state volcanism - Evidence from eruption histories of polygenetic volcanoes

    NASA Technical Reports Server (NTRS)

    Wadge, G.

    1982-01-01

    Cumulative volcano volume curves are presented as evidence for steady-state behavior at certain volcanoes and are used to develop a model of steady-state volcanism. A minimum criterion of five eruptions over a year was chosen to characterize a steady-state volcano. The resulting model features a constant head of magmatic pressure from a reservoir supplied from depth, a sawtooth curve produced by magma arrivals at or discharge from the subvolcanic reservoir, large-volume eruptions with long repose periods, and conditions of nonsupply of magma. The behavior of Mts. Etna, Nyamuragira, and Kilauea is described; each shows a continuous level of magma output resulting in cumulative volume increases. Further discussion is given to steady-state andesitic and dacitic volcanism, long-term patterns of the steady state, and magma storage; the small number of steady-state volcanoes in the world is taken as evidence that further data are required for a comprehensive model.

  14. Exploration drilling and reservoir model of the Platanares geothermal system, Honduras, Central America

    USGS Publications Warehouse

    Goff, F.; Goff, S.J.; Kelkar, S.; Shevenell, L.; Truesdell, A.H.; Musgrave, J.; Rufenacht, H.; Flores, W.

    1991-01-01

    Results of drilling, logging, and testing of three exploration core holes, combined with results of geologic and hydrogeochemical investigations, have been used to present a reservoir model of the Platanares geothermal system, Honduras. Geothermal fluids circulate at depths ≥1.5 km in a region of active tectonism devoid of Quaternary volcanism. Large, artesian water entries of 160 to 165°C geothermal fluid in two core holes at 625 to 644 m and 460 to 635 m depth have maximum flow rates of roughly 355 and 560 l/min, respectively, which are equivalent to power outputs of about 3.1 and 5.1 MW(thermal). Dilute, alkali-chloride reservoir fluids (TDS ~1200 mg/kg) are produced from fractured Miocene andesite and Cretaceous to Eocene redbeds that are hydrothermally altered. Fracture permeability in producing horizons is locally greater than 1500 and bulk porosity is ~6%. A simple, fracture-dominated, volume-impedance model assuming turbulent flow indicates that the calculated reservoir storage capacity of each flowing hole is approximately 9.7 × 10⁶ l/(kg cm⁻²). Tritium data indicate a mean residence time of 450 yr for water in the reservoir. Multiplying the natural fluid discharge rate by the mean residence time gives an estimated water volume of the Platanares system of ~0.78 km³. Downward continuation of a 139°C/km "conductive" gradient at a depth of 400 m in a third core hole implies that the depth to a 225°C source reservoir (predicted from chemical geothermometers) is at least 1.5 km. Uranium-thorium disequilibrium ages on calcite veins at the surface and in the core holes indicate that the present Platanares hydrothermal system has been active for the last 0.25 m.y. © 1991.
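
    The reservoir-volume estimate quoted above is a one-line mass balance: mean residence time multiplied by the natural discharge rate gives the stored water volume. The sketch below simply inverts it, working back from the stated 450 yr and ~0.78 km³ to the implied natural discharge (a number we derive here, not one given in the abstract).

    ```python
    SECONDS_PER_YEAR = 3.156e7

    residence_time_yr = 450.0          # from tritium data (abstract)
    reservoir_volume_km3 = 0.78        # estimated system volume (abstract)

    # volume = discharge * residence time  =>  discharge = volume / residence time
    volume_m3 = reservoir_volume_km3 * 1e9
    discharge_m3_per_s = volume_m3 / (residence_time_yr * SECONDS_PER_YEAR)
    print(f"implied natural discharge ~ {discharge_m3_per_s*1000:.0f} l/s "
          f"(~{discharge_m3_per_s*60000:.0f} l/min)")
    ```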

  15. Lightning Damage of Carbon Fiber/Epoxy Laminates with Interlayers Modified by Nickel-Coated Multi-Walled Carbon Nanotubes

    NASA Astrophysics Data System (ADS)

    Dong, Qi; Wan, Guoshun; Xu, Yongzheng; Guo, Yunli; Du, Tianxiang; Yi, Xiaosu; Jia, Yuxi

    2017-12-01

    The numerical model of carbon fiber reinforced polymer (CFRP) laminates with electrically modified interlayers subjected to lightning strike is constructed through finite element simulation, in which both intra-laminar and inter-laminar lightning damages are considered by means of coupled electrical-thermal-pyrolytic analysis method. Then the lightning damage extents including the damage volume and maximum damage depth are investigated. The results reveal that the simulated lightning damages could be qualitatively compared to the experimental counterparts of CFRP laminates with interlayers modified by nickel-coated multi-walled carbon nanotubes (Ni-MWCNTs). With higher electrical conductivity of modified interlayer and more amount of modified interlayers, both damage volume and maximum damage depth are reduced. This work provides an effective guidance to the anti-lightning optimization of CFRP laminates.

  16. Forecasting magma-chamber rupture at Santorini volcano, Greece.

    PubMed

    Browning, John; Drymoni, Kyriaki; Gudmundsson, Agust

    2015-10-28

    How much magma needs to be added to a shallow magma chamber to cause rupture, dyke injection, and a potential eruption? Models that yield reliable answers to this question are needed in order to facilitate eruption forecasting. Development of a long-lived shallow magma chamber requires periodic influx of magmas from a parental body at depth. This redistribution process does not necessarily cause an eruption but produces a net volume change that can be measured geodetically by inversion techniques. Using continuum-mechanics and fracture-mechanics principles, we calculate the amount of magma contained at shallow depth beneath Santorini volcano, Greece. We demonstrate through structural analysis of dykes exposed within the Santorini caldera, previously published data on the volume of recent eruptions, and geodetic measurements of the 2011-2012 unrest period, that the measured 0.02% increase in volume of Santorini's shallow magma chamber was associated with magmatic excess pressure increase of around 1.1 MPa. This excess pressure was high enough to bring the chamber roof close to rupture and dyke injection. For volcanoes with known typical extrusion and intrusion (dyke) volumes, the new methodology presented here makes it possible to forecast the conditions for magma-chamber failure and dyke injection at any geodetically well-monitored volcano.
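
    The link between the geodetically measured fractional volume change and the magmatic excess pressure can be sketched with the standard compressibility relation ΔV/V ≈ (β_magma + β_chamber)·p_e, so p_e ≈ (ΔV/V)/(β_magma + β_chamber). The compressibility values below are assumed, generic figures chosen only to show that a 0.02% volume change lands near the ~1.1 MPa quoted above; they are not values derived in the paper.

    ```python
    dV_over_V = 2.0e-4          # 0.02% volume increase (abstract)
    beta_magma = 1.25e-10       # magma compressibility, 1/Pa (assumed)
    beta_chamber = 0.6e-10      # effective chamber/host-rock compressibility, 1/Pa (assumed)

    # Excess pressure needed to accommodate the added volume elastically:
    p_excess = dV_over_V / (beta_magma + beta_chamber)
    print(f"excess pressure ~ {p_excess/1e6:.1f} MPa")
    # Rupture and dyke injection are expected once the excess pressure approaches
    # the in-situ tensile strength of the chamber roof, typically a few MPa.
    ```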

  17. Maximum Neutral Buoyancy Depth of Juvenile Chinook Salmon: Implications for Survival during Hydroturbine Passage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pflugrath, Brett D.; Brown, Richard S.; Carlson, Thomas J.

    This study investigated the maximum depth at which juvenile Chinook salmon Oncorhynchus tshawytscha can acclimate by attaining neutral buoyancy. Depth of neutral buoyancy is dependent upon the volume of gas within the swim bladder, which greatly influences the occurrence of injuries to fish passing through hydroturbines. We used two methods to obtain maximum swim bladder volumes that were transformed into depth estimates: the increased excess mass test (IEMT) and the swim bladder rupture test (SBRT). In the IEMT, weights were surgically added to the fish's exterior, requiring the fish to increase swim bladder volume in order to remain neutrally buoyant. The SBRT entailed removing the swim bladder and artificially increasing its volume through decompression. From these tests, we estimate the maximum acclimation depth for juvenile Chinook salmon to be a median of 6.7 m (range = 4.6-11.6 m). These findings have important implications for survival estimates, studies using tags, hydropower operations, and survival of juvenile salmon that pass through large Kaplan turbines typical of those found within the Columbia and Snake River hydropower system.
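
    The conversion from a maximum swim bladder volume to a maximum acclimation depth follows Boyle's law; a minimal sketch assuming ideal-gas behaviour and roughly 10.3 m of fresh water per atmosphere (the volume ratio shown is illustrative, not a measured value from the study):

```python
# Boyle's law sketch: if a fish can inflate its swim bladder to at most V_max at the
# surface, the deepest depth at which that gas still provides the neutral-buoyancy
# volume V_neutral satisfies P_surface * V_max = P_depth * V_neutral.
# Assumes ideal-gas behaviour and ~10.3 m of fresh water per atmosphere of pressure.

M_FRESHWATER_PER_ATM = 10.3

def max_acclimation_depth_m(volume_ratio_max_to_neutral: float) -> float:
    return M_FRESHWATER_PER_ATM * (volume_ratio_max_to_neutral - 1.0)

# Illustrative ratio only: a maximum-to-neutral volume ratio of ~1.65 corresponds to
# roughly the 6.7 m median acclimation depth reported in the abstract.
print(max_acclimation_depth_m(1.65))  # ~6.7 m
```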

  18. Body density and diving gas volume of the northern bottlenose whale (Hyperoodon ampullatus).

    PubMed

    Miller, Patrick; Narazaki, Tomoko; Isojunno, Saana; Aoki, Kagari; Smout, Sophie; Sato, Katsufumi

    2016-08-15

    Diving lung volume and tissue density, reflecting lipid store volume, are important physiological parameters that have only been estimated for a few breath-hold diving species. We fitted 12 northern bottlenose whales with data loggers that recorded depth, 3-axis acceleration and speed either with a fly-wheel or from change of depth corrected by pitch angle. We fitted measured values of the change in speed during 5 s descent and ascent glides to a hydrodynamic model of drag and buoyancy forces using a Bayesian estimation framework. The resulting estimate of diving gas volume was 27.4±4.2 (95% credible interval, CI) ml kg(-1), closely matching the measured lung capacity of the species. Dive-by-dive variation in gas volume did not correlate with dive depth or duration. Estimated body densities of individuals ranged from 1028.4 to 1033.9 kg m(-3) at the sea surface, indicating overall negative tissue buoyancy of this species in seawater. Body density estimates were highly precise with ±95% CI ranging from 0.1 to 0.4 kg m(-3), which would equate to a precision of <0.5% of lipid content based upon extrapolation from the elephant seal. Six whales tagged near Jan Mayen (Norway, 71°N) had lower body density and were closer to neutral buoyancy than six whales tagged in the Gully (Nova Scotia, Canada, 44°N), a difference that was consistent with the amount of gliding observed during ascent versus descent phases in these animals. Implementation of this approach using longer-duration tags could be used to track longitudinal changes in body density and lipid store body condition of free-ranging cetaceans. © 2016. Published by The Company of Biologists Ltd.
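
    A heavily simplified sketch of the glide force balance underlying this approach (net buoyancy with depth-compressed gas, plus drag) is given below; the parameter values are hypothetical, and the published model additionally treats added mass, lift and full Bayesian parameter estimation:

```python
import math

# Simplified along-track force balance for a gliding whale:
#   m * dv/dt = (net buoyancy component along track) - drag
# where net buoyancy comes from tissue density and respiratory gas compressed per
# Boyle's law. This is a sketch only, with hypothetical parameter values; the
# published model also treats added mass and lift and fits parameters in a
# Bayesian framework.

RHO_SW = 1027.0   # seawater density, kg m^-3 (assumed)
G = 9.81

def glide_acceleration(speed, depth, pitch_deg, mass_kg,
                       tissue_density, gas_ml_per_kg, drag_area_cd):
    """Along-track acceleration (m s^-2); positive means speeding up along the track."""
    v_tissue = mass_kg / tissue_density                        # m^3
    pressure_atm = 1.0 + depth / 10.0                          # ~10 m of seawater per atm
    v_gas = gas_ml_per_kg * mass_kg * 1e-6 / pressure_atm      # Boyle's law, m^3
    net_buoyancy = G * (RHO_SW * (v_tissue + v_gas) - mass_kg)  # N, positive upward
    drag = 0.5 * RHO_SW * drag_area_cd * speed ** 2             # N, opposes motion
    pitch = math.radians(pitch_deg)                             # negative when descending
    return (net_buoyancy * math.sin(pitch) - drag) / mass_kg

# Hypothetical descent glide: 2 m/s at 500 m depth, pitched 60 degrees downward.
print(glide_acceleration(2.0, 500.0, -60.0, 5800.0, 1031.0, 27.4, 0.5))
```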

  19. Body density and diving gas volume of the northern bottlenose whale (Hyperoodon ampullatus)

    PubMed Central

    Miller, Patrick; Narazaki, Tomoko; Isojunno, Saana; Aoki, Kagari; Smout, Sophie; Sato, Katsufumi

    2016-01-01

    ABSTRACT Diving lung volume and tissue density, reflecting lipid store volume, are important physiological parameters that have only been estimated for a few breath-hold diving species. We fitted 12 northern bottlenose whales with data loggers that recorded depth, 3-axis acceleration and speed either with a fly-wheel or from change of depth corrected by pitch angle. We fitted measured values of the change in speed during 5 s descent and ascent glides to a hydrodynamic model of drag and buoyancy forces using a Bayesian estimation framework. The resulting estimate of diving gas volume was 27.4±4.2 (95% credible interval, CI) ml kg−1, closely matching the measured lung capacity of the species. Dive-by-dive variation in gas volume did not correlate with dive depth or duration. Estimated body densities of individuals ranged from 1028.4 to 1033.9 kg m−3 at the sea surface, indicating overall negative tissue buoyancy of this species in seawater. Body density estimates were highly precise with ±95% CI ranging from 0.1 to 0.4 kg m−3, which would equate to a precision of <0.5% of lipid content based upon extrapolation from the elephant seal. Six whales tagged near Jan Mayen (Norway, 71°N) had lower body density and were closer to neutral buoyancy than six whales tagged in the Gully (Nova Scotia, Canada, 44°N), a difference that was consistent with the amount of gliding observed during ascent versus descent phases in these animals. Implementation of this approach using longer-duration tags could be used to track longitudinal changes in body density and lipid store body condition of free-ranging cetaceans. PMID:27296044

  20. Voices of Strong Democracy: Concepts and Models for Service-Learning in Communication Studies. AAHE's Series on Service-Learning in the Disciplines.

    ERIC Educational Resources Information Center

    Droge, David, Ed.; Murphy, Bren Ortega, Ed.

    This volume is part of a series of 18 monographs on service learning and the academic disciplines. These essays demonstrate some "best practices" for service-learning, providing rigorous learning experiences for students and high-quality service to the community. A Preface by James L. Applegate and Sherwyn P. Morreale, "Service-Learning in…

  1. Pressure as a limit to bloater (Coregonus hoyi) vertical migration

    USGS Publications Warehouse

    TeWinkel, Leslie M.; Fleischer, Guy W.

    1998-01-01

    Observations of bloater vertical migration showed a limit to the vertical depth changes that bloater experience. In this paper, we conducted an analysis of maximum differences in pressure encountered by bloater during vertical migration. Throughout the bottom depths studied, bloater experienced maximum reductions in swim bladder volume equal to approximately 50-60% of the volume in midwater. The analysis indicated that the limit in vertical depth change may be related to a maximum level of positive or negative buoyancy for which bloater can compensate using alternative mechanisms such as hydrodynamic lift. Bloater may be limited in the extent of migration by either their depth of neutral buoyancy or the distance above the depth of neutral buoyancy at which they can still maintain their position in the water column. Although a migration limit for the bloater population was evident, individual distances of migration varied at each site. These variations in migration distances may indicate differences in depths of neutral buoyancy within the population. However, in spite of these variations, the strong correlation between shallowest depths of migration and swim bladder volume reduction across depths provides evidence that hydrostatic pressure limits the extent of daily vertical movement in bloater.

  2. The Practice of Change: Concepts and Models for Service-Learning in Women's Studies. AAHE's Series on Service-Learning in the Disciplines.

    ERIC Educational Resources Information Center

    Balliet, Barbara J., Ed.; Heffernan, Kerrissa, Ed.

    This volume, 17th in a series of monographs on service-learning and the academic disciplines, discusses the role of service learning as part of women's studies. Essays discuss the ways that the ideology of service has allowed the devaluation of service work, and they consider the importance of service learning for the student as well as the…

  3. Residential Programming for Mentally Retarded Persons. Volume II, A Developmental Model for Residential Services.

    ERIC Educational Resources Information Center

    National Association for Retarded Children, Arlington, TX. South Central Regional Office.

    The second of a series of four booklets on residential programing for the mentally retarded (MR) presents a developmental model for residential services based on the premise that MR persons are capable of growth, development, and learning. Architectural factors, staff resistance and financial considerations are described as impediments to…

  4. Hand volume estimates based on a geometric algorithm in comparison to water displacement.

    PubMed

    Mayrovitz, H N; Sims, N; Hill, C J; Hernandez, T; Greenshner, A; Diep, H

    2006-06-01

    Assessing changes in upper extremity limb volume during lymphedema therapy is important for determining treatment efficacy and documenting outcomes. Although arm volumes may be determined by tape measure, the suitability of circumference measurements to estimate hand volumes is questionable because of the deviation in circularity of hand shape. Our aim was to develop an alternative measurement procedure and algorithm for routine use to estimate hand volumes. A caliper was used to measure hand width and depth in 33 subjects (66 hands) and volumes (VE) were calculated using an elliptical frustum model. Using regression analysis and limits of agreement (LOA), VE was compared to volumes determined by water displacement (VW), to volumes calculated from tape-measure determined circumferences (VC), and to a trapezoidal model (VT). VW and VE (mean +/- SD) were similar (363 +/- 98 vs. 362 +/-100 ml) and highly correlated; VE = 1.01VW -3.1 ml, r=0.986, p<0.001, with LOA of +/- 33.5 ml and +/- 9.9 %. In contrast, VC (480 +/- 138 ml) and VT (432 +/- 122 ml) significantly overestimated volume (p<0.0001). These results indicate that the elliptical algorithm can be a useful alternative to water displacement when hand volumes are needed and the water displacement method is contra-indicated, impractical to implement, too time consuming or not available.
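
    A sketch of an elliptical-frustum volume calculation of the kind described, summing frustum segments between caliper width/depth levels; the measurement values and segment spacing are hypothetical, and the study's exact protocol may differ:

```python
import math

# Hand volume from an elliptical frustum model: the hand is divided into segments of
# height h; each cross-section is treated as an ellipse with axes equal to the measured
# width w and depth d (area = pi*w*d/4), and each segment is a frustum between two
# consecutive ellipses: V = h/3 * (A1 + A2 + sqrt(A1*A2)).
# The caliper values below are hypothetical and for illustration only.

def ellipse_area(width_cm, depth_cm):
    return math.pi * width_cm * depth_cm / 4.0

def frustum_volume(a1_cm2, a2_cm2, height_cm):
    return height_cm / 3.0 * (a1_cm2 + a2_cm2 + math.sqrt(a1_cm2 * a2_cm2))

def hand_volume_ml(levels, segment_height_cm):
    """levels: list of (width_cm, depth_cm) caliper pairs from wrist to fingertips."""
    areas = [ellipse_area(w, d) for w, d in levels]
    return sum(frustum_volume(a1, a2, segment_height_cm)
               for a1, a2 in zip(areas, areas[1:]))  # cm^3 == ml

print(hand_volume_ml([(8.0, 4.0), (9.0, 3.5), (8.0, 2.5), (6.0, 2.0)], 4.0))
```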

  5. Copper and zinc removal from roof runoff: from research to full-scale adsorber systems.

    PubMed

    Steiner, M; Boller, M

    2006-01-01

    Large, uncoated copper and zinc roofs cause environmental problems if their runoff is infiltrated into the underground or discharged into receiving waters. Since source control is not always feasible, barrier systems for efficient copper and zinc removal are recommended in Switzerland. During the last few years, research was carried out to test the performance of GIH-calcite adsorber filters as a barrier system. Adsorption and mass transport processes were assessed and described in a mathematical model. However, this model is not suitable for practical design, because it does not give explicit access to design parameters such as adsorber diameter and adsorber bed depth. Therefore, an easy-to-use design guideline for GIH-calcite adsorber systems, aimed at practitioners such as engineers, was developed, based mainly on the mathematical model. The core of this guideline is the design of the depth of the GIH-calcite adsorber layer. The depth is calculated by adding the GIH depth for sorption equilibrium to the depth of the mass transfer zone (MTZ), as sketched below. Additionally, the arrangement of other adsorber system components, such as particle separation and retention volume, was considered in the guideline. Investigations of a full-scale adsorber confirm the successful application of this newly developed design guideline for GIH-calcite adsorber systems in practice.
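
    A minimal sketch of that additive design rule, with placeholder depths rather than values from the guideline:

```python
# Design sketch for a GIH-calcite adsorber layer: total bed depth is the depth needed
# to hold the equilibrium sorption capacity plus the depth of the mass transfer zone
# (MTZ). All parameter values here are hypothetical placeholders.

def adsorber_bed_depth_m(equilibrium_depth_m: float, mtz_depth_m: float,
                         safety_factor: float = 1.0) -> float:
    return safety_factor * (equilibrium_depth_m + mtz_depth_m)

print(adsorber_bed_depth_m(equilibrium_depth_m=0.30, mtz_depth_m=0.15))
```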

  6. EarthServer - an FP7 project to enable the web delivery and analysis of 3D/4D models

    NASA Astrophysics Data System (ADS)

    Laxton, John; Sen, Marcus; Passmore, James

    2013-04-01

    EarthServer aims at open access and ad-hoc analytics on big Earth Science data, based on the OGC geoservice standards Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). The WCS model defines "coverages" as a unifying paradigm for multi-dimensional raster data, point clouds, meshes, etc., thereby addressing a wide range of Earth Science data including 3D/4D models. WCPS allows declarative SQL-style queries on coverages. The project is developing a pilot implementing these standards, and will also investigate the use of GeoSciML to describe coverages. Integration of WCPS with XQuery will in turn allow coverages to be queried in combination with their metadata and GeoSciML description. The unified service will support navigation, extraction, aggregation, and ad-hoc analysis on coverage data from SQL. Clients will range from mobile devices to high-end immersive virtual reality, and will enable 3D model visualisation using web browser technology coupled with developing web standards. EarthServer is establishing open-source client and server technology intended to be scalable to Petabyte/Exabyte volumes, based on distributed processing, supercomputing, and cloud virtualization. Implementation is based on the existing rasdaman server technology. Services using rasdaman technology are being installed to serve the atmospheric, oceanographic, geological, cryospheric, planetary and general earth observation communities. The geology service (http://earthserver.bgs.ac.uk/) is being provided by BGS and at present includes satellite imagery, superficial thickness data, onshore DTMs and 3D models for the Glasgow area. It is intended to extend the data sets available to include 3D voxel models. Use of the WCPS standard allows queries to be constructed against single or multiple coverages. For example, data for a particular area, or data within a particular range of pixel values, can be selected from a single coverage. Queries on multiple surfaces can be constructed to calculate, for example, the thickness between two surfaces in a 3D model or the depth from the ground surface to the top of a particular geologic unit. In the first version of the service a simple interface showing some example queries has been implemented in order to show the potential of the technologies. The project aims to develop the services in light of user feedback, both in terms of the data available, the functionality and the interface. User feedback on the services guides the software and standards development aspects of the project, leading to enhanced versions of the software which will be implemented in upgraded versions of the services during the lifetime of the project.

  7. Regional Disparities in Online Map User Access Volume and Determining Factors

    NASA Astrophysics Data System (ADS)

    Li, R.; Yang, N.; Li, R.; Huang, W.; Wu, H.

    2017-09-01

    The regional disparity of online map user access volume (referred to simply as 'user access volume' in this paper) is a topic of growing interest as online maps gain popularity among public users, and understanding it helps target the construction of geographic information services for different areas. We first statistically analysed online map user access logs and quantified these regional access disparities at different scales. The results show that the volume of user access decreases from east to west across China as a whole, with East China producing the most access volume; the cities with the largest access volumes are also crucial economic and transport centres. Principal Component Regression (PCR) is then applied to explore the regional disparities of user access volume. A determining model for Online Map access volume is proposed, which indicates that area scale is the primary determining factor for regional disparities, followed by public transport development level and public service development level. Other factors, such as the user quality index and financial index, have very limited influence on user access volume. Based on this study of regional disparities in user access volume, map providers can reasonably dispatch and allocate data and service resources in each area and improve the operational efficiency of the Online Map server cluster.
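
    A Principal Component Regression of the kind applied here might be sketched as below; the indicator names and data are invented placeholders, not the study's access logs:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Principal Component Regression sketch: standardise regional indicators, project them
# onto a few principal components, and regress user access volume on those components.
# Feature columns and the random data are placeholders, not the study's actual indicators.

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))      # e.g. area scale, transport, public service, user, finance
y = 2.0 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.3, size=60)

pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
pcr.fit(X, y)
print(pcr.score(X, y))            # R^2 of the fitted PCR model
```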

  8. Simultaneous retrieval of sea ice thickness and snow depth using concurrent active altimetry and passive L-band remote sensing data

    NASA Astrophysics Data System (ADS)

    Zhou, L.; Xu, S.; Liu, J.

    2017-12-01

    The retrieval of sea ice thickness mainly relies on satellite altimetry, and the freeboard measurements are converted to sea ice thickness (hi) under certain assumptions about snow loading. The uncertainty in snow depth (hs) is a major source of uncertainty in the retrieved sea ice thickness and total volume for both radar and laser altimetry. In this study, novel algorithms for the simultaneous retrieval of hi and hs are proposed for the data synergy of L-band (1.4 GHz) passive remote sensing and both types of active altimetry: (1) L-band (1.4 GHz) brightness temperature (TB) from the Soil Moisture Ocean Salinity (SMOS) satellite and sea ice freeboard (FBice) from radar altimetry, and (2) L-band TB data and snow freeboard (FBsnow) from laser altimetry. Two physical models serve as the forward models for the retrieval: an L-band radiation model and the hydrostatic equilibrium model. Verification with SMOS and Operation IceBridge (OIB) data is carried out, showing overall good retrieval accuracy for both sea ice parameters. Specifically, we show that the covariability between hs and FBsnow is crucial for the synergy between TB and FBsnow. Comparison with existing algorithms shows lower uncertainty in both sea ice parameters, and that the uncertainty in the retrieved sea ice thickness caused by that of snow depth is spatially uncorrelated, with the potential reduction of the volume uncertainty through spatial sampling. The proposed algorithms can be applied to the retrieval of sea ice parameters at basin scale, using concurrent active and passive remote sensing data from satellites.
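
    One of the two forward models, hydrostatic equilibrium, gives a closed-form link between radar freeboard, snow depth and ice thickness; a minimal sketch with typical assumed densities (not the densities used in the study):

```python
# Hydrostatic equilibrium sketch: for floating sea ice,
#   rho_i*h_i + rho_s*h_s = rho_w*(h_i - FB_ice)
# so ice thickness follows from radar (ice) freeboard and snow depth as
#   h_i = (rho_w*FB_ice + rho_s*h_s) / (rho_w - rho_i).
# Density values are typical assumptions, not those used in the study.

RHO_W, RHO_I, RHO_S = 1024.0, 917.0, 320.0   # kg m^-3

def ice_thickness_from_radar_freeboard(fb_ice_m: float, snow_depth_m: float) -> float:
    return (RHO_W * fb_ice_m + RHO_S * snow_depth_m) / (RHO_W - RHO_I)

print(ice_thickness_from_radar_freeboard(fb_ice_m=0.15, snow_depth_m=0.20))  # ~2.0 m
```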

  9. Soil moisture, dielectric permittivity and emissivity of soil: effective depth of emission measured by the L-band radiometer ELBARA

    NASA Astrophysics Data System (ADS)

    Usowicz, Boguslaw; Lukowski, Mateusz; Marczewski, Wojciech; Usowicz, Jerzy; Lipiec, Jerzy; Rojek, Edyta; Slominska, Ewa; Slominski, Jan

    2014-05-01

    Due to the large variation of soil moisture in space and in time, obtaining the soil water balance with the aid of data acquired from the surface is still a challenge. Microwave remote sensing is widely used to determine the water content in soil. It is based on the fact that the dielectric constant of the soil is strongly dependent on its water content. This method provides data at both local and global scales. An important issue that is still not solved is the soil depth from which the radiometer "sees" the emitted radiation, and how this "depth of view" depends on the water content and physical properties of the soil. The microwave emission comes from the entire soil profile, but much of this energy is absorbed by the upper layers of soil. As a result, the contribution of each layer to the radiation visible to the radiometer decreases with depth. The thickness of the surface layer which significantly contributes to the energy measured by the radiometer is defined as the "penetration depth". In order to improve the physical basis of the methodology of soil moisture measurements using microwave remote sensing and to determine the effective emission depth seen by the radiometer, a new algorithm was developed. This algorithm determines the reflectance coefficient from the Fresnel equations and, what is new, the complex dielectric constant of the soil, calculated from Usowicz's statistical-physical model (S-PM) of soil dielectric permittivity and conductivity. The model is expressed in terms of electrical resistance and capacity. The unit volume of soil in the model consists of solid, water and air, and is treated as a system made up of spheres, filling the volume in overlapping layers. It was assumed that connections between layers and spheres in a layer are represented by serial and parallel connections of "resistors" and "capacitors". The emissivity of the soil surface is calculated from the ratio between the brightness temperature measured by the ELBARA radiometer (GAMMA Remote Sensing AG) and the physical temperature of the soil surface measured by an infrared sensor. Volumes of soil components, mineralogical composition, organic matter content, specific surface area, and bulk density of the soil were used as input data for the S-PM. The water content in the model is iteratively changed until the emissivities calculated from the S-PM reach the best agreement with the emissivities measured by the radiometer; the final water content then corresponds to the soil moisture measured by the radiometer. Then, the examined soil profile will be virtually divided into thin slices where moisture, temperature and thermal properties will be measured and simultaneously modelled via the S-PM. In the next step, the slices will be "added" starting from the top (soil surface) until the effective soil moisture equals the soil moisture measured by ELBARA. The thickness of the obtained stack is then equal to the desired "penetration depth". Moreover, it will be verified further by measuring the moisture content using thermal inertia. The work was partially funded by the Government of Poland through an ESA Contract under the PECS ELBARA_PD project No. 4000107897/13/NL/KML.
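
    The emissivity side of this workflow, Fresnel reflectivity from a complex soil permittivity compared against the brightness-temperature ratio, can be sketched as follows; the permittivity and temperature values are assumed for illustration only:

```python
import numpy as np

# Fresnel sketch for a smooth soil surface at incidence angle theta: specular
# reflectivities for H and V polarisation from the complex relative permittivity eps,
# and emissivity e = 1 - |r|^2. Measured emissivity is approximated as TB / T_soil.
# The permittivity below is an assumed illustrative value, not a model output.

def fresnel_emissivity(eps: complex, theta_deg: float):
    theta = np.deg2rad(theta_deg)
    cos_t = np.cos(theta)
    root = np.sqrt(eps - np.sin(theta) ** 2)
    r_h = (cos_t - root) / (cos_t + root)
    r_v = (eps * cos_t - root) / (eps * cos_t + root)
    return 1.0 - abs(r_h) ** 2, 1.0 - abs(r_v) ** 2

def measured_emissivity(tb_kelvin: float, soil_temp_kelvin: float) -> float:
    return tb_kelvin / soil_temp_kelvin

print(fresnel_emissivity(eps=15 - 3j, theta_deg=40.0))   # (e_H, e_V)
print(measured_emissivity(tb_kelvin=230.0, soil_temp_kelvin=290.0))
```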

  10. Localized and generalized simulated wear of resin composites.

    PubMed

    Barkmeier, W W; Takamizawa, T; Erickson, R L; Tsujimoto, A; Latta, M; Miyazaki, M

    2015-01-01

    A laboratory study was conducted to examine the wear of resin composite materials using both a localized and generalized wear simulation model. Twenty specimens each of seven resin composites (Esthet•X HD [HD], Filtek Supreme Ultra [SU], Herculite Ultra [HU], SonicFill [SF], Tetric EvoCeram Bulk Fill [TB], Venus Diamond [VD], and Z100 Restorative [Z]) were subjected to a wear challenge of 400,000 cycles for both localized and generalized wear in a Leinfelder-Suzuki wear simulator (Alabama machine). The materials were placed in custom cylinder-shaped stainless steel fixtures. A stainless steel ball bearing (r=2.387 mm) was used as the antagonist for localized wear, and a stainless steel, cylindrical antagonist with a flat tip was used for generalized wear. A water slurry of polymethylmethacrylate (PMMA) beads was used as the abrasive media. A noncontact profilometer (Proscan 2100) with Proscan software was used to digitize the surface contours of the pretest and posttest specimens. AnSur 3D software was used for wear assessment. For localized testing, maximum facet depth (μm) and volume loss (mm(3)) were used to compare the materials. The mean depth of the facet surface (μm) and volume loss (mm(3)) were used for comparison of the generalized wear specimens. A one-way analysis of variance (ANOVA) and Tukey post hoc test were used for data analysis of volume loss for both localized and generalized wear, maximum facet depth for localized wear, and mean depth of the facet for generalized wear. The results for localized wear simulation were as follows [mean (standard deviation)]: maximum facet depth (μm)--Z, 59.5 (14.7); HU, 99.3 (16.3); SU, 102.8 (13.8); HD, 110.2 (13.3); VD, 114.0 (10.3); TB, 125.5 (12.1); SF, 195.9 (16.9); volume loss (mm(3))--Z, 0.013 (0.002); SU, 0.026 (0.006); HU, 0.043 (0.008); VD, 0.057 (0.009); HD, 0.058 (0.014); TB, 0.061 (0.010); SF, 0.135 (0.024). Generalized wear simulation results were as follows: mean depth of facet (μm)--Z, 9.3 (3.4); SU, 12.8 (3.1); HU, 15.6 (3.2); TB, 19.2 (4.8); HD, 26.8 (6.5); VD, 29.1 (5.5); SF, 35.6 (8.4); volume loss (mm(3))--Z, 0.132 (0.049); SU, 0.0179 (0.042); HU, 0.224 (0.044); TB, 0.274 (0.065); HD, 0.386 (0.101); VD, 0.417 (0.076); SF, 0.505 (0.105). The ANOVA showed a significant difference among materials (p<0.001) for facet depth and volume loss for both localized and generalized wear. The post hoc test revealed differences (p<0.05) in localized and generalized wear values among the seven resin composites examined in this study. The findings provide valuable information regarding the relative wear characteristics of the materials in this study.

  11. An extensive phase space for the potential martian biosphere.

    PubMed

    Jones, Eriita G; Lineweaver, Charles H; Clarke, Jonathan D

    2011-12-01

    We present a comprehensive model of martian pressure-temperature (P-T) phase space and compare it with that of Earth. Martian P-T conditions compatible with liquid water extend to a depth of ∼310 km. We use our phase space model of Mars and of terrestrial life to estimate the depths and extent of the water on Mars that is habitable for terrestrial life. We find an extensive overlap between inhabited terrestrial phase space and martian phase space. The lower martian surface temperatures and shallower martian geotherm suggest that, if there is a hot deep biosphere on Mars, it could extend 7 times deeper than the ∼5 km depth of the hot deep terrestrial biosphere in the crust inhabited by hyperthermophilic chemolithotrophs. This corresponds to ∼3.2% of the volume of present-day Mars being potentially habitable for terrestrial-like life.

  12. [In-depth interviews and the Kano model to determine user requirements in a burns unit].

    PubMed

    González-Revaldería, J; Holguín-Holgado, P; Lumbreras-Marín, E; Núñez-López, G

    To determine the healthcare requirements of patients in a Burns Unit, using qualitative techniques such as in-depth personal interviews and Kano's methodology. Qualitative methodology using in-depth personal interviews (12 patients), Kano's conceptual model, and the SERVQHOS questionnaire (24 patients). All patients had been hospitalised in the last 12 months in the Burns Unit. Using Kano's methodology, service attributes were grouped by affinity diagrams and classified as follows: must-be, attractive (unexpected, great satisfaction), and one-dimensional (linked to the degree of functionality of the service). The outcomes were compared with those obtained with the SERVQHOS questionnaire. From the analysis of the in-depth interviews, 11 requirements were obtained, referring to hotel aspects, information, the need for a closer staff relationship, and organisational aspects. The attributes classified as must-be were free television and automatic TV disconnection at midnight. Those classified as attractive were: an individual room for more privacy, information about dressing change times in order to avoid anxiety, and additional staff for in-patients. The results were complementary to those obtained with the SERVQHOS questionnaire. In-depth personal interviews provide extra knowledge about patient requirements, complementing the information obtained with questionnaires. With this methodology, more active patient participation is achieved and the companion's opinion is also taken into account. Copyright © 2016 SECA. Published by Elsevier España, S.L.U. All rights reserved.

  13. Export of health services from developing countries: the case of Tunisia.

    PubMed

    Lautier, Marc

    2008-07-01

    Although the subject of health services exports by developing countries has been much discussed, the phenomenon is still in its early stage, and its real implications are not yet clear. Given the rapid development in this area, little empirical data are available. This paper aims to fill this gap by providing reliable data on consumption of health services abroad (GATS mode 2 of international service supply). It starts by assessing the magnitude of the volume of international trade in health services. This is followed by an in-depth analysis of the case of Tunisia based on an original field research. Because of the high quality of its health sector and its proximity with Europe, Tunisia has the highest export potential for health services in the Middle-East and North Africa (MENA) Region. Health services exports may represent a quarter of Tunisia's private health sector output and generate jobs for 5000 employees. If one takes into account tourism expenses by the incoming patient (and their relatives), these exports contribute to nearly 1% of the country's total exports. Finally, this case study highlights the regional dimension of external demand for health services and the predominance of South-South trade.

  14. Sensitivity of greenhouse summer dryness to changes in plant rooting characteristics

    USGS Publications Warehouse

    Milly, P.C.D.

    1997-01-01

    A possible consequence of increased concentrations of greenhouse gases in Earth's atmosphere is "summer dryness," a decrease of summer plant-available soil water in middle latitudes, caused by increased availability of energy to drive evapotranspiration. Results from a numerical climate model indicate that summer dryness and related changes of land-surface water balances are highly sensitive to possible concomitant changes of plant-available water-holding capacity of soil, which depends on plant rooting depth and density. The model suggests that a 14% decrease of the soil volume whose water is accessible to plant roots would generate the same summer dryness, by one measure, as an equilibrium doubling of atmospheric carbon dioxide. Conversely, a 14% increase of that soil volume would be sufficient to offset the summer dryness associated with carbon-dioxide doubling. Global and regional changes in rooting depth and density may result from (1) plant and plant-community responses to greenhouse warming, to carbon-dioxide fertilization, and to associated changes in the water balance and (2) anthropogenic deforestation and desertification. Given their apparently critical role, heretofore ignored, in global hydroclimatic change, such changes of rooting characteristics should be carefully evaluated using ecosystem observations, theory, and models.

  15. Source Models of the June 17th, 2007 Kilauea Intrusion: Monte Carlo Optimization

    NASA Astrophysics Data System (ADS)

    Sinnett, D. K.; Montgomery-Brown, E. D.; Segall, P.; Miklius, A.; Poland, M.; Yun, S.; Zebker, H.

    2007-12-01

    Father's Day, 17 June 2007, marked the beginning of the 56th episode of the ongoing eruption of Kilauea volcano, Hawaii. The episode culminated in a short-lived eruption approximately 6 km west of Pu'u 'O'o and 13 km southeast of Kilauea summit. The interruption of magma supply to, and withdrawal from, the reservoir beneath Pu'u 'O'o caused cessation of activity and ~100 m of crater floor subsidence there. The continuous and campaign GPS, electronic tiltmeter, and seismic networks, as well as InSAR, captured the episode in fine detail. Visual inspection of the data shows subsidence at Kilauea summit and Pu'u 'O'o, which fed the inflating dike. We began by modeling the intrusion with a Mogi source beneath Kilauea summit and a dislocation with uniform opening beneath the east rift zone, embedded in an isotropic, homogeneous, elastic half space. We invert for the 12 source parameters (length, width, depth, dip, strike, horizontal position, and opening of the dike, and position, depth, and volume change of the Mogi source) using Monte Carlo optimization. The inversion used three-component displacement data from 23 continuous and campaign GPS stations, diurnally and tidally filtered tilt from 6 stations, and an ENVISAT InSAR interferogram spanning 04/12/07 to 06/21/07 decimated using a quadtree algorithm. The optimum model included a volume change of ~-4.1 × 10⁶ m³ (volume loss) from a reservoir 3 km beneath the summit, and a total dike volume of ~19 × 10⁶ m³ (~4.84 km length × 2.45 km width × 1.6 m opening at 2.4 km depth). The discrepancy between summit volume loss and total dike volume suggests that other sources must have fed the dike. A crude estimate of volume loss from Pu'u 'O'o is 8.5 × 10⁶ m³, accounting for ~66% of the volume of the dike. The eruption site lies inside the eastern edge of the model, ~0.5 km to the south of the best-fit dike top. The best-fit dike top parallels the northern margin of an area of ground cracking near Makaopuhi and terminates at its western margin near Mauna Ulu. The western termination is ~2.5 km east of the westernmost observed ground cracks. Within 95% bounds the dike top may intersect the eruption area and extend to all regions of ground cracking. It is also interesting to note that this dike is located in an area between the 1997 and 1999 intrusions. The best-fit single dislocation model explains only 35% of the variance in the data. This is in part due to the inadequacies of a single planar dike with uniform opening to explain surface deformation, and perhaps to inelastic deformation associated with ground cracking near the western edge of the dike. Models with distributed opening, in which the dike plane honors the optimization results as well as the region of decorrelation in the ENVISAT interferogram, explain 69% of the data (Montgomery-Brown et al., this session).
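
    The Mogi component of this source model has a closed-form surface-displacement prediction; a minimal forward-model sketch for a half-space with Poisson's ratio 0.25 (the inversion, the dike dislocation, and the data handling are omitted):

```python
import numpy as np

# Mogi point-source sketch (elastic half-space, Poisson's ratio 0.25): surface
# displacements at horizontal distance r from a source at depth d with volume change dV:
#   u_z = (3*dV / (4*pi)) * d / (r^2 + d^2)^(3/2)
#   u_r = (3*dV / (4*pi)) * r / (r^2 + d^2)^(3/2)
# The source values below follow the abstract's optimum summit source (~-4.1e6 m^3 at
# ~3 km depth); the observation distances are arbitrary illustrative choices.

def mogi_displacements(r_m, depth_m, dvol_m3):
    r_m = np.asarray(r_m, dtype=float)
    R3 = (r_m ** 2 + depth_m ** 2) ** 1.5
    coeff = 3.0 * dvol_m3 / (4.0 * np.pi)
    return coeff * depth_m / R3, coeff * r_m / R3   # (u_z, u_r) in metres

uz, ur = mogi_displacements([0.0, 2000.0, 5000.0], depth_m=3000.0, dvol_m3=-4.1e6)
print(uz, ur)
```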

  16. Mechanical specific energy versus depth of cut in rock cutting and drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yaneng; Zhang, Wu; Gamwo, Isaac

    The relationship between Mechanical Specific Energy (MSE) and the Rate of Penetration (ROP), or equivalently the depth of cut per revolution, provides an important measure for strategizing a drilling operation. This study explores how MSE evolves with depth of cut, and presents a concerted effort that encompasses analytical, computational and experimental approaches. A simple model for the relationship between MSE and cutting depth is first derived with consideration of the wear progression of a circular cutter. This is an extension of Detournay and Defourny's phenomenological cutting model. Wear is modeled as a flat contact area at the bottom of a cutter referred to as a wear flat, and that wear flat in the past is often considered to be fixed during cutting. But during a drilling operation by a full bit that consists of multiple circular cutters, the wear flat length may increase because of various wear mechanisms involved. The wear progression of cutters generally results in reduced efficiency with either increased MSE or decreased ROP. Also, an accurate estimate of removed rock volume is found important for the evaluation of MSE. The derived model is compared with experiment results from a single circular cutter, for cutting soft rock under ambient pressure with actual depth measured through a micrometer, and for cutting high strength rock under high pressure with actual cutting area measured by a confocal microscope. Lastly, the model is employed to interpret the evolution of MSE with depth of cut for a full drilling bit under confining pressure. The general form of equation of the developed model is found to describe well the experiment data and can be applied to interpret the drilling data for a full bit.

  17. Mechanical specific energy versus depth of cut in rock cutting and drilling

    DOE PAGES

    Zhou, Yaneng; Zhang, Wu; Gamwo, Isaac; ...

    2017-12-07

    The relationship between Mechanical Specific Energy (MSE) and the Rate of Penetration (ROP), or equivalently the depth of cut per revolution, provides an important measure for strategizing a drilling operation. This study explores how MSE evolves with depth of cut, and presents a concerted effort that encompasses analytical, computational and experimental approaches. A simple model for the relationship between MSE and cutting depth is first derived with consideration of the wear progression of a circular cutter. This is an extension of Detournay and Defourny's phenomenological cutting model. Wear is modeled as a flat contact area at the bottom of a cutter referred to as a wear flat, and that wear flat in the past is often considered to be fixed during cutting. But during a drilling operation by a full bit that consists of multiple circular cutters, the wear flat length may increase because of various wear mechanisms involved. The wear progression of cutters generally results in reduced efficiency with either increased MSE or decreased ROP. Also, an accurate estimate of removed rock volume is found important for the evaluation of MSE. The derived model is compared with experiment results from a single circular cutter, for cutting soft rock under ambient pressure with actual depth measured through a micrometer, and for cutting high strength rock under high pressure with actual cutting area measured by a confocal microscope. Lastly, the model is employed to interpret the evolution of MSE with depth of cut for a full drilling bit under confining pressure. The general form of equation of the developed model is found to describe well the experiment data and can be applied to interpret the drilling data for a full bit.
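
    The trend such a model captures, MSE decreasing toward an intrinsic cutting energy as the depth of cut grows and increasing as the wear flat lengthens, can be illustrated with a simple two-term expression; this is a schematic stand-in, not the authors' exact formulation, and all parameter values are placeholders:

```python
# Illustrative two-term model of mechanical specific energy (MSE) versus depth of cut d:
#   MSE(d) = eps + mu * sigma * l_wear / d
# where eps is the intrinsic specific energy of pure cutting, and the second term is the
# frictional contribution of a wear flat of length l_wear under contact stress sigma with
# friction coefficient mu. Parameter values are placeholders chosen only to show the trend.

def mse_mpa(depth_of_cut_mm, eps_mpa=60.0, mu=0.6, sigma_mpa=250.0, l_wear_mm=0.4):
    return eps_mpa + mu * sigma_mpa * l_wear_mm / depth_of_cut_mm

for d in (0.2, 0.5, 1.0, 2.0):
    print(d, round(mse_mpa(d), 1))   # MSE falls toward eps as the depth of cut increases
```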

  18. The GeoClaw software for depth-averaged flows with adaptive refinement

    USGS Publications Warehouse

    Berger, M.J.; George, D.L.; LeVeque, R.J.; Mandli, Kyle T.

    2011-01-01

    Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis and dam-break flooding problems. Documentation and download information is available at www.clawpack.org/geoclaw.
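
    For reference, the simplest system in the class GeoClaw solves, the one-dimensional shallow water equations over bathymetry b(x), can be written as:

```latex
% 1-D shallow water equations (depth h, velocity u, gravity g, bathymetry b)
\begin{aligned}
  h_t + (hu)_x &= 0,\\
  (hu)_t + \left(hu^2 + \tfrac{1}{2} g h^2\right)_x &= -g\,h\,b_x .
\end{aligned}
```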

  19. Improvement effect on the depth-dose distribution by CSF drainage and air infusion of a tumour-removed cavity in boron neutron capture therapy for malignant brain tumours

    NASA Astrophysics Data System (ADS)

    Sakurai, Yoshinori; Ono, Koji; Miyatake, Shin-ichi; Maruhashi, Akira

    2006-03-01

    Boron neutron capture therapy (BNCT) without craniotomy for malignant brain tumours was started using an epi-thermal neutron beam at the Kyoto University Reactor in June 2002. We have tried some techniques to overcome the treatable-depth limit in BNCT. One of the effective techniques is void formation utilizing a tumour-removed cavity. The tumorous part is removed by craniotomy about 1 week before a BNCT treatment in our protocol. Just before the BNCT irradiation, the cerebro-spinal fluid (CSF) in the tumour-removed cavity is drained out, air is infused to the cavity and then the void is made. This void improves the neutron penetration, and the thermal neutron flux at depth increases. The phantom experiments and survey simulations modelling the CSF drainage and air infusion of the tumour-removed cavity were performed for the size and shape of the void. The advantage of the CSF drainage and air infusion is confirmed for the improvement in the depth-dose distribution. From the parametric surveys, it was confirmed that the cavity volume had good correlation with the improvement effect, and the larger effect was expected as the cavity volume was larger.

  20. Improvement effect on the depth-dose distribution by CSF drainage and air infusion of a tumour-removed cavity in boron neutron capture therapy for malignant brain tumours.

    PubMed

    Sakurai, Yoshinori; Ono, Koji; Miyatake, Shin-Ichi; Maruhashi, Akira

    2006-03-07

    Boron neutron capture therapy (BNCT) without craniotomy for malignant brain tumours was started using an epi-thermal neutron beam at the Kyoto University Reactor in June 2002. We have tried some techniques to overcome the treatable-depth limit in BNCT. One of the effective techniques is void formation utilizing a tumour-removed cavity. The tumorous part is removed by craniotomy about 1 week before a BNCT treatment in our protocol. Just before the BNCT irradiation, the cerebro-spinal fluid (CSF) in the tumour-removed cavity is drained out, air is infused to the cavity and then the void is made. This void improves the neutron penetration, and the thermal neutron flux at depth increases. The phantom experiments and survey simulations modelling the CSF drainage and air infusion of the tumour-removed cavity were performed for the size and shape of the void. The advantage of the CSF drainage and air infusion is confirmed for the improvement in the depth-dose distribution. From the parametric surveys, it was confirmed that the cavity volume had good correlation with the improvement effect, and the larger effect was expected as the cavity volume was larger.

  1. Computer simulation of backscattering spectra from paint

    NASA Astrophysics Data System (ADS)

    Mayer, M.; Silva, T. F.

    2017-09-01

    To study the role of lateral non-homogeneity on backscattering analysis of paintings, a simplified model of paint consisting of randomly distributed spherical pigment particles embedded in oil/binder has been developed. Backscattering spectra for lead white pigment particles in linseed oil have been calculated for 3 MeV H+ at a scattering angle of 165° for pigment volume concentrations ranging from 30 vol.% to 70 vol.% using the program STRUCTNRA. For identical pigment volume concentrations the heights and shapes of the backscattering spectra depend on the diameter of the pigment particles: This is a structural ambiguity for identical mean atomic concentrations but different lateral arrangement of materials. Only for very small pigment particles the resulting spectra are close to spectra calculated supposing atomic mixing and assuming identical concentrations of all elements. Generally, a good fit can be achieved when evaluating spectra from structured materials assuming atomic mixing of all elements and laterally homogeneous depth distributions. However, the derived depth profiles are inaccurate by a factor of up to 3. The depth range affected by this structural ambiguity ranges from the surface to a depth of roughly 0.5-1 pigment particle diameters. Accurate quantitative evaluation of backscattering spectra from paintings therefore requires taking the correct microstructure of the paint layer into account.

  2. From Cloister to Commons: Concepts and Models for Service-Learning in Religious Studies. AAHE's Series on Service-Learning in the Disciplines.

    ERIC Educational Resources Information Center

    Devine, Richard, Ed.; Favazza, Joseph A., Ed.; McLain, F. Michael, Ed.

    This essays in this volume, 19th in a series, discuss why and how service-learning can be implemented in Religious Studies and what that discipline contributes to the pedagogy of service-learning. Part 1, "Service-Learning and the Dilemma of Religious Studies," contains: (1) "Service-Learning and the Dilemma of Religious Studies: Descriptive or…

  3. Influence of tundra snow layer thickness on measured and modelled radar backscatter

    NASA Astrophysics Data System (ADS)

    Rutter, N.; Sandells, M. J.; Derksen, C.; King, J. M.; Toose, P.; Wake, L. M.; Watts, T.

    2017-12-01

    Microwave radar backscatter within a tundra snowpack is strongly influenced by spatial variability in the thickness of internal layering. Arctic tundra snowpacks often comprise layers consisting of two dominant snow microstructures: a basal depth hoar layer overlain by a layer of wind slab. Occasionally there is also a surface layer of decomposing fresh snow. The two main layers have strongly different microwave scattering properties. Depth hoar has a greater capacity for scattering electromagnetic energy than wind slab; however, wind slab usually contributes more snow water equivalent (SWE) than depth hoar per unit thickness due to its higher density. So, determining the relative proportions of depth hoar and wind slab in a snowpack of known depth may help our future capacity to invert forward models of electromagnetic backscatter within a data assimilation scheme to improve modelled estimates of SWE. Extensive snow measurements were made within Trail Valley Creek, NWT, Canada in April 2013. Snow microstructure was measured at 18 pit and 9 trench locations throughout the catchment (trench extent ranged between 5 and 50 m). Ground microstructure measurements included traditional stratigraphy, near-infrared stratigraphy, Specific Surface Area (SSA), and density. Coincident airborne Lidar measurements were made to estimate distributed snow depth across the catchment, in addition to airborne radar snow backscatter using a dual-polarized (VV/VH) X- and Ku-band Synthetic Aperture Radar (SnowSAR). Ground measurements showed that the mean proportion of depth hoar was just under 30% of total snow depth and was largely unresponsive to increasing snow depth. The mean proportion of wind slab was consistently greater than 50% and showed an increasing trend with increasing total snow depth. A decreasing trend in the mean proportion of surface snow (approximately 25% to 10%) with increasing total depth accounted for this increase in wind slab. This new knowledge of variability in stratigraphic thickness, relative to respective proportions of total snow depth, was used to investigate the representativeness of point measurements of density and microstructure for forward simulations of the SMRT microwave scattering model, using Lidar-derived snow depths.
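
    To illustrate why the layer proportions matter for SWE, a bulk-density sketch with assumed layer densities (not the campaign's measured values):

```python
# Bulk SWE sketch from layer fractions: SWE (mm w.e.) = sum over layers of
# depth_fraction * total_depth (m) * layer_density (kg m^-3).
# Layer fractions follow the proportions quoted in the abstract; the densities are
# typical assumed values for each microstructure, not measured values from the campaign.

def swe_mm(total_depth_m: float, fractions: dict, densities: dict) -> float:
    return sum(total_depth_m * fractions[layer] * densities[layer] for layer in fractions)

fractions = {"depth_hoar": 0.30, "wind_slab": 0.55, "surface_snow": 0.15}
densities = {"depth_hoar": 250.0, "wind_slab": 380.0, "surface_snow": 150.0}  # kg m^-3, assumed
print(swe_mm(0.40, fractions, densities))   # SWE for a hypothetical 40 cm tundra snowpack
```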

  4. Estimating Summer Nutrient Concentrations in Northeastern Lakes from SPARROW Load Predictions and Modeled Lake Depth and Volume

    EPA Science Inventory

    Global nutrient cycles have been altered by use of fossil fuels and fertilizers resulting in increases in nutrient loads to aquatic systems. In the United States, excess nutrients have been repeatedly reported as the primary cause of lake water quality impairments. Setting nutr...

  5. The influence of tyre characteristics on measures of rolling performance during cross-country mountain biking.

    PubMed

    Macdermid, Paul William; Fink, Philip W; Stannard, Stephen R

    2015-01-01

    This investigation sets out to assess the effect of five different models of mountain bike tyre on rolling performance over hard-pack mud. Independent characteristics included total weight, volume, tread surface area and tread depth. One male cyclist performed multiple (30) trials of a deceleration field test to assess reliability. Further tests performed on a separate occasion included multiple (15) trials of the deceleration test and six fixed power output hill climb tests for each tyre. The deceleration test proved to be reliable as a means of assessing rolling performance via differences in initial and final speed (coefficient of variation (CV) = 4.52%). Overall differences between tyre performance for both deceleration test (P = 0.014) and hill climb (P = 0.032) were found, enabling significant (P < 0.0001 and P = 0.049) models to be generated, allowing tyre performance prediction based on tyre characteristics. The ideal tyre for rolling and climbing performance on hard-pack surfaces would be to decrease tyre weight by way of reductions in tread surface area and tread depth while keeping volume high.

  6. Energy modeling. Volume 2: Inventory and details of state energy models

    NASA Astrophysics Data System (ADS)

    Melcher, A. G.; Underwood, R. G.; Weber, J. C.; Gist, R. L.; Holman, R. P.; Donald, D. W.

    1981-05-01

    An inventory of energy models developed by or for state governments is presented, and certain models are discussed in depth. These models address a variety of purposes such as: supply or demand of energy or of certain types of energy; emergency management of energy; and energy economics. Ten models are described. The purpose, use, and history of the model is discussed, and information is given on the outputs, inputs, and mathematical structure of the model. The models include five models dealing with energy demand, one of which is econometric and four of which are econometric-engineering end-use models.

  7. Application Level Protocol Development for Library and Information Science Applications. Volume 1: Service Definition. Volume 2: Protocol Specification. Report No. TG.1.5; TG.50.

    ERIC Educational Resources Information Center

    Aagaard, James S.; And Others

    This two-volume document specifies a protocol that was developed using the Reference Model for Open Systems Interconnection (OSI), which provides a framework for communications within a heterogeneous network environment. The protocol implements the features necessary for bibliographic searching, record maintenance, and mail transfer between…

  8. A Study of Child Variance, Volume 4: The Future; Conceptual Project in Emotional Disturbance.

    ERIC Educational Resources Information Center

    Rhodes, William C.

    Presented in the fourth volume in a series are a discussion of critical issues related to child variance and predictions for how society will perceive and respond to child variance in the future. Reviewed in an introductory chapter are the contents of the first three volumes which deal with conceptual models, interventions, and service delivery…

  9. External validation of a forest inventory and analysis volume equation and comparisons with estimates from multiple stem-profile models

    Treesearch

    Christopher M. Oswalt; Adam M. Saunders

    2009-01-01

    Sound estimation procedures are desideratum for generating credible population estimates to evaluate the status and trends in resource conditions. As such, volume estimation is an integral component of the U.S. Department of Agriculture, Forest Service, Forest Inventory and Analysis (FIA) program's reporting. In effect, reliable volume estimation procedures are...

  10. On the effect of velocity gradients on the depth of correlation in μPIV

    NASA Astrophysics Data System (ADS)

    Mustin, B.; Stoeber, B.

    2016-03-01

    The present work revisits the effect of velocity gradients on the depth of the measurement volume (depth of correlation) in microscopic particle image velocimetry (μPIV). General relations between the μPIV weighting functions and the local correlation function are derived from the original definition of the weighting functions. These relations are used to investigate under which circumstances the weighting functions are related to the curvature of the local correlation function. Furthermore, this work proposes a modified definition of the depth of correlation that leads to more realistic results than previous definitions for the case when flow gradients are taken into account. Dimensionless parameters suitable to describe the effect of velocity gradients on μPIV cross correlation are derived and visual interpretations of these parameters are proposed. We then investigate the effect of the dimensionless parameters on the weighting functions and the depth of correlation for different flow fields with spatially constant flow gradients and with spatially varying gradients. Finally this work demonstrates that the results and dimensionless parameters are not strictly bound to a certain model for particle image intensity distributions but are also meaningful when other models for particle images are used.

  11. Volume of Valley Networks on Mars and Its Hydrologic Implications

    NASA Astrophysics Data System (ADS)

    Luo, W.; Cang, X.; Howard, A. D.; Heo, J.

    2015-12-01

    Valley networks on Mars are river-like features that offer the best evidence for water activity in its geologic past. Previous studies have extracted valley network lines automatically from digital elevation model (DEM) data and manually from remotely sensed images. The volume of material removed by valley networks is an important parameter that could help us infer the amount of water needed to carve the valleys. A progressive black top hat (PBTH) transformation algorithm has been adapted from image processing to extract valley volume and successfully applied to a simulated landform and Ma'adim Valles, Mars. However, the volume of valley network excavation on Mars has not been estimated on a global scale. In this study, the PBTH method was applied to the whole of Mars to estimate this important parameter. The process was automated with Python in ArcGIS. Polygons delineating the valley-associated depressions were generated by using a multi-flow direction growth method, which started with selected high point seeds on a depth grid (essentially an inverted valley) created by the PBTH transformation and grew outward following multi-flow direction on the depth grid. Two published versions of valley network lines were integrated to automatically select depression polygons that represent the valleys. Some crater depressions that are connected with valleys, and thus selected in the previous step, were removed by using information from a crater database. Because of the large distortion associated with global datasets in projected maps, the volume of each cell within a valley was calculated using the depth of the cell multiplied by the spherical area of the cell. The volumes of all the valley cells were then summed to produce the estimate of global valley excavation volume. Our initial result for this estimate was ~2.4 × 10¹⁴ m³. Assuming a sediment density of 2900 kg/m³, a porosity of 0.35, and a sediment load of 1.5 kg/m³, the global volume of water needed to carve the valleys was estimated to be ~7.1 × 10¹⁷ m³. Because of the coarse resolution of the MOLA data, this is a conservative lower bound. Compared with the hypothesized northern ocean volume of 2.3 × 10¹⁶ m³ estimated by Carr and Head (2003), our estimate of water volume suggests and confirms an active hydrologic cycle for early Mars. Further hydrologic analysis will improve the estimate accuracy.
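
    The cell-by-cell summation described, depth times the spherical area of each latitude-longitude cell, might look like the following sketch; the Mars radius is approximate and the depth grid is placeholder data rather than PBTH output:

```python
import numpy as np

# Valley excavation volume sketch: each grid cell's volume is its excavation depth times
# its area on the sphere. For a lat-lon cell spanning dlon (radians) between latitudes
# phi1 and phi2, the spherical area is R^2 * dlon * (sin(phi2) - sin(phi1)).
# The depth grid below is random placeholder data, not PBTH output.

R_MARS = 3_389_500.0          # mean Mars radius in metres (approximate)

def cell_areas_m2(lat_edges_deg, dlon_deg):
    phi = np.deg2rad(np.asarray(lat_edges_deg))
    dlon = np.deg2rad(dlon_deg)
    return R_MARS ** 2 * dlon * (np.sin(phi[1:]) - np.sin(phi[:-1]))

def valley_volume_m3(depth_grid_m, lat_edges_deg, dlon_deg):
    """depth_grid_m: rows = latitude bands, columns = longitude cells."""
    areas = cell_areas_m2(lat_edges_deg, dlon_deg)          # one area per latitude band
    return float(np.sum(depth_grid_m * areas[:, None]))

rng = np.random.default_rng(1)
depths = rng.uniform(0.0, 50.0, size=(4, 10))               # placeholder depths (m)
print(valley_volume_m3(depths, lat_edges_deg=[-10, -5, 0, 5, 10], dlon_deg=0.5))
```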

  12. Learning by Doing: Concepts and Models for Service-Learning in Accounting. AAHE's Series on Service-Learning in the Disciplines.

    ERIC Educational Resources Information Center

    Rama, D. V., Ed.

    This volume is part of a series of 18 monographs on service learning and the academic disciplines. It is designed to (1) develop a theoretical framework for service learning in accounting consistent with the goals identified by accounting educators and the recent efforts toward curriculum reform, and (2) describe specific active learning…

  13. Comparison of estimated and observed stormwater runoff for fifteen watersheds in west-central Florida, using five common design techniques

    USGS Publications Warehouse

    Trommer, J.T.; Loper, J.E.; Hammett, K.M.; Bowman, Georgia

    1996-01-01

    Hydrologists use several traditional techniques for estimating peak discharges and runoff volumes from ungaged watersheds. However, applying these techniques to watersheds in west-central Florida requires that empirical relationships be extrapolated beyond tested ranges. As a result there is some uncertainty as to their accuracy. Sixty-six storms in 15 west-central Florida watersheds were modeled using (1) the rational method, (2) the U.S. Geological Survey regional regression equations, (3) the Natural Resources Conservation Service (formerly the Soil Conservation Service) TR-20 model, (4) the Army Corps of Engineers HEC-1 model, and (5) the Environmental Protection Agency SWMM model. The watersheds ranged between fully developed urban and undeveloped natural watersheds. Peak discharges and runoff volumes were estimated using standard or recommended methods for determining input parameters. All model runs were uncalibrated and the selection of input parameters was not influenced by observed data. The rational method, only used to calculate peak discharges, overestimated 45 storms, underestimated 20 storms and estimated the same discharge for 1 storm. The mean estimation error for all storms indicates the method overestimates the peak discharges. Estimation errors were generally smaller in the urban watersheds and larger in the natural watersheds. The U.S. Geological Survey regression equations provide peak discharges for storms of specific recurrence intervals. Therefore, direct comparison with observed data was limited to sixteen observed storms that had precipitation equivalent to specific recurrence intervals. The mean estimation error for all storms indicates the method overestimates both peak discharges and runoff volumes. Estimation errors were smallest for the larger natural watersheds in Sarasota County, and largest for the small watersheds located in the eastern part of the study area. The Natural Resources Conservation Service TR-20 model, overestimated peak discharges for 45 storms and underestimated 21 storms, and overestimated runoff volumes for 44 storms and underestimated 22 storms. The mean estimation error for all storms modeled indicates that the model overestimates peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. The HEC-1 model overestimated peak discharge rates for 55 storms and underestimated 11 storms. Runoff volumes were overestimated for 44 storms and underestimated for 22 storms using the Army Corps of Engineers HEC-1 model. The mean estimation error for all the storms modeled indicates that the model overestimates peak discharge rates and runoff volumes. Generally, the smaller estimation errors in peak discharges were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. Estimation errors in runoff volumes; however, were smallest for the 3 natural watersheds located in the southernmost part of Sarasota County. The Environmental Protection Agency Storm Water Management model produced similar peak discharges and runoff volumes when using both the Green-Ampt and Horton infiltration methods. Estimated peak discharge and runoff volume data calculated with the Horton method was only slightly higher than those calculated with the Green-Ampt method. 
The mean estimation error for all the storms modeled indicates the model using the Green-Ampt infiltration method overestimates peak discharges and slightly underestimates runoff volumes. Using the Horton infiltration method, the model overestimates both peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the five natural watersheds in Sarasota County with the least amount of impervious cover and the lowest slopes. The largest er
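
    The error statistics quoted throughout this comparison are straightforward to reproduce; the sketch below computes over/underestimation counts and a mean percentage estimation error for hypothetical pairs of observed and modeled peak discharges (the numbers and the percentage-error definition are illustrative, not taken from the report).

```python
import numpy as np

# Hypothetical observed and modeled peak discharges (m^3/s) for a set of storms.
observed = np.array([12.0, 45.0, 3.2, 20.0, 8.5])
modeled = np.array([15.0, 40.0, 4.0, 26.0, 7.9])

# Percentage estimation error per storm (positive = overestimate).
errors = 100.0 * (modeled - observed) / observed

n_over = int(np.sum(modeled > observed))
n_under = int(np.sum(modeled < observed))
print(f"overestimated {n_over} storms, underestimated {n_under} storms")
print(f"mean estimation error: {errors.mean():.1f}%")
```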

  14. The maximum economic depth of groundwater abstraction for irrigation

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.

    2017-12-01

    Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it to be still economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where costs of well drilling and the energy costs of pumping, which are a function of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000 and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas in the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. Most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of maximum economic depth will be combined with estimates of groundwater depth and storage coefficients to estimate economically attainable groundwater volumes worldwide.
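
    A toy version of the economic criterion sketched above: the maximum economic depth can be taken as the deepest water table for which annual crop revenue still exceeds the annualized well-drilling cost plus the energy cost of lifting the irrigation water. All parameter values and functional forms below are placeholders for illustration, not the study's calibrated cost and revenue sub-models.

```python
# Illustrative parameters (placeholders, not the study's calibrated values).
crop_revenue = 900.0              # USD per hectare per year from the irrigated crop
irrigation_demand = 5000.0        # m^3 of groundwater per hectare per year
drilling_cost_per_m = 100.0       # USD per metre of well depth
amortization_years = 20.0         # years over which the drilling cost is spread
lift_cost_per_m3_per_m = 0.004    # USD to lift 1 m^3 of water by 1 m

def annual_profit(depth_m):
    """Revenue minus annualized drilling and pumping costs for a given head depth."""
    drilling = drilling_cost_per_m * depth_m / amortization_years
    pumping = lift_cost_per_m3_per_m * irrigation_demand * depth_m
    return crop_revenue - drilling - pumping

# Deepest level (in 1 m steps) at which irrigation still pays for itself.
max_depth = max((d for d in range(1, 1001) if annual_profit(d) > 0), default=None)
print("maximum economic depth (m):", max_depth)
```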

  15. Decision support system in an international-voice-services business company

    NASA Astrophysics Data System (ADS)

    Hadianti, R.; Uttunggadewa, S.; Syamsuddin, M.; Soewono, E.

    2017-01-01

    We consider a problem faced by an international telecommunication services company in maximizing its profit from voice services by controlling cost and business partnerships. Competitiveness in this industry is very high, so any efficiency gained from controlling cost and business partnerships can help the company survive in such a competitive environment. The company trades voice traffic with a large number of business partners. There are four trading schemes the company can choose from, namely flat rate, class tiering, volume commitment, and revenue capped. Each scheme has a specific characteristic regarding the rate and volume deal, and the last three schemes are regarded as strategic schemes offered to business partners to ensure incoming traffic volume for both parties. The company and each business partner need to choose an optimal agreement over a certain period of time that maximizes the company's profit. In this agreement, both parties agree to use a certain trading scheme, rate, and rate/volume/revenue deal. A decision support system is therefore needed to give comprehensive information to the sales officers who deal with the business partners. This paper discusses the mathematical model of the optimal decision for incoming traffic volume control, which is a part of the analysis needed to build the decision support system. The mathematical model is built by first performing data analysis to see how elastic the incoming traffic volume is. Once the level of elasticity is obtained, we derive a mathematical model that can simulate the impact of any trading decision on the revenue of the company. The optimal decision can be obtained from these simulation results. To evaluate the performance of the proposed method, we apply our decision model to historical data. A software tool incorporating our methodology is currently under construction.

  16. Spatial variations in the frequency-magnitude distribution of earthquakes at Soufriere Hills Volcano, Montserrat, West Indies

    USGS Publications Warehouse

    Power, J.A.; Wyss, M.; Latchman, J.L.

    1998-01-01

    The frequency-magnitude distribution of earthquakes, measured by the b-value, is determined as a function of space beneath Soufriere Hills Volcano, Montserrat, from data recorded between August 1, 1995 and March 31, 1996. A volume of anomalously high b-values (b > 3.0) with a 1.5 km radius is imaged at depths of 0 and 1.5 km beneath English's Crater and Chance's Peak. This high b-value anomaly extends southwest to Gage's Soufriere. At depths greater than 2.5 km, volumes of comparatively low b-values (b ≈ 1) are found beneath St. George's Hill, Windy Hill, and, below 2.5 km depth, to the south of English's Crater. We speculate that the depth of high b-value anomalies under volcanoes may be a function of silica content, modified by some additional factors, with the most siliceous volcanoes having these highly fractured or high-pore-pressure volumes at the shallowest depths. Copyright 1998 by the American Geophysical Union.
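
    The b-value mapping above rests on fitting the Gutenberg-Richter frequency-magnitude relation within local earthquake samples; a common estimator is the Aki/Utsu maximum-likelihood formula, sketched here for a hypothetical magnitude sample (the catalogue values, completeness magnitude and bin width are made up for illustration).

```python
import numpy as np

# Hypothetical magnitudes above the completeness threshold Mc.
magnitudes = np.array([1.2, 1.5, 1.3, 2.0, 1.1, 1.8, 1.4, 1.6, 1.2, 1.9])
Mc = 1.0   # completeness magnitude
dM = 0.1   # magnitude bin width

# Aki/Utsu maximum-likelihood b-value with the usual half-bin correction.
b = np.log10(np.e) / (magnitudes.mean() - (Mc - dM / 2.0))
print(f"b-value: {b:.2f}")
```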

  17. Advanced 3D Geological Modelling Using Multi Geophysical Data in the Yamagawa Geothermal Field, Japan

    NASA Astrophysics Data System (ADS)

    Mochinaga, H.; Aoki, N.; Mouri, T.

    2017-12-01

    We propose a robust workflow for 3D geological modelling based on integrated analysis that honours seismic, gravity, and wellbore data for exploration and development at flash-steam geothermal power plants. We design the workflow around temperature logs from fewer than 10 well locations, for practical use at an early stage of geothermal exploration and development. In the workflow, geostatistical techniques, multi-attribute analysis, and an artificial neural network are employed to integrate the multiple geophysical data sets. The geological modelling is verified using 3D seismic data acquired in 2015 in the Yamagawa Demonstration Area (approximately 36 km²), located in the city of Ibusuki in Kagoshima, Japan. Temperature-depth profiles are typically characterized by heat transfer through conduction, outflow, and up-flow, which produce low-frequency trends. On the other hand, feed and injection zones with high permeability cause high-frequency perturbations on temperature-depth profiles. Each trend is presumed to be caused by different geological properties and subsurface structures. In this study, we estimate high-frequency (> 2 cycles/km) and low-frequency (< 1 cycle/km) models separately by means of different types of attribute volumes. These attributes are mathematically generated from P-impedance and density volumes derived from seismic inversion, an ant-tracking seismic volume, and a geostatistical temperature model prior to application of the artificial neural network in the geothermal modelling. As a result, the band-limited stepwise approach predicts a more precise geothermal model than modelling the full-band temperature profiles in a single step. In addition, lithofacies interpretation confirms the reliability of the predicted geothermal model. The integrated interpretation is highly consistent with geological reports from previous studies. Isotherm geobodies illustrate specific features of the geothermal reservoir and cap rock, the shallow aquifer, and its hydrothermal circulation in 3D visualization. The advanced workflow for 3D geological modelling is suitable for optimizing well locations for production and reinjection in geothermal fields.
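
    One way to read the low- and high-frequency decomposition described above is as spatial filtering of the temperature log along depth. The sketch below splits a synthetic temperature-depth profile at roughly 1 and 2 cycles/km with Butterworth filters; the synthetic profile, sampling interval and filter order are assumptions for illustration, not the study's actual attribute workflow.

```python
import numpy as np
from scipy.signal import butter, filtfilt

dz = 0.005                      # depth sampling interval in km (5 m)
z = np.arange(0.0, 2.0, dz)     # 2 km synthetic temperature log
# Smooth conductive trend plus a short-wavelength perturbation near a feed zone.
temperature = 20.0 + 80.0 * z + 3.0 * np.exp(-((z - 1.2) / 0.02) ** 2)

nyquist = 0.5 / dz                                    # cycles/km
b_lo, a_lo = butter(4, 1.0 / nyquist, btype="low")    # < 1 cycle/km trend
b_hi, a_hi = butter(4, 2.0 / nyquist, btype="high")   # > 2 cycles/km perturbation

low_freq = filtfilt(b_lo, a_lo, temperature)
high_freq = filtfilt(b_hi, a_hi, temperature)
print("max high-frequency perturbation (deg C):", round(float(high_freq.max()), 2))
```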

  18. Topology-aware illumination design for volume rendering.

    PubMed

    Zhou, Jianlong; Wang, Xiuying; Cui, Hui; Gong, Peng; Miao, Xianglin; Miao, Yalin; Xiao, Chun; Chen, Fang; Feng, Dagan

    2016-08-19

    Direct volume rendering is a flexible and effective approach to inspecting large volumetric data such as medical and biological images. In conventional volume rendering, it is often time consuming to set up a meaningful illumination environment. Moreover, conventional illumination approaches usually assign the same values of the variables of an illumination model to different structures manually, and thus neglect the important illumination variations due to structure differences. We introduce a novel illumination design paradigm for volume rendering that uses topology to automate illumination parameter definitions meaningfully. The topological features are extracted from the contour tree of an input volumetric data set. The automation of illumination design is achieved based on four aspects: attenuation, distance, saliency, and contrast perception. To better distinguish structures and maximize the illuminance perception differences between structures, a two-phase topology-aware illuminance perception contrast model is proposed based on the psychological concept of Just-Noticeable-Difference. The proposed approach allows meaningful and efficient automatic generation of illumination in volume rendering. Our results showed that our approach is more effective in depth and shape depiction, as well as providing higher perceptual differences between structures.

  19. Connecting Past and Present: Concepts and Models for Service-Learning in History. AAHE's Series on Service-Learning in the Disciplines.

    ERIC Educational Resources Information Center

    Harkavy, Ira, Ed.; Donovan, Bill M., Ed.

    This volume, 16th in a series about service learning and the academic disciplines, focuses on the ways service learning adds immediacy and relevance to the study of history. The authors of this collection provide answers to why history and service learning should be connected, and they describe strategies to bring this about. The chapters are: (1)…

  20. Experiencing Citizenship: Concepts and Models for Service-Learning in Political Science. AAHE's Series on Service-Learning in the Disciplines.

    ERIC Educational Resources Information Center

    Battistoni, Richard M., Ed.; Hudson, William E., Ed.

    This volume is part of a series of 18 monographs on service learning and the academic disciplines. This collection of essays focuses on the use of service learning as an approach to teaching and learning in political science. Following an Introduction by Richard M. Battistoni and William E. Hudson, the four essays in Part 1, "Service-Learning as…

  1. A depth-averaged debris-flow model that includes the effects of evolving dilatancy. I. physical basis

    USGS Publications Warehouse

    Iverson, Richard M.; George, David L.

    2014-01-01

    To simulate debris-flow behaviour from initiation to deposition, we derive a depth-averaged, two-phase model that combines concepts of critical-state soil mechanics, grain-flow mechanics and fluid mechanics. The model's balance equations describe coupled evolution of the solid volume fraction, m, basal pore-fluid pressure, flow thickness and two components of flow velocity. Basal friction is evaluated using a generalized Coulomb rule, and fluid motion is evaluated in a frame of reference that translates with the velocity of the granular phase, v_s. Source terms in each of the depth-averaged balance equations account for the influence of the granular dilation rate, defined as the depth integral of ∇⋅v_s. Calculation of the dilation rate involves the effects of an elastic compressibility and an inelastic dilatancy angle proportional to m − m_eq, where m_eq is the value of m in equilibrium with the ambient stress state and flow rate. Normalization of the model equations shows that predicted debris-flow behaviour depends principally on the initial value of m − m_eq and on the ratio of two fundamental timescales. One of these timescales governs downslope debris-flow motion, and the other governs pore-pressure relaxation that modifies Coulomb friction and regulates evolution of m. A companion paper presents a suite of model predictions and tests.
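
    For orientation, the dilatancy closure referred to above is usually written so that the dilation angle ψ tracks the departure of the solid volume fraction from its equilibrium value; the lines below paraphrase that standard form (the exact expression, coefficients and sign conventions in the paper may differ).

```latex
% Hedged paraphrase of the standard dilatancy closure for this class of models;
% consult the paper for the exact form used there.
\tan\psi \;=\; m - m_{\mathrm{eq}},
\qquad
\frac{\dot{\varepsilon}_v}{\dot{\gamma}} \;=\; \tan\psi
```

    An over-compacted mixture (m > m_eq) then dilates as it shears, drawing pore pressure down and strengthening Coulomb friction, while a loosely packed mixture (m < m_eq) contracts and weakens.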

  2. Forecasting magma-chamber rupture at Santorini volcano, Greece

    PubMed Central

    Browning, John; Drymoni, Kyriaki; Gudmundsson, Agust

    2015-01-01

    How much magma needs to be added to a shallow magma chamber to cause rupture, dyke injection, and a potential eruption? Models that yield reliable answers to this question are needed in order to facilitate eruption forecasting. Development of a long-lived shallow magma chamber requires periodic influx of magmas from a parental body at depth. This redistribution process does not necessarily cause an eruption but produces a net volume change that can be measured geodetically by inversion techniques. Using continuum-mechanics and fracture-mechanics principles, we calculate the amount of magma contained at shallow depth beneath Santorini volcano, Greece. We demonstrate through structural analysis of dykes exposed within the Santorini caldera, previously published data on the volume of recent eruptions, and geodetic measurements of the 2011–2012 unrest period, that the measured 0.02% increase in volume of Santorini’s shallow magma chamber was associated with magmatic excess pressure increase of around 1.1 MPa. This excess pressure was high enough to bring the chamber roof close to rupture and dyke injection. For volcanoes with known typical extrusion and intrusion (dyke) volumes, the new methodology presented here makes it possible to forecast the conditions for magma-chamber failure and dyke injection at any geodetically well-monitored volcano. PMID:26507183
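
    A back-of-envelope check of the figures quoted above, assuming that the excess pressure scales linearly with the fractional volume change through an effective chamber stiffness; the stiffness printed below is simply the value implied by the quoted numbers, not an independently reported parameter.

```python
# Figures quoted in the abstract.
fractional_volume_change = 0.0002   # 0.02 % increase in chamber volume
excess_pressure_mpa = 1.1           # inferred magmatic excess pressure, MPa

# If p_e ~ K_eff * (dV/V), the implied effective stiffness is:
k_eff_gpa = excess_pressure_mpa / fractional_volume_change / 1000.0
print(f"implied effective stiffness K_eff ~ {k_eff_gpa:.1f} GPa")   # about 5.5 GPa
```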

  3. Pre-Service Teachers in PE Involved in an Organizational Critical Incident: Emotions, Appraisal and Coping Strategies

    ERIC Educational Resources Information Center

    Vandercleyen, François; Boudreau, Pierre; Carlier, Ghislain; Delens, Cécile

    2014-01-01

    Background: Emotions play a major role in the learning of pre-service teachers. However, there is a lack of in-depth research on emotion in the context of physical education (PE), especially during the practicum. Lazarus's model and its concepts of appraisal and coping is a salient theoretical framework for understanding the emotional process.…

  4. Comparison between Brain Atrophy and Subdural Volume to Predict Chronic Subdural Hematoma: Volumetric CT Imaging Analysis

    PubMed Central

    Ju, Min-Wook; Kwon, Hyon-Jo; Choi, Seung-Won; Koh, Hyeon-Song; Youm, Jin-Young; Song, Shi-Hun

    2015-01-01

    Objective Brain atrophy and subdural hygroma are well-known factors that enlarge the subdural space and induce the formation of chronic subdural hematoma (CSDH). Thus, we identified the subdural volume that could be used to predict the rate of future CSDH after head trauma using a computed tomography (CT) volumetric analysis. Methods A single-institution case-control study was conducted involving 1,186 patients who visited our hospital after head trauma from January 1, 2010 to December 31, 2014. Fifty-one patients with delayed CSDH were identified, and 50 age- and sex-matched patients served as controls. Intracranial volume (ICV), the brain parenchyma, and the subdural space were segmented using CT image-based software. To adjust for variations in head size, volume ratios were assessed as a percentage of ICV [brain volume index (BVI), subdural volume index (SVI)]. The maximum depth of the subdural space on both sides was used to estimate the SVI. Results Before adjusting for cranium size, brain volume tended to be smaller, and subdural space volume was significantly larger, in the CSDH group (p=0.138, p=0.021, respectively). The BVI and SVI were significantly different (p=0.003, p=0.001, respectively). SVI [area under the curve (AUC), 77.3%; p=0.008] was a more reliable predictor of CSDH than BVI (AUC, 68.1%; p=0.001). Bilateral subdural depth (the sum of the subdural depth on both sides) increased linearly with SVI (p<0.0001). Conclusion Subdural space volume was significantly larger in the CSDH group. SVI was the more reliable predictor of CSDH, and bilateral subdural depth was useful for estimating SVI. PMID:27169071

  5. Comparison between Brain Atrophy and Subdural Volume to Predict Chronic Subdural Hematoma: Volumetric CT Imaging Analysis.

    PubMed

    Ju, Min-Wook; Kim, Seon-Hwan; Kwon, Hyon-Jo; Choi, Seung-Won; Koh, Hyeon-Song; Youm, Jin-Young; Song, Shi-Hun

    2015-10-01

    Brain atrophy and subdural hygroma are well-known factors that enlarge the subdural space and induce the formation of chronic subdural hematoma (CSDH). Thus, we identified the subdural volume that could be used to predict the rate of future CSDH after head trauma using a computed tomography (CT) volumetric analysis. A single-institution case-control study was conducted involving 1,186 patients who visited our hospital after head trauma from January 1, 2010 to December 31, 2014. Fifty-one patients with delayed CSDH were identified, and 50 age- and sex-matched patients served as controls. Intracranial volume (ICV), the brain parenchyma, and the subdural space were segmented using CT image-based software. To adjust for variations in head size, volume ratios were assessed as a percentage of ICV [brain volume index (BVI), subdural volume index (SVI)]. The maximum depth of the subdural space on both sides was used to estimate the SVI. Before adjusting for cranium size, brain volume tended to be smaller, and subdural space volume was significantly larger, in the CSDH group (p=0.138, p=0.021, respectively). The BVI and SVI were significantly different (p=0.003, p=0.001, respectively). SVI [area under the curve (AUC), 77.3%; p=0.008] was a more reliable predictor of CSDH than BVI (AUC, 68.1%; p=0.001). Bilateral subdural depth (the sum of the subdural depth on both sides) increased linearly with SVI (p<0.0001). Subdural space volume was significantly larger in the CSDH group. SVI was the more reliable predictor of CSDH, and bilateral subdural depth was useful for estimating SVI.
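
    The volume indices used in both versions of this study are simple ratios to intracranial volume; a minimal sketch with hypothetical segmentation volumes (all numbers are illustrative, not patient data).

```python
def volume_index(structure_volume_ml, intracranial_volume_ml):
    """Structure volume expressed as a percentage of intracranial volume (ICV)."""
    return 100.0 * structure_volume_ml / intracranial_volume_ml

icv = 1450.0       # mL, hypothetical intracranial volume
brain = 1240.0     # mL, segmented brain parenchyma
subdural = 95.0    # mL, segmented subdural space

print(f"BVI = {volume_index(brain, icv):.1f}%")      # brain volume index
print(f"SVI = {volume_index(subdural, icv):.1f}%")   # subdural volume index
```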

  6. Mapping potential carbon and timber losses from hurricanes using a decision tree and ecosystem services driver model.

    PubMed

    Delphin, S; Escobedo, F J; Abd-Elrahman, A; Cropper, W

    2013-11-15

    Information on the effect of direct drivers such as hurricanes on ecosystem services is relevant to landowners and policy makers due to predicted effects from climate change. We identified forest damage risk zones due to hurricanes and estimated the potential loss of two key ecosystem services: aboveground carbon storage and timber volume. Using land cover, plot-level forest inventory data, the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) model, and a decision tree-based framework, we determined potential damage to subtropical forests from hurricanes in the Lower Suwannee River (LS) and Pensacola Bay (PB) watersheds in Florida, US. We used biophysical factors identified in previous studies as being influential in forest damage in our decision tree and hurricane wind risk maps. Results show that 31% and 0.5% of the total aboveground carbon storage in the LS and PB, respectively, was located in high forest damage risk (HR) zones. Overall, 15% and 0.7% of the total timber net volume in the LS and PB, respectively, was in HR zones. This model can also be used for identifying timber salvage areas, developing ecosystem service provision and management scenarios, and assessing the effect of other drivers on ecosystem services and goods. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. High volume acupuncture clinic (HVAC) for chronic knee pain--audit of a possible model for delivery of acupuncture in the National Health Service.

    PubMed

    Berkovitz, Saul; Cummings, Mike; Perrin, Chris; Ito, Rieko

    2008-03-01

    Recent research has established the efficacy, effectiveness and cost effectiveness of acupuncture for some forms of chronic musculoskeletal pain. However, there are practical problems with delivery which currently prevent its large scale implementation in the National Health Service. We have developed a delivery model at our hospital, a 'high volume' acupuncture clinic (HVAC) in which patients are treated in a group setting for single conditions using standardised or semi-standardised electroacupuncture protocols by practitioners with basic training. We discuss our experiences using this model for chronic knee pain and present an outcome audit for the first 77 patients, demonstrating satisfactory initial (eight week) clinical results. Longer term (one year) data are currently being collected and the model should next be tested in primary care to confirm its feasibility.

  8. Working for the Common Good: Concepts and Models for Service-Learning in Management. AAHE's Series on Service-Learning in the Disciplines.

    ERIC Educational Resources Information Center

    Godfrey, Paul C., Ed.; Grasso, Edward T., Ed.

    The articles in this volume, 15th in a series of monographs on service learning and the academic disciplines, show how student learning can be enhanced by joining management theory with experience and management analysis with action. Service learning prepares business students to see new dimensions of relevance in their coursework, and it provides…

  9. Long-term development of hypolimnetic oxygen depletion rates in the large Lake Constance.

    PubMed

    Rhodes, Justin; Hetzenauer, Harald; Frassl, Marieke A; Rothhaupt, Karl-Otto; Rinke, Karsten

    2017-09-01

    This study investigates over 30 years of dissolved oxygen dynamics in the deep interior of Lake Constance (max. depth: 250 m). This lake supplies approximately four million people with drinking water and has undergone strong re-oligotrophication over the past decades. We calculated depth-specific annual oxygen depletion rates (ODRs) during the period of stratification and found that 50% of the observed variability in ODR was already explained by a simple separation into sediment- and volume-related oxygen consumption. Adding a linear factor for water depth further improved the model, indicating that oxygen depletion increased substantially with depth. Two other factors were found to significantly influence ODR: total phosphorus, as a proxy for the lake's trophic state, and the mean oxygen concentration in the respective depth layer. Our analysis points to the importance of nutrient reductions as effective management measures to improve and protect the oxygen status of such large and deep lakes.
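
    A sketch of the depletion-rate decomposition described above, assuming a linear model with a sediment-area term, a constant volumetric term and a linear depth term fitted by least squares. The synthetic data, units and exact predictor set are assumptions; the study's actual regression may differ.

```python
import numpy as np

# Synthetic annual oxygen depletion rates per depth layer (g O2 m^-3 d^-1).
depth = np.array([100.0, 150.0, 200.0, 230.0, 250.0])          # m
sed_area_per_vol = np.array([0.05, 0.07, 0.10, 0.16, 0.30])    # m^2 of sediment per m^3 of layer
odr = np.array([0.020, 0.026, 0.035, 0.048, 0.075])

# ODR ~ a * (sediment area / volume) + b + c * depth
X = np.column_stack([sed_area_per_vol, np.ones_like(depth), depth])
coef, *_ = np.linalg.lstsq(X, odr, rcond=None)
a, b, c = coef
print(f"sediment term a={a:.3f}, volumetric term b={b:.4f}, depth term c={c:.5f}")
```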

  10. Using "residual depths" to monitor pool depths independently of discharge

    Treesearch

    Thomas E. Lisle

    1987-01-01

    As vital components of habitat for stream fishes, pools are often monitored to follow the effects of enhancement projects and natural stream processes. Variations of water depth with discharge, however, can complicate monitoring changes in the depth and volume of pools. To subtract the effect of discharge on depth in pools, residual depths can be measured. Residual...
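
    Residual depth is simply the maximum pool depth minus the water depth over the downstream riffle crest, which removes the discharge dependence; a minimal sketch with hypothetical thalweg survey values (the function name and numbers are illustrative).

```python
def residual_depth(max_pool_depth_m, riffle_crest_depth_m):
    """Pool depth measured relative to the downstream riffle crest (discharge-independent)."""
    return max_pool_depth_m - riffle_crest_depth_m

# Hypothetical survey: maximum depth in each pool and depth over its controlling riffle crest.
pools = [(1.8, 0.4), (2.3, 0.5), (1.1, 0.3)]
for max_d, crest_d in pools:
    print(f"residual depth: {residual_depth(max_d, crest_d):.1f} m")
```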

  11. The EarthServer Geology Service: web coverage services for geosciences

    NASA Astrophysics Data System (ADS)

    Laxton, John; Sen, Marcus; Passmore, James

    2014-05-01

    The EarthServer FP7 project is implementing web coverage services using the OGC WCS and WCPS standards for a range of earth science domains: cryospheric; atmospheric; oceanographic; planetary; and geological. BGS is providing the geological service (http://earthserver.bgs.ac.uk/). Geoscience has used remotely sensed data from satellites and planes for some considerable time, but other areas of geosciences are less familiar with the use of coverage data. This is rapidly changing with the development of new sensor networks and the move from geological maps to geological spatial models. The BGS geology service is designed initially to address two coverage data use cases and three levels of data access restriction. Databases of remotely sensed data are typically very large and commonly held offline, making it time-consuming for users to assess and then download data. The service is designed to allow the spatial selection, editing and display of Landsat and aerial photographic imagery, including band selection and contrast stretching. This enables users to rapidly view data, assess its usefulness for their purposes, and then enhance and download it if it is suitable. At present, the service contains six-band Landsat 7 (Blue, Green, Red, NIR 1, NIR 2, MIR) and three-band false colour aerial photography (NIR, green, blue), totalling around 1Tb. Increasingly, 3D spatial models are being produced in place of traditional geological maps. Models make explicit the spatial information that is implicit on maps and thus are seen as a better way of delivering geosciences information to non-geoscientists. However, web delivery of models, including the provision of suitable visualisation clients, has proved more challenging than delivering maps. The EarthServer geology service is delivering 35 surfaces as coverages, comprising the modelled superficial deposits of the Glasgow area. These can be viewed using a 3D web client developed in the EarthServer project by Fraunhofer. As well as remotely sensed imagery and 3D models, the geology service is also delivering DTM coverages which can be viewed in the 3D client in conjunction with both imagery and models. The service is accessible through a web GUI which allows the imagery to be viewed against a range of background maps and DTMs, and in the 3D client; spatial selection to be carried out graphically; the results of image enhancement to be displayed; and selected data to be downloaded. The GUI also provides access to the Glasgow model in the 3D client, as well as tutorial material. In the final year of the project, it is intended to increase the volume of data to 20Tb and enhance the WCPS processing, including depth and thickness querying of 3D models. We have also investigated the use of GeoSciML, developed to describe and interchange the information on geological maps, to describe model surface coverages. EarthServer is developing a combined WCPS and xQuery query language, and we will investigate applying this to the GeoSciML-described surfaces to answer questions such as 'find all units with a predominant sand lithology within 25m of the surface'.

  12. Observing eruptions of gas-rich compressible magmas from space

    PubMed Central

    Kilbride, Brendan McCormick; Edmonds, Marie; Biggs, Juliet

    2016-01-01

    Observations of volcanoes from space are a critical component of volcano monitoring, but we lack quantitative integrated models to interpret them. The atmospheric sulfur yields of eruptions are variable and not well correlated with eruption magnitude and for many eruptions the volume of erupted material is much greater than the subsurface volume change inferred from ground displacements. Up to now, these observations have been treated independently, but they are fundamentally linked. If magmas are vapour-saturated before eruption, bubbles cause the magma to become more compressible, resulting in muted ground displacements. The bubbles contain the sulfur-bearing vapour injected into the atmosphere during eruptions. Here we present a model that allows the inferred volume change of the reservoir and the sulfur mass loading to be predicted as a function of reservoir depth and the magma's oxidation state and volatile content, which is consistent with the array of natural data. PMID:28000791
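
    The coupling described above is often summarized by the ratio of erupted volume to the geodetically inferred reservoir volume change, which grows as the magma (with its bubbles) becomes more compressible relative to the chamber; the sketch below uses that commonly cited form as an assumption, with illustrative compressibility values that are not taken from the paper.

```python
def erupted_to_reservoir_ratio(beta_magma, beta_chamber):
    """Commonly used approximation V_erupted / dV_reservoir = 1 + beta_magma / beta_chamber;
    bubblier, more compressible magma gives a larger ratio and more muted ground displacement."""
    return 1.0 + beta_magma / beta_chamber

beta_chamber = 1.0e-10   # Pa^-1, illustrative effective chamber (wall) compressibility
for beta_magma in (1.0e-10, 5.0e-10, 2.0e-9):   # bubble-poor to gas-rich magma
    rv = erupted_to_reservoir_ratio(beta_magma, beta_chamber)
    print(f"beta_magma = {beta_magma:.1e} Pa^-1 -> erupted/reservoir volume ratio ~ {rv:.0f}")
```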

  13. Estimation of grazing-induced erosion through remote-sensing technologies in the Autonomous Province of Trento, Northern Italy

    NASA Astrophysics Data System (ADS)

    Torresani, Loris; Prosdocimi, Massimo; Masin, Roberta; Penasa, Mauro; Tarolli, Paolo

    2017-04-01

    Grasslands and pasturelands cover a vast portion of the Earth's surface and are vital for biodiversity richness, environmental protection and feed resources for livestock. Overgrazing is considered one of the major causes of soil degradation worldwide, mainly in pasturelands grazed by domestic animals. Therefore, an in-depth investigation to better quantify the effects of overgrazing in terms of soil loss is needed. In this regard, this work aims to estimate the volume of eroded material caused by mismanagement of grazing areas across the whole Autonomous Province of Trento (Northern Italy). To achieve this goal, the first step dealt with the analysis of the entire provincial area by means of freely available aerial images, which allowed the identification and accurate mapping of every eroded area caused by grazing animals. The terrestrial digital photogrammetric technique known as Structure from Motion (SfM) was then applied to obtain high-resolution Digital Surface Models (DSMs) of two representative eroded areas. With the pre-event surface conditions available, DSMs of difference (DoDs) were computed to estimate the erosion volume and the average depth of erosion for both areas. The average depths obtained from the DoDs were compared with, and validated against, measurements taken in the field. A large number of depth measurements from different sites were then collected to obtain a reference value for the whole province. This value was used as the reference depth for calculating the eroded volume across the whole province. In the final stage, the Connectivity Index (CI) was adopted to analyse the existing connection between the eroded areas and the channel network. This work highlighted that SfM can be a solid technique for the low-cost and fast quantification of soil eroded due to grazing. It can also be used as a strategic instrument for improving the grazing management system at large scales, with the goal of reducing the risk of pastureland degradation.
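
    The DoD volume computation reduces to summing elevation losses over grid cells; a minimal sketch assuming two co-registered DSM arrays, a uniform cell size and a level-of-detection threshold (all array values and parameters are illustrative).

```python
import numpy as np

cell_size = 0.05   # m, ground sampling distance of the SfM-derived DSMs
dsm_pre = np.array([[10.00, 10.02], [10.01, 10.03]])   # pre-erosion surface (m)
dsm_post = np.array([[9.92, 10.01], [9.95, 10.03]])    # post-erosion surface (m)

dod = dsm_pre - dsm_post                 # DSM of difference (positive = surface lowering)
lod = 0.02                               # level-of-detection threshold (m), illustrative
erosion = np.where(dod > lod, dod, 0.0)  # keep only detectable lowering

eroded_volume = erosion.sum() * cell_size ** 2
mean_depth = erosion[erosion > 0].mean() if np.any(erosion > 0) else 0.0
print(f"eroded volume: {eroded_volume:.4f} m^3, mean erosion depth: {mean_depth:.3f} m")
```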

  14. Forest volume-to-biomass models and estimates of mass for live and standing dead trees of U.S. forests.

    Treesearch

    James E. Smith; Linda S. Heath; Jennifer C. Jenkins

    2003-01-01

    Includes methods and equations for nationally consistent estimates of tree-mass density at the stand level (Mg/ha) as predicted by growing-stock volumes reported by the USDA Forest Service for forests of the conterminous United States. Developed for use in FORCARB, a carbon budget model for U.S. forests, the equations also are useful for converting plot-, stand- and...

  15. Methods and equations for estimating aboveground volume, biomass, and carbon for trees in the U.S. forest inventory, 2010

    Treesearch

    Christopher W. Woodall; Linda S. Heath; Grant M. Domke; Michael C. Nichols

    2011-01-01

    The U.S. Forest Service, Forest Inventory and Analysis (FIA) program uses numerous models and associated coefficients to estimate aboveground volume, biomass, and carbon for live and standing dead trees for most tree species in forests of the United States. The tree attribute models are coupled with FIA's national inventory of sampled trees to produce estimates of...

  16. Simulation of hydrodynamics, water quality, and lake sturgeon habitat volumes in Lake St. Croix, Wisconsin and Minnesota, 2013

    USGS Publications Warehouse

    Smith, Erik A.; Kiesling, Richard L.; Ziegeweid, Jeffrey R.; Elliott, Sarah M.; Magdalene, Suzanne

    2018-01-05

    Lake St. Croix is a naturally impounded, riverine lake that makes up the last 40 kilometers of the St. Croix River. Substantial land-use changes during the past 150 years, including increased agriculture and urban development, have reduced Lake St. Croix water-quality and increased nutrient loads delivered to Lake St. Croix. A recent (2012–13) total maximum daily load phosphorus-reduction plan set the goal to reduce total phosphorus loads to Lake St. Croix by 20 percent by 2020 and reduce Lake St. Croix algal bloom frequencies. The U.S. Geological Survey, in cooperation with the National Park Service, developed a two-dimensional, carbon-based, laterally averaged, hydrodynamic and water-quality model, CE–QUAL–W2, that addresses the interaction between nutrient cycling, primary production, and trophic dynamics to predict responses in the distribution of water temperature, oxygen, and chlorophyll a. Distribution is evaluated in the context of habitat for lake sturgeon, including a combination of temperature and dissolved oxygen conditions termed oxy-thermal habitat.The Lake St. Croix CE–QUAL–W2 model successfully reproduced temperature and dissolved oxygen in the lake longitudinally (from upstream to downstream), vertically, and temporally over the seasons. The simulated water temperature profiles closely matched the measured water temperature profiles throughout the year, including the prediction of thermocline transition depths (often within 1 meter), the absolute temperature of the thermocline transitions (often within 1.0 degree Celsius), and profiles without a strong thermocline transition. Simulated dissolved oxygen profiles matched the trajectories of the measured dissolved oxygen concentrations at multiple depths over time, and the simulated concentrations matched the depth and slope of the measured concentrations.Additionally, trends in the measured water-quality data were captured by the model simulation, gaining some potential insights into the underlying mechanisms of critical Lake St. Croix metabolic processes. The CE–QUAL–W2 model tracked nitrate plus nitrite, total nitrogen, and total phosphorus throughout the year. Inflow nutrient contributions (loads), largely dominated by upstream St. Croix River loads, were the most important controls on Lake St. Croix water quality. Close to 60 percent of total phosphorus to the lake was from phosphorus derived from organic matter, and about 89 percent of phosphorus to Lake St. Croix was delivered by St. Croix River inflows. The Lake St. Croix CE–QUAL–W2 model offered potential mechanisms for the effect of external and internal loadings on the biotic response regarding the modeled algal community types of diatoms, green algae, and blue-green algae. The model also suggested the seasonal dominance of blue-green algae in all four pools of the lake.A sensitivity analysis was completed to test the total maximum daily load phosphorus-reduction scenario responses of total phosphorus and chlorophyll a. The modeling indicates that phosphorus reductions would result in similar Lake St. Croix reduced concentrations, although chlorophyll a concentrations did not decrease in the same proportional amounts as the total phosphorus concentrations had decreased. 
The smaller than expected reduction in algal growth rates highlighted that although inflow phosphorus loads are important, other constituents also can affect the algal response of the lake, such as changes in light penetration and the breakdown of organic matter releasing nutrients.The available habitat suitable for lake sturgeon was evaluated using the modeling results to determine the total volume of good-growth habitat, optimal growth habitat, and lethal temperature habitat. Overall, with the calibrated model, the fish habitat volume in general contained a large proportion of good-growth habitat and a sustained period of optimal growth habitat in the summer. Only brief periods of lethal oxy-thermal habitat were present in Lake St. Croix during the model simulation.

  17. The Center for In-Service Education. Final Evaluation Report. Volume II. Part 1.

    ERIC Educational Resources Information Center

    Tennessee State Dept. of Education, Nashville.

    This is an overview of the extensive in-service education inventory conducted as an integral portion of the planning contract for Models for In-Service Education supported by the Tennessee State Department of Education under Title III, Elementary and Secondary Education Act. The narrative descriptions are free of extensive statistical references…

  18. Distributive Effects of Forest Service Attempts to Maintain Community Stability

    Treesearch

    Steven E. Daniels; William F. Hyde; David N. Wear

    1991-01-01

    Community stability is an objective of USDA Forest Service timber sales. This paper examines that objective, and the success the Forest Service can have in attaining it, through its intended maintenance of a constant volume timber harvest schedule. We apply a three-factor, two-sector modified general equilibrium model with empirical evidence from the timber-based...

  19. Greater Than the Sum: Professionals in a Comprehensive Services Model. Teacher Education Monograph No. 17.

    ERIC Educational Resources Information Center

    Levin, Rebekah A., Ed.

    This book provides a picture of comprehensive children's services from a global, theoretical perspective, as well as a more practical guide to the potential roles for participating service providers and the structuring of such programs. Following an introduction, the volume is organized into 14 chapters: (1) "Moving from Cooperation to…

  20. Nonlinear ecosystem services response to groundwater availability under climate extremes

    NASA Astrophysics Data System (ADS)

    Qiu, J.; Zipper, S. C.; Motew, M.; Booth, E.; Kucharik, C. J.; Steven, L. I.

    2017-12-01

    Depletion of groundwater has been accelerating at regional to global scales. Besides serving domestic, industrial and agricultural needs, in situ groundwater is also a key control on biological, physical and chemical processes across the critical zone, all of which underpin supply of ecosystem services essential for humanity. While there is a rich history of research on groundwater effects on subsurface and surface processes, understanding interactions, nonlinearity and feedbacks between groundwater and ecosystem services remain limited, and almost absent in the ecosystem service literature. Moreover, how climate extremes may alter groundwater effects on services is underexplored. In this research, we used a process-based ecosystem model (Agro-IBIS) to quantify groundwater effects on eight ecosystem services related to food, water and biogeochemical processes in an urbanizing agricultural watershed in the Midwest, USA. We asked: (1) Which ecosystem services are more susceptible to shallow groundwater influences? (2) Do effects of groundwater on ecosystem services vary under contrasting climate conditions (i.e., dry, wet and average)? (3) Where on the landscape are groundwater effects on ecosystem services most pronounced? (4) How do groundwater effects depend on water table depth? Overall, groundwater significantly impacted all services studied, with the largest effects on food production, water quality and quantity, and flood regulation services. Climate also mediated groundwater effects with the strongest effects occurring under dry climatic conditions. There was substantial spatial heterogeneity in groundwater effects across the landscape that is driven in part by spatial variations in water table depth. Most ecosystem services responded nonlinearly to groundwater availability, with most apparent groundwater effects occurring when the water table is shallower than a critical depth of 2.5-m. Our findings provide compelling evidence that groundwater plays a vital role in sustaining ecosystem services. Our research highlights the pressing need to consider groundwater during the assessment and management of ecosystem services, and suggests that protecting groundwater resources may enhance ecosystem service resilience to future climate extremes and increased climate variability.

  1. Magma reservoir subsidence mechanics: Theoretical summary and application to Kilauea Volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Ryan, Michael P.; Blevins, James Y. K.; Okamura, Arnold T.; Koyanagi, Robert Y.

    1983-05-01

    An analytic model has been developed for the prediction of the three-dimensional deformation field generated by the withdrawal of magma from a sill-like storage compartment during an intrusion or eruption cycle. The model is based on the work of Berry and Sales (1961, 1962) and predicts the vertical displacement components over the areal plane. Model parameters are the depth of burial h, the intrusion half width a, the intrusion half length b, the thickness of the magmatic interior at the moment of melt withdrawal tm, and the planform aspect ratio ξ = a/b. The products of the model include areal deformation maps. Systematic variation of the model parameters within the context of Kilauea Volcano, Hawaii, has revealed that circular and elliptical deformation patterns result from the collapse of draining rectilinear intrusions at depth. Moreover, the geometric parameters of a storage compartment may interact in complex ways to produce similar deformation patterns. The model has been applied to Kilauea Volcano for three periods of pronounced summit subsidence: (1) 1921-1927 (bracketing the steamblast eruptive phases of 1924); (2) June 1972 to December 1972; and (3) December 1972 to May 1973. Application of the model requires the simultaneous optimization of five predicted deformation features with respect to field measurements and the derivative deformation maps: (1) the vertical displacement maxima, (2) the vertical displacement gradients over the areal plane, (3) the lateral extent of the deformation field, (4) the aspect ratio of the subsidence pattern, and (5) the strike of the major axis of the deformation field. The constrained geometries and volumes of the inferred collapsed storage cavities for each period are (1) 1921-1927: depth ≅ 3 km, a ≅ 1500 m, b ≅ 4500 m, tm ≅ 20 m, V ≅ 540×10⁶ m³; (2) June 1972 to December 1972: depth ≅ 3.3 km, a ≅ 600 m, b ≅ 2000 m, tm ≅ 1 m, V ≅ 4.8×10⁶ m³; and (3) December 1972 to May 1973: depth ≅ 2.2 km, a ≅ 500 m, b ≅ 1612 m, tm ≅ 1 m, V ≅ 3.2×10⁶ m³. For periods 2 and 3, the calculated magmatic thicknesses tm happen to be in the range (0.15-3.48 m) of measurements for sill-like bodies in deeply dissected Hawaiian shield volcanoes. The fits obtained between calculated and observed deformation patterns allow quantification of the location, overall dimensions, orientation, and volume of the discrete, still molten, interior of sill-like compartments from which magma is tapped during eruption or intrusion.
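
    The compartment volumes quoted above are consistent with treating each drained body as a rectangular sill of full width 2a, full length 2b and thickness tm; the quick consistency check below is an inference from the quoted numbers, not a statement of the authors' exact formula.

```python
# Quoted geometric parameters (half width a, half length b, thickness tm), in metres.
cases = {
    "1921-1927":           (1500.0, 4500.0, 20.0),
    "Jun 1972 - Dec 1972": (600.0, 2000.0, 1.0),
    "Dec 1972 - May 1973": (500.0, 1612.0, 1.0),
}
for period, (a, b, tm) in cases.items():
    volume = (2.0 * a) * (2.0 * b) * tm   # rectangular-sill volume
    print(f"{period}: V ~ {volume / 1e6:.1f} x 10^6 m^3")
```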

  2. Erosion and deposition on a debris-flow fan

    NASA Astrophysics Data System (ADS)

    Densmore, A. L.; Schuerch, P.; Rosser, N. J.; McArdell, B. W.

    2011-12-01

    The ability of a debris flow to entrain or deposit sediment controls the downstream evolution of flow volume, and ultimately dictates both the geomorphic impact of the flow and the potential hazard that it represents. Our understanding of the patterns of, and controls on, such flow volume changes remains extremely limited, however, partly due to a poor mechanistic grasp of the interactions between debris flows and their bed and banks. In addition, we lack a good understanding of the cumulative long-term effects of sequences of flows in a single catchment-fan system. Here we begin to address these issues by using repeated terrestrial laser scanning (TLS) to characterize the detailed surface change associated with the passage of multiple debris flows on the Illgraben fan, Switzerland. We calculate surface elevation change along a 300 m study reach, and from this derive the downfan rate of flow volume change, or lag rate; for comparison, we also derive the spatially-averaged lag rate over the entire ~2 km length of the fan. Lag rates are broadly comparable over both length scales, indicating that flow behavior does not vary significantly across the fan for most flows, but importantly we find that flow volume at the fan head is a poor predictor of volume at the fan toe. The sign and magnitude of bed elevation change scale with local flow depth; at flow depths < 2 m, erosion and deposition are approximately equally likely, but erosion becomes increasingly dominant for flow depths > 2 m. On the Illgraben fan, this depth corresponds to a basal shear stress of 3-4 kPa. Because flow depth is in part a function of channel cross-sectional topography, which varies strongly both within and between flows, this result indicates that erosion and deposition are likely to be highly dynamic. The dependence of flow volume change on both the channel topography and the flow history may thus complicate efforts to predict debris-flow inundation areas by simple flow routing. We then apply a 2d numerical model of debris-flow fan evolution to explore the key controls on debris-flow routing and topographic development over sequences of multiple flows. We find that fan topographic roughness plays an important role in both channel development and fan surface stability. We also find that, while first-order fan shape is largely insensitive to the input flow sequence, second-order variables such as the pattern of surface exposure ages and the distribution of channel characteristics hold more promise as robust recorders of past flow conditions. Further work is needed to understand the degree to which the TLS-derived (and Illgraben-specific) relationship between bed elevation change and flow depth can be applied in different settings, and to elucidate the role played by coarse debris in controlling patterns of erosion and deposition.
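
    The quoted 3-4 kPa threshold corresponds to the usual depth-slope estimate of basal shear stress; the sketch below assumes a bulk flow density and fan-channel slope typical of debris flows (both values are assumptions, not measurements reported for the Illgraben).

```python
import math

rho = 2000.0      # kg/m^3, assumed bulk density of the debris-flow mixture
g = 9.81          # m/s^2
slope_deg = 5.0   # assumed channel slope on the fan

def basal_shear_stress_kpa(flow_depth_m):
    """Depth-slope product tau = rho * g * h * sin(theta), returned in kPa."""
    return rho * g * flow_depth_m * math.sin(math.radians(slope_deg)) / 1000.0

for h in (1.0, 2.0, 3.0):
    print(f"flow depth {h:.0f} m -> basal shear stress ~ {basal_shear_stress_kpa(h):.1f} kPa")
```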

  3. Production of radionuclides in artificial meteorites irradiated isotropically with 600 MeV protons

    NASA Technical Reports Server (NTRS)

    Michel, R.; Dragovitsch, P.; Englert, P.; Herpers, U.

    1986-01-01

    The understanding of the production of cosmogenic nuclides in small meteorites (R < 40 cm) is still not satisfactory. The existing models for the calculation of depth-dependent production rates do not distinguish between the different types of nucleons reacting in a meteorite. They rather use general depth-dependent particle fluxes to which cross sections have to be adjusted to fit the measured radionuclide concentrations. Some of these models cannot even be extended to zero meteorite size without logical contradictions. Therefore, a series of three thick-target irradiations was started at the 600 MeV proton beam of the CERN isochronous cyclotron in order to study the interactions of small stony meteorites with galactic protons. The homogeneous 4 pi irradiation technique used provides a realistic meteorite model which allows a direct comparison of the measured depth profiles with those in real meteorites. Moreover, by the simultaneous measurement of thin-target production cross sections one can differentiate between the contributions of primary and secondary nucleons over the entire volume of the artificial meteorite.

  4. Hospice Value-Based Purchasing Program: A Model Design.

    PubMed

    Nowak, Bryan P

    2016-12-01

    With the implementation of the Affordable Care Act, the U.S. government committed to a transition in payment policy for health care services linking reimbursement to improved health outcomes rather than the volume of services provided. To accomplish this goal, the Department of Health and Human Services is designing and implementing new payment models intended to improve the quality of health care while reducing its cost. Collectively, these novel payment models and programs have been characterized under the moniker of value-based purchasing (VBP), and although many of these models retain a fundamental fee-for-service (FFS) structure, they are seen as essential tools in the evolution away from volume-based health care financing toward a health system that provides "better care, smarter spending, and healthier people." In 2014, approximately 20% of Medicare provider FFS payments were linked to a VBP program. The Department of Health and Human Services has committed to a four-year plan to link 90% of Medicare provider FFS payments to value-based purchasing by 2018. To achieve this goal, all items and services currently reimbursed under Medicare FFS programs will need to be evaluated in the context of VBP. To this end, the Medicare Hospice benefit appears to be appropriate for inclusion in a model of VBP. This policy analysis proposes an adaptable model for a VBP program for the Medicare Hospice benefit linking payment to quality and efficiency in a manner consistent with statutory requirements established in the Affordable Care Act. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  5. The Uniformed Services Employment and Reemployment Rights Act (USERRA) Guide. Volume II.

    DTIC Science & Technology

    1998-06-01

    [Indexed excerpt only; no coherent abstract was captured. The recoverable fragments reference companion legal-assistance guides (JA 274, Uniformed Services Former Spouses' Protection Act - Outline and References; JA 275, Model Tax Assistance Program; JA 276, Preventive ...), the sense-of-Congress statement that the Federal Government should be a model employer in carrying out the provisions of the Act, and conforming amendments to section 8351(b) of title 5, United States Code.]

  6. Real-time out-of-plane artifact subtraction tomosynthesis imaging using prior CT for scanning beam digital x-ray system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Meng, E-mail: mengwu@stanford.edu; Fahrig, Rebecca

    2014-11-01

    Purpose: The scanning beam digital x-ray system (SBDX) is an inverse geometry fluoroscopic system with high dose efficiency and the ability to perform continuous real-time tomosynthesis in multiple planes. This system could be used for image guidance during lung nodule biopsy. However, the reconstructed images suffer from strong out-of-plane artifact due to the small tomographic angle of the system. Methods: The authors propose an out-of-plane artifact subtraction tomosynthesis (OPAST) algorithm that utilizes a prior CT volume to augment the run-time image processing. A blur-and-add (BAA) analytical model, derived from the project-to-backproject physical model, permits the generation of tomosynthesis images that are a good approximation to the shift-and-add (SAA) reconstructed image. A computationally practical algorithm is proposed to simulate images and out-of-plane artifacts from patient-specific prior CT volumes using the BAA model. A 3D image registration algorithm to align the simulated and reconstructed images is described. The accuracy of the BAA analytical model and the OPAST algorithm was evaluated using three lung cancer patients' CT data. The OPAST and image registration algorithms were also tested with added nonrigid respiratory motions. Results: Image similarity measurements, including the correlation coefficient, mean squared error, and structural similarity index, indicated that the BAA model is very accurate in simulating the SAA images from the prior CT for the SBDX system. The shift-variant effect of the BAA model can be ignored when the shifts between SBDX images and CT volumes are within ±10 mm in the x and y directions. The nodule visibility and depth resolution are improved by subtracting simulated artifacts from the reconstructions. The image registration and OPAST are robust in the presence of added respiratory motions. The dominant artifacts in the subtraction images are caused by the mismatches between the real object and the prior CT volume. Conclusions: The proposed prior CT-augmented OPAST reconstruction algorithm improves lung nodule visibility and depth resolution for the SBDX system.

  7. The Seismic Velocity In Gas-charged Magma

    NASA Astrophysics Data System (ADS)

    Sturton, S.; Neuberg, J. W.

    2001-12-01

    Long-period and hybrid events, seen at the Soufrière Hills Volcano, Montserrat, show dominant low-frequency content, suggesting that the seismic wavefield is formed by interface waves at the boundary between a fluid and a solid medium. This wavefield will depend on the impedance contrast between the two media and therefore on the difference in seismic velocity. For a gas-charged magma, increasing pressure with depth reduces the volume of gas exsolved, increasing the seismic velocity with depth in the conduit. The seismic radiation pattern along the conduit can then be modelled. Where single events merge into tremor, gliding lines can sometimes be seen in the spectra, indicating either changes in the seismic parameters with time or varying triggering rates of single events. The differential equation describing the time dependence of bubble growth by diffusion is solved numerically for a stationary magma column undergoing a decompression event. The volume of gas is depth dependent and increases with time as the bubbles grow and expand. It is used to calculate the depth and time dependence of the density, pressure and seismic velocity. The effects of different viscosities, associated with different magma types, and of the concentration of water in the melt on the rate of bubble growth are explored. Crystal growth, which increases the concentration of water in the melt, affects the amount of gas that can be exsolved.
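
    The depth dependence described above can be illustrated with a simple equilibrium calculation: dissolved water follows a square-root solubility law, the exsolved gas volume fraction shrinks with pressure, and the mixture sound speed follows Wood's relation. All parameter values below (solubility constant, total water content, densities, phase sound speeds, temperature) are generic illustrative choices, not the paper's time-dependent bubble-growth model.

```python
import numpy as np

# Illustrative parameters for a static, equilibrium-degassed magma column.
s = 4.1e-6         # solubility constant in a Henry's-law-type relation (Pa^-0.5)
n_total = 0.05     # total water mass fraction
rho_melt = 2300.0  # kg/m^3, melt density
c_melt = 2000.0    # m/s, melt sound speed
c_gas = 700.0      # m/s, gas sound speed (illustrative)
R, T, M = 8.314, 1100.0, 0.018   # gas constant, temperature (K), molar mass of H2O (kg/mol)

for depth in (500.0, 1000.0, 2000.0):           # metres below the conduit top
    p = rho_melt * 9.81 * depth                 # magmastatic pressure
    n_gas = max(n_total - s * np.sqrt(p), 0.0)  # exsolved water mass fraction
    rho_gas = p * M / (R * T)                   # ideal-gas density
    # Convert the exsolved mass fraction to a gas volume fraction.
    phi = (n_gas / rho_gas) / (n_gas / rho_gas + (1.0 - n_gas) / rho_melt)
    rho_mix = phi * rho_gas + (1.0 - phi) * rho_melt
    # Wood's relation: mixture compressibility is the volume-weighted sum of phase terms.
    compressibility = phi / (rho_gas * c_gas**2) + (1.0 - phi) / (rho_melt * c_melt**2)
    c_mix = 1.0 / np.sqrt(rho_mix * compressibility)
    print(f"depth {depth:6.0f} m: gas volume fraction {phi:.2f}, mixture sound speed {c_mix:4.0f} m/s")
```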

  8. Volume conductor model of transcutaneous electrical stimulation with kilohertz signals

    PubMed Central

    Medina, Leonel E.; Grill, Warren M.

    2014-01-01

    Objective Incorporating high-frequency components in transcutaneous electrical stimulation (TES) waveforms may make it possible to stimulate deeper nerve fibers since the impedance of tissue declines with increasing frequency. However, the mechanisms of high-frequency TES remain largely unexplored. We investigated the properties of TES with frequencies beyond those typically used in neural stimulation. Approach We implemented a multilayer volume conductor model including dispersion and capacitive effects, coupled to a cable model of a nerve fiber. We simulated voltage- and current-controlled transcutaneous stimulation, and quantified the effects of frequency on the distribution of potentials and fiber excitation. We also quantified the effects of a novel transdermal amplitude modulated signal (TAMS) consisting of a non-zero offset sinusoidal carrier modulated by a square-pulse train. Main results The model revealed that high-frequency signals generated larger potentials at depth than did low frequencies, but this did not translate into lower stimulation thresholds. Both TAMS and conventional rectangular pulses activated more superficial fibers in addition to the deeper, target fibers, and at no frequency did we observe an inversion of the strength-distance relationship. Current regulated stimulation was more strongly influenced by fiber depth, whereas voltage regulated stimulation was more strongly influenced by skin thickness. Finally, our model reproduced the threshold-frequency relationship of experimentally measured motor thresholds. Significance The model may be used for prediction of motor thresholds in TES, and contributes to the understanding of high-frequency TES. PMID:25380254

  9. Volume conductor model of transcutaneous electrical stimulation with kilohertz signals

    NASA Astrophysics Data System (ADS)

    Medina, Leonel E.; Grill, Warren M.

    2014-12-01

    Objective. Incorporating high-frequency components in transcutaneous electrical stimulation (TES) waveforms may make it possible to stimulate deeper nerve fibers since the impedance of tissue declines with increasing frequency. However, the mechanisms of high-frequency TES remain largely unexplored. We investigated the properties of TES with frequencies beyond those typically used in neural stimulation. Approach. We implemented a multilayer volume conductor model including dispersion and capacitive effects, coupled to a cable model of a nerve fiber. We simulated voltage- and current-controlled transcutaneous stimulation, and quantified the effects of frequency on the distribution of potentials and fiber excitation. We also quantified the effects of a novel transdermal amplitude modulated signal (TAMS) consisting of a non-zero offset sinusoidal carrier modulated by a square-pulse train. Main results. The model revealed that high-frequency signals generated larger potentials at depth than did low frequencies, but this did not translate into lower stimulation thresholds. Both TAMS and conventional rectangular pulses activated more superficial fibers in addition to the deeper, target fibers, and at no frequency did we observe an inversion of the strength-distance relationship. Current regulated stimulation was more strongly influenced by fiber depth, whereas voltage regulated stimulation was more strongly influenced by skin thickness. Finally, our model reproduced the threshold-frequency relationship of experimentally measured motor thresholds. Significance. The model may be used for prediction of motor thresholds in TES, and contributes to the understanding of high-frequency TES.

  10. Volcanic ash deposition, eelgrass beds, and inshore habitat loss from the 1920s to the 1990s at Chignik, Alaska

    NASA Astrophysics Data System (ADS)

    Zimmermann, Mark; Ruggerone, Gregory T.; Freymueller, Jeffrey T.; Kinsman, Nicole; Ward, David H.; Hogrefe, Kyle R.

    2018-03-01

    We quantified the shallowing of the seafloor in five of six bays examined in the Chignik region of the Alaska Peninsula, confirming National Ocean Service observations that 1990s hydrographic surveys were shallower than previous surveys from the 1920s. Castle Bay, Chignik Lagoon, Hook Bay, Kujulik Bay and Mud Bay lost volume as calculated from Mean Lower Low Water (Chart Datum) to the deepest depths and four of these sites lost volume from Mean High Water to the deepest depths. Calculations relative to each datum were made because tidal datum records exhibited an increase in tidal range in this region from the 1920s to the 1990s. Our analysis showed that Mud Bay is quickly disappearing while Chignik Lagoon is being reduced to narrow channels. Anchorage Bay was the only site that increased in depth over time, perhaps due to erosion. Volcanoes dominate the landscape of the Chignik area. They have blanketed the region in deep ash deposits before the time frame of this study, and some have had smaller ash-producing eruptions during the time frame of this study. Remobilization of land-deposited ash and redeposition in marine areas - in some locations facilitated by extensive eelgrass (Zostera marina) beds (covering 54% of Chignik Lagoon and 68% of Mud Bay in 2010) - is the most likely cause of shallowing in the marine environment. Loss of shallow water marine habitat may alter future abundance and distribution of several fish, invertebrate and avian species.

  11. Mechanical Strength of the Proximal Femur After Arthroscopic Osteochondroplasty for Femoroacetabular Impingement: Finite Element Analysis and 3-Dimensional Image Analysis.

    PubMed

    Oba, Masatoshi; Kobayashi, Naomi; Inaba, Yutaka; Choe, Hyonmin; Ike, Hiroyuki; Kubota, So; Saito, Tomoyuki

    2018-06-21

    To examine the influence of femoral neck resection on the mechanical strength of the proximal femur in actual surgery. Eighteen subjects who received arthroscopic cam resection for cam-type femoroacetabular impingement (FAI) were included. Finite element analyses (FEAs) were performed to calculate changes in simulated fracture load between pre- and postoperative femur models. The finite element femur models were constructed from computed tomographic images; thus, the models represented the shape of the original femur, including the bone resection site. Three-dimensional image analysis of the bone resection site was performed to identify morphometric factors that affect strength in the postoperative femur model. Four oblique sagittal planes running perpendicular to the femoral neck axis were used as reference planes to measure the bone resection site. At the transcervical reference plane, both the bone resection depth and the cross-sectional area at the resection site correlated strongly with postoperative changes in the simulated fracture load (R² = 0.6, P = .0001). However, only resection depth was significantly correlated with the simulated fracture load at the reference plane for the head-neck junction. The resected bone volume did not correlate with the postoperative changes in the simulated fracture load. The results of our FEA suggest that the bone resection depth measured at the head-neck junction and transcervical reference plane correlates with fracture risk after osteochondroplasty. By contrast, bone resection at more proximal areas did not have a significant effect on the postoperative femur model strength in our FEA. The total volume of resected bone was also not significantly correlated with postoperative changes in femur model strength. This biomechanical study using FEA suggests that there is a risk of femoral neck fracture after arthroscopic cam resection, particularly when the resected lesion is located distally. Copyright © 2018 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  12. On the Resolvability of Steam Assisted Gravity Drainage Reservoirs Using Time-Lapse Gravity Gradiometry

    NASA Astrophysics Data System (ADS)

    Elliott, E. Judith; Braun, Alexander

    2017-11-01

    Unconventional heavy oil resource plays are important contributors to oil and gas production, and controversial for the environmental hazards they pose. Monitoring those reservoirs before, during, and after operations would assist both the optimization of economic benefits and the mitigation of potential environmental hazards. This study investigates how gravity gradiometry using superconducting gravimeters could resolve depletion areas in steam assisted gravity drainage (SAGD) reservoirs. This is achieved through modelling of a SAGD reservoir at 1.25 and 5 years of operation. Specifically, the density change structure identified from geological, petrological, and seismic observations is forward modelled for gravity and gradients. Three main parameters have an impact on the resolvability of bitumen depletion volumes and are varied through a suitable parameter space: well pair separation, depth to the well pairs, and survey grid sampling. The results include a resolvability matrix, which identifies reservoirs that could benefit from time-lapse gravity gradiometry monitoring. After 1.25 years of operation, during the rising phase, the resolvable maximum reservoir depth ranges between the surface and 230 m, considering a well pair separation between 80 and 200 m. After 5 years of production, during the spreading phase, the resolvability of depletion volumes around single well pairs is greatly compromised because the depletion volume lies closer to the surface and contributes a larger portion of the gravity signal. The modelled resolvability matrices were derived from visual inspection and spectral analysis of the gravity gradient signatures and can be used to assess the applicability of time-lapse gradiometry to monitor reservoir density changes.
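
    As a rough illustration of the forward-modelling step described above, the sketch below approximates a SAGD-style depletion volume as a grid of point masses with a negative density change and evaluates the vertical gravity anomaly and its vertical gradient over a surface survey grid. All geometry, density-change and survey values are illustrative assumptions, not the parameters used in the study.

```python
# Hedged sketch (not the authors' code): forward model of the vertical gravity
# anomaly g_z and its vertical gradient G_zz over a SAGD-style depletion zone,
# approximating the density-change volume as a grid of point masses. All
# geometry values below (depth, size, density change) are illustrative only.
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_point_masses(obs_xyz, src_xyz, dm):
    """Vertical gravity (m/s^2) at observation points from point masses dm (kg)."""
    d = src_xyz[None, :, :] - obs_xyz[:, None, :]        # vectors obs -> source
    r = np.linalg.norm(d, axis=2)
    return G * np.sum(dm * d[:, :, 2] / r**3, axis=1)    # downward component (z positive down)

# Illustrative depletion volume: 500 m x 100 m x 30 m slab at 200 m depth,
# density change of -100 kg/m^3 (bitumen partly replaced by steam/gas).
nx, ny, nz = 50, 10, 3
xs = np.linspace(-250, 250, nx)
ys = np.linspace(-50, 50, ny)
zs = np.linspace(200, 230, nz)
X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
src = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
cell_vol = (500 / nx) * (100 / ny) * (30 / nz)
dm = -100.0 * cell_vol                                    # kg per cell

# Survey grid at the surface, 50 m sampling.
gx, gy = np.meshgrid(np.linspace(-500, 500, 21), np.linspace(-500, 500, 21))
obs0 = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])
obs1 = obs0.copy()
obs1[:, 2] = -1.0                                         # 1 m above the surface

gz0 = gz_point_masses(obs0, src, dm)
gz1 = gz_point_masses(obs1, src, dm)
gzz = (gz0 - gz1) / 1.0                                   # vertical gradient by finite difference

print("peak |g_z|  [microGal]:", np.max(np.abs(gz0)) / 1e-8)
print("peak |G_zz| [Eotvos]  :", np.max(np.abs(gzz)) / 1e-9)
```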

  13. Evidence for the contemporary magmatic system beneath Long Valley Caldera from local earthquake tomography and receiver function analysis

    USGS Publications Warehouse

    Seccia, D.; Chiarabba, C.; De Gori, P.; Bianchi, I.; Hill, D.P.

    2011-01-01

    We present a new P wave and S wave velocity model for the upper crust beneath Long Valley Caldera obtained using local earthquake tomography and receiver function analysis. We computed the tomographic model using both a graded inversion scheme and a traditional approach. We complement the tomographic Vp model with a teleseismic receiver function model based on data from broadband seismic stations (MLAC and MKV) located on the SE and SW margins of the resurgent dome inside the caldera. The inversions resolve (1) a shallow, high-velocity P wave anomaly associated with the structural uplift of a resurgent dome; (2) an elongated, WNW striking low-velocity anomaly (8%–10% reduction in Vp) at a depth of 6 km (4 km below mean sea level) beneath the southern section of the resurgent dome; and (3) a broad, low-velocity volume (–5% reduction in Vp and as much as 40% reduction in Vs) in the depth interval 8–14 km (6–12 km below mean sea level) beneath the central section of the caldera. The two low-velocity volumes partially overlap the geodetically inferred inflation sources that drove uplift of the resurgent dome associated with caldera unrest between 1980 and 2000, and they likely reflect the ascent path for magma or magmatic fluids into the upper crust beneath the caldera.

  14. STRATEGIC PROVIDER BEHAVIOR UNDER GLOBAL BUDGET PAYMENT WITH PRICE ADJUSTMENT IN TAIWAN†,‡

    PubMed Central

    CHEN, BRADLEY; FAN, VICTORIA Y.

    2017-01-01

    Global budget payment is one of the most effective strategies for cost containment, but its impacts on provider behavior have not been explored in detail. This study examines the theoretical and empirical role of global budget payment on provider behavior. The study proposes that global budget payment with price adjustment is a form of common-pool resources. A two-product game theoretic model is derived, and simulations demonstrate that hospitals are expected to expand service volumes, with an emphasis on products with higher price–marginal cost ratios. Next, the study examines the early effects of Taiwan’s global budget payment system using a difference-in-difference strategy and finds that Taiwanese hospitals exhibited such behavior, where the pursuit of individual interests led to an increase in treatment intensities. Furthermore, hospitals significantly increased inpatient service volume for regional hospitals and medical centers. In contrast, local hospitals, particularly for those without teaching status designation, faced a negative impact on service volume, as larger hospitals were better positioned to induce demand and pulled volume away from their smaller counterparts through more profitable services and products such as radiology and pharmaceuticals. PMID:25132007

  15. Strategic Provider Behavior Under Global Budget Payment with Price Adjustment in Taiwan.

    PubMed

    Chen, Bradley; Fan, Victoria Y

    2015-11-01

    Global budget payment is one of the most effective strategies for cost containment, but its impacts on provider behavior have not been explored in detail. This study examines the theoretical and empirical role of global budget payment on provider behavior. The study proposes that global budget payment with price adjustment is a form of common-pool resources. A two-product game theoretic model is derived, and simulations demonstrate that hospitals are expected to expand service volumes, with an emphasis on products with higher price-marginal cost ratios. Next, the study examines the early effects of Taiwan's global budget payment system using a difference-in-difference strategy and finds that Taiwanese hospitals exhibited such behavior, where the pursuit of individual interests led to an increase in treatment intensities. Furthermore, hospitals significantly increased inpatient service volume for regional hospitals and medical centers. In contrast, local hospitals, particularly for those without teaching status designation, faced a negative impact on service volume, as larger hospitals were better positioned to induce demand and pulled volume away from their smaller counterparts through more profitable services and products such as radiology and pharmaceuticals. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Forecasted Flood Depth Grids Providing Early Situational Awareness to FEMA during the 2017 Atlantic Hurricane Season

    NASA Astrophysics Data System (ADS)

    Jones, M.; Longenecker, H. E., III

    2017-12-01

    The 2017 hurricane season brought the unprecedented landfall of three Category 4 hurricanes (Harvey, Irma and Maria). FEMA is responsible for coordinating the federal response and recovery efforts for large disasters such as these. FEMA depends on timely and accurate depth grids to estimate hazard exposure, model damage assessments, plan flight paths for imagery acquisition, and prioritize response efforts. In order to produce riverine or coastal depth grids based on observed flooding, the methodology requires peak crest water levels at stream gauges, tide gauges, high water marks, and best-available elevation data. Because peak crest data are not available until the apex of a flooding event and high water marks may take up to several weeks for field teams to collect for a large-scale flooding event, final observed depth grids are not available to FEMA until several days after a flood has begun to subside. Within the last decade NOAA's National Weather Service (NWS) has implemented the Advanced Hydrologic Prediction Service (AHPS), a web-based suite of accurate forecast products that provide hydrograph forecasts at over 3,500 stream gauge locations across the United States. These forecasts have been newly implemented into an automated depth grid script tool, using predicted instead of observed water levels, allowing FEMA access to flood hazard information up to 3 days prior to a flooding event. Water depths are calculated from the AHPS predicted flood stages and are interpolated at 100m spacing along NHD hydrolines within the basin of interest. A water surface elevation raster is generated from these water depths using an Inverse Distance Weighted interpolation. Then, elevation (USGS NED 30m) is subtracted from the water surface elevation raster so that the remaining values represent the depth of predicted flooding above the ground surface. This automated process requires minimal user input and produced forecasted depth grids that were comparable to post-event observed depth grids and remote sensing-derived flood extents for the 2017 hurricane season. These newly available forecast products were used for pre-event response planning and early estimated hazard exposure counts, allowing FEMA to plan for and stand up operations several days sooner than previously possible.
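
    The sketch below illustrates, under stated assumptions, the core of the depth-grid workflow described above: water-surface elevation points sampled along stream centerlines (forecast stage plus gauge datum) are interpolated to a raster by inverse distance weighting, and the ground surface is subtracted to leave forecast depths. The point spacing, grid and elevations are synthetic placeholders, not AHPS or NED data.

```python
# A minimal sketch of the depth-grid workflow, assuming the inputs are already
# in hand: water-surface elevation (WSE) points along a stream centerline and a
# ground-elevation grid. Uses a simple inverse distance weighted (IDW)
# interpolation; all values below are illustrative.
import numpy as np

def idw_surface(pt_xy, pt_z, grid_x, grid_y, power=2.0, eps=1e-6):
    """Interpolate a water-surface elevation raster from scattered WSE points."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    d = np.hypot(gx[..., None] - pt_xy[:, 0], gy[..., None] - pt_xy[:, 1])
    w = 1.0 / (d + eps) ** power
    return (w * pt_z).sum(axis=-1) / w.sum(axis=-1)

# Illustrative inputs: 50 points along a stream reach with forecast WSE (m),
# and a 200 x 200 ground-elevation grid (m) covering the same area.
rng = np.random.default_rng(7)
stream_xy = np.column_stack([np.linspace(0, 5000, 50),
                             1000 + 300 * np.sin(np.linspace(0, 3, 50))])
stream_wse = np.linspace(12.0, 10.0, 50)              # water surface sloping downstream
grid_x = np.linspace(0, 5000, 200)
grid_y = np.linspace(0, 2000, 200)
ground = 9.0 + rng.normal(0, 0.5, size=(200, 200))    # placeholder ground-elevation grid

wse_raster = idw_surface(stream_xy, stream_wse, grid_x, grid_y)
depth = np.clip(wse_raster - ground, 0.0, None)       # negative depths -> dry cells

print("max forecast depth (m):", round(float(depth.max()), 2))
print("flooded fraction      :", round(float((depth > 0).mean()), 2))
```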

  17. A new petrological and geophysical investigation of the present-day plumbing system of Mount Vesuvius

    NASA Astrophysics Data System (ADS)

    Pommier, A.; Tarits, P.; Hautot, S.; Pichavant, M.; Scaillet, B.; Gaillard, F.

    2010-07-01

    A model of the electrical resistivity of Mt. Vesuvius was developed to investigate the present structure of the volcanic edifice. The model is based on electrical conductivity measurements in the laboratory, on geophysical information, in particular magnetotelluric (MT) data, and on petrological and geochemical constraints. Both 1-D and 3-D simulations explored the effect of depth, volume and resistivity of either one or two reservoirs in the structure. For each configuration tested, modeled MT transfer functions were compared to transfer functions from field magnetotelluric studies. The field electrical data are reproduced with a shallow and very conductive layer (~0.5 km depth, 1.2 km thick, resistivity of 5 ohm·m) that most likely corresponds to a saline brine present beneath the volcano. Our results are also compatible with the presence of cooling magma batches at shallow depths (<3-4 km depth). The presence of a deeper body at ~8 km depth, as suggested by seismic studies, is consistent with the observed field transfer functions if such a body has an electrical resistivity > ~100 ohm·m. According to a petro-physical conductivity model, such a resistivity value is in agreement either with a low-temperature, crystal-rich magma chamber or with a small quantity of hotter magma interconnected in the resistive surrounding carbonates. However, the low quality of MT field data at long periods prevents strong constraints from being placed on a potential deep magma reservoir. A comparison with seismic velocity values tends to support the second hypothesis. Our findings would be consistent with a deep structure (8-10 km depth) made of a tephriphonolitic magma at 1000°C, containing 3.5 wt% H2O and 30 vol.% crystals, interconnected in carbonates in proportions of ~45% melt and ~55% carbonates.

  18. Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy

    USGS Publications Warehouse

    Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.

    1998-01-01

    We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.

  19. Why joints are more abundant than faults. A conceptual model to estimate their ratio in layered carbonate rocks

    NASA Astrophysics Data System (ADS)

    Caputo, Riccardo

    2010-09-01

    It is a commonplace field observation that extension fractures are more abundant than shear fractures. The questions of how much more abundant, and why, are posed in this paper and qualitative estimates of their ratio within a rock volume are made on the basis of field observations and mechanical considerations. A conceptual model is also proposed to explain the common range of ratios between extension and shear fractures, here called the j/f ratio. The model considers three major genetic stress components originated from overburden, pore-fluid pressure and tectonics and assumes that some of the remote genetic stress components vary with time (i.e. stress-rates are included). Other important assumptions of the numerical model are that: i) the strength of the sub-volumes is randomly attributed following a Weibull probabilistic distribution, ii) all fractures heal after a given time, thus simulating the cementation process, and therefore iii) both extensional jointing and shear fracturing could be recurrent events within the same sub-volume. As a direct consequence of these assumptions, the stress tensor at any point varies continuously in time and these variations are caused by both remote stresses and local stress drops associated with in-situ and neighbouring fracturing events. The conceptual model is implemented in a computer program to simulate layered carbonate rock bodies undergoing brittle deformation. The numerical results are obtained by varying the principal parameters, like depth (viz. confining pressure), tensile strength, pore-fluid pressure and shape of the Weibull distribution function, in a wide range of values, therefore simulating a broad spectrum of possible mechanical and lithological conditions. The quantitative estimates of the j/f ratio confirm the general predominance of extensional failure events during brittle deformation in shallow crustal rocks and provide useful insights for better understanding the role played by the different parameters. For example, as a general trend it is observed that the j/f ratio is inversely proportional to depth (viz. confining pressure) and directly proportional to pore-fluid pressure, while the stronger is the rock, the wider is the range of depths showing a finite value of the j/f ratio and in general the deeper are the conditions where extension fractures can form. Moreover, the wider is the strength variability of rocks (i.e. the lower is the m parameter of the Weibull probabilistic distribution function), the wider is the depth range where both fractures can form providing a finite value of the j/f ratio. Natural case studies from different geological and tectonic settings are also used to test the conceptual model and the numerical results, showing a good agreement between measured and predicted j/f ratios.
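
    The following is a deliberately crude Monte Carlo illustration of the j/f idea, not a reproduction of the paper's model: tensile strengths are drawn from a Weibull distribution, and each sub-volume is classified as a joint or a shear fracture using the classical Griffith/Secor condition that extension failure requires a differential stress smaller than four times the tensile strength. Depth dependence through the effective overburden is the only stress ingredient retained; all parameter values are illustrative.

```python
# Highly simplified Monte Carlo illustration of a joint/fault (j/f) ratio,
# inspired by (but much simpler than) the conceptual model in the abstract.
# Assumptions (all illustrative): tensile strength T is Weibull-distributed,
# sigma_1 is the effective overburden, pore pressure is a fixed fraction of the
# overburden, and the horizontal stress is reduced until failure. A sub-volume
# is counted as a joint if the Griffith/Secor condition holds at failure
# (sigma_1 - sigma_3 < 4T, i.e. effective sigma_1 < 3T), otherwise as a fault.
import numpy as np

rng = np.random.default_rng(42)

rho_rock, g = 2600.0, 9.81                       # kg/m^3, m/s^2
m_weibull = 5.0                                  # Weibull shape (strength scatter)
T_char = 10e6                                    # characteristic tensile strength, Pa
n_subvolumes = 100_000

def jf_ratio(depth_m, lambda_fluid=0.4):
    """j/f ratio at a given depth; lambda_fluid = Pf / (rho_rock*g*z)."""
    sigma_v = rho_rock * g * depth_m             # total overburden
    p_fluid = lambda_fluid * sigma_v             # pore-fluid pressure
    sigma_v_eff = sigma_v - p_fluid              # effective sigma_1
    T = T_char * rng.weibull(m_weibull, n_subvolumes)
    joints = np.count_nonzero(sigma_v_eff < 3.0 * T)
    faults = n_subvolumes - joints
    return joints / max(faults, 1)

for z in (500, 1000, 2000, 4000):
    print(f"depth {z:5d} m  j/f ~ {jf_ratio(z):8.2f}")
# The j/f ratio drops with depth, consistent with the general trend noted above.
```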

  20. Service-oriented infrastructure for scientific data mashups

    NASA Astrophysics Data System (ADS)

    Baru, C.; Krishnan, S.; Lin, K.; Moreland, J. L.; Nadeau, D. R.

    2009-12-01

    An important challenge in informatics is the development of concepts and corresponding architecture and tools to assist scientists with their data integration tasks. A typical Earth Science data integration request may be expressed, for example, as “For a given region (i.e. lat/long extent, plus depth), return a 3D structural model with accompanying physical parameters of density, seismic velocities, geochemistry, and geologic ages, using a cell size of 10km.” Such requests create “mashups” of scientific data. Currently, such integration is hand-crafted and depends heavily upon a scientist’s intimate knowledge of how to process, interpret, and integrate data from individual sources. In most cases, the ultimate “integration” is performed by overlaying output images from individual processing steps using image manipulation software such as, say, Adobe Photoshop—leading to “Photoshop science”, where it is neither easy to repeat the integration steps nor to share the data mashup. As a result, scientists share only the final images and not the mashup itself. A more capable information infrastructure is needed to support the authoring and sharing of scientific data mashups. The infrastructure must include services for data discovery, access, and transformation and should be able to create mashups that are interactive, allowing users to probe and manipulate the data and follow its provenance. We present an architectural framework based on a service-oriented architecture for scientific data mashups in a distributed environment. The framework includes services for Data Access, Data Modeling, and Data Interaction. The Data Access services leverage capabilities for discovery and access to distributed data resources provided by efforts such as GEON and the EarthScope Data Portal, and services for federated metadata catalogs under development by projects like the Geosciences Information Network (GIN). The Data Modeling services provide 2D, 3D, and 4D modeling services based on standards such as WFS, WMS, WCS, and GeoSciML that allow integration of disparate data in a distributed, Web-based environment. Along these lines, we introduce the notion of a Web Volume Service (WVS) for modeling and manipulating 3D data. The Data Interaction Services provide services for rich interactions with the integrated 3D data. To provide efficient interactions with large-scale data in a distributed environment the architecture must include capabilities for caching and reuse of data, use of multi-level indexing, and the ability to orchestrate and coordinate execution of data processing and transformation routines as part of the data access and integration steps. The data mashup infrastructure is based on a service-oriented architecture. A range of alternatives are available for implementing these mashup services in a scalable fashion, using the cloud computing paradigm. We will describe the tradeoffs of each approach and provide an evaluation of which options are best suited to which types of services. We will describe security, privacy, performance, and price/performance issues and considerations in implementing services on dedicated servers versus private as well as public clouds, including systems such as Amazon Web Services.

  1. Open-source Software for Demand Forecasting of Clinical Laboratory Test Volumes Using Time-series Analysis.

    PubMed

    Mohammed, Emad A; Naugler, Christopher

    2017-01-01

    Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. Fair understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, accurately predicted volumes compared to the realized test volumes can form a precise way to evaluate utilization management initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. This tool will allow anyone with historic test volume data to model future demand.
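
    A minimal sketch of this kind of tool is given below, assuming a monthly test-volume series is available: the three model types named above (Holt-Winters additive, Holt-Winters multiplicative and simple linear regression) are fitted with statsmodels and ranked by holdout error. The series and the ranking metric are placeholders chosen for illustration, not the published tool's implementation.

```python
# Illustrative sketch (not the published tool): fit the three model types named
# in the abstract to a monthly test-volume series and rank them by holdout
# error. The series below is a synthetic placeholder; a laboratory would load
# its own historic monthly volumes instead.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly test volumes: trend + seasonality + noise.
rng = np.random.default_rng(3)
idx = pd.date_range("2015-01-01", periods=72, freq="MS")
volumes = pd.Series(5000 + 30 * np.arange(72)
                    + 400 * np.sin(2 * np.pi * np.arange(72) / 12)
                    + rng.normal(0, 100, 72), index=idx)

train, test = volumes.iloc[:-12], volumes.iloc[-12:]

def mape(actual, forecast):
    """Mean absolute percentage error of a 12-month holdout forecast."""
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

results = {}
for name, seasonal in [("Holt-Winters additive", "add"),
                       ("Holt-Winters multiplicative", "mul")]:
    fit = ExponentialSmoothing(train, trend="add", seasonal=seasonal,
                               seasonal_periods=12).fit()
    results[name] = mape(test, fit.forecast(12))

# Simple linear regression on the time index.
t = np.arange(len(train))
slope, intercept = np.polyfit(t, train.values, 1)
lin_fc = intercept + slope * np.arange(len(train), len(train) + 12)
results["Simple linear regression"] = mape(test, pd.Series(lin_fc, index=test.index))

for name, err in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:28s} MAPE = {err:5.2f}%")
```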

  2. Open-source Software for Demand Forecasting of Clinical Laboratory Test Volumes Using Time-series Analysis

    PubMed Central

    Mohammed, Emad A.; Naugler, Christopher

    2017-01-01

    Background: Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. Fair understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, accurately predicted volumes compared to the realized test volumes can form a precise way to evaluate utilization management initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. Method: In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. Results: This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. Conclusion: This tool will allow anyone with historic test volume data to model future demand. PMID:28400996

  3. The Economic Value of Remote Sensing of Earth Resources from Space: An ERTS Overview and the Value of Continuity of Service. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    Hazelrigg, G. A., Jr.; Heiss, K. P.

    1974-01-01

    An overview of the ERTS program is given to determine the magnitude of the benefits that can be reasonably expected to flow from an Earth Resources Survey (ERS) Program, and to assess the benefits foregone in the event of a one or two-year gap in ERS services. An independent evaluation of the benefits attributable to ERS-derived information in key application areas is presented. These include two case studies in agriculture-distribution, production and import/export, and one study in water management. The cost-effectiveness of satellites in an ERS system is studied by means of a land cover case study. The annual benefits achievable from an ERS system are measured by the in-depth case studies to be in the range of $430 to $746 million. Benefits foregone in the event of a one-year gap in ERS service are estimated to be $147 to $220 million and $274 to $420 million for a two-year gap in ERS service.

  4. Airport offsite passenger service facilities : an option for improving landside access. Volume II, Access characteristics and travel demand.

    DOT National Transportation Integrated Search

    2009-01-01

    Offsite airport facilities provide ground transportation, baggage and passenger check in, and other transportation services to departing air passengers from a remote location. The purpose of this study was to develop models to determine the airports ...

  5. Finite volume model for two-dimensional shallow environmental flow

    USGS Publications Warehouse

    Simoes, F.J.M.

    2011-01-01

    This paper presents the development of a two-dimensional, depth integrated, unsteady, free-surface model based on the shallow water equations. The development was motivated by the desire of balancing computational efficiency and accuracy by selective and conjunctive use of different numerical techniques. The base framework of the discrete model uses Godunov methods on unstructured triangular grids, but the solution technique emphasizes the use of a high-resolution Riemann solver where needed, switching to a simpler and computationally more efficient upwind finite volume technique in the smooth regions of the flow. Explicit time marching is accomplished with strong stability preserving Runge-Kutta methods, with additional acceleration techniques for steady-state computations. A simplified mass-preserving algorithm is used to deal with wet/dry fronts. Application of the model is made to several benchmark cases that show the interplay of the diverse solution techniques.
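
    For orientation, the sketch below shows the finite volume idea in its simplest form: a one-dimensional shallow water solver on a uniform grid with a local Lax-Friedrichs (Rusanov) flux and first-order explicit time stepping, applied to an idealised dam-break. It is a much-reduced stand-in for the model described above, which uses unstructured triangular grids, a high-resolution Riemann solver and Runge-Kutta time marching.

```python
# Much-reduced illustration of the finite volume approach: 1D shallow water
# equations, uniform grid, Rusanov (local Lax-Friedrichs) interface flux,
# explicit first-order time stepping, idealised wet-bed dam-break.
import numpy as np

g = 9.81
nx, L, t_end, cfl = 400, 100.0, 2.0, 0.4
dx = L / nx

# Conserved variables: water depth h and discharge hu.
h = np.where(np.linspace(0, L, nx) < 50.0, 2.0, 1.0)   # dam-break initial depth
hu = np.zeros(nx)

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

t = 0.0
while t < t_end:
    u = hu / h
    c = np.sqrt(g * h)
    dt = min(cfl * dx / np.max(np.abs(u) + c), t_end - t)

    # Rusanov numerical flux at each interior cell interface.
    fl = flux(h[:-1], hu[:-1])
    fr = flux(h[1:], hu[1:])
    smax = np.maximum(np.abs(u[:-1]) + c[:-1], np.abs(u[1:]) + c[1:])
    f_iface = 0.5 * (fl + fr) - 0.5 * smax * np.array([h[1:] - h[:-1], hu[1:] - hu[:-1]])

    # Update interior cells; simple transmissive boundaries at the ends.
    h[1:-1]  -= dt / dx * (f_iface[0, 1:] - f_iface[0, :-1])
    hu[1:-1] -= dt / dx * (f_iface[1, 1:] - f_iface[1, :-1])
    h[0], hu[0], h[-1], hu[-1] = h[1], hu[1], h[-2], hu[-2]
    t += dt

print("depth range after %.1f s: %.3f - %.3f m" % (t_end, h.min(), h.max()))
```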

  6. Elements of an improved model of debris‐flow motion

    USGS Publications Warehouse

    Iverson, Richard M.

    2009-01-01

    A new depth‐averaged model of debris‐flow motion describes simultaneous evolution of flow velocity and depth, solid and fluid volume fractions, and pore‐fluid pressure. Non‐hydrostatic pore‐fluid pressure is produced by dilatancy, a state‐dependent property that links the depth‐averaged shear rate and volumetric strain rate of the granular phase. Pore‐pressure changes caused by shearing allow the model to exhibit rate‐dependent flow resistance, despite the fact that the basal shear traction involves only rate‐independent Coulomb friction. An analytical solution of simplified model equations shows that the onset of downslope motion can be accelerated or retarded by pore‐pressure change, contingent on whether dilatancy is positive or negative. A different analytical solution shows that such effects will likely be muted if downslope motion continues long enough, because dilatancy then evolves toward zero, and volume fractions and pore pressure concurrently evolve toward steady states.

  7. Lunar crater volumes - Interpretation by models of impact cratering and upper crustal structure

    NASA Technical Reports Server (NTRS)

    Croft, S. K.

    1978-01-01

    Lunar crater volumes can be divided by size into two general classes with distinctly different functional dependence on diameter. Craters smaller than approximately 12 km in diameter are morphologically simple and increase in volume as the cube of the diameter, while craters larger than about 20 km are complex and increase in volume at a significantly lower rate, implying shallowing. Ejecta and interior volumes are not identical and their ratio, Schroeter's Ratio (SR), increases from about 0.5 for simple craters to about 1.5 for complex craters. The excess of ejecta volume causing the increase can be accounted for by a discontinuity in lunar crust porosity at 1.5-2 km depth. The diameter range of significant increase in SR corresponds with the diameter range of transition from simple to complex crater morphology. This observation, combined with theoretical rebound calculation, indicates control of the transition diameter by the porosity structure of the upper crust.

  8. SU-E-T-562: Scanned Percent Depth Dose Curve Discrepancy for Photon Beams with Physical Wedge in Place (Varian IX) Using Different Sensitive Volume Ion Chambers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, H; Sarkar, V; Rassiah-Szegedi, P

    2014-06-01

    Purpose: To investigate and report the discrepancy of scanned percent depth dose (PDD) curves for photon beams with a physical wedge in place when using ion chambers with different sensitive volumes. Methods/Materials: PDD curves of open fields and physical wedged fields (15, 30, 45, and 60 degree wedges) were scanned for photon beams (6 MV and 10 MV, Varian iX) with field sizes of 5×5 and 10×10 cm using three common scanning chambers with different sensitive volumes: PTW30013 (0.6 cm³), PTW23323 (0.1 cm³) and Exradin A16 (0.007 cm³). The scanning system software used was OmniPro version 6.2, and the scanning water tank was the Scanditronix Wellhoffer RFA 300. The PDD curves from the three chambers were compared. Results: Scanned PDD curves of the same energy beams for open fields were almost identical between the three chambers, but the wedged fields showed non-trivial differences. The largest differences were observed between chamber PTW30013 and Exradin A16. The differences increased as physical wedge angle increased. The differences also increased with depth, and were more pronounced for the 6 MV beam. Similar patterns were shown for both 5×5 and 10×10 cm field sizes. For open fields, all PDD values agreed with each other within 1% at 10 cm depth and within 1.62% at 20 cm depth. For wedged fields, the difference of PDD values between PTW30013 and A16 reached 4.09% at 10 cm depth, and 5.97% at 20 cm depth for 6 MV with the 60 degree physical wedge. Conclusion: We observed a significant difference in scanned PDD curves of photon beams with a physical wedge in place when using different sensitive volume ion chambers. The PDD curves scanned with the smallest sensitive volume ion chamber showed significant differences from the larger chamber results beyond 10 cm depth. We believe this to be caused by varying response to beam hardening by the wedges.

  9. SCS-CN parameter determination using rainfall-runoff data in heterogeneous watersheds. The two-CN system approach

    NASA Astrophysics Data System (ADS)

    Soulis, K. X.; Valiantzas, J. D.

    2011-10-01

    The Soil Conservation Service Curve Number (SCS-CN) approach is widely used as a simple method for predicting direct runoff volume for a given rainfall event. The CN values can be selected from tables. However, it is more accurate to estimate the CN value from measured rainfall-runoff data (assumed available) in a watershed. Previous researchers indicated that the CN values calculated from measured rainfall-runoff data vary systematically with the rainfall depth. They suggested the determination of a single asymptotic CN value observed for very high rainfall depths to characterize the watersheds' runoff response. In this paper, we test the novel hypothesis that the observed correlation between the calculated CN value and the rainfall depth in a watershed reflects the inevitable spatial variability of the soil-cover complex along watersheds. Based on this hypothesis, the simplified concept of a two-CN heterogeneous system is introduced to model the observed CN-rainfall variation by reducing the CN spatial variability into two classes. The behavior of the CN-rainfall function produced by the proposed two-CN system concept is approached theoretically, analyzed systematically, and found to be similar to the variation observed in natural watersheds. Synthetic data tests, natural watershed examples, and a detailed study of two natural experimental watersheds with known spatial heterogeneity characteristics were used to evaluate the method. The results indicate that the determination of CN values from rainfall-runoff data using the proposed two-CN system approach provides reasonable accuracy and outperforms the original method based on the determination of a single asymptotic CN value. Although the suggested method increases the number of unknown parameters to three (instead of one), a clear physical reasoning for them is presented.
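
    The sketch below illustrates the underlying idea with the standard SCS-CN equations (in mm): runoff from a two-CN watershed is composited from two sub-areas, and the curve number back-calculated from the composite rainfall-runoff pairs then varies with rainfall depth. The CN values and area weights are illustrative, the inversion uses the standard Ia = 0.2S form, and this is not the authors' code.

```python
# A minimal sketch of the two-CN idea (not the authors' code): runoff is
# generated by two sub-areas with different curve numbers, and the CN
# back-calculated from the composite (P, Q) pair varies with rainfall depth P.
import numpy as np

def runoff(P, CN, ia_ratio=0.2):
    """Direct runoff depth Q (mm) for rainfall P (mm) and a given CN."""
    S = 25400.0 / CN - 254.0           # potential maximum retention, mm
    Ia = ia_ratio * S                  # initial abstraction
    return np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)

def apparent_cn(P, Q):
    """CN back-calculated from an observed (P, Q) pair (Ia = 0.2S inversion)."""
    S = 5.0 * (P + 2.0 * Q - np.sqrt(4.0 * Q ** 2 + 5.0 * P * Q))
    return 25400.0 / (S + 254.0)

# Two-CN heterogeneous watershed: 30% of the area responds with CN = 95
# (e.g. impervious or shallow soils), 70% with CN = 60 (illustrative values).
w1, CN1 = 0.3, 95.0
w2, CN2 = 0.7, 60.0

P = np.linspace(5.0, 200.0, 40)
Q_composite = w1 * runoff(P, CN1) + w2 * runoff(P, CN2)
CN_apparent = apparent_cn(P, Q_composite)

for p, cn in zip(P[::8], CN_apparent[::8]):
    print(f"P = {p:6.1f} mm  ->  apparent CN = {cn:5.1f}")
# The apparent CN declines toward an asymptote as P grows, mimicking the
# CN-rainfall variation described in the abstract.
```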

  10. Social Responsibility and Sustainability: Multidisciplinary Perspectives through Service Learning. Service Learning for Civic Engagement Series

    ERIC Educational Resources Information Center

    McDonald, Tracy, Ed.

    2011-01-01

    This concluding volume in the series presents the work of faculty who have been moved to make sustainability the focus of their work, and to use service learning as one method of teaching sustainability to their students. The chapters in the opening section of this book-- Environmental Awareness--offer models for opening students to the awareness…

  11. Unsteady jet in designing innovative drug delivery system

    NASA Astrophysics Data System (ADS)

    Wang, Cong; Mazur, Paul; Cosse, Julia; Rider, Stephanie; Gharib, Morteza

    2014-11-01

    Micro-needle injection, a promising pain-free drug delivery method, is constrained by its limited penetration depth. This deficiency can be overcome by implementing a fast unsteady jet that can penetrate sub-dermally. The development of a faster liquid jet would increase the penetration depth and delivery volume of micro-needles. In this preliminary work, the nonlinear transient behavior of an elastic tube balloon in providing fast discharge is analyzed. A physical model that combines the Mooney-Rivlin material model and the Young-Laplace law was developed and used to investigate the fast-discharge dynamics. A proof-of-concept prototype was constructed to demonstrate the feasibility of a simple thumb-sized delivery system to generate a liquid jet with the desired speed in the range of 5-10 m/s. This work is supported by ZCUBE Corporation.

  12. Water volume and sediment accumulation in Lake Linganore, Frederick County, Maryland, 2009

    USGS Publications Warehouse

    Sekellick, Andrew J.; Banks, S.L.

    2010-01-01

    To assist in understanding sediment and phosphorus loadings and the management of water resources, a bathymetric survey was conducted at Lake Linganore in Frederick County, Maryland in June 2009 by the U.S. Geological Survey, in cooperation with the City of Frederick and Frederick County, Maryland. Position data and water-depth data were collected using a survey grade echo sounder and a differentially corrected global positioning system. Data were compiled and edited using geographic information system software. A three-dimensional triangulated irregular network model of the lake bottom was created to calculate the volume of stored water in the reservoir. Large-scale topographic maps of the valley prior to inundation in 1972 were provided by the City of Frederick and digitized. The two surfaces were compared and a sediment volume was calculated. Cartographic representations of both water depth and sediment accumulation were produced along with an area/capacity table. An accuracy assessment was completed on the resulting bathymetric model. Vertical accuracy at the 95-percent confidence level for the collected data, the bathymetric surface model, and the bathymetric contour map was calculated to be 0.95 feet, 1.53 feet, and 3.63 feet, respectively. The water storage volume of Lake Linganore was calculated to be 1,860 acre-feet at full pool elevation. Water volume in the reservoir has decreased by 350 acre-feet (about 16 percent) in the 37 years since the dam was constructed. The total calculated volume of sediment deposited in the lake since 1972 is 313 acre-feet. This represents an average rate of sediment accumulation of 8.5 acre-feet per year since Linganore Creek was impounded. A sectional analysis of sediment distribution indicates that the most upstream third of Lake Linganore contains the largest volume of sediment whereas the section closest to the dam contains the largest amount of water. In comparison to other Maryland Piedmont reservoirs, Lake Linganore was found to have one of the lowest sedimentation rates at 0.26 cubic yards per year per acre of drainage area. Sedimentation rates in other comparable Maryland reservoirs were Prettyboy Reservoir (filling at a rate of 2.26 cubic yards per year per acre), Loch Raven Reservoir (filling at a rate of 0.88 cubic yards per year per acre) and Piney Run Reservoir (filling at a negligible rate).
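
    A minimal sketch of the surface-differencing step behind such sediment estimates is given below: with the pre-impoundment land surface and the later lake-bottom surface on a common grid, the sediment volume is the summed positive elevation difference times the cell area, converted to acre-feet. The grids, cell size and deposition pattern are synthetic placeholders, not the Lake Linganore data.

```python
# Minimal sketch of surface differencing for sediment volume: pre-impoundment
# land surface vs. later lake-bottom surface on a common grid. All inputs are
# synthetic placeholders.
import numpy as np

CELL_SIZE_FT = 10.0                     # grid spacing, feet
SQFT_PER_ACRE = 43560.0                 # also ft^3 per acre-foot

rng = np.random.default_rng(11)
shape = (300, 300)
surface_1972 = 250.0 - rng.random(shape) * 30.0               # pre-impoundment valley floor, ft
deposition = np.clip(rng.normal(0.8, 0.5, shape), 0, None)    # placeholder sediment, ft
surface_2009 = surface_1972 + deposition                      # later lake-bottom surface, ft

thickness = np.clip(surface_2009 - surface_1972, 0.0, None)   # sediment thickness, ft
sediment_acre_ft = thickness.sum() * CELL_SIZE_FT ** 2 / SQFT_PER_ACRE

print("sediment volume: %.0f acre-ft" % sediment_acre_ft)
print("average accumulation over 37 years: %.1f acre-ft/yr" % (sediment_acre_ft / 37.0))
```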

  13. A physically-based method for predicting peak discharge of floods caused by failure of natural and constructed earthen dams

    USGS Publications Warehouse

    Walder, J.S.

    1997-01-01

    We analyse a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope θ of the breach, and the parameter η = (V/D³)(k/√(gD)), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape, r, and θ, but strongly on η, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(η) takes asymptotically distinct forms depending on whether η ≪ 1 or η ≫ 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
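
    Based on the parameter definition reconstructed above, the small sketch below evaluates η = (V/D³)(k/√(gD)) for two illustrative lake/breach combinations; the volumes, depths and downcutting rates are hypothetical and serve only to show how the dimensionless lake volume and dimensionless erosion rate combine.

```python
# Sketch of the dimensionless breach parameter: eta = (V / D^3) * (k / sqrt(g*D)),
# a dimensionless lake volume times a dimensionless downcutting rate. The lake
# and erosion values below are purely illustrative.
import math

def breach_eta(V, D, k, g=9.81):
    """V: lake volume (m^3), D: lake depth (m), k: mean breach downcutting rate (m/s)."""
    return (V / D**3) * (k / math.sqrt(g * D))

# Two illustrative combinations; Qp takes different asymptotic forms in the
# limits eta << 1 and eta >> 1.
cases = {
    "small lake, 10 m/h downcutting":       dict(V=5e5, D=10.0, k=10.0 / 3600.0),
    "very large lake, 50 m/h downcutting":  dict(V=1e9, D=20.0, k=50.0 / 3600.0),
}
for name, p in cases.items():
    print(f"{name:40s} eta = {breach_eta(**p):8.2f}")
```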

  14. Epos TCS Satellite Data

    NASA Astrophysics Data System (ADS)

    Manunta, Michele; Mandea, Mioara; Fernández-Turiel, José Luis; Stramondo, Salvatore; Wright, Tim; Walter, Thomas; Bally, Philippe; Casu, Francesco; Zeni, Giovanni; Buonanno, Sabatino; Zinno, Ivana; Tizzani, Pietro; Castaldo, Raffaele; Ostanciaux, Emilie; Diament, Michel; Hooper, Andy; Maccaferri, Francesco; Lanari, Riccardo

    2016-04-01

    TCS Satellite Data is devoted to providing Earth Observation (EO) services that are transversal to the large EPOS community and suitable for several application scenarios. In particular, the main goal is to contribute mature services that have already demonstrated their effectiveness and relevance in investigating the physical processes controlling earthquakes, volcanic eruptions and unrest episodes, as well as those driving tectonics and Earth surface dynamics. The TCS Satellite Data will provide two kinds of services: satellite products/services, and value-added satellite products/services. The satellite products/services comprise three well-identified and partly operational elements (EPOSAR, GDM and COMET) delivering Level 1 products. Such services will be devoted to the generation of SAR interferograms, DTMs and ground displacement maps through the exploitation of different advanced EO techniques for InSAR and optical data analysis. The value-added satellite products/services comprise four elements (EPOSAR, 3D-Def, Mod and COMET) delivering Level 2 and 3 products. Such services integrate satellite and in situ measurements and observations to retrieve information on the source mechanism, such as the geometry (spatial location, depth, volume changes) and the physical parameters of the deformation sources, through the exploitation of modelling approaches. The TCS Satellite Data will provide products in two different processing and delivery modes: (1) surveillance mode, with routine product generation; (2) on-demand mode, with product generation performed on demand by the user. In the surveillance mode, the goal is to provide continuous satellite measurements in areas of particular interest from a geophysical perspective (supersites). The objective is the detection of displacement patterns changing over time and their geophysical explanation. This is a valid approach for inter-seismic movements and volcanic unrest, post-seismic and post-eruptive displacements, urban subsidence, and coastal movements. The on-demand mode will allow users to process available satellite data stacks by selecting the scenes and the area of interest and setting the processing parameters, or to perform modelling analyses. This processing mode will allow users to analyse their own areas of interest, thus exploiting as much as possible the global coverage strategy of the satellites and benefiting from user knowledge of the characteristics of the particular investigated area and/or deformation phenomenon.

  15. Approach for scene reconstruction from the analysis of a triplet of still images

    NASA Astrophysics Data System (ADS)

    Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle

    1997-03-01

    Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a major challenge for future interactive audiovisual services based on 3D content manipulation such as virtual sets, 3D teleconferencing and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built using a fusion criterion that takes into account depth coherency, visibility constraints and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve unreliable depth values, a two-step segmentation algorithm using both the depth map and the gray-level image is applied to extract the object masks. First, edge detection segments the luminance image into regions and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labelled with the depth class numbers using a coherence test on depth values, according to the rate of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.

  16. Individual Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.

    2015-01-01

    Besides providing position, navigation, and timing (PNT) to terrestrial users, GPS is currently used to provide precision orbit determination, precise time synchronization, real-time spacecraft navigation, and three-axis control of Earth orbiting satellites. With additional Global Navigation Satellite Systems (GNSS) coming into service (GLONASS, Beidou, and Galileo), it will be possible to provide these services by using other GNSS constellations. The paper, "GPS in the Space Service Volume," presented at the ION GNSS 19th International Technical Meeting in 2006 (Ref. 1), defined the Space Service Volume, and analyzed the performance of GPS out to 70,000 km. This paper will report a similar analysis of the performance of each of the additional GNSS and compare them with GPS alone. The Space Service Volume is defined as the volume between 3,000 km altitude and geosynchronous altitude, as compared with the Terrestrial Service Volume between the surface and 3,000 km. In the Terrestrial Service Volume, GNSS performance will be similar to performance on the Earth's surface. The GPS system has established signal requirements for the Space Service Volume. A separate paper presented at the conference covers the use of multiple GNSS in the Space Service Volume.

  17. The petroleum explorationist's guide to contracts used in oil and gas operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosburg, L.G. Jr.

    This volume provides articles and current sample forms for contract negotiation and drafting. The contents include: An introduction to oil and gas contracts; Effective deal negotiation; General principles of contract law and negotiation; Problems and pitfalls in support agreements; Sample support agreements; Basic concepts of farmout agreements; Farmout negotiation checklist; "Area of mutual interest" provisions; Problems and pitfalls in "contract (minimum) depth" and "farmout (earned) depth" provisions; Options in interests assigned and reserved; Sample "AMI" provision; "Conventional" and "revenue ruling 77-176" farmout agreement forms; The AAPL model form operating agreement; 1982 revisions to the model form; Side-by-side comparisons of the 1956, 1977 and 1982 model forms; 1984 (1985) COPAS accounting procedure; Tax consequences of oil and gas exploration and development, revenue ruling 77-176; Use of tax partnerships and present assignments; Also materials on gas balancing agreements; Seismic options; and Structuring considerations.

  18. Event-based stormwater management pond runoff temperature model

    NASA Astrophysics Data System (ADS)

    Sabouri, F.; Gharabaghi, B.; Sattar, A. M. A.; Thompson, A. M.

    2016-09-01

    Stormwater management wet ponds are generally very shallow and hence can significantly increase (about 5.4 °C on average in this study) runoff temperatures in summer months, which adversely affects receiving urban stream ecosystems. This study uses gene expression programming (GEP) and artificial neural networks (ANN) modeling techniques to advance our knowledge of the key factors governing thermal enrichment effects of stormwater ponds. The models developed in this study build upon and complement the ANN model developed by Sabouri et al. (2013) that predicts the catchment event mean runoff temperature entering the pond as a function of event climatic and catchment characteristic parameters. The key factors that control pond outlet runoff temperature include: (1) Upland Catchment Parameters (catchment drainage area and event mean runoff temperature inflow to the pond); (2) Climatic Parameters (rainfall depth, event mean air temperature, and pond initial water temperature); and (3) Pond Design Parameters (pond length-to-width ratio, pond surface area, pond average depth, and pond outlet depth). We used monitoring data for three summers from 2009 to 2011 in four stormwater management ponds, located in the cities of Guelph and Kitchener, Ontario, Canada, to develop the models. The prediction uncertainties of the developed ANN and GEP models for the case study sites are around 0.4% and 1.7% of the median value. Sensitivity analysis of the trained models indicates that the thermal enrichment of the pond outlet runoff is inversely proportional to pond length-to-width ratio and pond outlet depth, and directly proportional to event runoff volume, event mean pond inflow runoff temperature, and pond initial water temperature.
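
    As an illustration of the ANN component of such a model (not the authors' implementation), the sketch below wires the nine predictors listed above into a small scikit-learn multilayer perceptron. The training data are random placeholders; in practice the monitored event data would be substituted.

```python
# Illustrative sketch only (not the authors' model): an ANN regression of pond
# outlet runoff temperature on the nine predictors listed in the abstract.
# The training data below are random placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

features = [
    "drainage_area_ha", "inflow_runoff_temp_C",                      # upland catchment
    "rainfall_depth_mm", "mean_air_temp_C", "initial_pond_temp_C",   # climate
    "length_to_width_ratio", "surface_area_m2", "avg_depth_m", "outlet_depth_m",  # pond design
]

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, len(features)))        # placeholder predictors
y = 20 + 5 * X[:, 1] + 3 * X[:, 4] - 2 * X[:, 8] + rng.normal(0, 0.3, 200)  # placeholder target

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))
```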

  19. Effects of depth and chest volume on cardiac function during breath-hold diving.

    PubMed

    Marabotti, Claudio; Scalzini, Alessandro; Cialoni, Danilo; Passera, Mirko; Ripoli, Andrea; L'Abbate, Antonio; Bedini, Remo

    2009-07-01

    Cardiac response to breath-hold diving in human beings is primarily characterized by the reduction of both heart rate and stroke volume. By underwater Doppler-echocardiography we observed a "restrictive/constrictive" left ventricular filling pattern compatible with the idea of chest squeeze and heart compression during diving. We hypothesized that underwater re-expansion of the chest would release heart constriction and normalize cardiac function. To this aim, 10 healthy male subjects (age 34.2 +/- 10.4) were evaluated by Doppler-echocardiography during breath-hold immersion at a depth of 10 m, before and after a single maximal inspiration from a SCUBA device. During the same session, all subjects were also studied at surface (full-body immersion) and at 5-m depth in order to better characterize the relationship of the echo-Doppler pattern with depth. In comparison to surface immersion, 5-m deep diving was sufficient to reduce cardiac output (P = 0.042) and increase transmitral E-peak velocity (P < 0.001). These changes remained unaltered at a 10-m depth. Chest expansion at 10 m decreased left ventricular end-systolic volume (P = 0.024) and increased left ventricular stroke volume (P = 0.024). In addition, it decreased transmitral E-peak velocity (P = 0.012) and increased deceleration time of E-peak (P = 0.021). In conclusion, the diving response, already evident during shallow diving (5 m), did not progress during deeper dives (10 m). The rapid improvement in systolic and diastolic function observed after lung volume expansion is congruous with the idea of a constrictive effect on the heart exerted by chest squeeze.

  20. Satellite services system analysis study. Volume 1, part 2: Executive summary

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The early mission model was developed through a survey of the potential user market. Service functions were defined and a group of design reference missions were selected which represented needs for each of the service functions. Servicing concepts were developed through mission analysis and STS timeline constraint analysis. The hardware needs for accomplishing the service functions were identified with emphasis being placed on applying equipment in the current NASA inventory and that in advanced stages of planning. A more comprehensive service model was developed based on the NASA and DoD mission models segregated by mission class. The number of service events of each class was estimated based on average revisit and service assumptions. Service Kits were defined as collections of equipment applicable to performing one or more service functions. Preliminary design was carried out on a selected set of hardware needed for early service missions. The organization and costing of the satellite service systems were addressed.

  1. Design of a modulated orthovoltage stereotactic radiosurgery system.

    PubMed

    Fagerstrom, Jessica M; Bender, Edward T; Lawless, Michael J; Culberson, Wesley S

    2017-07-01

    To achieve stereotactic radiosurgery (SRS) dose distributions with sharp gradients using orthovoltage energy fluence modulation with inverse planning optimization techniques. A pencil beam model was used to calculate dose distributions from an orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods. A Genetic Algorithm search heuristic was used to optimize the spatial distribution of added tungsten filtration to achieve dose distributions with sharp dose gradients. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 5, 6, 8, and 10 mm. In addition to the beam profiles, 4π isocentric irradiation geometries were modeled to examine dose at 0.07 mm depth, a representative skin depth, for the low energy beams. Profiles from 4π irradiations of a constant target volume, assuming maximally conformal coverage, were compared. Finally, dose deposition in bone compared to tissue in this energy range was examined. Based on the results of the optimization, circularly symmetric tungsten filters were designed to modulate the orthovoltage beam across the apertures of SRS cone collimators. For each depth and cone size combination examined, the beam flatness and 80-20% and 90-10% penumbrae were calculated for both standard, open cone-collimated beams as well as for optimized, filtered beams. For all configurations tested, the modulated beam profiles had decreased penumbra widths and flatness statistics at depth. Profiles for the optimized, filtered orthovoltage beams also offered decreases in these metrics compared to measured linear accelerator cone-based SRS profiles. The dose at 0.07 mm depth in the 4π isocentric irradiation geometries was higher for the modulated beams compared to unmodulated beams; however, the modulated dose at 0.07 mm depth remained <0.025% of the central, maximum dose. The 4π profiles irradiating a constant target volume showed improved statistics for the modulated, filtered distribution compared to the standard, open cone-collimated distribution. Simulations of tissue and bone confirmed previously published results that a higher energy beam (≥ 200 keV) would be preferable, but the 250 kVp beam was chosen for this work because it is available for future measurements. A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions with decreased flatness and penumbra statistics compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system. © 2017 American Association of Physicists in Medicine.
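
    The sketch below is a toy, one-dimensional stand-in for the optimization approach described above: a simple genetic algorithm searches over per-annulus added-filter thicknesses, with the dose profile approximated by an aperture fluence convolved with a Gaussian pencil-beam-like kernel and an objective combining penumbra width and flatness. The attenuation coefficient, kernel width, cone size and GA settings are all illustrative assumptions, not values from the study.

```python
# Toy sketch of the optimization approach (not the authors' code): a genetic
# algorithm searches for a spatially varying added-filter thickness profile
# that sharpens the penumbra of a cone-collimated beam. The "dose model" is a
# crude 1D stand-in for a pencil-beam calculation; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

# --- crude 1D beam model ------------------------------------------------------
x = np.linspace(-15.0, 15.0, 601)            # mm, profile coordinate at depth
cone_radius = 5.0                             # mm, collimator half-width
n_bins = 10                                   # filter thickness bins across the aperture
mu = 2.0                                      # 1/mm, illustrative attenuation coefficient
kernel = np.exp(-0.5 * (x / 1.5) ** 2)        # Gaussian pencil-beam-like kernel
kernel /= kernel.sum()
bin_edges = np.linspace(-cone_radius, cone_radius, n_bins + 1)

def profile(thickness):
    """Relative dose profile for a given per-bin filter thickness (mm)."""
    fluence = np.zeros_like(x)
    inside = np.abs(x) <= cone_radius
    idx = np.clip(np.digitize(x[inside], bin_edges) - 1, 0, n_bins - 1)
    fluence[inside] = np.exp(-mu * thickness[idx])
    d = np.convolve(fluence, kernel, mode="same")
    return d / d[len(x) // 2]                 # normalise to the central axis

def objective(thickness):
    d = profile(thickness)
    right, xr = d[x >= 0], x[x >= 0]
    w80 = xr[np.argmin(np.abs(right - 0.8))]  # 80% and 20% positions, right side
    w20 = xr[np.argmin(np.abs(right - 0.2))]
    penumbra = w20 - w80
    core = d[np.abs(x) <= 0.8 * cone_radius]
    flatness = (core.max() - core.min()) / (core.max() + core.min())
    return penumbra + 5.0 * flatness          # weighted sum of penumbra and flatness

# --- simple genetic algorithm -------------------------------------------------
pop_size, n_gen, t_max = 40, 60, 1.0          # thickness bounded to [0, 1] mm
pop = rng.uniform(0.0, t_max, size=(pop_size, n_bins))

for gen in range(n_gen):
    scores = np.array([objective(ind) for ind in pop])
    order = np.argsort(scores)
    elite = pop[order[: pop_size // 4]]       # keep the best quarter
    children = []
    while len(children) < pop_size - len(elite):
        p1, p2 = elite[rng.integers(len(elite), size=2)]
        alpha = rng.uniform(size=n_bins)
        child = alpha * p1 + (1 - alpha) * p2           # blend crossover
        child += rng.normal(0.0, 0.05, size=n_bins)     # Gaussian mutation
        children.append(np.clip(child, 0.0, t_max))
    pop = np.vstack([elite, np.array(children)])

best = pop[np.argmin([objective(ind) for ind in pop])]
print("open-cone objective  :", round(objective(np.zeros(n_bins)), 3))
print("optimised objective  :", round(objective(best), 3))
print("optimised thicknesses:", np.round(best, 2))
```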

  2. Isolated effect of geometry on mitral valve function for in silico model development.

    PubMed

    Siefert, Andrew William; Rabbah, Jean-Pierre Michel; Saikrishnan, Neelakantan; Kunzelman, Karyn Susanne; Yoganathan, Ajit Prithivaraj

    2015-01-01

    Computational models for the heart's mitral valve (MV) exhibit several uncertainties that may be reduced by further developing these models using ground-truth data-sets. This study generated a ground-truth data-set by quantifying the effects of isolated mitral annular flattening, symmetric annular dilatation, symmetric papillary muscle (PM) displacement and asymmetric PM displacement on leaflet coaptation, mitral regurgitation (MR) and anterior leaflet strain. MVs were mounted in an in vitro left heart simulator and tested under pulsatile haemodynamics. Mitral leaflet coaptation length, coaptation depth, tenting area, MR volume, MR jet direction and anterior leaflet strain in the radial and circumferential directions were successfully quantified at increasing levels of geometric distortion. From these data, increase in the levels of isolated PM displacement resulted in the greatest mean change in coaptation depth (70% increase), tenting area (150% increase) and radial leaflet strain (37% increase) while annular dilatation resulted in the largest mean change in coaptation length (50% decrease) and regurgitation volume (134% increase). Regurgitant jets were centrally located for symmetric annular dilatation and symmetric PM displacement. Asymmetric PM displacement resulted in asymmetrically directed jets. Peak changes in anterior leaflet strain in the circumferential direction were smaller and exhibited non-significant differences across the tested conditions. When used together, this ground-truth data-set may be used to parametrically evaluate and develop modelling assumptions for both the MV leaflets and subvalvular apparatus. This novel data may improve MV computational models and provide a platform for the development of future surgical planning tools.

  3. Summary of 1968-1970 multidisciplinary accident investigation reports. Volume 2

    DOT National Transportation Integrated Search

    1972-08-01

    In June 1971, Volume 1 of a two-volume series summarizing the causal factors, conclusions and recommendations which emanated from various in-depth accident reports was published. This first volume contained a listing of these factors according to tea...

  4. Zircon Age Distributions Provide Magma Fluxes in the Earth's Crust

    NASA Astrophysics Data System (ADS)

    Caricchi, L.; Simpson, G.; Schaltegger, U.

    2014-12-01

    Magma fluxes control the growth of continents, the frequency and magnitude of volcanic eruptions and are important for the genesis of magmatic ore deposits. A significant part of the magma produced in the Earth's mantle solidifies at depth, and this limits our capability of determining magma fluxes, which, in turn, compromises our ability to establish a link between global heat transfer and large-scale geological processes. Using thermal modelling in combination with high-precision zircon dating, we show that populations of zircon ages provide an accurate means to retrieve magma fluxes. The characteristics of zircon age populations vary significantly and systematically as a function of the flux and total volume of magma accumulated at depth. This new approach provides results that are identical to independent determinations of magma fluxes and volumes of magmatic systems. The analysis of existing age population datasets by our method highlights that porphyry-type deposits, plutons and large eruptions each require magma input over different timescales at characteristic average fluxes.

  5. Space Operations Center system analysis study extension. Volume 4, book 1: SOC system analysis report

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The Space Operations Center (SOC) orbital space station missions are analyzed. Telecommunications missions, space science, Earth sensing, and space testing missions, research and applications missions, defense missions, and satellite servicing missions are modeled and mission needs discussed. The satellite servicing missions are analyzed in detail, including construction and servicing equipment requirements, mission needs and benefits, differential drag characteristics of co-orbiting satellites, and satellite servicing transportation requirements.

  6. Physicians in private practice: reasons for being a social franchise member.

    PubMed

    Huntington, Dale; Mundy, Gary; Hom, Nang Mo; Li, Qingfeng; Aung, Tin

    2012-08-01

    Evidence is emerging on the cost-effectiveness, quality and health coverage of social franchises, but little is known about the motivations of providers to join or remain within a social franchise network, or the impact that franchise membership has on client volumes or revenue earnings. The study comprised (i) an uncontrolled, facility-based survey of a random sample of 230 franchise members to assess self-reported motivations; and (ii) a 24-month prospective cohort study of 3 cohorts of physicians (franchise members for 4 years, members for 2 years, and new members) to track monthly case load and revenue generated. The most common reasons for joining the franchise were access to high-quality and cheap drugs (96.1%) and feelings of social responsibility (95.2%). The effect of joining the franchise on the volume of family planning services is shown in the 2009 cohort, in which the average monthly service volume increased from 18.5 per physician to 70.6 per physician during their first 2 years in the franchise (p<0.01). These gains were sustained during the 3rd and 4th years of franchise membership, as the 2007 cohort reported increases in average monthly family planning service volume from 71.2 per physician to 102.8 per physician (p<0.01). The net income of the 2009 cohort increased significantly (p=0.024) during their first two years in the franchise. The results for the 2007 and 2005 cohorts also show a general trend of increasing income. The findings show how franchise membership affects the volume of franchised and non-franchised services. The increases in client volumes translated directly into increases in earnings among the franchise members, an unanticipated effect for providers who joined in order to better serve the poor. This finding has implications for the social franchise business model, which relies upon subsidized medical products to reduce financial barriers for the poor. The increase in out-of-pocket payments for health care services that were not price-controlled by the franchise is a concern. As the field of social franchising continues to mature its business models towards more sustainable, cost-recovering management practices, attention should be given to avoiding commercialization of services.

  7. Physicians in private practice: reasons for being a social franchise member

    PubMed Central

    2012-01-01

    Background Evidence is emerging on the cost-effectiveness, quality and health coverage of social franchises. But little is known about the motivations of providers to join or remain within a social franchise network, or the impact that franchise membership has on client volumes or revenue earnings. Methods (i) An uncontrolled, facility-based survey of a random sample of 230 franchise members to assess self-reported motivations; (ii) a 24-month prospective cohort study of 3 cohorts of physicians (franchise members for 4 years, members for 2 years, and new members) to track monthly case load and revenue generated. Results The most common reasons for joining the franchise were access to high-quality and cheap drugs (96.1%) and feelings of social responsibility (95.2%). The effect of joining the franchise on the volume of family planning services is shown in the 2009 cohort, in which the average monthly service volume increased from 18.5 per physician to 70.6 per physician during their first 2 years in the franchise (p<0.01). These gains were sustained during the 3rd and 4th years of franchise membership, as the 2007 cohort reported increases in average monthly family planning service volume from 71.2 per physician to 102.8 per physician (p<0.01). The net income of the 2009 cohort increased significantly (p=0.024) during their first two years in the franchise. The results for the 2007 and 2005 cohorts also show a general trend of increasing income. Conclusions The findings show how franchise membership affects the volume of franchised and non-franchised services. The increases in client volumes translated directly into increases in earnings among the franchise members, an unanticipated effect for providers who joined in order to better serve the poor. This finding has implications for the social franchise business model, which relies upon subsidized medical products to reduce financial barriers for the poor. The increase in out-of-pocket payments for health care services that were not price-controlled by the franchise is a concern. As the field of social franchising continues to mature its business models towards more sustainable, cost-recovering management practices, attention should be given to avoiding commercialization of services. PMID:22849434

  8. The Fate of Volatiles in Subaqueous Explosive Eruptions: An Analysis of Steam Condensation in the Water Column

    NASA Astrophysics Data System (ADS)

    Cahalan, R. C.; Dufek, J.

    2015-12-01

    A model has been developed to determine the theoretical limits of steam survival in a water column during a subaqueous explosive eruption. Understanding the role of steam dynamics in particle transport and the evolution of the thermal budget is critical to addressing the first order questions of subaqueous eruption mechanics. Ash transport in subaqueous eruptions is initially coupled to the fate of volatile transport. The survival of steam bubbles to the water surface could enable non-wetted ash transport from the vent to a subaerial ash cloud. Current eruption models assume a very simple plume mixing geometry, that cold water mixes with the plume immediately after erupting, and that the total volume of steam condenses in the initial phase of mixing. This limits the survival of steam to within tens of meters above the vent. Though these assumptions may be valid, they are unproven, and the calculations based on them do not take into account any kinetic constraints on condensation. The following model has been developed to evaluate the limits of juvenile steam survival in a subaqueous explosive eruption. This model utilizes the analytical model for condensation of steam injected into a sub-cooled pool produced in Park et al. (2007). Necessary parameterizations require an iterative internal calculation of the steam saturation temperature and vapor density for each modeled time step. The contribution of volumetric expansion due to depressurization of a rising bubble is calculated and used in conjunction with condensation rate to calculate the temporal evolution of bubble volume and radius. Using steam bubble volume with the BBO equation for Lagrangian transport in a fluid, the bubble rise velocity is calculated and used to evaluate the rise distance. The steam rise model proves a useful tool to compare the effects of steam condensation, volumetric expansion, volume flux, and water depth on the dynamics of juvenile steam. The modeled results show that a sufficiently high volatile flux could lead to the survival of steam bubbles from >1km depths to the ocean surface, though low to intermediate fluxes lead to fairly rapid condensation. Building on this result we also present the results of simulations of multiphase steam jets and consider the likelihood of collapse inside a vapor envelope.
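
    A heavily simplified sketch of the bubble calculation outlined above (buoyant rise, volumetric expansion from depressurization, loss of mass to condensation) follows. It is not the authors' model: the Park et al. (2007) condensation correlation is replaced by a placeholder first-order rate, the BBO equation is replaced by a quasi-steady terminal-velocity balance, and all coefficients are illustrative.

```python
import numpy as np

# Hedged sketch: evolution of a steam bubble rising from depth.
# Placeholders (not from the paper): first-order condensation rate, quasi-steady
# terminal rise velocity, isothermal ideal-gas expansion under hydrostatic pressure.
rho_w, g, P_atm = 1000.0, 9.81, 1.013e5        # water density, gravity, surface pressure (SI)
R_steam, T = 461.5, 373.0                      # specific gas constant of steam, assumed temperature (K)
k_cond = 0.05                                  # 1/s, placeholder condensation rate constant
Cd = 1.0                                       # placeholder drag coefficient for the bubble

def simulate(depth0=1000.0, r0=0.5, dt=0.05, t_max=3600.0):
    """March a single bubble upward; return (final depth, surviving mass fraction)."""
    z = depth0                                 # depth below surface (m)
    P = P_atm + rho_w * g * z
    V = 4.0 / 3.0 * np.pi * r0 ** 3
    m = P * V / (R_steam * T)                  # ideal-gas steam mass (kg)
    m0 = m
    for _ in range(int(t_max / dt)):
        P = P_atm + rho_w * g * z
        V = m * R_steam * T / P                # expansion as pressure drops
        r = (3.0 * V / (4.0 * np.pi)) ** (1.0 / 3.0)
        # terminal velocity from a buoyancy-drag balance (steam density neglected)
        u = np.sqrt(8.0 * g * r / (3.0 * Cd))
        z -= u * dt
        m -= k_cond * m * dt                   # placeholder condensation loss
        if z <= 0.0:                           # reached the surface
            return 0.0, m / m0
        if m / m0 < 1e-3:                      # effectively fully condensed
            return z, 0.0
    return z, m / m0

for depth in (200.0, 500.0, 1000.0):
    z_end, frac = simulate(depth0=depth)
    status = "reached surface" if z_end == 0.0 else f"condensed near {z_end:.0f} m depth"
    print(f"start {depth:>5.0f} m: {status}, surviving mass fraction {frac:.2f}")
```

    Even in this toy form the competition described in the abstract is visible: the condensation rate and the starting depth together decide whether a bubble surfaces or collapses on the way up.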

  9. U.S. POSTAL SERVICE. Moving Forward on Financial and Transformation Challenges

    DTIC Science & Technology

    2002-05-13

    ...the Service’s basic business model, which assumes that rising mail volume will cover rising costs and mitigate rate increases... in the ratemaking and new products areas, it will be important that any additional flexibility be coupled with an appropriate level of transparency... the Service’s basic business model is not sustainable and that much larger declines in mail volume may be in the offing if mailers increasingly shift...

  10. Evolution of dike opening during the March 2011 Kamoamoa fissure eruption, Kīlauea Volcano, Hawai`i

    USGS Publications Warehouse

    Lundgren, Paul; Poland, Michael; Miklius, Asta; Orr, Tim R.; Yun, Sang-Ho; Fielding, Eric; Liu, Zhen; Tanaka, Akiko; Szeliga, Walter; Hensley, Scott; Owen, Susan

    2013-01-01

    The 5–9 March 2011 Kamoamoa fissure eruption along the east rift zone of Kīlauea Volcano, Hawai`i, followed months of pronounced inflation at Kīlauea summit. We examine dike opening during and after the eruption using a comprehensive interferometric synthetic aperture radar (InSAR) data set in combination with continuous GPS data. We solve for distributed dike displacements using a whole Kīlauea model with dilating rift zones and possibly a deep décollement. Modeled surface dike opening increased from nearly 1.5 m to over 2.8 m from the first day to the end of the eruption, in agreement with field observations of surface fracturing. Surface dike opening ceased following the eruption, but subsurface opening in the dike continued into May 2011. Dike volumes increased from 15, to 16, to 21 million cubic meters (MCM) after the first day, eruption end, and 2 months following, respectively. Dike shape is distinctive, with a main limb plunging from the surface to 2–3 km depth in the up-rift direction toward Kīlauea's summit, and a lesser projection extending in the down-rift direction toward Pu`u `Ō`ō at 2 km depth. Volume losses beneath Kīlauea summit (1.7 MCM) and Pu`u `Ō`ō (5.6 MCM) crater, relative to dike plus erupted volume (18.3 MCM), yield a dike to source volume ratio of 2.5 that is in the range expected for compressible magma without requiring additional sources. Inflation of Kīlauea's summit in the months before the March 2011 eruption suggests that the Kamoamoa eruption resulted from overpressure of the volcano's magmatic system.
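
    The dike-to-source volume ratio quoted above follows directly from the stated volumes (here the crater term denotes the loss beneath Pu`u `Ō`ō):

```latex
\frac{V_{\text{dike}} + V_{\text{erupted}}}{\Delta V_{\text{summit}} + \Delta V_{\text{crater}}}
  = \frac{18.3\ \text{MCM}}{(1.7 + 5.6)\ \text{MCM}} \approx 2.5
```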

  11. Evolution of dike opening during the March 2011 Kamoamoa fissure eruption, Kīlauea Volcano, Hawai`i

    NASA Astrophysics Data System (ADS)

    Lundgren, Paul; Poland, Michael; Miklius, Asta; Orr, Tim; Yun, Sang-Ho; Fielding, Eric; Liu, Zhen; Tanaka, Akiko; Szeliga, Walter; Hensley, Scott; Owen, Susan

    2013-03-01

    The 5-9 March 2011 Kamoamoa fissure eruption along the east rift zone of Kīlauea Volcano, Hawai`i, followed months of pronounced inflation at Kīlauea summit. We examine dike opening during and after the eruption using a comprehensive interferometric synthetic aperture radar (InSAR) data set in combination with continuous GPS data. We solve for distributed dike displacements using a whole Kīlauea model with dilating rift zones and possibly a deep décollement. Modeled surface dike opening increased from nearly 1.5 m to over 2.8 m from the first day to the end of the eruption, in agreement with field observations of surface fracturing. Surface dike opening ceased following the eruption, but subsurface opening in the dike continued into May 2011. Dike volumes increased from 15, to 16, to 21 million cubic meters (MCM) after the first day, eruption end, and 2 months following, respectively. Dike shape is distinctive, with a main limb plunging from the surface to 2-3 km depth in the up-rift direction toward Kīlauea's summit, and a lesser projection extending in the down-rift direction toward Pu`u `Ō`ō at 2 km depth. Volume losses beneath Kīlauea summit (1.7 MCM) and Pu`u `Ō`ō (5.6 MCM) crater, relative to dike plus erupted volume (18.3 MCM), yield a dike to source volume ratio of 2.5 that is in the range expected for compressible magma without requiring additional sources. Inflation of Kīlauea's summit in the months before the March 2011 eruption suggests that the Kamoamoa eruption resulted from overpressure of the volcano's magmatic system.

  12. The Lateral Decubitus Breast Boost: Description, Rationale, and Efficacy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ludwig, Michelle S., E-mail: mludwig@mdanderson.or; McNeese, Marsha D.; Buchholz, Thomas A.

    2010-01-15

    Purpose: To describe and evaluate the modified lateral decubitus boost, a breast irradiation technique. Patients are repositioned and resimulated for electron boost to minimize the necessary depth for the electron beam and optimize target volume coverage. Methods and Materials: A total of 2,606 patients were treated with post-lumpectomy radiation at our institution between January 1, 2000, and February 1, 2008. Of these, 231 patients underwent resimulation in the lateral decubitus position with electron boost. Distance from skin to the maximal depth of target volume was measured in both the original and boost plans. Age, body mass index (BMI), boost electron energy, and skin reaction were evaluated. Results: Resimulation in the lateral decubitus position reduced the distance from skin to maximal target volume depth in all patients. Average depth reduction by repositioning was 2.12 cm, allowing for an average electron energy reduction of approximately 7 MeV. Mean skin entrance dose was reduced from about 90% to about 85% (p < 0.001). Only 14 patients (6%) experienced moist desquamation in the boost field at the end of treatment. Average BMI of these patients was 30.4 (range, 17.8-50.7). BMI greater than 30 was associated with more depth reduction by repositioning and increased risk of moist desquamation. Conclusions: The lateral decubitus position allows for a decrease in the distance from the skin to the target volume depth, improving electron coverage of the tumor bed while reducing skin entrance dose. This is a well-tolerated regimen for a patient population with a high BMI or deep tumor location.

  13. Hypsometry, volume and physiography of the Arctic Ocean and their paleoceanographic implications

    NASA Astrophysics Data System (ADS)

    Jakobsson, M.; Macnab, R.; Grantz, A.; Kristoffersen, Y.

    2003-04-01

    Recent analyses of the International Bathymetric Chart of the Arctic Ocean (IBCAO) grid model include: Hypsometry (the distribution of surface area at various depths); ocean volume distribution; and physiographic provinces [Jakobsson 2002; Jakobsson et al., in press]. The present paper summarizes the main results from these recent studies and expands on the paleoceanographic implications for the Arctic Ocean, which in this work is defined as the broad continental shelves of the Barents, Kara, Laptev, East Siberian and Chukchi Seas, the White Sea and the narrow continental shelves of the Beaufort Sea, the Arctic continental margins off the Canadian Arctic Archipelago and northern Greenland. This, the World's smallest ocean, is a virtually land-locked ocean that makes up merely 2.6 % of the area, and 1.0 % of the volume, of the entire World Ocean. The continental shelf area, from the coastline out to the shelf break, comprises as much as 52.9 % of the total area in the Arctic Ocean, which is significantly larger in comparison to the rest of the world oceans where the proportion of shelves, from the coastline out to the foot of the continental slope, only ranges between about 9.1 % and 17.7 %. In Jakobsson [2002], the seafloor area and water volume were calculated for different depths starting from the present sea level and progressing in increments of 10 m to a depth of 500 m, and in increments of 50 m from 550 m down to the deepest depth within each of the analyzed Arctic Ocean seas. Hypsometric curves expressed as simple histograms of the frequencies in different depth bins were presented, along with depth plotted against cumulative area for each of the analyzed seas. The derived hypsometric curves show that most of the Arctic Ocean shelf seas besides the Barents Sea, Beaufort Sea and the shelf off northern Greenland have a similar shape with the largest seafloor area between 0 and 50 m. The East Siberian and Laptev seas, in particular, show area distributions concentrated in this shallow depth range, and together with the Chukchi Sea they form a large flat shallow shelf province comprising as much as 22 % of the entire Arctic Ocean area, but only 1 % of the volume. Given this vast shelf area it may be speculated that the Arctic Ocean circulation is more sensitive to eustatic sea level changes compared to the other world oceans. For example, during the LGM when the sea level was ca 120 m lower than today most, if not all, of the Arctic Ocean shelf region could not play a role in the ocean circulation. Besides being the world's smallest ocean with the by far largest shelf area in proportion to its size, the Arctic Ocean is unique in terms of its physiographic setting. The Fram Strait is the only real break in the barrier of vast continental shelves enclosing the Arctic Ocean. The second largest physiographic province after the continental shelves consists of ridges, which is in contrast to the rest of the World's oceans where abyssal plains dominate. As much as 15.8 % of the area is underlain by ridges indicating the profound effect they have on ocean circulation. Jakobsson, M., Grantz, A., Kristoffersen, Y., and Macnab, R., in press, Physiographic Provinces of the Arctic Ocean, GSA Bulletin. Jakobsson, M., 2002, Hypsometry and volume of the Arctic Ocean and its constituent’s seas, Geochemistry Geophysics Geosystems, v. 3, no. 2.
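
    The hypsometric analysis summarized above (seafloor area per depth bin, plus cumulative area with depth) is straightforward to reproduce from any gridded bathymetry. The sketch below is not the IBCAO workflow: it assumes depths are supplied as a NumPy array with a known area per cell, whereas the real grid is on a polar stereographic projection and cell areas must be computed individually; the synthetic grid at the end is purely illustrative.

```python
import numpy as np

def hypsometry(depths_m, cell_area_km2, shallow_step=10.0, deep_step=50.0, split=500.0, max_depth=None):
    """Area per depth bin and cumulative area vs. depth for a bathymetry grid.

    depths_m      : 2-D array of water depths (m, positive down); NaN for land.
    cell_area_km2 : scalar or same-shape array giving each cell's area.
    Bin widths follow the scheme in the abstract: 10 m bins to 500 m, 50 m bins below.
    """
    valid = np.isfinite(depths_m)
    d = depths_m[valid]
    a = np.broadcast_to(cell_area_km2, depths_m.shape)[valid]
    if max_depth is None:
        max_depth = float(d.max())
    edges = np.concatenate([np.arange(0.0, split, shallow_step),
                            np.arange(split, max_depth + deep_step, deep_step)])
    area_per_bin, _ = np.histogram(d, bins=edges, weights=a)
    cumulative_area = np.cumsum(area_per_bin)
    return edges, area_per_bin, cumulative_area

# Illustrative use with a synthetic shelf-plus-basin grid (placeholder data):
rng = np.random.default_rng(1)
fake_depths = np.concatenate([rng.uniform(0, 60, 6000),       # broad shallow shelf
                              rng.uniform(60, 4000, 4000)])    # slope and deep basin
fake_depths = fake_depths.reshape(100, 100)
edges, area, cum = hypsometry(fake_depths, cell_area_km2=1.0)
shelf_fraction = area[edges[:-1] < 500.0].sum() / area.sum()
print(f"fraction of area shallower than 500 m: {shelf_fraction:.2f}")
```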

  14. MRT letter: Guided filtering of image focus volume for 3D shape recovery of microscopic objects.

    PubMed

    Mahmood, Muhammad Tariq

    2014-12-01

    In this letter, a shape from focus (SFF) method is proposed that utilizes the guided image filtering to enhance the image focus volume efficiently. First, image focus volume is computed using a conventional focus measure. Then each layer of image focus volume is filtered using guided filtering. In this work, the all-in-focus image, which can be obtained from the initial focus volume, is used as guidance image. Finally, improved depth map is obtained from the filtered image focus volume by maximizing the focus measure along the optical axis. The proposed SFF method is efficient and provides better depth maps. The improved performance is highlighted by conducting several experiments using image sequences of simulated and real microscopic objects. The comparative analysis demonstrates the effectiveness of the proposed SFF method. © 2014 Wiley Periodicals, Inc.
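
    The pipeline in the abstract (focus volume from a conventional focus measure, per-layer guided filtering with the all-in-focus image as guidance, then an arg-max along the optical axis) can be sketched as follows. This is not the author's code: the focus measure is a simple squared-Laplacian energy and the guided filter is a basic box-filter implementation; the window size, radius, and regularization values are placeholders.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def guided_filter(I, p, radius=4, eps=1e-3):
    """Basic grayscale guided filter: edge-preserving smoothing of p guided by I."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a) * I + box(b)

def sff_depth(image_stack):
    """Shape-from-focus with guided filtering of the focus volume.

    image_stack : (num_slices, H, W) float array of a grayscale focal stack.
    Returns (depth_index_map, all_in_focus_image).
    """
    stack = np.asarray(image_stack, dtype=float)
    # 1) conventional focus measure per slice (squared-Laplacian energy)
    focus_vol = np.stack([uniform_filter(laplace(s) ** 2, size=9) for s in stack])
    # 2) all-in-focus image from the initial focus volume, used as guidance
    init_idx = np.argmax(focus_vol, axis=0)
    aif = np.take_along_axis(stack, init_idx[None], axis=0)[0]
    # 3) guided filtering of each layer of the focus volume
    filt_vol = np.stack([guided_filter(aif, layer) for layer in focus_vol])
    # 4) depth map: arg-max of the filtered focus measure along the optical axis
    depth = np.argmax(filt_vol, axis=0)
    return depth, aif

# Illustrative call with a synthetic 10-slice stack (placeholder data):
stack = np.random.rand(10, 64, 64)
depth_map, all_in_focus = sff_depth(stack)
print(depth_map.shape, all_in_focus.shape)
```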

  15. Lithology of the basement underlying the Campi Flegrei caldera: Volcanological and petrological constraints

    NASA Astrophysics Data System (ADS)

    D'Antonio, Massimo

    2011-02-01

    A geologically reasonable working hypothesis is proposed for the lithology of the basement underlying the Campi Flegrei caldera in the ca. 4-8 km depth range. In most current geophysical modeling, this portion of crust is interpreted as composed of Meso-Cenozoic carbonate rocks, underlain by a ca. 1 km thick sill of partially molten rock, thought to be a main magma reservoir. Shallower magma reservoirs likely occur in the 3-4 km depth range. However, the lack of carbonate lithics in any Campi Flegrei caldera volcanic rocks does not support the hypothesis of a limestone basement. Considering the major caldera-forming eruptions, which generated widespread and voluminous ignimbrites during late Quaternary times, including the Campanian Ignimbrite and Neapolitan Yellow Tuff eruptions, the total volume of trachytic to phonolitic ejected magma is conservatively estimated at not less than 350 km³. Results of least-squared mass-balance calculations suggest that this evolved magma formed through fractional crystallization from at least 2500 km³ of parent shoshonitic magma, in turn derived from even more voluminous, more mafic, K-basaltic magma. Calculations suggest that shoshonitic magma, likely emplaced at ca. 8 km depth, must have crystallized about 2100 km³ of solid material, dominated by alkali-feldspar and plagioclase, with a slightly lower amount of mafic minerals, during its route toward shallower magma reservoirs, before feeding the Campi Flegrei large-volume eruptions. The calculated volume of cumulate material, likely syenitic in composition at least in its upper portions, is more than enough to completely fill the basement volume in the 4-8 km depth range beneath the Campi Flegrei caldera, estimated at ca. 1250 km³. Thus, it is proposed that the basement underlying the Campi Flegrei caldera below 4 km is composed mostly of crystalline igneous rocks, as for many large calderas worldwide. Syenite sensu lato would meet physical properties requirements for geophysical data interpretations, explain some geochemical and isotopic features of the past 15 ka volcanics, and justify the carbon isotopic composition of fumaroles at the Campi Flegrei caldera. This implies that Meso-Cenozoic limestones, if still present today beneath the Campi Flegrei caldera, no longer constitute significant portions of its basement.

  16. Volume of home- and community-based services and time to nursing-home placement.

    PubMed

    Sands, Laura P; Xu, Huiping; Thomas, Joseph; Paul, Sudeshna; Craig, Bruce A; Rosenman, Marc; Doebbeling, Caroline C; Weiner, Michael

    2012-01-01

    The purpose of this study was to determine whether the volume of Home- and Community-Based Services (HCBS) that target Activities of Daily Living disabilities, such as attendant care, homemaking services, and home-delivered meals, increases recipients' risk of transitioning from long-term care provided through HCBS to long-term care provided in a nursing home. Data are from the Indiana Medicaid enrollment, claims, and Insite databases. Insite is the software system that was developed for collecting and reporting data for In-Home Service Programs. Enrollees in Indiana Medicaid's Aged and Disabled Waiver program were followed forward from time of enrollment to assess the association between the volume of attendant care, homemaking services, home-delivered meals, and related covariates, and the risk for nursing-home placement. An extension of the Cox proportional hazard model was computed to determine the cumulative hazard of nursing-home placement in the presence of death as a competing risk. Of the 1354 Medicaid HCBS recipients followed in this study, 17% did not receive any attendant care, homemaking services, or home-delivered meals. Among recipients who survived through 24 months after enrollment, one in five transitioned from HCBS to a nursing-home. Risk for nursing-home placement was significantly lower for each five-hour increment in personal care (HR=0.95, 95% CI=0.92-0.98) and homemaking services (HR=0.87, 95% CI=0.77-0.99). Future policies and practices that are focused on optimizing long-term care outcomes should consider that a greater volume of HCBS for an individual is associated with reduced risk of nursing-home placement.

  17. Joint Ecosystem Modeling (JEM) ecological model documentation volume 1: Estuarine prey fish biomass availability v1.0.0

    USGS Publications Warehouse

    Romañach, Stephanie S.; Conzelmann, Craig; Daugherty, Adam; Lorenz, Jerome L.; Hunnicutt, Christina; Mazzotti, Frank J.

    2011-01-01

    Estuarine fish serve as an important prey base in the Greater Everglades ecosystem for key fauna such as wading birds, crocodiles, alligators, and piscivorous fishes. Human-made changes to freshwater flow across the Greater Everglades have resulted in less freshwater flow into the fringing estuaries and coasts. These changes in freshwater input have altered salinity patterns and negatively affected primary production of the estuarine fish prey base. Planned restoration projects should affect salinity and water depth both spatially and temporally and result in an increase in appropriate water conditions in areas occupied by estuarine fish. To assist in restoration planning, an ecological model of estuarine prey fish biomass availability was developed as an evaluation tool to aid in the determination of acceptable ranges of salinity and water depth. Comparisons of model output to field data indicate that the model accurately predicts prey biomass in the estuarine regions of the model domain. This model can be used to compare alternative restoration plans and select those that provide suitable conditions.

  18. Real-time Optimization of Distributed Energy Storage System Operation Strategy Based on Peak Load Shifting

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Lu, Guangqi; Li, Xiaoyu; Zhang, Yichi; Yun, Zejian; Bian, Di

    2018-01-01

    To take full advantage of the energy storage system (ESS), factors such as the service life of the distributed energy storage system (DESS) and the load should be considered when establishing the optimization model. To reduce the complexity of DESS load shifting in the solution procedure, a loss coefficient and an equal-capacity-ratio distribution principle were adopted in this paper. First, the model was established considering constraints on the number of cycles, the depth, and the power of ESS charge-discharge, together with typical daily load curves. Then, a dynamic programming method was used to solve the model in real time, in which the power difference Δs, the real-time revised energy storage capacity Sk, and the permitted error of charge-discharge depth were introduced to optimize the solution process. The simulation results show that optimized operation was achieved: when load shifting would not reduce the load variance it was not scheduled, meaning that charge-discharge of the energy storage system was not executed in those cases. At the same time, the service life of the ESS would increase.
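
    The abstract does not give enough detail to reproduce the dynamic-programming solution, but the underlying peak-load-shifting behaviour subject to charge-discharge power, capacity, and depth limits can be illustrated with a much simpler threshold dispatch. Everything below (the daily load curve, battery ratings, efficiency, and the mean-load threshold) is an assumed placeholder, not data from the paper.

```python
import numpy as np

# Hedged sketch of peak-load shifting with a battery (threshold dispatch, not the
# paper's dynamic-programming formulation). Hourly steps (dt = 1 h), so kW and kWh
# interchange directly. All parameters are illustrative.
load = np.array([420, 400, 390, 395, 430, 520, 650, 780, 820, 800,
                 760, 740, 730, 750, 790, 830, 870, 900, 860, 760,
                 650, 560, 490, 440], dtype=float)            # kW, typical daily curve

cap_kwh = 1000.0              # storage capacity
p_max = 200.0                 # charge/discharge power limit (kW)
soc_min, soc_max = 0.2, 0.9   # allowed state-of-charge window (depth limits)
eta = 0.95                    # one-way efficiency
threshold = load.mean()       # discharge above, charge below

soc = 0.5 * cap_kwh
net = load.copy()
for h, L in enumerate(load):
    if L > threshold:         # shave the peak
        p = min(p_max, L - threshold, (soc - soc_min * cap_kwh) * eta)
        soc -= p / eta
        net[h] = L - p
    else:                     # fill the valley
        p = min(p_max, threshold - L, (soc_max * cap_kwh - soc) / eta)
        soc += p * eta
        net[h] = L + p

print(f"peak before/after: {load.max():.0f} / {net.max():.0f} kW")
print(f"load variance before/after: {load.var():.0f} / {net.var():.0f}")
```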

  19. P- and S-wave velocity models incorporating the Cascadia subduction zone for 3D earthquake ground motion simulations—Update for Open-File Report 2007–1348

    USGS Publications Warehouse

    Stephenson, William J.; Reitman, Nadine G.; Angster, Stephen J.

    2017-12-20

    In support of earthquake hazards studies and ground motion simulations in the Pacific Northwest, three-dimensional (3D) P- and S-wave velocity (VP and VS, respectively) models incorporating the Cascadia subduction zone were previously developed for the region encompassed from about 40.2°N. to 50°N. latitude, and from about 122°W. to 129°W. longitude (fig. 1). This report describes updates to the Cascadia velocity property volumes of model version 1.3 ([V1.3]; Stephenson, 2007), herein called model version 1.6 (V1.6). As in model V1.3, the updated V1.6 model volume includes depths from 0 kilometers (km) (mean sea level) to 60 km, and it is intended to be a reference for researchers who have used, or are planning to use, this model in their earth science investigations. To this end, it is intended that the VP and VS property volumes of model V1.6 will be considered a template for a community velocity model of the Cascadia region as additional results become available. With the recent and ongoing development of the National Crustal Model (NCM; Boyd and Shah, 2016), we envision any future versions of this model will be directly integrated with that effort.

  20. Scheimpflug with computational imaging to extend the depth of field of iris recognition systems

    NASA Astrophysics Data System (ADS)

    Sinharoy, Indranil

    Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances beyond which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits the iris image capture to a small volume-the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with the modern computational imaging techniques to extend the capture volume? The solution we found is, surprisingly, simple; yet, it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration, and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporates the pupil parameters. The model revealed that we could register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration making AFS suitable for real-time performance. We have demonstrated up to an order of magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution and signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure, conventional image for the same DOF and brightness level. The net reduction in capture time can significantly relax the constraints on subject movement during iris acquisition, making it less restrictive.

  1. The Impact of Hospital Size on CMS Hospital Profiling.

    PubMed

    Sosunov, Eugene A; Egorova, Natalia N; Lin, Hung-Mo; McCardle, Ken; Sharma, Vansh; Gelijns, Annetine C; Moskowitz, Alan J

    2016-04-01

    The Centers for Medicare & Medicaid Services (CMS) profile hospitals using a set of 30-day risk-standardized mortality and readmission rates as a basis for public reporting. These measures are affected by hospital patient volume, raising concerns about uniformity of standards applied to providers with different volumes. To quantitatively determine whether CMS uniformly profile hospitals that have equal performance levels but different volumes. Retrospective analysis of patient-level and hospital-level data using hierarchical logistic regression models with hospital random effects. Simulation of samples including a subset of hospitals with different volumes but equal poor performance (hospital effects=+3 SD in random-effect logistic model). A total of 1,085,568 Medicare fee-for-service patients undergoing 1,494,993 heart failure admissions in 4930 hospitals between July 1, 2005 and June 30, 2008. CMS methodology was used to determine the rank and proportion (by volume) of hospitals reported to perform "Worse than US National Rate." Percent of hospitals performing "Worse than US National Rate" was ∼40 times higher in the largest (fifth quintile by volume) compared with the smallest hospitals (first quintile). A similar gradient was seen in a cohort of 100 hospitals with simulated equal poor performance (0%, 0%, 5%, 20%, and 85% in quintiles 1 to 5) effectively leaving 78% of poor performers undetected. Our results illustrate the disparity of impact that the current CMS method of hospital profiling has on hospitals with higher volumes, translating into lower thresholds for detection and reporting of poor performance.
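
    The volume effect reported above is a direct consequence of the shrinkage built into hierarchical (random-effects) estimates. The short simulation below is only a schematic of that mechanism, not the CMS methodology: it replaces the hierarchical logistic model with a normal-approximation, empirical-Bayes shrinkage of hospital log-odds, and all rates, variances, volumes, and flagging rules are invented for illustration.

```python
import numpy as np

# Schematic: why equally poor low-volume hospitals escape flagging under shrinkage.
# Not the CMS method: empirical-Bayes shrinkage of log-odds with a normal approximation.
rng = np.random.default_rng(42)

base_rate = 0.11                     # assumed national 30-day mortality rate
tau = 0.15                           # SD of true hospital effects on the logit scale (assumed)
volumes = np.array([30, 80, 200, 500, 1500])    # cases per hospital, one value per "quintile"
n_per_group = 200
national_logit = np.log(base_rate / (1 - base_rate))

flag_fraction = []
for n in volumes:
    # every hospital in this group is a true poor performer at +3 SD
    p_true = 1 / (1 + np.exp(-(national_logit + 3 * tau)))
    deaths = rng.binomial(n, p_true, size=n_per_group)
    p_obs = np.clip(deaths / n, 1e-3, 1 - 1e-3)
    obs_logit = np.log(p_obs / (1 - p_obs))
    sigma2 = 1.0 / (n * p_obs * (1 - p_obs))       # approx. sampling variance of the logit
    w = tau**2 / (tau**2 + sigma2)                 # empirical-Bayes shrinkage weight
    shrunk = w * obs_logit + (1 - w) * national_logit
    # "flag" a hospital if its shrunk estimate is clearly above the national value
    z = (shrunk - national_logit) / np.sqrt(w * sigma2)
    flag_fraction.append(np.mean(z > 1.96))

for n, f in zip(volumes, flag_fraction):
    print(f"volume {n:>5d}: {100 * f:5.1f}% of equally poor hospitals flagged")
```

    With these invented numbers the flagged fraction climbs steeply with volume, reproducing the qualitative gradient the study reports: low-volume hospitals are shrunk so strongly toward the national rate that identical poor performance is rarely detected.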

  2. Penetration Depth and Defect Image Contrast Formation in Grazing-Incidence X-ray Topography of 4H-SiC Wafers

    NASA Astrophysics Data System (ADS)

    Yang, Yu; Guo, Jianqiu; Goue, Ouloide Yannick; Kim, Jun Gyu; Raghothamachar, Balaji; Dudley, Michael; Chung, Gill; Sanchez, Edward; Manning, Ian

    2018-02-01

    Synchrotron x-ray topography in grazing-incidence geometry is useful for discerning defects at different depths below the crystal surface, particularly for 4H-SiC epitaxial wafers. However, the penetration depths measured from x-ray topographs are much larger than theoretical values. To interpret this discrepancy, we have simulated the topographic contrast of dislocations based on two of the most basic contrast formation mechanisms, viz. orientation and kinematical contrast. Orientation contrast considers merely displacement fields associated with dislocations, while kinematical contrast considers also diffraction volume, defined as the effective misorientation around dislocations and the rocking curve width for given diffraction vector. Ray-tracing simulation was carried out to visualize dislocation contrast for both models, taking into account photoelectric absorption of the x-ray beam inside the crystal. The results show that orientation contrast plays the key role in determining both the contrast and x-ray penetration depth for different types of dislocation.
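
    For orientation, the absorption-limited penetration depth usually quoted for reflection-geometry topography, with incidence and exit angles α and β measured from the surface and linear absorption coefficient μ, is shown below; this standard expression is added for context and is not reproduced from the paper:

```latex
t_{1/e} \;=\; \frac{1}{\mu\left(\frac{1}{\sin\alpha} + \frac{1}{\sin\beta}\right)}
       \;=\; \frac{\sin\alpha\,\sin\beta}{\mu\,(\sin\alpha + \sin\beta)}
```

    The observation in the abstract is that dislocations remain visible from depths well beyond this estimate, which is the discrepancy the orientation-contrast and kinematical-contrast ray-tracing simulations set out to explain.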

  3. An acute care surgery rotation contributes significant general surgical operative volume to residency training compared with other rotations.

    PubMed

    Stanley, Matthew D; Davenport, Daniel L; Procter, Levi D; Perry, Jacob E; Kearney, Paul A; Bernard, Andrew C

    2011-03-01

    Surgical resident rotations on trauma services are criticized for little operative experience and heavy workloads. This has resulted in diminished interest in trauma surgery among surgical residents. Acute care surgery (ACS) combines trauma and emergency/elective general surgery, enhancing operative volume and balancing operative and nonoperative effort. We hypothesize that a mature ACS service provides significant operative experience. A retrospective review was performed of ACGME case logs of 14 graduates from a major, academic, Level I trauma center program during a 3-year period. Residency Review Committee index case volumes during the fourth and fifth years of postgraduate training (PGY-4 and PGY-5) ACS rotations were compared with other service rotations: in total and per resident week on service. Ten thousand six hundred fifty-four cases were analyzed for 14 graduates. Mean cases per resident was 432 ± 57 in PGY-4, 330 ± 40 in PGY-5, and 761 ± 67 for both years combined. Mean case volume on ACS for both years was 273 ± 44, which represented 35.8% (273 of 761) of the total experience and exceeded all other services. Residents averaged 8.9 cases per week on the ACS service, which exceeded all other services except private general surgery, gastrointestinal/minimally invasive surgery, and pediatric surgery rotations. Disproportionately more head/neck, small and large intestine, gastric, spleen, laparotomy, and hernia cases occurred on ACS than on other services. Residents gain a large operative experience on ACS. An ACS model is viable in training, provides valuable operative experience, and should not be considered a drain on resident effort. Valuable ACS rotation experiences as a resident may encourage graduates to pursue ACS as a career. Copyright © 2011 by Lippincott Williams & Wilkins

  4. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
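
    For readers unfamiliar with the governing system, the depth-averaged shallow water equations referred to above are, in the conservative hyperbolic form commonly used by GeoClaw-type solvers (h flow depth, u and v depth-averaged velocities, g gravity, b bottom elevation; friction omitted):

```latex
\begin{aligned}
&\partial_t h + \partial_x (hu) + \partial_y (hv) = 0,\\
&\partial_t (hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2}gh^2\right) + \partial_y (huv) = -\,g h\,\partial_x b,\\
&\partial_t (hv) + \partial_x (huv) + \partial_y\!\left(hv^2 + \tfrac{1}{2}gh^2\right) = -\,g h\,\partial_y b.
\end{aligned}
```

    The 'well-balanced' property mentioned in the abstract means the discretization of the pressure flux and the bathymetry source term cancels exactly for the ocean-at-rest steady state, so small tsunami-scale perturbations are not swamped by numerical artifacts.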

  5. Service Level Decision-making in Rural Physiotherapy: Development of Conceptual Models.

    PubMed

    Adams, Robyn; Jones, Anne; Lefmann, Sophie; Sheppard, Lorraine

    2016-06-01

    Understanding decision-making about health service provision is increasingly important in an environment of increasing demand and constrained resources. Multiple factors are likely to influence decisions about which services will be provided, yet workforce is the most noted factor in the rural physiotherapy literature. This paper draws together results obtained from exploration of service level decision-making (SLDM) to propose 'conceptual' models of rural physiotherapy SLDM. A prioritized qualitative approach enabled exploration of participant perspectives about rural physiotherapy decision-making. Stakeholder perspectives were obtained through surveys and in-depth interviews. Interviews were transcribed verbatim and reviewed by participants. Participant confidentiality was maintained by coding both participants and sites. A system theory-case study heuristic provided a framework for exploration across sites within the investigation area: a large area of one Australian state with a mix of regional, rural and remote communities. Thirty-nine surveys were received from participants in 11 communities. Nineteen in-depth interviews were conducted with physiotherapists and key decision-makers. Results reveal the complexity of factors influencing rural physiotherapy service provision and the value of a systems approach when exploring decision-making about rural physiotherapy service provision. Six key features were identified that formed the rural physiotherapy SLDM system: capacity and capability; contextual influences; layered decision-making; access issues; value and beliefs; and tensions and conflict. Rural physiotherapy SLDM is not a one-dimensional process but results from the complex interaction of clusters of systems issues. Decision-making about physiotherapy service provision is influenced by both internal and external factors. Similarities in influencing factors and the iterative nature of decision-making emerged, which enabled linking physiotherapy SLDM with clinical decision-making and placing both within the broader healthcare context. The conceptual models provide a way of thinking about decisions informing rural physiotherapy service provision. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Tree Sampling as a Method to Assess Vapor Intrusion Potential at a Site Characterized by VOC-Contaminated Groundwater and Soil.

    PubMed

    Wilson, Jordan L; Limmer, Matthew A; Samaranayake, V A; Schumacher, John G; Burken, Joel G

    2017-09-19

    Vapor intrusion (VI) by volatile organic compounds (VOCs) in the built environment presents a threat to human health. Traditional VI assessments are often time-, cost-, and labor-intensive; whereas traditional subsurface methods sample a relatively small volume in the subsurface and are difficult to collect within and near structures. Trees could provide a similar subsurface sample where roots act as the "sampler" and are already onsite. Regression models were developed to assess the relation between PCE concentrations in over 500 tree-core samples with PCE concentrations in over 50 groundwater and 1000 soil samples collected from a tetrachloroethylene- (PCE-) contaminated Superfund site and analyzed using gas chromatography. Results indicate that in planta concentrations are significantly and positively related to PCE concentrations in groundwater samples collected at depths less than 20 m (adjusted R² values greater than 0.80) and in soil samples (adjusted R² values greater than 0.90). Results indicate that a 30 cm diameter tree characterizes soil concentrations at depths less than 6 m over an area of 700-1600 m², the volume of a typical basement. These findings indicate that tree sampling may be an appropriate method to detect contamination at shallow depths at sites with VI.

  7. Tree sampling as a method to assess vapor intrusion potential at a site characterized by VOC-contaminated groundwater and soil

    USGS Publications Warehouse

    Wilson, Jordan L.; Limmer, Matthew A.; Samaranayake, V. A.; Schumacher, John G.; Burken, Joel G.

    2017-01-01

    Vapor intrusion (VI) by volatile organic compounds (VOCs) in the built environment presents a threat to human health. Traditional VI assessments are often time-, cost-, and labor-intensive; whereas traditional subsurface methods sample a relatively small volume in the subsurface and are difficult to collect within and near structures. Trees could provide a similar subsurface sample where roots act as the “sampler” and are already onsite. Regression models were developed to assess the relation between PCE concentrations in over 500 tree-core samples with PCE concentrations in over 50 groundwater and 1000 soil samples collected from a tetrachloroethylene- (PCE-) contaminated Superfund site and analyzed using gas chromatography. Results indicate that in planta concentrations are significantly and positively related to PCE concentrations in groundwater samples collected at depths less than 20 m (adjusted R² values greater than 0.80) and in soil samples (adjusted R² values greater than 0.90). Results indicate that a 30 cm diameter tree characterizes soil concentrations at depths less than 6 m over an area of 700–1600 m², the volume of a typical basement. These findings indicate that tree sampling may be an appropriate method to detect contamination at shallow depths at sites with VI.

  8. Promoting Community Renewal through Civic Literacy and Service Learning. New Directions for Community Colleges, Number 93.

    ERIC Educational Resources Information Center

    Parsons, Michael H., Ed.; Lisman, C. David, Ed.

    1996-01-01

    Based on the idea that community colleges have a critical role in enhancing civic literacy through community-based programming and service learning, this volume provides descriptions of theoretical frameworks and practical models for incorporating community renewal into the college mission. The following articles are provided: (1) "Service…

  9. Quantifying Contaminant Mass for the Feasibility Study of the DuPont Chambers Works FUSRAP Site - 13510

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Carl; Rahman, Mahmudur; Johnson, Ann

    2013-07-01

    The U.S. Army Corps of Engineers (USACE) - Philadelphia District is conducting an environmental restoration at the DuPont Chambers Works in Deepwater, New Jersey under the Formerly Utilized Sites Remedial Action Program (FUSRAP). Discrete locations are contaminated with natural uranium, thorium-230 and radium-226. The USACE is proposing a preferred remedial alternative consisting of excavation and offsite disposal to address soil contamination followed by monitored natural attenuation to address residual groundwater contamination. Methods were developed to quantify the error associated with contaminant volume estimates and use mass balance calculations of the uranium plume to estimate the removal efficiency of the proposed alternative. During the remedial investigation, the USACE collected approximately 500 soil samples at various depths. As the first step of contaminant mass estimation, soil analytical data was segmented into several depth intervals. Second, using contouring software, analytical data for each depth interval was contoured to determine lateral extent of contamination. Six different contouring algorithms were used to generate alternative interpretations of the lateral extent of the soil contamination. Finally, geographical information system software was used to produce a three dimensional model in order to present both lateral and vertical extent of the soil contamination and to estimate the volume of impacted soil for each depth interval. The average soil volume from all six contouring methods was used to determine the estimated volume of impacted soil. This method also allowed an estimate of a standard deviation of the waste volume estimate. It was determined that the margin of error for the method was plus or minus 17% of the waste volume, which is within the acceptable construction contingency for cost estimation. USACE collected approximately 190 groundwater samples from 40 monitor wells. It is expected that excavation and disposal of contaminated soil will remove the contaminant source zone and significantly reduce contaminant concentrations in groundwater. To test this assumption, a mass balance evaluation was performed to estimate the amount of dissolved uranium that would remain in the groundwater after completion of soil excavation. As part of this evaluation, average groundwater concentrations for the pre-excavation and post-excavation aquifer plume area were calculated to determine the percentage of plume removed during excavation activities. In addition, the volume of the plume removed during excavation dewatering was estimated. The results of the evaluation show that approximately 98% of the aqueous uranium would be removed during the excavation phase. The USACE expects that residual levels of contamination will remain in groundwater after excavation of soil but at levels well suited for the selection of excavation combined with monitored natural attenuation as a preferred alternative. (authors)
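
    The volume-uncertainty procedure described above (an average impacted-soil volume over several contouring algorithms, with the spread used as the margin of error) can be sketched as follows; the depth intervals, method names, and areas are placeholders rather than site data.

```python
import numpy as np

# Hedged sketch: impacted-soil volume from contoured areas per depth interval,
# averaged over several contouring algorithms. All values are placeholders.
interval_thickness_m = np.array([0.5, 0.5, 1.0, 1.0, 2.0])    # depth-interval thicknesses

# contoured area (m^2) exceeding the cleanup criterion, per interval, per method
areas_m2 = {
    "kriging":          [1200,  950, 700, 400, 150],
    "natural_neighbor": [1100,  900, 650, 420, 170],
    "idw_power2":       [1350, 1000, 780, 450, 160],
    "idw_power3":       [1300,  980, 760, 430, 155],
    "min_curvature":    [1150,  920, 690, 410, 165],
    "tin_linear":       [1250,  960, 720, 440, 158],
}

volumes = {name: float(np.dot(a, interval_thickness_m)) for name, a in areas_m2.items()}
vals = np.array(list(volumes.values()))
mean_v, std_v = vals.mean(), vals.std(ddof=1)

print({k: round(v, 1) for k, v in volumes.items()})
print(f"design volume = {mean_v:.0f} m^3 +/- {std_v:.0f} m^3 "
      f"({100 * std_v / mean_v:.0f}% of the mean)")
```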

  10. 39 CFR 3010.24 - Treatment of volume associated with negotiated service agreements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...Treatment of volume associated with negotiated service agreements. (a) Mail volumes sent at rates under negotiated service agreements are to be included in the calculation of percentage change in rates as though they paid...

  11. Formation of South Pole-Aitken Basin as the Result of an Oblique Impact: Implications for Melt Volume and Source of Exposed Materials

    NASA Technical Reports Server (NTRS)

    Petro, N. E.

    2012-01-01

    The South Pole-Aitken Basin (SPA) is the largest, deepest, and oldest identified basin on the Moon and contains surfaces that are unique due to their age, composition, and depth of origin in the lunar crust [1-3] (Figure 1). SPA has been a target of interest as an area for robotic sample return in order to determine the age of the basin and the composition and origin of its interior [3-6]. As part of the investigation into the origin of SPA materials there have been several efforts to estimate the likely provenance of regolith material in central SPA [5, 6]. These model estimates suggest that, despite the formation of basins and craters following SPA, the regolith within SPA is dominated by locally derived material. An assumption inherent in these models has been that the locally derived material is primarily SPA impact-melt as opposed to local basement material (e.g. unmelted lower crust). However, the definitive identification of SPA derived impact melt on the basin floor, either by remote sensing [2, 7] or via photogeology [8] is extremely difficult due to the number of subsequent impacts and volcanic activity [3, 4]. In order to identify where SPA produced impact melt may be located, it is important to constrain both how much melt would have been produced in a basin forming impact and the likely source of such melted material. Models of crater and basin formation [9, 10] present clear rationale for estimating the possible volumes and sources of impact melt produced during SPA formation. However, if SPA formed as the result of an oblique impact [11, 12], the volume and depth of origin of melted material could be distinct from similar material in a vertical impact [13].

  12. Service Modeling for Service Engineering

    NASA Astrophysics Data System (ADS)

    Shimomura, Yoshiki; Tomiyama, Tetsuo

    Intensification of service and knowledge contents within product life cycles is considered crucial for dematerialization, in particular for designing optimal product-service systems from the viewpoint of environmentally conscious design and manufacturing in advanced post-industrial societies. In addition to environmental limitations, we face social limitations, including the limited capacity of markets to absorb increasing numbers of mass-produced artifacts; together, these environmental and social limitations are restraining economic growth. To address these problems, we need to reconsider the current mass-production paradigm and give products more added value, largely from knowledge and service contents, to compensate for volume reduction under the concept of dematerialization. In other words, dematerialization of products requires enriched service contents. However, service has mainly been discussed within marketing and has been mostly neglected within traditional engineering. Therefore, we need new engineering methods that look at services, rather than just functions, called "Service Engineering." To establish service engineering, this paper proposes a modeling technique for services.

  13. Ejecta from large craters on the moon - Comments on the geometric model of McGetchin et al

    NASA Technical Reports Server (NTRS)

    Pike, R. J.

    1974-01-01

    Amendments to a quantitative scheme developed by T. R. McGetchin et al. (1973) for predicting the distribution of ejecta from lunar basins yield substantially thicker estimates of ejecta, deposited at the basin rim-crest and at varying ranges beyond, than does the original model. Estimates of the total volume of material ejected from a basin, illustrated by Imbrium, also are much greater. Because many uncertainties affect any geometric model developed primarily from terrestrial analogs of lunar craters, predictions of ejecta thickness and volume on the moon may range within at least an order of magnitude. These problems are exemplified by the variability of T, thickness of ejecta at the rim-crest of terrestrial experimental craters. The proportion of T to crater rim-height depends critically upon scaled depth-of-burst and whether the explosive is nuclear or chemical.
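
    For context, the geometric relation under discussion is usually quoted in the form below, with ejecta thickness t decaying with range r from the crater center, R the rim-crest radius, and T the thickness at the rim; the exponents are the commonly cited McGetchin et al. (1973) values and should be checked against the original paper and Pike's comment rather than taken from this summary:

```latex
t(r) \;=\; T\left(\frac{r}{R}\right)^{-3.0}, \qquad T \approx 0.14\, R^{0.74} \quad (r \ge R;\ T,\ R\ \text{in metres})
```

    Pike's amendments concern both the rim-crest value T implied by terrestrial analogs and the total ejecta volume obtained when a relation of this form is integrated outward from the rim.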

  14. SToRM: A Model for 2D environmental hydraulics

    USGS Publications Warehouse

    Simões, Francisco J. M.

    2017-01-01

    A two-dimensional (depth-averaged) finite volume Godunov-type shallow water model developed for flow over complex topography is presented. The model, SToRM, is based on an unstructured cell-centered finite volume formulation and on nonlinear strong stability preserving Runge-Kutta time stepping schemes. The numerical discretization is founded on the classical and well established shallow water equations in hyperbolic conservative form, but the convective fluxes are calculated using auto-switching Riemann and diffusive numerical fluxes. Computational efficiency is achieved through a parallel implementation based on the OpenMP standard and the Fortran programming language. SToRM’s implementation within a graphical user interface is discussed. Field application of SToRM is illustrated by utilizing it to estimate peak flow discharges in a flooding event of the St. Vrain Creek in Colorado, U.S.A., in 2013, which reached 850 m³/s (~30,000 ft³/s) at the location of this study.
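
    As an illustration of the "nonlinear strong stability preserving Runge-Kutta" time stepping mentioned above, one widely used member of that family (the three-stage, third-order Shu-Osher scheme; the abstract does not state which member SToRM uses) advances the semi-discrete system u' = L(u) as:

```latex
\begin{aligned}
u^{(1)} &= u^{n} + \Delta t\, L\!\left(u^{n}\right),\\
u^{(2)} &= \tfrac{3}{4} u^{n} + \tfrac{1}{4}\left[u^{(1)} + \Delta t\, L\!\left(u^{(1)}\right)\right],\\
u^{n+1} &= \tfrac{1}{3} u^{n} + \tfrac{2}{3}\left[u^{(2)} + \Delta t\, L\!\left(u^{(2)}\right)\right].
\end{aligned}
```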

  15. Wear model simulating clinical abrasion on composite filling materials.

    PubMed

    Johnsen, Gaute Floer; Taxt-Lamolle, Sébastien F; Haugen, Håvard J

    2011-01-01

    The aim of this study was to establish a wear model for testing composite filling materials with abrasion properties closer to a clinical situation. In addition, the model was used to evaluate the effect of filler volume and particle size on surface roughness and wear resistance. Each incisor tooth was prepared with nine identical standardized cavities with respect to depth, diameter, and angle. Generic composite of 3 different filler volumes and 3 different particle sizes held together with the same resin were randomly filled in respective cavities. A multidirectional wet-grinder with molar cusps as antagonist wore the surface of the incisors containing the composite fillings in a bath of human saliva at a constant temperature of 37°C. The present study suggests that the most wear resistant filling materials should consist of medium filling content (75%) and that particles size is not as critical as earlier reported.

  16. A depth-averaged debris-flow model that includes the effects of evolving dilatancy: II. Numerical predictions and experimental tests.

    USGS Publications Warehouse

    George, David L.; Iverson, Richard M.

    2014-01-01

    We evaluate a new depth-averaged mathematical model that is designed to simulate all stages of debris-flow motion, from initiation to deposition. A companion paper shows how the model’s five governing equations describe simultaneous evolution of flow thickness, solid volume fraction, basal pore-fluid pressure, and two components of flow momentum. Each equation contains a source term that represents the influence of state-dependent granular dilatancy. Here we recapitulate the equations and analyze their eigenstructure to show that they form a hyperbolic system with desirable stability properties. To solve the equations we use a shock-capturing numerical scheme with adaptive mesh refinement, implemented in an open-source software package we call D-Claw. As tests of D-Claw, we compare model output with results from two sets of large-scale debris-flow experiments. One set focuses on flow initiation from landslides triggered by rising pore-water pressures, and the other focuses on downstream flow dynamics, runout, and deposition. D-Claw performs well in predicting evolution of flow speeds, thicknesses, and basal pore-fluid pressures measured in each type of experiment. Computational results illustrate the critical role of dilatancy in linking coevolution of the solid volume fraction and pore-fluid pressure, which mediates basal Coulomb friction and thereby regulates debris-flow dynamics.

  17. Cerebrospinal Fluid Metabolomics After Natural Product Treatment in an Experimental Model of Cerebral Ischemia.

    PubMed

    Huan, Tao; Xian, Jia Wen; Leung, Wing Nang; Li, Liang; Chan, Chun Wai

    2016-11-01

    Cerebrospinal fluid (CSF) is an important biofluid for diagnosis of and research on neurological diseases. However, in-depth metabolomic profiling of CSF remains an analytical challenge due to the small volume of samples, particularly in small animal models. In this work, we report the application of a high-performance chemical isotope labeling (CIL) liquid chromatography-mass spectrometry (LC-MS) workflow for CSF metabolomics in Gastrodia elata and Uncaria rhynchophylla water extract (GUW)-treated experimental cerebral ischemia model of rat. The GUW is a commonly used Traditional Chinese Medicine (TCM) for hypertension and brain disease. This study investigated the amine- and phenol-containing biomarkers in the CSF metabolome. After GUW treatment for 7 days, the neurological deficit score was significantly improved with infarct volume reduction, while the integrity of brain histological structure was preserved. Over 1957 metabolites were quantified in CSF by dansylation LC-MS. The analysis of this comprehensive list of metabolites suggests that metabolites associated with oxidative stress, inflammatory response, and excitotoxicity change during GUW-induced alleviation of ischemic injury. This work is significant in that (1) it shows CIL LC-MS can be used for in-depth profiling of the CSF metabolome in experimental ischemic stroke and (2) identifies several potential molecular targets (that might mediate the central nervous system) and associate with pharmacodynamic effects of some frequently used TCMs.

  18. Aftershocks of the western Argentina (Caucete) earthquake of 23 November 1977: some tectonic implications

    USGS Publications Warehouse

    Langer, C.J.; Bollinger, G.A.

    1988-01-01

    An aftershock survey, using a network of eight portable and two permanent seismographs, was conducted for the western Argentina (Caucete) earthquake (MS 7.3) of November 23, 1977. Monitoring began December 6, almost 2 weeks after the main shock, and continued for 11 days. The data set includes 185 aftershock hypocenters that range in depth from near surface to more than 30 km. The spatial distribution of those events occupied a volume about 100 km long × 50 km wide × 30 km thick. The volumetric nature of the aftershock distribution is interpreted to be a result of a bimodal distribution of foci that define east- and west-dipping planar zones. Efforts to select which of those zones was associated with the causal faulting include special attention to the determination of the mainshock focal depth and dislocation theory modeling of the coseismic surface deformation in the epicentral region. Our focal depth (25-35 km) and modeling studies lead us to prefer an east-dipping plane as causal. A previous interpretation by other investigators used a shallower focal depth (17 km) and similar modeling calculations in choosing a west-dipping plane. Our selection of the east-dipping plane is physically more appealing because it places fault initiation at the base of the crustal seismogenic layer (rather than in the middle of that layer), which requires fault propagation to be updip (rather than downdip). © 1988.

  19. A global reference model of Moho depths based on WGM2012

    NASA Astrophysics Data System (ADS)

    Zhou, D.; Li, C.

    2017-12-01

    The crust-mantle boundary (Moho discontinuity) represents the largest density contrast in the lithosphere, which can be detected by the Bouguer gravity anomaly. We present our recent inversion of global Moho depths from the World Gravity Map 2012. Because oceanic lithosphere increases in density as it cools, we perform a thermal correction based on the plate cooling model. We adopt a temperature Tm = 1300°C at the bottom of the lithosphere. The plate thickness is tested in 5 km increments from 90 to 140 km and is taken as 130 km, the value that gives the best-fit crustal thickness constrained by seismic crustal thickness profiles. We obtain the residual Bouguer gravity anomalies by subtracting the thermal correction from WGM2012, and then estimate Moho depths based on the Parker-Oldenburg algorithm. Taking the global model Crust1.0 as an a priori constraint, we adopt Moho density contrasts of 0.43 and 0.4 g/cm3, and initial mean Moho depths of 37 and 20 km, in the continental and oceanic domains, respectively. The number of iterations in the inversion is set to 150, which is large enough to obtain an error lower than a pre-assigned convergence criterion. The estimated Moho depths range between 0 and 76 km, and average 36 and 15 km in the continental and oceanic domains, respectively. Our results correlate very well with Crust1.0, with differences mostly within ±5.0 km. Compared to the low resolution of Crust1.0 in the oceanic domain, our results have a much larger depth range reflecting diverse structures such as ridges, seamounts, volcanic chains and subduction zones. Based on this model, we find that young (<5 Ma) oceanic crust thicknesses show a dependence on spreading rate: (1) from ultraslow (<4 mm/yr) to slow (4-45 mm/yr) spreading ridges, the thickness increases dramatically; (2) from slow to fast (45-95 mm/yr) spreading ridges, the thickness decreases slightly; (3) for super-fast ridges (>95 mm/yr) we observe relatively thicker crust. Conductive cooling of the lithosphere may limit melting of the mantle at ultraslow spreading centers. Lower mantle temperatures, indicated by deeper Curie depths at slow and fast spreading ridges, may decrease the volume of magmatism and the crustal thickness. This new global model of gravity-derived Moho depth, combined with geochemical and Curie point depth data, can be used to investigate the thermal evolution of the lithosphere.
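
    As a rough illustration of the inversion step named above, the sketch below implements a first-order, Parker-type spectral relation between residual gravity anomalies and Moho relief about a mean depth, with a simple low-pass filter standing in for a convergence-controlled iteration; the anomaly grid, grid spacing, density contrast, cutoff wavelength, and the sign convention for the relief are all placeholder assumptions, and the full Parker-Oldenburg scheme used in the study adds higher-order terms and iterates to convergence.

    ```python
    import numpy as np

    # Placeholder inputs: residual Bouguer anomaly grid (mGal), grid spacing (m),
    # Moho density contrast (kg/m^3) and initial mean Moho depth (m).
    G = 6.674e-11                    # gravitational constant (m^3 kg^-1 s^-2)
    dx = dy = 10_000.0               # 10 km grid spacing (assumed)
    drho = 430.0                     # 0.43 g/cm^3 density contrast (continental case)
    z0 = 37_000.0                    # initial mean Moho depth, 37 km

    rng = np.random.default_rng(0)
    dg_mgal = rng.normal(0.0, 20.0, (128, 128))   # stand-in for the residual anomaly
    dg = dg_mgal * 1e-5                           # mGal -> m/s^2

    # Radial wavenumber grid
    ky = np.fft.fftfreq(dg.shape[0], d=dy) * 2 * np.pi
    kx = np.fft.fftfreq(dg.shape[1], d=dx) * 2 * np.pi
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)

    # First-order Parker relation: F[dg] = 2*pi*G*drho * exp(-k*z0) * F[h].
    # Invert with a hard low-pass filter to tame the exp(k*z0) amplification.
    k_cut = 2 * np.pi / 200_000.0                 # keep wavelengths > 200 km (assumed)
    lowpass = np.where(k <= k_cut, 1.0, 0.0)

    H = np.zeros_like(dg, dtype=complex)
    nonzero = k > 0
    H[nonzero] = (np.fft.fft2(dg)[nonzero] * np.exp(k[nonzero] * z0) * lowpass[nonzero]
                  / (2 * np.pi * G * drho))
    relief = np.real(np.fft.ifft2(H))             # relief about z0 (m); sign convention varies
    moho_depth_km = (z0 + relief) / 1000.0
    print(moho_depth_km.mean(), moho_depth_km.min(), moho_depth_km.max())
    ```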

  20. Time dependent model of magma intrusion in and around Miyake and Kozu Islands, Central Japan in June-August 2000

    NASA Astrophysics Data System (ADS)

    Murase, Masayuki; Irwan, Meilano; Kariya, Shinichi; Tabei, Takao; Okuda, Takashi; Miyajima, Rikio; Oikawa, Jun; Watanabe, Hidefumi; Kato, Teruyuki; Nakao, Shigeru; Ukawa, Motoo; Fujita, Eisuke; Okayama, Muneo; Kimata, Fumiaki; Fujii, Naoyuki

    2006-02-01

    A time-dependent model of magma intrusion is presented for the Miyake-Kozu Island area in central Japan based on global positioning system (GPS) measurements at 28 sites recorded between June 27 and August 27, 2000. A model derived from a precise hypocenter distribution map indicates the presence of three dikes between Miyake and Kozu Islands. Other dike intrusion models, including a dike with aseismic creep and a dike associated with a deep deflation source are also discussed. The optimal parameters for each model are estimated using a genetic algorithm (GA) approach. Using Akaike's information criteria (AIC), the three-dike model is shown to provide the best solution for the observed deformation. Volume changes in spherical inflation and deflation sources, as well as three dikes, are calculated for seven discretized periods after GA optimization of the dike geometry. The optimization suggests a concentration of dike expansion near Miyake Island in the period from June 27 to July 1 associated with large deflation at a depth of about 7 km below Miyake volcano, indicating magma supply from depth below Miyake Island. In the period from July 9 to August 10, a huge dike intrusion near Kozu Island is inferred, accompanied by expansion of the lower parts of a central dike, suggesting magma supply from depth in the region between Miyake and Kozu Islands.

  1. Private sector, for-profit health providers in low and middle income countries: can they reach the poor at scale?

    PubMed

    Tung, Elizabeth; Bennett, Sara

    2014-06-24

    The bottom of the pyramid concept suggests that profit can be made in providing goods and services to poor people, when high volume is combined with low margins. To date there has been very limited empirical evidence from the health sector concerning the scope and potential for such bottom of the pyramid models. This paper analyzes private for-profit (PFP) providers currently offering services to the poor on a large scale, and assesses the future prospects of bottom of the pyramid models in health. We searched published and grey literature and databases to identify PFP companies that provided more than 40,000 outpatient visits per year, or that covered 15% or more of a particular type of service in their country. For each included provider, we searched for additional information on location, target market, business model and performance, including quality of care. Only 10 large scale PFP providers were identified. The majority of these were in South Asia and most provided specialized services such as eye care. The characteristics of the business models of these firms were found to be similar to non-profit providers studied by other analysts (such as Bhattacharya 2010). They pursued social rather than traditional marketing, partnerships with government, low cost/high volume services and cross-subsidization between different market segments. There was a lack of reliable data concerning these providers. There is very limited evidence to support the notion that large scale bottom of the pyramid models in health offer good prospects for extending services to the poor in the future. In order to be successful, PFP providers often require partnerships with government or support from social health insurance schemes. Nonetheless, more reliable and independent data on such schemes are needed.

  2. Private sector, for-profit health providers in low and middle income countries: can they reach the poor at scale?

    PubMed Central

    2014-01-01

    Background The bottom of the pyramid concept suggests that profit can be made in providing goods and services to poor people, when high volume is combined with low margins. To date there has been very limited empirical evidence from the health sector concerning the scope and potential for such bottom of the pyramid models. This paper analyzes private for-profit (PFP) providers currently offering services to the poor on a large scale, and assesses the future prospects of bottom of the pyramid models in health. Methods We searched published and grey literature and databases to identify PFP companies that provided more than 40,000 outpatient visits per year, or that covered 15% or more of a particular type of service in their country. For each included provider, we searched for additional information on location, target market, business model and performance, including quality of care. Results Only 10 large scale PFP providers were identified. The majority of these were in South Asia and most provided specialized services such as eye care. The characteristics of the business models of these firms were found to be similar to non-profit providers studied by other analysts (such as Bhattacharya 2010). They pursued social rather than traditional marketing, partnerships with government, low cost/high volume services and cross-subsidization between different market segments. There was a lack of reliable data concerning these providers. Conclusions There is very limited evidence to support the notion that large scale bottom of the pyramid models in health offer good prospects for extending services to the poor in the future. In order to be successful, PFP providers often require partnerships with government or support from social health insurance schemes. Nonetheless, more reliable and independent data on such schemes are needed. PMID:24961496

  3. A combined triggering-propagation modeling approach for the assessment of rainfall induced debris flow susceptibility

    NASA Astrophysics Data System (ADS)

    Stancanelli, Laura Maria; Peres, David Johnny; Cancelliere, Antonino; Foti, Enrico

    2017-07-01

    Rainfall-induced shallow slides can evolve into debris flows that move rapidly downstream with devastating consequences. Mapping the susceptibility to debris flow is an important aid for risk mitigation. We propose a novel practical approach to derive debris flow inundation maps useful for susceptibility assessment, based on the integrated use of DEM-based spatially distributed hydrological and slope stability models with debris flow propagation models. More specifically, the TRIGRS infiltration and infinite-slope stability model and the FLO-2D model for the simulation of the related debris flow propagation and deposition are combined. An empirical instability-to-debris-flow triggering threshold, calibrated on the basis of observed events, is applied to link the two models and to determine the amount of unstable mass that develops into a debris flow. Calibration of the proposed methodology is carried out based on real data of the debris flow event that occurred on 1 October 2009 in the Peloritani mountains area (Italy). Model performance, assessed by receiver-operating-characteristic (ROC) indexes, indicates fairly good reproduction of the observed event. A comparison with the performance of the traditional debris flow modeling procedure, in which sediment and water hydrographs are input as lumped sources at selected points at the top of the streams, is also performed in order to quantitatively assess the limitations of this commonly applied approach. Results show that the proposed method, besides being more process-consistent than the traditional hydrograph-based approach, can potentially provide a more accurate simulation of debris-flow phenomena, both in terms of spatial patterns of erosion and deposition and in the quantification of mobilized volumes and depths, avoiding overestimation of the debris flow triggering volume and, thus, of maximum inundation flow depths.

  4. Hydrogeomorphic and ecological control on carbonate dissolution in a patterned landscape in South Florida

    NASA Astrophysics Data System (ADS)

    Dong, X.; Heffernan, J. B.; Murray, A. B.; Cohen, M. J.; Martin, J. B.

    2016-12-01

    The evolution of the critical zone both shapes and reflects hydrologic, geochemical, and ecological processes. These interactions are poorly understood in karst landscapes with highly soluble bedrock. In this study, we used the regularly dispersed wetland basins of Big Cypress National Preserve in Florida as a focal case to model the hydrologic, geochemical, and biological mechanisms that affect soil development in karst landscapes. We addressed two questions: (1) What is the minimum timescale for wetland basin development, and (2) do changes in soil depth feed back on dissolution processes and, if so, by what mechanism? We developed an atmosphere-water-soil model with coupled water-solute transport, incorporating major ion equilibria and kinetic non-equilibrium chemistry, and biogenic acid production via roots distributed through the soil horizon. Under the current Florida climate, weathering to a depth of 2 m (a typical depth of wetland basins) would take 4000-6000 yrs, suggesting that landscape pattern could have origins as recent as the most recent stabilization of sea level. Our model further illustrates that interactions between ecological and hydrologic processes influence the rate and depth-dependence of weathering. Absent inundation, the dissolution rate decreased exponentially with distance from the bedrock to the groundwater table. Inundation generally increased bedrock dissolution, but surface water chemistry and residence time produced complex and non-linear effects on dissolution rate. Biogenic acidity accelerated the dissolution rate by factors of 50 and 1,000 in inundated and exposed soils, respectively. Phase portrait analysis indicated that exponential decreases in bedrock dissolution rate with soil depth could produce stable basin depths. Negative feedback between hydro-period and total basin volume could stabilize the basin radius, but the lesser strength of this mechanism may explain the coalescence of wetland basins observed in some parts of the Big Cypress landscape.

  5. Anchorage Behaviors of Frictional Tieback Anchors in Silty Sand

    NASA Astrophysics Data System (ADS)

    Hsu, Shih-Tsung; Hsiao, Wen-Ta; Chen, Ke-Ting; Hu, Wen-Chi; Wu, Ssu-Yi

    2017-06-01

    Soil anchors are extensively used in geotechnical applications, most commonly serving as tiebacks for walls in deep excavations. To investigate the anchorage mechanisms of such tieback anchors, a constitutive model (termed the SHASOVOD model) that considers strain hardening, strain softening, and volume dilatancy is used together with the FLAC3D software to perform 3-D numerical analyses. The results from field anchor tests are compared with those calculated by the numerical analyses to confirm the applicability of the numerical method. After this calibration, parameter studies were carried out by numerical analysis. The numerical results reveal that whether the yielded soil around an anchor extends to the ground surface and/or reaches the diaphragm wall depends on the overburden depth H and the embedded depth Z of the anchor; this study therefore suggests minimum overburden and embedded depths to prevent the yielded zone from reaching the ground surface or the diaphragm wall. When the embedded depth, overburden depth or fixed length of an anchor increases, the anchorage capacity also increases. Increasing the fixed length is the most effective way to increase the anchorage capacity for fixed lengths of less than 20 m. However, when the fixed length of an anchor exceeds 30 m, the rate of increase of anchorage capacity per unit fixed length decreases, and progressive yielding clearly occurs between the fixed length and the surrounding soil.

  6. geomIO: A tool for geodynamicists to turn 2D cross-sections into 3D geometries

    NASA Astrophysics Data System (ADS)

    Baumann, Tobias; Bauville, Arthur

    2016-04-01

    In numerical deformation models, material properties are usually defined on elements (e.g., in body-fitted finite elements), or on a set of Lagrangian markers (Eulerian, ALE or mesh-free methods). In any case, geometrical constraints are needed to assign different material properties to the model domain. Whereas simple geometries such as spheres, layers or cuboids can easily be programmed, it quickly gets complex and time-consuming to create more complicated geometries for numerical model setups, especially in three dimensions. geomIO (geometry I/O, http://geomio.bitbucket.org/) is a MATLAB-based library that has two main functionalities. First, it can be used to create 3D volumes based on a series of 2D vector drawings, similar to a CAD program; and second, it uses these 3D volumes to assign material properties to the numerical model domain. The drawings can conveniently be created using the open-source vector graphics software Inkscape. Adobe Illustrator is also partially supported. The drawings represent a series of cross-sections in the 3D model domain, for example, cross-sectional interpretations of seismic tomography. geomIO is then used to read the drawings and to create 3D volumes by interpolating between the cross-sections. In the second part, the volumes are used to assign material phases to markers inside the volumes. Multiple volumes can be created at the same time and, depending on the order of assignment, unions or intersections can be built to assign additional material phases. geomIO also offers the possibility to create 3D temperature structures for geodynamic models based on depth-dependent parameterisations, for example the half-space cooling model. In particular, this can be applied to geometries of subducting slabs of arbitrary shape. Nevertheless, geomIO is kept very general and can be used for a variety of applications. We present examples of setup generation from pictures of micro-scale tectonics and lithospheric-scale setups of 3D present-day model geometries.

  7. Environmental Limits of Tall Shrubs in Alaska’s Arctic National Parks

    PubMed Central

    Swanson, David K.

    2015-01-01

    We sampled shrub canopy volume (height times area) and environmental factors (soil wetness, soil depth of thaw, soil pH, mean July air temperature, and typical date of spring snow loss) on 471 plots across five National Park Service units in northern Alaska. Our goal was to determine the environments where tall shrubs thrive and use this information to predict the location of future shrub expansion. The study area covers over 80,000 km2 and has mostly tundra vegetation. Large canopy volumes were uncommon, with volumes over 0.5 m3/m2 present on just 8% of plots. Shrub canopy volumes were highest where mean July temperatures were above 10.5°C and on weakly acid to neutral soils (pH of 6 to 7) with deep summer thaw (>80 cm) and good drainage. On many sites, flooding helped maintain favorable soil conditions for shrub growth. Canopy volumes were highest where the typical snow loss date was near 20 May; these represent sites that are neither strongly wind-scoured in the winter nor late to melt from deep snowdrifts. Individual species varied widely in the canopy volumes they attained and their response to the environmental factors. Betula sp. shrubs were the most common and quite tolerant of soil acidity, cold July temperatures, and shallow thaw depths, but they did not form high-volume canopies under these conditions. Alnus viridis formed the largest canopies and was tolerant of soil acidity down to about pH 5, but required more summer warmth (over 12°C) than the other species. The Salix species varied widely from S. pulchra, tolerant of wet and moderately acid soils, to S. alaxensis, requiring well-drained soils with near neutral pH. Nearly half of the land area in ARCN has mean July temperatures of 10.5 to 12.5°C, where 2°C of warming would bring temperatures into the range needed for all of the potential tall shrub species to form large canopies. However, limitations in the other environmental factors would probably prevent the formation of large shrub canopies on at least half of the land area with newly favorable temperatures after 2°C of warming. PMID:26379243

  8. Environmental Limits of Tall Shrubs in Alaska's Arctic National Parks.

    PubMed

    Swanson, David K

    2015-01-01

    We sampled shrub canopy volume (height times area) and environmental factors (soil wetness, soil depth of thaw, soil pH, mean July air temperature, and typical date of spring snow loss) on 471 plots across five National Park Service units in northern Alaska. Our goal was to determine the environments where tall shrubs thrive and use this information to predict the location of future shrub expansion. The study area covers over 80,000 km2 and has mostly tundra vegetation. Large canopy volumes were uncommon, with volumes over 0.5 m3/m2 present on just 8% of plots. Shrub canopy volumes were highest where mean July temperatures were above 10.5°C and on weakly acid to neutral soils (pH of 6 to 7) with deep summer thaw (>80 cm) and good drainage. On many sites, flooding helped maintain favorable soil conditions for shrub growth. Canopy volumes were highest where the typical snow loss date was near 20 May; these represent sites that are neither strongly wind-scoured in the winter nor late to melt from deep snowdrifts. Individual species varied widely in the canopy volumes they attained and their response to the environmental factors. Betula sp. shrubs were the most common and quite tolerant of soil acidity, cold July temperatures, and shallow thaw depths, but they did not form high-volume canopies under these conditions. Alnus viridis formed the largest canopies and was tolerant of soil acidity down to about pH 5, but required more summer warmth (over 12°C) than the other species. The Salix species varied widely from S. pulchra, tolerant of wet and moderately acid soils, to S. alaxensis, requiring well-drained soils with near neutral pH. Nearly half of the land area in ARCN has mean July temperatures of 10.5 to 12.5°C, where 2°C of warming would bring temperatures into the range needed for all of the potential tall shrub species to form large canopies. However, limitations in the other environmental factors would probably prevent the formation of large shrub canopies on at least half of the land area with newly favorable temperatures after 2°C of warming.

  9. Inference of viscosity jump at 670 km depth and lower mantle viscosity structure from GIA observations

    NASA Astrophysics Data System (ADS)

    Nakada, Masao; Okuno, Jun'ichi; Irie, Yoshiya

    2018-03-01

    A viscosity model with an exponential profile described by temperature (T) and pressure (P) distributions and constant activation energy (E*_um for the upper mantle and E*_lm for the lower mantle) and activation volume (V*_um and V*_lm) is employed in inferring the viscosity structure of the Earth's mantle from observations of glacial isostatic adjustment (GIA). We first construct standard viscosity models with an average upper-mantle viscosity (η̄_um) of 2 × 10^20 Pa s, a typical value for the oceanic upper-mantle viscosity, satisfying three observationally derived GIA-related observables: the GIA-induced rate of change of the degree-two zonal harmonic of the geopotential, J̇_2, the differential relative sea level (RSL) change between the Last Glacial Maximum sea levels at Barbados and Bonaparte Gulf in Australia, and the RSL changes at 6 kyr BP for Karumba and Halifax Bay in Australia. Standard viscosity models inferred from the three GIA-related observables are characterized by a viscosity of ~10^23 Pa s in the deep mantle for an assumed viscosity at 670 km depth, η_lm(670), of (1-50) × 10^21 Pa s. Postglacial RSL changes at Southport, Bermuda and Everglades, in the intermediate region of the North American ice sheet and largely dependent on its gross melting history, have a crucial potential for inference of a viscosity jump at 670 km depth. The analyses of these RSL changes, based on the viscosity models with η̄_um ≥ 2 × 10^20 Pa s and the lower-mantle viscosity structures of the standard models, yield permissible η̄_um and η_lm(670) values, although there is a trade-off between the viscosity and ice history models. Our preferred η̄_um and η_lm(670) values are ~(7-9) × 10^20 and ~10^22 Pa s, respectively; the η̄_um is higher than the typical value for the oceanic upper mantle, which may reflect a moderately laterally heterogeneous upper-mantle viscosity. The mantle viscosity structure adopted in this study depends on the temperature distribution and on the activation energy and volume, and it is difficult to isolate the impact of each quantity on the inferred lower-mantle viscosity model. We conclude that models with a smooth depth variation of lower-mantle viscosity following η(z) ∝ exp[(E*_lm + P(z) V*_lm)/(R T(z))] with constant E*_lm and V*_lm are consistent with the GIA observations.
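
    The depth dependence quoted above can be made concrete with a short numerical sketch: the code below evaluates η(z) ∝ exp[(E*_lm + P(z) V*_lm)/(R T(z))] through the lower mantle using a placeholder activation energy and volume, a schematic pressure and temperature profile, and a normalization so that the viscosity at 670 km equals an assumed η_lm(670); none of these numbers are the study's preferred values.

    ```python
    import numpy as np

    R = 8.314                      # gas constant (J mol^-1 K^-1)
    E_lm = 3.0e5                   # activation energy (J/mol), placeholder
    V_lm = 2.5e-6                  # activation volume (m^3/mol), placeholder
    eta_670 = 1.0e22               # assumed viscosity at 670 km depth (Pa s)

    z = np.linspace(670e3, 2891e3, 200)            # depth through the lower mantle (m)

    def pressure(z):
        """Rough hydrostatic pressure, rho*g*z with constant rho and g (placeholder)."""
        return 4500.0 * 9.8 * z

    def temperature(z):
        """Schematic adiabat: linear from ~1900 K at 670 km to ~2600 K at the CMB."""
        return 1900.0 + (2600.0 - 1900.0) * (z - 670e3) / (2891e3 - 670e3)

    arrhenius = np.exp((E_lm + pressure(z) * V_lm) / (R * temperature(z)))
    eta = eta_670 * arrhenius / arrhenius[0]       # normalize so eta(670 km) = eta_670

    for depth_km in (670, 1500, 2800):
        i = np.argmin(np.abs(z - depth_km * 1e3))
        print(f"{depth_km:5d} km: {eta[i]:.2e} Pa s")
    ```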

  10. Quantification of in vitro produced wear sites on composite resins using contact profilometry and CCD microscopy: a methodological investigation.

    PubMed

    Koottathape, Natthavoot; Takahashi, Hidekazu; Finger, Wernerj; Kanehira, Masafumi; Iwasaki, Naohiko; Aoyagi, Yujin

    2012-06-01

    Although attritive and abrasive wear of recent composite resins has been substantially reduced, in vitro wear testing with reasonably simulating devices and quantitative determination of the resulting wear is still needed. Three-dimensional scanning methods are frequently used for this purpose. The aim of this trial was to compare maximum depth of wear and volume loss of composite samples evaluated with a contact profilometer and with a non-contact CCD camera imaging system. Twenty-three random composite specimens with wear traces produced in a ball-on-disc sliding device, using poppy seed slurry and PMMA suspension as third-body media, were evaluated with the contact profilometer (TalyScan 150, Taylor Hobson LTD, Leicester, UK) and with the digital CCD microscope (VHX1000, KEYENCE, Osaka, Japan). The target parameters were maximum depth of wear and volume loss. The individual measurement time needed with the non-contact CCD method was almost three hours less than that with the contact method. Both maximum depth of wear and volume loss data recorded with the two methods were linearly correlated (r^2 > 0.97; p < 0.01). The contact scanning method and the non-contact CCD method are equally suitable for determination of maximum depth of wear and volume loss of abraded composite resins.
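
    As a minimal illustration of the two target parameters, the sketch below computes maximum wear depth and volume loss from a gridded height map of a worn surface, which applies equally to data from a contact stylus scan or a CCD-based optical scan; the grid spacing, reference plane, and synthetic scar are assumptions, and across many specimens the two instruments would then be compared by linear correlation as in the study.

    ```python
    import numpy as np

    def wear_metrics(height_map, dx_mm, dy_mm, reference=0.0):
        """Maximum wear depth (mm) and volume loss (mm^3) below a reference plane."""
        depth = np.clip(reference - height_map, 0.0, None)   # positive where material is missing
        max_depth = depth.max()
        volume_loss = depth.sum() * dx_mm * dy_mm             # sum of cell volumes
        return max_depth, volume_loss

    # Synthetic wear scar standing in for two scans of the same specimen
    x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
    scar = -0.15 * np.exp(-(x**2 + y**2) / 0.1)               # mm, bowl-shaped wear trace
    profilometer = scar + np.random.default_rng(1).normal(0, 0.002, scar.shape)
    ccd_scan = scar + np.random.default_rng(2).normal(0, 0.004, scar.shape)

    d1, v1 = wear_metrics(profilometer, dx_mm=0.01, dy_mm=0.01)
    d2, v2 = wear_metrics(ccd_scan, dx_mm=0.01, dy_mm=0.01)
    print("contact:", d1, v1)
    print("CCD:    ", d2, v2)

    # Across a set of specimens, the two methods would be compared with, e.g.,
    # np.corrcoef(depths_contact, depths_ccd), as done for the 23 specimens in the study.
    ```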

  11. A transitional volume beneath the Sannio-Irpinia border region (southern Apennines): Different tectonic styles at different depths

    NASA Astrophysics Data System (ADS)

    De Matteo, Ada; Massa, Bruno; Milano, Girolamo; D'Auria, Luca

    2018-01-01

    In this paper we investigate the border between the Sannio and Irpinia seismogenic regions, a sector of the southern Apennine chain considered among the most active seismic areas of the Italian peninsula, to shed further light on its complex seismotectonic setting. We integrated recent seismicity with literature data. A detailed analysis of the seismicity that occurred in the 2013-2016 time interval was performed. The events were relocated, after manual re-picking, using different approaches. To retrieve information about the stress field active in the area, inversion of Fault Plane Solutions was also carried out. The hypocentral distribution of the relocated events (ML ≤ 3.5), whose depths lie between 5 and 25 km with the deepest ones located in the NW sector of the study area, shows a different pattern between the northern sector and the southern one. The computed Fault Plane Solutions can be grouped into three depth ranges: < 12 km, dominated by normal dip-slip kinematics; 12-18 km, characterized by normal dip-slip and strike-slip kinematics; > 18 km, dominated by strike-slip kinematics. Stress field inversion across the whole area shows that we are dealing with a heterogeneous set of data, apparently governed by distinct stress fields. We built an upper crustal model profile through integration of geological data, well logs and seismic tomographic profiles. Our results suggest the co-existence of different tectonic styles at distinct crustal depths: the upper crust seems to be affected mostly by normal faulting, whereas strike-slip faulting prevails in the intermediate and lower crust. We infer the existence of a transitional volume, located between 12 and 18 km depth, between the Sannio and Irpinia regions, acting as a vertical transfer zone.

  12. Statistical characteristics of storm interevent time, depth, and duration for eastern New Mexico, Oklahoma, and Texas

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.; Cleveland, Theodore G.; Fang, Xing; Thompson, David B.

    2006-01-01

    The design of small runoff-control structures, from simple floodwater-detention basins to sophisticated best-management practices, requires the statistical characterization of rainfall as a basis for cost-effective, risk-mitigated, hydrologic engineering design. The U.S. Geological Survey, in cooperation with the Texas Department of Transportation, has developed a framework to estimate storm statistics including storm interevent times, distributions of storm depths, and distributions of storm durations for eastern New Mexico, Oklahoma, and Texas. The analysis is based on hourly rainfall recorded by the National Weather Service. The database contains more than 155 million hourly values from 774 stations in the study area. Seven sets of maps depicting ranges of mean storm interevent time, mean storm depth, and mean storm duration, by county, as well as tables listing each of those statistics, by county, were developed. The mean storm interevent time is used in probabilistic models to assess the frequency distribution of storms. The Poisson distribution is suggested to model the distribution of storm occurrence, and the exponential distribution is suggested to model the distribution of storm interevent times. The four-parameter kappa distribution is judged as an appropriate distribution for modeling the distribution of both storm depth and storm duration. Preference for the kappa distribution is based on interpretation of L-moment diagrams. Parameter estimates for the kappa distributions are provided. Separate dimensionless frequency curves for storm depth and duration are defined for eastern New Mexico, Oklahoma, and Texas. Dimension is restored by multiplying curve ordinates by the mean storm depth or mean storm duration to produce quantile functions of storm depth and duration. Minimum interevent time and location have slight influence on the scale and shape of the dimensionless frequency curves. Ten example problems and solutions to possible applications are provided.
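
    To make the statistical framework concrete, the sketch below fits an exponential model to storm interevent times (consistent with Poisson storm occurrence) and restores dimension to a dimensionless depth frequency curve by multiplying by a mean storm depth; the synthetic records, quantile levels, and county mean depth are placeholders, and the report itself fits a four-parameter kappa distribution (available as scipy.stats.kappa4) rather than the empirical quantiles used here.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Placeholder storm records: interevent times (hours) and storm depths (inches)
    interevent_hr = rng.exponential(scale=72.0, size=500)     # mean dry spell ~3 days
    depth_in = rng.gamma(shape=1.2, scale=0.6, size=500)       # stand-in for observed depths

    # Exponential model for interevent time (Poisson storm arrivals)
    mean_iet = interevent_hr.mean()
    iet_model = stats.expon(scale=mean_iet)
    print("P(interevent time > 1 week) =", iet_model.sf(168.0))

    # Dimensionless depth frequency curve: quantiles of depth / mean depth
    probs = np.array([0.5, 0.8, 0.9, 0.95, 0.99])
    dimensionless = np.quantile(depth_in / depth_in.mean(), probs)

    # Restore dimension for a county by multiplying by that county's mean storm depth
    county_mean_depth = 0.8                                    # inches, placeholder
    print(dict(zip(probs, dimensionless * county_mean_depth)))
    ```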

  13. Dry granular avalanche impact force on a rigid wall: Analytic shock solution versus discrete element simulations

    NASA Astrophysics Data System (ADS)

    Albaba, Adel; Lambert, Stéphane; Faug, Thierry

    2018-05-01

    The present paper investigates the mean impact force exerted by a granular mass flowing down an incline and impacting a rigid wall of semi-infinite height. First, this granular flow-wall interaction problem is modeled by numerical simulations based on the discrete element method (DEM). These DEM simulations allow computing the depth-averaged quantities—thickness, velocity, and density—of the incoming flow and the resulting mean force on the rigid wall. Second, that problem is described by a simple analytic solution based on a depth-averaged approach for a traveling compressible shock wave, whose volume is assumed to shrink into a singular surface, and which coexists with a dead zone. It is shown that the dead-zone dynamics and the mean force on the wall computed from DEM can be reproduced reasonably well by the analytic solution proposed over a wide range of slope angle of the incline. These results are obtained by feeding the analytic solution with the thickness, the depth-averaged velocity, and the density averaged over a certain distance along the incline rather than flow quantities taken at a singular section before the jump, thus showing that the assumption of a shock wave volume shrinking into a singular surface is questionable. The finite length of the traveling wave upstream of the grains piling against the wall must be considered. The sensitivity of the model prediction to that sampling length remains complicated, however, which highlights the need of further investigation about the properties and the internal structure of the propagating granular wave.
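
    For orientation, a generic depth-averaged estimate of the mean force on the wall can be written as a momentum-flux term plus a hydrostatic-type term evaluated from the incoming flow's depth-averaged thickness, velocity, and density; the sketch below uses placeholder numbers and is not the authors' full traveling-shock solution with a coexisting dead zone.

    ```python
    def wall_force_per_unit_width(h, u, rho, k=1.0, g=9.81):
        """Depth-averaged estimate of mean impact force per unit wall width (N/m).

        h   : incoming flow thickness (m)
        u   : depth-averaged velocity normal to the wall (m/s)
        rho : depth-averaged bulk density (kg/m^3)
        k   : earth-pressure-like coefficient on the static term (dimensionless)
        """
        dynamic = rho * h * u**2            # momentum flux arrested at the wall
        static = 0.5 * k * rho * g * h**2   # weight-induced (hydrostatic-type) contribution
        return dynamic + static

    # Placeholder flow quantities of laboratory-flume order of magnitude
    h, u, rho = 0.1, 3.0, 1500.0            # m, m/s, kg/m^3
    print(f"~{wall_force_per_unit_width(h, u, rho):.0f} N per metre of wall")
    ```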

  14. Modelling the Effects of Magma Properties, Pressure and Conduit Dimensions on the Seismic Signature

    NASA Astrophysics Data System (ADS)

    Sturton, S.; Neuberg, J.

    2002-12-01

    A finite-difference scheme is used to model the seismic radiation pattern for a fluid filled conduit surrounded by a solid medium. Seismic waves travel slower than the acoustic velocity inside the conduit and the propagation velocity is frequency dependent. At the ends of the conduit the waves are partly reflected back along the conduit and also leak into the solid medium. The seismometer signal obtained is therefore composed of a series of events released from the ends of the conduit. Each signal can be characterised by the repeat time of the events and the dispersion seen within each event. These characteristics are dependent on the seismic parameters and the conduit dimensions. For a gas-charged magma, increasing the pressure with depth reduces the volume of gas exsolved, thereby increasing the seismic velocity lower in the conduit. From the volume of gas exsolved, profiles of seismic parameters within the conduit and their evolution with time can be obtained. The differences between a varying velocity with depth and a constant velocity with depth are seen in the synthetic seismograms and spectrograms. At Soufriere Hills Volcano, Montserrat, single hybrid events merge into tremor and occasionally gliding lines are observed in the spectra indicating changes in the seismic parameters with time or varying triggering rates of single events. The synthetic seismograms are compared to the observational data and used to constrain the magnitude of pressure changes necessary to produce the gliding lines. Further constraints are obtained from the dispersion patterns in both the synthetic seismograms and the observed data.

  15. Continental emergence and growth on a cooling earth

    NASA Astrophysics Data System (ADS)

    Vlaar, N. J.

    2000-07-01

    Isostasy considerations are connected to a 1-D model of mantle differentiation due to pressure release partial melting to obtain a model for the evolution of the relative sea level with respect to the continent during the earth secular cooling. In this context, a new mechanism is derived for the selective exhumation of exposed ancient cratons. The model results in a quantitative scenario for sea-level fall due to the changing thicknesses of the oceanic basaltic crust and its harzburgite residual layer as a function of falling mantle temperature. It is also shown that the buoyancy of the harzburgite root of a stabilized continental craton has an important effect on sea-level and on the isostatic readjustment and exhumation of exposed continental surface during the earth's secular cooling. The model does not depend on the usual assumption of constant continental freeboard and crustal thickness and its application is not restricted to the post-Archaean. It predicts large-scale continental emergence near the end of the Archaean and the early Proterozoic. This provides an explanation for reported late Archaean emergence and the subsequent formation of late Archaean cratonic platforms and early Proterozoic sedimentary basins. For a period of secular cooling of 3.8 Ga, corresponding to the length of the geological record, the model predicts a fall of the ocean floor of some 4 km or more. For a constant ocean depth, this implies a sea-level fall of the same magnitude. A formula is derived that allows for an increasing ocean depth due to either the changing ratio of continental with respect to oceanic area, or to a possible increase of the oceanic volume during the geological history. Increasing ocean depth results in a later emergence of submarine ancient geological formations compared to the case when ocean depth is constant. Selective exhumation is studied for the case of constant ocean depth. It is shown that for this case, early exposed continental crust can be exhumed to a lower crustal depth, which explains the relative vertical displacement of low-grade- with respect to high-grade terrain. Increasing ocean depth is not expected to result in diminished exhumation.

  16. Numerical Modelling and Analysis of Hydrostatic Thrust Air Bearings for High Loading Capacities and Low Air Consumption

    NASA Astrophysics Data System (ADS)

    Yu, Yunluo; Pu, Guang; Jiang, Kyle

    2017-12-01

    The paper presents a numerical simulation study on hydrostatic thrust air bearings to assess the load capacity, compressed air consumption, and dynamic response. The Finite Difference Method (FDM) and Finite Volume Method (FVM) are combined to solve the non-linear Reynolds equation to find the pressure distribution of the air bearing gas film and the total load capacity of the bearing. The influence of design parameters on air film gap characteristics, including the air film thickness, supply pressure, depth of the groove and external load, is investigated based on the proposed FDM model. The simulation results show that thrust air bearings with a groove have a higher load capacity and air consumption than those without a groove, and that the load capacity and air consumption both increase with the depth of the groove. Bearings without a groove are better damped than those with grooves, and the stability of the thrust bearing decreases as the groove depth increases. The stability of the thrust bearings is also affected by their loading.
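
    As a much-reduced sketch of the pressure calculation, the snippet below solves the steady, isothermal, axisymmetric compressible Reynolds equation, d/dr(r h^3 dq/dr) = 0 with q = p^2, by finite differences on a radial grid for a film that includes a recessed (grooved) annulus; the geometry, supply pressure, and groove extent are placeholder assumptions, and the model in the paper is two-dimensional and also addresses dynamic response.

    ```python
    import numpy as np

    # Geometry and operating conditions (placeholders)
    r_in, r_out = 0.002, 0.02          # supply orifice radius and bearing outer radius (m)
    h0 = 10e-6                         # nominal film thickness (m)
    groove_depth = 15e-6               # extra clearance over the grooved annulus (m)
    groove = (0.004, 0.008)            # radial extent of the groove (m)
    p_s, p_a = 5e5, 1e5                # supply and ambient pressure (Pa)

    n = 201
    r = np.linspace(r_in, r_out, n)
    dr = r[1] - r[0]
    h = np.where((r >= groove[0]) & (r <= groove[1]), h0 + groove_depth, h0)

    # Discretize d/dr( r h^3 dq/dr ) = 0 for q = p^2 with Dirichlet boundary conditions.
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = p_s**2, p_a**2
    for i in range(1, n - 1):
        # conductance r*h^3 evaluated at the half nodes i-1/2 and i+1/2
        c_m = 0.5 * (r[i - 1] * h[i - 1] ** 3 + r[i] * h[i] ** 3)
        c_p = 0.5 * (r[i] * h[i] ** 3 + r[i + 1] * h[i + 1] ** 3)
        A[i, i - 1], A[i, i], A[i, i + 1] = c_m, -(c_m + c_p), c_p

    q = np.linalg.solve(A, b)
    p = np.sqrt(q)

    # Load capacity: trapezoidal integration of gauge pressure over the annular face
    load = np.sum(0.5 * ((p[:-1] - p_a) * r[:-1] + (p[1:] - p_a) * r[1:]) * 2 * np.pi * dr)
    print(f"load capacity ~ {load:.1f} N")
    ```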

  17. Fluctuating water depths affect American alligator (Alligator mississippiensis) body condition in the Everglades, Florida, USA

    USGS Publications Warehouse

    Brandt, Laura A.; Beauchamp, Jeffrey S.; Jeffery, Brian M.; Cherkiss, Michael S.; Mazzotti, Frank J.

    2016-01-01

    Successful restoration of wetland ecosystems requires knowledge of wetland hydrologic patterns and an understanding of how those patterns affect wetland plant and animal populations.Within the Everglades, Florida, USA restoration, an applied science strategy including conceptual ecological models linking drivers to indicators is being used to organize current scientific understanding to support restoration efforts. A key driver of the ecosystem affecting the distribution and abundance of organisms is the timing, distribution, and volume of water flows that result in water depth patterns across the landscape. American alligators (Alligator mississippiensis) are one of the ecological indicators being used to assess Everglades restoration because they are a keystone species and integrate biological impacts of hydrological operations through all life stages. Alligator body condition (the relative fatness of an animal) is one of the metrics being used and targets have been set to allow us to track progress. We examined trends in alligator body condition using Fulton’s K over a 15 year period (2000–2014) at seven different wetland areas within the Everglades ecosystem, assessed patterns and trends relative to restoration targets, and related those trends to hydrologic variables. We developed a series of 17 a priori hypotheses that we tested with an information theoretic approach to identify which hydrologic factors affect alligator body condition. Alligator body condition was highest throughout the Everglades during the early 2000s and is approximately 5–10% lower now (2014). Values have varied by year, area, and hydrology. Body condition was positively correlated with range in water depth and fall water depth. Our top model was the “Current” model and included variables that describe current year hydrology (spring depth, fall depth, hydroperiod, range, interaction of range and fall depth, interaction of range and hydroperiod). Across all models, interaction between range and fall water depth was the most important variable (relative weight of 1.0) followed by spring and fall water depths (0.99), range (0.96), hydroperiod (0.95) and interaction between range and hydroperiod (0.95). Our work provides additional evidence that restoring a greater range in annual water depths is important for improvement of alligator body condition and ecosystem function. This information can be incorporated into both planning and operations to assist in reaching Everglades restoration goals.
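
    For reference, the body-condition metric named above is a simple mass-to-length ratio; the sketch below shows Fulton's K as it is commonly applied, with the length measure, exponent, and scaling constant treated as assumptions rather than the authors' exact formulation.

    ```python
    def fultons_k(mass_g, length_cm, exponent=3.0, scale=1e5):
        """Fulton's condition factor: relative 'fatness' for a given body length.

        mass_g    : body mass in grams
        length_cm : a standard length measure (e.g., snout-vent length) in cm
        The exponent and scale constant are conventions that vary between studies.
        """
        return scale * mass_g / length_cm ** exponent

    # Two hypothetical alligators of the same length but different mass
    print(fultons_k(mass_g=35_000, length_cm=85))   # heavier animal -> higher K
    print(fultons_k(mass_g=30_000, length_cm=85))   # lighter animal -> lower K
    ```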

  18. Hydrological analysis of single and dual storage systems for stormwater harvesting.

    PubMed

    Brodie, I M

    2008-01-01

    As stormwater flows are intermittent, the requirement to store urban runoff is important to the design of a stormwater re-use scheme. In many urban areas, the space available to provide storage is limited and thus the need to optimise the storage volume becomes critical. This paper will highlight the advantages and disadvantages of two different approaches of providing storage: 1) a single shallow storage (0.5 m depth) in which stormwater capture and a balanced release to supply users is provided by the one unit; and 2) a dual system in which the functions of stormwater capture and supply release are provided by two separate deeper storage units (2 m depth). The comparison between the two strategies is supported by water balance modelling assessing the supply reliability and storage volume requirements for both options. Above a critical volumetric capacity, the supply yield of a dual storage system is higher than that from a single storage of equal volume mainly because of a smaller assumed footprint. The single storage exhibited greater evaporation loss and is more susceptible to algae blooms due to long water residence times. Results of the comparison provide guidance to the design of more efficient storages associated with stormwater harvesting systems. Copyright IWA Publishing 2008.
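
    The comparison rests on a daily water-balance simulation; the sketch below implements a minimal single-storage balance (capture limited by spare capacity, a constant demand, and evaporation proportional to the surface area implied by storage depth) and reports supply reliability for a shallow versus a deep unit of equal capacity. The inflow series, capacity, demand, and evaporation rate are placeholders, and a dual system would chain two such stores, with the capture unit transferring water to the supply unit.

    ```python
    import numpy as np

    def simulate_storage(runoff_m3, capacity_m3, depth_m, demand_m3, evap_m_per_day):
        """Daily water balance of one storage; returns the fraction of demand supplied."""
        area = capacity_m3 / depth_m                      # shallow storage -> large footprint
        storage, supplied, asked = 0.0, 0.0, 0.0
        for q in runoff_m3:
            storage = min(storage + q, capacity_m3)       # capture runoff, excess spills
            storage = max(storage - evap_m_per_day * area, 0.0)
            draw = min(demand_m3, storage)                # release to users
            storage -= draw
            supplied += draw
            asked += demand_m3
        return supplied / asked

    rng = np.random.default_rng(7)
    # Intermittent runoff: rain on roughly 20% of days (placeholder series, m^3/day)
    runoff = np.where(rng.random(365) < 0.2, rng.exponential(400.0, 365), 0.0)

    # Same capacity, different depths: 0.5 m (single shallow) vs 2 m (deep unit)
    for depth in (0.5, 2.0):
        rel = simulate_storage(runoff, capacity_m3=2000.0, depth_m=depth,
                               demand_m3=60.0, evap_m_per_day=0.005)
        print(f"depth {depth} m: supply reliability {rel:.2f}")
    ```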

  19. Composition of the Home Care Service Package: Predictors of Type, Volume, and Mix of Services Provided to Poor and Frail Older People.

    ERIC Educational Resources Information Center

    Diwan, Sadhna; And Others

    1997-01-01

    Examined type of service, volume, and mix of services provided to 270 poor and frail elders. Predictors of use and volume of service differed depending on type of service. The most prescribed service combinations included nursing, home health, and homemaker; homemaker only; nursing and home health; nursing and homemaker; and nurse only. (RJM)

  20. The cost determinants of routine infant immunization services: a meta-regression analysis of six country studies.

    PubMed

    Menzies, Nicolas A; Suharlim, Christian; Geng, Fangli; Ward, Zachary J; Brenzel, Logan; Resch, Stephen C

    2017-10-06

    Evidence on immunization costs is a critical input for cost-effectiveness analysis and budgeting, and can describe variation in site-level efficiency. The Expanded Program on Immunization Costing and Financing (EPIC) Project represents the largest investigation of immunization delivery costs, collecting empirical data on routine infant immunization in Benin, Ghana, Honduras, Moldova, Uganda, and Zambia. We developed a pooled dataset from individual EPIC country studies (316 sites). We regressed log total costs against explanatory variables describing service volume, quality, access, other site characteristics, and income level. We used Bayesian hierarchical regression models to combine data from different countries and account for the multi-stage sample design. We calculated output elasticity as the percentage increase in outputs (service volume) for a 1% increase in inputs (total costs), averaged across the sample in each country, and reported first differences to describe the impact of other predictors. We estimated average and total cost curves for each country as a function of service volume. Across countries, average costs per dose ranged from $2.75 to $13.63. Average costs per child receiving diphtheria, tetanus, and pertussis ranged from $27 to $139. Within countries costs per dose varied widely: on average, sites in the highest quintile were 440% more expensive than those in the lowest quintile. In each country, higher service volume was strongly associated with lower average costs. A doubling of service volume was associated with a 19% (95% interval, 4.0-32) reduction in costs per dose delivered (range 13% to 32% across countries), and the largest 20% of sites in each country realized costs per dose that were on average 61% lower than those for the smallest 20% of sites, controlling for other factors. Other factors associated with higher costs included hospital status, provision of outreach services, share of effort to management, level of staff training/seniority, distance to vaccine collection, additional days open per week, greater vaccination schedule completion, and per capita gross domestic product. We identified multiple features of sites and their operating environment that were associated with differences in average unit costs, with service volume being the most influential. These findings can inform efforts to improve the efficiency of service delivery and better understand resource needs.
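
    The scale effect reported above follows directly from the log-log specification: if log total cost is regressed on log service volume with coefficient beta, cost per dose scales as volume^(beta-1), so doubling volume changes cost per dose by a factor of 2^(beta-1). The sketch below illustrates this with synthetic site data and a plain OLS fit; the EPIC analysis itself uses Bayesian hierarchical models with many more covariates.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_sites = 300
    volume = rng.lognormal(mean=7.0, sigma=1.0, size=n_sites)        # doses per year (synthetic)
    # Synthetic total costs with economies of scale (beta < 1) plus noise
    beta_true = 0.7
    total_cost = np.exp(4.0 + beta_true * np.log(volume) + rng.normal(0, 0.3, n_sites))

    # OLS of log cost on log volume
    X = np.column_stack([np.ones(n_sites), np.log(volume)])
    coef, *_ = np.linalg.lstsq(X, np.log(total_cost), rcond=None)
    beta_hat = coef[1]

    # Effect of doubling service volume on cost per dose
    change = 2 ** (beta_hat - 1) - 1
    print(f"beta = {beta_hat:.2f}; doubling volume changes cost/dose by {change:+.0%}")
    ```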

  1. Independent evaluation of the SNODAS snow depth product using regional scale LiDAR-derived measurements

    NASA Astrophysics Data System (ADS)

    Hedrick, A.; Marshall, H.-P.; Winstral, A.; Elder, K.; Yueh, S.; Cline, D.

    2014-06-01

    Repeated Light Detection and Ranging (LiDAR) surveys are quickly becoming the de facto method for measuring spatial variability of montane snowpacks at high resolution. This study examines the potential of a 750 km2 LiDAR-derived dataset of snow depths, collected during the 2007 northern Colorado Cold Lands Processes Experiment (CLPX-2), as a validation source for an operational hydrologic snow model. The SNOw Data Assimilation System (SNODAS) model framework, operated by the US National Weather Service, combines a physically-based energy-and-mass-balance snow model with satellite, airborne and automated ground-based observations to provide daily estimates of snowpack properties at nominally 1 km resolution over the coterminous United States. Independent validation data is scarce due to the assimilating nature of SNODAS, compelling the need for an independent validation dataset with substantial geographic coverage. Within twelve distinctive 500 m × 500 m study areas located throughout the survey swath, ground crews performed approximately 600 manual snow depth measurements during each of the CLPX-2 LiDAR acquisitions. This supplied a dataset for constraining the uncertainty of upscaled LiDAR estimates of snow depth at the 1 km SNODAS resolution, resulting in a root-mean-square difference of 13 cm. Upscaled LiDAR snow depths were then compared to the SNODAS-estimates over the entire study area for the dates of the LiDAR flights. The remotely-sensed snow depths provided a more spatially continuous comparison dataset and agreed more closely to the model estimates than that of the in situ measurements alone. Finally, the results revealed three distinct areas where the differences between LiDAR observations and SNODAS estimates were most drastic, suggesting natural processes specific to these regions as causal influences on model uncertainty.

  2. Independent evaluation of the SNODAS snow depth product using regional-scale lidar-derived measurements

    NASA Astrophysics Data System (ADS)

    Hedrick, A.; Marshall, H.-P.; Winstral, A.; Elder, K.; Yueh, S.; Cline, D.

    2015-01-01

    Repeated light detection and ranging (lidar) surveys are quickly becoming the de facto method for measuring spatial variability of montane snowpacks at high resolution. This study examines the potential of a 750 km2 lidar-derived data set of snow depths, collected during the 2007 northern Colorado Cold Lands Processes Experiment (CLPX-2), as a validation source for an operational hydrologic snow model. The SNOw Data Assimilation System (SNODAS) model framework, operated by the US National Weather Service, combines a physically based energy-and-mass-balance snow model with satellite, airborne and automated ground-based observations to provide daily estimates of snowpack properties at nominally 1 km resolution over the conterminous United States. Independent validation data are scarce due to the assimilating nature of SNODAS, compelling the need for an independent validation data set with substantial geographic coverage. Within 12 distinctive 500 × 500 m study areas located throughout the survey swath, ground crews performed approximately 600 manual snow depth measurements during each of the CLPX-2 lidar acquisitions. This supplied a data set for constraining the uncertainty of upscaled lidar estimates of snow depth at the 1 km SNODAS resolution, resulting in a root-mean-square difference of 13 cm. Upscaled lidar snow depths were then compared to the SNODAS estimates over the entire study area for the dates of the lidar flights. The remotely sensed snow depths provided a more spatially continuous comparison data set and agreed more closely to the model estimates than that of the in situ measurements alone. Finally, the results revealed three distinct areas where the differences between lidar observations and SNODAS estimates were most drastic, providing insight into the causal influences of natural processes on model uncertainty.
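
    The comparison step amounts to block-averaging the high-resolution lidar depths to the model's nominal 1 km cells and computing a root-mean-square difference against the SNODAS estimates; the sketch below does this for placeholder grids and grid sizes.

    ```python
    import numpy as np

    def upscale_block_mean(depth_hi, factor):
        """Average a fine snow-depth grid over non-overlapping factor x factor blocks."""
        ny, nx = depth_hi.shape
        ny, nx = ny - ny % factor, nx - nx % factor        # trim to a multiple of the block size
        blocks = depth_hi[:ny, :nx].reshape(ny // factor, factor, nx // factor, factor)
        return np.nanmean(blocks, axis=(1, 3))

    rng = np.random.default_rng(5)
    lidar_1m = rng.gamma(shape=4.0, scale=0.3, size=(3000, 3000))    # placeholder 1 m depths (m)
    lidar_1km = upscale_block_mean(lidar_1m, factor=1000)            # -> 3 x 3 grid of 1 km cells
    snodas_1km = lidar_1km + rng.normal(0.0, 0.13, lidar_1km.shape)  # placeholder model estimates

    rmsd = np.sqrt(np.nanmean((lidar_1km - snodas_1km) ** 2))
    print(f"RMSD = {rmsd * 100:.1f} cm")
    ```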

  3. Isolated Effect of Geometry on Mitral Valve Function for In-Silico Model Development

    PubMed Central

    Siefert, Andrew William; Rabbah, Jean-Pierre Michel; Saikrishnan, Neelakantan; Kunzelman, Karyn Susanne; Yoganathan, Ajit Prithivaraj

    2013-01-01

    Computational models for the heart’s mitral valve (MV) exhibit several uncertainties which may be reduced by further developing these models using ground-truth data sets. The present study generated a ground-truth data set by quantifying the effects of isolated mitral annular flattening, symmetric annular dilatation, symmetric papillary muscle displacement, and asymmetric papillary muscle displacement on leaflet coaptation, mitral regurgitation (MR), and anterior leaflet strain. MVs were mounted in an in vitro left heart simulator and tested under pulsatile hemodynamics. Mitral leaflet coaptation length, coaptation depth, tenting area, MR volume, MR jet direction, and anterior leaflet strain in the radial and circumferential directions were successfully quantified for increasing levels of geometric distortion. From these data, increasing levels of isolated papillary muscle displacement resulted in the greatest mean change in coaptation depth (70% increase), tenting area (150% increase), and radial leaflet strain (37% increase) while annular dilatation resulted in the largest mean change in coaptation length (50% decrease) and regurgitation volume (134% increase). Regurgitant jets were centrally located for symmetric annular dilatation and symmetric papillary muscle displacement. Asymmetric papillary muscle displacement resulted in asymmetrically directed jets. Peak changes in anterior leaflet strain in the circumferential direction were smaller and exhibited non-significant differences across the tested conditions. When used together, this ground-truth data may be used to parametrically evaluate and develop modeling assumptions for both the MV leaflets and subvalvular apparatus. This novel data may improve MV computational models and provide a platform for the development of future surgical planning tools. PMID:24059354

  4. Comparison: Discovery on WSMOLX and miAamics/jABC

    NASA Astrophysics Data System (ADS)

    Kubczak, Christian; Vitvar, Tomas; Winkler, Christian; Zaharia, Raluca; Zaremba, Maciej

    This chapter compares the solutions to the SWS-Challenge discovery problems provided by DERI Galway and the joint solution from the Technical University of Dortmund and the University of Potsdam. The two approaches are described in depth in Chapters 10 and 13. The discovery scenario raises problems associated with making service discovery an automated process. It requires fine-grained specifications of search requests and service functionality, including support for fetching dynamic information during the discovery process (e.g., shipment price). Both teams utilize semantics to describe services, service requests and data models in order to enable search at the required fine-grained level of detail.

  5. Estimating implementation and operational costs of an integrated tiered CD4 service including laboratory and point of care testing in a remote health district in South Africa.

    PubMed

    Cassim, Naseem; Coetzee, Lindi M; Schnippel, Kathryn; Glencross, Deborah K

    2014-01-01

    An integrated tiered service delivery model (ITSDM) has been proposed to provide 'full-coverage' of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and the number of referring health-facilities. These include: (1) Tier-1/decentralized point-of-care service (POC) in a single site; Tier-2/POC-hub servicing 8-10 health-clinics and processing 30-40 samples; Tier-3/Community laboratories servicing ∼ 50 health-clinics, processing < 150 samples/day; and high-volume centralized laboratories (Tier-4 and Tier-5) processing < 300 or > 600 samples/day and serving > 100 or > 200 health-clinics, respectively. The objective of this study was to establish the costs of the existing service and of ITSDM Tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Historical health-facility workload volumes from the Pixley-ka-Seme district, and the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, linked to the locations of all referring clinics and related laboratory-to-result turn-around time (LTR-TAT) data, were extracted from the NHLS Corporate-Data-Warehouse for the period April-2012 to March-2013. Tiers were costed separately (as a cost-per-result) including equipment, staffing, reagent and test consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes and personnel time. The lowest cost-per-result was noted for the existing laboratory-based Tiers 4 and 5 ($6.24 and $5.37, respectively), but with a related increased LTR-TAT of > 24-48 hours. Full service coverage with TAT < 6 hours could be achieved with placement of twenty-seven Tier-1/POC or eight Tier-2/POC-hubs, at a cost-per-result of $32.32 and $15.88, respectively. A single district Tier-3 laboratory also ensured 'full service coverage' and < 24-hour LTR-TAT for the district at $7.42 per test. Implementing a single Tier-3/community laboratory to extend and improve delivery of services in Pixley-ka-Seme, with an estimated local ∼ 12-24-hour LTR-TAT, is ∼ $2 more per test than existing referred services, but 2-4-fold cheaper than implementing eight Tier-2/POC-hubs or providing twenty-seven Tier-1/POCT CD4 services.
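
    The per-tier comparison reduces to a cost-per-result calculation: annualized equipment and staffing costs spread over annual test volume, plus per-test reagent and consumable costs. The sketch below uses illustrative figures that are not the study's data, and the final loop mimics a one-way sensitivity analysis by varying one input while holding the others fixed.

    ```python
    def cost_per_result(equipment_annualized, staffing_annual, reagent_per_test,
                        consumable_per_test, tests_per_year):
        """Cost per CD4 result for one tier: fixed costs spread over volume + variable costs."""
        fixed = (equipment_annualized + staffing_annual) / tests_per_year
        return fixed + reagent_per_test + consumable_per_test

    # Illustrative tiers (all dollar figures and volumes are placeholders, not the study's data)
    tiers = {
        "Tier-1/POC":       dict(equipment_annualized=3_000, staffing_annual=9_000,
                                 reagent_per_test=8.0, consumable_per_test=1.0,
                                 tests_per_year=700),
        "Tier-3/community": dict(equipment_annualized=20_000, staffing_annual=60_000,
                                 reagent_per_test=4.0, consumable_per_test=0.5,
                                 tests_per_year=30_000),
    }
    for name, params in tiers.items():
        print(name, round(cost_per_result(**params), 2))

    # One-way sensitivity: vary reagent price +/-25% for Tier-3 while holding the rest fixed
    base = tiers["Tier-3/community"]
    for factor in (0.75, 1.0, 1.25):
        p = dict(base, reagent_per_test=base["reagent_per_test"] * factor)
        print("reagent x", factor, "->", round(cost_per_result(**p), 2))
    ```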

  6. Satellite provided customer premise services: A forecast of potential domestic demand through the year 2000. Volume 3: Appendices

    NASA Technical Reports Server (NTRS)

    Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Kaushal, D.; Al-Kinani, G.

    1983-01-01

    Voice applications, data applications, video applications, impacted baseline forecasts, market distribution, potential CPS (customers premises services) user classes, net long haul forecasts, CPS cost analysis, overall satellite forecast, CPS satellite market, Ka-band CPS satellite forecast, nationwide traffic distribution model, and intra-urban topology are discussed.

  7. Satellite provided customer premise services: A forecast of potential domestic demand through the year 2000. Volume 3: Appendices

    NASA Astrophysics Data System (ADS)

    Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Kaushal, D.; Al-Kinani, G.

    1983-08-01

    Voice applications, data applications, video applications, impacted baseline forecasts, market distribution, potential CPS (customers premises services) user classes, net long haul forecasts, CPS cost analysis, overall satellite forecast, CPS satellite market, Ka-band CPS satellite forecast, nationwide traffic distribution model, and intra-urban topology are discussed.

  8. Combined Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.; Miller, James J.

    2015-01-01

    Besides providing position, navigation, and timing (PNT) services to traditional terrestrial and airborne users, GPS is also being increasingly used as a tool to enable precision orbit determination, precise time synchronization, real-time spacecraft navigation, and three-axis attitude control of Earth orbiting satellites. With additional Global Navigation Satellite System (GNSS) constellations being replenished and coming into service (GLONASS, Beidou, and Galileo), it will become possible to benefit from greater signal availability and robustness by using evolving multi-constellation receivers. The paper, "GPS in the Space Service Volume," presented at the ION GNSS 19th International Technical Meeting in 2006 (Ref. 1), defined the Space Service Volume, and analyzed the performance of GPS out to seventy thousand kilometers. This paper will report a similar analysis of the signal coverage of GPS in the space domain; however, the analyses will also consider signal coverage from each of the additional GNSS constellations noted earlier to specifically demonstrate the expected benefits to be derived from using GPS in conjunction with other foreign systems. The Space Service Volume is formally defined as the volume of space between three thousand kilometers altitude and geosynchronous altitude circa 36,000 km, as compared with the Terrestrial Service Volume between 3,000 km and the surface of the Earth. In the Terrestrial Service Volume, GNSS performance is the same as on or near the Earth's surface due to satellite vehicle availability and geometry similarities. The core GPS system has thereby established signal requirements for the Space Service Volume as part of technical Capability Development Documentation (CDD) that specifies system performance. Besides the technical discussion, we also present diplomatic efforts to extend the GPS Space Service Volume concept to other PNT service providers in an effort to assure that all space users will benefit from the enhanced interoperability of GNSS services in the space domain. A separate paper presented at the conference covers the individual GNSS performance parameters for respective Space Service Volumes.

  9. Modeling of the thermal physical process and study on the reliability of linear energy density for selective laser melting

    NASA Astrophysics Data System (ADS)

    Xiang, Zhaowei; Yin, Ming; Dong, Guanhua; Mei, Xiaoqin; Yin, Guofu

    2018-06-01

    A finite element model considering volume shrinkage with the powder-to-dense process of the powder layer in selective laser melting (SLM) is established. A comparison between models that do and do not consider volume shrinkage or the powder-to-dense process is carried out. Further, a parametric analysis of laser power and scan speed is conducted and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is an effective method with better accuracy in terms of the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating rate and cooling rate increase with increasing scan speed at constant laser power, and likewise increase with increasing laser power at constant scan speed. The simulation and experimental results reveal that linear energy density is not always reliable as a design parameter in SLM.
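
    For readers unfamiliar with the term, linear energy density is conventionally the ratio of laser power to scan speed. The short sketch below just evaluates that ratio for a few power/speed pairs, which makes the paper's point concrete: very different (P, v) combinations can share the same linear energy density while producing different melt-pool behaviour. The parameter values are illustrative, not the study's settings.

    ```python
    # Linear energy density E = P / v (J/mm when P is in W and v in mm/s).
    # The (power, speed) pairs are illustrative, not the study's settings.
    pairs_w_mm_s = [(100, 400), (200, 800), (300, 1200)]  # all share E = 0.25 J/mm

    for power_w, speed_mm_s in pairs_w_mm_s:
        e_lin = power_w / speed_mm_s
        print(f"P = {power_w:3d} W, v = {speed_mm_s:4d} mm/s -> E = {e_lin:.2f} J/mm")
    ```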

  10. Coupling groundwater and riparian vegetation models to assess effects of reservoir releases

    USGS Publications Warehouse

    Springer, Abraham E.; Wright, Julie M.; Shafroth, Patrick B.; Stromberg, Juliet C.; Patten, Duncan T.

    1999-01-01

    Although riparian areas in the arid southwestern United States are critical for maintaining species diversity, their extent and health have been declining since Euro‐American settlement. The purpose of this study was to develop a methodology to evaluate the potential for riparian vegetation restoration and groundwater recharge. A numerical groundwater flow model was coupled with a conceptual riparian vegetation model to predict hydrologic conditions favorable to maintaining riparian vegetation downstream of a reservoir. A Geographic Information System (GIS) was used for this one‐way coupling. Constant and seasonally varying releases from the dam were simulated using volumes anticipated to be permitted by a regional water supplier. Simulations indicated that seasonally variable releases would produce surface flow 5.4–8.5 km below the dam in a previously dry reach. Using depth to groundwater simulations from the numerical flow model with conceptual models of depths to water necessary for maintenance of riparian vegetation, the GIS analysis predicted a 5‐ to 6.5‐fold increase in the area capable of sustaining riparian vegetation.

  11. Partial-depth lock-release flows

    NASA Astrophysics Data System (ADS)

    Khodkar, M. A.; Nasr-Azadani, M. M.; Meiburg, E.

    2017-06-01

    We extend the vorticity-based modeling concept for stratified flows introduced by Borden and Meiburg [Z. Borden and E. Meiburg, J. Fluid Mech. 726, R1 (2013), 10.1017/jfm.2013.239] to unsteady flow fields that cannot be rendered quasisteady by a change of reference frames. Towards this end, we formulate a differential control volume balance for the conservation of mass and vorticity in the fully unsteady parts of the flow, which we refer to as the differential vorticity model. We furthermore show that with the additional assumptions of locally uniform parallel flow within each layer, the unsteady vorticity modeling approach reproduces the familiar two-layer shallow-water equations. To evaluate its accuracy, we then apply the vorticity model approach to partial-depth lock-release flows. Consistent with the shallow water analysis of Rottman and Simpson [J. W. Rottman and J. E. Simpson, J. Fluid Mech. 135, 95 (1983), 10.1017/S0022112083002979], the vorticity model demonstrates the formation of a quasisteady gravity current front, a fully unsteady expansion wave, and a propagating bore that is present only if the lock depth exceeds half the channel height. When this bore forms, it travels with a velocity that does not depend on the lock height and the interface behind it is always at half the channel depth. We demonstrate that such a bore is energy conserving. The differential vorticity model gives predictions for the height and velocity of the gravity current and the bore, as well as for the propagation velocities of the edges of the expansion fan, as a function of the lock height. All of these predictions are seen to be in good agreement with the direct numerical simulation data and, where available, with experimental results. An energy analysis shows lock-release flows to be energy conserving only for the case of a full lock, whereas they are always dissipative for partial-depth locks.
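
    As an orienting sketch only, the single-layer, reduced-gravity form of the shallow-water equations (a simpler relative of the two-layer system that the unsteady vorticity model reproduces) can be written as below, with h the current depth, u its depth-averaged velocity, and g' the reduced gravity. This is not the two-layer system actually analyzed in the paper; it is included only to fix notation for readers new to the topic.

    ```latex
    \begin{aligned}
    \frac{\partial h}{\partial t} + \frac{\partial (u h)}{\partial x} &= 0, \\
    \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
      + g'\,\frac{\partial h}{\partial x} &= 0,
    \qquad g' = g\,\frac{\rho_2 - \rho_1}{\rho_2}.
    \end{aligned}
    ```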

  12. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  13. A case study of a team-based, quality-focused compensation model for primary care providers.

    PubMed

    Greene, Jessica; Hibbard, Judith H; Overton, Valerie

    2014-06-01

    In 2011, Fairview Health Services began replacing their fee-for-service compensation model for primary care providers (PCPs), which included an annual pay-for-performance bonus, with a team-based model designed to improve quality of care, patient experience, and (eventually) cost containment. In-depth interviews and an online survey of PCPs early after implementation of the new model suggest that it quickly changed the way many PCPs practiced. Most PCPs reported a shift in orientation toward quality of care, working more collaboratively with their colleagues and focusing on their full panel of patients. The majority reported that their quality of care had improved because of the model and that their colleagues' quality had improved, too. The comprehensive change did, however, result in lower fee-for-service billing and reductions in PCP satisfaction. While Fairview's compensation model is still a work in progress, their early experiences can provide lessons for other delivery systems seeking to reform PCP compensation.

  14. Comparison of depth-dose distributions of proton therapeutic beams calculated by means of logical detectors and ionization chamber modeled in Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    Pietrzak, Robert; Konefał, Adam; Sokół, Maria; Orlef, Andrzej

    2016-08-01

    The success of proton therapy depends strongly on the precision of treatment planning. Dose distribution in biological tissue may be obtained from Monte Carlo simulations using various scientific codes, making it possible to perform very accurate calculations. However, there are many factors affecting the accuracy of modeling. One of them is the structure of the objects, called bins, that register the dose. In this work the influence of bin structure on the dose distributions was examined. The MCNPX code calculations of the Bragg curve for a 60 MeV proton beam were done in two ways: using simple logical detectors, i.e., volumes defined in water, and using a precise model of the ionization chamber used in clinical dosimetry. The results of the simulations were verified experimentally in a water phantom with a Markus ionization chamber. The average local dose difference between the measured relative doses in the water phantom and those calculated by means of the logical detectors was 1.4% in the first 25 mm, whereas over the full depth range this difference was 1.6%, with a maximum uncertainty in the calculations of less than 2.4% and a maximum measuring error of 1%. In the case of the relative doses calculated with the ionization chamber model, this average difference was somewhat greater, being 2.3% at depths up to 25 mm and 2.4% over the full range of depths, with a maximum uncertainty in the calculations of 3%. In the dose calculations the ionization chamber model does not offer any additional advantages over the logical detectors. The results provided by both models are similar and in good agreement with the measurements; however, the logical detector approach is a more time-effective method.
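
    The comparison metric quoted above (an average local difference between measured and simulated relative depth-dose values) can be computed along the lines of the sketch below, assuming both curves have been normalized and resampled onto a common depth grid. The arrays here are stand-ins, not the paper's data.

    ```python
    import numpy as np

    # Measured and simulated relative doses sampled on a common depth grid (placeholders).
    depth_mm = np.linspace(0.0, 30.0, 61)
    dose_measured = np.exp(-((depth_mm - 28.0) ** 2) / 20.0)   # stand-in depth-dose curve
    noise = 0.015 * np.random.default_rng(0).standard_normal(depth_mm.size)
    dose_simulated = dose_measured * (1.0 + noise)

    # Average local (point-by-point) relative difference, in percent,
    # restricted to the first 25 mm as in the abstract.
    mask = depth_mm <= 25.0
    local_diff = np.abs(dose_simulated[mask] - dose_measured[mask]) / dose_measured[mask]
    print(f"average local dose difference (0-25 mm): {100 * local_diff.mean():.2f}%")
    ```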

  15. Acceptance of Swedish e-health services.

    PubMed

    Jung, Mary-Louise; Loria, Karla

    2010-11-16

    To investigate older people's acceptance of e-health services, in order to identify determinants of, and barriers to, their intention to use e-health. Based on one of the best-established models of technology acceptance, Technology Acceptance Model (TAM), in-depth exploratory interviews with twelve individuals over 45 years of age and of varying backgrounds are conducted. This investigation could find support for the importance of usefulness and perceived ease of use of the e-health service offered as the main determinants of people's intention to use the service. Additional factors critical to the acceptance of e-health are identified, such as the importance of the compatibility of the services with citizens' needs and trust in the service provider. Most interviewees expressed positive attitudes towards using e-health and find these services useful, convenient, and easy to use. E-health services are perceived as a good complement to traditional health care service delivery, even among older people. These people, however, need to become aware of the e-health alternatives that are offered to them and the benefits they provide.

  16. Acceptance of Swedish e-health services

    PubMed Central

    Jung, Mary-Louise; Loria, Karla

    2010-01-01

    Objective: To investigate older people’s acceptance of e-health services, in order to identify determinants of, and barriers to, their intention to use e-health. Method: Based on one of the best-established models of technology acceptance, Technology Acceptance Model (TAM), in-depth exploratory interviews with twelve individuals over 45 years of age and of varying backgrounds are conducted. Results: This investigation could find support for the importance of usefulness and perceived ease of use of the e-health service offered as the main determinants of people’s intention to use the service. Additional factors critical to the acceptance of e-health are identified, such as the importance of the compatibility of the services with citizens’ needs and trust in the service provider. Most interviewees expressed positive attitudes towards using e-health and find these services useful, convenient, and easy to use. Conclusion: E-health services are perceived as a good complement to traditional health care service delivery, even among older people. These people, however, need to become aware of the e-health alternatives that are offered to them and the benefits they provide. PMID:21289860

  17. Refugee Resettlement: Models in Action.

    ERIC Educational Resources Information Center

    Lanphier, Michael

    1983-01-01

    Analyzes service delivery, resource allocation, sponsorship, and other practices of programs for Indochinese refugee resettlement in France, Canada, and the United States, according to a model of refugee resettlement that considers two major variables: (1) volume of refugee intake (large or moderate) and (2) type of adaptation emphasis (economic…

  18. The effect of volume phase changes, mass transport, sunlight penetration, and densification on the thermal regime of icy regoliths

    NASA Technical Reports Server (NTRS)

    Fanale, Fraser P.; Salvail, James R.; Matson, Dennis L.; Brown, Robert H.

    1990-01-01

    The present quantitative modeling of convective, condensational, and sublimational effects on porous ice crust volumes subjected to solar radiation encompasses the penetration of insolation through ice that is translucent in the visible bandpass but opaque in the IR bandpass. Quasi-steady-state temperatures, H2O mass fluxes, and ice mass-density change rates are computed as functions of time of day and ice depth. When the effects of latent heat and mass transport are included in the model, the enhancement of near-surface temperature due to the 'solid-state greenhouse effect' is substantially diminished. When latent heat, mass transport, and densification effects are considered, however, a significant solid-state greenhouse effect is shown to be compatible with both morphological evidence for high crust strengths and icy shell decoupling from the lithosphere.

  19. [Spectrometric assessment of thyroid depth within the radioiodine test].

    PubMed

    Rink, T; Bormuth, F-J; Schroth, H-J; Braun, S; Zimny, M

    2005-01-01

    The aim of this study is the validation of a simple method for evaluating the depth of the target volume within the radioiodine test by analyzing the emitted iodine-131 energy spectrum. In a total of 250 patients (102 with a solitary autonomous nodule, 66 with multifocal autonomy, 29 with disseminated autonomy, 46 with Graves' disease, 6 for reducing goiter volume and 1 with only partly resectable papillary thyroid carcinoma), simultaneous uptake measurements in the Compton scatter (210 +/- 110 keV) and photopeak (364 -45/+55 keV) windows were performed over one minute, 24 hours after application of the 3 MBq test dose, with subsequent calculation of the respective count ratios. Measurements with a water-filled plastic neck phantom were carried out to establish the relationship between these quotients and the average source depth and to obtain a calibration curve for calculating the depth of the target volume in the 250 patients for comparison with the sonographic reference data. Another calibration curve was obtained by evaluating the results of 125 randomly selected patient measurements to calculate the source depth in the other half of the group. The phantom measurements revealed a highly significant correlation (r = 0.99) between the count ratios and the source depth. Using these calibration data, a good relationship (r = 0.81, average deviation 6 mm, corresponding to 22%) between the spectrometric and the sonographic depths was obtained. When using the calibration curve resulting from the 125 patient measurements, the average deviation in the other half of the group was only 3 mm (12%). There was no difference between the disease groups. The described method allows an easy-to-use depth correction of the uptake measurements, providing good results.
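
    The calibration step described above, mapping the scatter-to-photopeak count ratio to an average source depth, can be mimicked with a simple least-squares fit once phantom measurements are available. The sketch below fits a straight line for illustration; the phantom data points and the exact form of the calibration function are assumptions, not the authors' values.

    ```python
    import numpy as np

    # Hypothetical phantom calibration: scatter/photopeak count ratios measured
    # at known source depths (cm). These numbers are illustrative only.
    depth_cm = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    count_ratio = np.array([0.42, 0.55, 0.69, 0.81, 0.95])

    # Fit ratio -> depth with a first-order polynomial; the paper reports a
    # highly linear relationship (r = 0.99) but its exact curve is not given here.
    coeffs = np.polyfit(count_ratio, depth_cm, deg=1)
    calibrate = np.poly1d(coeffs)

    patient_ratio = 0.62
    print(f"estimated thyroid depth: {calibrate(patient_ratio):.1f} cm")
    ```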

  20. Suspended sediment concentration in the Lower Sea Scheldt (Belgium): long term trends and relation to mud disposal

    NASA Astrophysics Data System (ADS)

    Depreiter, Davy; van Holland, Gijsbert; Lanckriet, Thijs; Beirinckx, Kirsten; Vanlede, Joris; Maris, Tom

    2015-04-01

    In this presentation, results from different monitoring and research projects (OMES, MONEOS, Flexible Disposal and Marine-Fluvial mud ratio) will be integrated to increase insight into the trends and the relation between mud disposal and the increasing suspended sediment concentrations (SSC) in the Lower Sea Scheldt. In the Scheldt Estuary, major projects have been carried out in the past decade, among which the third deepening of the navigation channel and the opening of the Deurganck dock. Maintenance dredging is carried out to guarantee a minimum navigation depth. A rising trend in the volume of mud dredged in the Lower Sea Scheldt has been observed since 2006, the year after the opening of the Deurganck Dock. The trend is explained by increasing mud volumes dredged in this dock and on a nearby sill. This volume culminated in 2011 (4.8 million m³) when the depth of this dock was increased to its design depth. The dredged mud is disposed of upstream, where it is quickly resuspended. Near the mud disposal location, yearly averaged SSC (measured at 4.5 m above bed) tripled between 2005 and 2011 (108 to 348 mg/L), and SSC peaks increased even more strongly. A multivariate regression model indicated a strong correlation between the volumes and timing of mud disposal and the observed SSC. Mud disposal volumes and SSC were somewhat lower again after 2011. The SSC increase raises an alert with regard to the risk of a regime shift towards a hyperturbid system. Increasing SSC may indeed decrease the hydraulic resistance, initiating a feedback mechanism that results in further increasing SSC values. It thus appears that more mud is being circulated: the Deurganck dock acts as a mud sink, from which the mud is, after dredging and disposal, resuspended. The mud may have different sources: fluvial or marine influx. The increasing SSC might not only be related to the mud disposal, but also to changing tidal characteristics that enhance the influx of marine suspended sediments. To elucidate this, an analysis of the marine fraction in soil and suspended sediments has also been performed.
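
    The multivariate regression mentioned above can be set up, in its simplest form, as an ordinary least-squares fit of yearly SSC on mud-disposal volume plus any other candidate drivers. The sketch below shows only the mechanics with numpy; the data values are invented placeholders and the choice of predictors is an assumption, not the project's actual model.

    ```python
    import numpy as np

    # Placeholder yearly data: disposal volume (million m3), marine influx proxy, SSC (mg/L).
    disposal = np.array([2.1, 2.6, 3.0, 3.8, 4.3, 4.8, 4.2])
    marine_proxy = np.array([0.9, 1.0, 1.1, 1.0, 1.2, 1.3, 1.2])
    ssc = np.array([110, 140, 175, 230, 290, 348, 300])

    # Ordinary least squares with an intercept column.
    X = np.column_stack([np.ones_like(disposal), disposal, marine_proxy])
    beta, *_ = np.linalg.lstsq(X, ssc, rcond=None)
    predicted = X @ beta

    r = np.corrcoef(predicted, ssc)[0, 1]
    print("coefficients (intercept, disposal, marine proxy):", np.round(beta, 1))
    print(f"correlation between fitted and observed SSC: r = {r:.2f}")
    ```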

  1. Photoacoustic and ultrasound imaging of cancellous bone tissue.

    PubMed

    Yang, Lifeng; Lashkari, Bahman; Tan, Joel W Y; Mandelis, Andreas

    2015-07-01

    We used ultrasound (US) and photoacoustic (PA) imaging modalities to characterize cattle trabecular bones. The PA signals were generated with an 805-nm continuous-wave laser, chosen for its deep optical penetration. The detector for both modalities was a 2.25-MHz US transducer with a lateral resolution of ~1 mm at its focal point. Using a lateral pixel size much larger than the size of the trabeculae, raster scanning generated PA images related to the averaged values of the optical and thermoelastic properties, as well as density, in the focal volume. US backscatter yielded images related to mechanical properties and density in the focal volume. The depth of interest was selected by time-gating the signals for both modalities. The raster-scanned PA and US images were compared with microcomputed tomography (μCT) images averaged over the same volume to generate a spatial resolution similar to that of US and PA. The comparison revealed correlations of both the PA and US modalities with the mineral volume fraction of the bone tissue. Various features and properties of these modalities, such as detectable depth, resolution, and sensitivity, are discussed.

  2. Engineer Modeling Study. Volume II. Users Manual.

    DTIC Science & Technology

    1982-09-01

    Distribution Center, Digital Equipment Corporation, 1980). The following paragraphs briefly describe each of the major input sections...abbreviation 3. A sequence number for post-processing 4. Clock time 5. Order number pointer (six digits) 6. Job number pointer (six digits) 7. Unit number...KIT) Users Manual (Boeing Computer Services, Inc., 1977). VAX/VMS Users Manual, Volume 3A (Software Distribution Center, Digital Equipment

  3. Report of Baseline Data: Evaluation of the Child and Family Resource Program. Volume II.

    ERIC Educational Resources Information Center

    Affholter, Dennis; And Others

    This volume reports the baseline (1978) data to be used in the 6-year longitudinal evaluation of the Child and Family Resource Program (CFRP). The CFRP, funded in 11 sites across the country as a Head Start demonstration program, is intended to develop models for providing services to low-income families with children from birth to eight years.…

  4. Pack-and-Go Delivery Service: A Multi-Component Cost-Volume-Profit (CVP) Learning Resource

    ERIC Educational Resources Information Center

    Stout, David E.

    2014-01-01

    This educational case, in two parts (A and B), requires students to assume the role of a business consultant and to use Excel to develop a profit-planning or a cost-volume-profit (CVP) model for a package-delivery company opportunity currently being evaluated by a client. The name of the proposed business is Pack-and-Go, which would provide an…
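
    For readers outside accounting, a cost-volume-profit model of the kind the case asks students to build reduces to: profit = (price − variable cost per unit) × volume − fixed costs, with break-even volume = fixed costs / contribution margin per unit. The sketch below shows that arithmetic for a package-delivery setting; the prices and costs are hypothetical, not figures from the case.

    ```python
    # Minimal cost-volume-profit (CVP) model for a package-delivery service.
    # All dollar figures are hypothetical illustrations, not the case data.

    PRICE_PER_DELIVERY = 12.00          # revenue per package
    VARIABLE_COST_PER_DELIVERY = 7.50   # fuel, packaging, per-drop labor
    FIXED_COSTS_PER_MONTH = 9_000.00    # vehicle lease, insurance, dispatch software

    contribution_margin = PRICE_PER_DELIVERY - VARIABLE_COST_PER_DELIVERY
    break_even_volume = FIXED_COSTS_PER_MONTH / contribution_margin

    def monthly_profit(deliveries: int) -> float:
        """Profit = contribution margin x volume - fixed costs."""
        return contribution_margin * deliveries - FIXED_COSTS_PER_MONTH

    print(f"break-even volume: {break_even_volume:.0f} deliveries/month")
    for volume in (1_500, 2_000, 2_500):
        print(f"{volume} deliveries -> profit ${monthly_profit(volume):,.0f}")
    ```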

  5. Australian mental health consumers' contributions to the evaluation and improvement of recovery-oriented service provision.

    PubMed

    Marshall, Sarah L; Oades, Lindsay G; Crowe, Trevor P

    2010-01-01

    One key component of recovery-oriented mental health services, typically overlooked, involves genuine collaboration between researchers and consumers to evaluate and improve services delivered within a recovery framework. Eighteen mental health consumers working with staff who had received training in the Collaborative Recovery Model (CRM) took part in in-depth focus group meetings, of approximately 2.5 hours each, to generate feedback to guide improvement of the CRM and its use in mental health services. Consumers identified clear avenues for improvement for the CRM both specific to the model and broadly applicable to recovery-oriented service provision. Findings suggest consumers want to be more engaged and empowered in the use of the CRM from the outset. Improved sampling procedures may have led to the identification of additional dissatisfied consumers. Collaboration with mental health consumers in the evaluation and improvement of recovery-oriented practice is crucial with an emphasis on rebuilding mental health services that are genuinely oriented to support recovery.

  6. Agriculture Supplies & Services. Volume 3 of 3.

    ERIC Educational Resources Information Center

    Kansas State Univ., Manhattan.

    The third of three volumes included in a secondary agricultural supplies and services curriculum guide, this volume contains twenty-five units of instruction in the area of agricultural mechanics. Among the unit topics included are (1) Farm Safety, (2) Ignition Systems, (3) Servicing Wheel Bearings, (4) Oxyacetylene Cutting, (5) Servicing the…

  7. MAC/GMC 4.0 User's Manual: Keywords Manual. Volume 2

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2002-01-01

    This document is the second volume in the three volume set of User's Manuals for the Micromechanics Analysis Code with Generalized Method of Cells Version 4.0 (MAC/GMC 4.0). Volume 1 is the Theory Manual, this document is the Keywords Manual, and Volume 3 is the Example Problem Manual. MAC/GMC 4.0 is a composite material and laminate analysis software program developed at the NASA Glenn Research Center. It is based on the generalized method of cells (GMC) micromechanics theory, which provides access to the local stress and strain fields in the composite material. This access grants GMC the ability to accommodate arbitrary local models for inelastic material behavior and various types of damage and failure analysis. MAC/GMC 4.0 has been built around GMC to provide the theory with a user-friendly framework, along with a library of local inelastic, damage, and failure models. Further, applications of simulated thermo-mechanical loading, generation of output results, and selection of architectures to represent the composite material have been automated in MAC/GMC 4.0. Finally, classical lamination theory has been implemented within MAC/GMC 4.0 wherein GMC is used to model the composite material response of each ply. Consequently, the full range of GMC composite material capabilities is available for analysis of arbitrary laminate configurations as well. This volume describes the basic information required to use the MAC/GMC 4.0 software, including a 'Getting Started' section, and an in-depth description of each of the 22 keywords used in the input file to control the execution of the code.

  8. Model Offices: Flexible Options, Local Innovations.

    ERIC Educational Resources Information Center

    Perspective: Essays and Reviews of Issues in Employment Security and Employment and Training Programs, 1990

    1990-01-01

    This volume of an annual journal contains 17 articles that focus on model local offices of the employment security (ES) and training systems. The articles are arranged in three parts. Part I, on developing new initiatives, contains the following five articles: "A Public Employment Service for the 1990s" (Elizabeth Dole); "The…

  9. HYDRODYNAMIC SIMULATION OF THE UPPER POTOMAC ESTUARY.

    USGS Publications Warehouse

    Schaffranck, Raymond W.

    1986-01-01

    Hydrodynamics of the upper extent of the Potomac Estuary between Indian Head and Morgantown, Md., are simulated using a two-dimensional model. The model computes water-surface elevations and depth-averaged velocities by numerically integrating finite-difference forms of the equations of mass and momentum conservation using the alternating direction implicit method. The fundamental, non-linear, unsteady-flow equations, upon which the model is formulated, include additional terms to account for Coriolis acceleration and meteorological influences. Preliminary model/prototype data comparisons show agreement to within 9% for tidal flow volumes and phase differences within the measured-data-recording interval. Use of the model to investigate the hydrodynamics and certain aspects of transport within this Potomac Estuary reach is demonstrated.

  10. High-porosity Cenozoic carbonate rocks of South Florida: Progressive loss of porosity with depth

    USGS Publications Warehouse

    Halley, Robert B.; Schmoker, James W.

    1983-01-01

    Porosity measurements by borehole gravity meter in subsurface Cenozoic carbonates of south Florida reveal an extremely porous mass of limestone and dolomite which is transitional in total pore volume between typical porosity values for modern carbonate sediments and ancient carbonate rocks. A persistent decrease of porosity with depth, similar to that of chalks of the Gulf Coast, occurs in these rocks. We make no attempt to differentiate depositional or diagenetic facies which produce scatter in the porosity-depth relationship; the dominant data trends thus are functions of carbonate rocks in general rather than of particular carbonate facies. Carbonate strata with less than 20% porosity are absent from the rocks studied here. Aquifers and aquicludes cannot be distinguished on the basis of porosity. Although aquifers are characterized by great permeability and well-developed vuggy and even cavernous porosity in some intervals, they are not exceptionally porous when compared to other Tertiary carbonate rocks in south Florida. Permeability in these strata is governed more by the spatial distribution of pore space and matrix than by the total volume of porosity present. Dolomite is as porous as, or slightly less porous than, limestones in these rocks. This observation places limits on any model proposed for dolomitization and suggests that dolomitization does not take place by a simple ion-for-ion replacement of magnesium for calcium. Dolomitization may be selective for less porous limestone, or it may involve the incorporation of significant amounts of carbonate as well as magnesium into the rock. The great volume of pore space in these rocks serves to highlight the inefficiency of early diagenesis in reducing carbonate porosity and to emphasize the importance of later porosity reduction which occurs during the burial or late near-surface history of limestones and dolomites.

  11. Pre-Employment Laboratory Training. General Agricultural Mechanics Volume I. Instructional Materials.

    ERIC Educational Resources Information Center

    Texas A and M Univ., College Station. Vocational Instructional Services.

    This course outline, the first volume of a two-volume set, consists of lesson plans for pre-employment laboratory training in general agricultural mechanics. Covered in the 12 lessons included in this volume are selecting tractors and engines, diagnosing engine conditions, servicing electrical systems, servicing cooling systems, servicing fuel and…

  12. Health services for reproductive tract infections among female migrant workers in industrial zones in Ha Noi, Viet Nam: an in-depth assessment

    PubMed Central

    2012-01-01

    Background Rural-to-urban migration involves a high proportion of females because job opportunities for female migrants have increased in urban industrial areas. Those who migrate may be healthier than those staying in the village and may benefit from better health care services at the destination, but this 'healthy' effect can be reversed at the destination by migration-related health risk factors. The study aimed to explore the need for health care services for reproductive tract infections (RTIs) among female migrants working in the Sai Dong industrial zone, as well as their utilization of such services. Methods The cross-sectional study employed a mixed-methods approach. A cohort of 300 female migrants was interviewed to collect quantitative data. Two focus groups and 20 in-depth interviews were conducted to collect qualitative data. We used frequency and cross-tabulation techniques to analyze the quantitative data, and the qualitative data were used for triangulation and to provide more in-depth information. Results The need for RTI health care services was high, as 25% of participants had RTI syndromes. Only 21.6% of female migrants with RTI syndromes had ever sought health care services. Barriers preventing migrants from accessing services were traditional values, long working hours, lack of information, and the high cost of services. Employers had limited interest in the reproductive health of female migrants, and there was ineffective collaboration between the local health system and enterprises. These barriers were partly caused by a lack of health promotion programs suitable for migrants. Most respondents needed more information on RTIs and preferred to receive it from their employers, since they commonly work shifts and spend most of their daytime at work. Conclusion While RTIs are a common health problem among female migrant workers in industrial zones, female migrants face many obstacles in accessing RTI care services. The findings from this study will help to design intervention models for RTI among this vulnerable group, such as communication for behavioural impact of RTI health care, fostered collaboration between local health care services and employer enterprises, and on-site service (e.g. local or enterprise health clinics) strengthening. PMID:22369718

  13. High-Resolution Assimilation of GRACE Terrestrial Water Storage Observations to Represent Local-Scale Water Table Depths

    NASA Astrophysics Data System (ADS)

    Stampoulis, D.; Reager, J. T., II; David, C. H.; Famiglietti, J. S.; Andreadis, K.

    2017-12-01

    Despite the numerous advances in hydrologic modeling and improvements in Land Surface Models, an accurate representation of the water table depth (WTD) still does not exist. Data assimilation of observations from the joint NASA and DLR mission Gravity Recovery and Climate Experiment (GRACE) leads to statistically significant improvements in the accuracy of hydrologic models, ultimately resulting in more reliable estimates of water storage. However, the usually shallow groundwater compartment of the models presents a problem for GRACE assimilation techniques, as these satellite observations also account for much deeper aquifers. To improve the accuracy of groundwater estimates and allow the representation of the WTD at fine spatial scales, we implemented a novel approach that enables a large-scale data integration system to assimilate GRACE data. This was achieved by augmenting the Variable Infiltration Capacity (VIC) hydrologic model, which is the core component of the Regional Hydrologic Extremes Assessment System (RHEAS), a high-resolution modeling framework developed at the Jet Propulsion Laboratory (JPL) for hydrologic modeling and data assimilation. The model has insufficient subsurface characterization; therefore, to reproduce groundwater variability not only at shallow depths but also in deep aquifers, and to allow GRACE assimilation, a fourth soil layer of varying depth (∼1000 meters) was added in VIC as the bottom layer. To initialize a water table in the model we used gridded global WTD data at 1 km resolution, which were spatially aggregated to match the model's resolution. Simulations were then performed to test the augmented model's ability to capture seasonal and inter-annual trends of groundwater. The 4-layer version of VIC was run with and without assimilating GRACE Total Water Storage anomalies (TWSA) over the Central Valley in California. This is the first-ever assimilation of GRACE TWSA for the determination of realistic water table depths at the fine scales required for local water management. In addition, Open Loop and GRACE-assimilation simulations of water table depth were compared to in-situ data over the state of California, derived from observation wells operated and maintained by the U.S. Geological Survey.

  14. Developing a change model for peer worker interventions in mental health services: a qualitative research study.

    PubMed

    Gillard, S; Gibson, S L; Holley, J; Lucock, M

    2015-10-01

    A range of peer worker roles are being introduced into mental health services internationally. There is some evidence that attests to the benefits of peer workers for the people they support, but formal trial evidence is inconclusive, in part because the change model underpinning peer support-based interventions is underdeveloped. Complex intervention evaluation guidance suggests that understandings of how an intervention is associated with change in outcomes should be modelled, theoretically and empirically, before the intervention can be robustly evaluated. This paper aims to model the change mechanisms underlying peer worker interventions. In a qualitative, comparative case study of ten peer worker initiatives in statutory and voluntary sector mental health services in England, in-depth interviews were carried out with 71 peer workers, service users, staff and managers, exploring their experiences of peer working. Using a Grounded Theory approach we identified core processes within the peer worker role that were productive of change for service users supported by peer workers. Key change mechanisms were: (i) building trusting relationships based on shared lived experience; (ii) role-modelling individual recovery and living well with mental health problems; (iii) engaging service users with mental health services and the community. Mechanisms could be further explained by theoretical literature on role-modelling and relationship in mental health services. We were able to model process and downstream outcomes potentially associated with peer worker interventions. An empirically and theoretically grounded change model can be articulated that usefully informs the development, evaluation and planning of peer worker interventions.

  15. Mountain-Plains Handbook: The Design and Operation of a Residential Family Based Education Program. Appendix. Supplement II to Volume 5. Operational Support: Administrative Services Division.

    ERIC Educational Resources Information Center

    Anderson, Newell B.; And Others

    One of two supplements which accompany chapter 5 of "Mountain-Plains Handbook: The Design and Operation of a Residential, Family Oriented Career Education Model" (CE 014 630), this document contains specific information concerning the following components of the administrative services division: purchasing, property control, and…

  16. Establishing Preventive Services. Healthy Children 2010. Issues in Children's and Families' Lives, Vol. 9. The John & Kelly Hartman Series.

    ERIC Educational Resources Information Center

    Weissberg, Roger P., Ed.; Gullotta, Thomas P., Ed.; Hampton, Robert L., Ed.; Ryan, Bruce A., Ed.; Adams, Gerald R., Ed.

    Young people are facing greater risks to their current and future health and social development, as shown by involvement of younger and younger children in risk-taking behaviors. This volume emphasizes developmentally and contextually appropriate prevention service delivery models and identifies state-of-the-art, empirically based strategies to…

  17. Mountain-Plains Handbook: The Design and Operation of a Residential Family Based Education Program. Appendix. Supplement I to Volume 5. Operational Support: Administrative Services Division.

    ERIC Educational Resources Information Center

    Anderson, Newell B.; And Others

    One of two supplements which accompany chapter 5 of "Mountain-Plains Handbook: The Design and Operation of a Residential, Family Oriented Career Education Model" (CE 014 630), this document contains specific information concerning the reprographic and personnel components of the administrative services division. Several job descriptions…

  18. Atlas of depth-duration frequency of precipitation annual maxima for Texas

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.

    2004-01-01

    Ninety-six maps depicting the spatial variation of the depth-duration frequency of precipitation annual maxima for Texas are presented. The recurrence intervals represented are 2, 5, 10, 25, 50, 100, 250, and 500 years. The storm durations represented are 15 and 30 minutes; 1, 2, 3, 6, and 12 hours; and 1, 2, 3, 5, and 7 days. The maps were derived using geographically referenced parameter maps of probability distributions used in previously published research by the U.S. Geological Survey to model the magnitude and frequency of precipitation annual maxima for Texas. The maps in this report apply that research and update depth-duration frequency of precipitation maps available in earlier studies done by the National Weather Service.
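
    One way to reproduce the kind of depth value such maps encode is to evaluate the quantile function of a fitted extreme-value distribution at the non-exceedance probability 1 − 1/T for recurrence interval T. The sketch below does this with a generalized extreme value (GEV) distribution via scipy; the distribution family and the parameter values are assumptions for illustration, not the parameters behind the published maps.

    ```python
    from scipy.stats import genextreme

    # Hypothetical GEV parameters for 24-hour precipitation annual maxima (inches)
    # at one location; the published maps carry their own fitted parameters.
    shape, loc, scale = -0.1, 3.2, 1.1

    for T in (2, 5, 10, 25, 50, 100, 250, 500):   # recurrence intervals, years
        depth = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
        print(f"{T:3d}-year, 24-hour depth: {depth:5.2f} in")
    ```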

  19. Navigation Performance of Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.

    2013-01-01

    This paper extends the results I reported at this year's ION International Technical Meeting on multi-constellation GNSS coverage by showing how the use of multi-constellation GNSS improves Geometric Dilution of Precision (GDOP). Originally developed to provide position, navigation, and timing for terrestrial users, GPS has found increasing use in space for precision orbit determination, precise time synchronization, real-time spacecraft navigation, and three-axis attitude control of Earth orbiting satellites. With additional Global Navigation Satellite Systems (GNSS) coming into service (GLONASS, Galileo, and Beidou) and the development of Satellite Based Augmentation Services, it is possible to obtain improved precision by using evolving multi-constellation receivers. The Space Service Volume (SSV) is formally defined as the volume of space between three thousand kilometers altitude and geosynchronous altitude (approximately 36,500 km), with the volume below three thousand kilometers defined as the Terrestrial Service Volume (TSV). The USA has established signal requirements for the Space Service Volume as part of the GPS Capability Development Documentation (CDD). Diplomatic efforts are underway to extend Space Service Volume commitments to the other Position, Navigation, and Timing (PNT) service providers in an effort to assure that all space users will benefit from the enhanced capabilities of interoperating GNSS services in the space domain.
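
    Geometric Dilution of Precision, the metric whose improvement is reported here, is computed from the unit line-of-sight vectors between the receiver and each visible satellite: build the geometry matrix H with rows [ux, uy, uz, 1], form (HᵀH)⁻¹, and take the square root of its trace. The sketch below shows that standard computation on a made-up set of line-of-sight vectors; it illustrates the textbook definition and is not the analysis tool used in the paper.

    ```python
    import numpy as np

    def gdop(unit_los_vectors):
        """Geometric Dilution of Precision from receiver-to-satellite unit vectors."""
        u = np.asarray(unit_los_vectors, dtype=float)
        H = np.hstack([u, np.ones((u.shape[0], 1))])   # rows: [ux, uy, uz, 1]
        Q = np.linalg.inv(H.T @ H)                      # covariance-shaping matrix
        return float(np.sqrt(np.trace(Q)))

    # Made-up unit line-of-sight vectors to five visible GNSS satellites.
    los = np.array([
        [ 0.0,  0.0,  1.0],
        [ 0.8,  0.0,  0.6],
        [-0.8,  0.0,  0.6],
        [ 0.0,  0.8,  0.6],
        [ 0.0, -0.8,  0.6],
    ])
    los /= np.linalg.norm(los, axis=1, keepdims=True)   # ensure unit length
    print(f"GDOP for this geometry: {gdop(los):.2f}")
    ```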

  20. Development of a ROTC/Army Career Commitment Model. Volume 2, Appendices

    DTIC Science & Technology

    1975-11-01

    Biological Sciences 3. Business Administration 4. General Teaching and Social Service 5. Humanities, Law, Social and Behavioral Sciences 6. Fine Arts...Army do you intend to join? 1. Adjutant General's Corps 2. Air Defense Artillery 3. Armor 4. Chemical Corps 5. Corps of Engineers 6. Field Artillery 7. Finance Corps 8. Infantry 9. Medical Service Corps 10. Military Intelligence 11. Military Police

  1. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion.

    PubMed

    Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao; Lu, Jun-Ying; Zeng, Yan-Hong; Meng, Fan-Jie; Cao, Bin; Zi, Xue-Rong; Han, Shu-Ming; Zhang, Yu-Huan

    2013-09-01

    Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precise measurement of pleural effusion volume still involves many challenges, and there is currently no recognized, accurate measurement method. The aims were to explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to analyze the correlation between the volume of a free pleural effusion and its different diameters. The 64-slice CT volume-rendering technique was used for measurement and analysis in three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extracted. Finally, the free pleural effusion volume was measured in 26 patients to analyze its correlation with the diameters of the effusion, which was then used to calculate regression equations. When the fluid volume of the self-made thoracic model measured by the 64-slice CT volume-rendering technique was compared with the actual injection volume, no significant difference was found (P = 0.836). For the 25 patients with drained pleural effusions, comparison of the volume reduction with the actual volume of the liquid extracted revealed no significant difference (P = 0.989). The following linear regression equation relates the pleural effusion volume (V), measured by the CT volume-rendering technique, to the greatest depth (d) of the pleural effusion: V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression relates the volume to the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0.000). The 64-slice CT volume-rendering technique can accurately measure the volume in pleural effusion patients, and a linear regression equation can be used to estimate the volume of a free pleural effusion.
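
    The two regression equations reported in the abstract can be applied directly to bedside measurements; the short sketch below simply wraps them as functions. The measurement units (assumed centimeters in, milliliters out) are an assumption on my part and should be checked against the original article before any practical use.

    ```python
    # Regression equations reported in the abstract (units assumed: cm in, mL out --
    # verify against the original article before relying on the numbers).

    def volume_from_depth(d_cm: float) -> float:
        """V = 158.16 * d - 116.01, using the greatest depth of the effusion."""
        return 158.16 * d_cm - 116.01

    def volume_from_diameters(l_cm: float, h_cm: float, d_cm: float) -> float:
        """V = 0.56 * (l * h * d) + 39.44, using the product of three diameters."""
        return 0.56 * (l_cm * h_cm * d_cm) + 39.44

    print(f"depth-only estimate:     {volume_from_depth(6.0):7.1f}")
    print(f"three-diameter estimate: {volume_from_diameters(12.0, 10.0, 6.0):7.1f}")
    ```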

  2. Evaluation of the Snow Simulations from the Community Land Model, Version 4 (CLM4)

    NASA Technical Reports Server (NTRS)

    Toure, Ally M.; Rodell, Matthew; Yang, Zong-Liang; Beaudoing, Hiroko; Kim, Edward; Zhang, Yongfei; Kwon, Yonghwan

    2015-01-01

    This paper evaluates the simulation of snow by the Community Land Model, version 4 (CLM4), the land model component of the Community Earth System Model, version 1.0.4 (CESM1.0.4). CLM4 was run in an offline mode forced with the corrected land-only replay of the Modern-Era Retrospective Analysis for Research and Applications (MERRA-Land) and the output was evaluated for the period from January 2001 to January 2011 over the Northern Hemisphere poleward of 30 deg N. Simulated snow-cover fraction (SCF), snow depth, and snow water equivalent (SWE) were compared against a set of observations including the Moderate Resolution Imaging Spectroradiometer (MODIS) SCF, the Interactive Multisensor Snow and Ice Mapping System (IMS) snow cover, the Canadian Meteorological Centre (CMC) daily snow analysis products, snow depth from the National Weather Service Cooperative Observer (COOP) program, and Snowpack Telemetry (SNOTEL) SWE observations. CLM4 SCF was converted into snow-cover extent (SCE) to compare with MODIS SCE. It showed good agreement, with a correlation coefficient of 0.91 and an average bias of -1.54 × 10² km². Overall, CLM4 agreed well with IMS snow cover, with the percentage of correctly modeled snow/no-snow being 94%. CLM4 snow depth and SWE agreed reasonably well with the CMC product, with the average bias (RMSE) of snow depth and SWE being 0.044 m (0.19 m) and -0.010 m (0.04 m), respectively. CLM4 underestimated SNOTEL SWE and COOP snow depth. This study demonstrates the need to improve the CLM4 snow estimates and constitutes a benchmark against which improvement of the model through data assimilation can be measured.
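
    The bias and RMSE statistics quoted above are straightforward to compute once model output and observations are paired in space and time; the sketch below shows the two formulas on placeholder arrays, mainly as a reminder of the sign convention (bias = mean of model minus observation).

    ```python
    import numpy as np

    # Placeholder paired samples of snow depth (m): model vs. observation.
    model = np.array([0.35, 0.50, 0.10, 0.80, 0.00, 0.25])
    obs   = np.array([0.30, 0.55, 0.15, 0.70, 0.05, 0.30])

    bias = np.mean(model - obs)                 # positive = model overestimates
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    print(f"bias = {bias:+.3f} m, RMSE = {rmse:.3f} m")
    ```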

  3. Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.

    PubMed

    Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît

    2011-01-01

    Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms providing an estimation of the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest-path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach is characterized by a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes high-resolution surface shape description of large macromolecules possible. Experimental results show that, compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
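
    A surface-based Travel Depth computation of the kind described can be phrased as a multi-source Dijkstra run over the surface mesh graph: vertices that touch the convex hull get distance zero, and distances propagate along mesh edges weighted by Euclidean length. The sketch below implements only that graph step with heapq; building the SES mesh, identifying hull-contact vertices, and the refinements of the published algorithm are outside its scope.

    ```python
    import heapq
    from collections import defaultdict
    from math import dist

    def travel_depth(vertices, edges, hull_contact):
        """Multi-source Dijkstra over a surface mesh.

        vertices:     list of (x, y, z) coordinates
        edges:        list of (i, j) index pairs connecting mesh vertices
        hull_contact: set of vertex indices lying on the convex hull (depth 0)
        """
        graph = defaultdict(list)
        for i, j in edges:
            w = dist(vertices[i], vertices[j])   # edge length as travel cost
            graph[i].append((j, w))
            graph[j].append((i, w))

        depth = {v: float("inf") for v in range(len(vertices))}
        for v in hull_contact:
            depth[v] = 0.0
        heap = [(0.0, v) for v in hull_contact]
        heapq.heapify(heap)

        while heap:
            d, v = heapq.heappop(heap)
            if d > depth[v]:
                continue                          # stale heap entry
            for nbr, w in graph[v]:
                nd = d + w
                if nd < depth[nbr]:
                    depth[nbr] = nd
                    heapq.heappush(heap, (nd, nbr))
        return depth

    # Toy mesh: a short chain of vertices, with vertex 0 touching the hull.
    verts = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, 1, 0)]
    edges = [(0, 1), (1, 2), (2, 3)]
    print(travel_depth(verts, edges, hull_contact={0}))
    ```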

  4. The cost structure of routine infant immunization services: a systematic analysis of six countries

    PubMed Central

    Geng, Fangli; Suharlim, Christian; Brenzel, Logan; Resch, Stephen C; Menzies, Nicolas A

    2017-01-01

    Little information exists on the cost structure of routine infant immunization services in low- and middle-income settings. Using a unique dataset of routine infant immunization costs from six countries, we estimated how costs were distributed across budget categories and programmatic activities, and investigated how the cost structure of immunization sites varied by country and site characteristics. The EPIC study collected data on routine infant immunization costs from 319 sites in Benin, Ghana, Honduras, Moldova, Uganda, Zambia, using a standardized approach. For each country, we estimated the economic costs of infant immunization by administrative level, budget category, and programmatic activity from a programme perspective. We used regression models to describe how costs within each category were related to site operating characteristics and efficiency level. Site-level costs (incl. vaccines) represented 77–93% of national routine infant immunization costs. Labour and vaccine costs comprised 14–69% and 13–69% of site-level cost, respectively. The majority of site-level resources were devoted to service provision (facility-based or outreach), comprising 48–78% of site-level costs across the six countries. Based on the regression analyses, sites with the highest service volume had a greater proportion of costs devoted to vaccines, with vaccine costs per dose relatively unaffected by service volume but non-vaccine costs substantially lower with higher service volume. Across all countries, more efficient sites (compared with sites with similar characteristics) had a lower cost share devoted to labour. The cost structure of immunization services varied substantially between countries and across sites within each country, and was related to site characteristics. The substantial variation observed in this sample suggests differences in operating model for otherwise similar sites, and further understanding of these differences could reveal approaches to improve efficiency and performance of immunization sites. PMID:28575193

  5. The cost structure of routine infant immunization services: a systematic analysis of six countries.

    PubMed

    Geng, Fangli; Suharlim, Christian; Brenzel, Logan; Resch, Stephen C; Menzies, Nicolas A

    2017-10-01

    Little information exists on the cost structure of routine infant immunization services in low- and middle-income settings. Using a unique dataset of routine infant immunization costs from six countries, we estimated how costs were distributed across budget categories and programmatic activities, and investigated how the cost structure of immunization sites varied by country and site characteristics. The EPIC study collected data on routine infant immunization costs from 319 sites in Benin, Ghana, Honduras, Moldova, Uganda, Zambia, using a standardized approach. For each country, we estimated the economic costs of infant immunization by administrative level, budget category, and programmatic activity from a programme perspective. We used regression models to describe how costs within each category were related to site operating characteristics and efficiency level. Site-level costs (incl. vaccines) represented 77-93% of national routine infant immunization costs. Labour and vaccine costs comprised 14-69% and 13-69% of site-level cost, respectively. The majority of site-level resources were devoted to service provision (facility-based or outreach), comprising 48-78% of site-level costs across the six countries. Based on the regression analyses, sites with the highest service volume had a greater proportion of costs devoted to vaccines, with vaccine costs per dose relatively unaffected by service volume but non-vaccine costs substantially lower with higher service volume. Across all countries, more efficient sites (compared with sites with similar characteristics) had a lower cost share devoted to labour. The cost structure of immunization services varied substantially between countries and across sites within each country, and was related to site characteristics. The substantial variation observed in this sample suggests differences in operating model for otherwise similar sites, and further understanding of these differences could reveal approaches to improve efficiency and performance of immunization sites.

  6. Long-term hydrological simulation based on the Soil Conservation Service curve number

    NASA Astrophysics Data System (ADS)

    Mishra, Surendra Kumar; Singh, Vijay P.

    2004-05-01

    Presenting a critical review of daily flow simulation models based on the Soil Conservation Service curve number (SCS-CN), this paper introduces a more versatile model based on the modified SCS-CN method, which specializes into seven cases. The proposed model was applied to the Hemavati watershed (area = 600 km2) in India and was found to yield satisfactory results in both calibration and validation. The model conserved monthly and annual runoff volumes satisfactorily. A sensitivity analysis of the model parameters was performed, including the effect of variation in storm duration. Finally, to investigate the model components, all seven variants of the modified version were tested for their suitability.
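
    The SCS-CN relationship that underlies all of the models reviewed here converts an event rainfall depth P into direct runoff Q via the potential maximum retention S derived from the curve number. The sketch below implements the standard form with the common initial-abstraction ratio Ia = 0.2S; the modified version proposed in the paper adds further refinements that are not reproduced here.

    ```python
    def scs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
        """Direct runoff depth (mm) from the standard SCS-CN method.

        p_mm:     event rainfall depth in millimetres
        cn:       curve number (0 < CN <= 100)
        ia_ratio: initial abstraction ratio, conventionally 0.2
        """
        s = 25400.0 / cn - 254.0      # potential maximum retention, mm
        ia = ia_ratio * s             # initial abstraction, mm
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    print(f"Q = {scs_cn_runoff(p_mm=75.0, cn=80.0):.1f} mm")
    ```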

  7. Highway Safety Program Manual: Volume 11: Emergency Medical Services.

    ERIC Educational Resources Information Center

    National Highway Traffic Safety Administration (DOT), Washington, DC.

    Volume 11 of the 19-volume Highway Safety Program Manual (which provides guidance to State and local governments on preferred highway safety practices) concentrates on emergency medical services. The purpose of the program, Federal authority in the area of medical services, and policies related to an emergency medical services (EMS) program are…

  8. Cone-Beam Computed Tomography Analysis of the Nasopharyngeal Airway in Nonsyndromic Cleft Lip and Palate Subjects.

    PubMed

    Al-Fahdawi, Mahmood Abd; Farid, Mary Medhat; El-Fotouh, Mona Abou; El-Kassaby, Marwa Abdelwahab

    2017-03-01

      To assess the nasopharyngeal airway volume, cross-sectional area, and depth in previously repaired nonsyndromic unilateral cleft lip and palate versus bilateral cleft lip and palate patients compared with noncleft controls using cone-beam computed tomography with the ultimate goal of finding whether cleft lip and palate patients are more liable to nasopharyngeal airway obstruction.   A retrospective analysis comparing bilateral cleft lip and palate, unilateral cleft lip and palate, and control subjects. Significance at P ≤ .05.   Cleft Care Center and the outpatient clinic that are both affiliated with our faculty.   Cone-beam computed tomography data were selected of 58 individuals aged 9 to 12 years: 14 with bilateral cleft lip and palate and 20 with unilateral cleft lip and palate as well as 24 age- and gender-matched noncleft controls.   Volume, depth, and cross-sectional area of nasopharyngeal airway were measured.   Patients with bilateral cleft lip and palate showed significantly larger nasopharyngeal airway volume than controls and patients with unilateral cleft lip and palate (P < .001). Patients with bilateral cleft lip and palate showed significantly larger cross-sectional area than those with unilateral cleft lip and palate (P < .001) and insignificant cross-sectional area compared with controls (P > .05). Patients with bilateral cleft lip and palate showed significantly larger depth than controls and those with unilateral cleft lip and palate (P < .001). Patients with unilateral cleft lip and palate showed insignificant nasopharyngeal airway volume, cross-sectional area, and depth compared with controls (P > .05).   Unilateral and bilateral cleft lip and palate patients did not show significantly less volume, cross-sectional area, or depth of nasopharyngeal airway than controls. From the results of this study we conclude that unilateral and bilateral cleft lip and palate patients at the studied age and stage of repaired clefts are not more prone to nasopharyngeal airway obstruction than controls.

  9. Numerical modeling of the effects of a free surface on the operating characteristics of Marine Hydrokinetic Turbines

    NASA Astrophysics Data System (ADS)

    Adamski, Samantha; Aliseda, Alberto

    2012-11-01

    Marine Hydrokinetic (MHK) turbines are a growing area of research in the renewable energy field because tidal currents are a highly predictable clean energy source. The presence of a free surface may influence the flow around the turbine and in the wake, critically affecting turbine performance and environmental effects through modification of wake physical variables. The characteristic Froude number that controls these processes is still a matter of controversy, with the channel depth, the turbine depth, the blade-tip depth, and the turbine diameter all used in the literature as candidate length scales. We use the Volume of Fluid method to track the free-surface dynamics in a RANS simulation with a BEMT model of the turbine to understand the physics of the wake-free-surface interactions. Pressure and flow-rate boundary conditions for the channel's inlet, outlet, and air side have been tested in an effort to determine the optimum set of simulation conditions for MHK turbines in rivers or estuaries. Stability and accuracy in terms of power extraction and kinetic and potential energy budgets are considered. The goal of this research is to determine, quantitatively in non-dimensional parameter space, the limit between negligible and significant free-surface effects on MHK turbine analysis. Supported by DOE through the National Northwest Marine Renewable Energy Center.
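    Since the abstract leaves the controlling length scale open, the candidate Froude numbers are easy to compare side by side. The sketch below uses hypothetical channel and turbine dimensions chosen only for illustration; they are not values from this study.

```python
import math

def froude(u: float, length_scale: float, g: float = 9.81) -> float:
    """Froude number Fr = U / sqrt(g * L) for flow speed u and length scale L."""
    return u / math.sqrt(g * length_scale)

u = 2.0  # m/s, hypothetical tidal current speed
candidate_length_scales_m = {   # hypothetical geometry, for illustration only
    "channel depth": 30.0,
    "turbine depth": 15.0,
    "blade-tip depth": 10.0,
    "turbine diameter": 10.0,
}
for name, length in candidate_length_scales_m.items():
    print(f"{name:16s} Fr = {froude(u, length):.3f}")
```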

  10. Predicting knee replacement damage in a simulator machine using a computational model with a consistent wear factor.

    PubMed

    Zhao, Dong; Sakoda, Hideyuki; Sawyer, W Gregory; Banks, Scott A; Fregly, Benjamin J

    2008-02-01

    Wear of ultrahigh molecular weight polyethylene remains a primary factor limiting the longevity of total knee replacements (TKRs). However, wear testing on a simulator machine is time consuming and expensive, making it impractical for iterative design purposes. The objectives of this paper were, first, to evaluate whether a computational model using a wear factor consistent with the TKR material pair can accurately predict TKR damage measured in a simulator machine, and second, to investigate how the choice of surface evolution method (fixed or variable step) and material model (linear or nonlinear) affects the prediction. An iterative computational damage model was constructed for a commercial knee implant in an AMTI simulator machine. The damage model combined a dynamic contact model with a surface evolution model to predict how wear plus creep progressively alter tibial insert geometry over multiple simulations. The computational framework was validated by predicting wear in a cylinder-on-plate system for which an analytical solution was derived. The implant damage model was evaluated for 5 million cycles of simulated gait using damage measurements made on the same implant in an AMTI machine. Using a pin-on-plate wear factor for the same material pair as the implant, the model predicted tibial insert wear volume to within 2% error and damage depths and areas to within 18% and 10% error, respectively. Choice of material model had little influence, while inclusion of surface evolution affected damage depth and area but not wear volume predictions. Surface evolution method was important only during the initial cycles, where a variable step was needed to capture rapid geometry changes due to creep. Overall, our results indicate that accurate TKR damage predictions can be made with a computational model using a constant wear factor obtained from pin-on-plate tests for the same material pair, and furthermore, that the surface evolution method matters only during the initial "break in" period of the simulation.
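    The essence of a constant-wear-factor approach is an Archard-type update in which linear wear scales with contact pressure, sliding distance, and the wear factor. The sketch below is a deliberately reduced version (single contact point, fixed pressure and slip per cycle, no creep or surface evolution, illustrative numbers only), not the paper's iterative damage model.

```python
def archard_wear_depth(pressure_mpa: float, slide_m_per_cycle: float,
                       cycles: int, k_wear: float) -> float:
    """Accumulated linear wear depth (mm) from Archard's law, h = k * p * s.

    k_wear is a wear factor in mm^3/(N*m), pressure is in MPa (N/mm^2), and
    sliding distance is in metres, so each cycle adds k * p * s millimetres.
    Creep and surface (geometry) evolution, which the paper's model includes,
    are ignored here.
    """
    return k_wear * pressure_mpa * slide_m_per_cycle * cycles

# Illustrative, not implant-specific: 10 MPa contact pressure, 10 mm of sliding
# per gait cycle, 5 million cycles, k = 1e-7 mm^3/(N*m).
print(f"wear depth ~ {archard_wear_depth(10.0, 0.01, 5_000_000, 1.0e-7):.3f} mm")
```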

  11. The LIFEspan model of transitional rehabilitative care for youth with disabilities: healthcare professionals' perspectives on service delivery.

    PubMed

    Hamdani, Yani; Proulx, Meghann; Kingsnorth, Shauna; Lindsay, Sally; Maxwell, Joanne; Colantonio, Angela; Macarthur, Colin; Bayley, Mark

    2014-01-01

    LIFEspan is a service delivery model of continuous coordinated care developed and implemented by a cross-organization partnership between a pediatric and an adult rehabilitation hospital. Previous work explored enablers and barriers to establishing the partnership service. This paper examines healthcare professionals' (HCPs') experiences of 'real world' service delivery aimed at supporting transitional rehabilitative care for youth with disabilities. This qualitative study - part of an ongoing mixed method longitudinal study - elicited HCPs' perspectives on their experiences of LIFEspan service delivery through in-depth interviews. Data were categorized into themes of service delivery activities, then interpreted from the lens of a service integration/coordination framework. Five main service delivery themes were identified: 1) addressing youth's transition readiness and capacities; 2) shifting responsibility for healthcare management from parents to youth; 3) determining services based on organizational resources; 4) linking between pediatric and adult rehabilitation services; and, 5) linking with multi-sector services. LIFEspan contributed to service delivery activities that coordinated care for youth and families and integrated inter-hospital services. However, gaps in service integration with primary care, education, social, and community services limited coordinated care to the rehabilitation sector. Recommendations are made to enhance service delivery using a systems/sector-based approach.

  12. Gravity profiles across the Uyaijah Ring structure, Kingdom of Saudi Arabia

    USGS Publications Warehouse

    Gettings, M.E.; Andreasen, G.E.

    1987-01-01

    The resulting structural model, based on profile fits to gravity responses of three-dimensional models and excess-mass calculations, gives a depth estimate to the base of the complex of 4.75 km. The contacts of the complex are inferred to be steeply dipping inward along the southwest margin of the structure. To the north and east, however, the basal contact of the complex dips more gently inward (about 30 degrees). The ring structure appears to be composed of three laccolith-shaped plutons; two are granitic in composition and make up about 85 percent of the volume of the complex, and one is granodioritic and comprises the remaining 15 percent. The source area for the plutons appears to be in the southwest quadrant of the Uyaijah ring structure. A northwest-trending shear zone cuts the northern half of the structure and contains mafic dikes that have a small but identifiable gravity-anomaly response. The structural model agrees with models derived from geological interpretation except that the estimated depth to which the structure extends is decreased considerably by the gravity results.

  13. Remote sensing of submerged aquatic vegetation in lower Chesapeake Bay - A comparison of Landsat MSS to TM imagery

    NASA Technical Reports Server (NTRS)

    Ackleson, S. G.; Klemas, V.

    1987-01-01

    Landsat MSS and TM imagery, obtained simultaneously over Guinea Marsh, VA, was analyzed and compared for its ability to detect submerged aquatic vegetation (SAV). An unsupervised clustering algorithm was applied to each image, where the input classification parameters are defined as functions of apparent sensor noise. Class confidence and accuracy were computed for all water areas by comparing the classified images, pixel-by-pixel, to rasterized SAV distributions derived from color aerial photography. To illustrate the effect of water depth on classification error, areas of depth greater than 1.9 m were masked, and class confidence and accuracy were recalculated. A single-scattering radiative-transfer model is used to illustrate how percent canopy cover and water depth affect the volume reflectance from a water column containing SAV. For a submerged canopy that is morphologically and optically similar to Zostera marina inhabiting Lower Chesapeake Bay, dense canopies may be isolated by masking optically deep water. For less dense canopies, the effect of increasing water depth is to increase the apparent percent crown cover, which may result in classification error.
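    The single-scattering model itself is not reproduced in this record. A commonly used simplification of the depth effect it describes is the two-flow shallow-water reflectance relation R(z) = R_deep + (R_bottom - R_deep) * exp(-2 * Kd * z); the sketch below uses that simplified form with illustrative canopy and water-column values, and is not necessarily the authors' formulation.

```python
import math

def shallow_water_reflectance(depth_m: float, r_bottom: float,
                              r_deep: float, k_d: float) -> float:
    """Apparent reflectance over a bottom (e.g. an SAV canopy) of reflectance
    r_bottom under depth_m of water, where r_deep is the optically deep-water
    reflectance and k_d the diffuse attenuation coefficient (1/m)."""
    return r_deep + (r_bottom - r_deep) * math.exp(-2.0 * k_d * depth_m)

# Illustrative values: canopy reflectance 0.08, deep-water reflectance 0.02,
# Kd = 0.7 1/m, evaluated at 0.5 m and at the 1.9 m masking depth.
for z in (0.5, 1.9):
    print(f"{z} m: R = {shallow_water_reflectance(z, 0.08, 0.02, 0.7):.3f}")
```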

  14. Design and Analysis of A Beacon-Less Routing Protocol for Large Volume Content Dissemination in Vehicular Ad Hoc Networks.

    PubMed

    Hu, Miao; Zhong, Zhangdui; Ni, Minming; Baiocchi, Andrea

    2016-11-01

    Large volume content dissemination is pursued by the growing number of high quality applications for Vehicular Ad hoc NETworks (VANETs), e.g., the live road surveillance service and the video-based overtaking assistant service. For the highly dynamic vehicular network topology, beacon-less routing protocols have been proven to be efficient in achieving a balance between system performance and control overhead. However, to the authors' best knowledge, routing design for large volume content has not been well considered in previous work, and it introduces new challenges, e.g., the enhanced connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large volume content delivery in VANETs. Each vehicle makes the forwarding decision based on the message header information and its current state, including the speed and position information. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing one routing path for LBRP. Simulations show that the proposed LBRP scheme outperforms the traditional dissemination protocols in providing a low end-to-end delay. The analytical model is also shown to match Monte Carlo simulations well in its delay estimates.

  15. Design and Analysis of A Beacon-Less Routing Protocol for Large Volume Content Dissemination in Vehicular Ad Hoc Networks

    PubMed Central

    Hu, Miao; Zhong, Zhangdui; Ni, Minming; Baiocchi, Andrea

    2016-01-01

    Large volume content dissemination is pursued by the growing number of high quality applications for Vehicular Ad hoc NETworks (VANETs), e.g., the live road surveillance service and the video-based overtaking assistant service. For the highly dynamic vehicular network topology, beacon-less routing protocols have been proven to be efficient in achieving a balance between system performance and control overhead. However, to the authors’ best knowledge, routing design for large volume content has not been well considered in previous work, and it introduces new challenges, e.g., the enhanced connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large volume content delivery in VANETs. Each vehicle makes the forwarding decision based on the message header information and its current state, including the speed and position information. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing one routing path for LBRP. Simulations show that the proposed LBRP scheme outperforms the traditional dissemination protocols in providing a low end-to-end delay. The analytical model is also shown to match Monte Carlo simulations well in its delay estimates. PMID:27809285

  16. The ASTARTE Mass Transport Deposits data base - a web-based reference for submarine landslide research around Europe

    NASA Astrophysics Data System (ADS)

    Voelker, D.; De Martini, P. M.; Lastras, G.; Patera, A.; Hunt, J.; Terrinha, P.; Noiva, J.; Gutscher, M. A.; Migeon, S.

    2015-12-01

    EU project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe, Project number: 603839) aims at reaching a higher level of tsunami resilience in the North East Atlantic and Mediterranean (NEAM) region through a combination of field work, experimental work, numerical modeling, and technical development. The project is a cooperative effort of 26 institutes from 16 countries and links together the description of past tsunamigenic events, the characterization of tsunami sources, the calculation of the impact of such events, and the development of adequate resilience strategies (www.astarte.eu). Within ASTARTE, a web-based database on Mass Transport Deposits (MTDs) in the NEAM area is being created that is intended to become the reference source for this kind of research in Europe. The aim is to integrate every existing scientific reference on the topic and to add new entries every 3 months, hosting information and detailed data that are crucial, e.g., for tsunami modeling. A relational database managed by ArcGIS for Desktop 10.3 software has been implemented to allow all partners to collaborate through a common platform for archiving and exchanging data and interpretations, such as MTD typology (slide, slump, debris, turbidite, etc.), geometric characteristics (location, depth, thickness, volume, slope, etc.), as well as age, dating method, and eventual tsunamigenic potential. One of the final goals of the project is the sharing of the archived datasets through a web-based map service that will allow users to visualize, query, analyze, and interpret all datasets. The interactive map service will be hosted by ArcGIS Online and will deploy the cloud capabilities of the portal. Any interested user will be able to access the online GIS resources through any Internet browser or ad hoc applications that run on desktop machines, smartphones, or tablets, and will be able to use the analytical tools, key tasks, and workflows of the service.

  17. On the retrieval of sea ice thickness and snow depth using concurrent laser altimetry and L-band remote sensing data

    NASA Astrophysics Data System (ADS)

    Zhou, Lu; Xu, Shiming; Liu, Jiping; Wang, Bin

    2018-03-01

    The accurate knowledge of sea ice parameters, including sea ice thickness and snow depth over the sea ice cover, is key to both climate studies and data assimilation in operational forecasts. Large-scale active and passive remote sensing is the basis for the estimation of these parameters. In traditional altimetry or in the retrieval of snow depth with passive microwave remote sensing, although the sea ice thickness and the snow depth are closely related, the retrieval of one parameter is usually carried out under assumptions about the other. For example, climatological snow depth data, or snow depth derived from reanalyses, contain large or unconstrained uncertainties, which result in large uncertainty in the derived sea ice thickness and volume. In this study, we explore the potential of combined retrieval of both sea ice thickness and snow depth using concurrent active altimetry and passive microwave remote sensing of the sea ice cover. Specifically, laser altimetry and L-band passive remote sensing data are combined using two forward models: an L-band radiation model and an isostatic relationship based on a buoyancy model. Since laser altimetry usually features much higher spatial resolution than L-band data from the Soil Moisture Ocean Salinity (SMOS) satellite, there is potential covariability between the observed snow freeboard from altimetry and the retrieval target of snow depth on the spatial scale of altimetry samples. Statistically significant correlation is found based on high-resolution observations from Operation IceBridge (OIB), and with a nonlinear fit this covariability is incorporated in the retrieval algorithm. By using fitting parameters derived from large-scale surveys, the retrievability is greatly improved compared with a retrieval that assumes a flat snow cover (i.e., no covariability). Verification with OIB data shows a good match between the observed and the retrieved parameters, including both sea ice thickness and snow depth. With detailed analysis, we show that the error of the retrieval mainly arises from the difference between the modeled and the observed (SMOS) L-band brightness temperature (TB). The narrow swath and the limited coverage of the sea ice cover by altimetry are potential sources of error associated with the modeling of L-band TB and the retrieval. The proposed retrieval methodology can be applied to basin-scale retrieval of sea ice thickness and snow depth, using concurrent passive remote sensing and active laser altimetry from satellites such as ICESat-2 and WCOM.
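    The buoyancy-based forward model mentioned above reduces, in its simplest form, to hydrostatic (isostatic) equilibrium among ice, snow, and displaced sea water. The sketch below converts an altimetry snow freeboard plus a snow-depth estimate into ice thickness under assumed nominal densities, which are not the calibrated values used in the study.

```python
def ice_thickness_from_freeboard(snow_freeboard_m: float, snow_depth_m: float,
                                 rho_w: float = 1024.0, rho_i: float = 917.0,
                                 rho_s: float = 320.0) -> float:
    """Sea ice thickness (m) from isostatic equilibrium.

    snow_freeboard_m is the snow surface height above local sea level (what a
    laser altimeter measures); the ice freeboard is snow_freeboard_m minus
    snow_depth_m. Densities (kg/m^3) are nominal, not study-calibrated.
    """
    ice_freeboard = snow_freeboard_m - snow_depth_m
    return (rho_w * ice_freeboard + rho_s * snow_depth_m) / (rho_w - rho_i)

# Example: 0.45 m snow freeboard with 0.25 m of snow on the ice
print(round(ice_thickness_from_freeboard(0.45, 0.25), 2), "m")  # ~2.66 m
```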

  18. Non-invasive measurement of choroidal volume change and ocular rigidity through automated segmentation of high-speed OCT imaging

    PubMed Central

    Beaton, L.; Mazzaferri, J.; Lalonde, F.; Hidalgo-Aguirre, M.; Descovich, D.; Lesk, M. R.; Costantino, S.

    2015-01-01

    We have developed a novel optical approach to determine pulsatile ocular volume changes using automated segmentation of the choroid, which, together with Dynamic Contour Tonometry (DCT) measurements of intraocular pressure (IOP), allows estimation of the ocular rigidity (OR) coefficient. Spectral Domain Optical Coherence Tomography (OCT) videos were acquired with Enhanced Depth Imaging (EDI) at 7 Hz for ~50 seconds at the fundus. A novel segmentation algorithm based on graph search with an edge-probability weighting scheme was developed to measure choroidal thickness (CT) at each frame. Global ocular volume fluctuations were derived from frame-to-frame CT variations using an approximate eye model. Immediately after imaging, IOP and ocular pulse amplitude (OPA) were measured using DCT. OR was calculated from these peak pressure and volume changes. Our automated segmentation algorithm provides the first non-invasive method for determining ocular volume change due to pulsatile choroidal filling, and for estimating the OR coefficient. Future applications of this method offer an important avenue to understanding the biomechanical basis of ocular pathophysiology. PMID:26137373
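    The rigidity coefficient is commonly obtained from a Friedenwald-type relation between the pulsatile pressure excursion and the pulsatile volume change. The sketch below assumes that formulation and uses illustrative numbers, not the study's measurements; the authors' exact expression may differ.

```python
import math

def ocular_rigidity(iop_min_mmhg: float, iop_max_mmhg: float,
                    pulse_volume_ul: float) -> float:
    """Friedenwald-style ocular rigidity coefficient (1/microlitre):
    K = ln(IOP_max / IOP_min) / dV, with dV the pulsatile volume change."""
    return math.log(iop_max_mmhg / iop_min_mmhg) / pulse_volume_ul

# Illustrative: 2 mmHg ocular pulse amplitude on a 15 mmHg baseline IOP and a
# 5 microlitre pulsatile choroidal volume change from the OCT segmentation.
print(round(ocular_rigidity(15.0, 17.0, 5.0), 3), "per microlitre")  # ~0.025
```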

  19. Volume and intensity of Medicare physicians' services: An overview

    PubMed Central

    Kay, Terrence L.

    1990-01-01

    From 1978 to 1987, Medicare spending for physicians' services increased at annual compound rates of 16 percent, far exceeding increases expected based on inflation and increases in beneficiaries. As a result, Medicare spending for Part B physicians' services has attracted considerable attention. This article contains an overview of expenditure trends for Part B physicians' services, a summary of recent research findings on issues related to volume and intensity of physicians' services, and a discussion of options for controlling volume and intensity. The possible impact of the recently enacted relative-value-based fee schedule on volume and intensity of services is discussed briefly. PMID:10113398

  20. A patient-centered pharmacy services model of HIV patient care in community pharmacy settings: a theoretical and empirical framework.

    PubMed

    Kibicho, Jennifer; Owczarzak, Jill

    2012-01-01

    Reflecting trends in health care delivery, pharmacy practice has shifted from a drug-specific to a patient-centered model of care, aimed at improving the quality of patient care and reducing health care costs. In this article, we outline a theoretical model of patient-centered pharmacy services (PCPS), based on in-depth, qualitative interviews with a purposive sample of 28 pharmacists providing care to HIV-infected patients in specialty, semispecialty, and nonspecialty pharmacy settings. Data analysis was an interactive process informed by pharmacists' interviews and a review of the general literature on patient-centered care, including Medication Therapy Management (MTM) services. Our main finding was that the current models of pharmacy services, including MTM, do not capture the range of pharmacy services provided in excess of mandated drug dispensing services. In this article, we propose a theoretical PCPS model that reflects the actual services pharmacists provide. The model includes five elements: (1) addressing patients as whole, contextualized persons; (2) customizing interventions to unique patient circumstances; (3) empowering patients to take responsibility for their own health care; (4) collaborating with clinical and nonclinical providers to address patient needs; and (5) developing sustained relationships with patients. The overarching goal of PCPS is to empower patients to take responsibility for their own health care and to self-manage their HIV infection. Our findings provide the foundation for future studies regarding how widespread these practices are in diverse community settings, the validity of the proposed PCPS model, the potential for standardizing pharmacist practices, and the feasibility of a PCPS framework to reimburse pharmacists' services.

  1. A minimal cost function method for optimizing the age-Depth relation of deep-sea sediment cores

    NASA Astrophysics Data System (ADS)

    Brüggemann, Wolfgang

    1992-08-01

    The question of an optimal age-depth relation for deep-sea sediment cores has been raised frequently. The data from such cores (e.g., δ18O values) are used to test the astronomical theory of ice ages as established by Milankovitch in 1938. In this work, we use a minimal cost function approach to find simultaneously an optimal age-depth relation and a linear model that optimally links solar insolation or other model input with global ice volume. Thus a general tool for the calibration of deep-sea cores to arbitrary tuning targets is presented. In this inverse-modeling approach, an objective function is minimized that penalizes: (1) the deviation of the data from the theoretical linear model (whose transfer function can be computed analytically for a given age-depth relation) and (2) the violation of a set of plausible assumptions about the model, the data, and the obtained correction of a first-guess age-depth function. These assumptions have been suggested before but are now quantified and incorporated explicitly into the objective function as penalty terms. We formulate an optimization problem that is solved numerically by conjugate gradient type methods. Using this direct approach, we obtain high coherences in the Milankovitch frequency bands (over 90%). Not only the data time series but also the derived correction to a first-guess linear age-depth function (and therefore the sedimentation rate) itself contains significant energy in a broad frequency band around 100 kyr. The use of a sedimentation rate which varies continuously on ice-age time scales results in a shift of energy from 100 kyr in the original data spectrum to 41, 23, and 19 kyr in the spectrum of the corrected data. However, a large proportion of the data variance remains unexplained, particularly in the 100 kyr frequency band, where there is no significant input by orbital forcing. The presented method is applied to a real sediment core and to the SPECMAP stack, and results are compared with those obtained in earlier investigations.
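    The abstract describes the objective function only in general terms: a data-misfit term plus penalty terms, minimized with conjugate-gradient methods. The sketch below reproduces that structure on synthetic data with a toy forcing, a first-guess age equal to depth, and hypothetical penalty weights; it is an illustration of the approach, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
depth = np.linspace(0.0, 10.0, 50)          # core depth (arbitrary units)
data = np.sin(0.8 * depth) + 0.1 * rng.standard_normal(depth.size)  # toy proxy record

def forcing(t):
    """Toy 'insolation' tuning target used in place of a real orbital series."""
    return np.sin(0.8 * t)

def cost(correction, w_misfit=1.0, w_smooth=50.0, w_small=1.0):
    """Penalized misfit: (1) deviation of the re-dated record from the (here
    trivial) linear model of the forcing, (2) roughness of the age-depth
    correction, and (3) the size of the correction itself."""
    age = depth + correction                # corrected age-depth relation
    misfit = np.sum((data - forcing(age)) ** 2)
    rough = np.sum(np.diff(correction) ** 2)
    small = np.sum(correction ** 2)
    return w_misfit * misfit + w_smooth * rough + w_small * small

res = minimize(cost, x0=np.zeros(depth.size), method="CG")
print("optimal cost:", round(res.fun, 3))
```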

  2. Effects of Host-rock Fracturing on Elastic-deformation Source Models of Volcano Deflation.

    PubMed

    Holohan, Eoghan P; Sudhaus, Henriette; Walter, Thomas R; Schöpfer, Martin P J; Walsh, John J

    2017-09-08

    Volcanoes commonly inflate or deflate during episodes of unrest or eruption. Continuum mechanics models that assume linear elastic deformation of the Earth's crust are routinely used to invert the observed ground motions. The source(s) of deformation in such models are generally interpreted in terms of magma bodies or pathways, and thus form a basis for hazard assessment and mitigation. Using discontinuum mechanics models, we show how host-rock fracturing (i.e. non-elastic deformation) during drainage of a magma body can progressively change the shape and depth of an elastic-deformation source. We argue that this effect explains the marked spatio-temporal changes in source model attributes inferred for the March-April 2007 eruption of Piton de la Fournaise volcano, La Reunion. We find that pronounced deflation-related host-rock fracturing can: (1) yield inclined source model geometries for a horizontal magma body; (2) cause significant upward migration of an elastic-deformation source, leading to underestimation of the true magma body depth and potentially to a misinterpretation of ascending magma; and (3) at least partly explain underestimation by elastic-deformation sources of changes in sub-surface magma volume.

  3. Potentiality of a fruit peel (banana peel) toward abatement of fluoride from synthetic and underground water samples collected from fluoride affected villages of Birbhum district

    NASA Astrophysics Data System (ADS)

    Mondal, Naba Kumar; Roy, Arunabha

    2018-06-01

    Contamination of underground water with fluoride (F) is a tremendous health hazard. Excessive F (> 1.5 mg/L) in drinking water can cause both dental and skeletal fluorosis. Fixed-bed column experiments were carried out with operating variables such as different initial F concentrations, bed depths, pH, and flow rates. Results revealed that the breakthrough time and exhaustion time decrease with increasing flow rate, decreasing bed depth, and increasing influent fluoride concentration. The optimized conditions were: initial fluoride concentration of 10 mg/L, flow rate of 3.4 mL/min, bed depth of 3.5, and pH 5. The bed depth service time (BDST) model and the Thomas model were applied to the experimental results. Both models were in good agreement with the experimental data for all the process parameters studied except flow rate, indicating that the models are appropriate for describing removal of F by natural banana peel dust in a fixed-bed design. Moreover, column adsorption was reversible, and regeneration was accomplished by pumping 0.1 M NaOH through the loaded banana peel dust column. On the other hand, field water sample analysis data revealed that 86.5% of fluoride can be removed under these optimized conditions. From the experimental results, it may be inferred that natural banana peel dust is an effective adsorbent for defluoridation of water.
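    For orientation, the bed depth service time (BDST) model referred to above relates the service (breakthrough) time linearly to bed depth. The sketch below evaluates that relation with purely illustrative parameter values, not the fitted constants from this study.

```python
import math

def bdst_service_time(bed_depth_cm: float, c0_mg_l: float, cb_mg_l: float,
                      n0_mg_l: float, u_cm_min: float, ka_l_mg_min: float) -> float:
    """Service time (min) to breakthrough from the BDST model:
    t = (N0 / (C0 * u)) * Z - (1 / (ka * C0)) * ln(C0 / Cb - 1),
    where N0 is the adsorption capacity per unit bed volume (mg/L), u the
    linear flow velocity (cm/min), Z the bed depth (cm), and ka the rate
    constant (L/(mg*min))."""
    return (n0_mg_l / (c0_mg_l * u_cm_min)) * bed_depth_cm \
        - (1.0 / (ka_l_mg_min * c0_mg_l)) * math.log(c0_mg_l / cb_mg_l - 1.0)

# Illustrative values only: 10 mg/L feed, 1.5 mg/L breakthrough limit,
# N0 = 600 mg/L, u = 2 cm/min, ka = 0.01 L/(mg*min), 3.5 cm bed depth.
print(round(bdst_service_time(3.5, 10.0, 1.5, 600.0, 2.0, 0.01), 1), "min")
```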

  4. Training in the Air Force--The Example of Graduate Education.

    ERIC Educational Resources Information Center

    Hanushek, Eric A.

    The study applies models for rational investing in human capital to Air Force decisions on training and particularly decisions about graduate education for officers. Two separate questions are addressed in depth. First, among the possible types of officers (rated-nonrated, reserve-regular, by length of service) who could be sent to school, are…

  5. Water volume and sediment volume and density in Lake Linganore between Boyers Mill Road Bridge and Bens Branch, Frederick County, Maryland, 2012

    USGS Publications Warehouse

    Sekellick, Andrew J.; Banks, William S.L.; Myers, Michael K.

    2013-01-01

    To assist in understanding sediment loadings and the management of water resources, a bathymetric survey was conducted in the part of Lake Linganore between Boyers Mill Road Bridge and Bens Branch in Frederick County, Maryland. The bathymetric survey was performed in January 2012 by the U.S. Geological Survey, in cooperation with the City of Frederick and Frederick County. A separate, but related, field effort to collect 18 sediment cores was conducted in March and April 2012. Depth and location data from the bathymetric survey and location data for the sediment cores were compiled and edited by using geographic information system (GIS) software. A three-dimensional triangulated irregular network (TIN) model of the lake bottom was created to calculate the volume of stored water in the reservoir. Large-scale topographic maps of the valley prior to inundation in 1972 were provided by the Frederick County Division of Utilities and Solid Waste Management and digitized for comparison with current (2012) conditions in order to calculate sediment volume. Cartographic representations of both water depth and sediment accumulation were produced, along with an accuracy assessment for the resulting bathymetric model. Vertical accuracies at the 95-percent confidence level for the collected data, the bathymetric surface model, and the bathymetric contour map were calculated to be 0.64 feet (ft), 1.77 ft, and 2.30 ft, respectively. A dry bulk sediment density was calculated for each of the 18 sediment cores collected during March and April 2012, and used to determine accumulated sediment mass. Water-storage capacity in the study area is 110 acre-feet (acre-ft) at a full-pool elevation 308 ft above the National Geodetic Vertical Datum of 1929, whereas total sediment volume in the study area is 202 acre-ft. These totals indicate a loss of about 65 percent of the original water-storage capacity in the 40 years since dam construction. This corresponds to an average rate of sediment accumulation of 5.1 acre-ft per year since Linganore Creek was impounded. Sediment thicknesses ranged from 0 to 16.7 ft. Sediment densities ranged from 0.38 to 1.08 grams per cubic centimeter, and generally decreased in the downstream direction. The total accumulated-sediment mass was 156,000 metric tons between 1972 and 2012.
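    As a rough consistency check on the reported totals (a back-of-the-envelope sketch using standard unit conversions and a single representative density, rather than the survey's spatially distributed core densities):

```python
ACRE_FT_TO_M3 = 1233.48                    # cubic metres per acre-foot

sediment_volume_m3 = 202 * ACRE_FT_TO_M3   # 202 acre-ft of sediment reported
mean_dry_density_t_m3 = 0.63               # assumed mean within the reported 0.38-1.08 g/cm^3 range
# 1 g/cm^3 equals 1 t/m^3, so mass in metric tons is volume (m^3) * density (t/m^3)
mass_t = sediment_volume_m3 * mean_dry_density_t_m3
print(f"{sediment_volume_m3:,.0f} m^3 -> {mass_t:,.0f} t")  # ~157,000 t, close to the reported 156,000 t
```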

  6. Industrial Sector Technology Use Model (ISTUM): industrial energy use in the United States, 1974-2000. Volume 4. Technology appendix. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-10-01

    Volume IV of the ISTUM documentation gives information on the individual technology specifications, but relates closely with Chapter II of Volume I. The emphasis in that chapter is on providing an overview of where each technology fits into the general-model logic. Volume IV presents the actual cost structure and specification of every technology modeled in ISTUM. The first chapter presents a general overview of the ISTUM technology data base. It includes an explanation of the data base printouts and how the separate-cost building blocks are combined to derive an aggregate-technology cost. The remaining chapters are devoted to documenting the specific-technology cost specifications. Technologies included are: conventional technologies (boiler and non-boiler conventional technologies); fossil-energy technologies (atmospheric fluidized bed combustion, low Btu coal and medium Btu coal gasification); cogeneration (steam, machine drive, and electrolytic service sectors); and solar and geothermal technologies (solar steam, solar space heat, and geothermal steam technologies), and conservation technologies.

  7. "There is such a thing as asking for trouble": taking rapid HIV testing to gay venues is fraught with challenges.

    PubMed

    Prost, Audrey; Chopin, Mathias; McOwan, Alan; Elam, Gillian; Dodds, Julie; Macdonald, Neil; Imrie, John

    2007-06-01

    To explore the feasibility and acceptability of offering rapid HIV testing to men who have sex with men in gay social venues. Qualitative study with in-depth interviews and focus group discussions. Interview transcripts were analysed for recurrent themes. 24 respondents participated in the study. Six gay venue owners, four gay service users and one service provider took part in in-depth interviews. Focus groups were conducted with eight members of a rapid HIV testing clinic staff and five positive gay men. Respondents had strong concerns about confidentiality and privacy, and many felt that HIV testing was "too serious" an event to be undertaken in social venues. Many also voiced concerns about issues relating to post-test support and behaviour, and clinical standards. Venue owners also discussed the potential negative impact of HIV testing on social venues. There are currently substantial barriers to offering rapid HIV tests to men who have sex with men in social venues. Further work to enhance acceptability must consider ways of increasing the confidentiality and professionalism of testing services, designing appropriate pre-discussion and post-discussion protocols, evaluating different models of service delivery, and considering their cost-effectiveness in relation to existing services.

  8. Seismicity patterns during a period of inflation at Sierra Negra volcano, Galápagos Ocean Island Chain

    NASA Astrophysics Data System (ADS)

    Davidge, Lindsey; Ebinger, Cynthia; Ruiz, Mario; Tepp, Gabrielle; Amelung, Falk; Geist, Dennis; Coté, Dustin; Anzieta, Juan

    2017-03-01

    Basaltic shield volcanoes of the western Galápagos islands are among the most rapidly deforming volcanoes worldwide, but little was known of the internal structure and brittle deformation processes accompanying inflation and deflation cycles. A 15-station broadband seismic array was deployed on and surrounding Sierra Negra volcano, Galápagos, from July 2009 through June 2011 to characterize seismic strain patterns during an inter-eruption inflation period and to evaluate single and layered magma chamber models for ocean island volcanoes. We compare precise earthquake locations determined from a 3D velocity model and from a double difference cluster method. Using first-motion of P-arrivals, we determine focal mechanisms for 8 of the largest earthquakes (ML ≤ 1.5) located within the array. Most of the 2382 earthquakes detected by the array occurred beneath the broad (∼9 km-wide) Sierra Negra caldera, at depths from the surface to about 8 km below sea level. Although outside our array, frequent and larger magnitude (ML ≤ 3.4) earthquakes occurred at Alcedo and Fernandina volcanoes, and in a spatial cluster beneath the shallow marine platform between Fernandina and Sierra Negra volcanoes. The time-space relations and focal mechanism solutions from a 4-day period of intense seismicity June 4-9, 2010, along the southeastern flank of Sierra Negra suggest that the upward-migrating earthquake swarm occurred during a small-volume intrusion at depths of 5-8 km subsurface, but there was no detectable signal in InSAR data to further constrain geometry and volume. Focal mechanisms of earthquakes beneath the steep intra-caldera faults and along the ring fault system are reverse and strike-slip. These new seismicity data integrated with tomographic, geodetic, and petrological models indicate a stratified magmatic plumbing system: a shallow sill beneath the large caldera that is supplied by magma from a large-volume, deeper feeding system. The large-amplitude inter-eruption inflation of the shallow sill beneath the Sierra Negra caldera is accompanied by only very small magnitude earthquakes, although historical records indicate that larger magnitude earthquakes (Mw < 6) occur during eruptions, trapdoor faulting episodes without eruptions, and large-volume flank intrusions.

  9. Modelling chemical depletion profiles in regolith

    USGS Publications Warehouse

    Brantley, S.L.; Bandstra, J.; Moore, J.; White, A.F.

    2008-01-01

    Chemical or mineralogical profiles in regolith display reaction fronts that document depletion of leachable elements or minerals. A generalized equation employing lumped parameters was derived to model such ubiquitously observed patterns: $C = \dfrac{C_0}{\frac{C_0 - C_{x=0}}{C_{x=0}}\,\exp\!\left(\lambda_{\mathrm{ini}}\,\bar{k}\,x\right) + 1}$. Here C, C_{x=0}, and C_0 are the concentrations of an element at a given depth x, at the top of the reaction front, and in the parent, respectively; λ_ini is the roughness of the dissolving mineral in the parent and k̄ is a lumped kinetic parameter. This kinetic parameter is an inverse function of the pore-fluid advective velocity and a direct function of the dissolution rate constant times mineral surface area per unit volume of regolith. This model equation fits profiles of concentration versus depth for albite in seven weathering systems and is consistent with the interpretation that the surface area (m² mineral per m³ bulk regolith) varies linearly with the concentration of the dissolving mineral across the front. Dissolution rate constants can be calculated from the lumped fit parameters for these profiles using observed values of weathering advance rate, the proton driving force, the geometric surface area per unit volume of regolith, and the parent concentration of albite. These calculated values of the dissolution rate constant compare favorably to literature values. The model equation, useful for reaction fronts in both steady-state erosional and quasi-stationary non-erosional systems, incorporates the variation of reaction affinity using pH as a master variable. Use of this model equation to fit depletion fronts for soils highlights the importance of buffering of pH in the soil system. Furthermore, the equation should allow better understanding of the effects of important environmental variables on weathering rates.
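    In practice, the lumped parameters are obtained by fitting this sigmoidal form to measured concentration-depth data. The sketch below does so on synthetic data with scipy's curve_fit, using parameter names matching the equation above; the values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def depletion_profile(x, c0, c_top, lam_k):
    """C(x) = C0 / ( ((C0 - C_top)/C_top) * exp(lam_k * x) + 1 ), where lam_k
    lumps the roughness and the kinetic parameter (negative here so that C
    rises from the depleted value at the top of the front toward the parent
    value at depth)."""
    return c0 / (((c0 - c_top) / c_top) * np.exp(lam_k * x) + 1.0)

# Synthetic albite profile: parent concentration 0.30, nearly depleted at the top
depth = np.linspace(0.0, 10.0, 30)                  # distance below the front top (m)
true = depletion_profile(depth, 0.30, 0.02, -0.9)
obs = true + 0.005 * np.random.default_rng(1).standard_normal(depth.size)

popt, _ = curve_fit(depletion_profile, depth, obs, p0=(0.3, 0.05, -0.5))
print("C0, C_top, lumped lambda*k =", np.round(popt, 3))
```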

  10. A Review of Methods Applied by the U.S. Geological Survey in the Assessment of Identified Geothermal Resources

    USGS Publications Warehouse

    Williams, Colin F.; Reed, Marshall J.; Mariner, Robert H.

    2008-01-01

    The U. S. Geological Survey (USGS) is conducting an updated assessment of geothermal resources in the United States. The primary method applied in assessments of identified geothermal systems by the USGS and other organizations is the volume method, in which the recoverable heat is estimated from the thermal energy available in a reservoir. An important focus in the assessment project is on the development of geothermal resource models consistent with the production histories and observed characteristics of exploited geothermal fields. The new assessment will incorporate some changes in the models for temperature and depth ranges for electric power production, preferred chemical geothermometers for estimates of reservoir temperatures, estimates of reservoir volumes, and geothermal energy recovery factors. Monte Carlo simulations are used to characterize uncertainties in the estimates of electric power generation. These new models for the recovery of heat from heterogeneous, fractured reservoirs provide a physically realistic basis for evaluating the production potential of natural geothermal reservoirs.
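    The volume method amounts to estimating the heat in place and applying a recovery factor, with Monte Carlo sampling to propagate the uncertainties. The sketch below follows that outline using illustrative parameter ranges and an assumed conversion efficiency and project life; these are not the USGS assessment inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative reservoir parameter ranges (not USGS assessment values)
volume_km3 = rng.uniform(2.0, 6.0, n)       # reservoir volume
temp_c = rng.uniform(200.0, 260.0, n)       # reservoir temperature
t_ref_c = 15.0                              # reference (rejection) temperature
rho_c = 2.7e6                               # volumetric heat capacity, J/(m^3*K)
recovery = rng.uniform(0.08, 0.20, n)       # recovery factor

heat_in_place_j = volume_km3 * 1e9 * rho_c * (temp_c - t_ref_c)
recoverable_j = recovery * heat_in_place_j

# Convert recoverable heat to electric power over an assumed 30-year project
# life at an assumed 12% thermal-to-electric conversion efficiency.
mwe = 0.12 * recoverable_j / (30 * 365.25 * 24 * 3600) / 1e6
print("electric power, MWe (P10/P50/P90):",
      np.round(np.percentile(mwe, [10, 50, 90]), 0))
```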

  11. Drainage of Southeast Greenland firn aquifer water through crevasses to the bed

    NASA Astrophysics Data System (ADS)

    Poinar, Kristin; Joughin, Ian; Lilien, David; Brucker, Ludovic; Kehrl, Laura; Nowicki, Sophie

    2017-02-01

    A firn aquifer in the Helheim Glacier catchment of Southeast Greenland lies directly upstream of a crevasse field. Previous measurements show that a 3.5-km long segment of the aquifer lost a large volume of water (26,000 - 65,000 m2 in cross section) between spring 2012 and spring 2013, compared to annual meltwater accumulation of 6000 - 15,000 m2. The water is thought to have entered the crevasses, but whether the water reached the bed or refroze within the ice sheet is unknown. We used a thermo-visco-elastic model for crevasse propagation to calculate the depths and volumes of these water-filled crevasses. We compared our model output to data from the Airborne Topographic Mapper (ATM), which reveals the near-surface geometry of specific crevasses, and WorldView images, which capture the surface expressions of crevasses across our 1.5-km study area. We found a best fit with a shear modulus between 0.2 and 1.5 GPa within our study area. We show that surface meltwater can drive crevasses to the top surface of the firn aquifer (~20 m depth), whereupon it receives water at rates corresponding to the water flux through the aquifer. Our model shows that crevasses receiving firn-aquifer water hydrofracture through to the bed, 1000 m below, in 10-40 days. Englacial refreezing of firn-aquifer water raises the average local ice temperature by 4°C over a ten-year period, which enhances deformational ice motion by 50 m/yr, compared to the observed surface velocity of 200 m/yr. The effect of the basal water on the sliding velocity remains unknown. Were the firn aquifer not present to concentrate surface meltwater into crevasses, we find that no surface melt would reach the bed; instead, it would refreeze annually in crevasses at depths <500 m. The crevasse field downstream of the firn aquifer likely allows a large fraction of the aquifer water in our study area to reach the bed. Thus, future studies should consider the aquifer and crevasses as part of a common system. This system may uniquely affect ice-sheet dynamics by routing a large volume of water to the bed outside of the typical runoff period.

  12. Evaluation of models for predicting (total) creep of prestressed concrete mixtures.

    DOT National Transportation Integrated Search

    2001-01-01

    Concrete experiences volume changes throughout its service life. When loaded, concrete experiences an instantaneous recoverable elastic deformation and a slow inelastic deformation called creep. Creep of concrete is composed of two components, basic ...

  13. Strategy Planning Visualization Tool (SPVT) for the Air Operations Center (AOC). Volume 2: Information Operations (IO) Planning Enhancements

    DTIC Science & Technology

    2009-12-31

    Status and Assessment data interfaces leverage the TBONE Services and data model. The services and supporting Java 2 Platform Enterprise Edition (J2EE...existing Java ™ and .Net developed “Fat Clients.” The IOPC-X design includes an Open Services Gateway Initiative (OSGi) compliant plug-in...J2EE Java 2 Platform Enterprise Edition JAOP Joint Air Operations Plan JAST JAOP AOD Status Tool JFACC Joint Forces Air Component Commander Data

  14. Estimating Implementation and Operational Costs of an Integrated Tiered CD4 Service including Laboratory and Point of Care Testing in a Remote Health District in South Africa

    PubMed Central

    Cassim, Naseem; Coetzee, Lindi M.; Schnippel, Kathryn; Glencross, Deborah K.

    2014-01-01

    Background An integrated tiered service delivery model (ITSDM) has been proposed to provide ‘full-coverage’ of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and number of referring health-facilities. These include: (1) Tier-1/decentralized point-of-care service (POC) in a single site; Tier-2/POC-hub servicing and processing <30–40 samples from 8–10 health-clinics; Tier-3/Community laboratories servicing ∼50 health-clinics, processing <150 samples/day; high-volume centralized laboratories (Tier-4 and Tier-5) processing <300 or >600 samples/day and serving >100 or >200 health-clinics, respectively. The objective of this study was to establish costs of existing and ITSDM-tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Methods Historical health-facility workload volumes from the Pixley-ka-Seme district, and the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, linked to locations of all referring clinics and related laboratory-to-result turn-around time (LTR-TAT) data, were extracted from the NHLS Corporate-Data-Warehouse for the period April-2012 to March-2013. Tiers were costed separately (as a cost-per-result) including equipment, staffing, reagents and test consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes and personnel time. Results The lowest cost-per-result was noted for the existing laboratory-based Tiers-4 and 5 ($6.24 and $5.37 respectively), but with related increased LTR-TAT of >24–48 hours. Full service coverage with TAT <6-hours could be achieved with placement of twenty-seven Tier-1/POC or eight Tier-2/POC-hubs, at a cost-per-result of $32.32 and $15.88 respectively. A single district Tier-3 laboratory also ensured ‘full service coverage’ and <24 hour LTR-TAT for the district at $7.42 per-test. Conclusion Implementing a single Tier-3/community laboratory to extend and improve delivery of services in Pixley-ka-Seme, with an estimated local ∼12–24-hour LTR-TAT, is ∼$2 more than existing referred services per-test, but 2–4 fold cheaper than implementing eight Tier-2/POC-hubs or providing twenty-seven Tier-1/POCT CD4 services. PMID:25517412
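    The tier costing reduces to an annualized cost-per-result calculation. The sketch below shows that arithmetic with hypothetical cost components; the equipment price, lifespan, staffing, reagent cost, and test volume are invented, not the study's figures.

```python
def cost_per_result(equipment_usd: float, lifespan_yr: float, staff_usd_per_yr: float,
                    reagent_usd_per_test: float, tests_per_yr: float) -> float:
    """Cost per CD4 result: straight-line annualized equipment cost plus staff,
    spread over the annual test volume, plus the per-test reagent/consumable cost."""
    annual_fixed = equipment_usd / lifespan_yr + staff_usd_per_yr
    return annual_fixed / tests_per_yr + reagent_usd_per_test

# Hypothetical community laboratory: USD 40,000 analyser over 5 years,
# USD 15,000/yr staffing, USD 4.00 per test in reagents, 30,000 tests/yr.
print(round(cost_per_result(40_000, 5, 15_000, 4.0, 30_000), 2))  # ~4.77 USD per result
```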

  15. The value of volume and growth measurements in timber sales management of the National Forests

    NASA Technical Reports Server (NTRS)

    Lietzke, K. R.

    1977-01-01

    This paper summarizes work performed in the estimation of gross social value of timber volume and growth rate information used in making regional harvest decisions in the National Forest System. A model was developed to permit parametric analysis. The problem is formulated as one of finding optimal inventory holding patterns. Public timber management differs from other inventory holding problems in that the inventory, itself, generates value over time in providing recreational, aesthetic and environmental goods. 'Nontimber' demand estimates are inferred from past Forest Service harvest and sales levels. The solution requires a description of the harvest rates which maintain the optimum inventory level. Gross benefits of the Landsat systems are estimated by comparison with Forest Service information gathering models. Gross annual benefits are estimated to be $5.9 million for the MSS system and $7.2 million for the TM system.

  16. A framework for identifying plant species to be used as 'ecological engineers' for fixing soil on unstable slopes.

    PubMed

    Ghestem, Murielle; Cao, Kunfang; Ma, Wenzhang; Rowe, Nick; Leclerc, Raphaëlle; Gadenne, Clément; Stokes, Alexia

    2014-01-01

    Major reforestation programs have been initiated on hillsides prone to erosion and landslides in China, but no framework exists to guide managers in the choice of plant species. We developed such a framework based on the suitability of given plant traits for fixing soil on steep slopes in western Yunnan, China. We examined the utility of 55 native and exotic species with regard to the services they provided. We then chose nine species differing in life form. Plant root system architecture, root mechanical and physiological traits were then measured at two adjacent field sites. One site was highly unstable, with severe soil slippage and erosion. The second site had been replanted 8 years previously and appeared to be physically stable. How root traits differed between sites, season, depth in soil and distance from the plant stem were determined. Root system morphology was analysed by considering architectural traits (root angle, depth, diameter and volume) both up- and downslope. Significant differences between all factors were found, depending on species. We estimated the most useful architectural and mechanical traits for physically fixing soil in place. We then combined these results with those concerning root physiological traits, which were used as a proxy for root metabolic activity. Scores were assigned to each species based on traits. No one species possessed a suite of highly desirable traits, therefore mixtures of species should be used on vulnerable slopes. We also propose a conceptual model describing how to position plants on an unstable site, based on root system traits.

  17. A Framework for Identifying Plant Species to Be Used as ‘Ecological Engineers’ for Fixing Soil on Unstable Slopes

    PubMed Central

    Ghestem, Murielle; Cao, Kunfang; Ma, Wenzhang; Rowe, Nick; Leclerc, Raphaëlle; Gadenne, Clément; Stokes, Alexia

    2014-01-01

    Major reforestation programs have been initiated on hillsides prone to erosion and landslides in China, but no framework exists to guide managers in the choice of plant species. We developed such a framework based on the suitability of given plant traits for fixing soil on steep slopes in western Yunnan, China. We examined the utility of 55 native and exotic species with regard to the services they provided. We then chose nine species differing in life form. Plant root system architecture, root mechanical and physiological traits were then measured at two adjacent field sites. One site was highly unstable, with severe soil slippage and erosion. The second site had been replanted 8 years previously and appeared to be physically stable. How root traits differed between sites, season, depth in soil and distance from the plant stem were determined. Root system morphology was analysed by considering architectural traits (root angle, depth, diameter and volume) both up- and downslope. Significant differences between all factors were found, depending on species. We estimated the most useful architectural and mechanical traits for physically fixing soil in place. We then combined these results with those concerning root physiological traits, which were used as a proxy for root metabolic activity. Scores were assigned to each species based on traits. No one species possessed a suite of highly desirable traits, therefore mixtures of species should be used on vulnerable slopes. We also propose a conceptual model describing how to position plants on an unstable site, based on root system traits. PMID:25105571

  18. Magmatic densities control erupted volumes in Icelandic volcanic systems

    NASA Astrophysics Data System (ADS)

    Hartley, Margaret; Maclennan, John

    2018-04-01

    Magmatic density and viscosity exert fundamental controls on the eruptibility of magmas. In this study, we investigate the extent to which magmatic physical properties control the eruptibility of magmas from Iceland's Northern Volcanic Zone (NVZ). By studying subaerial flows of known age and volume, we are able to directly relate erupted volumes to magmatic physical properties, a task that has been near-impossible when dealing with submarine samples dredged from mid-ocean ridges. We find a strong correlation between magmatic density and observed erupted volumes on the NVZ. Over 85% of the total volume of erupted material lies close to a density and viscosity minimum that corresponds to the composition of basalts at the arrival of plagioclase on the liquidus. These magmas are buoyant with respect to the Icelandic upper crust. However, a number of small-volume eruptions with densities greater than typical Icelandic upper crust are also found in Iceland's neovolcanic zones. We use a simple numerical model to demonstrate that the eruption of magmas with higher densities and viscosities is facilitated by the generation of overpressure in magma chambers in the lower crust and uppermost mantle. This conclusion is in agreement with petrological constraints on the depths of crystallisation under Iceland.
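    A back-of-the-envelope version of the overpressure argument: for a magma denser than the host crust, buoyancy opposes ascent, so eruption requires a chamber overpressure at least as large as the negative buoyancy head integrated over the ascent path. The densities and storage depth below are illustrative, not outputs of the study's numerical model.

```python
g = 9.81            # m/s^2
rho_crust = 2700.0  # kg/m^3, illustrative upper-crustal density
rho_magma = 2750.0  # kg/m^3, a magma slightly denser than the crust
ascent_m = 3000.0   # m, illustrative depth of the storage region

# Minimum chamber overpressure (Pa) needed to push the dense magma to the surface
overpressure_pa = (rho_magma - rho_crust) * g * ascent_m
print(f"required overpressure ~ {overpressure_pa / 1e6:.1f} MPa")  # ~1.5 MPa
```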

  19. Unmanned Aviation Systems Models of the Radio Communications Links: Study Results - Appendices Annex 2. Volume 1 and Volume 2

    NASA Technical Reports Server (NTRS)

    Birr, Richard B.; Spencer, Roy; Murray, Jennifer; Lash, Andrew

    2013-01-01

    This report describes the analysis of communications between the Control Station and an Unmanned Aircraft (UA) flying in the National Airspace System (NAS). This work is based on the RTCA SC-203 Operational Services and Environment Description (OSED). The OSED document seeks to characterize the highly different attributes of all UAs navigating the airspace and define their relationship to airspace users, air traffic services, and operating environments of the NAS. One goal of this report is to lead to the development of Minimum Aviation System Performance Standards for Control and Communications. This report takes the nine scenarios found in the OSED and analyzes the communication links.

  20. Peace Corps Stateside Teacher Training for Volunteers in Liberia. Volume V: Cross-Cultural Training and Support Services. Final Report.

    ERIC Educational Resources Information Center

    PSI Associates, Inc., Washington, DC.

    The cross-cultural training module and support services for Peace Corps volunteers en route to Liberia make trainees more aware of and sensitive to cultural differences in human behavior and human interaction. In this part of the Peace Corps Stateside Teacher Training Model, the approach to training is both generic and specific, and both native…

  1. New Directions in U.S. National Security Strategy, Defense Plans, and Diplomacy -- A Review of Official Strategic Documents

    DTIC Science & Technology

    2011-07-01

    demand capabilities, a force-generation model that provides sufficient strategic depth, and a comprehensive study on the future balance between Active...career, and use of bonuses and credits to reward critical specialties and outstanding performance. They also include a continuum-of-service model that...development projects (for instance, the F–22) typically try to produce major leaps in technology and performance in a single step. A better model, it

  2. SU-C-304-06: Determination of Intermediate Correction Factors for Three Dosimeters in Small Composite Photon Fields Used in Robotic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christiansen, E; Belec, J; Vandervoort, E

    2015-06-15

    Purpose: To calculate using Monte-Carlo the intermediate and total correction factors (CFs) for two microchambers and a plastic scintillator for composite fields delivered by the CyberKnife system. Methods: A linac model was created in BEAMnrc by matching percentage depth dose (PDD) curves and output factors (OFs) measured using an A16 microchamber with Monte Carlo calculations performed in egs-chamber to explicitly model detector response. Intermediate CFs were determined for the A16 and A26 microchambers and the W1 plastic scintillator in fourteen different composite fields inside a solid water phantom. Seven of these fields used a 5 mm diameter collimator; the remaining fields employed a 7.5 mm collimator but were otherwise identical to the first seven. Intermediate CFs are reported relative to the respective CF for a 60 mm collimator (800 mm source to detector distance and 100 mm depth in water). Results: For microchambers in composite fields, the intermediate CFs that account for detector density and volume were the largest contributors to total CFs. The total CFs for the A26 were larger than those for the A16, especially for the 5 mm cone (1.227±0.003 to 1.144±0.004 versus 1.142±0.003 to 1.099±0.004), due to the A26’s larger active volume (0.015 cc) relative to the A16 (0.007 cc), despite the A26 using similar wall and electrode material. The W1 total and intermediate CFs are closer to unity, due to its smaller active volume and near water-equivalent composition; however, 3–4% detector volume corrections are required for 5 mm collimator fields. In fields using the 7.5 mm collimator, the correction is nearly eliminated for the W1 except for a non-isocentric field. Conclusion: Large and variable CFs are required for microchambers in small composite fields primarily due to density and volume effects. Corrections are reduced but not eliminated for a plastic scintillator in the same fields.

  3. Analysis of laser surgery in non-melanoma skin cancer for optimal tissue removal

    NASA Astrophysics Data System (ADS)

    Fanjul-Vélez, Félix; Salas-García, Irene; Arce-Diego, José Luis

    2015-02-01

    Laser surgery is a commonly used technique for tissue ablation or the resection of malignant tumors. It presents advantages over conventional non-optical ablation techniques, like a scalpel or electrosurgery, such as the increased precision of the resected volume, minimization of scars and shorter recovery periods. Laser surgery is employed in medical branches such as ophthalmology or dermatology. The application of laser surgery requires the optimal adjustment of laser beam parameters, taking into account the particular patient and lesion. In this work we present a predictive tool for tissue resection in biological tissue after laser surgery, which allows an a priori knowledge of the tissue ablation volume, area and depth. The model employs a Monte Carlo 3D approach for optical propagation and a rate equation for plasma-induced ablation. The tool takes into account characteristics of the specific lesion to be ablated, mainly the geometric, optical and ablation properties. It also considers the parameters of the laser beam, such as the radius, spatial profile, pulse width, total delivered energy or wavelength. The predictive tool is applied to dermatology tumor resection, particularly to different types of non-melanoma skin cancer tumors: basocellular carcinoma, squamous cell carcinoma and infiltrative carcinoma. The ablation volume, area and depth are calculated for healthy skin and for each type of tumor as a function of the laser beam parameters. The tool could be used for laser surgery planning before the clinical application. The laser parameters could be adjusted for optimal resection volume, by personalizing the process to the particular patient and lesion.
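    The paper's tool couples 3D Monte Carlo light transport with a rate equation for plasma-induced ablation; that model is not reproduced here. As a much simpler point of reference, the sketch below gives a Beer-Lambert 'blow-off' estimate of ablation depth per pulse, with illustrative skin-like parameters; it is explicitly not the predictive model described in the abstract.

```python
import math

def blowoff_ablation_depth_um(fluence_j_cm2: float, threshold_j_cm2: float,
                              mu_a_per_cm: float) -> float:
    """Ablation depth per pulse (micrometres) from the blow-off model:
    d = (1 / mu_a) * ln(F / F_th), valid only for fluences above threshold."""
    if fluence_j_cm2 <= threshold_j_cm2:
        return 0.0
    return 1e4 / mu_a_per_cm * math.log(fluence_j_cm2 / threshold_j_cm2)

# Illustrative parameters: absorption coefficient 200 1/cm, threshold 1 J/cm^2
for fluence in (0.5, 2.0, 5.0):
    depth = blowoff_ablation_depth_um(fluence, 1.0, 200.0)
    print(f"{fluence} J/cm^2 -> {depth:.1f} um per pulse")
```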

  4. Customer premises services market demand assessment 1980 - 2000. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Gamble, R. B.; Saporta, L.; Heidenrich, G. A.

    1983-01-01

    Estimates of market demand for domestic civilian telecommunications services for the years 1980 to 2000 are provided. Overall demand, demand for satellite services, demand for satellite-delivered Customer Premises Service (CPS), and demand for 30/20 GHz Customer Premises Services are covered. Emphasis is placed on the CPS market, and demand is segmented by market, by service, by user class and by geographic region. Prices for competing services are discussed and the distribution of traffic with respect to distance is estimated. A nationwide traffic distribution model for CPS, in terms of demand for CPS traffic and earth stations for each of the major SMSAs in the United States, is provided.

  5. A Conceptual Framework for Quality of Care

    PubMed Central

    Mosadeghrad, Ali Mohammad

    2012-01-01

    Despite extensive research on defining and measuring health care quality, little attention has been given to different stakeholders’ perspectives of high-quality health care services. The main purpose of this study was to explore the attributes of quality healthcare in the Iranian context. Exploratory in-depth individual and focus group interviews were conducted with key healthcare stakeholders, including clients, providers, managers, policy makers, payers, suppliers and accreditation panel members, to identify healthcare service quality attributes and dimensions. Data analysis was carried out by content analysis, with the constant comparative method. Over 100 attributes of quality healthcare service were elicited and grouped into five categories. The dimensions were: efficacy, effectiveness, efficiency, empathy, and environment. Consequently, a comprehensive model of service quality was developed for the health care context. The findings of the current study led to a conceptual framework of healthcare quality. This model leads to a better understanding of the different aspects of quality in health care and provides a better basis for defining, measuring and controlling the quality of health care services. PMID:23922534

  6. Consumer awareness, satisfaction, motivation and perceived benefits from using an after-hours GP helpline - A mixed methods study.

    PubMed

    McKenzie, Rosemary

    2016-07-01

    The 'after hours GP helpline' (AGPH) was added to the nurse triage and advice services in Australia in July 2011 with the intention of improving access to general practitioner (GP) advice in the after-hours period. The objectives of the article are to examine consumer awareness, satisfaction, motivation for use and perceived benefits of using the AGPH. A mixed-methods approach used secondary data on population awareness and caller satisfaction, and an in-depth qualitative study of consumers. Awareness of the service was low but satisfaction was high. Users called the service because they did not know what to do, were afraid and/or could not access a health service after hours. Users derived reassurance and increased confidence in managing their health. A conceptual model identifying three experiential domains of dependence, access and health literacy illustrates the relationship between motivation for use and perceived benefits. The model may help to target the service to those who will benefit most.

  7. Study of blood flow sensing with microwave radiometry

    NASA Technical Reports Server (NTRS)

    Porter, R. A.; Wentz, F. J., III

    1973-01-01

    A study and experimental investigation were performed to determine the feasibility of measuring regional blood flow and volume in man by means of microwave radiometry. Regional blood flow was expected to be indicated by measurements of surface and subsurface temperatures with a sensitive radiometer. Following theoretical modeling of biological tissue, to determine the optimum operating frequency for adequate sensing depth, a sensitive microwave radiometer was designed for operation at 793 MHz. A temperature sensitivity of 0.06 K rms was realized in this equipment. Measurements performed on phantom tissue models, consisting of beef fat and lean beefsteak, showed that the radiometer was capable of sensing temperatures from a depth between 3.8 and 5.1 cm. Radiometric and thermodynamic temperature measurements were also performed on the hind thighs of large dogs. These showed that the radiometer could sense subsurface temperatures from a depth of at least 1.3 cm. Delays caused by externally generated RF interference, coupled with the lack of reliable blood flow measurement equipment, prevented correlation of radiometer readings with regional blood flow. For the same reasons, it was not possible to extend the radiometric observations to human subjects.

  8. Effects of Spatial Variability of Soil Properties on the Triggering of Rainfall-Induced Shallow Landslides

    NASA Astrophysics Data System (ADS)

    Fan, Linfeng; Lehmann, Peter; Or, Dani

    2015-04-01

    Naturally occurring spatial variations in soil properties (e.g., soil depth, moisture, and texture) affect key hydrological processes and potentially the mechanical response of soil to hydromechanical loading (relative to the commonly assumed uniform soil mantle). We quantified the effects of soil spatial variability on the triggering of rainfall-induced shallow landslides at the hillslope and catchment scales, using a physically based landslide triggering model that considers interacting soil columns with mechanical strength thresholds (represented by the Fiber Bundle Model). The spatial variations in soil properties are represented as Gaussian random distributions, and the level of variation is characterized by the coefficient of variation and correlation lengths of the soil properties (i.e., soil depth, soil texture and initial water content in this study). The impacts of these spatial variations on landslide triggering characteristics were measured by comparing the times to triggering and the landslide volumes for heterogeneous and homogeneous soil properties. Results at the hillslope scale indicate that, for spatial variations of an individual property (without cross correlation), increasing the coefficient of variation introduces weak spots where mechanical damage is accelerated, leading to earlier onset of landslide triggering and smaller volumes. Increasing the spatial correlation length of soil texture and initial water content also induces earlier landslide triggering and smaller released volumes, due to a transition of the failure mode from brittle to ductile. In contrast, increasing the spatial correlation length of soil depth "reduces" local steepness and postpones landslide triggering. Cross-correlated soil properties generally promote landslide initiation, but depending on the internal structure of the spatial distribution of each soil property, landslide triggering may be reduced. The effects of cross-correlation between initial water content and soil texture were investigated in detail at the catchment scale by incorporating correlations of both variables with topography. The results indicate that the internal structure of the spatial distribution of each soil property, together with their interplays, determines the overall effect of the coupled spatial variability. This study emphasizes the importance of both the randomness and the spatial structure of soil properties for landslide triggering and characteristics.
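
    The load-sharing idea behind the Fiber Bundle Model can be illustrated with a few lines of code. The sketch below is not the coupled hillslope model of the study; it is a plain equal-load-sharing bundle with a hypothetical Gaussian strength distribution, and it only shows how a larger coefficient of variation of the strengths lowers the peak load the bundle can carry, i.e. hastens the onset of failure.

    ```python
    import numpy as np

    def fbm_peak_load(n_fibers, mean_strength, cov, seed=0):
        """Peak external load an equal-load-sharing fiber bundle can sustain.
        When the k weakest fibers have failed, the survivors share the load equally,
        so the bundle holds F as long as thresholds[k] * (n - k) >= F for some k."""
        rng = np.random.default_rng(seed)
        thr = np.sort(rng.normal(mean_strength, cov * mean_strength, n_fibers).clip(min=0.0))
        survivors = n_fibers - np.arange(n_fibers)
        return float((thr * survivors).max())

    # Larger strength variability lowers the normalized peak load -> earlier failure onset.
    for cov in (0.05, 0.2, 0.4):
        print(cov, round(fbm_peak_load(100_000, 1.0, cov) / 100_000, 3))
    ```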

  9. Evaluation of Amount of Blood in Dry Blood Spots: Ring-Disk Electrode Conductometry.

    PubMed

    Kadjo, Akinde F; Stamos, Brian N; Shelor, C Phillip; Berg, Jordan M; Blount, Benjamin C; Dasgupta, Purnendu K

    2016-06-21

    A fixed-area punch in dried blood spot (DBS) analysis is assumed to contain a fixed amount of blood, but the amount actually depends on a number of factors. The presently preferred approach is to normalize the measurement with respect to the sodium level, measured by atomic spectrometry. Instead of sodium levels, we propose electrical conductivity of the extract as an equivalent nondestructive measure. A dip-type, small-diameter ring-disk electrode (RDE) is ideal for very small volumes. However, the conductance (G) measured by an RDE depends on the depth (D) of the liquid below the probe. There is no established way of computing the specific conductance (σ) of the solution from G. Using a COMSOL Multiphysics model, we were able to obtain excellent agreement between the measured and the model-predicted conductance as a function of D. Using simulations over a large range of dimensions, we provide a spreadsheet-based calculator where the RDE dimensions are the input parameters and the procedure determines 99% of the infinite-depth conductance (G99) and the depth D99 at which this is reached. For typical small-diameter probes (outer electrode diameter below about 2 mm), D99 is small enough for dip-type measurements in extract volumes of ∼100 μL. We demonstrate the use of such probes with DBS extracts. In a small group of 12 volunteers (age 20-66), the specific conductance of 100 μL aqueous extracts of 2 μL of spotted blood showed a variance of 17.9%. For a given subject, methanol extracts of DBS spots nominally containing 8 and 4 μL of blood differed by a factor of 1.8-1.9 in the chromatographically determined values of sulfate and chloride (a minor and major constituent, respectively). The values normalized with respect to the conductance of the extracts differed by ∼1%. For serum-associated analytes, normalization of the analyte value by the extract conductance can thus greatly reduce errors from variations in the spotted blood volume per unit area.

  10. SCS-CN parameter determination using rainfall-runoff data in heterogeneous watersheds - the two-CN system approach

    NASA Astrophysics Data System (ADS)

    Soulis, K. X.; Valiantzas, J. D.

    2012-03-01

    The Soil Conservation Service Curve Number (SCS-CN) approach is widely used as a simple method for predicting direct runoff volume for a given rainfall event. The CN parameter values corresponding to various soil, land cover, and land management conditions can be selected from tables, but it is preferable to estimate the CN value from measured rainfall-runoff data if available. However, previous researchers have indicated that the CN values calculated from measured rainfall-runoff data vary systematically with rainfall depth. Hence, they suggested determining a single asymptotic CN value, observed for very high rainfall depths, to characterize a watershed's runoff response. In this paper, we test the hypothesis that the observed correlation between the calculated CN value and the rainfall depth in a watershed reflects the effect of spatial variability in soils and land cover on its hydrologic response. Based on this hypothesis, the simplified concept of a two-CN heterogeneous system is introduced to model the observed CN-rainfall variation by reducing the CN spatial variability to two classes. The CN-rainfall function produced by the simplified two-CN system is derived theoretically, analysed systematically, and found to be similar to the variation observed in natural watersheds. Synthetic data tests, examples from natural watersheds, and a detailed study of two natural experimental watersheds with known spatial heterogeneity characteristics were used to evaluate the method. The results indicate that determining CN values from rainfall-runoff data using the proposed two-CN system approach provides reasonable accuracy and outperforms the previous methods based on a single asymptotic CN value. Although the suggested method increases the number of unknown parameters to three (instead of one), a clear physical reasoning for them is presented.
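
    As a concrete illustration of the two-CN idea, the sketch below applies the standard SCS-CN runoff equation (S = 25400/CN - 254 in mm, Ia = 0.2S) to two area-weighted CN classes and then back-calculates the apparent composite CN at each rainfall depth; the CN values, area weight and λ = 0.2 are illustrative assumptions, not the authors' calibration.

    ```python
    import math

    def scs_cn_runoff(p_mm, cn, lam=0.2):
        """Direct runoff depth (mm) for rainfall depth p_mm and curve number cn (SI form)."""
        s = 25400.0 / cn - 254.0            # potential maximum retention S (mm)
        ia = lam * s                        # initial abstraction Ia = lambda * S
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    def two_cn_runoff(p_mm, cn_a, cn_b, frac_a):
        """Area-weighted runoff of a hypothetical two-CN heterogeneous watershed."""
        return frac_a * scs_cn_runoff(p_mm, cn_a) + (1.0 - frac_a) * scs_cn_runoff(p_mm, cn_b)

    def apparent_cn(p_mm, q_mm, lam=0.2):
        """Composite CN a lumped analysis would infer from an observed (P, Q) pair.
        Solves lam^2*S^2 - (2*lam*P + (1-lam)*Q)*S + P*(P - Q) = 0 for S (smaller root)."""
        a, b, c = lam ** 2, -(2.0 * lam * p_mm + (1.0 - lam) * q_mm), p_mm * (p_mm - q_mm)
        s = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        return 25400.0 / (s + 254.0)

    # The apparent CN drifts downward with rainfall depth for a heterogeneous (CN 60/90) watershed.
    for p in (20.0, 50.0, 100.0, 200.0):
        q = two_cn_runoff(p, 60.0, 90.0, 0.5)
        print(p, round(apparent_cn(p, q), 1))
    ```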

  11. Theoretical distribution of gutta-percha within root canals filled using cold lateral compaction based on numeric calculus.

    PubMed

    Min, Yi; Song, Ying; Gao, Yuan; Dummer, Paul M H

    2016-08-01

    This study aimed to present a new method based on numeric calculus to provide data on the theoretical volume ratio of voids when using the cold lateral compaction technique in canals with various diameters and tapers. Twenty-one simulated mathematical root canal models were created with different tapers and sizes of apical diameter, and were filled with defined sizes of standardized accessory gutta-percha cones. The areas of each master and accessory gutta-percha cone, as well as the depth of their insertion into the canals, were determined mathematically in Microsoft Excel. When the first accessory gutta-percha cone had been positioned, the residual area of void was measured. The areas of the residual voids were then measured repeatedly upon insertion of additional accessory cones until no more could be inserted in the canal. The volume ratio of voids was calculated through measurement of the volume of the root canal and the mass of gutta-percha cones. The theoretical volume ratio of voids was influenced by the taper of the canal, the size of the apical preparation and the size of the accessory gutta-percha cones. A greater apical preparation size and larger taper, together with the use of smaller accessory cones, reduced the volume ratio of voids in the apical third. The mathematical model provided a precise method to determine the theoretical volume ratio of voids in root-filled canals when using cold lateral compaction.

  12. The uk Lidar-sunphotometer operational volcanic ash monitoring network

    NASA Astrophysics Data System (ADS)

    Adam, Mariana; Buxmann, Joelle; Freeman, Nigel; Horseman, Andrew; Salmon, Christopher; Sugier, Jacqueline; Bennett, Richard

    2018-04-01

    The Met Office completed the deployment of ten lidars (UV Raman and depolarization), each accompanied by a sunphotometer (polarized model), to provide quantitative monitoring of volcanic ash over the UK for the London VAAC. The lidars provide range-corrected signal and volume depolarization ratio in near-real time. The sunphotometers deliver aerosol optical depth, Ångstrom exponent and degree of linear polarization. Case-study analyses of Saharan dust events (as a proxy for volcanic ash) are presented.

  13. The impact of domestic rainwater harvesting systems in storm water runoff mitigation at the urban block scale.

    PubMed

    Palla, A; Gnecco, I; La Barbera, P

    2017-04-15

    In the framework of storm water management, Domestic Rainwater Harvesting (DRWH) systems have recently been recognized as source control solutions according to LID principles. In order to assess the impact of these systems on storm water runoff control, a simple methodological approach is proposed. The hydrologic-hydraulic modelling is undertaken using EPA SWMM; the DRWH is implemented in the model as a storage unit linked to the building water supply system and to the drainage network. The proposed methodology has been implemented for a residential urban block located in Genoa (Italy). Continuous simulations are performed using a high-resolution rainfall data series for the 'do nothing' and DRWH scenarios, the latter including the installation of a DRWH system for each building of the urban block. Referring to the test site, the peak and volume reduction rates evaluated for the 2125 rainfall events are equal to 33 and 26 percent on average, respectively (with maximum values of 65 percent for peak and 51 percent for volume). In general, the adopted methodology indicates that the hydrologic performance of a storm water drainage network equipped with DRWH systems is noticeable even for the design storm event (T = 10 years), and the rainfall depth seems to affect the hydrologic performance at least when the total depth exceeds 20 mm. Copyright © 2017 Elsevier Ltd. All rights reserved.
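
    The storage-unit behaviour represented in the SWMM set-up can be approximated, at its simplest, by a per-time-step tank balance with a yield-after-spillage rule. The roof area, tank size, demand and runoff coefficient below are hypothetical placeholders, not the values of the Genoa case study.

    ```python
    def drwh_overflow(rain_mm, roof_area_m2=120.0, tank_m3=3.0,
                      demand_m3=0.15, runoff_coeff=0.9):
        """Yield-after-spillage balance of a domestic rainwater tank, one value per time step.
        Returns the overflow (m3) still discharged to the storm drainage network."""
        storage, overflow = 0.0, []
        for p in rain_mm:
            storage += runoff_coeff * roof_area_m2 * p / 1000.0   # captured roof runoff (m3)
            spill = max(0.0, storage - tank_m3)                   # excess routed to the sewer
            storage -= spill
            storage = max(0.0, storage - demand_m3)               # non-potable indoor use
            overflow.append(spill)
        return overflow

    # Volume reduction rate = 1 - sum(overflow) / sum(uncontrolled roof runoff).
    ```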

  14. Effects of soil spatial variability at the hillslope and catchment scales on characteristics of rainfall-induced landslides

    NASA Astrophysics Data System (ADS)

    Fan, Linfeng; Lehmann, Peter; Or, Dani

    2016-03-01

    Spatial variations in soil properties affect key hydrological processes, yet their role in soil mechanical response to hydro-mechanical loading is rarely considered. This study aims to fill this gap by systematically quantifying effects of spatial variations in soil type and initial water content on rapid rainfall-induced shallow landslide predictions at the hillslope- and catchment-scales. We employed a physically-based landslide triggering model that considers mechanical interactions among soil columns governed by strength thresholds. At the hillslope scale, we found that the emergence of weak regions induced by spatial variations of soil type and initial water content resulted in early triggering of landslides with smaller volumes of released mass relative to a homogeneous slope. At the catchment scale, initial water content was linked to a topographic wetness index, whereas soil type varied deterministically with soil depth considering spatially correlated stochastic components. Results indicate that a strong spatial organization of initial water content delays landslide triggering, whereas spatially linked soil type with soil depth promoted landslide initiation. Increasing the standard deviation and correlation length of the stochastic component of soil type increases landslide volume and hastens onset of landslides. The study illustrates that for similar external boundary conditions and mean soil properties, landslide characteristics vary significantly with soil variability, hence it must be considered for improved landslide model predictions.

  15. Final Technical Report. DeepCwind Consortium Research Program. January 15, 2010 - March 31, 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dagher, Habib; Viselli, Anthony; Goupee, Andrew

    This is the final technical report for the U.S. Department of Energy-funded program, DE-0002981: DeepCwind Consortium Research Program. The project objective was the partial validation of coupled models and optimization of materials for offshore wind structures. The United States has a great opportunity to harness an indigenous abundant renewable energy resource: offshore wind. In 2010, the National Renewable Energy Laboratory (NREL) estimated there to be over 4,000 GW of potential offshore wind energy found within 50 nautical miles of the US coastlines (Musial and Ram, 2010). The US Energy Information Administration reported the total annual US electric energy generation in 2010 was 4,120 billion kilowatt-hours (equivalent to 470 GW) (US EIA, 2011), slightly more than 10% of the potential offshore wind resource. In addition, deep water offshore wind is the dominant US ocean energy resource available comprising 75% of the total assessed ocean energy resource as compared to wave and tidal resources (Musial, 2008). Through these assessments it is clear offshore wind can be a major contributor to US energy supplies. The caveat to capturing offshore wind along many parts of the US coast is deep water. Nearly 60%, or 2,450 GW, of the estimated US offshore wind resource is located in water depths of 60 m or more (Musial and Ram, 2010). At water depths over 60 m building fixed offshore wind turbine foundations, such as those found in Europe, is likely economically infeasible (Musial et al., 2006). Therefore floating wind turbine technology is seen as the best option for extracting a majority of the US offshore wind energy resource. Volume 1 - Test Site; Volume 2 - Coupled Models; and Volume 3 - Composite Materials

  16. Does the 'diffusion of innovations' model enrich understanding of research use? Case studies of the implementation of thrombolysis services for stroke.

    PubMed

    Boaz, Annette; Baeza, Juan; Fraser, Alec

    2016-10-01

    To test whether the model of 'diffusion of innovations' enriches understanding of the implementation of evidence-based thrombolysis services for stroke patients. Four case studies of the implementation of evidence on thrombolysis in stroke services in England and Sweden. Semistructured interviews with 95 staff, including doctors, nurses and managers working in stroke units, emergency medicine, radiology, the ambulance service, community rehabilitation services and commissioners. The implementation of thrombolysis in acute stroke management benefited from a critical mass of the factors featured in the model, including the support of national and local opinion leaders, a strong evidence base and financial incentives. However, while the model provided a starting point as an organizational framework for mapping the critical factors influencing implementation, to understand properly the process of implementation and the importance of the different factors identified, more detailed analyses of context and, in particular, of the human and social dimensions of change were needed. While recognising the usefulness of the model of diffusion of innovations in mapping the processes by which diffusion occurs, the use of methods that lend themselves to in-depth analysis, such as ethnography and the application of relevant bodies of social theory, is needed. © The Author(s) 2016.

  17. Long time series of infrasonic records at open-vent volcanoes (Yasur volcano, Vanuatu, 2003-2014): the remarkable temporal stability of magma viscosity

    NASA Astrophysics Data System (ADS)

    Vergniolle, S.; Souty, V.; Zielinski, C.; Bani, P.; LE Pichon, A.; Lardy, M.; Millier, P.; Herry, P.; Todman, S.; Garaebiti, E.

    2017-12-01

    Open-vent volcanoes, which often present series of Strombolian explosions of varying intensity, respond, although with a delay, to any change in the degassing pattern, providing a quasi-direct route to processes at depth. Open-vent volcanoes display persistent volcanic activity, although of variable intensity. Long time series at open-vent volcanoes could therefore be key measurements for unravelling the physical processes at the origin of Strombolian explosions and be crucial for monitoring. Continuous infrasonic records can be used to estimate the gas volume expelled at the vent during explosions (bursting of a long slug). The gas volume of each explosion is deduced from two successive integrations of the acoustic pressure (monopole source). Here we analysed more than 4 years of infrasonic records at Yasur volcano (Vanuatu), spanning 2003 to 2014 and organised into 8 main quasi-continuous periods. The relationship between the gas volume of each explosion and its associated maximum positive acoustic pressure, a proxy for the inner gas overpressure at bursting, shows a remarkably stable trend over the 8 periods. Two main trends exist: one covers the full range of acoustic pressures (called « strong explosions ») and the second represents explosions with a large gas volume and mild acoustic pressure. The class of « strong explosions » clearly follows the model of Del Bello et al. (2012), which shows that the inner gas overpressure at bursting, here empirically measured by the maximum acoustic pressure, is proportional to the gas volume. Constraints on magma viscosity and conduit radius are deduced from this trend and from the gas volume at the transition from passive to active degassing. The remarkable stability of this trend over time suggests that 1) the magma viscosity is stable at the depth where gas overpressure is produced within the slug and 2) any potential changes in magma viscosity occur very close to the top of the magma column.
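
    The two successive integrations mentioned above follow from the monopole source approximation, in which the excess pressure scales with the second time derivative of the source volume. The sketch below is a bare-bones version of that bookkeeping; the gas density, the 4πr geometric factor (a 2πr factor is sometimes preferred for a source radiating above a rigid surface) and the simple mean-removal detrend are illustrative choices, not the authors' processing chain.

    ```python
    import numpy as np

    def explosion_gas_volume(excess_pressure_pa, dt_s, r_m, rho_gas=1.1):
        """Gas volume released by a Strombolian explosion from an infrasound record,
        using the monopole approximation dp(r, t) = (rho / (4*pi*r)) * d2V/dt2,
        i.e. two successive time integrations of the (detrended) acoustic pressure."""
        p = np.asarray(excess_pressure_pa, dtype=float)
        p = p - p.mean()                                  # crude detrend so the integrals do not drift
        dvdt = np.cumsum(p) * dt_s * 4.0 * np.pi * r_m / rho_gas
        volume = np.cumsum(dvdt) * dt_s
        return float(volume.max())                        # peak slug volume (m3) over the record
    ```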

  18. Modification of Roberts' Theory for Rocket Exhaust Plumes Eroding Lunar Soil

    NASA Technical Reports Server (NTRS)

    Metzger, Philip T.; Lane, John E.; Immer, Christopher D.

    2008-01-01

    Roberts' model of lunar soil erosion beneath a landing rocket has been updated in several ways to predict the effects of future lunar landings. The model predicts, among other things, the number of divots that would result on surrounding hardware due to the impact of high velocity particulates, the amount and depth of surface material removed, the volume of ejected soil, its velocity, and the distance the particles travel on the Moon. The results are compared against measured results from the Apollo program and predictions are made for mitigating the spray around a future lunar outpost.

  19. NIR light propagation in a digital head model for traumatic brain injury (TBI)

    PubMed Central

    Francis, Robert; Khan, Bilal; Alexandrakis, George; Florence, James; MacFarlane, Duncan

    2015-01-01

    Near infrared spectroscopy (NIRS) is capable of detecting and monitoring acute changes in cerebral blood volume and oxygenation associated with traumatic brain injury (TBI). Wavelength selection, source-detector separation, optode density, and detector sensitivity are key design parameters that determine the imaging depth, chromophore separability, and, ultimately, clinical usefulness of a NIRS instrument. We present simulation results of NIR light propagation in a digital head model as it relates to the ability to detect intracranial hematomas and monitor the peri-hematomal tissue viability. These results inform NIRS instrument design specific to TBI diagnosis and monitoring. PMID:26417498

  20. The Relationship Between the Scope of Essential Health Benefits and Statutory Financing: An International Comparison Across Eight European Countries

    PubMed Central

    van der Wees, Philip J.; Wammes, Joost J.G.; Westert, Gert P.; Jeurissen, Patrick P.T.

    2016-01-01

    Background: Both rising healthcare costs and the global financial crisis have fueled a search for policy tools in order to avoid unsustainable future financing of essential health benefits. The scope of essential health benefits (the range of services covered) and depth of coverage (the proportion of costs of the covered benefits that is covered publicly) are corresponding variables in determining the benefits package. We hypothesized that a more comprehensive health benefit package may increase user cost-sharing charges. Methods: We conducted a desktop research study to assess the interrelationship between the scope of covered health benefits and the height of statutory spending in a sample of 8 European countries: Belgium, England, France, Germany, the Netherlands, Scotland, Sweden, and Switzerland. We conducted a targeted literature search to identify characteristics of the healthcare systems in our sample of countries. We analyzed similarities and differences based on the dimensions of publicly financed healthcare as published by the European Observatory on Health Care Systems. Results: We found that the scope of services is comparable and comprehensive across our sample, with only marginal differences. Cost-sharing arrangements show the most variation. In general, we found no direct interrelationship in this sample between the ranges of services covered in the health benefits package and the height of public spending on healthcare. With regard to specific services (dental care, physical therapy), we found indications of an association between coverage of services and cost-sharing arrangements. Strong variations in the volume and price of healthcare services between the 8 countries were found for services with large practice variations. Conclusion: Although reducing the scope of the benefit package as well as increasing user charges may contribute to the financial sustainability of healthcare, variations in the volume and price of care seem to have a much larger impact on financial sustainability. Policy-makers should focus on a variety of measures within an integrated approach. There is no silver bullet for addressing the sustainability of healthcare. PMID:26673645

  1. The Relationship Between the Scope of Essential Health Benefits and Statutory Financing: An International Comparison Across Eight European Countries.

    PubMed

    van der Wees, Philip J; Wammes, Joost J G; Westert, Gert P; Jeurissen, Patrick P T

    2015-09-12

    Both rising healthcare costs and the global financial crisis have fueled a search for policy tools in order to avoid unsustainable future financing of essential health benefits. The scope of essential health benefits (the range of services covered) and depth of coverage (the proportion of costs of the covered benefits that is covered publicly) are corresponding variables in determining the benefits package. We hypothesized that a more comprehensive health benefit package may increase user cost-sharing charges. We conducted a desktop research study to assess the interrelationship between the scope of covered health benefits and the height of statutory spending in a sample of 8 European countries: Belgium, England, France, Germany, the Netherlands, Scotland, Sweden, and Switzerland. We conducted a targeted literature search to identify characteristics of the healthcare systems in our sample of countries. We analyzed similarities and differences based on the dimensions of publicly financed healthcare as published by the European Observatory on Health Care Systems. We found that the scope of services is comparable and comprehensive across our sample, with only marginal differences. Cost-sharing arrangements show the most variation. In general, we found no direct interrelationship in this sample between the ranges of services covered in the health benefits package and the height of public spending on healthcare. With regard to specific services (dental care, physical therapy), we found indications of an association between coverage of services and cost-sharing arrangements. Strong variations in the volume and price of healthcare services between the 8 countries were found for services with large practice variations. Although reducing the scope of the benefit package as well as increasing user charges may contribute to the financial sustainability of healthcare, variations in the volume and price of care seem to have a much larger impact on financial sustainability. Policy-makers should focus on a variety of measures within an integrated approach. There is no silver bullet for addressing the sustainability of healthcare. © 2016 by Kerman University of Medical Sciences.

  2. Study of the water transportation characteristics of marsh saline soil in the Yellow River Delta.

    PubMed

    He, Fuhong; Pan, Yinghua; Tan, Lili; Zhang, Zhenhua; Li, Peng; Liu, Jia; Ji, Shuxin; Qin, Zhaohua; Shao, Hongbo; Song, Xueyan

    2017-01-01

    One-dimensional soil column water infiltration and capillary adsorption water tests were conducted in the laboratory to study the water transportation characteristics of marsh saline soil in the Yellow River Delta, providing a theoretical basis for the improvement, utilization and conservation of marsh saline soil. The results indicated the following: (1) For soils with different vegetation covers, the cumulative infiltration capacity increased with the depth of the soil layers. The initial infiltration rate of soils covered by Suaeda and Tamarix chinensis increased with the depth of the soil layers, but that of bare soil decreased with soil depth. (2) The initial rate of capillary rise of soils with different vegetation covers showed an increasing trend from the surface toward the deeper layers, but this pattern with respect to soil depth was relatively weak. (3) The initial rates of capillary rise were lower than the initial infiltration rates, but the infiltration rate decreased more rapidly than the capillary water adsorption rate. (4) The two-parameter Kostiakov model simulated the changes in the infiltration and capillary rise rates of wetland saline soil very well, and it simulated the capillary rise rate better than the infiltration rate. (5) There were strong linear relationships between cumulative infiltration capacity, wetting front, cumulative capillary adsorbed water volume and capillary height. Copyright © 2016 Elsevier B.V. All rights reserved.
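
    For reference, the two-parameter Kostiakov model used in point (4) is simply a power law in time. The helper below evaluates the cumulative and rate forms and fits the two parameters by log-log regression; the function names and the fitting route are illustrative, not the authors' procedure.

    ```python
    import numpy as np

    def kostiakov_cumulative(t, k, a):
        """Two-parameter Kostiakov model: cumulative infiltration I(t) = k * t**a (t > 0)."""
        return k * np.asarray(t, dtype=float) ** a

    def kostiakov_rate(t, k, a):
        """Infiltration (or capillary-rise) rate: dI/dt = a * k * t**(a - 1)."""
        return a * k * np.asarray(t, dtype=float) ** (a - 1)

    def fit_kostiakov(t, i_cum):
        """Fit k and a from observed cumulative infiltration: log I = log k + a * log t."""
        a, log_k = np.polyfit(np.log(t), np.log(i_cum), 1)
        return np.exp(log_k), a
    ```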

  3. Value-added strategy models to provide quality services in senior health business.

    PubMed

    Yang, Ya-Ting; Lin, Neng-Pai; Su, Shyi; Chen, Ya-Mei; Chang, Yao-Mao; Handa, Yujiro; Khan, Hafsah Arshed Ali; Elsa Hsu, Yi-Hsin

    2017-06-20

    The rapid aging of the population is now a global issue. The increase in the elderly population will impact the health care industry and health enterprises; various senior needs will promote the growth of the senior health industry. Most senior health studies are focused on the demand side and scarcely on supply. Our study selected quality enterprises focused on aging health and analyzed the different strategies they use to provide excellent quality services. We selected 33 quality senior health enterprises in Taiwan and investigated their quality-service strategies through face-to-face semi-structured in-depth interviews with the CEO and managers of each enterprise in 2013. In total, 65 CEOs and managers of the 33 enterprises were interviewed individually; no intervention was involved. The interviews covered core values and vision, organization structure, the quality services provided, and strategies for quality services. The results indicated four types of value-added strategy models adopted by senior enterprises to offer quality services: (i) residential care and co-residence model, (ii) home care and living-in-place model, (iii) community e-business experience model and (iv) virtual and physical portable device model. The common feature of these four strategy models is that the services provided are elderly centered. The models offer virtual and physical integration and total solutions for the elderly and their caregivers. Through this investigation of successful strategy models for providing quality services to seniors, we identified opportunities to develop innovative service models and their successful characteristics; policy implications were also summarized. The observations from this study will serve as a primary evidence base for enterprises developing their senior market, and also for promoting value co-creation through dialogue between customers and those who deliver services. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  4. Nonlinear storage models of unconfined flow through a shallow aquifer on an inclined base and their quasi-steady flow application

    NASA Astrophysics Data System (ADS)

    Varvaris, Ioannis; Gravanis, Elias; Koussis, Antonis; Akylas, Evangelos

    2013-04-01

    Hillslope processes involving flow through an inclined shallow aquifer range from subsurface stormflow to stream base flow (drought flow, or groundwater recession flow). In the case of recharge, the infiltrating water moves vertically as unsaturated flow until it reaches the saturated groundwater, where the flow is approximately parallel to the base of the aquifer. Boussinesq used the Dupuit-Forchheimer (D-F) hydraulic theory to formulate unconfined groundwater flow through a soil layer resting on an impervious inclined bed, deriving a nonlinear equation for the flow rate that consists of a linear gravity-driven component and a quadratic pressure-gradient component. Inserting that flow rate equation into the differential storage balance equation (volume conservation), Boussinesq obtained a nonlinear second-order partial differential equation for the depth. So far, however, only a few special solutions have been advanced for that governing equation. The nonlinearity of the Boussinesq equation is the major obstacle to deriving a general analytical solution for the depth profile of unconfined flow on a sloping base with recharge (from which the discharges could then be determined). Henderson and Wooding (1964) obtained an exact analytical solution for steady unconfined flow on a sloping base with recharge, and their work deserves special note in the realm of solutions of the nonlinear Boussinesq equation. However, the absence of a general solution for the transient case, which is of practical interest to hydrologists, has motivated the development of approximate solutions of the nonlinear Boussinesq equation. In this work, we derive the aquifer storage function by integrating analytically over the aquifer base the depth profiles resulting from the complete nonlinear Boussinesq equation for steady flow. This storage function consists of a linear and a nonlinear outflow-dependent term. We then use this physics-based storage function in the transient storage balance over the hillslope, obtaining analytical solutions for the outflow and the storage, for recharge and drainage, via a quasi-steady flow calculation. The hydraulically derived storage model is thus embedded in a quasi-steady approximation of transient unconfined flow in sloping aquifers. We generalise this hydrologic model of groundwater flow by modifying the storage function to be the weighted sum of the linear and the nonlinear storage terms, determining the weighting factor objectively from a known integral quantity of the flow (either an initial volume of water stored in the aquifer or a drained water volume). We demonstrate the validity of this model through comparisons with experimental data and simulation results.
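
    The quasi-steady bookkeeping described above can be sketched in a few lines: storage is updated from the balance dS/dt = N - Q and the outflow is recovered at each step by inverting a storage-outflow relation. The particular form S(Q) = w*a*Q + (1 - w)*b*Q^2 and all coefficients below are stand-ins for the hydraulically derived storage function of the paper, which comes from integrating the steady Boussinesq depth profiles.

    ```python
    import numpy as np

    def quasi_steady_outflow(recharge, dt, a, b, w=0.5, s0=0.0):
        """March the hillslope storage balance dS/dt = N - Q forward in time, with an
        illustrative storage-outflow relation S(Q) = w*a*Q + (1 - w)*b*Q**2.
        recharge: recharge rate N per step; returns the outflow Q per step."""
        s, q_out = s0, []
        for n in recharge:
            if w >= 1.0 or b == 0.0:
                q = s / (w * a)                          # purely linear storage
            else:
                # positive root of (1-w)*b*Q**2 + w*a*Q - S = 0
                A, B = (1.0 - w) * b, w * a
                q = (-B + np.sqrt(B * B + 4.0 * A * s)) / (2.0 * A)
            s = max(0.0, s + dt * (n - q))               # explicit storage update
            q_out.append(q)
        return np.array(q_out)
    ```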

  5. Locomotive crashworthiness research : volume 5 : cab car crashworthiness report

    DOT National Transportation Integrated Search

    1996-03-01

    Models used to analyze locomotive crashworthiness are modified for application to control cab cars of the types used for intercity and commuter rail passenger service. An existing control cab car is analyzed for crashworthiness based on scenarios dev...

  6. Oxygen transport and pyrite oxidation in unsaturated coal-mine spoil

    USGS Publications Warehouse

    Guo, Weixing; Cravotta, Charles A.

    1996-01-01

    An understanding of the mechanisms of oxygen (O2) transport in unsaturated mine spoil is necessary to design and implement effective measures to exclude O2 from pyritic materials and to control the formation of acidic mine drainage. The partial pressure of oxygen (PO2) in pore gas, the chemistry of pore water, and temperature were measured at different depths in unsaturated spoil at two reclaimed surface coal mines in Pennsylvania. At mine 1, where the spoil was loose, blocky sandstone, PO2 changed little with depth, decreasing from 21 volume percent (vol%) at the ground surface to a minimum of about 18 vol% at 10 m depth. At mine 2, where the spoil was compacted, friable shale, PO2 decreased to less than 2 vol% at a depth of about 10 m. Although pore-water chemistry and temperature data indicate that acid-forming reactions were active at both mines, the pore-gas data indicate that the mechanisms for O2 transport were different at each mine. A numerical model was developed to simulate O2 transport and pyrite oxidation in unsaturated mine spoil. The results of the numerical simulations indicate that the differences in O2 transport at the two mines can be explained by differences in the air permeability of the spoil. PO2 changes little with depth if advective transport of O2 dominates, as at mine 1, but decreases greatly with depth if diffusive transport of O2 dominates, as at mine 2. Model results also indicate that advective transport becomes significant if the air permeability of the spoil is greater than 10-9 m2, which is expected for blocky sandstone spoil. In an advection-dominated system, thermally induced convective air flow, a consequence of the exothermic oxidation of pyrite, supplies the O2 needed to maintain high PO2 within the deep unsaturated zone.
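
    The contrast between the two mines can be illustrated with the textbook diffusion-only end member: steady one-dimensional diffusion with uniform O2 consumption and a no-flux base gives a PO2 profile that drops roughly parabolically with depth, much like mine 2. The effective diffusivity and consumption rate below are illustrative values, not the calibrated parameters of the numerical model.

    ```python
    import numpy as np

    def po2_diffusion_profile(depth_m=10.0, po2_surface=0.21, consumption=2e-8,
                              d_eff=5e-6, nz=101):
        """Steady-state PO2 (volume fraction) vs depth for diffusion-only transport with
        uniform consumption R and a no-flux base:
        D*C'' = R, C(0) = C0, C'(L) = 0  ->  C(z) = C0 - (R/D)*(L*z - z**2/2), floored at zero."""
        z = np.linspace(0.0, depth_m, nz)
        c = po2_surface - (consumption / d_eff) * (depth_m * z - 0.5 * z ** 2)
        return z, np.clip(c, 0.0, None)
    ```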

  7. Influence of variable water depth and turbidity on microalgae production in a shallow estuarine lake system - A modelling study

    NASA Astrophysics Data System (ADS)

    Tirok, Katrin; Scharler, Ursula M.

    2014-06-01

    Strongly varying water levels and turbidities are typical characteristics of the large shallow estuarine lake system of St. Lucia, one of the largest on the African continent. This theoretical study investigated the combined effects of variable water depth and turbidity on seasonal pelagic and benthic microalgae production using a mathematical model, in order to ascertain productivity levels during variable and extreme conditions. Simulated pelagic and benthic net production varied between 0.3 and 180 g C m-2 year-1 and 0 and 220 g C m-2 year-1, respectively, depending on depth, turbidity, and variability in turbidity. Unsurprisingly, production and biomass decreased with increasing turbidity and depth. A high variability in turbidity, i.e. an alternation of calm and windy days, could reduce or enhance the seasonal pelagic and benthic production by more than 30% compared to a low variability. The day-to-day variability in wind-induced turbidity therefore influences production in the long term. On the other hand, varying the water depth within a year did not significantly influence the seasonal production for turbidities representative of Lake St. Lucia. Reduced lake area and volume, as observed during dry periods in Lake St. Lucia, did not reduce the primary production of the entire system, since desiccation resulted in lower water depth and thus increased light availability. This agrees with field observations suggesting little light limitation and high areal microalgal biomass during a period with below-average rainfall (2005-2011). Thus, microalgae potentially fulfil their function in the lake food web even under extreme drought conditions. We believe that these results are of general interest to shallow aquatic ecosystems that are sensitive to drought periods due to either human or natural causes.
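
    A generic sketch of the light-limitation mechanism at play: attenuate surface irradiance with a Beer-Lambert law (Kd set by turbidity), apply a saturating photosynthesis-irradiance curve, and integrate over the water column. The functional forms and parameter names are illustrative, not the formulation of the study's model.

    ```python
    import numpy as np

    def depth_integrated_production(i0, kd, depth_m, p_max, i_k, nz=200):
        """Areal production for a column of given depth: I(z) = I0*exp(-Kd*z) and a
        saturating P-I response P(z) = Pmax*I/(I + Ik), integrated over depth (trapezoid)."""
        z = np.linspace(0.0, depth_m, nz)
        i_z = i0 * np.exp(-kd * z)
        p_z = p_max * i_z / (i_z + i_k)
        dz = z[1] - z[0]
        return float(np.sum(0.5 * (p_z[1:] + p_z[:-1])) * dz)

    # Higher turbidity (larger Kd) lowers areal production; below the photic depth,
    # adding more water depth adds almost nothing.
    ```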

  8. MAC/GMC 4.0 User's Manual: Example Problem Manual. Volume 3

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2002-01-01

    This document is the third volume in the three volume set of User's Manuals for the Micromechanics Analysis Code with Generalized Method of Cells Version 4.0 (MAC/GMC 4.0). Volume 1 is the Theory Manual, Volume 2 is the Keywords Manual, and this document is the Example Problems Manual. MAC/GMC 4.0 is a composite material and laminate analysis software program developed at the NASA Glenn Research Center. It is based on the generalized method of cells (GMC) micromechanics theory, which provides access to the local stress and strain fields in the composite material. This access grants GMC the ability to accommodate arbitrary local models for inelastic material behavior and various types of damage and failure analysis. MAC/GMC 4.0 has been built around GMC to provide the theory with a user-friendly framework, along with a library of local inelastic, damage, and failure models. Further, application of simulated thermo-mechanical loading, generation of output results, and selection of architectures to represent the composite material, have been automated in MAC/GMC 4.0. Finally, classical lamination theory has been implemented within MAC/GMC 4.0 wherein GMC is used to model the composite material response of each ply. Consequently, the full range of GMC composite material capabilities is available for analysis of arbitrary laminate configurations as well. This volume provides in-depth descriptions of 43 example problems, which were specially designed to highlight many of the most important capabilities of the code. The actual input files associated with each example problem are distributed with the MAC/GMC 4.0 software; thus providing the user with a convenient starting point for their own specialized problems of interest.

  9. Effects of topography on the interpretation of the deformation field of prominent volcanoes - Application to Etna

    USGS Publications Warehouse

    Cayol, V.; Cornet, F.H.

    1998-01-01

    We have investigated the effects of topography on the surface-deformation field of volcanoes. Our study provides limits to the use of classical half-space models. Considering axisymmetrical volcanoes, we show that interpreting ground-surface displacements with half-space models can lead to erroneous estimations of the shape of the deformation source. When the average slope of the flanks of a volcano exceeds 20°, tilting in the summit area is reversed relative to that expected for a flat surface. Thus, neglecting topography may lead to misinterpreting an inflation of the source as a deflation. Comparison of Mogi's model with a three-dimensional model shows that ignoring topography may lead to an overestimate of the source-volume change by as much as 50% for a slope of 30°. This comparison also shows that the depths calculated by using Mogi's solution for prominent volcanoes should be considered as depths from the summit of the edifices. Finally, we illustrate these topographic effects by analyzing the deformation field measured by radar interferometry at Mount Etna during its 1991-1993 eruption. A three-dimensional modeling calculation shows that the flattening of the deflation field near the volcano's summit is probably a topographic effect.
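
    For reference, the flat-half-space benchmark being tested is the Mogi point source, whose vertical surface displacement has a closed form. The sketch below evaluates it for a volume change dV at depth d; it deliberately ignores topography, which is exactly the simplification whose consequences the paper quantifies.

    ```python
    import numpy as np

    def mogi_uz(r, depth, dvol, nu=0.25):
        """Vertical surface displacement (m) of a Mogi point source in an elastic half-space:
        uz(r) = (1 - nu) * dV / pi * depth / (r**2 + depth**2)**1.5.
        r: radial distance from the source axis (m); dvol: source volume change (m3)."""
        return (1.0 - nu) * dvol / np.pi * depth / (r ** 2 + depth ** 2) ** 1.5

    # Example: uplift profile for a 1e6 m3 volume increase at 3 km depth.
    r = np.linspace(0.0, 10_000.0, 11)
    print(mogi_uz(r, 3_000.0, 1e6))
    ```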

  10. Coulomb Mechanics And Landscape Geometry Explain Landslide Size Distribution

    NASA Astrophysics Data System (ADS)

    Jeandet, L.; Steer, P.; Lague, D.; Davy, P.

    2017-12-01

    It is generally observed that the dimensions of large bedrock landslides follow power-law scaling relationships. In particular, the non-cumulative frequency distribution (PDF) of bedrock landslide area is well characterized by a negative power law above a critical size, with an exponent of 2.4. However, the respective roles of bedrock mechanical properties, landscape shape and triggering mechanisms in the scaling properties of landslide dimensions are still poorly understood. Yet, unravelling the factors that control this distribution is required to better estimate the total volume of landslides triggered by large earthquakes or storms. To tackle this issue, we develop a simple probabilistic 1D approach to compute the PDF of rupture depths in a given landscape. The model is applied to randomly sampled points along hillslopes of the studied digital elevation models. At each point location, the model determines the range of depths and angles leading to unstable rupture planes, by applying a simple Mohr-Coulomb rupture criterion only to the rupture planes that intersect the downhill surface topography. This model therefore accounts for both rock mechanical properties (friction and cohesion) and landscape shape. We show that this model leads to realistic landslide depth distributions, with a power law arising when the number of samples is high enough. The modeled PDFs of landslide size obtained for several landscapes match those from earthquake-driven landslide catalogues for the same landscapes. In turn, this allows us to invert the effective landslide mechanical parameters, friction and cohesion, associated with those specific events, including the Chi-Chi, Wenchuan, Niigata and Gorkha earthquakes. The friction and cohesion ranges (25-35 degrees and 5-20 kPa) are in good agreement with previously inverted values. Our results demonstrate that reduced-complexity mechanics is effective in modeling the distribution of unstable depths, and they show the role of landscape variability in the landslide size distribution.
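
    The depth-wise stability test at each sampled point can be illustrated with the dry infinite-slope form of the Mohr-Coulomb criterion, in which shallow planes are held by cohesion and deeper planes fail once shear stress outgrows strength. The sketch below returns the unstable depth range for a given slope, cohesion and friction angle; it omits the paper's additional requirement that the rupture plane intersect the downhill topography, and the rock unit weight is an assumed value.

    ```python
    import numpy as np

    def unstable_depth_range(slope_deg, cohesion_pa, friction_deg,
                             unit_weight=26000.0, z_max=50.0, nz=500):
        """Dry infinite-slope Mohr-Coulomb test: a plane at depth z parallel to a slope of
        angle t is unstable when gamma*z*sin(t)*cos(t) > c + gamma*z*cos(t)**2*tan(phi).
        Returns (z_min, z_max) of the unstable depths up to z_max, or None if all stable."""
        t, phi = np.radians(slope_deg), np.radians(friction_deg)
        z = np.linspace(0.01, z_max, nz)
        shear = unit_weight * z * np.sin(t) * np.cos(t)
        strength = cohesion_pa + unit_weight * z * np.cos(t) ** 2 * np.tan(phi)
        unstable = z[shear > strength]
        return (float(unstable.min()), float(unstable.max())) if unstable.size else None

    # Steeper slopes and weaker rock widen the unstable depth range (earlier, larger failures).
    print(unstable_depth_range(40.0, 10_000.0, 30.0))
    ```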

  11. A Parametric Study of Slag Skin Formation in Electroslag Remelting

    NASA Astrophysics Data System (ADS)

    Yanke, Jeff; Krane, Matthew John M.

    In electroslag remelting (ESR), the slag generates heat, chemically refines the melting electrode material, and forms a frozen skin on the mold. An axisymmetric model is used to simulate fluid flow, heat transfer, solidification, and electromagnetics, and their interaction with slag skin formation in ESR. A volume of fluid (VOF) method is used to track the slag/metal interface, allowing simulation of slag freezing to the mold. Mold diameter and applied current are varied to determine how these parameters affect the melt rate and the formation of slag skin during ESR. Variations in the slag skin thickness within the slag cap are found to have a significant impact on the melt rate and the depth of the metal sump. Changes in slag cap volume resulted in small changes in melt rate.

  12. Questa Baseline and Pre-Mining Ground-Water Quality Investigation. 24. Seismic Refraction Tomography for Volume Analysis of Saturated Alluvium in the Straight Creek Drainage and Its Confluence With Red River, Taos County, New Mexico

    USGS Publications Warehouse

    Powers, Michael H.; Burton, Bethany L.

    2007-01-01

    As part of a research effort directed by the New Mexico Environment Department to determine the pre-mining water quality of the Red River at a molybdenum mining site in northern New Mexico, we used seismic refraction tomography to create subsurface compressional-wave velocity images along six lines that crossed the Straight Creek drainage and three that crossed the valley of the Red River. Field work was performed in June 2002 (lines 1-4) and September 2003 (lines 5-9). We interpreted the images to determine depths to the water table and to the top of bedrock. Depths to water and bedrock in boreholes near the lines correlate well with our interpretations based on the seismic data. In general, the images suggest that the alluvium in this area has a trapezoidal cross section. Using a U.S. Geological Survey digital elevation model grid of surface elevations of this region and the interpreted elevations of the water table and bedrock obtained from the seismic data, we generated new models of the shape of the buried bedrock surface and the water table through surface interpolation and extrapolation. Then, using the elevation differences between the two grids, we calculated the volumes of dry and wet alluvium in the two drainages. The Red River alluvium is about 51 percent saturated, whereas the much smaller volume of alluvium in the tributary Straight Creek is only about 18 percent saturated. When combined with average ground-water velocity values, the information we present can be used to determine the discharge of Straight Creek into the Red River relative to the total discharge of the Red River moving past Straight Creek. This information will contribute to more accurate models of ground-water flow, which are needed to determine the pre-mining water quality in the Red River.
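
    The volume step reduces to differencing gridded surfaces and multiplying by the cell area. The sketch below shows that arithmetic for co-registered land-surface, water-table and bedrock grids; the interpolation and extrapolation that produce those grids are described in the report and are not reproduced here.

    ```python
    import numpy as np

    def alluvium_volumes(z_surface, z_water, z_bedrock, cell_area_m2):
        """Dry and saturated alluvium volumes from three co-registered elevation grids (m).
        Volume = sum of positive thickness differences times the grid-cell area."""
        dry = np.clip(np.asarray(z_surface) - np.asarray(z_water), 0.0, None)   # unsaturated thickness
        wet = np.clip(np.asarray(z_water) - np.asarray(z_bedrock), 0.0, None)   # saturated thickness
        dry_vol = float(dry.sum()) * cell_area_m2
        wet_vol = float(wet.sum()) * cell_area_m2
        saturated_fraction = wet_vol / (dry_vol + wet_vol)
        return dry_vol, wet_vol, saturated_fraction
    ```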

  13. Stratospheric aerosol optical depths, 1850-1990

    NASA Technical Reports Server (NTRS)

    Sato, Makiko; Hansen, James E.; Mccormick, M. Patrick; Pollack, James B.

    1993-01-01

    A global stratospheric aerosol database employed for climate simulations is described. For the period 1883-1990, aerosol optical depths are estimated from optical extinction data, whose quality increases with time over that period. For the period 1850-1882, aerosol optical depths are more crudely estimated from volcanological evidence for the volume of ejecta from major known volcanoes. The data set is available over Internet.

  14. Quantification of in vivo implant wear in total knee replacement from dynamic single plane radiography

    NASA Astrophysics Data System (ADS)

    Teeter, Matthew G.; Seslija, Petar; Milner, Jaques S.; Nikolov, Hristo N.; Yuan, Xunhua; Naudie, Douglas D. R.; Holdsworth, David W.

    2013-05-01

    An in vivo method to measure wear in total knee replacements was developed using dynamic single-plane fluoroscopy. A dynamic, anthropomorphic total knee replacement phantom with interchangeable, custom-fabricated components of known wear volume was created, and dynamic imaging was performed. For each frame of the fluoroscopy data, the relative location of the femoral and tibial components were determined, and the apparent intersection of the femoral component with the tibial insert was used to calculate wear volume, wear depth, and frequency of intersection. No difference was found between the measured and true wear volumes. The precision of the measurements was ±39.7 mm3 for volume and ±0.126 mm for wear depth. The results suggest the system is capable of tracking wear volume changes across multiple time points in patients. As a dynamic technique, this method can provide both kinematic and wear measurements that may be useful for evaluating new implant designs for total knee replacements.

  15. Intelligent Modeling Combining Adaptive Neuro Fuzzy Inference System and Genetic Algorithm for Optimizing Welding Process Parameters

    NASA Astrophysics Data System (ADS)

    Gowtham, K. N.; Vasudevan, M.; Maduraimuthu, V.; Jayakumar, T.

    2011-04-01

    Modified 9Cr-1Mo ferritic steel is used as a structural material for steam generator components of power plants. Generally, tungsten inert gas (TIG) welding is preferred for welding these steels, but the depth of penetration achievable during autogenous welding is limited. Therefore, activated flux TIG (A-TIG) welding, a novel welding technique, has been developed in-house to increase the depth of penetration. In modified 9Cr-1Mo steel joints produced by the A-TIG welding process, the weld bead width, depth of penetration, and heat-affected zone (HAZ) width play an important role in determining the mechanical properties as well as the performance of the weld joints during service. To obtain the desired weld bead geometry and HAZ width, it becomes important to set the welding process parameters appropriately. In this work, an adaptive neuro-fuzzy inference system is used to develop independent models correlating the welding process parameters (current, voltage, and torch speed) with the weld bead shape parameters (depth of penetration, bead width, and HAZ width). A genetic algorithm is then employed to determine the optimum A-TIG welding process parameters that yield the desired weld bead shape parameters and HAZ width.

  16. Forecasting the Future Food Service World of Work. Final Report. Volume II. Centralized Food Service Systems. Service Management Reports.

    ERIC Educational Resources Information Center

    Powers, Thomas F., Ed.; Swinton, John R., Ed.

    Volume II of a three-volume study on the future of the food service industry considers the effects that centralized food production will have on the future of food production systems. Based on information from the Fair Acres Project and the Michigan State University Vegetable Processing Center, the authors describe the operations of a centralized…

  17. A Mass Diffusion Model for Dry Snow Utilizing a Fabric Tensor to Characterize Anisotropy

    NASA Astrophysics Data System (ADS)

    Shertzer, Richard H.; Adams, Edward E.

    2018-03-01

    A homogenization algorithm for randomly distributed microstructures is applied to develop a mass diffusion model for dry snow. Homogenization is a multiscale approach linking constituent behavior at the microscopic level—among ice and air—to the macroscopic material—snow. Principles of continuum mechanics at the microscopic scale describe water vapor diffusion across an ice grain's surface to the air-filled pore space. Volume averaging and a localization assumption scale up and down, respectively, between microscopic and macroscopic scales. The model yields a mass diffusivity expression at the macroscopic scale that is, in general, a second-order tensor parameterized by both bulk and microstructural variables. The model predicts a mass diffusivity of water vapor through snow that is less than that through air. Mass diffusivity is expected to decrease linearly with ice volume fraction. Potential anisotropy in snow's mass diffusivity is captured due to the tensor representation. The tensor is built from directional data assigned to specific, idealized microstructural features. Such anisotropy has been observed in the field and laboratories in snow morphologies of interest such as weak layers of depth hoar and near-surface facets.

  18. Secondary Ion Mass Spectrometry SIMS XI

    NASA Astrophysics Data System (ADS)

    Gillen, G.; Lareau, R.; Bennett, J.; Stevie, F.

    2003-05-01

    This volume contains 252 contributions presented as plenary, invited and contributed poster and oral presentations at the 11th International Conference on Secondary Ion Mass Spectrometry (SIMS XI) held at the Hilton Hotel, Walt Disney World Village, Orlando, Florida, 7-12 September, 1997. The book covers a diverse range of research, reflecting the rapid growth in advanced semiconductor characterization, ultra shallow depth profiling, TOF-SIMS and the new areas in which SIMS techniques are being used, for example in biological sciences and organic surface characterization. Papers are presented under the following categories: Isotopic SIMS Biological SIMS Semiconductor Characterization Techniques and Applications Ultra Shallow Depth Profiling Depth Profiling Fundamental/Modelling and Diffusion Sputter-Induced Topography Fundamentals of Molecular Desorption Organic Materials Practical TOF-SIMS Polyatomic Primary Ions Materials/Surface Analysis Postionization Instrumentation Geological SIMS Imaging Fundamentals of Sputtering Ion Formation and Cluster Formation Quantitative Analysis Environmental/Particle Characterization Related Techniques. These proceedings provide an invaluable source of reference for both newcomers to the field and experienced SIMS users.

  19. Delineation of the hydrogeologic framework of the Big Sioux aquifer near Sioux Falls, South Dakota, using airborne electromagnetic data

    USGS Publications Warehouse

    Valseth, Kristen J.; Delzer, Gregory C.; Price, Curtis V.

    2018-03-21

    The U.S. Geological Survey, in cooperation with the City of Sioux Falls, South Dakota, began developing a groundwater-flow model of the Big Sioux aquifer in 2014 that will enable the City to make more informed water management decisions, such as delineation of areas of the greatest specific yield, which is crucial for locating municipal wells. Innovative tools are being evaluated as part of this study that can improve the delineation of the hydrogeologic framework of the aquifer for use in development of a groundwater-flow model, and the approach could have transfer value for similar hydrogeologic settings. The first step in developing a groundwater-flow model is determining the hydrogeologic framework (vertical and horizontal extents of the aquifer), which typically is determined by interpreting geologic information from drillers’ logs and surficial geology maps. However, well and borehole data only provide hydrogeologic information for a single location; conversely, nearly continuous geophysical data are collected along flight lines using airborne electromagnetic (AEM) surveys. These electromagnetic data are collected every 3 meters along a flight line (on average) and subsequently can be related to hydrogeologic properties. AEM data, coupled with and constrained by well and borehole data, can substantially improve the accuracy of aquifer hydrogeologic framework delineations and result in better groundwater-flow models. AEM data were acquired using the Resolve frequency-domain AEM system to map the Big Sioux aquifer in the region of the city of Sioux Falls. The survey acquired more than 870 line-kilometers of AEM data over a total area of about 145 square kilometers, primarily over the flood plain of the Big Sioux River between the cities of Dell Rapids and Sioux Falls. The U.S. Geological Survey inverted the survey data to generate resistivity-depth sections that were used in two-dimensional maps and in three-dimensional volumetric visualizations of the Earth resistivity distribution. Contact lines were drawn using a geographic information system to delineate interpreted geologic stratigraphy. The contact lines were converted to points and then interpolated into a raster surface. The methods used to develop elevation and depth maps of the hydrogeologic framework of the Big Sioux aquifer are described herein. The final AEM-interpreted aquifer thickness ranged from 0 to 31 meters with an average thickness of 12.8 meters. The estimated total volume of the aquifer was 1,060,000,000 cubic meters based on the assumption that the top of the aquifer is the land-surface elevation. A simple calculation of the volume (length times width times height) of a previous delineation of the aquifer estimated the aquifer volume at 378,000,000 cubic meters; thus, the estimation based on AEM data is more than twice the previous estimate. The depth to the top of the Sioux Quartzite, which ranged from 0 to 90 meters, was also delineated from the AEM data.
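
    The volume arithmetic described above can be illustrated with a short sketch: sum the interpreted thickness over raster cells inside the aquifer extent and multiply by the cell area, then compare with a simple length-times-width-times-height estimate. The grid values below are synthetic and do not represent the Big Sioux interpretation.

```python
# Sketch of an aquifer-volume estimate from an interpreted thickness raster:
# volume = sum(thickness * cell_area) over cells mapped as aquifer. The grid
# below is synthetic; it only illustrates the arithmetic.
import numpy as np

rng = np.random.default_rng(2)

cell_size = 50.0                                      # raster resolution, m
thickness = rng.uniform(0.0, 31.0, size=(400, 300))   # interpreted thickness, m
inside = rng.random(thickness.shape) < 0.6            # cells mapped as aquifer

volume = np.sum(thickness[inside]) * cell_size**2
print(f"raster-based volume: {volume:.3e} m^3")

# Simple bounding-box estimate (length x width x mean thickness) for comparison,
# analogous in spirit to the earlier delineation mentioned in the abstract.
length = thickness.shape[0] * cell_size
width = thickness.shape[1] * cell_size
simple = length * width * thickness[inside].mean()
print(f"L x W x H estimate:  {simple:.3e} m^3")
```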

  20. Automatic characterization and segmentation of human skin using three-dimensional optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Hori, Yasuaki; Yasuno, Yoshiaki; Sakai, Shingo; Matsumoto, Masayuki; Sugawara, Tomoko; Madjarova, Violeta; Yamanari, Masahiro; Makita, Shuichi; Yasui, Takeshi; Araki, Tsutomu; Itoh, Masahide; Yatagai, Toyohiko

    2006-03-01

    A set of fully automated algorithms that is specialized for analyzing a three-dimensional optical coherence tomography (OCT) volume of human skin is reported. The algorithm set first determines the skin surface of the OCT volume, and a depth-oriented algorithm provides the mean epidermal thickness, a distribution map of the epidermis, and a segmented volume of the epidermis. Subsequently, an en face shadowgram is produced by an algorithm to visualize the infundibula in the skin with high contrast. The population and occupation ratio of the infundibula are provided by a histogram-based thresholding algorithm and a distance mapping algorithm. En face OCT slices at constant depths from the sample surface are extracted, and the histogram-based thresholding algorithm is again applied to these slices, yielding a three-dimensional segmented volume of the infundibula. The dermal attenuation coefficient is also calculated from the OCT volume in order to evaluate the skin texture. The algorithm set was applied to swept-source OCT volumes of the skin of several volunteers, and the results show the high stability, portability and reproducibility of the algorithm set.
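
    A minimal stand-in for the histogram-based thresholding step is shown below: Otsu's method applied to a synthetic en face slice to separate dark infundibula from brighter surrounding tissue. It reproduces only the general idea, not the reported algorithm set or its parameters.

```python
# Minimal stand-in for the histogram-based thresholding step: Otsu's method on
# a synthetic "en face" slice, segmenting dark infundibula from brighter tissue.
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the intensity threshold that maximizes between-class variance."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()
    w0 = np.cumsum(p)                         # weight of the "dark" class
    w1 = 1.0 - w0                             # weight of the "bright" class
    cum_mean = np.cumsum(p * centers)
    m0 = cum_mean / np.maximum(w0, 1e-12)
    m1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (m0 - m1) ** 2
    return centers[np.argmax(between[:-1])]   # skip the degenerate last bin

rng = np.random.default_rng(3)
slice_ = rng.normal(0.7, 0.05, size=(256, 256))                 # brighter tissue
slice_[80:140, 80:140] = rng.normal(0.2, 0.05, size=(60, 60))   # darker infundibula

t = otsu_threshold(slice_)
mask = slice_ < t                                               # dark structures
print(f"threshold = {t:.3f}, occupation ratio = {mask.mean():.4f}")
```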

  1. Apprentice Food Service Specialist (AFSC 62230).

    ERIC Educational Resources Information Center

    Air Univ., Gunter AFS, Ala. Extension Course Inst.

    This two-volume student text is designed for use by Air Force personnel enrolled in a self-study extension course for apprentice food service specialists. Covered in the first volume are fundamentals of food preparation and service (careers in food service, food service sanitation, principles of food preparation and service, and baking…

  2. Seismic reflection images of the central California coast ranges and the tremor region of the San-Andreas-Fault system near Cholame

    NASA Astrophysics Data System (ADS)

    Gutjahr, Stine; Buske, Stefan

    2010-05-01

    The SJ-6 seismic reflection profile was acquired in 1981 over a distance of about 180 km from Morro Bay to the Sierra Nevada foothills in South Central California. The profile runs across several prominent fault systems, e.g. the Rinconada Fault (RF) in the western part as well as the San Andreas Fault (SAF) in its central part. The latter includes the region of increased tremor activity near Cholame, as reported recently by several authors. We have recorrelated the original field data to 26 seconds two-way traveltime, which allows us to image the crust and uppermost mantle down to approximately 40 km depth. A 3D tomographic velocity model derived from local earthquake data (Thurber et al., 2006) was used and Kirchhoff prestack depth migration as well as Fresnel-Volume-Migration were applied to the data set. Both imaging techniques were implemented in 3D by taking into account the true shot and receiver locations. The imaged subsurface volume itself was divided into three separate parts to correctly account for the significant kink in the profile line near the SAF. The most prominent features in the resulting images are areas of high reflectivity down to 30 km depth in particular in the central western part of the profile corresponding to the Salinian Block between the RF and the SAF. In the southwestern part strong reflectors can be identified that are dipping slightly to the northeast at depths of around 15-25 km. The eastern part consists of west dipping sediments at depths of 2-10 km that form a syncline structure in the west of the eastern part. The resulting images are compared to existing interpretations (Trehu and Wheeler, 1987; Wentworth and Zoback, 1989; Bloch et al., 1993) and discussed in the context of the suggested tremor locations in that area.

  3. The impact of social franchising on the use of reproductive health and family planning services at public commune health stations in Vietnam.

    PubMed

    Ngo, Anh D; Alden, Dana L; Pham, Van; Phan, Ha

    2010-02-28

    Service franchising is a business model that involves building a network of outlets (franchisees) that are locally owned, but act in coordinated manner with the guidance of a central headquarters (franchisor). The franchisor maintains quality standards, provides managerial training, conducts centralized purchasing and promotes a common brand. Research indicates that franchising private reproductive health and family planning (RHFP) services in developing countries improves quality and utilization. However, there is very little evidence that franchising improves RHFP services delivered through community-based public health clinics. This study evaluates behavioral outcomes associated with a new approach - the Government Social Franchise (GSF) model - developed to improve RHFP service quality and capacity in Vietnam's commune health stations (CHSs). The project involved networking and branding 36 commune health station (CHS) clinics in two central provinces of Da Nang and Khanh Hoa, Vietnam. A quasi-experimental design with 36 control CHSs assessed GSF model effects on client use as measured by: 1) clinic-reported client volume; 2) the proportion of self-reported RHFP service users at participating CHS clinics over the total sample of respondents; and 3) self-reported RHFP service use frequency. Monthly clinic records were analyzed. In addition, household surveys of 1,181 CHS users and potential users were conducted prior to launch and then 6 and 12 months after implementing the GSF network. Regression analyses controlled for baseline differences between intervention and control groups. CHS franchise membership was significantly associated with a 40% plus increase in clinic-reported client volumes for both reproductive and general health services. A 45% increase in clinic-reported family planning service clients related to GSF membership was marginally significant (p = 0.05). Self-reported frequency of RHFP service use increased by 20% from the baseline survey to the 12 month post-launch survey (p < 0.05). However, changes in self-reported usage rate were not significantly associated with franchise membership (p = 0.15). This study provides preliminary evidence regarding the ability of the Government Social Franchise model to increase use of reproductive health and family planning service in smaller public sector clinics. Further investigations, including assessment of health outcomes associated with increased use of GSF services and cost-effectiveness of the model, are required to better delineate the effectiveness and limitations of franchising RHFP services in the public health system in Vietnam and other developing countries.

  4. The impact of social franchising on the use of reproductive health and family planning services at public commune health stations in Vietnam

    PubMed Central

    2010-01-01

    Background Service franchising is a business model that involves building a network of outlets (franchisees) that are locally owned, but act in coordinated manner with the guidance of a central headquarters (franchisor). The franchisor maintains quality standards, provides managerial training, conducts centralized purchasing and promotes a common brand. Research indicates that franchising private reproductive health and family planning (RHFP) services in developing countries improves quality and utilization. However, there is very little evidence that franchising improves RHFP services delivered through community-based public health clinics. This study evaluates behavioral outcomes associated with a new approach - the Government Social Franchise (GSF) model - developed to improve RHFP service quality and capacity in Vietnam's commune health stations (CHSs). Methods The project involved networking and branding 36 commune health station (CHS) clinics in two central provinces of Da Nang and Khanh Hoa, Vietnam. A quasi-experimental design with 36 control CHSs assessed GSF model effects on client use as measured by: 1) clinic-reported client volume; 2) the proportion of self-reported RHFP service users at participating CHS clinics over the total sample of respondents; and 3) self-reported RHFP service use frequency. Monthly clinic records were analyzed. In addition, household surveys of 1,181 CHS users and potential users were conducted prior to launch and then 6 and 12 months after implementing the GSF network. Regression analyses controlled for baseline differences between intervention and control groups. Results CHS franchise membership was significantly associated with a 40% plus increase in clinic-reported client volumes for both reproductive and general health services. A 45% increase in clinic-reported family planning service clients related to GSF membership was marginally significant (p = 0.05). Self-reported frequency of RHFP service use increased by 20% from the baseline survey to the 12 month post-launch survey (p < 0.05). However, changes in self-reported usage rate were not significantly associated with franchise membership (p = 0.15). Conclusions This study provides preliminary evidence regarding the ability of the Government Social Franchise model to increase use of reproductive health and family planning service in smaller public sector clinics. Further investigations, including assessment of health outcomes associated with increased use of GSF services and cost-effectiveness of the model, are required to better delineate the effectiveness and limitations of franchising RHFP services in the public health system in Vietnam and other developing countries. PMID:20187974

  5. Using spin-label W-band EPR to study membrane fluidity profiles in samples of small volume

    NASA Astrophysics Data System (ADS)

    Mainali, Laxman; Hyde, James S.; Subczynski, Witold K.

    2013-01-01

    Conventional and saturation-recovery (SR) EPR at W-band (94 GHz) using phosphatidylcholine spin labels (labeled at the alkyl chain [n-PC] and headgroup [T-PC]) to obtain profiles of membrane fluidity has been demonstrated. Dimyristoylphosphatidylcholine (DMPC) membranes with and without 50 mol% cholesterol have been studied, and the results have been compared with similar studies at X-band (9.4 GHz) (L. Mainali, J.B. Feix, J.S. Hyde, W.K. Subczynski, J. Magn. Reson. 212 (2011) 418-425). Profiles of the spin-lattice relaxation rate (T1-1) obtained from SR EPR measurements for n-PCs and T-PC were used as a convenient quantitative measure of membrane fluidity. Additionally, spectral analysis using Freed's MOMD (microscopic-order macroscopic-disorder) model (E. Meirovitch, J.H. Freed J. Phys. Chem. 88 (1984) 4995-5004) provided rotational diffusion coefficients (R⊥ and R||) and order parameters (S0). Spectral analysis at X-band provided one rotational diffusion coefficient, R⊥. T1-1, R⊥, and R|| profiles reflect local membrane dynamics of the lipid alkyl chain, while the order parameter shows only the amplitude of the wobbling motion of the lipid alkyl chain. Using these dynamic parameters, namely T1-1, R⊥, and R||, one can discriminate the different effects of cholesterol at different depths, showing that cholesterol has a rigidifying effect on alkyl chains to the depth occupied by the rigid steroid ring structure and a fluidizing effect at deeper locations. The nondynamic parameter, S0, shows that cholesterol has an ordering effect on alkyl chains at all depths. Conventional and SR EPR measurements with T-PC indicate that cholesterol has a fluidizing effect on phospholipid headgroups. EPR at W-band provides more detailed information about the depth-dependent dynamic organization of the membrane compared with information obtained at X-band. EPR at W-band has the potential to be a powerful tool for studying membrane fluidity in samples of small volume, ˜30 nL, compared with a representative sample volume of ˜3 μL at X-band.

  6. Grid computing enhances standards-compatible geospatial catalogue service

    NASA Astrophysics Data System (ADS)

    Chen, Aijun; Di, Liping; Bai, Yuqi; Wei, Yaxing; Liu, Yang

    2010-04-01

    A catalogue service facilitates sharing, discovery, retrieval, management of, and access to large volumes of distributed geospatial resources, for example data, services, applications, and their replicas on the Internet. Grid computing provides an infrastructure for effective use of computing, storage, and other resources available online. The Open Geospatial Consortium has proposed a catalogue service specification and a series of profiles for promoting the interoperability of geospatial resources. By referring to the profile of the catalogue service for Web, an innovative information model of a catalogue service is proposed to offer Grid-enabled registry, management, retrieval of and access to geospatial resources and their replicas. This information model extends the e-business registry information model by adopting several geospatial data and service metadata standards—the International Organization for Standardization (ISO)'s 19115/19119 standards and the US Federal Geographic Data Committee (FGDC) and US National Aeronautics and Space Administration (NASA) metadata standards for describing and indexing geospatial resources. In order to select the optimal geospatial resources and their replicas managed by the Grid, the Grid data management service and information service from the Globus Toolkits are closely integrated with the extended catalogue information model. Based on this new model, a catalogue service is implemented first as a Web service. Then, the catalogue service is further developed as a Grid service conforming to Grid service specifications. The catalogue service can be deployed in both the Web and Grid environments and accessed by standard Web services or authorized Grid services, respectively. The catalogue service has been implemented at the George Mason University/Center for Spatial Information Science and Systems (GMU/CSISS), managing more than 17 TB of geospatial data and geospatial Grid services. This service makes it easy to share and interoperate geospatial resources by using Grid technology and extends Grid technology into the geoscience communities.

  7. Changes in High School Chemistry Teacher Beliefs and Practice after a Professional Development Program

    ERIC Educational Resources Information Center

    Spraker, Ralph Everett, Jr.

    2010-01-01

    This study proposed that when professional development modeled the inquiry-approach and provided time for peer-observed enactment and reflection, it would produce change in in-service chemistry teachers' beliefs and practices. Case study methodology was used to collect a variety of in-depth data on teachers' beliefs and practice including…

  8. Depth of penetration of a 785nm wavelength laser in food powders

    NASA Astrophysics Data System (ADS)

    Chao, Kuanglin; Dhakal, Sagar; Qin, Jianwei; Kim, Moon S.; Peng, Yankun; Schmidt, Walter F.

    2015-05-01

    Raman spectroscopy is a useful, rapid, and non-destructive method for both qualitative and quantitative evaluation of chemical composition. However, it is important to measure the depth of penetration of the laser light to ensure that chemical particles at the very bottom of a sample volume are detected by the Raman system. The aim of this study was to investigate the penetration depth of a 785 nm laser (maximum power output 400 mW) into three different food powders, namely dry milk powder, corn starch, and wheat flour. The food powders were layered at five depths between 1 and 5 mm over a Petri dish packed with melamine. Melamine was used as the subsurface reference material because it exhibits known and identifiable Raman spectral peaks. Analysis of the sample spectra for characteristics of melamine and of milk, starch, and flour allowed determination of the effective penetration depth of the laser light in the samples. Three laser power settings (100, 200, and 300 mW) were used to study the effect of laser intensity on depth of penetration. The 785 nm laser source was able to penetrate every point in all three food sample types at 1 mm depth; however, the number of points the laser could penetrate decreased with increasing depth of the food powder. An ANOVA test was carried out to assess the effect of laser intensity on depth of penetration, and it showed that laser intensity significantly influences the depth of penetration. The outcome of this study will be used in the next phase of our work to detect different chemical contaminants in food powders and to develop quantitative analysis models for their detection.
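
    The statistical test mentioned above can be sketched as a one-way ANOVA on penetration depths grouped by laser power. The depth values below are invented solely to show the mechanics of the test.

```python
# Illustrative one-way ANOVA: does laser power affect penetration depth?
# The depth values are made up; they only demonstrate the test mechanics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical penetration depths (mm) at three laser powers.
depth_100mw = rng.normal(2.0, 0.3, size=20)
depth_200mw = rng.normal(2.6, 0.3, size=20)
depth_300mw = rng.normal(3.1, 0.3, size=20)

f_stat, p_value = stats.f_oneway(depth_100mw, depth_200mw, depth_300mw)
print(f"F = {f_stat:.2f}, p = {p_value:.2e}")
if p_value < 0.05:
    print("Laser power has a statistically significant effect on penetration depth.")
```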

  9. Estimating floodwater depths from flood inundation maps and topography

    USGS Publications Warehouse

    Cohen, Sagy; Brakenridge, G. Robert; Kettner, Albert; Bates, Bradford; Nelson, Jonathan M.; McDonald, Richard R.; Huang, Yu-Fen; Munasinghe, Dinuke; Zhang, Jiaqi

    2018-01-01

    Information on flood inundation extent is important for understanding societal exposure, water storage volumes, flood wave attenuation, future flood hazard, and other variables. A number of organizations now provide flood inundation maps based on satellite remote sensing. These data products can efficiently and accurately provide the areal extent of a flood event, but do not provide floodwater depth, an important attribute for first responders and damage assessment. Here we present a new methodology and a GIS-based tool, the Floodwater Depth Estimation Tool (FwDET), for estimating floodwater depth based solely on an inundation map and a digital elevation model (DEM). We compare the FwDET results against water depth maps derived from hydraulic simulation of two flood events, a large-scale event for which we use a medium-resolution (10 m) input layer and a small-scale event for which we use a high-resolution (LiDAR; 1 m) input. Further testing is performed for two inundation maps with a number of challenging features that include a narrow valley, a large reservoir, and an urban setting. The results show that FwDET can accurately calculate floodwater depth for diverse flooding scenarios but can also lead to considerable bias in locations where the inundation extent does not align well with the DEM. In these locations, manual adjustment or higher spatial resolution input is required.
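
    The core logic, water depth equals a locally estimated water-surface elevation minus the terrain elevation, can be sketched as follows. This simplified version assigns each flooded cell the DEM elevation of its nearest inundation-boundary cell; it is an illustration of the general idea, not the published FwDET code.

```python
# Simplified sketch of the general idea: take the DEM elevation along the
# inundation boundary as the local water-surface elevation, spread it to the
# nearest flooded cells, and subtract the DEM. Synthetic data, not the tool.
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)

# Synthetic valley DEM and an inundation mask covering its low ground.
x = np.linspace(-1.0, 1.0, 200)
dem = 5.0 * np.abs(x)[None, :] + 0.01 * rng.random((200, 200))   # valley cross-section
flooded = dem < 2.0                                              # inundation extent

# Boundary cells: flooded cells with at least one dry neighbor.
eroded = ndimage.binary_erosion(flooded)
boundary = flooded & ~eroded

# Nearest-boundary water-surface elevation for every flooded cell.
b_idx = np.argwhere(boundary)
f_idx = np.argwhere(flooded)
tree = cKDTree(b_idx)
_, nearest = tree.query(f_idx)
wse = dem[tuple(b_idx[nearest].T)]

depth = np.zeros_like(dem)
depth[tuple(f_idx.T)] = np.maximum(wse - dem[tuple(f_idx.T)], 0.0)
print(f"max depth = {depth.max():.2f} m, "
      f"mean depth in flooded area = {depth[flooded].mean():.2f} m")
```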

  10. Factors that influence the way local communities respond to consultation processes about major service change: A qualitative study

    PubMed Central

    Barratt, Helen; Harrison, David A.; Raine, Rosalind; Fulop, Naomi J.

    2015-01-01

    Objectives In England, proposed service changes such as Emergency Department closures typically face local opposition. Consequently, public consultation exercises often involve protracted, hostile debates. This study examined a process aimed at engaging a community in decision-making about service reconfiguration, and the public response to this process. Methods A documentary analysis was conducted to map consultation methods used in an urban area of England where plans to consolidate hospital services on fewer sites were under discussion. In-depth interviews (n = 20) were conducted with parents, older people, and patient representatives. The analysis combined inductive and deductive approaches, informed by risk communication theories. Results The commissioners provided a large volume of information about the changes, alongside a programme of public events. However, the complexity of the process, together with what members of the public perceived to be the commissioners’ dismissal of their concerns, led the community to question their motivation. This was compounded by a widespread perception that the proposals were financially driven. Discussion Government policy emphasises the importance of clinical leadership and ‘evidence’ in public consultation. However, an engagement process based on this approach fuelled hostility to the proposals. Policymakers should not assume communities can be persuaded to accommodate service change which may result in reduced access to care. PMID:25975768

  11. Factors that influence the way local communities respond to consultation processes about major service change: A qualitative study.

    PubMed

    Barratt, Helen; Harrison, David A; Raine, Rosalind; Fulop, Naomi J

    2015-09-01

    In England, proposed service changes such as Emergency Department closures typically face local opposition. Consequently, public consultation exercises often involve protracted, hostile debates. This study examined a process aimed at engaging a community in decision-making about service reconfiguration, and the public response to this process. A documentary analysis was conducted to map consultation methods used in an urban area of England where plans to consolidate hospital services on fewer sites were under discussion. In-depth interviews (n=20) were conducted with parents, older people, and patient representatives. The analysis combined inductive and deductive approaches, informed by risk communication theories. The commissioners provided a large volume of information about the changes, alongside a programme of public events. However, the complexity of the process, together with what members of the public perceived to be the commissioners' dismissal of their concerns, led the community to question their motivation. This was compounded by a widespread perception that the proposals were financially driven. Government policy emphasises the importance of clinical leadership and 'evidence' in public consultation. However, an engagement process based on this approach fuelled hostility to the proposals. Policymakers should not assume communities can be persuaded to accommodate service change which may result in reduced access to care. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.

  12. Special Recreation in Rural Areas. Institute Report #7. National Institute on New Models of Community Based Recreation and Leisure Programs and Services for Handicapped Children and Youth.

    ERIC Educational Resources Information Center

    Nesbitt, John A., Comp.; Seymour, Clifford T., Comp.

    The fifth of nine volumes (EC 114 401-409) on recreation for the handicapped examines special recreation in rural areas. The following 11 papers are included: "Recreational, Cultural and Leisure Services for the Handicapped in Rural Communities in Iowa" (D. Szymanski); "Recreation for Handicapped in Rural Communities" (J. Nesbitt); "Programming…

  13. Wayside Energy Storage Study : Volume 1. Summary.

    DOT National Transportation Integrated Search

    1979-02-01

    Volume I summarizes an in-depth application study which was conducted to determine the practicality and viability of using large wayside flywheels to recuperate braking energy from freight trains on long downgrades. The study examined the route struc...

  14. Strategies for Effectively Visualizing a 3D Flow Using Volume Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1997-01-01

    This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding 'halos' that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow.

  15. Apathy is related to cortex morphology in CADASIL. A sulcal-based morphometry study.

    PubMed

    Jouvent, E; Reyes, S; Mangin, J-F; Roca, P; Perrot, M; Thyreau, B; Hervé, D; Dichgans, M; Chabriat, H

    2011-04-26

    Apathy is a debilitating symptom in cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL), the pathophysiology of which remains poorly understood. The aim of this study was to evaluate the neuroanatomic correlates of apathy, using new MRI postprocessing methods based on the identification of cortical sulci, in a large cohort of patients with CADASIL. A total of 132 patients with genetically confirmed diagnosis were included in this prospective cohort study. Global cognitive performances were assessed by the Mattis Dementia Rating Scale (MDRS) and disability by the modified Rankin score (mRS). Apathy was defined according to standard criteria. Depth, width, and cortical thickness of 10 large sulci of the frontal lobe in each hemisphere were measured. Logistic regression modeling was used to evaluate the links between apathy and cortical thickness, depth, or width of the different sulci. All models were adjusted for age, gender, level of education, MDRS, mRS, depression, and global brain volume. Complete MRI datasets of high quality were available in 119 patients. Depth of the posterior cingulate sulcus exhibited the strongest association with apathy in fully adjusted models (right: p value = 0.0006; left: p value = 0.004). Depth and width of cortical sulci in mediofrontal and orbitofrontal areas were independently associated with apathy. By contrast, cortical thickness was not. Cortical morphology in mediofrontal and orbitofrontal areas, by contrast to cortical thickness, is strongly and independently associated with apathy. These results suggest that apathy is related to a reduction of cortical surface rather than of cortical thickness secondary to lesion accumulation in CADASIL.
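
    The adjusted models described above are standard logistic regressions. The sketch below shows the general form using synthetic data and illustrative variable names; it does not reproduce the study's dataset or full covariate set.

```python
# Sketch of an adjusted logistic model: apathy (binary) regressed on a sulcal
# depth measure plus covariates. Data and variable names are synthetic and
# illustrative, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 119

df = pd.DataFrame({
    "age": rng.normal(55.0, 10.0, n),
    "female": rng.integers(0, 2, n),
    "education_years": rng.normal(12.0, 3.0, n),
    "mdrs": rng.normal(135.0, 8.0, n),
    "cingulate_depth": rng.normal(15.0, 2.0, n),   # mm, hypothetical measure
})
# Synthetic outcome: shallower sulci give a higher probability of apathy.
lin = 6.0 - 0.4 * df["cingulate_depth"] + 0.02 * (df["age"] - 55.0)
df["apathy"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

model = smf.logit(
    "apathy ~ cingulate_depth + age + female + education_years + mdrs", data=df
).fit(disp=False)
print(model.summary())
```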

  16. Satellite provided customer premise services: A forecast of potential domestic demand through the year 2000. Volume 2: Technical report

    NASA Technical Reports Server (NTRS)

    Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Al-Kinani, G.

    1983-01-01

    The potential United States domestic telecommunications demand for satellite provided customer premises voice, data and video services through the year 2000 was forecast, so that this information on service demand would be available to aid in NASA program planning. To accomplish this overall purpose, the following objectives were achieved: development of a forecast of the total domestic telecommunications demand, identification of that portion of the telecommunications demand suitable for transmission by satellite systems, identification of that portion of the satellite market addressable by customer premises services (CPS) systems, identification of that portion of the satellite market addressable by Ka-band CPS systems, and postulation of a Ka-band CPS network on a nationwide and local level. The approach employed included the use of a variety of forecasting models, a market distribution model and a network optimization model. Forecasts were developed for: 1980, 1990, and 2000; voice, data and video services; terrestrial and satellite delivery modes; and C, Ku and Ka-bands.

  17. Satellite provided customer premise services: A forecast of potential domestic demand through the year 2000. Volume 2: Technical report

    NASA Astrophysics Data System (ADS)

    Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Al-Kinani, G.

    1983-08-01

    The potential United States domestic telecommunications demand for satellite provided customer premises voice, data and video services through the year 2000 was forecast, so that this information on service demand would be available to aid in NASA program planning. To accomplish this overall purpose, the following objectives were achieved: development of a forecast of the total domestic telecommunications demand, identification of that portion of the telecommunications demand suitable for transmission by satellite systems, identification of that portion of the satellite market addressable by customer premises services (CPS) systems, identification of that portion of the satellite market addressable by Ka-band CPS systems, and postulation of a Ka-band CPS network on a nationwide and local level. The approach employed included the use of a variety of forecasting models, a market distribution model and a network optimization model. Forecasts were developed for: 1980, 1990, and 2000; voice, data and video services; terrestrial and satellite delivery modes; and C, Ku and Ka-bands.

  18. Imputation and Model-Based Updating Technique for Annual Forest Inventories

    Treesearch

    Ronald E. McRoberts

    2001-01-01

    The USDA Forest Service is developing an annual inventory system to establish the capability of producing annual estimates of timber volume and related variables. The inventory system features measurement of an annual sample of field plots with options for updating data for plots measured in previous years. One imputation and two model-based updating techniques are...

  19. Impact Cratering: Bridging the Gap Between Modeling and Observations

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This volume contains abstracts that have been accepted for presentation at the workshop on Impact Cratering: Bridging the Gap Between Modeling and Observations, February 7-9, 2003, in Houston, Texas. Logistics, onsite administration, and publications for this workshop were provided by the staff of the Publications and Program Services Department at the Lunar and Planetary Institute.

  20. Interaction between aerosol and the planetary boundary layer depth at sites in the US and China

    NASA Astrophysics Data System (ADS)

    Sawyer, V. R.

    2015-12-01

    The depth of the planetary boundary layer (PBL) defines a changing volume into which pollutants from the surface can disperse, which affects weather, surface air quality and radiative forcing in the lower troposphere. Model simulations have also shown that aerosol within the PBL heats the layer at the expense of the surface, changing the stability profile and therefore also the development of the PBL itself: aerosol radiative forcing within the PBL suppresses surface convection and causes shallower PBLs. However, the effect has been difficult to detect in observations. The most intensive radiosonde measurements have a temporal resolution too coarse to detect the full diurnal variability of the PBL, but remote sensing such as lidar can fill in the gaps. Using a method that combines two common PBL detection algorithms (wavelet covariance and iterative curve-fitting) PBL depth retrievals from micropulse lidar (MPL) at the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site are compared to MPL-derived PBL depths from a multiyear lidar deployment at the Hefei Radiation Observatory (HeRO). With aerosol optical depth (AOD) measurements from both sites, it can be shown that a weak inverse relationship exists between AOD and daytime PBL depth. This relationship is stronger at the more polluted HeRO site than at SGP. Figure: Mean daily AOD vs. mean daily PBL depth, with the Nadaraya-Watson estimator overlaid on the kernel density estimate. Left, SGP; right, HeRO.
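
    One half of the combined detection method, the wavelet covariance transform, can be sketched as below: a Haar covariance transform of a backscatter profile peaks where the signal drops sharply, which is taken as the PBL top. The profile is synthetic, the dilation value is illustrative, and the iterative curve-fitting step and any quality control are omitted.

```python
# Minimal version of the Haar wavelet covariance step used in lidar PBL-top
# detection: the transform peaks where the backscatter profile drops sharply.
import numpy as np

def haar_covariance(z, profile, dilation):
    """W(b) = (1/a) * integral of profile(z) * haar((z - b)/a) dz, a = dilation."""
    dz = z[1] - z[0]
    w = np.zeros_like(z)
    for i, b in enumerate(z):
        below = (z >= b - dilation / 2) & (z < b)    # haar = +1 below the center
        above = (z >= b) & (z <= b + dilation / 2)   # haar = -1 above the center
        w[i] = (profile[below].sum() - profile[above].sum()) * dz / dilation
    return w

# Synthetic attenuated-backscatter profile: a well-mixed layer up to ~1.2 km,
# a sharp decrease into the free troposphere, plus noise.
rng = np.random.default_rng(7)
z = np.arange(0.0, 4000.0, 15.0)                     # height, m
profile = 1.0 / (1.0 + np.exp((z - 1200.0) / 60.0)) + 0.02 * rng.normal(size=z.size)

w = haar_covariance(z, profile, dilation=300.0)
pbl_top = z[np.argmax(w)]
print(f"estimated PBL depth: {pbl_top:.0f} m")
```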

  1. International VLBI Service for Geodesy and Astrometry: General Meeting Proceedings

    NASA Technical Reports Server (NTRS)

    Vandenberg, Nancy R. (Editor); Baver, Karen D. (Editor)

    2002-01-01

    This volume contains the proceedings of the second General Meeting of the International VLBI Service for Geodesy and Astrometry (IVS), held in Tsukuba, Japan, February 4-7, 2002. The contents of this volume also appear on the IVS Web site at http://ivscc.gsfc.nasa.gov/publications/gm2002. The keynote of the second GM was prospects for the future, in keeping with the re-organization of the IAG around the motivation of geodesy as 'an old science with a dynamic future' and noting that providing reference frames for Earth system science that are consistent over decades at the highest accuracy level will provide a challenging role for IVS. The goal of the meeting was to provide an interesting and informative program for a wide cross section of IVS members, including station operators, program managers, and analysts. This volume contains 72 papers and five abstracts of papers presented at the GM. The volume also includes reports about three splinter meetings held in conjunction with the GM: a mini-TOW (Technical Operations Workshop), the third IVS Analysis Workshop and a meeting of the analysis working group on geophysical modeling.

  2. Commissioning dose computation models for spot scanning proton beams in water for a commercially available treatment planning system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, X. R.; Poenisch, F.; Lii, M.

    2013-04-15

    Purpose: To present our method and experience in commissioning dose models in water for spot scanning proton therapy in a commercial treatment planning system (TPS). Methods: The input data required by the TPS included in-air transverse profiles and integral depth doses (IDDs). All input data were obtained from Monte Carlo (MC) simulations that had been validated by measurements. MC-generated IDDs were converted to units of Gy mm²/MU using the measured IDDs at a depth of 2 cm employing the largest commercially available parallel-plate ionization chamber. The sensitive area of the chamber was insufficient to fully encompass the entire lateral dose deposited at depth by a pencil beam (spot). To correct for the detector size, correction factors as a function of proton energy were defined and determined using MC. The fluence of individual spots was initially modeled as a single Gaussian (SG) function and later as a double Gaussian (DG) function. The DG fluence model was introduced to account for the spot fluence due to contributions of large angle scattering from the devices within the scanning nozzle, especially from the spot profile monitor. To validate the DG fluence model, we compared calculations and measurements, including doses at the center of spread out Bragg peaks (SOBPs) as a function of nominal field size, range, and SOBP width, lateral dose profiles, and depth doses for different widths of SOBP. Dose models were validated extensively with patient treatment field-specific measurements. Results: We demonstrated that the DG fluence model is necessary for predicting the field size dependence of dose distributions. With this model, the calculated doses at the center of SOBPs as a function of nominal field size, range, and SOBP width, lateral dose profiles and depth doses for rectangular target volumes agreed well with respective measured values. With the DG fluence model for our scanning proton beam line, we successfully treated more than 500 patients from March 2010 through June 2012 with acceptable agreement between TPS calculated and measured dose distributions. However, the current dose model still has limitations in predicting field size dependence of doses at some intermediate depths of proton beams with high energies. Conclusions: We have commissioned a DG fluence model for clinical use. It is demonstrated that the DG fluence model is significantly more accurate than the SG fluence model. However, some deficiencies in modeling the low-dose envelope in the current dose algorithm still exist. Further improvements to the current dose algorithm are needed. The method presented here should be useful for commissioning pencil beam dose algorithms in new versions of TPS in the future.
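
    The double-Gaussian parameterization described above can be sketched as a narrow primary Gaussian plus a low-weight, wide Gaussian representing large-angle scatter from nozzle components. The sigma and weight values below are illustrative only, not commissioning data; the example simply shows why the wide component introduces a field-size dependence.

```python
# Sketch of a double-Gaussian (DG) lateral fluence parameterization for a
# scanned proton spot: narrow primary Gaussian + low-weight wide Gaussian.
# Sigma and weight values are illustrative, not commissioning data.
import numpy as np

def gaussian_2d(r, sigma):
    """Radially symmetric 2D Gaussian, normalized so its area integral is 1."""
    return np.exp(-r**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def dg_fluence(r, sigma1=4.0, sigma2=12.0, weight=0.05):
    """DG spot fluence: (1 - w) * narrow Gaussian + w * wide Gaussian."""
    return (1 - weight) * gaussian_2d(r, sigma1) + weight * gaussian_2d(r, sigma2)

# Field-size effect in one number: fraction of a central spot's fluence that
# falls inside radius R, for the single- vs double-Gaussian models.
r = np.linspace(0.0, 100.0, 2001)   # radial distance, mm
dr = r[1] - r[0]
for R in (10.0, 30.0, 60.0):
    inside = r <= R
    sg = np.sum(gaussian_2d(r[inside], 4.0) * 2 * np.pi * r[inside]) * dr
    dg = np.sum(dg_fluence(r[inside]) * 2 * np.pi * r[inside]) * dr
    print(f"R = {R:4.0f} mm: SG contains {sg:.3f}, DG contains {dg:.3f}")
```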

  3. Commissioning dose computation models for spot scanning proton beams in water for a commercially available treatment planning system

    PubMed Central

    Zhu, X. R.; Poenisch, F.; Lii, M.; Sawakuchi, G. O.; Titt, U.; Bues, M.; Song, X.; Zhang, X.; Li, Y.; Ciangaru, G.; Li, H.; Taylor, M. B.; Suzuki, K.; Mohan, R.; Gillin, M. T.; Sahoo, N.

    2013-01-01

    Purpose: To present our method and experience in commissioning dose models in water for spot scanning proton therapy in a commercial treatment planning system (TPS). Methods: The input data required by the TPS included in-air transverse profiles and integral depth doses (IDDs). All input data were obtained from Monte Carlo (MC) simulations that had been validated by measurements. MC-generated IDDs were converted to units of Gy mm2/MU using the measured IDDs at a depth of 2 cm employing the largest commercially available parallel-plate ionization chamber. The sensitive area of the chamber was insufficient to fully encompass the entire lateral dose deposited at depth by a pencil beam (spot). To correct for the detector size, correction factors as a function of proton energy were defined and determined using MC. The fluence of individual spots was initially modeled as a single Gaussian (SG) function and later as a double Gaussian (DG) function. The DG fluence model was introduced to account for the spot fluence due to contributions of large angle scattering from the devices within the scanning nozzle, especially from the spot profile monitor. To validate the DG fluence model, we compared calculations and measurements, including doses at the center of spread out Bragg peaks (SOBPs) as a function of nominal field size, range, and SOBP width, lateral dose profiles, and depth doses for different widths of SOBP. Dose models were validated extensively with patient treatment field-specific measurements. Results: We demonstrated that the DG fluence model is necessary for predicting the field size dependence of dose distributions. With this model, the calculated doses at the center of SOBPs as a function of nominal field size, range, and SOBP width, lateral dose profiles and depth doses for rectangular target volumes agreed well with respective measured values. With the DG fluence model for our scanning proton beam line, we successfully treated more than 500 patients from March 2010 through June 2012 with acceptable agreement between TPS calculated and measured dose distributions. However, the current dose model still has limitations in predicting field size dependence of doses at some intermediate depths of proton beams with high energies. Conclusions: We have commissioned a DG fluence model for clinical use. It is demonstrated that the DG fluence model is significantly more accurate than the SG fluence model. However, some deficiencies in modeling the low-dose envelope in the current dose algorithm still exist. Further improvements to the current dose algorithm are needed. The method presented here should be useful for commissioning pencil beam dose algorithms in new versions of TPS in the future. PMID:23556893

  4. Commissioning dose computation models for spot scanning proton beams in water for a commercially available treatment planning system.

    PubMed

    Zhu, X R; Poenisch, F; Lii, M; Sawakuchi, G O; Titt, U; Bues, M; Song, X; Zhang, X; Li, Y; Ciangaru, G; Li, H; Taylor, M B; Suzuki, K; Mohan, R; Gillin, M T; Sahoo, N

    2013-04-01

    To present our method and experience in commissioning dose models in water for spot scanning proton therapy in a commercial treatment planning system (TPS). The input data required by the TPS included in-air transverse profiles and integral depth doses (IDDs). All input data were obtained from Monte Carlo (MC) simulations that had been validated by measurements. MC-generated IDDs were converted to units of Gy mm(2)/MU using the measured IDDs at a depth of 2 cm employing the largest commercially available parallel-plate ionization chamber. The sensitive area of the chamber was insufficient to fully encompass the entire lateral dose deposited at depth by a pencil beam (spot). To correct for the detector size, correction factors as a function of proton energy were defined and determined using MC. The fluence of individual spots was initially modeled as a single Gaussian (SG) function and later as a double Gaussian (DG) function. The DG fluence model was introduced to account for the spot fluence due to contributions of large angle scattering from the devices within the scanning nozzle, especially from the spot profile monitor. To validate the DG fluence model, we compared calculations and measurements, including doses at the center of spread out Bragg peaks (SOBPs) as a function of nominal field size, range, and SOBP width, lateral dose profiles, and depth doses for different widths of SOBP. Dose models were validated extensively with patient treatment field-specific measurements. We demonstrated that the DG fluence model is necessary for predicting the field size dependence of dose distributions. With this model, the calculated doses at the center of SOBPs as a function of nominal field size, range, and SOBP width, lateral dose profiles and depth doses for rectangular target volumes agreed well with respective measured values. With the DG fluence model for our scanning proton beam line, we successfully treated more than 500 patients from March 2010 through June 2012 with acceptable agreement between TPS calculated and measured dose distributions. However, the current dose model still has limitations in predicting field size dependence of doses at some intermediate depths of proton beams with high energies. We have commissioned a DG fluence model for clinical use. It is demonstrated that the DG fluence model is significantly more accurate than the SG fluence model. However, some deficiencies in modeling the low-dose envelope in the current dose algorithm still exist. Further improvements to the current dose algorithm are needed. The method presented here should be useful for commissioning pencil beam dose algorithms in new versions of TPS in the future.

  5. Highway Safety Program Manual: Volume 15: Police Traffic Services.

    ERIC Educational Resources Information Center

    National Highway Traffic Safety Administration (DOT), Washington, DC.

    Volume 15 of the 19-volume Highway Safety Program Manual (which provides guidance to State and local governments on preferred highway safety practices) focuses on police traffic services. The purpose and objectives of a police services program are described. Federal authority in the areas of highway safety and policies regarding a police traffic…

  6. Real-time 3D human pose recognition from reconstructed volume via voxel classifiers

    NASA Astrophysics Data System (ADS)

    Yoo, ByungIn; Choi, Changkyu; Han, Jae-Joon; Lee, Changkyo; Kim, Wonjun; Suh, Sungjoo; Park, Dusik; Kim, Junmo

    2014-03-01

    This paper presents a human pose recognition method which simultaneously reconstructs a human volume, based on an ensemble of voxel classifiers, from a single depth image in real time. Human pose recognition is a difficult task since a single depth camera can capture only the visible surfaces of a human body. In order to recognize invisible (self-occluded) surfaces of a human body, the proposed algorithm employs voxel classifiers trained with multi-layered synthetic voxels. Specifically, ray-casting onto a volumetric human model generates synthetic voxels, where each voxel consists of a 3D position and an ID corresponding to a body part. The synthesized volumetric data, which contain both visible and invisible body voxels, are used to train the voxel classifiers. As a result, the voxel classifiers not only identify the visible voxels but also reconstruct the 3D positions and IDs of the invisible voxels. The experimental results show improved performance in estimating human poses due to the capability of inferring the invisible human body voxels. It is expected that the proposed algorithm can be applied to many fields such as telepresence, gaming, virtual fitting, wellness business, and real 3D content control on real 3D displays.

  7. Soil carbon storage in a small arid catchment in the Negev desert (Israel)

    NASA Astrophysics Data System (ADS)

    Hoffmann, Ulrike; Kuhn, Nikolaus

    2010-05-01

    Mineral soil represents a major pool in the global carbon cycle. Its behavior as a carbon reservoir under global climate and environmental change is far from fully understood, and comparable data on mineral soil organic carbon (SOC) at the regional scale are scarce. To improve our understanding of soil carbon sequestration, regional estimates of soil carbon pools in different ecosystem types are needed. So far, little attention has been given to dryland ecosystems, yet they are considered highly sensitive to environmental change, with large and rapid responses to even the smallest changes in climate conditions. Because drylands cover an extensive surface area across the globe (6.15 billion ha), they have been suggested as a potential component of major carbon storage. A priori reasoning suggests that regional spatial patterns of SOC density (kg/m²) in drylands are mostly affected by vegetation, soil texture, landscape position, soil truncation, wind erosion/deposition, and water supply. Particularly unclear is the interaction between soil volume, geomorphic processes, and SOC density at the regional scale. This study aims to enhance our understanding of regional spatial variability as a function of soil volume, topography, and surface parameters in areas susceptible to environmental change. Soil samples were taken along small transects at representative slope positions across a range of elevations, soil textures, vegetation types, and terrain positions in a small catchment (600 ha) in the Negev desert. Topographic variables were extracted from a high-resolution (0.5 m) digital elevation model. The soil volume was estimated by excavating the entire soil at each representative sampling position and laser scanning the surface before and after excavation. SOC concentration of the soil samples was determined with a CHN analyser. For each sample, carbon densities (in kg/m²) were estimated for the mineral soil layer. The results indicate large spatial variability of carbon contents, soil volume, and soil depths across the landscape. In general, topography exerts a strong control on carbon contents and soil depths at the study site. The lowest carbon contents occur at the hillslope tops, with increasing contents downslope. Because of the significantly larger carbon content on the north-facing slope, we suggest that solar-radiation-driven differences in soil moisture content are a major control on SOC. For soil depths, the differences are less clear: soil depths appear greater on the south-facing slope, but differences with respect to slope position are not significant. Concerning the total amount of carbon stored in the study area, the results show that soil carbon should not be neglected in arid areas. Our results indicate that carbon contents in dynamic environments are affected and controlled more by surface properties (soil volume) than by climate, suggesting that climate is less important than surface processes in dryland ecosystems.
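
    The carbon-density calculation mentioned above follows a standard layer-wise formula: SOC density (kg C/m²) is the sum over layers of concentration times bulk density times layer thickness. The worked example below uses illustrative layer values, not measurements from the Negev catchment.

```python
# Worked example of the carbon-density calculation:
# SOC density (kg C / m^2) = sum over layers of
#   concentration (kg C / kg soil) * bulk density (kg soil / m^3) * thickness (m).
# Layer values are illustrative, not measurements from the Negev catchment.
layers = [
    # (SOC concentration in g C per kg soil, bulk density kg/m^3, thickness m)
    (4.0, 1400.0, 0.05),
    (2.5, 1500.0, 0.10),
    (1.0, 1600.0, 0.15),
]

soc_density = sum((conc / 1000.0) * bd * thick for conc, bd, thick in layers)
print(f"profile SOC density: {soc_density:.2f} kg C/m^2")
# -> 0.004*1400*0.05 + 0.0025*1500*0.10 + 0.001*1600*0.15
#    = 0.28 + 0.375 + 0.24 = 0.895 kg C/m^2
```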

  8. High organic inputs explain shallow and deep SOC storage in a long-term agroforestry system - combining experimental and modeling approaches

    NASA Astrophysics Data System (ADS)

    Cardinael, Rémi; Guenet, Bertrand; Chevallier, Tiphaine; Dupraz, Christian; Cozzi, Thomas; Chenu, Claire

    2018-01-01

    Agroforestry is an increasingly popular farming system enabling agricultural diversification and providing several ecosystem services. In agroforestry systems, soil organic carbon (SOC) stocks are generally increased, but it is difficult to disentangle the different factors responsible for this storage. Organic carbon (OC) inputs to the soil may be larger, but SOC decomposition rates may be modified owing to microclimate, physical protection, or priming effect from roots, especially at depth. We used an 18-year-old silvoarable system associating hybrid walnut trees (Juglans regia × nigra) and durum wheat (Triticum turgidum L. subsp. durum) and an adjacent agricultural control plot to quantify all OC inputs to the soil - leaf litter, tree fine root senescence, crop residues, and tree row herbaceous vegetation - and measured SOC stocks down to 2 m of depth at varying distances from the trees. We then proposed a model that simulates SOC dynamics in agroforestry accounting for both the whole soil profile and the lateral spatial heterogeneity. The model was calibrated to the control plot only. Measured OC inputs to soil were increased by about 40 % (+ 1.11 t C ha-1 yr-1) down to 2 m of depth in the agroforestry plot compared to the control, resulting in an additional SOC stock of 6.3 t C ha-1 down to 1 m of depth. However, most of the SOC storage occurred in the first 30 cm of soil and in the tree rows. The model was strongly validated, properly describing the measured SOC stocks and distribution with depth in agroforestry tree rows and alleys. It showed that the increased inputs of fresh biomass to soil explained the observed additional SOC storage in the agroforestry plot. Moreover, only a priming effect variant of the model was able to capture the depth distribution of SOC stocks, suggesting the priming effect as a possible mechanism driving deep SOC dynamics. This result questions the potential of soils to store large amounts of carbon, especially at depth. Deep-rooted trees modify OC inputs to soil, a process that deserves further study given its potential effects on SOC dynamics.

  9. Economic models for prevention: making a system work for patients

    PubMed Central

    2015-01-01

    The purpose of this article is to describe alternative means of providing patient centered, preventive based, services using an alternative non-profit, economic model. Hard to reach, vulnerable groups, including children, adults and elders, often have difficulties accessing traditional dental services for a number of reasons, including economic barriers. By partnering with community organizations that serve these groups, collaborative services and new opportunities for access are provided. The concept of a dental home is well accepted as a means of providing care, and, for these groups, provision of such services within community settings provides a sustainable means of delivery. Dental homes provided through community partnerships can deliver evidence based dental care, focused on a preventive model to achieve and maintain oral health. By using a non-profit model, the entire dental team is provided with incentives to deliver measurable quality improvements in care, rather than a more traditional focus on volume of activity alone. Examples are provided that demonstrate how integrated oral health services can deliver improved health outcomes with the potential to reduce total costs while improving quality. PMID:26391814

  10. Satellite provided customer premises services, a forecast of potential domestic demand through the year 2000. Volume 4: Sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Kaushal, D.; Al-Kinani, G.

    1984-01-01

    The overall purpose was to forecast the potential United States domestic telecommunications demand for satellite provided customer premises voice, data and video services through the year 2000, so that this information on service demand would be available to aid in NASA program planning. To accomplish this overall purpose, the following objectives were achieved: (1) development of a forecast of the total domestic telecommunications demand; (2) identification of that portion of the telecommunications demand suitable for transmission by satellite systems; (3) identification of that portion of the satellite market addressable by customer premises services (CPS) systems; (4) identification of that portion of the satellite market addressable by Ka-band CPS systems; and (5) postulation of a Ka-band CPS network on a nationwide and local level. The approach employed included the use of a variety of forecasting models, a parametric cost model, a market distribution model and a network optimization model. Forecasts were developed for: 1980, 1990, and 2000; voice, data and video services; terrestrial and satellite delivery modes; and C, Ku and Ka-bands.

  11. Satellite provided customer premises services, a forecast of potential domestic demand through the year 2000. Volume 4: Sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Kaushal, D.; Al-Kinani, G.

    1984-03-01

    The overall purpose was to forecast the potential United States domestic telecommunications demand for satellite provided customer premises voice, data and video services through the year 2000, so that this information on service demand would be available to aid in NASA program planning. To accomplish this overall purpose, the following objectives were achieved: (1) development of a forecast of the total domestic telecommunications demand; (2) identification of that portion of the telecommunications demand suitable for transmission by satellite systems; (3) identification of that portion of the satellite market addressable by customer premises services (CPS) systems; (4) identification of that portion of the satellite market addressable by Ka-band CPS systems; and (5) postulation of a Ka-band CPS network on a nationwide and local level. The approach employed included the use of a variety of forecasting models, a parametric cost model, a market distribution model and a network optimization model. Forecasts were developed for: 1980, 1990, and 2000; voice, data and video services; terrestrial and satellite delivery modes; and C, Ku and Ka-bands.

  12. Crustal deformation, the earthquake cycle, and models of viscoelastic flow in the asthenosphere

    NASA Technical Reports Server (NTRS)

    Cohen, S. C.; Kramer, M. J.

    1983-01-01

    The crustal deformation patterns associated with the earthquake cycle can depend strongly on the rheological properties of subcrustal material. Substantial deviations from the simple patterns for a uniformly elastic earth are expected when viscoelastic flow of subcrustal material is considered. The detailed description of the deformation pattern, and in particular the surface displacements, displacement rates, strains, and strain rates, depends on the structure and geometry of the material near the seismogenic zone. The origins of some of these differences are resolved by analyzing several different linear viscoelastic models with a common finite element computational technique. The models involve strike-slip faulting and include a thin channel asthenosphere model, a model with a varying-thickness lithosphere, and a model with a viscoelastic inclusion below the brittle slip plane. The calculations reveal that the surface deformation pattern is most sensitive to the rheology of the material that lies below the slip plane in a volume whose extent is a few times the fault depth. If this material is viscoelastic, the surface deformation pattern resembles that of an elastic layer lying over a viscoelastic half-space. When the thickness or breadth of the viscoelastic material is less than a few times the fault depth, then the surface deformation pattern is altered and geodetic measurements are potentially useful for studying the details of subsurface geometry and structure. Distinguishing among the various models is best accomplished by making geodetic measurements not only near the fault but out to distances equal to several times the fault depth. This is where the model differences are greatest; these differences will be most readily detected shortly after an earthquake when viscoelastic effects are most pronounced.

  13. Bearing Capacity Assessment on low Volume Roads

    NASA Astrophysics Data System (ADS)

    Zariņš, A.

    2015-11-01

    A large part of the Latvian road network consists of low-traffic-volume roads and, in particular, of roads without hard pavement. Unbound pavements show serious problems in the form of rutting and other deformations, which ultimately lead to poor serviceability and damage to the road structure after periods of intensive use. Traditionally, these problems have been associated with heavy goods transport, overloaded vehicles and their impact. This study was carried out to identify the specific damaging factors that cause pavement deformations, to evaluate possibilities for preventing them, and to establish the conditions under which this can be done. Tire pressure was taken as the main load factor. Two different tire pressures were used in the tests and their impacts were compared. The comparison was made using deflection measurements with a lightweight deflectometer (LWD) together with dielectric-constant measurements in the road structure using a percometer. Measurements were taken in the upper pavement structure layers at different depths during full-scale loading and under different moisture/temperature conditions. Based on the study results, advisable load intensities and load factors for heavy traffic were established according to road conditions.

  14. A numerical investigation of wave-breaking-induced turbulent coherent structure under a solitary wave

    NASA Astrophysics Data System (ADS)

    Zhou, Zheyu; Sangermano, Jacob; Hsu, Tian-Jian; Ting, Francis C. K.

    2014-10-01

    To better understand the effect of wave-breaking-induced turbulence on the bed, we report a 3-D large-eddy simulation (LES) study of a breaking solitary wave under spilling conditions. Using a turbulence-resolving approach, we study the generation and the fate of wave-breaking-induced turbulent coherent structures, commonly known as obliquely descending eddies (ODEs). Specifically, we focus on how these eddies may impinge onto the bed. The numerical model is implemented using an open-source CFD library of solvers, called OpenFOAM, where the incompressible 3-D filtered Navier-Stokes equations for the water and the air phases are solved with a finite volume scheme. The evolution of the water-air interfaces is approximated with a volume of fluid method. Using the dynamic Smagorinsky closure, the numerical model has been validated with wave flume experiments of solitary wave breaking over a 1/50 sloping beach. Simulation results show that during the initial overturning of the breaking wave, 2-D horizontal rollers are generated, accelerated, and further evolve into a couple of 3-D hairpin vortices. Some of these vortices are sufficiently intense to impinge onto the bed. These hairpin vortices possess counter-rotating and downburst features, which are key characteristics of ODEs observed by earlier laboratory studies using Particle Image Velocimetry. Model results also suggest that those ODEs that impinge onto the bed can induce strong near-bed turbulence and bottom stress. The intensity and locations of these near-bed turbulent events could not be parameterized by near-surface (or depth-integrated) turbulence except in very shallow water.

  15. [Left ventricular volume determination by first-pass radionuclide angiocardiography using a semi-geometric count-based method].

    PubMed

    Kinoshita, S; Suzuki, T; Yamashita, S; Muramatsu, T; Ide, M; Dohi, Y; Nishimura, K; Miyamae, T; Yamamoto, I

    1992-01-01

    A new radionuclide technique for the calculation of left ventricular (LV) volume by the first-pass (FP) method was developed and examined. Using a semi-geometric count-based method, the LV volume can be measured by the following equation: CV = CM/(L/d); V = (CT/CV) × d³ = (CT/CM) × L × d². (V = LV volume, CV = voxel count, CM = the maximum LV count, CT = the total LV count, L = LV depth where the maximum count was obtained, and d = pixel size.) This relation was applied to FP LV images obtained in the 30-degree right anterior oblique position. Frame-mode acquisition was performed and the LV end-diastolic maximum count and total count were obtained. The maximum LV depth was obtained as the maximum width of the LV on the FP end-diastolic image, using the assumption that the LV cross-section is circular. These values were substituted in the above equation and the LV end-diastolic volume (FP-EDV) was calculated. A routine equilibrium (EQ) study was done, and the end-diastolic maximum count and total count were obtained. The LV maximum depth was measured on the FP end-diastolic frame, as the maximum length of the LV image. Using these values, the EQ-EDV was calculated and the FP-EDV was compared to the EQ-EDV. The correlation coefficient for these two values was r = 0.96 (n = 23, p less than 0.001), and the standard error of the estimated volume was 10 ml. (ABSTRACT TRUNCATED AT 250 WORDS)
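    As a worked illustration of the count-based relation quoted above (V = (CT/CV) × d³ = (CT/CM) × L × d²), the sketch below evaluates it for a hypothetical end-diastolic frame; the variable names follow the abstract, but the numbers are placeholders, not data from the study.

```python
# A minimal sketch of the count-based LV volume equation described above:
# V = (CT / CM) * L * d^2. Example values are hypothetical, not study data.

def lv_volume(total_count, max_pixel_count, lv_depth_mm, pixel_size_mm):
    """Return LV volume in mm^3 from counts, LV depth and pixel size.

    total_count     -- CT, summed counts over the LV region of interest
    max_pixel_count -- CM, the maximum count in a single pixel
    lv_depth_mm     -- L, LV depth at the maximum-count pixel
    pixel_size_mm   -- d, pixel edge length
    """
    voxel_count = max_pixel_count / (lv_depth_mm / pixel_size_mm)  # CV = CM / (L/d)
    return (total_count / voxel_count) * pixel_size_mm ** 3        # V = (CT/CV) * d^3

if __name__ == "__main__":
    # Hypothetical end-diastolic frame: CT = 2.4e6 counts, CM = 1.5e4 counts,
    # L = 80 mm, d = 4 mm  ->  volume in mm^3 (divide by 1000 for mL).
    v_mm3 = lv_volume(2.4e6, 1.5e4, 80.0, 4.0)
    print(f"EDV ~ {v_mm3 / 1000:.0f} mL")
```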

  16. Comparative Risk Assessment of spill response options for a deepwater oil well blowout: Part 1. Oil spill modeling.

    PubMed

    French-McCay, Deborah; Crowley, Deborah; Rowe, Jill J; Bock, Michael; Robinson, Hilary; Wenning, Richard; Walker, Ann Hayward; Joeckel, John; Nedwed, Tim J; Parkerton, Thomas F

    2018-06-01

    Oil spill model simulations of a deepwater blowout in the Gulf of Mexico De Soto Canyon, assuming no intervention and various response options (i.e., subsea dispersant injection SSDI, in addition to mechanical recovery, in-situ burning, and surface dispersant application) were compared. Predicted oil fate, amount and area of surfaced oil, and exposure concentrations in the water column above potential effects thresholds were used as inputs to a Comparative Risk Assessment to identify response strategies that minimize long-term impacts. SSDI reduced human and wildlife exposure to volatile organic compounds; dispersed oil into a large water volume at depth; enhanced biodegradation; and reduced surface water, nearshore and shoreline exposure to floating oil and entrained/dissolved oil in the upper water column. Tradeoffs included increased oil exposures at depth. However, since organisms are less abundant below 200 m, results indicate that overall exposure of valued ecosystem components was minimized by use of SSDI. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Snowcover influence on backscattering from terrain

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T.; Abdelrazik, M.; Stiles, W. H.

    1984-01-01

    The effects of snowcover on the microwave backscattering from terrain in the 8-35 GHz region are examined through the analysis of experimental data and by application of a semiempirical model. The model accounts for surface backscattering contributions by the snow-air and snow-soil interfaces, and for volume backscattering contributions by the snow layer. Through comparisons of backscattering data for different terrain surfaces measured both with and without snowcover, the masking effects of snow are evaluated as a function of snow water equivalent and liquid water content. The results indicate that with dry snowcover it is not possible to discriminate between different types of ground surface (concrete, asphalt, grass, and bare ground) if the snow water equivalent is greater than about 20 cm (or a depth greater than 60 cm for a snow density of 0.3 g/cu cm). For the same density, however, if the snow is wet, a depth of 10 cm is sufficient to mask the underlying surface.

  18. Modeling and Analysis of the Static Characteristics and Dynamic Responses of Herringbone-grooved Thrust Bearings

    NASA Astrophysics Data System (ADS)

    Yu, Yunluo; Pu, Guang; Jiang, Kyle

    2017-12-01

    This paper describes a theoretical investigation of the static and dynamic characteristics of herringbone-grooved air thrust bearings. First, the Finite Difference Method (FDM) and the Finite Volume Method (FVM) are used in combination to solve the non-linear Reynolds equation and to find the pressure distribution of the film and the total load capacity of the bearing. The influence of design parameters on air film gap characteristics, including the air film thickness, groove depth and rotating speed, is analyzed based on the FDM model. The simulation results show that hydrostatic thrust bearings achieve a better load capacity with less air consumption than herringbone-grooved thrust bearings at low compressibility numbers; herringbone-grooved thrust bearings achieve a higher load capacity, but with more air consumption, than hydrostatic thrust bearings at high compressibility numbers; and herringbone-grooved thrust bearings lose stability at high rotating speeds, with stability increasing with groove depth.
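    The record above centers on solving the Reynolds equation numerically. As a hedged illustration of the finite-difference part of such a workflow, the sketch below solves a far simpler problem: the 1-D incompressible Reynolds equation for a stepped slider with ambient pressure at both ends. The compressible, grooved, 2-D air-bearing model of the paper is substantially more involved, and all parameters here are hypothetical.

```python
import numpy as np

# A minimal sketch, assuming a simplified 1-D *incompressible* Reynolds
# equation  d/dx( h^3 dp/dx ) = 6*mu*U*dh/dx  for a stepped slider, solved
# with central finite differences. It only illustrates the FDM step of the
# workflow, not the paper's grooved air-bearing model.

def reynolds_1d(h, dx, mu, U):
    """Return gauge pressure p (Pa) on the same grid as film thickness h (m)."""
    n = len(h)
    A = np.zeros((n, n))
    b = np.zeros(n)
    h_face = 0.5 * (h[:-1] + h[1:])           # film thickness at cell faces
    for i in range(1, n - 1):
        kw, ke = h_face[i - 1] ** 3, h_face[i] ** 3
        A[i, i - 1], A[i, i], A[i, i + 1] = kw, -(kw + ke), ke
        b[i] = 6.0 * mu * U * (h[i + 1] - h[i - 1]) * 0.5 * dx
    A[0, 0] = A[-1, -1] = 1.0                 # ambient (gauge 0) at both ends
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    n, L = 201, 20e-3                         # 20 mm long pad, hypothetical
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    h = np.where(x < L / 2, 30e-6, 15e-6)     # step film: 30 um -> 15 um
    p = reynolds_1d(h, dx, mu=1.8e-5, U=10.0) # air viscosity, 10 m/s sliding
    print(f"peak gauge pressure ~ {p.max():.0f} Pa, load ~ {np.trapz(p, x):.2f} N/m")
```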

  19. SToRM: A numerical model for environmental surface flows

    USGS Publications Warehouse

    Simoes, Francisco J.

    2009-01-01

    SToRM (System for Transport and River Modeling) is a numerical model developed to simulate free surface flows in complex environmental domains. It is based on the depth-averaged St. Venant equations, which are discretized using unstructured upwind finite volume methods, and contains both steady and unsteady solution techniques. This article provides a brief description of the numerical approach selected to discretize the governing equations in space and time, including important aspects of solving natural environmental flows, such as the wetting and drying algorithm. The presentation is illustrated with several application examples, covering both laboratory and natural river flow cases, which show the model’s ability to solve complex flow phenomena.
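    To make the finite-volume idea behind models like SToRM concrete, the sketch below advances the 1-D depth-averaged shallow-water (St. Venant) equations for a dam-break problem with an explicit conservative update and a Lax-Friedrichs flux. This is only a toy illustration; SToRM itself uses 2-D unstructured upwind finite volumes with wetting and drying, which are not reproduced here.

```python
import numpy as np

# A minimal sketch of a conservative finite-volume update for the 1-D
# shallow-water equations (variables h and hu), using a Lax-Friedrichs
# numerical flux and fixed boundary cells. All values are illustrative.

g = 9.81

def flux(h, hu):
    """Physical flux of the shallow-water equations."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def step(h, hu, dx, dt):
    U = np.vstack([h, hu])
    F = np.array([flux(h[i], hu[i]) for i in range(len(h))]).T
    a = np.max(np.abs(hu / h) + np.sqrt(g * h))       # fastest wave speed
    # Lax-Friedrichs numerical flux at interior cell faces
    Fface = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
    Unew = U.copy()
    Unew[:, 1:-1] -= dt / dx * (Fface[:, 1:] - Fface[:, :-1])
    return Unew[0], Unew[1]

if __name__ == "__main__":
    n, dx = 400, 0.05
    h = np.where(np.arange(n) < n // 2, 2.0, 1.0)     # dam-break initial depth (m)
    hu = np.zeros(n)
    t = 0.0
    while t < 1.0:
        a = np.max(np.abs(hu / h) + np.sqrt(g * h))
        dt = 0.4 * dx / a                             # CFL-limited time step
        h, hu = step(h, hu, dx, dt)
        t += dt
    print(f"depth range after {t:.2f} s: {h.min():.3f}-{h.max():.3f} m")
```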

  20. Method for inverting reflection trace data from 3-D and 4-D seismic surveys and identifying subsurface fluid and pathways in and among hydrocarbon reservoirs based on impedance models

    DOEpatents

    He, W.; Anderson, R.N.

    1998-08-25

    A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management. 20 figs.

  1. Method for inverting reflection trace data from 3-D and 4-D seismic surveys and identifying subsurface fluid and pathways in and among hydrocarbon reservoirs based on impedance models

    DOEpatents

    He, Wei; Anderson, Roger N.

    1998-01-01

    A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management.

  2. Cultural Resource Reconnaissance of U.S. Army Corps of Engineers Land Alongside Lake Sakakawea in Dunn County, North Dakota. Volume 1. Main Report

    DTIC Science & Technology

    1987-11-01

    and depth: Depression is 200cm in depth. Vegetation: Pasture/short grass. Depression full of chokecherries and trees. Ground surface visibility...Strata and depth: Unknown - likely 0-10cm. Vegetation: Dwarf juniper and chokecherry (Locus 1) and bunch grass and ball cactus (Locus 2). Ground surface...position: On the NE edge of a long, narrow ridge/erosional remnant or bluff. Site size: 6m2. Strata and depth: Unknown. Vegetation: Buckbrush, chokecherry

  3. Lighting design for globally illuminated volume rendering.

    PubMed

    Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    With the evolution of graphics hardware, high-quality global illumination has become available for real-time volume rendering. Compared to local illumination, global illumination can produce realistic shading effects that are closer to real-world scenes, and has proven useful for enhancing volume data visualization to enable better depth and shape perception. However, setting up optimal lighting can be a nontrivial task for average users. Previous lighting design work for volume visualization did not consider global light transport. In this paper, we present a lighting design method for volume visualization employing global illumination. The resulting system takes into account view and transfer-function dependent content of the volume data to automatically generate an optimized three-point lighting environment. Our method fully exploits the back light, which is not used by previous volume visualization systems. By also including global shadow and multiple scattering, our lighting system can effectively enhance the depth and shape perception of volumetric features of interest. In addition, we propose an automatic tone mapping operator which recovers visual details from overexposed areas while maintaining sufficient contrast in the dark areas. We show that our method is effective for visualizing volume datasets with complex structures. The structural information is more clearly and correctly presented under the automatically generated light sources.

  4. An information model for managing multi-dimensional gridded data in a GIS

    NASA Astrophysics Data System (ADS)

    Xu, H.; Abdul-Kadar, F.; Gao, P.

    2016-04-01

    Earth observation agencies like NASA and NOAA produce huge volumes of historical, near real-time, and forecasting data representing terrestrial, atmospheric, and oceanic phenomena. The data drives climatological and meteorological studies, and underpins operations ranging from weather pattern prediction and forest fire monitoring to global vegetation analysis. These gridded data sets are distributed mostly as files in HDF, GRIB, or netCDF format and quantify variables like precipitation, soil moisture, or sea surface temperature, along one or more dimensions like time and depth. Although the data cube is a well-studied model for storing and analyzing multi-dimensional data, the GIS community remains in need of a solution that simplifies interactions with the data, and elegantly fits with existing database schemas and dissemination protocols. This paper presents an information model that enables Geographic Information Systems (GIS) to efficiently catalog very large heterogeneous collections of geospatially-referenced multi-dimensional rasters—towards providing unified access to the resulting multivariate hypercubes. We show how the implementation of the model encapsulates format-specific variations and provides unified access to data along any dimension. We discuss how this framework lends itself to familiar GIS concepts like image mosaics, vector field visualization, layer animation, distributed data access via web services, and scientific computing. Global data sources like MODIS from USGS and HYCOM from NOAA illustrate how one would employ this framework for cataloging, querying, and intuitively visualizing such hypercubes. ArcGIS—an established platform for processing, analyzing, and visualizing geospatial data—serves to demonstrate how this integration brings the full power of GIS to the scientific community.

  5. An InSAR-based survey of volcanic deformation in the central Andes

    NASA Astrophysics Data System (ADS)

    Pritchard, M. E.; Simons, M.

    2004-02-01

    We extend an earlier interferometric synthetic aperture radar (InSAR) survey covering about 900 remote volcanoes of the central Andes (14°-27°S) between the years 1992 and 2002. Our survey reveals broad (tens of kilometers), roughly axisymmetric deformation at 4 volcanic centers: two stratovolcanoes are inflating (Uturuncu, Bolivia, and Hualca Hualca, Peru); another source of inflation on the border between Chile and Argentina is not obviously associated with a volcanic edifice (here called Lazufre); and a caldera (Cerro Blanco, also called Robledo) in northwest Argentina is subsiding. We explore the range of source depths and volumes allowed by our observations, using spherical, ellipsoidal and crack-like source geometries. We further examine the effects of local topography upon the deformation field and invert for a spherical point source in both elastic half-space and layered crustal models. We use a global search algorithm, with gradient search methods used to further constrain best-fitting models. Inferred source depths are model-dependent, with differences in the assumed source geometry generating a larger range of accepted depths than variations in elastic structure. Source depths relative to sea level are: 8-18 km at Hualca Hualca; 12-25 km for Uturuncu; 5-13 km for Lazufre, and 5-10 km at Cerro Blanco. Deformation at all four volcanoes seems to be time-dependent, and only Uturuncu and Cerro Blanco were deforming during the entire time period of observation. Inflation at Hualca Hualca stopped in 1997, perhaps related to a large eruption of nearby Sabancaya volcano in May 1997, although there is no obvious relation between the rate of deformation and the eruptions of Sabancaya. We do not observe any deformation associated with eruptions of Lascar, Chile, at 16 other volcanoes that had recent small eruptions or fumarolic activity, or associated with a short-lived thermal anomaly at Chiliques volcano. We posit a hydrothermal system at Cerro Blanco to explain the rate of subsidence there. For the last decade, we calculate that the ratio of the volume of magma intruded to that extruded is between 1 and 10, and that the combined rate of intrusion and extrusion is within an order of magnitude of the inferred geologic rate.
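    For readers unfamiliar with the spherical point-source model referred to above, the sketch below evaluates the classic Mogi solution for surface displacement above an inflating point source in a homogeneous elastic half-space. The depth, volume change and Poisson's ratio are illustrative placeholders, not the parameters inferred in the survey.

```python
import numpy as np

# A minimal sketch of the Mogi point-source surface displacement:
# u_r = (1-nu)/pi * dV * r / (r^2 + d^2)^(3/2),
# u_z = (1-nu)/pi * dV * d / (r^2 + d^2)^(3/2).
# Source depth and volume change below are hypothetical.

def mogi_surface(r, depth, dV, nu=0.25):
    """Radial and vertical surface displacement (m) at horizontal distance r (m)
    from a point source at `depth` (m) with volume change dV (m^3)."""
    R3 = (r ** 2 + depth ** 2) ** 1.5
    coeff = (1.0 - nu) / np.pi * dV
    return coeff * r / R3, coeff * depth / R3      # (u_radial, u_vertical)

if __name__ == "__main__":
    r = np.linspace(0.0, 60e3, 301)                # 0-60 km from the source
    ur, uz = mogi_surface(r, depth=17e3, dV=1e7)   # 17 km depth, 0.01 km^3 inflation
    print(f"peak uplift ~ {uz.max() * 1000:.1f} mm at r = 0")
```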

  6. The effect of topography on pyroclastic flow mobility

    NASA Astrophysics Data System (ADS)

    Ogburn, S. E.; Calder, E. S.

    2010-12-01

    Pyroclastic flows are among the most destructive volcanic phenomena. Hazard mitigation depends upon accurate forecasting of possible flow paths, often using computational models. Two main metrics have been proposed to describe the mobility of pyroclastic flows. The Heim coefficient, height-dropped/run-out (H/L), exhibits an inverse relationship with flow volume. This coefficient corresponds to the coefficient of friction and informs computational models that use Coulomb friction laws. Another mobility measure states that with constant shear stress, planimetric area is proportional to the flow volume raised to the 2/3 power (A∝V^(2/3)). This relationship is incorporated in models using constant shear stress instead of constant friction, and used directly by some empirical models. Pyroclastic flows from Soufriere Hills Volcano, Montserrat; Unzen, Japan; Colima, Mexico; and Augustine, Alaska are well described by these metrics. However, flows in specific valleys exhibit differences in mobility. This study investigates the effect of topography on pyroclastic flow mobility, as measured by the above mentioned mobility metrics. Valley width, depth, and cross-sectional area all influence flow mobility. Investigating the appropriateness of these mobility measures, as well as the computational models they inform, indicates certain circumstances under which each model performs optimally. Knowing which conditions call for which models allows for better model selection or model weighting, and therefore, more realistic hazard predictions.
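    The two mobility metrics discussed above reduce to simple formulas, sketched below: the Heim coefficient H/L and the area-volume scaling A = c·V^(2/3). The constant c and the example numbers are hypothetical placeholders, not values calibrated to the flows named in the abstract.

```python
# A minimal sketch of the two mobility metrics: the Heim coefficient H/L and
# the constant-shear-stress scaling A = c * V^(2/3). The coefficient c and
# the example values are placeholders, not fitted to any of the volcanoes above.

def heim_coefficient(drop_height_m, runout_m):
    """H/L, an apparent friction coefficient: lower values mean higher mobility."""
    return drop_height_m / runout_m

def planimetric_area(volume_m3, c=20.0):
    """Inundated area (m^2) from the area-volume scaling A = c * V^(2/3)."""
    return c * volume_m3 ** (2.0 / 3.0)

if __name__ == "__main__":
    print(f"H/L = {heim_coefficient(1000.0, 6000.0):.2f}")           # ~0.17
    print(f"A(5e6 m^3) = {planimetric_area(5e6) / 1e6:.1f} km^2")    # scaling example
```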

  7. Nanoclay-Reinforced Glass-Ionomer Cements: In Vitro Wear Evaluation and Comparison by Two Wear-Test Methods

    PubMed Central

    Fareed, Muhammad A.; Stamboulis, Artemis

    2017-01-01

    Glass ionomer cement (GIC) represents a major transformation in restorative dentistry. Wear of dental restoratives is a common phenomenon and the determination of the wear resistance of direct-restorative materials is a challenging task. The aim of this paper was to evaluate the wear resistance of a novel glass ionomer cement by two wear-test methods and to compare the two methods. The wear resistance of a conventional glass ionomer cement (HiFi, Advanced Health Care, Kent, UK) and cements modified by including various percentages of nanoclays (1, 2 and 4 wt %) was measured by a reciprocating wear test (ball-on-flat) and Oregon Health and Sciences University's (OHSU) wear simulator. The OHSU wear simulation subjected the cement specimens to three wear mechanisms, namely abrasion, three-body abrasion and attrition, using a steatite antagonist. The abrasion wear resulted in material loss from the GIC specimen as the steatite antagonist forced through the exposed glass particles when it travelled along the sliding path. The hardness of specimens was measured by the Vickers hardness test. The results of the reciprocating wear test showed that HiFi-1 had the lowest wear volume, 4.90 (0.60) mm3 (p < 0.05), but there was no significant difference (p > 0.05) in wear volume in comparison to HiFi, HiFi-2 and HiFi-4. Similarly, the results of the OHSU wear simulator showed that the total wear volume of HiFi-4, 1.49 (0.24), was higher than that of HiFi-1 and HiFi-2. However, no significant difference (p > 0.05) was found in the OHSU total wear volume in GICs after nanoclay incorporation. The Vickers hardness (HV) of the nanoclay-reinforced cements was measured between 62 and 89 HV. Nanoclay addition at a higher concentration (4%) resulted in higher wear volume and wear depth. The total wear volumes were less dependent upon abrasion volume and attrition volume. The total wear depths were strongly influenced by attrition depth and to some extent by abrasion depth. The addition of nanoclay in higher wt % to HiFi did not result in significant improvement in wear resistance and hardness. Nonetheless, wear is a very complex phenomenon because it is sensitive to a wide range of factors that do not necessarily act in the same way when compared using different parameters. PMID:29563434

  8. Grid Enabled Geospatial Catalogue Web Service

    NASA Technical Reports Server (NTRS)

    Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush

    2004-01-01

    The Geospatial Catalogue Web Service is a vital service for sharing and interoperating volumes of distributed heterogeneous geospatial resources, such as data, services, applications, and their replicas over the web. Based on Grid technology and the Open Geospatial Consortium (OGC) Catalogue Service - Web Information Model, this paper proposes a new information model for the Geospatial Catalogue Web Service, named GCWS, which can securely provide Grid-based publishing, managing and querying of geospatial data and services, and transparent access to replica data and related services under the Grid environment. This information model integrates the information model of the Grid Replica Location Service (RLS)/Monitoring & Discovery Service (MDS) with the information model of the OGC Catalogue Service (CSW), and draws on geospatial data metadata standards from ISO 19115, FGDC and the NASA EOS Core System, and service metadata standards from ISO 19119, to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, and in particular query on-demand data in the virtual community and retrieve it through data-related services that provide functions such as subsetting, reformatting and reprojection. This work facilitates geospatial resource sharing and interoperation under the Grid environment, making geospatial resources Grid-enabled and Grid technologies geospatially enabled. It also allows researchers to focus on science rather than on issues of computing capacity, data location, processing and management. GCWS is also a key component for workflow-based virtual geospatial data production.

  9. Impact of needle insertion depth on the removal of hard-tissue debris.

    PubMed

    Perez, R; Neves, A A; Belladonna, F G; Silva, E J N L; Souza, E M; Fidel, S; Versiani, M A; Lima, I; Carvalho, C; De-Deus, G

    2017-06-01

    To evaluate the effect of depth of insertion of an irrigation needle tip on the removal of hard-tissue debris using micro-computed tomographic (micro-CT) imaging. Twenty isthmus-containing mesial roots of mandibular molars were anatomically matched based on similar morphological dimensions using micro-CT evaluation and assigned to two groups (n = 10), according to the depth of the irrigation needle tip during biomechanical preparation: 1 or 5 mm short of the working length (WL). The preparation was performed with Reciproc R25 file (tip size 25, .08 taper) and 5.25% NaOCl as irrigant. The final rinse was 17% EDTA followed by bidistilled water. Then, specimens were scanned again, and the matched images of the canals, before and after preparation, were examined to quantify the amount of hard-tissue debris, expressed as the percentage volume of the initial root canal volume. Data were compared statistically using the Mann-Whitney U-test. None of the tested needle insertion depths yielded root canals completely free from hard-tissue debris. The insertion depth exerted a significant influence on debris removal, with a significant reduction in the percentage volume of hard-tissue debris when the needle was inserted 1 mm short of the WL (P < 0.05). The insertion depth of irrigation needles significantly influenced the removal of hard-tissue debris. A needle tip positioned 1 mm short of the WL resulted in percentage levels of hard-tissue debris removal almost three times higher than when positioned 5 mm from the WL. © 2016 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  10. Magmatic dyking and recharge in the Asal Rift, Republic of Djibouti

    NASA Astrophysics Data System (ADS)

    Peltzer, G.; Harrington, J.; Doubre, C.; Tomic, J.

    2012-12-01

    The Asal Rift, Republic of Djibouti, was the locus of a major magmatic event in 1978 and seems to have maintained sustained activity in the three decades following the event. We compare the dyking event of 1978 with the magmatic activity occurring in the rift during the 1997-2008 time period. We use historical air photos and satellite images to quantify the horizontal opening on the major faults activated in 1978. These observations are combined with ground-based geodetic data acquired between 1973 and 1979 across the rift to constrain a kinematic model of the 1978 rifting event, including bordering faults and mid-crustal dykes under the Asal Rift and the Ghoubbet Gulf. The model indicates that extension was concentrated between the surface and a depth of 3 km in the crust, resulting in the opening of faults, dykes and fissures between the two main faults, E and gamma, and that the structure located under the Asal Rift, below 3 km, deflated. These results suggest that, during the 1978 event, magmatic fluids transferred from a mid-crustal reservoir to the shallow structures, injecting dykes and filling faults and fissures, and reaching the surface in the Ardoukoba fissural eruption. Surface deformation observed by InSAR during the 1997-2008 decade reveals a slow, yet sustained inflation and extension across the Asal Rift combined with continuous subsidence of the rift inner floor. Modeling shows that these observations cannot be explained by visco-elastic relaxation, a process which mostly vanishes 20 to 30 years after the 1978 event. However, the InSAR observations over this decade are well explained by a kinematic model in which an inflating body is present at mid-crustal depth, approximately under the Fieale caldera, and shallow faults accommodate both horizontal opening and down-dip slip. The total geometric moment rate, or inflation rate, due to the opening of the mid-crustal structure and the deeper parts of the opening faults is 3 × 10⁶ m³/yr. Such a volume change per year corresponds to 1-2% of the total volume of magma estimated to have been mobilized during the 1978 seismo-magmatic event. The comparison of the 1978 dyking and post-dyking models of the rift suggests that the source of the injected magma during the 1978 event lies at mid-crustal depth under the Fieale caldera and appears to be recharging at a sustained rate more than 20 years after the event. Whether this rate is a transient rate or a long-term rate will determine the time of the next magma injection in the shallow crust. However, at the current rate, the 1978 total volume would be replenished in 50-100 years.
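    The replenishment estimate quoted at the end of the abstract follows from simple arithmetic, checked below under the stated assumption that the present inflation rate of about 3 × 10⁶ m³/yr represents 1-2% of the volume mobilized in 1978.

```python
# A quick check of the recharge arithmetic stated above: if ~3e6 m^3/yr is
# 1-2% of the 1978 mobilized volume, the implied replenishment time is that
# volume divided by the rate. Figures are taken from the abstract itself.

rate = 3e6                                   # m^3 per year
for fraction in (0.01, 0.02):                # 1% and 2% of the 1978 volume
    total_1978 = rate / fraction             # implied 1978 volume, m^3
    print(f"{fraction:.0%}: V_1978 ~ {total_1978:.1e} m^3, "
          f"replenished in ~ {total_1978 / rate:.0f} yr")
```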

  11. How do organisational characteristics influence teamwork and service delivery in lung cancer diagnostic assessment programmes? A mixed-methods study

    PubMed Central

    Honein-AbouHaidar, Gladys N; Stuart-McEwan, Terri; Waddell, Tom; Salvarrey, Alexandra; Smylie, Jennifer; Dobrow, Mark J; Brouwers, Melissa C; Gagliardi, Anna R

    2017-01-01

    Objectives Diagnostic assessment programmes (DAPs) can reduce wait times for cancer diagnosis, but optimal DAP design is unknown. This study explored how organisational characteristics influenced multidisciplinary teamwork and diagnostic service delivery in lung cancer DAPs. Design A mixed-methods approach integrated data from descriptive qualitative interviews and medical record abstraction at 4 lung cancer DAPs. Findings were analysed with the Integrated Team Effectiveness Model. Setting 4 DAPs at 2 teaching and 2 community hospitals in Canada. Participants 22 staff were interviewed about organisational characteristics, target service benchmarks, and teamwork processes, determinants and outcomes; 314 medical records were reviewed for actual service benchmarks. Results Formal, informal and asynchronous team processes enabled service delivery and yielded many perceived benefits at the patient, staff and service levels. However, several DAP characteristics challenged teamwork and service delivery: referral volume/workload, time since launch, days per week of operation, rural–remote population, number and type of full-time/part-time human resources, staff colocation, information systems. As a result, all sites failed to meet target benchmarks (from referral to consultation median 4.0 visits, median wait time 35.0 days). Recommendations included improved information systems, more staff in all specialties, staff colocation and expanded roles for patient navigators. Findings were captured in a conceptual framework of lung cancer DAP teamwork determinants and outcomes. Conclusions This study identified several DAP characteristics that could be improved to facilitate teamwork and enhance service delivery, thereby contributing to knowledge of organisational determinants of teamwork and associated outcomes. Findings can be used to update existing DAP guidelines, and by managers to plan or evaluate lung cancer DAPs. Ongoing research is needed to identify ideal roles for navigators, and staffing models tailored to case volumes. PMID:28235969

  12. Dynamic soft tissue deformation estimation based on energy analysis

    NASA Astrophysics Data System (ADS)

    Gao, Dedong; Lei, Yong; Yao, Bin

    2016-10-01

    Needle placement accuracy of millimeters is required in many needle-based surgeries. Tissue deformation, especially that occurring on the surface of organ tissue, affects the needle-targeting accuracy of both manual and robotic needle insertions. It is therefore necessary to understand the mechanism of tissue deformation during needle insertion into soft tissue. In this paper, soft tissue surface deformation is investigated on the basis of continuum mechanics, where a geometric model is presented to quantitatively approximate the volume of tissue deformation. An energy-based method is applied to the dynamic process of needle insertion into soft tissue, and the volume of a cone is used to quantitatively approximate the deformation on the surface of the soft tissue. The external work is converted into potential, kinetic, dissipated, and strain energies during the dynamic rigid needle-tissue interaction. The needle insertion experimental setup, consisting of a linear actuator, force sensor, needle, tissue container, and a light source, is constructed, and an image-based method for measuring the depth and radius of the soft tissue surface deformations is introduced to obtain the experimental data. The relationship between the changed volume of tissue deformation and the insertion parameters is established based on the law of conservation of energy, with the volume of tissue deformation obtained from image-based measurements. The experiments are performed on phantom specimens, and an energy-based analytical fitted model is presented to estimate the volume of tissue deformation. The experimental results show that the energy-based analytical fitted model can predict the volume of soft tissue deformation, and the root mean squared errors between the fitted model and the experimental data are 0.61 and 0.25 at velocities of 2.50 mm/s and 5.00 mm/s, respectively. The estimated parameters of the soft tissue surface deformations are shown to be useful for compensating the needle-targeting error in the rigid needle insertion procedure, especially for percutaneous needle insertion into organs.
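    The geometric step described above, approximating the surface depression as a cone and computing its volume from the image-measured depth and radius, is sketched below. The numbers are placeholders, and the paper's energy-based fit relating this volume to the insertion parameters is not reproduced.

```python
import math

# A minimal sketch of the cone-volume approximation of the deformed tissue
# surface: V = (1/3) * pi * r^2 * h, with depth and radius taken from image
# measurements. Example measurements are hypothetical.

def cone_volume_mm3(depth_mm, radius_mm):
    """Volume of a cone-shaped surface depression."""
    return math.pi * radius_mm ** 2 * depth_mm / 3.0

if __name__ == "__main__":
    # Hypothetical image measurements at two insertion velocities.
    for v, depth, radius in [(2.5, 3.2, 6.0), (5.0, 2.7, 5.5)]:
        print(f"v = {v:.2f} mm/s: deformation volume ~ {cone_volume_mm3(depth, radius):.1f} mm^3")
```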

  13. Factors affecting Japanese retirees' healthcare service utilisation in Malaysia: a qualitative study

    PubMed Central

    Kohno, Ayako; Nik Farid, Nik Daliana; Musa, Ghazali; Abdul Aziz, Norlaili; Nakayama, Takeo; Dahlui, Maznah

    2016-01-01

    Objective While living overseas in another culture, retirees need to adapt to a new environment but often this causes difficulties, particularly among those elderly who require healthcare services. This study examines factors affecting healthcare service utilisation among Japanese retirees in Malaysia. Design We conducted 6 focus group discussions with Japanese retirees and interviewed 8 relevant medical services providers in-depth. Guided by the Andersen Healthcare Utilisation Model, we managed and analysed the data, using QSR NVivo 10 software and the directed content analysis method. Setting We interviewed participants at Japan Clubs and their offices. Participants 30 Japanese retirees who live in Kuala Lumpur and Ipoh, and 8 medical services providers. Results We identified health beliefs, medical symptoms and health insurance as the 3 most important themes, respectively, representing the 3 dimensions within the Andersen Healthcare Utilisation Model. Additionally, language barriers, voluntary health repatriation to Japan and psychological support were unique themes that influence healthcare service utilisation among Japanese retirees. Conclusions The healthcare service utilisation among Japanese retirees in Malaysia could be partially explained by the Andersen Healthcare Utilisation Model, together with some factors that were unique findings to this study. Healthcare service utilisation among Japanese retirees in Malaysia could be improved by alleviating negative health beliefs through awareness programmes for Japanese retirees about the healthcare systems and cultural aspects of medical care in Malaysia. PMID:27006344

  14. Impact of Resource-Based Practice Expenses on the Medicare Physician Volume

    PubMed Central

    Maxwell, Stephanie; Zuckerman, Stephen

    2007-01-01

    In 1999, Medicare implemented a resource-based relative value unit (RVU) system for physician practice expense payments, and increased the number of services for which practice expense payments differ by site. Using 1998-2004 data, we examined RVU growth and decomposed that growth into resource-based RVUs, site of service, and service quantity and mix. We found that the number of services with site-of-service differentials doubled, and that shifts in site of service and the introduction of resource-based practice expenses (RBPE) were important sources of change in practice expense RVU volume. Service quantity and mix remained the largest source of growth in total RVU volume. PMID:18435224

  15. Forecasting the Future Food Service World of Work. Final Report. Volume III. Technical Papers on the Future of the Food Service Industry. Service Management Reports.

    ERIC Educational Resources Information Center

    Powers, Thomas F., Ed.; Swinton, John R., Ed.

    This third and final volume of a study on the future of the food service industry contains the technical papers on which the information in the previous two volumes was based. The papers were written by various members of the Pennsylvania State University departments of economics, food science, nutrition, social psychology, and engineering and by…

  16. Assessing nest-building behavior of mice using a 3D depth camera.

    PubMed

    Okayama, Tsuyoshi; Goto, Tatsuhiko; Toyoda, Atsushi

    2015-08-15

    We developed a novel method to evaluate the nest-building behavior of mice using an inexpensive depth camera. The depth camera clearly captured nest-building behavior. Using three-dimensional information from the depth camera, we obtained objective features for assessing nest-building behavior, including "volume," "radius," and "mean height". The "volume" represents the change in volume of the nesting material, a pressed cotton square that a mouse shreds and untangles in order to build its nest. During the nest-building process, the total volume of cotton fragments is increased. The "radius" refers to the radius of the circle enclosing the fragments of cotton. It describes the extent of nesting material dispersion. The "radius" averaged approximately 60 mm when a nest was built. The "mean height" represents the change in the mean height of objects. If the nest walls were high, the "mean height" was also high. These features provided us with useful information for assessment of nest-building behavior, similar to conventional methods for the assessment of nest building. However, using the novel method, we found that JF1 mice built nests with higher walls than B6 mice, and B6 mice built nests faster than JF1 mice. Thus, our novel method can evaluate the differences in nest-building behavior that cannot be detected or quantified by conventional methods. In future studies, we will evaluate nest-building behaviors of genetically modified, as well as several inbred, strains of mice, with several nesting materials. Copyright © 2015 Elsevier B.V. All rights reserved.
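    As a rough illustration of how the three features described above could be computed from a single downward-looking depth frame, the sketch below assumes a previously captured background (empty-cage) depth image and a simple height threshold; the threshold and pixel scale are hypothetical, not the published protocol.

```python
import numpy as np

# A minimal sketch, assuming a fixed overhead depth camera and a stored
# background (empty-cage) depth frame. Objects reduce the measured depth,
# so height = background - current. Threshold and pixel size are placeholders.

def nest_features(depth_bg_mm, depth_mm, px_area_mm2, min_height_mm=5.0):
    """Return (volume_mm3, radius_mm, mean_height_mm) of material above the floor."""
    height = depth_bg_mm - depth_mm              # per-pixel object height (mm)
    mask = height > min_height_mm                # pixels occupied by nesting material
    volume = float(np.sum(height[mask]) * px_area_mm2)
    mean_height = float(height[mask].mean()) if mask.any() else 0.0
    ys, xs = np.nonzero(mask)
    if xs.size:
        cx, cy = xs.mean(), ys.mean()
        radius = float(np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2).max()) * np.sqrt(px_area_mm2)
    else:
        radius = 0.0
    return volume, radius, mean_height

if __name__ == "__main__":
    bg = np.full((240, 320), 400.0)              # floor 400 mm below the camera
    frame = bg.copy()
    frame[100:140, 150:200] -= 30.0              # a 30 mm high patch of cotton
    print(nest_features(bg, frame, px_area_mm2=1.0))
```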

  17. Salinity of deep groundwater in California: Water quantity, quality, and protection.

    PubMed

    Kang, Mary; Jackson, Robert B

    2016-07-12

    Deep groundwater aquifers are poorly characterized but could yield important sources of water in California and elsewhere. Deep aquifers have been developed for oil and gas extraction, and this activity has created both valuable data and risks to groundwater quality. Assessing groundwater quantity and quality requires baseline data and a monitoring framework for evaluating impacts. We analyze 938 chemical, geological, and depth data points from 360 oil/gas fields across eight counties in California and depth data from 34,392 oil and gas wells. By expanding previous groundwater volume estimates from depths of 305 m to 3,000 m in California's Central Valley, an important agricultural region with growing groundwater demands, fresh [<3,000 ppm total dissolved solids (TDS)] groundwater volume is almost tripled to 2,700 km(3), most of it found shallower than 1,000 m. The 3,000-m depth zone also provides 3,900 km(3) of fresh and saline water, not previously estimated, that can be categorized as underground sources of drinking water (USDWs; <10,000 ppm TDS). Up to 19% and 35% of oil/gas activities have occurred directly in freshwater zones and USDWs, respectively, in the eight counties. Deeper activities, such as wastewater injection, may also pose a potential threat to groundwater, especially USDWs. Our findings indicate that California's Central Valley alone has close to three times the volume of fresh groundwater and four times the volume of USDWs than previous estimates suggest. Therefore, efforts to monitor and protect deeper, saline groundwater resources are needed in California and beyond.

  18. Salinity of deep groundwater in California: Water quantity, quality, and protection

    PubMed Central

    Kang, Mary; Jackson, Robert B.

    2016-01-01

    Deep groundwater aquifers are poorly characterized but could yield important sources of water in California and elsewhere. Deep aquifers have been developed for oil and gas extraction, and this activity has created both valuable data and risks to groundwater quality. Assessing groundwater quantity and quality requires baseline data and a monitoring framework for evaluating impacts. We analyze 938 chemical, geological, and depth data points from 360 oil/gas fields across eight counties in California and depth data from 34,392 oil and gas wells. By expanding previous groundwater volume estimates from depths of 305 m to 3,000 m in California’s Central Valley, an important agricultural region with growing groundwater demands, fresh [<3,000 ppm total dissolved solids (TDS)] groundwater volume is almost tripled to 2,700 km3, most of it found shallower than 1,000 m. The 3,000-m depth zone also provides 3,900 km3 of fresh and saline water, not previously estimated, that can be categorized as underground sources of drinking water (USDWs; <10,000 ppm TDS). Up to 19% and 35% of oil/gas activities have occurred directly in freshwater zones and USDWs, respectively, in the eight counties. Deeper activities, such as wastewater injection, may also pose a potential threat to groundwater, especially USDWs. Our findings indicate that California’s Central Valley alone has close to three times the volume of fresh groundwater and four times the volume of USDWs than previous estimates suggest. Therefore, efforts to monitor and protect deeper, saline groundwater resources are needed in California and beyond. PMID:27354527

  19. Retrieving the axial position of fluorescent light emitting spots by shearing interferometry

    NASA Astrophysics Data System (ADS)

    Schindler, Johannes; Schau, Philipp; Brodhag, Nicole; Frenner, Karsten; Osten, Wolfgang

    2016-12-01

    A method for the depth-resolved detection of fluorescent radiation based on imaging of an interference pattern of two intersecting beams and shearing interferometry is presented. The illumination setup provides local addressing of the fluorescence excitation and a coarse confinement of the excitation volume in the axial and lateral directions. The reconstruction of the depth relies on the measurement of the phase of the fluorescent wave fronts. Their curvature is directly related to the distance of a source to the focus of the imaging system. Access to the phase information is enabled by a lateral shearing interferometer based on a Michelson setup. This allows the evaluation of interference signals even for spatially and temporally incoherent light such as that emitted by fluorophores. An analytical signal model is presented and the relations for obtaining the depth information are derived. Measurements of reference samples with different concentrations and spatial distributions of fluorophores and scatterers prove the experimental feasibility of the method. In a setup optimized for flexibility and operating in the visible range, sufficiently large interference signals are recorded for scatterers placed at depths on the order of one hundred micrometers below the surface in a material with scattering properties comparable to dental enamel.

  20. Everglades Depth Estimation Network (EDEN) Applications: Tools to View, Extract, Plot, and Manipulate EDEN Data

    USGS Publications Warehouse

    Telis, Pamela A.; Henkel, Heather

    2009-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated system of real-time water-level monitoring, ground-elevation data, and water-surface elevation modeling to provide scientists and water managers with current on-line water-depth information for the entire freshwater part of the greater Everglades. To assist users in applying the EDEN data to their particular needs, a series of five EDEN tools, or applications (EDENapps), were developed. Using EDEN's tools, scientists can view the EDEN datasets of daily water-level and ground elevations, compute and view daily water depth and hydroperiod surfaces, extract data for user-specified locations, plot transects of water level, and animate water-level transects over time. Also, users can retrieve data from the EDEN datasets for analysis and display in other analysis software programs. As scientists and managers attempt to restore the natural volume, timing, and distribution of sheetflow in the wetlands, such information is invaluable. Information analyzed and presented with these tools is used to advise policy makers, planners, and decision makers of the potential effects of water management and restoration scenarios on the natural resources of the Everglades.
