EPOS-WP16: A Platform for European Multi-scale Laboratories
NASA Astrophysics Data System (ADS)
Spiers, Chris; Drury, Martyn; Kan-Parker, Mirjam; Lange, Otto; Willingshofer, Ernst; Funiciello, Francesca; Rosenau, Matthias; Scarlato, Piergiorgio; Sagnotti, Leonardo; WP16 Participants
2016-04-01
The participant countries in EPOS embody a wide range of world-class laboratory infrastructures ranging from high temperature and pressure experimental facilities, to electron microscopy, micro-beam analysis, analogue modeling and paleomagnetic laboratories. Most data produced by the various laboratory centres and networks are presently available only in limited "final form" in publications. As such many data remain inaccessible and/or poorly preserved. However, the data produced at the participating laboratories are crucial to serving society's need for geo-resources exploration and for protection against geo-hazards. Indeed, to model resource formation and system behaviour during exploitation, we need an understanding from the molecular to the continental scale, based on experimental data. This contribution will describe the work plans that the laboratories community in Europe is making, in the context of EPOS. The main objectives are: - To collect and harmonize available and emerging laboratory data on the properties and processes controlling rock system behaviour at multiple scales, in order to generate products accessible and interoperable through services for supporting research activities. - To co-ordinate the development, integration and trans-national usage of the major solid Earth Science laboratory centres and specialist networks. The length scales encompassed by the infrastructures included range from the nano- and micrometer levels (electron microscopy and micro-beam analysis) to the scale of experiments on centimetre sized samples, and to analogue model experiments simulating the reservoir scale, the basin scale and the plate scale. - To provide products and services supporting research into Geo-resources and Geo-storage, Geo-hazards and Earth System Evolution.
Laboratory Modelling of Volcano Plumbing Systems: a review
NASA Astrophysics Data System (ADS)
Galland, Olivier; Holohan, Eoghan P.; van Wyk de Vries, Benjamin; Burchardt, Steffi
2015-04-01
Earth scientists have, since the 19th century, tried to replicate or model geological processes in controlled laboratory experiments. In particular, laboratory modelling has been used to study the development of volcanic plumbing systems, which sets the stage for volcanic eruptions. Volcanic plumbing systems involve complex processes that act at length scales of microns to thousands of kilometres and at time scales from milliseconds to billions of years, and laboratory models appear very suitable to address them. This contribution reviews laboratory models dedicated to studying the dynamics of volcano plumbing systems (Galland et al., Accepted). The foundation of laboratory models is the choice of relevant model materials, both for rock and magma. We outline a broad range of suitable model materials used in the literature. These materials exhibit very diverse rheological behaviours, so their careful choice is a crucial first step for the proper experiment design. The second step is model scaling, which successively calls upon: (1) the principle of dimensional analysis, and (2) the principle of similarity. The dimensional analysis aims to identify the dimensionless physical parameters that govern the underlying processes. The principle of similarity states that "a laboratory model is equivalent to its geological analogue if the dimensionless parameters identified in the dimensional analysis are identical, even if the values of the governing dimensional parameters differ greatly" (Barenblatt, 2003). The application of these two steps ensures a solid understanding and geological relevance of the laboratory models. In addition, this procedure shows that laboratory models are not designed to exactly mimic a given geological system, but to understand underlying generic processes, either individually or in combination, and to identify or demonstrate physical laws that govern these processes. From this perspective, we review the numerous applications of laboratory models to understand the distinct key features of volcanic plumbing systems: dykes, cone sheets, sills, laccoliths, caldera-related structures, ground deformation, magma/fault interactions, and explosive vents. Barenblatt, G.I., 2003. Scaling. Cambridge University Press, Cambridge. Galland, O., Holohan, E.P., van Wyk de Vries, B., Burchardt, S., Accepted. Laboratory modelling of volcanic plumbing systems: A review, in: Breitkreuz, C., Rocchi, S. (Eds.), Laccoliths, sills and dykes: Physical geology of shallow level magmatic systems. Springer.
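As a minimal illustration of the two scaling steps summarized above (not an example from the review itself), the sketch below compares a single hypothetical dimensionless group, the ratio of gravitational to viscous stresses, between a laboratory model and a natural prototype; all parameter values are assumptions chosen purely for illustration.

    # Sketch: similarity check for one dimensionless group (gravitational vs. viscous stress).
    # All numbers are illustrative assumptions, not values taken from the review.

    def pi_gravity_viscous(rho, g, L, eta, U):
        """Ratio of gravitational stress (rho*g*L) to viscous stress (eta*U/L)."""
        return (rho * g * L) / (eta * U / L)

    # Hypothetical laboratory model (granular/silicone analogue, ~10 cm, cm-per-hour velocities)
    pi_model = pi_gravity_viscous(rho=1400.0, g=9.81, L=0.1, eta=1.0e4, U=5.0e-5)

    # Hypothetical natural prototype (crustal scale, cm-per-year velocities)
    pi_nature = pi_gravity_viscous(rho=2700.0, g=9.81, L=1.0e4, eta=1.0e19, U=1.0e-9)

    # The principle of similarity requires pi_model ~ pi_nature, even though every
    # individual parameter differs by many orders of magnitude between the two systems.
    print(f"Pi(model) = {pi_model:.3g}, Pi(nature) = {pi_nature:.3g}, "
          f"ratio = {pi_model / pi_nature:.2f}")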
NASA Astrophysics Data System (ADS)
Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.
2000-12-01
Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of 100s of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale because limited electron acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at the Canadian Forces Base (CFB) Borden. All input parameters were derived from independent laboratory and field measurements or taken from the literature a priori to the simulations. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis on the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field scale degradation, provided all controlling factors are incorporated in the field scale model. These factors include advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
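The Monod formulation referred to above ties the degradation rate to both substrate and electron-acceptor (oxygen) availability and to microbial growth; a batch sketch of that kinetic structure is given below. Parameter values and symbols are illustrative placeholders, not the calibrated BIO3D inputs used for the Borden simulations.

    # Sketch: batch dual-Monod biodegradation with oxygen limitation and biomass growth.
    # Parameter values are illustrative placeholders, not the BIO3D/Borden calibration.
    from scipy.integrate import solve_ivp

    k_max, K_S, K_O = 0.5, 1.0, 0.2   # max rate (1/d) and half-saturation constants (mg/L), assumed
    Y_bio, b_decay = 0.3, 0.05        # biomass yield (-) and decay rate (1/d), assumed
    gamma_O = 3.0                     # mg of O2 consumed per mg of substrate degraded, assumed

    def rhs(t, y):
        S, O, X = y                                       # substrate, oxygen, biomass (mg/L)
        r = k_max * X * S / (K_S + S) * O / (K_O + O)     # dual-Monod degradation rate
        return [-r,                                       # substrate consumption
                -gamma_O * r,                             # electron-acceptor consumption
                Y_bio * r - b_decay * X]                  # microbial growth and decay

    sol = solve_ivp(rhs, (0.0, 60.0), y0=[20.0, 8.0, 0.5], max_step=0.1)
    print("final substrate, oxygen, biomass (mg/L):", sol.y[:, -1].round(3))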
Improved Strength and Damage Modeling of Geologic Materials
NASA Astrophysics Data System (ADS)
Stewart, Sarah; Senft, Laurel
2007-06-01
Collisions and impact cratering events are important processes in the evolution of planetary bodies. The time and length scales of planetary collisions, however, are inaccessible in the laboratory and require the use of shock physics codes. We present the results from a new rheological model for geological materials implemented in the CTH code [1]. The 'ROCK' model includes pressure, temperature, and damage effects on strength, as well as acoustic fluidization during impact crater collapse. We demonstrate that the model accurately reproduces final crater shapes, tensile cracking, and damaged zones from laboratory to planetary scales. The strength model requires basic material properties; hence, the input parameters may be benchmarked to laboratory results and extended to planetary collision events. We show the effects of varying material strength parameters, which are dependent on both scale and strain rate, and discuss choosing appropriate parameters for laboratory and planetary situations. The results are a significant improvement in models of continuum rock deformation during large scale impact events. [1] Senft, L. E., Stewart, S. T. Modeling Impact Cratering in Layered Surfaces, J. Geophys. Res., submitted.
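The abstract does not reproduce the ROCK model equations, but the general idea of a pressure- and damage-dependent strength can be sketched with a commonly published yield form: an intact envelope that hardens with pressure toward a limiting strength, a cohesionless frictional envelope for damaged material, and linear interpolation in a damage parameter D. The sketch below is only that generic form with placeholder constants, not the actual CTH/ROCK implementation.

    # Sketch: generic pressure- and damage-dependent yield strength of the kind described
    # above (NOT the actual ROCK/CTH model; all constants are placeholders).
    import numpy as np

    Y0, YM = 10.0e6, 2.5e9     # cohesion and limiting high-pressure strength (Pa), assumed
    mu_i, mu_d = 1.2, 0.6      # friction coefficients for intact and damaged rock, assumed

    def yield_strength(P, D):
        """Yield strength (Pa) at pressure P (Pa) for damage D in [0, 1]."""
        Y_intact = Y0 + mu_i * P / (1.0 + mu_i * P / (YM - Y0))   # saturates toward YM
        Y_damaged = np.minimum(mu_d * P, Y_intact)                # cohesionless, frictional
        return D * Y_damaged + (1.0 - D) * Y_intact               # interpolate with damage

    for P in (1.0e6, 1.0e8, 1.0e9):
        print(f"P = {P:8.1e} Pa   intact: {yield_strength(P, 0.0):9.3e} Pa   "
              f"fully damaged: {yield_strength(P, 1.0):9.3e} Pa")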
Acoustic Treatment Design Scaling Methods. Volume 3; Test Plans, Hardware, Results, and Evaluation
NASA Technical Reports Server (NTRS)
Yu, J.; Kwan, H. W.; Echternach, D. K.; Kraft, R. E.; Syed, A. A.
1999-01-01
The ability to design, build, and test miniaturized acoustic treatment panels on scale-model fan rigs representative of the full-scale engine provides not only a cost-savings, but an opportunity to optimize the treatment by allowing tests of different designs. To be able to use scale model treatment as a full-scale design tool, it is necessary that the designer be able to reliably translate the scale model design and performance to an equivalent full-scale design. The primary objective of the study presented in this volume of the final report was to conduct laboratory tests to evaluate liner acoustic properties and validate advanced treatment impedance models. These laboratory tests include DC flow resistance measurements, normal incidence impedance measurements, DC flow and impedance measurements in the presence of grazing flow, and in-duct liner attenuation as well as modal measurements. Test panels were fabricated at three different scale factors (i.e., full-scale, half-scale, and one-fifth scale) to support laboratory acoustic testing. The panel configurations include single-degree-of-freedom (SDOF) perforated sandwich panels, SDOF linear (wire mesh) liners, and double-degree-of-freedom (DDOF) linear acoustic panels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Lai, Canhai; Marcy, Peter William
2017-05-01
A challenging problem in designing pilot-scale carbon capture systems is to predict, with quantified uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design’s predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
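The closing design question above (the lowest flow rate that still gives 90% capture with 95% confidence) amounts to propagating the calibrated parameter distributions through the upscaled model and testing a quantile. The sketch below shows only that logic with a purely hypothetical surrogate efficiency function; the function, its parameters and the sampled distributions are stand-ins, not the calibrated C2U/1-MW model.

    # Sketch: propagate parameter uncertainty through a hypothetical surrogate model of
    # capture efficiency and locate the lowest flow rate whose 5th-percentile efficiency
    # still reaches 0.90 (i.e., 90% capture with 95% confidence). All values are stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    k_ads = rng.normal(1.0, 0.15, n)   # stand-in for a calibrated sorption-rate factor
    h_eff = rng.normal(1.0, 0.10, n)   # stand-in for a calibrated hydrodynamic factor

    def capture_efficiency(q, k_ads, h_eff):
        """Hypothetical surrogate: efficiency rises smoothly with the flow rate q."""
        return 1.0 - np.exp(-k_ads * h_eff * q / 20.0)

    for q in np.arange(20.0, 85.0, 5.0):
        eff_5th = np.percentile(capture_efficiency(q, k_ads, h_eff), 5)
        flag = "meets target" if eff_5th >= 0.90 else ""
        print(f"flow = {q:5.1f}   5th-percentile efficiency = {eff_5th:.3f}   {flag}")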
Multiscale Laboratory Infrastructure and Services to users: Plans within EPOS
NASA Astrophysics Data System (ADS)
Spiers, Chris; Willingshofer, Ernst; Drury, Martyn; Funiciello, Francesca; Rosenau, Matthias; Scarlato, Piergiorgio; Sagnotti, Leonardo; EPOS WG6, Corrado Cimarelli
2015-04-01
The participant countries in EPOS embody a wide range of world-class laboratory infrastructures ranging from high temperature and pressure experimental facilities, to electron microscopy, micro-beam analysis, analogue modeling and paleomagnetic laboratories. Most data produced by the various laboratory centres and networks are presently available only in limited "final form" in publications. Many data remain inaccessible and/or poorly preserved. However, the data produced at the participating laboratories are crucial to serving society's need for geo-resources exploration and for protection against geo-hazards. Indeed, to model resource formation and system behaviour during exploitation, we need an understanding from the molecular to the continental scale, based on experimental data. This contribution will describe the plans that the laboratories community in Europe is making, in the context of EPOS. The main objectives are: • To collect and harmonize available and emerging laboratory data on the properties and processes controlling rock system behaviour at multiple scales, in order to generate products accessible and interoperable through services for supporting research activities. • To co-ordinate the development, integration and trans-national usage of the major solid Earth Science laboratory centres and specialist networks. The length scales encompassed by the infrastructures included range from the nano- and micrometer levels (electron microscopy and micro-beam analysis) to the scale of experiments on centimetre sized samples, and to analogue model experiments simulating the reservoir scale, the basin scale and the plate scale. • To provide products and services supporting research into Geo-resources and Geo-storage, Geo-hazards and Earth System Evolution. If the EPOS Implementation Phase proposal presently under construction is successful, then a range of services and transnational activities will be put in place to realize these objectives.
Towards a better understanding of the cracking behavior in soils
USDA-ARS?s Scientific Manuscript database
Understanding and modeling shrinkage-induced cracks helps bridge the gap between flow problems at the laboratory scale and at the field scale. Modeling flow at the field scale with Darcian fluxes developed at the laboratory scale is challenged by preferential flows attributed to the cracking behavior of soils...
Validation of mathematical model for CZ process using small-scale laboratory crystal growth furnace
NASA Astrophysics Data System (ADS)
Bergfelds, Kristaps; Sabanskis, Andrejs; Virbulis, Janis
2018-05-01
The present work is focused on the modelling of a small-scale laboratory NaCl-RbCl crystal growth furnace. First steps towards fully transient simulations are taken in the form of stationary simulations that deal with the optimization of material properties to match the model to experimental conditions. For this purpose, simulation software primarily used for the modelling of the industrial-scale silicon crystal growth process was successfully applied. Finally, transient simulations of the crystal growth are presented, showing sufficient agreement with experimental results.
Predictive modelling of flow in a two-dimensional intermediate-scale, heterogeneous porous media
Barth, Gilbert R.; Hill, M.C.; Illangasekare, T.H.; Rajaram, H.
2000-01-01
To better understand the role of sedimentary structures in flow through porous media, and to determine how small-scale laboratory-measured values of hydraulic conductivity relate to in situ values, this work deterministically examines flow through simple, artificial structures constructed for a series of intermediate-scale (10 m long), two-dimensional, heterogeneous, laboratory experiments. Nonlinear regression was used to determine optimal values of in situ hydraulic conductivity, which were compared to laboratory-measured values. Despite explicit numerical representation of the heterogeneity, the optimized values were generally greater than the laboratory-measured values. Discrepancies between measured and optimal values varied depending on the sand sieve size, but their contribution to error in the predicted flow was fairly consistent for all sands. Results indicate that, even under these controlled circumstances, laboratory-measured values of hydraulic conductivity need to be applied to models cautiously.
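The regression step described above can be illustrated schematically: given a forward flow model and observed heads, nonlinear least squares returns optimal in-situ conductivities that can then be compared with laboratory-measured values. The sketch below does this for a simple one-dimensional series-flow model with synthetic "observations"; the geometry, K values and noise are invented for illustration and do not correspond to the published experiments.

    # Sketch: nonlinear regression of facies hydraulic conductivities against head
    # observations using a 1-D series-flow forward model. Geometry, "true" K values and
    # observations are synthetic, not the intermediate-scale tank experiments.
    import numpy as np
    from scipy.optimize import least_squares

    L = np.array([2.0, 3.0, 2.0, 3.0])     # facies segment lengths (m), assumed
    q = 1.0e-5                             # specific discharge (m/s), assumed
    h_inlet = 1.0                          # upstream head (m), assumed

    def heads(logK):
        """Heads at the downstream end of each segment for Darcy flow in series."""
        K = 10.0 ** logK
        return h_inlet - np.cumsum(q * L / K)

    K_true = np.array([3.0e-4, 8.0e-5, 3.0e-4, 1.5e-4])             # synthetic in-situ values
    obs = heads(np.log10(K_true)) + np.random.default_rng(1).normal(0.0, 2.0e-3, 4)

    K_lab = np.array([2.0e-4, 6.0e-5, 2.0e-4, 1.0e-4])              # "laboratory-measured" start
    fit = least_squares(lambda p: heads(p) - obs, x0=np.log10(K_lab))

    print("lab-measured K:", K_lab)
    print("optimized K   :", 10.0 ** fit.x)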
10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...
10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS
Infrared radiation models for atmospheric methane
NASA Technical Reports Server (NTRS)
Cess, R. D.; Kratz, D. P.; Caldwell, J.; Kim, S. J.
1986-01-01
Mutually consistent line-by-line, narrow-band and broad-band infrared radiation models are presented for methane, a potentially important anthropogenic trace gas within the atmosphere. Comparisons of the modeled band absorptances with existing laboratory data produce the best agreement when, within the band models, spurious band intensities are used which are consistent with the respective laboratory data sets, but which are not consistent with current knowledge concerning the intensity of the infrared fundamental band of methane. This emphasizes the need for improved laboratory band absorptance measurements. Since, when applied to atmospheric radiation calculations, the line-by-line model does not require the use of scaling approximations, the mutual consistency of the band models provides a means of appraising the accuracy of scaling procedures. It is shown that Curtis-Godson narrow-band and Chan-Tien broad-band scaling provide accurate means of accounting for atmospheric temperature and pressure variations.
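For reference, the Curtis-Godson approximation mentioned above replaces an inhomogeneous atmospheric path by an equivalent homogeneous one using absorber-weighted mean conditions; in a standard form (quoted from general knowledge, not from the paper),

\[
u = \int_{\text{path}} \rho_{\mathrm{CH_4}}\, \mathrm{d}s, \qquad
\bar{p} = \frac{1}{u}\int_{\text{path}} p\, \mathrm{d}u, \qquad
\bar{T} = \frac{1}{u}\int_{\text{path}} T\, \mathrm{d}u,
\]

so that the band absorptance along the real, inhomogeneous path is approximated by the homogeneous-path value $A(u, \bar{p}, \bar{T})$, which is the quantity the narrow-band and broad-band models evaluate.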
Application of lab derived kinetic biodegradation parameters at the field scale
NASA Astrophysics Data System (ADS)
Schirmer, M.; Barker, J. F.; Butler, B. J.; Frind, E. O.
2003-04-01
Estimating the intrinsic remediation potential of an aquifer typically requires the accurate assessment of the biodegradation kinetics, the level of available electron acceptors and the flow field. Zero- and first-order degradation rates derived at the laboratory scale generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are typically not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable to forecast plume development because they may be an oversimplification of the processes at the field scale and ignore several key processes, phenomena and characteristics of the aquifer. This study uses the numerical model BIO3D to link the laboratory and field scale by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at Canadian Forces Base (CFB) Borden. All additional input parameters were derived from laboratory and field measurements or taken from the literature. The simulated results match the experimental results reasonably well without having to calibrate the model. An extensive sensitivity analysis was performed to estimate the influence of the most uncertain input parameters and to define the key controlling factors at the field scale. It is shown that the most uncertain input parameters have only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available and the Monod kinetic parameters have a significant influence on the simulated results. Under the field conditions modelled and the assumptions made for the simulations, it can be concluded that laboratory-derived Monod kinetic parameters can adequately describe field-scale degradation processes, provided that all controlling factors not necessarily observed at the lab scale are incorporated in the field-scale modelling. In this view, no scale relationships linking the laboratory and the field scale need to be sought: accurately incorporating at the larger scale the additional processes, phenomena and characteristics, such as a) advective and dispersive transport of one or more contaminants, b) advective and dispersive transport and availability of electron acceptors, c) mass transfer limitations and d) spatial heterogeneities, and applying well-defined lab-scale parameters should accurately describe field-scale processes.
Connelly, Stephanie; Shin, Seung G.; Dillon, Robert J.; Ijaz, Umer Z.; Quince, Christopher; Sloan, William T.; Collins, Gavin
2017-01-01
Studies investigating the feasibility of new, or improved, biotechnologies, such as wastewater treatment digesters, inevitably start with laboratory-scale trials. However, it is rarely determined whether laboratory-scale results reflect full-scale performance or microbial ecology. The Expanded Granular Sludge Bed (EGSB) bioreactor, which is a high-rate anaerobic digester configuration, was used as a model to address that knowledge gap in this study. Two laboratory-scale idealizations of the EGSB—a one-dimensional and a three-dimensional scale-down of a full-scale design—were built and operated in triplicate under near-identical conditions to a full-scale EGSB. The laboratory-scale bioreactors were seeded using biomass obtained from the full-scale bioreactor, and spent water from the distillation of whisky from maize was applied as substrate at both scales. Over 70 days, bioreactor performance, microbial ecology, and microbial community physiology were monitored at various depths in the sludge-beds using 16S rRNA gene sequencing (V4 region), specific methanogenic activity (SMA) assays, and a range of physical and chemical monitoring methods. SMA assays indicated dominance of the hydrogenotrophic pathway at full-scale whilst a more balanced activity profile developed during the laboratory-scale trials. At each scale, Methanobacterium was the dominant methanogenic genus present. Bioreactor performance overall was better at laboratory-scale than full-scale. We observed that bioreactor design at laboratory-scale significantly influenced spatial distribution of microbial community physiology and taxonomy in the bioreactor sludge-bed, with 1-D bioreactor types promoting stratification of each. In the 1-D laboratory bioreactors, increased abundance of Firmicutes was associated with both granule position in the sludge bed and increased activity against acetate and ethanol as substrates. We further observed that stratification in the sludge-bed in 1-D laboratory-scale bioreactors was associated with increased richness in the underlying microbial community at species (OTU) level and improved overall performance. PMID:28507535
A Simple Laboratory Scale Model of Iceberg Dynamics and its Role in Undergraduate Education
NASA Astrophysics Data System (ADS)
Burton, J. C.; MacAyeal, D. R.; Nakamura, N.
2011-12-01
Lab-scale models of geophysical phenomena have a long history in research and education. For example, at the University of Chicago, Dave Fultz developed laboratory-scale models of atmospheric flows. The results from his laboratory were so stimulating that similar laboratories were subsequently established at a number of other institutions. Today, the Dave Fultz Memorial Laboratory for Hydrodynamics (http://geosci.uchicago.edu/~nnn/LAB/) teaches general circulation of the atmosphere and oceans to hundreds of students each year. Following this tradition, we have constructed a lab model of iceberg-capsize dynamics for use in the Fultz Laboratory, which focuses on the interface between glaciology and physical oceanography. The experiment consists of a 2.5 meter long wave tank containing water and plastic "icebergs". The motion of the icebergs is tracked using digital video. Movies can be found at: http://geosci.uchicago.edu/research/glaciology_files/tsunamigenesis_research.shtml. We have had 3 successful undergraduate interns with backgrounds in mathematics, engineering, and geosciences perform experiments, analyze data, and interpret results. In addition to iceberg dynamics, the wave-tank has served as a teaching tool in undergraduate classes studying dam-breaking and tsunami run-up. Motivated by the relatively inexpensive cost of our apparatus (~1K-2K dollars) and positive experiences of undergraduate students, we hope to serve as a model for undergraduate research and education that other universities may follow.
Comparing field investigations with laboratory models to predict landfill leachate emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fellner, Johann; Doeberl, Gernot; Allgaier, Gerhard
2009-06-15
Investigations into laboratory reactors and landfills are used for simulating and predicting emissions from municipal solid waste landfills. We examined water flow and solute transport through the same waste body for different volumetric scales (laboratory experiment: 0.08 m³, landfill: 80,000 m³), and assessed the differences in water flow and leachate emissions of chloride, total organic carbon and Kjeldahl nitrogen. The results indicate that, due to preferential pathways, the flow of water in field-scale landfills is less uniform than in laboratory reactors. Based on tracer experiments, it can be discerned that in laboratory-scale experiments around 40% of pore water participates in advective solute transport, whereas this fraction amounts to less than 0.2% in the investigated full-scale landfill. Consequences of the difference in water flow and moisture distribution are: (1) leachate emissions from full-scale landfills decrease faster than predicted by laboratory experiments, and (2) the stock of materials remaining in the landfill body, and thus the long-term emission potential, is likely to be underestimated by laboratory landfill simulations.
D.R. Weise; E. Koo; X. Zhou; S. Mahalingam
2011-01-01
Observed fire spread rates from 240 laboratory fires in horizontally-oriented single-species live fuel beds were compared to predictions from various implementations and modifications of the Rothermel rate of spread model and a physical fire spread model developed by Pagni and Koo. Packing ratio of the laboratory fuel beds was generally greater than that observed in...
NASA Astrophysics Data System (ADS)
Wosnik, Martin; Bachant, Peter
2016-11-01
Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference, or taking advantage of constructive wake interaction. Experiments were carried out with large laboratory-scale cross-flow turbines of diameter D of order 1 m, using a turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797, Sandia National Laboratories.
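For context on the Reynolds-number-independence argument above, the sketch below computes the non-dimensional quantities usually reported for such tow-tank tests (tip-speed ratio, power coefficient and diameter Reynolds number); the numerical inputs are illustrative, not measurements from the RVAT or RM2 experiments.

    # Sketch: non-dimensional turbine quantities used in Reynolds-independence arguments.
    # Input values are illustrative, not RVAT/RM2 measurements.
    rho, nu = 1000.0, 1.0e-6      # water density (kg/m^3), kinematic viscosity (m^2/s)
    D, H = 1.0, 1.0               # turbine diameter and blade height (m), assumed
    U = 1.0                       # tow (free-stream) speed (m/s), assumed
    omega = 4.0                   # rotation rate (rad/s), assumed
    P_mech = 180.0                # measured mechanical power (W), assumed

    A = D * H                                  # frontal area of a cross-flow turbine
    tsr = omega * (D / 2.0) / U                # tip-speed ratio
    C_P = P_mech / (0.5 * rho * A * U**3)      # power coefficient
    Re_D = U * D / nu                          # diameter-based Reynolds number

    print(f"tip-speed ratio = {tsr:.2f}, C_P = {C_P:.2f}, Re_D = {Re_D:.1e}")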
The removal of volatile organic compounds (VOCs) from groundwater through in-well vapor stripping has been demonstrated by Gonen and Gvirtzman (1997, J. Contam. Hydrol., 00: 000-000) at the laboratory scale. The present study compares experimental breakthrough...
The toxic equivalency (TEQ) values of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) are predicted with a model based on the homologue concentrations measured from a laboratory-scale reactor (124 data points), a package boiler (61 data points), and ...
The computer program AQUASIM was used to model biological treatment of perchlorate-contaminated water using zero-valent iron corrosion as the hydrogen gas source. The laboratory-scale column was seeded with an autohydrogenotrophic microbial consortium previously shown to degrade ...
12. PHOTOGRAPH OF A PHOTOGRAPH OF A SCALE MODEL OF ...
12. PHOTOGRAPH OF A PHOTOGRAPH OF A SCALE MODEL OF THE WASTE CALCINER FACILITY, SHOWING WEST ELEVATION. (THE ORIGINAL MODEL HAS BEEN LOST.) INEEL PHOTO NUMBER 95-903-1-3. - Idaho National Engineering Laboratory, Old Waste Calcining Facility, Scoville, Butte County, ID
Cold-Cap Temperature Profile Comparison between the Laboratory and Mathematical Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixon, Derek R.; Schweiger, Michael J.; Riley, Brian J.
2015-06-01
The rate of waste vitrification in an electric melter is connected to the feed-to-glass conversion process, which occurs in the cold cap, a layer of reacting feed on top of molten glass. The cold cap consists of two layers: a low temperature (~100°C – ~800°C) region of unconnected feed and a high temperature (~800°C – ~1100°C) region of foam with gas bubbles and cavities mixed in the connected glass melt. A recently developed mathematical model describes the effect of the cold cap on glass production. For verification of the mathematical model, a laboratory-scale melter was used to produce a cold cap that could be cross-sectioned and polished in order to determine the temperature profile related to position in the cold cap. The cold cap from the laboratory-scale melter exhibited an accumulation of feed at ~400°C due to radiant heat from the molten glass creating dry feed conditions in the melter, which was not the case in the mathematical model, where wet feed conditions were calculated. Through the temperature range from ~500°C – ~1100°C, there was good agreement between the model and the laboratory cold cap. Differences were observed between the two temperature profiles due to the temperature of the glass melts and the lack of secondary foam, large cavities, and shrinkage of the primary foam bubbles upon the cooling of the laboratory-scale cold cap.
EPOS-WP16: A coherent and collaborative network of Solid Earth Multi-scale laboratories
NASA Astrophysics Data System (ADS)
Calignano, Elisa; Rosenau, Matthias; Lange, Otto; Spiers, Chris; Willingshofer, Ernst; Drury, Martyn; van Kan-Parker, Mirjam; Elger, Kirsten; Ulbricht, Damian; Funiciello, Francesca; Trippanera, Daniele; Sagnotti, Leonardo; Scarlato, Piergiorgio; Tesei, Telemaco; Winkler, Aldo
2017-04-01
Laboratory facilities are an integral part of Earth Science research. The diversity of methods employed in such infrastructures reflects the multi-scale nature of the Earth system and is essential for the understanding of its evolution, for the assessment of geo-hazards and for the sustainable exploitation of geo-resources. In the frame of EPOS (European Plate Observing System), the Working Package 16 represents a developing community of European Geoscience Multi-scale laboratories. The participant and collaborating institutions (Utrecht University, GFZ, RomaTre University, INGV, NERC, CSIC-ICTJA, CNRS, LMU, C4G-UBI, ETH, CNR*) embody several types of laboratory infrastructures, engaged in different fields of interest of Earth Science: from high temperature and pressure experimental facilities, to electron microscopy, micro-beam analysis, analogue tectonic and geodynamic modelling and paleomagnetic laboratories. The length scales encompassed by these infrastructures range from the nano- and micrometre levels (electron microscopy and micro-beam analysis) to the scale of experiments on centimetre-sized samples, and to analogue model experiments simulating the reservoir scale, the basin scale and the plate scale. The aim of WP16 is to provide two services by the year 2019: first, providing virtual access to data from laboratories (data service) and, second, providing physical access to laboratories (transnational access, TNA). Regarding the development of a data service, the current status is such that most data produced by the various laboratory centres and networks are available only in limited "final form" in publications, many data remain inaccessible and/or poorly preserved. Within EPOS the TCS Multi-scale laboratories is collecting and harmonizing available and emerging laboratory data on the properties and processes controlling rock system behaviour at all relevant scales, in order to generate products accessible and interoperable through services for supporting research activities into Geo-resources and Geo-storage, Geo-hazards and Earth System Evolution. Regarding the provision of physical access to laboratories, the current situation is such that access to WP16's laboratories is often based on professional relations, available budgets, shared interests and other constraints. In WP16 we aim at reducing the present diversity and non-transparency of access rules and replace ad-hoc procedures for access by streamlined mechanisms, objective rules and a transparent policy. We work on procedures and mechanisms regulating application, negotiation, evaluation, feedback, selection, admission, approval, feasibility check, setting-up, use, monitoring and dismantling. In the end, laboratories should each have a single point providing clear and transparent information on the facility itself, its services, access policy, data management policy and the legal terms and conditions for use of equipment. Through its role as an intermediary and information broker, EPOS will acquire a wealth of information from Research Infrastructures and users on the establishment of efficient collaboration agreements.
3-D Printing as a Tool to Investigate the Effects of Changes in Rock Microstructures on Permeability
NASA Astrophysics Data System (ADS)
Head, D. A.; Vanorio, T.
2016-12-01
Rocks are naturally heterogeneous; two rock samples with identical bulk properties can vary widely in microstructure. Understanding the evolutionary trends of rock properties requires the ability to connect time-lapse measurements of properties at different scales: the macro-scale used in the laboratory and field analyses, capturing the bulk-scale changes, and the micro-scale used in imaging and digital techniques, capturing the changes to the pore space. However, measuring those properties at different scales is very challenging, and sometimes impossible. The advent of modern 3D printing has provided an unprecedented opportunity to link those scales by combining the strengths of digital and experimental rock physics. To determine the feasibility of this technique we characterized the resolution capabilities of two different 3D printers. To calibrate our digital models with our printed models, we created a sample with an analytically solvable permeability. This allowed us to directly compare analytic calculation, numerical simulation, and laboratory measurement of permeability of the exact same sample. Next we took a CT-scanned model of a natural carbonate pore space, then iteratively digitally manipulated it, 3D printed it, and measured its flow properties in the laboratory. This approach allowed us to access multiple scales digitally and experimentally, to test hypotheses about how changes in rock microstructure due to compaction and dissolution affect bulk transport properties, and to connect laboratory measurements of porosity and permeability to quantities that are traditionally impossible to measure in the laboratory such as changes in surface area and tortuosity. As 3D printing technology continues to advance, we expect this technique to contribute to our ability to characterize the properties of remote and/or delicate samples as well as to test the impact of microstructural alteration on bulk physical properties in the lab in a highly consistent, repeatable manner.
Zhang, Dongda; Dechatiwongse, Pongsathorn; Del Rio-Chanona, Ehecatl Antonio; Maitland, Geoffrey C; Hellgardt, Klaus; Vassiliadis, Vassilios S
2015-12-01
This paper investigates the scaling-up of cyanobacterial biomass cultivation and biohydrogen production from laboratory to industrial scale. Two main aspects are investigated and presented, which to the best of our knowledge have never been addressed, namely the construction of an accurate dynamic model to simulate cyanobacterial photo-heterotrophic growth and biohydrogen production and the prediction of the maximum biomass and hydrogen production in different scales of photobioreactors. To achieve the current goals, experimental data obtained from a laboratory experimental setup are fitted by a dynamic model. Based on the current model, two key original findings are made in this work. First, it is found that selecting low-chlorophyll mutants is an efficient way to increase both biomass concentration and hydrogen production particularly in a large scale photobioreactor. Second, the current work proposes that the width of industrial scale photobioreactors should not exceed 0.20 m for biomass cultivation and 0.05 m for biohydrogen production, as severe light attenuation can be induced in the reactor beyond this threshold. © 2015 Wiley Periodicals, Inc.
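The reactor-width thresholds quoted above follow from light attenuation across the culture; a minimal Beer-Lambert-style estimate of how incident light decays with distance into the reactor is sketched below, using an assumed biomass attenuation coefficient rather than the model parameters fitted in the paper.

    # Sketch: Beer-Lambert-style light attenuation across a flat-panel photobioreactor.
    # Attenuation coefficient and biomass concentration are assumed, not the fitted values.
    import numpy as np

    I0 = 200.0        # incident irradiance (umol photons m^-2 s^-1), assumed
    sigma = 0.2       # attenuation cross-section per unit biomass (m^2/g), assumed
    X = 500.0         # biomass concentration: 0.5 g/L expressed as g/m^3

    for width in (0.05, 0.20, 0.50):                # candidate reactor widths (m)
        x = np.linspace(0.0, width, 200)
        I = I0 * np.exp(-sigma * X * x)             # local irradiance profile
        print(f"width = {width:4.2f} m: far-wall irradiance = {I[-1]:7.3f}, "
              f"mean over width = {I.mean():6.1f}")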
Zhu, Tong; Moussa, Ehab M; Witting, Madeleine; Zhou, Deliang; Sinha, Kushal; Hirth, Mario; Gastens, Martin; Shang, Sherwin; Nere, Nandkishor; Somashekar, Shubha Chetan; Alexeenko, Alina; Jameel, Feroz
2018-07-01
Scale-up and technology transfer of lyophilization processes remains a challenge that requires thorough characterization of the laboratory and larger scale lyophilizers. In this study, computational fluid dynamics (CFD) was employed to develop computer-based models of both laboratory and manufacturing scale lyophilizers in order to understand the differences in equipment performance arising from distinct designs. CFD coupled with steady state heat and mass transfer modeling of the vial were then utilized to study and predict independent variables such as shelf temperature and chamber pressure, and response variables such as product resistance, product temperature and primary drying time for a given formulation. The models were then verified experimentally for the different lyophilizers. Additionally, the models were applied to create and evaluate a design space for a lyophilized product in order to provide justification for the flexibility to operate within a certain range of process parameters without the need for validation. Published by Elsevier B.V.
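The steady-state vial heat and mass transfer balance referred to above is commonly written as K_v A_v (T_shelf - T_p) = ΔH_s A_p (P_ice(T_p) - P_c) / R_p; the sketch below solves that balance for the product temperature and sublimation rate with assumed coefficients and a common ice vapor-pressure correlation, not the calibrated values of this study.

    # Sketch: steady-state coupling of vial heat transfer and sublimation, solved for the
    # product temperature. Coefficients are generic assumptions, not the study's values.
    import numpy as np
    from scipy.optimize import brentq

    K_v = 20.0       # vial heat transfer coefficient (W m^-2 K^-1), assumed
    A_v = 3.8e-4     # vial outer bottom area (m^2), assumed
    A_p = 3.3e-4     # product (sublimation) area (m^2), assumed
    R_p = 1.0e5      # product resistance (Pa s m^2 kg^-1), assumed
    dH_s = 2.84e6    # heat of sublimation of ice (J/kg)
    T_shelf, P_ch = 263.15, 10.0    # shelf temperature (K) and chamber pressure (Pa), assumed

    def p_ice(T):
        """Vapor pressure of ice (Pa); a common correlation."""
        return np.exp(28.9074 - 6143.7 / T)

    def residual(T_p):
        q_heat = K_v * A_v * (T_shelf - T_p)               # heat delivered through the vial
        q_subl = dH_s * A_p * (p_ice(T_p) - P_ch) / R_p    # heat consumed by sublimation
        return q_heat - q_subl

    T_p = brentq(residual, 200.0, T_shelf)                 # steady-state product temperature
    m_dot = A_p * (p_ice(T_p) - P_ch) / R_p                # sublimation rate (kg/s)
    print(f"product temperature = {T_p - 273.15:.1f} C, "
          f"sublimation rate = {m_dot * 3.6e6:.2f} g/h")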
BISON and MARMOT Development for Modeling Fast Reactor Fuel Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamble, Kyle Allan Lawrence; Williamson, Richard L.; Schwen, Daniel
2015-09-01
BISON and MARMOT are two codes under development at the Idaho National Laboratory for engineering scale and lower length scale fuel performance modeling. It is desired to add capabilities for fast reactor applications to these codes. The fast reactor fuel types under consideration are metal (U-Pu-Zr) and oxide (MOX). The cladding types of interest include 316SS, D9, and HT9. The purpose of this report is to outline the proposed plans for code development and provide an overview of the models added to the BISON and MARMOT codes for fast reactor fuel behavior. A brief overview of preliminary discussions on the formation of a bilateral agreement between the Idaho National Laboratory and the National Nuclear Laboratory in the United Kingdom is presented.
NASA Technical Reports Server (NTRS)
Fisher, Lloyd J; Hoffman, Edward L
1958-01-01
Data from ditching investigations conducted at the Langley Aeronautical Laboratory with dynamic scale models of various airplanes are presented in the form of tables. The effects of design parameters on the ditching characteristics of airplanes, based on scale-model investigations and on reports of full-scale ditchings, are discussed. Various ditching aids are also discussed as a means of improving ditching behavior.
Stress drop with constant, scale independent seismic efficiency and overshoot
Beeler, N.M.
2001-01-01
To model dissipated and radiated energy during earthquake stress drop, I calculate dynamic fault slip using a single degree of freedom spring-slider block and a laboratory-based static/kinetic fault strength relation with a dynamic stress drop proportional to effective normal stress. The model is scaled to earthquake size assuming a circular rupture; stiffness varies inversely with rupture radius, and rupture duration is proportional to radius. Calculated seismic efficiency, the ratio of radiated to total energy expended during stress drop, is in good agreement with laboratory and field observations. Predicted overshoot, a measure of how much the static stress drop exceeds the dynamic stress drop, is higher than previously published laboratory and seismic observations and fully elasto-dynamic calculations. Seismic efficiency and overshoot are constant, independent of normal stress and scale. Calculated variation of apparent stress with seismic moment resembles the observational constraints of McGarr [1999].
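A minimal quasi-dynamic version of that single-degree-of-freedom calculation is sketched below: the block unloads a spring from static to kinetic strength while a radiation-damping term stands in for seismic radiation, and partitioning the work into frictional and radiated parts yields an efficiency and an overshoot. The parameter values and the damping form are illustrative assumptions, not Beeler's exact formulation.

    # Sketch: spring-slider stress drop with a radiation-damping term standing in for
    # seismic radiation. Parameters are illustrative, not the paper's scaled values.
    tau_s, tau_d = 60.0e6, 50.0e6   # static and kinetic fault strength (Pa), assumed
    k = 4.0e7                       # spring stiffness per unit fault area (Pa/m), assumed
    m = 1.0e6                       # mass per unit fault area (kg/m^2), assumed
    eta = 5.0e6                     # radiation damping, of order G/(2*c_s) (Pa s/m), assumed

    dt, u, v = 1.0e-4, 0.0, 0.0
    E_fric = E_rad = 0.0
    while True:
        stress = tau_s - k * u                      # elastic stress acting on the block
        a = (stress - tau_d - eta * v) / m          # slides at kinetic strength plus damping
        v_new = v + a * dt
        if v_new <= 0.0 and u > 0.0:                # slip arrests when velocity returns to zero
            break
        v = v_new
        u += v * dt
        E_fric += tau_d * v * dt                    # work done against kinetic friction
        E_rad += eta * v * v * dt                   # energy lost to "radiation"

    static_drop = k * u                             # static stress drop after arrest
    dynamic_drop = tau_s - tau_d                    # dynamic stress drop during sliding
    print(f"overshoot (static/dynamic stress drop) = {static_drop / dynamic_drop:.2f}")
    print(f"seismic efficiency (radiated / total)  = {E_rad / (E_rad + E_fric):.3f}")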
NASA Astrophysics Data System (ADS)
Gregori, G.; Reville, B.; Miniati, F.
2015-11-01
The advent of high-power laser facilities has, in the past two decades, opened a new field of research where astrophysical environments can be scaled down to laboratory dimensions, while preserving the essential physics. This is due to the invariance of the equations of magneto-hydrodynamics to a class of similarity transformations. Here we review the relevant scaling relations and their application in laboratory astrophysics experiments with a focus on the generation and amplification of magnetic fields in cosmic environment. The standard model for the origin of magnetic fields is a multi stage process whereby a vanishing magnetic seed is first generated by a rotational electric field and is then amplified by turbulent dynamo action to the characteristic values observed in astronomical bodies. We thus discuss the relevant seed generation mechanisms in cosmic environment including resistive mechanism, collision-less and fluid instabilities, as well as novel laboratory experiments using high power laser systems aimed at investigating the amplification of magnetic energy by magneto-hydrodynamic (MHD) turbulence. Future directions, including efforts to model in the laboratory the process of diffusive shock acceleration are also discussed, with an emphasis on the potential of laboratory experiments to further our understanding of plasma physics on cosmic scales.
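For reference, the class of similarity transformations alluded to above is often quoted in the Ryutov form (a standard result, stated here from general knowledge rather than taken from this paper): if lengths, densities and pressures are rescaled, the ideal MHD equations are unchanged provided velocities, times and magnetic fields are rescaled accordingly,

\[
r \rightarrow a\,r,\quad \rho \rightarrow b\,\rho,\quad p \rightarrow c\,p
\;\;\Longrightarrow\;\;
v \rightarrow \sqrt{c/b}\;v,\quad t \rightarrow a\sqrt{b/c}\;t,\quad B \rightarrow \sqrt{c}\;B,
\]

with the additional requirement that dissipative terms remain negligible in both systems (large Reynolds, magnetic Reynolds and Peclet numbers), so that a laboratory flow and its astrophysical counterpart evolve identically in the rescaled variables.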
Computational simulation of laboratory-scale volcanic jets
NASA Astrophysics Data System (ADS)
Solovitz, S.; Van Eaton, A. R.; Mastin, L. G.; Herzog, M.
2017-12-01
Volcanic eruptions produce ash clouds that may travel great distances, significantly impacting aviation and communities downwind. Atmospheric hazard forecasting relies partly on numerical models of the flow physics, which incorporate data from eruption observations and analogue laboratory tests. As numerical tools continue to increase in complexity, they must be validated to fine-tune their effectiveness. Since eruptions are relatively infrequent and challenging to observe in great detail, analogue experiments can provide important insights into expected behavior over a wide range of input conditions. Unfortunately, laboratory-scale jets cannot easily attain the high Reynolds numbers (~10^9) of natural volcanic eruption columns. Comparisons between the computational models and analogue experiments can help bridge this gap. In this study, we investigate a 3-D volcanic plume model, the Active Tracer High-resolution Atmospheric Model (ATHAM), which has been used to simulate a variety of eruptions. However, it has not been previously validated using laboratory-scale data. We conducted numerical simulations of three flows that we have studied in the laboratory: a vertical jet in a quiescent environment, a vertical jet in horizontal cross flow, and a particle-laden jet. We considered Reynolds numbers from 10,000 to 50,000, jet-to-cross flow velocity ratios of 2 to 10, and particle mass loadings of up to 25% of the exit mass flow rate. Vertical jet simulations produce Gaussian velocity profiles in the near exit region by 3 diameters downstream, matching the mean experimental profiles. Simulations of air entrainment are of the correct order of magnitude, but they show decreasing entrainment with vertical distance from the vent. Cross flow simulations reproduce experimental trajectories for the jet centerline initially, although confinement appears to impact the response later. Particle-laden simulations display minimal variation in concentration profiles between cases with different mass loadings and size distributions, indicating that differences in particle behavior may not be evident at this laboratory scale.
NASA Astrophysics Data System (ADS)
Luczak, M. M.; Mucchi, E.; Telega, J.
2016-09-01
The goal of the research is to develop a vibration-based procedure for the identification of structural failures in a laboratory scale model of a tripod supporting structure of an offshore wind turbine. In particular, this paper presents an experimental campaign on the scale model tested in two stages. Stage one encompassed the model tripod structure tested in air. The second stage was done in water. The tripod model structure allows investigation of the propagation of a representative circumferential crack in a cylindrical upper brace. The in-water test configuration included the tower with a three-bladed rotor. The response of the structure to the different wave loads was measured with accelerometers. Experimental and operational modal analysis was applied to identify the dynamic properties of the investigated scale model for the intact and damaged states with different excitations and wave patterns. A comprehensive test matrix allows assessment of the differences in estimated modal parameters due to damage or as potentially introduced by nonlinear structural response. The presented technique proves to be effective for detecting and assessing the presence of representative cracks.
Conversion of municipal solid waste to hydrogen
NASA Astrophysics Data System (ADS)
Richardson, J. H.; Rogers, R. S.; Thorsness, C. B.
1995-04-01
LLNL and Texaco are cooperatively developing a physical and chemical treatment method for the conversion of municipal solid waste (MSW) to hydrogen via the steps of hydrothermal pretreatment, gasification and purification. LLNL's focus has been on hydrothermal pretreatment of MSW in order to prepare a slurry of suitable viscosity and heating value to allow efficient and economical gasification and hydrogen production. The project has evolved along 3 parallel paths: laboratory scale experiments, pilot scale processing, and process modeling. Initial laboratory-scale MSW treatment results (e.g., viscosity, slurry solids content) over a range of temperatures and times with newspaper and plastics will be presented. Viscosity measurements have been correlated with results obtained at MRL. A hydrothermal treatment pilot facility has been rented from Texaco and is being reconfigured at LLNL; the status of that facility and plans for initial runs will be described. Several different operational scenarios have been modeled. Steady state processes have been modeled with ASPEN PLUS; consideration of steam injection in a batch mode was handled using continuous process modules. A transient model derived from a general purpose packed bed model is being developed which can examine the aspects of steam heating inside the hydrothermal reactor vessel. These models have been applied to pilot and commercial scale scenarios as a function of MSW input parameters and have been used to outline initial overall economic trends. Part of the modeling, an overview of the MSW gasification process and the modeling of the MSW as a process material, was completed by a DOE SERS (Science and Engineering Research Semester) student. The ultimate programmatic goal is the technical demonstration of the gasification of MSW to hydrogen at the laboratory and pilot scale and the economic analysis of the commercial feasibility of such a process.
Lunar exploration rover program developments
NASA Technical Reports Server (NTRS)
Klarer, P. R.
1994-01-01
The Robotic All Terrain Lunar Exploration Rover (RATLER) design concept began at Sandia National Laboratories in late 1991 with a series of small, proof-of-principle, working scale models. The models proved the viability of the concept for high mobility through mechanical simplicity, and eventually received internal funding at Sandia National Laboratories for full scale, proof-of-concept prototype development. Whereas the proof-of-principle models demonstrated the mechanical design's capabilities for mobility, the full scale proof-of-concept design currently under development is intended to support field operations for experiments in telerobotics, autonomous robotic operations, telerobotic field geology, and advanced man-machine interface concepts. The development program's current status is described, including an outline of the program's work over the past year, recent accomplishments, and plans for follow-on development work.
USDA-ARS?s Scientific Manuscript database
Accurate determination of predicted environmental concentrations (PECs) is a continuing and often elusive goal of pesticide risk assessment. PECs are typically derived using simulation models that depend on laboratory generated data for key input parameters (t1/2, Koc, etc.). Model flexibility in ...
NASA Astrophysics Data System (ADS)
Le Touz, N.; Toullier, T.; Dumoulin, J.
2017-05-01
The present study addresses the thermal behaviour of a modified pavement structure designed to prevent icing at its surface in adverse wintertime conditions or overheating in hot summer conditions. First, a multi-physics model based on the infinite element method was built to predict the evolution of the surface temperature. In a second step, laboratory experiments on small specimens were carried out and the surface temperature was monitored by infrared thermography. The results obtained are analyzed, and the performance of the numerical model for real-scale outdoor applications is discussed. Finally, conclusions and perspectives are proposed.
NASA Astrophysics Data System (ADS)
Li, Shuangcai; Duffy, Christopher J.
2011-03-01
Our ability to predict complex environmental fluid flow and transport hinges on accurate and efficient simulations of multiple physical phenomena operating simultaneously over a wide range of spatial and temporal scales, including overbank floods, coastal storm surge events, drying and wetting bed conditions, and simultaneous bed form evolution. This research implements a fully coupled strategy for solving shallow water hydrodynamics, sediment transport, and morphological bed evolution in rivers and floodplains (PIHM_Hydro) and applies the model to field and laboratory experiments that cover a wide range of spatial and temporal scales. The model uses a standard upwind finite volume method and Roe's approximate Riemann solver for unstructured grids. A multidimensional linear reconstruction and slope limiter are implemented, achieving second-order spatial accuracy. Model efficiency and stability are treated using an explicit-implicit method for temporal discretization with operator splitting. Laboratory- and field-scale experiments were compiled where coupled processes across a range of scales were observed and where higher-order spatial and temporal accuracy might be needed for accurate and efficient solutions. These experiments demonstrate the ability of the fully coupled strategy in capturing dynamics of field-scale flood waves and small-scale drying-wetting processes.
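As a much-reduced illustration of the finite-volume machinery described above, the sketch below solves a one-dimensional dam-break problem with first-order HLL fluxes; it is deliberately simpler than the unstructured, second-order Roe scheme with slope limiting used in PIHM_Hydro, and all settings are arbitrary.

    # Sketch: first-order finite-volume solver for the 1-D shallow water equations with an
    # HLL approximate Riemann solver (far simpler than the model described in the abstract).
    import numpy as np

    g = 9.81
    nx, L_dom, t_end, cfl = 200, 10.0, 0.5, 0.4
    dx = L_dom / nx
    x = (np.arange(nx) + 0.5) * dx

    h = np.where(x < 0.5 * L_dom, 2.0, 1.0)    # dam-break initial condition (m)
    hu = np.zeros(nx)                          # discharge per unit width (m^2/s)

    def flux(h, hu):
        u = hu / h
        return np.array([hu, hu * u + 0.5 * g * h * h])

    t = 0.0
    while t < t_end:
        c = np.sqrt(g * h)
        dt = min(cfl * dx / np.max(np.abs(hu / h) + c), t_end - t)

        # Left/right states at each interior interface.
        hL, hR, huL, huR = h[:-1], h[1:], hu[:-1], hu[1:]
        uL, uR = huL / hL, huR / hR
        cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
        sL = np.minimum(uL - cL, uR - cR)      # fastest left-going wave speed
        sR = np.maximum(uL + cL, uR + cR)      # fastest right-going wave speed

        FL, FR = flux(hL, huL), flux(hR, huR)
        UL, UR = np.array([hL, huL]), np.array([hR, huR])
        F_hll = (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)
        F = np.where(sL >= 0.0, FL, np.where(sR <= 0.0, FR, F_hll))

        # Conservative update of interior cells; end cells act as transmissive boundaries.
        h[1:-1] -= dt / dx * (F[0, 1:] - F[0, :-1])
        hu[1:-1] -= dt / dx * (F[1, 1:] - F[1, :-1])
        t += dt

    print("depth range after the dam break:", h.min(), "to", h.max())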
15. YAZOO BACKWATER PUMPING STATION MODEL, YAZOO RIVER BASIN (MODEL ...
15. YAZOO BACKWATER PUMPING STATION MODEL, YAZOO RIVER BASIN (MODEL SCALE: 1' = 26'). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS
A plausible and consistent model is developed to obtain a quantitative description of the gradual disappearance of hexavalent chromium (Cr(VI)) from groundwater in a small-scale field tracer test and in batch kinetic experiments using aquifer sediments under similar chemical cond...
Hypersonic Glider Model in Full Scale Tunnel 1957
1957-09-07
L57-1439 A model based on Langley's concept of a hypersonic glider was test flown on an umbilical cord inside the Full Scale Tunnel in 1957. Photograph published in Engineer in Charge: A History of the Langley Aeronautical Laboratory, 1917-1958 by James R. Hansen. Page 374.
Hatanaka, N; Yamamoto, Y; Ichihara, K; Mastuo, S; Nakamura, Y; Watanabe, M; Iwatani, Y
2008-04-01
Various scales have been devised to predict development of pressure ulcers on the basis of clinical and laboratory data, such as the Braden Scale (Braden score), which is used to monitor activity and skin conditions of bedridden patients. However, none of these scales facilitates clinically reliable prediction. To develop a clinical laboratory data-based predictive equation for the development of pressure ulcers. Subjects were 149 hospitalised patients with respiratory disorders who were monitored for the development of pressure ulcers over a 3-month period. The proportional hazards model (Cox regression) was used to analyse the results of 12 basic laboratory tests on the day of hospitalisation in comparison with the Braden score. Pressure ulcers developed in 38 patients within the study period. A Cox regression model consisting solely of Braden scale items showed that none of these items contributed to significantly predicting pressure ulcers. Rather, a combination of haemoglobin (Hb), C-reactive protein (CRP), albumin (Alb), age, and gender produced the best model for prediction. Using the set of explanatory variables, we created a new indicator based on a multiple logistic regression equation. The new indicator showed high sensitivity (0.73) and specificity (0.70), and its diagnostic power was higher than that of Alb, Hb, CRP, or the Braden score alone. The new indicator may become a more useful clinical tool for predicting pressure ulcers than the Braden score. The new indicator warrants verification studies to facilitate its clinical implementation in the future.
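The multiple-logistic-regression step described above can be illustrated as in the sketch below, which fits a predictor on synthetic Hb/CRP/Alb/age/gender data and reports sensitivity and specificity at a 0.5 threshold; the data and coefficients are fabricated for illustration and carry no clinical meaning.

    # Sketch: multiple logistic regression predictor for pressure-ulcer development,
    # evaluated by sensitivity and specificity. The data are synthetic and have no
    # clinical meaning; they only illustrate the modelling workflow.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 300
    hb = rng.normal(12.5, 1.8, n)       # haemoglobin (g/dL)
    crp = rng.exponential(3.0, n)       # C-reactive protein (mg/dL)
    alb = rng.normal(3.6, 0.5, n)       # albumin (g/dL)
    age = rng.normal(72.0, 10.0, n)     # years
    male = rng.integers(0, 2, n)        # gender indicator

    # Synthetic "truth": lower Hb/Alb and higher CRP/age raise the risk.
    logit = (-1.0 - 0.4 * (hb - 12.5) + 0.2 * crp
             - 1.2 * (alb - 3.6) + 0.04 * (age - 72.0) + 0.3 * male)
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([hb, crp, alb, age, male])
    pred = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1] >= 0.5

    tp, fn = np.sum(pred & y), np.sum(~pred & y)
    tn, fp = np.sum(~pred & ~y), np.sum(pred & ~y)
    print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")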
Validity of thermally-driven small-scale ventilated filling box models
NASA Astrophysics Data System (ADS)
Partridge, Jamie L.; Linden, P. F.
2013-11-01
The majority of previous work studying building ventilation flows at laboratory scale has used saline plumes in water. The production of buoyancy forces using salinity variations in water allows dynamic similarity between the small-scale models and the full-scale flows. However, in some situations, such as including the effects of non-adiabatic boundaries, the use of a thermal plume is desirable. The efficacy of using temperature differences to produce buoyancy-driven flows representing natural ventilation of a building in a small-scale model is examined here, with comparisons between previous theoretical work and new, heat-based experiments.
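The dynamic-similarity argument behind comparing saline and thermal experiments is that, in the Boussinesq limit, only the reduced gravity of the source matters, however it is produced (a general relation, not specific to these experiments):

\[
g' \;=\; g\,\frac{\Delta\rho}{\rho_0} \;\approx\;
\begin{cases}
g\,\beta\,\Delta T & \text{thermal plume,}\\
g\,\beta_S\,\Delta S & \text{saline plume,}
\end{cases}
\]

so a heat-driven laboratory flow reproduces a salt-driven (or full-scale buoyancy-driven) one provided the source buoyancy flux B = g'Q and the dimensionless groups built from it are matched; non-adiabatic boundaries break this equivalence, which is the motivation given above for testing thermal plumes directly.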
On the physical properties of volcanic rock masses
NASA Astrophysics Data System (ADS)
Heap, M. J.; Villeneuve, M.; Ball, J. L.; Got, J. L.
2017-12-01
The physical properties (e.g., elastic properties, porosity, permeability, cohesion, strength, amongst others) of volcanic rocks are crucial input parameters for modelling volcanic processes. These parameters, however, are often poorly constrained and there is an apparent disconnect between modellers and those who measure/determine rock and rock mass properties. Although it is well known that laboratory measurements are scale dependent, experimentalists, field volcanologists, and modellers should work together to provide the most appropriate model input parameters. Our pluridisciplinary approach consists of (1) discussing with modellers to better understand their needs, (2) using experimental know-how to build an extensive database of volcanic rock properties, and (3) using geotechnical and field-based volcanological know-how to address scaling issues. For instance, increasing the lengthscale of interest from the laboratory-scale to the volcano-scale will reduce the elastic modulus and strength and increase permeability, but to what extent? How variable are the physical properties of volcanic rocks, and is it appropriate to assume constant, isotropic, and/or homogeneous values for volcanoes? How do alteration, depth, and temperature influence rock physical and mechanical properties? Is rock type important, or do rock properties such as porosity exert a greater control on such parameters? How do we upscale these laboratory-measured properties to rock mass properties using the "fracturedness" of a volcano or volcanic outcrop, and how do we quantify fracturedness? We hope to discuss and, where possible, address some of these issues through active discussion between two (or more) scientific communities.
26. CURRENT METERS WITH FOLDING SCALE (MEASURED IN INCHES) IN FOREGROUND: GURLEY MODEL NO. 665 AT CENTER, GURLEY MODEL NO. 625 'PYGMY' CURRENT METER AT LEFT, AND WES MINIATURE PRICE-TYPE CURRENT METER AT RIGHT. - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS
Use of a PhET Interactive Simulation in General Chemistry Laboratory: Models of the Hydrogen Atom
ERIC Educational Resources Information Center
Clark, Ted M.; Chamberlain, Julia M.
2014-01-01
An activity supporting the PhET interactive simulation, Models of the Hydrogen Atom, has been designed and used in the laboratory portion of a general chemistry course. This article describes the framework used to successfully accomplish implementation on a large scale. The activity guides students through a comparison and analysis of the six…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Illangasekare, Tissa; Trevisan, Luca; Agartan, Elif
2015-03-31
Carbon Capture and Storage (CCS) represents a technology aimed at reducing the atmospheric loading of CO2 from power plants and heavy industries by injecting it into deep geological formations, such as saline aquifers. A number of trapping mechanisms contribute to effective and secure storage of the injected CO2, in supercritical fluid phase (scCO2), in the formation over the long term. The primary trapping mechanisms are structural, residual, dissolution and mineralization. Knowledge gaps exist on how the heterogeneity of the formation, manifested at all scales from the pore to the site scale, affects trapping and the parameterization of contributing mechanisms in models. An experimental and modeling study was conducted to fill these knowledge gaps. Experimental investigation of fundamental processes and mechanisms in field settings is not possible, as it is not feasible to fully characterize the geologic heterogeneity at all relevant scales or to gather data on migration, trapping and dissolution of scCO2. Laboratory experiments using scCO2 under ambient conditions are also not feasible, as it is technically challenging and cost prohibitive to develop large, two- or three-dimensional test systems with controlled high pressures to keep the scCO2 as a liquid. Hence, an innovative approach that used surrogate fluids in place of scCO2 and formation brine, in multi-scale synthetic aquifer test systems ranging from the centimeter to the meter scale, was developed and used. New modeling algorithms were developed to capture the processes controlled by the formation heterogeneity, and they were tested using the data from the laboratory test systems. The results and findings are expected to contribute toward better conceptual models, future improvements to DOE numerical codes, more accurate assessment of storage capacities, and optimized placement strategies. This report presents the experimental and modeling methods and research results.
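A hedged note on the surrogate-fluid approach: analogue fluid pairs of this kind are usually selected by matching dimensionless groups such as the capillary and Bond numbers (one common convention shown below); the abstract does not list the exact groups used in this study.

$$ \mathrm{Ca} = \frac{\mu\,v}{\sigma}, \qquad \mathrm{Bo} = \frac{\Delta\rho\,g\,L^{2}}{\sigma} , $$

with $\mu$ the displacing-fluid viscosity, $v$ a characteristic velocity, $\sigma$ the interfacial tension, $\Delta\rho$ the fluid density difference and $L$ a characteristic pore or grain length.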
NASA Astrophysics Data System (ADS)
Schellart, Wouter P.; Strak, Vincent
2016-10-01
We present a review of the analogue modelling method, which has been used for 200 years, and continues to be used, to investigate geological phenomena and geodynamic processes. We particularly focus on the following four components: (1) the different fundamental modelling approaches that exist in analogue modelling; (2) the scaling theory and scaling of topography; (3) the different materials and rheologies that are used to simulate the complex behaviour of rocks; and (4) a range of recording techniques that are used for qualitative and quantitative analyses and interpretations of analogue models. Furthermore, we apply these four components to laboratory-based subduction models and describe some of the issues at hand with modelling such systems. Over the last 200 years, a wide variety of analogue materials have been used with different rheologies, including viscous materials (e.g. syrups, silicones, water), brittle materials (e.g. granular materials such as sand, microspheres and sugar), plastic materials (e.g. plasticine), visco-plastic materials (e.g. paraffin, waxes, petrolatum) and visco-elasto-plastic materials (e.g. hydrocarbon compounds and gelatins). These materials have been used in many different set-ups to study processes from the microscale, such as porphyroclast rotation, to the mantle scale, such as subduction and mantle convection. Despite the wide variety of modelling materials and great diversity in model set-ups and processes investigated, all laboratory experiments can be classified into one of three different categories based on three fundamental modelling approaches that have been used in analogue modelling: (1) The external approach, (2) the combined (external + internal) approach, and (3) the internal approach. In the external approach and combined approach, energy is added to the experimental system through the external application of a velocity, temperature gradient or a material influx (or a combination thereof), and so the system is open. In the external approach, all deformation in the system is driven by the externally imposed condition, while in the combined approach, part of the deformation is driven by buoyancy forces internal to the system. In the internal approach, all deformation is driven by buoyancy forces internal to the system and so the system is closed and no energy is added during an experimental run. In the combined approach, the externally imposed force or added energy is generally not quantified nor compared to the internal buoyancy force or potential energy of the system, and so it is not known if these experiments are properly scaled with respect to nature. The scaling theory requires that analogue models are geometrically, kinematically and dynamically similar to the natural prototype. Direct scaling of topography in laboratory models indicates that it is often significantly exaggerated. This can be ascribed to (1) The lack of isostatic compensation, which causes topography to be too high. (2) The lack of erosion, which causes topography to be too high. (3) The incorrect scaling of topography when density contrasts are scaled (rather than densities); In isostatically supported models, scaling of density contrasts requires an adjustment of the scaled topography by applying a topographic correction factor. (4) The incorrect scaling of externally imposed boundary conditions in isostatically supported experiments using the combined approach; When externally imposed forces are too high, this creates topography that is too high. 
Other processes that also affect surface topography in laboratory models but not in nature (or only in a negligible way) include surface tension (for models using fluids) and shear zone dilatation (for models using granular material), but these will generally only affect the model surface topography on relatively short horizontal length scales of the order of several mm across material boundaries and shear zones, respectively.
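For reference, the dynamic-similarity condition underlying points (3) and (4) is commonly expressed through the stress scale factor (a standard relation rather than one specific to this review):

$$ \sigma^{*} = \rho^{*}\, g^{*}\, L^{*} , $$

where starred quantities are model-to-nature ratios and $g^{*} = 1$ for experiments run under normal gravity; because isostatically supported topography depends on density contrasts rather than absolute densities, exaggerating the contrasts requires the topographic correction factor mentioned above.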
DESIGN OF A SURFACTANT REMEDIATION FIELD DEMONSTRATION BASED ON LABORATORY AND MODELING STUDIES
Surfactant-enhanced subsurface remediation is being evaluated as an innovative technology for expediting ground-water remediation. This paper reports on laboratory and modeling studies conducted in preparation for a pilot-scale field test of surfactant-enhanced subsurface remedia...
ERIC Educational Resources Information Center
Duarte, B. P. M.; Coelho Pinheiro, M. N.; Silva, D. C. M.; Moura, M. J.
2006-01-01
The experiment described is an excellent opportunity to apply theoretical concepts of distillation, thermodynamics of mixtures and process simulation at laboratory scale, and simultaneously enhance the ability of students to operate, control and monitor complex units.
Measured acoustic characteristics of ducted supersonic jets at different model scales
NASA Technical Reports Server (NTRS)
Jones, R. R., III; Ahuja, K. K.; Tam, Christopher K. W.; Abdelwahab, M.
1993-01-01
A large-scale (about a 25x enlargement) model of the Georgia Tech Research Institute (GTRI) hardware was installed and tested in the Propulsion Systems Laboratory of the NASA Lewis Research Center. Acoustic measurements made in these two facilities are compared and the similarity in acoustic behavior over the scale range under consideration is highlighted. The study provides acoustic data over a relatively large scale range, which may be used to demonstrate the validity of the scaling methods employed in the investigation of this phenomenon.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juanes, Ruben
The overall goals of this research are: (1) to determine the physical fate of single and multiple methane bubbles emitted to the water column by dissociating gas hydrates at seep sites deep within the hydrate stability zone or at the updip limit of gas hydrate stability, and (2) to quantitatively link theoretical and laboratory findings on methane transport to the analysis of real-world field-scale methane plume data placed within the context of the degrading methane hydrate province on the US Atlantic margin. The project is arranged to advance on three interrelated fronts (numerical modeling, laboratory experiments, and analysis of field-based plume data) simultaneously. The fundamental objectives of each component are the following: Numerical modeling: Constraining the conditions under which rising bubbles become armored with hydrate, the impact of hydrate armoring on the eventual fate of a bubble’s methane, and the role of multiple bubble interactions in survival of methane plumes to very shallow depths in the water column. Laboratory experiments: Exploring the parameter space (e.g., bubble size, gas saturation in the liquid phase, “proximity” to the stability boundary) for formation of a hydrate shell around a free bubble in water, the rise rate of such bubbles, and the bubble’s acoustic characteristics using field-scale frequencies. Field component: Extending the results of numerical modeling and laboratory experiments to the field scale using brand new, existing, public-domain, state-of-the-art real-world data on US Atlantic margin methane seeps, without acquiring new field data in the course of this particular project. This component quantitatively analyzes data on Atlantic margin methane plumes and places those new plumes and their corresponding seeps within the context of gas hydrate degradation processes on this margin.
The design of dapog rice seeder model for laboratory scale
NASA Astrophysics Data System (ADS)
Purba, UI; Rizaldi, T.; Sumono; Sigalingging, R.
2018-02-01
The dapog system is a method of seeding rice using a special nursery tray. Rice seeding with the dapog system can produce seedlings in the form of higher quality and more uniform seed rolls. This study aims to reduce the cost of making a large-scale apparatus by designing a small-scale model that can be used for learning in the laboratory. The parameters observed were the uniformity of soil, seeds and fertilizer; the losses of soil, seeds and fertilizer; the effective capacity of the apparatus; and the power requirement. The results showed high uniformity of soil, seeds and fertilizer: 92.8%, 1-3 seeds/cm2 and 82%, respectively. The scattered material for soil, seeds and fertilizer was 6.23%, 2.7% and 2.23%, respectively. The effective capacity of the apparatus was 360 boxes/hour with 237.5 kWh of required power.
Photonically enabled Ka-band radar and infrared sensor subscale testbed
NASA Astrophysics Data System (ADS)
Lohr, Michele B.; Sova, Raymond M.; Funk, Kevin B.; Airola, Marc B.; Dennis, Michael L.; Pavek, Richard E.; Hollenbeck, Jennifer S.; Garrison, Sean K.; Conard, Steven J.; Terry, David H.
2014-10-01
A subscale radio frequency (RF) and infrared (IR) testbed using novel RF-photonics techniques for generating radar waveforms is currently under development at The Johns Hopkins University Applied Physics Laboratory (JHU/APL) to study target scenarios in a laboratory setting. The linearity of Maxwell's equations allows the use of millimeter wavelengths and scaled-down target models to emulate full-scale RF scene effects. Coupled with passive IR and visible sensors, target motions and heating, and a processing and algorithm development environment, this testbed provides a means to flexibly and cost-effectively generate and analyze multi-modal data for a variety of applications, including verification of digital model hypotheses, investigation of correlated phenomenology, and aiding system capabilities assessment. In this work, concept feasibility is demonstrated for simultaneous RF, IR, and visible sensor measurements of heated, precessing, conical targets and of a calibration cylinder. Initial proof-of-principle results are shown of the Ka-band subscale radar, which models S-band for 1/10th scale targets, using stretch processing and Xpatch models.
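The scaling argument can be stated compactly (illustrative band values assumed): since Maxwell's equations are scale invariant, shrinking the target by a factor $s$ while shrinking the wavelength by the same factor leaves the scattering behaviour unchanged,

$$ \lambda_{\mathrm{model}} = s\,\lambda_{\mathrm{full}}, \qquad f_{\mathrm{model}} = \frac{f_{\mathrm{full}}}{s} , $$

so for $s = 1/10$ a full-scale S-band signature near 3.5 GHz is reproduced at roughly 35 GHz, i.e. in Ka-band.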
Posttest analysis of the 1:6-scale reinforced concrete containment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfeiffer, P.A.; Kennedy, J.M.; Marchertas, A.H.
A prediction of the response of the Sandia National Laboratories 1:6-scale reinforced concrete containment model test was made by Argonne National Laboratory. ANL, along with nine other organizations, performed a detailed nonlinear response analysis of the 1:6-scale model containment subjected to overpressurization in the fall of 1986. The two-dimensional code TEMP-STRESS and the three-dimensional NEPTUNE code were utilized (1) to predict the global response of the structure, (2) to identify global failure sites and the corresponding failure pressures and (3) to identify some local failure sites and pressure levels. A series of axisymmetric models was studied with the two-dimensional computer program TEMP-STRESS. The comparison of these pretest computations with test data from the containment model has provided a test for the capability of the respective finite element codes to predict global failure modes, and hence serves as a validation of these codes. Only the two-dimensional analyses will be discussed in this paper. 3 refs., 10 figs.
2016-05-24
…is to obtain high-fidelity experimental data critically needed to validate research codes at relevant conditions, and to develop systematic and … validated with experimental data. However, the time and length scales, and energy deposition rates, in the canonical laboratory flames that have been studied over the …
NASA Astrophysics Data System (ADS)
Higgins, N.; Lapusta, N.
2014-12-01
Many large earthquakes on natural faults are preceded by smaller events, often termed foreshocks, that occur close in time and space to the larger event that follows. Understanding the origin of such events is important for understanding earthquake physics. Unique laboratory experiments of earthquake nucleation in a meter-scale slab of granite (McLaskey and Kilgore, 2013; McLaskey et al., 2014) demonstrate that sample-scale nucleation processes are also accompanied by much smaller seismic events. One potential explanation for these foreshocks is that they occur on small asperities - or bumps - on the fault interface, which may also be the locations of smaller critical nucleation size. We explore this possibility through 3D numerical simulations of a heterogeneous 2D fault embedded in a homogeneous elastic half-space, in an attempt to qualitatively reproduce the laboratory observations of foreshocks. In our model, the simulated fault interface is governed by rate-and-state friction with laboratory-relevant frictional properties, fault loading, and fault size. To create favorable locations for foreshocks, the fault surface heterogeneity is represented as patches of increased normal stress, decreased characteristic slip distance L, or both. Our simulation results indicate that one can create a rate-and-state model of the experimental observations. Models with a combination of higher normal stress and lower L at the patches are closest to matching the laboratory observations of foreshocks in moment magnitude, source size, and stress drop. In particular, we find that, when the local compression is increased, foreshocks can occur on patches that are smaller than theoretical critical nucleation size estimates. The additional inclusion of lower L for these patches helps to keep stress drops within the range observed in experiments, and is compatible with the asperity model of foreshock sources, since one would expect more compressed spots to be smoother (and hence have lower L). In this heterogeneous rate-and-state fault model, the foreshocks interact with each other and with the overall nucleation process through their postseismic slip. The interplay amongst foreshocks, and between foreshocks and the larger-scale nucleation process, is a topic of our future work.
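For context, the constitutive framework referred to here is standard rate-and-state friction with the aging law, and the critical nucleation size enters through estimates of the form shown below (prefactors differ between published estimates):

$$ \mu = \mu_{0} + a\ln\frac{V}{V_{0}} + b\ln\frac{V_{0}\theta}{L}, \qquad \dot{\theta} = 1 - \frac{V\theta}{L}, \qquad h^{*} \propto \frac{G\,L}{(b-a)\,\bar{\sigma}} , $$

so patches with locally higher normal stress $\bar{\sigma}$ and smaller $L$ have a smaller critical nucleation size, which is why they are candidate foreshock sources in the simulations.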
NASA Astrophysics Data System (ADS)
Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao
2017-01-01
Multi-scale high-resolution modeling of rock failure process is a powerful means in modern rock mechanics studies to reveal the complex failure mechanism and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation, damage to failure, has raised high requirements on the design, implementation scheme and computation capacity of the numerical software system. This study is aimed at developing the parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator that is capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties, deal with and represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows - Linux interactive platform. A numerical model is built to test the parallel performance of FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computation efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In laboratory-scale simulation, the well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In field-scale simulation, the formation process of net fracture spacing from initiation, propagation to saturation can be revealed completely. In engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.
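A minimal sketch of the statistical meso-damage idea on which such simulators are built, assuming Weibull-distributed element strengths and a simple modulus-degradation rule (the actual RFPA damage laws, element formulation and parallel solver are considerably more elaborate):

```python
# Sketch of statistical meso-damage heterogeneity: element strengths drawn from
# a Weibull distribution; an element's modulus is degraded once the applied
# stress exceeds its strength. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_elem = 10_000
m = 3.0                 # Weibull homogeneity index (assumed)
sigma_0 = 100.0         # characteristic element strength, MPa (assumed)
E0 = 30e3               # intact elastic modulus, MPa (assumed)

strength = sigma_0 * rng.weibull(m, n_elem)   # heterogeneous strengths
damage = np.zeros(n_elem)                     # damage variable D per element

for applied_stress in np.linspace(0.0, 150.0, 7):   # quasi-static load steps
    damage[applied_stress > strength] = 0.9          # damage failed elements
    E = E0 * (1.0 - damage)                          # degraded modulus E = E0(1 - D)
    frac = (damage > 0).mean()
    print(f"stress {applied_stress:6.1f} MPa: {frac*100:5.1f}% elements damaged, "
          f"mean modulus {E.mean():8.1f} MPa")
```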
Yan, Shoubao; Chen, Xiangsong; Wu, Jingyong; Wang, Pingchao
2013-07-01
The aim of this study was to develop a bioprocess to produce ethanol from food waste at laboratory, semipilot and pilot scales. Laboratory tests demonstrated that ethanol fermentation with a reducing sugar concentration of 200 g/L, an inoculum size of 2 % (initial cell number 2 × 10⁶ CFU/mL) and addition of YEP (3 g/L of yeast extract and 5 g/L of peptone) was the best choice. The maximum ethanol concentration at laboratory scale (93.86 ± 1.15 g/L) was in satisfactory agreement with that at semipilot scale (93.79 ± 1.11 g/L), but lower than that (96.46 ± 1.12 g/L) at pilot scale. Similar ethanol yields and volumetric ethanol productivities of 0.47 ± 0.02 g/g and 1.56 ± 0.03 g/L/h, and 0.47 ± 0.03 g/g and 1.56 ± 0.03 g/L/h, were obtained after 60 h of fermentation in the laboratory and semipilot fermentors, respectively; however, both were lower than those (0.48 ± 0.02 g/g, 1.79 ± 0.03 g/L/h) of the pilot reactor. In addition, simple models were developed to predict the fermentation kinetics during the scale-up process, and they were successfully applied to simulate the experimental results.
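The reported figures are internally consistent; for the laboratory fermentor, and assuming essentially complete consumption of the 200 g/L reducing sugar,

$$ Q_{P} = \frac{93.86\ \mathrm{g/L}}{60\ \mathrm{h}} \approx 1.56\ \mathrm{g\,L^{-1}\,h^{-1}}, \qquad Y_{P/S} \approx \frac{93.86\ \mathrm{g/L}}{200\ \mathrm{g/L}} \approx 0.47\ \mathrm{g/g} , $$

matching the quoted volumetric productivity and yield.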
Jiao, Jialong; Ren, Huilong; Adenya, Christiaan Adika; Chen, Chaohe
2017-01-01
Wave-induced motion and load responses are important criteria for ship performance evaluation. Physical experiments have long been an indispensable tool in the prediction of a ship's navigation state, speed, motions, accelerations, sectional loads and wave impact pressure. Currently, the majority of experiments are conducted in a laboratory tank environment, where the wave conditions differ from realistic sea waves. In this paper, a laboratory tank testing system for ship motion and load measurement is reviewed and reported first. Then, a novel large-scale model measurement technique is developed, based on the laboratory testing foundations, to obtain accurate motion and load responses of ships in realistic sea conditions. For this purpose, an advanced remote-control and telemetry experimental system was developed in-house to allow for the implementation of large-scale model seakeeping measurements at sea. The experimental system includes a series of sensors, e.g., a Global Positioning System/Inertial Navigation System (GPS/INS) module, course top, optical fiber sensors, strain gauges, pressure sensors and accelerometers. The developed measurement system was tested by field experiments in coastal seas, which indicates that the proposed large-scale model testing scheme is capable and feasible. Meaningful data, including ocean environment parameters, ship navigation state, motions and loads, were obtained through the sea trial campaign. PMID:29109379
Chlor-Alkali Industry: A Laboratory Scale Approach
ERIC Educational Resources Information Center
Sanchez-Sanchez, C. M.; Exposito, E.; Frias-Ferrer, A.; Gonzalez-Garaia, J.; Monthiel, V.; Aldaz, A.
2004-01-01
A laboratory experiment for students in the last year of a degree program in chemical engineering, chemistry, or industrial chemistry is presented. It models the chlor-alkali process, one of the most important industrial applications of electrochemical technology and the second largest industrial consumer of electricity after the aluminium industry.
Interaction Effects of Simultaneous Torsional and Compressional Cyclic Loading of Sand.
1979-12-01
An experimental research program based on laboratory test studies and scaled slope model tests was conducted with specimens of Monterey No. 0 sand. The principal objective of the research was to study the effects of interactive coupling during combined compression (normal) and shear loading on the response of...
NASA Astrophysics Data System (ADS)
Altun, F.; Birdal, F.
2012-12-01
In this study, a 1:3-scale, three-storey, FRP (Fiber Reinforced Polymer) retrofitted reinforced concrete model structure, whose behaviour and crack development had been identified experimentally in the laboratory, was investigated analytically. Determination of structural behaviour under earthquake loading is normally only possible in a laboratory environment at a specific scale, since full structural experiments are difficult to carry out owing to the large number of parameters to be evaluated and the expensive laboratory setup required. In the analytical study, the structure was modelled using the ANSYS Finite Element Package Program (2007), and its behaviour and crack development were determined. When experimental difficulties are taken into consideration, analytical investigation of structural behaviour is more economical and much faster. At the end of the study, the experimental results for structural behaviour and crack development were compared with the analytical data. It was concluded that, for a model structure retrofitted with FRP, the behaviour and cracking pattern can be determined without testing, provided that the points where the analytical results do not converge with the experimental data are identified and explained. The study thus enables a better analytical understanding of structural behaviour.
Attempting to bridge the gap between laboratory and seismic estimates of fracture energy
McGarr, A.; Fletcher, Joe B.; Beeler, N.M.
2004-01-01
To investigate the behavior of the fracture energy associated with expanding the rupture zone of an earthquake, we have used the results of a large-scale, biaxial stick-slip friction experiment to set the parameters of an equivalent dynamic rupture model. This model is determined by matching the fault slip, the static stress drop and the apparent stress. After confirming that the fracture energy associated with this model earthquake is in reasonable agreement with corresponding laboratory values, we can use it to determine fracture energies for earthquakes as functions of stress drop, rupture velocity and fault slip. If we take account of the state of stress at seismogenic depths, the model extrapolation to larger fault slips yields fracture energies that agree with independent estimates by others based on dynamic rupture models for large earthquakes. For fixed stress drop and rupture speed, the fracture energy scales linearly with fault slip.
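One way to rationalize the reported linear scaling (a hedged illustration, not the authors' derivation) is through the slip-weakening description of the breakdown zone,

$$ G = \tfrac{1}{2}\,(\tau_{p} - \tau_{r})\,D_{c} , $$

so that if the breakdown stress drop stays roughly constant while the slip-weakening distance $D_c$ grows in proportion to the total slip, the fracture energy scales linearly with fault slip.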
Inquiry in the Physical Geology Classroom: Supporting Students' Conceptual Model Development
ERIC Educational Resources Information Center
Miller, Heather R.; McNeal, Karen S.; Herbert, Bruce E.
2010-01-01
This study characterizes the impact of an inquiry-based learning (IBL) module versus a traditionally structured laboratory exercise. Laboratory sections were randomized into experimental and control groups. The experimental group was taught using IBL pedagogical techniques and included manipulation of large-scale data-sets, use of multiple…
ERIC Educational Resources Information Center
Evans, Alexandra L.; Messersmith, Reid E.; Green, David B.; Fritsch, Joseph M.
2011-01-01
We present an integrative laboratory investigation incorporating skills from inorganic chemistry, analytical instrumentation, and physical chemistry, applied to a laboratory-scale model of the environmental problem of chlorinated ethylenes in groundwater. Perchloroethylene (C2Cl4, PCE), a common dry cleaning solvent,…
Strain localization in models and nature: bridging the gaps.
NASA Astrophysics Data System (ADS)
Burov, E.; Francois, T.; Leguille, J.
2012-04-01
Mechanisms of strain localization and their role in tectonic evolution are still largely debated. Indeed, laboratory data on strain localization processes are not abundant, they do not cover the entire range of possible mechanisms, and they have to be extrapolated, sometimes with great uncertainty, to geological scales, while observations of localization processes at outcrop scale are scarce, not always representative, and usually difficult to quantify. Numerical thermo-mechanical models allow us to investigate the relative importance of some of the localization processes, whether they are hypothesized or observed at laboratory or outcrop scale. The numerical models can test different observationally or analytically derived laws in terms of their applicability to natural scales and tectonic processes. The models are limited, however, in their capacity to reproduce physical mechanisms, and necessarily simplify the softening laws leading to "numerical" localization. Numerical strain localization is also limited by grid resolution and by the ability of specific numerical codes to handle large strains and the complexity of the associated physical phenomena. Hence, multiple iterations between observations and models are needed to elucidate the causes of strain localization in nature. We here investigate the relative impact of different weakening laws on localization of deformation using large-strain thermo-mechanical models. Using several "generic" rifting and collision settings, we test the implications of structural softening, tectonic heritage, shear heating, friction angle and cohesion softening, ductile softening (mimicking grain-size reduction), as well as a number of other mechanisms such as fluid-assisted phase changes. The results suggest that different mechanisms of strain localization may interfere in nature, yet in most cases it is not straightforward to establish quantifiable links between the laboratory data and the best-fitting parameters of the effective softening laws that allow large-scale tectonic evolution to be reproduced. For example, one of the most effective and widely used mechanisms of "numerical" strain localization is friction angle softening; yet this law, in particular, appears to be the most difficult to justify on physical and observational grounds.
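As an illustration of what such effective softening laws typically look like in large-strain codes (a generic sketch under assumed parameter values, not the authors' implementation), friction angle and cohesion can be reduced linearly with accumulated plastic strain down to residual values:

```python
# Generic strain-softening rule: linear reduction of friction angle and
# cohesion with accumulated plastic strain, capped at residual values.
# All parameter values are illustrative assumptions.
def softened_strength(eps_p, phi0=30.0, phi_min=10.0, c0=20e6, c_min=1e6, eps_c=0.5):
    """Return (friction_angle_deg, cohesion_Pa) at accumulated plastic strain eps_p."""
    f = min(eps_p / eps_c, 1.0)           # softening fraction, saturates at 1
    phi = phi0 - f * (phi0 - phi_min)     # friction-angle softening
    c = c0 - f * (c0 - c_min)             # cohesion softening
    return phi, c

for eps in (0.0, 0.25, 0.5, 1.0):
    print(eps, softened_strength(eps))
```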
Innovative mathematical modeling in environmental remediation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, Gour T.; National Central Univ.; Univ. of Central Florida
2013-05-01
There are two different ways to model reactive transport: ad hoc and innovative reaction-based approaches. The former, such as the Kd simplification of adsorption, has been widely employed by practitioners, while the latter has been mainly used in scientific communities for elucidating mechanisms of biogeochemical transport processes. It is believed that innovative mechanistic-based models could serve as protocols for environmental remediation as well. This paper reviews the development of a mechanistically coupled fluid flow, thermal transport, hydrologic transport, and reactive biogeochemical model and example applications to environmental remediation problems. Theoretical bases are sufficiently described. Four example problems previously carried out are used to demonstrate how numerical experimentation can be used to evaluate the feasibility of different remediation approaches. The first involved the application of a 56-species uranium tailing problem to the Melton Branch Subwatershed at Oak Ridge National Laboratory (ORNL) using the parallel version of the model. Simulations were made to demonstrate the potential mobilization of uranium and other chelating agents in the proposed waste disposal site. The second problem simulated a laboratory-scale system to investigate the role of natural attenuation in potential off-site migration of uranium from uranium mill tailings after restoration; it showed the inadequacy of using a single Kd even for a homogeneous medium. The third example simulated laboratory experiments involving extremely high concentrations of uranium, technetium, aluminum, nitrate, and toxic metals (e.g., Ni, Cr, Co). The fourth example modeled microbially-mediated immobilization of uranium in an unconfined aquifer using acetate amendment in a field-scale experiment. The purposes of these modeling studies were to simulate various mechanisms of mobilization and immobilization of radioactive wastes and to illustrate how to apply reactive transport models for environmental remediation.
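For readers unfamiliar with the Kd simplification criticized here: it assumes a linear, instantaneous sorption isotherm, which collapses all geochemistry into a single retardation factor,

$$ S = K_{d}\,C, \qquad R = 1 + \frac{\rho_{b}}{\theta}\,K_{d} , $$

where $S$ is the sorbed and $C$ the aqueous concentration, $\rho_b$ the bulk density and $\theta$ the water content; the simulations above show that no single $K_d$ (hence no single $R$) reproduces the observed uranium behaviour, even in a homogeneous medium.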
Post Audit of a Field Scale Reactive Transport Model of Uranium at a Former Mill Site
NASA Astrophysics Data System (ADS)
Curtis, G. P.
2015-12-01
Reactive transport of hexavalent uranium (U(VI)) in a shallow alluvial aquifer at a former uranium mill tailings site near Naturita, CO, has been monitored for nearly 30 years by the US Department of Energy and the US Geological Survey. Groundwater at the site has high concentrations of chloride, alkalinity and U(VI) owing to ore processing at the site from 1941 to 1974. We previously calibrated a multicomponent reactive transport model to data collected at the site from 1986 to 2001. A two-dimensional nonreactive transport model used a uniform hydraulic conductivity which was estimated from observed chloride concentrations and tritium-helium age dates. A reactive transport model for the 2 km long site was developed by including an equilibrium U(VI) surface complexation model calibrated to laboratory data and calcite equilibrium. The calibrated model reproduced both the nonreactive tracers and the observed U(VI), pH and alkalinity. Forward simulations for the period 2002-2015 conducted with the calibrated model predict significantly faster natural attenuation of U(VI) concentrations than has been observed, given the persistent high U(VI) concentrations at the site. Alternative modeling approaches are being evaluated using recent data to determine if the persistence can be explained by multirate mass transfer models developed from experimental observations at the column scale (~0.2 m), the laboratory tank scale (~2 m), the field tracer test scale (~1-4 m) or the geophysical observation scale (~1-5 m). Results of this comparison should provide insight into the persistence of U(VI) plumes and improved management options.
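The multirate mass transfer models mentioned are commonly written as first-order exchange between the mobile concentration and a suite of immobile domains (a standard formulation, shown schematically rather than as the site-specific model):

$$ \frac{\partial C_{\mathrm{im},j}}{\partial t} = \alpha_{j}\,(C_{m} - C_{\mathrm{im},j}), \qquad j = 1,\dots,N , $$

where the distribution of rate coefficients $\alpha_j$ controls how slowly U(VI) held in less-accessible domains is released, and hence how long the plume persists.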
Laboratory and theoretical models of planetary-scale instabilities and waves
NASA Technical Reports Server (NTRS)
Hart, John E.; Toomre, Juri
1990-01-01
Meteorologists and planetary astronomers interested in large-scale planetary and solar circulations recognize the importance of rotation and stratification in determining the character of these flows. In the past it has been impossible to accurately model the effects of sphericity on these motions in the laboratory because of the invariant relationship between the uni-directional terrestrial gravity and the rotation axis of an experiment. Researchers studied motions of rotating convecting liquids in spherical shells using electrohydrodynamic polarization forces to generate radial gravity, and hence centrally directed buoyancy forces, in the laboratory. The Geophysical Fluid Flow Cell (GFFC) experiments performed on Spacelab 3 in 1985 were analyzed. Recent efforts at interpretation led to numerical models of rotating convection with an aim to understand the possible generation of zonal banding on Jupiter and the fate of banana cells in rapidly rotating convection as the heating is made strongly supercritical. In addition, efforts to pose baroclinic wave experiments for future space missions using a modified version of the 1985 instrument led to theoretical and numerical models of baroclinic instability. Rather surprising properties were discovered, which may be useful in generating rational (rather than artificially truncated) models for nonlinear baroclinic instability and baroclinic chaos.
Preliminary design, analysis, and costing of a dynamic scale model of the NASA space station
NASA Technical Reports Server (NTRS)
Gronet, M. J.; Pinson, E. D.; Voqui, H. L.; Crawley, E. F.; Everman, M. R.
1987-01-01
The difficulty of testing the next generation of large flexible space structures on the ground places an emphasis on other means for validating predicted on-orbit dynamic behavior. Scale model technology represents one way of verifying analytical predictions with ground test data. This study investigates the preliminary design, scaling and cost trades for a Space Station dynamic scale model. The scaling of nonlinear joint behavior is studied from theoretical and practical points of view. Suspension system interaction trades are conducted for the ISS Dual Keel Configuration and Build-Up Stages suspended in the proposed NASA/LaRC Large Spacecraft Laboratory. Key issues addressed are scaling laws, replication vs. simulation of components, manufacturing, suspension interactions, joint behavior, damping, articulation capability, and cost. These issues are the subject of parametric trades versus the scale model factor. The results of these detailed analyses are used to recommend scale factors for four different scale model options, each with varying degrees of replication. Potential problems in constructing and testing the scale model are identified, and recommendations for further study are outlined.
Simulating flow in karst aquifers at laboratory and sub-regional scales using MODFLOW-CFP
NASA Astrophysics Data System (ADS)
Gallegos, Josue Jacob; Hu, Bill X.; Davis, Hal
2013-12-01
Groundwater flow in a well-developed karst aquifer dominantly occurs through bedding planes, fractures, conduits, and caves created by and/or enlarged by dissolution. Conventional groundwater modeling methods assume that groundwater flow is described by Darcian principles where primary porosity (i.e. matrix porosity) and laminar flow are dominant. However, in well-developed karst aquifers, the assumption of Darcian flow can be questionable. While Darcian flow generally occurs in the matrix portion of the karst aquifer, flow through conduits can be non-laminar where the relation between specific discharge and hydraulic gradient is non-linear. MODFLOW-CFP is a relatively new modeling program that accounts for non-laminar and laminar flow in pipes, like karst caves, within an aquifer. In this study, results from MODFLOW-CFP are compared to those from MODFLOW-2000/2005, a numerical code based on Darcy's law, to evaluate the accuracy that CFP can achieve when modeling flows in karst aquifers at laboratory and sub-regional (Woodville Karst Plain, Florida, USA) scales. In comparison with laboratory experiments, simulation results by MODFLOW-CFP are more accurate than MODFLOW 2005. At the sub-regional scale, MODFLOW-CFP was more accurate than MODFLOW-2000 for simulating field measurements of peak flow at one spring and total discharges at two springs for an observed storm event.
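The distinction being tested can be stated compactly with standard relations (not specific to this study): matrix flow follows Darcy's law, whereas turbulent conduit flow obeys a Darcy-Weisbach-type head loss,

$$ q = -K\,\nabla h, \qquad h_{f} = f\,\frac{L}{d}\,\frac{v^{2}}{2g} , $$

so in conduits the head loss grows roughly with the square of the velocity and discharge is no longer linear in the hydraulic gradient, which is the non-laminar behaviour that MODFLOW-CFP adds to the conventional laminar formulation.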
Laboratory development and testing of spacecraft diagnostics
NASA Astrophysics Data System (ADS)
Amatucci, William; Tejero, Erik; Blackwell, Dave; Walker, Dave; Gatling, George; Enloe, Lon; Gillman, Eric
2017-10-01
The Naval Research Laboratory's Space Chamber experiment is a large-scale laboratory device dedicated to the creation of large-volume plasmas with parameters scaled to realistic space plasmas. Such devices make valuable contributions to the investigation of space plasma phenomena under controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. However, in addition to investigations such as plasma wave and instability studies, such devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this talk, we will describe how the laboratory simulation of space plasmas made this development path possible. Work sponsored by the US Naval Research Laboratory Base Program.
NASA Astrophysics Data System (ADS)
Gorrick, S.; Rodriguez, J. F.
2011-12-01
A movable bed physical model was designed in a laboratory flume to simulate both bed and suspended load transport in a mildly sinuous sand-bed stream. Model simulations investigated the impact of different vegetation arrangements along the outer bank to evaluate rehabilitation options. Preserving similitude in the 1:16 laboratory model was very important. In this presentation the scaling approach, as well as the successes and challenges of the strategy are outlined. Firstly a near-bankfull flow event was chosen for laboratory simulation. In nature, bankfull events at the field site deposit new in-channel features but cause only small amounts of bank erosion. Thus the fixed banks in the model were not a drastic simplification. Next, and as in other studies, the flow velocity and turbulence measurements were collected in separate fixed bed experiments. The scaling of flow in these experiments was simply maintained by matching the Froude number and roughness levels. The subsequent movable bed experiments were then conducted under similar hydrodynamic conditions. In nature, the sand-bed stream is fairly typical; in high flows most sediment transport occurs in suspension and migrating dunes cover the bed. To achieve similar dynamics in the model equivalent values of the dimensionless bed shear stress and the particle Reynolds number were important. Close values of the two dimensionless numbers were achieved with lightweight sediments (R=0.3) including coal and apricot pips with a particle size distribution similar to that of the field site. Overall the moveable bed experiments were able to replicate the dominant sediment dynamics present in the stream during a bankfull flow and yielded relevant information for the analysis of the effects of riparian vegetation. There was a potential conflict in the strategy, in that grain roughness was exaggerated with respect to nature. The advantage of this strategy is that although grain roughness is exaggerated, the similarity of bedforms and resulting drag can return similar levels of roughness to those in the field site.
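The two dimensionless numbers referred to are presumably the Shields parameter and a particle Reynolds number (standard definitions shown below; the quantity quoted as R = 0.3 is the submerged specific gravity):

$$ \theta = \frac{\tau}{(\rho_{s} - \rho)\,g\,d}, \qquad Re_{*} = \frac{u_{*}\,d}{\nu}, \qquad R = \frac{\rho_{s} - \rho}{\rho} , $$

where $\tau$ is the bed shear stress, $u_*$ the shear velocity, $d$ the grain size and $\rho_s$, $\rho$ the sediment and fluid densities; lightweight sediments raise $\theta$ at model scale and so recover the suspension-dominated transport and dune-covered bed of the prototype.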
Characterization and Scaling of Heave Plates for Ocean Wave Energy Converters
NASA Astrophysics Data System (ADS)
Rosenberg, Brian; Mundon, Timothy
2016-11-01
Ocean waves present a tremendous, untapped source of renewable energy, capable of providing half of global electricity demand by 2040. Devices developed to extract this energy are known as wave energy converters (WECs) and encompass a wide range of designs. A somewhat common archetype is a two-body point-absorber, in which a surface float reacts against a submerged "heave" plate to extract energy. Newer WECs use increasingly complex geometries for the submerged plate, and an emerging challenge in creating low-order models lies in accurately determining the hydrodynamic coefficients (added mass and drag) in the corresponding oscillatory flow regime. Here we present experiments in which a laboratory-scale heave plate is sinusoidally forced in translation (heave) and rotation (pitch) to characterize the hydrodynamic coefficients as functions of the two governing nondimensional parameters, the Keulegan-Carpenter number (amplitude) and the Reynolds number. Comparisons against CFD simulations are offered. As laboratory-scale physical model tests remain the standard for testing wave energy devices, the effects and implications of scaling (with respect to a full-scale device) are also investigated.
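For reference, the governing parameters quoted are conventionally defined as below, and the hydrodynamic coefficients enter low-order models through a Morison-type force decomposition (one common convention, not necessarily the authors' exact formulation):

$$ KC = \frac{U_{m}T}{D}, \qquad Re = \frac{U_{m}D}{\nu}, \qquad F(t) = \tfrac{1}{2}\,\rho\,C_{d}\,A\,|u|\,u + \rho\,C_{m}\,V\,\dot{u}, \quad C_{m} = 1 + C_{a} , $$

where $U_m$ and $T$ are the oscillation velocity amplitude and period, $D$ a characteristic plate dimension, $A$ the projected area and $V$ a reference (displaced) volume.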
Shumba, Edwin; Nzombe, Phoebe; Mbinda, Absolom; Simbi, Raiva; Mangwanya, Douglas; Kilmarx, Peter H; Luman, Elizabeth T; Zimuto, Sibongile N
2014-01-01
In 2010, the Zimbabwe Ministry of Health and Child Welfare (MoHCW) adopted the Strengthening Laboratory Management Toward Accreditation (SLMTA) programme as a tool for laboratory quality systems strengthening. The aims of this study were to evaluate the financial costs of SLMTA implementation using two models (external facilitators, and internal local or MoHCW facilitators) from the perspective of the implementing partner, and to estimate the resources needed to scale up the programme nationally in all 10 provinces. The average expenditure per laboratory was calculated based on accounting records; calculations included implementing partner expenses but excluded in-kind contributions and salaries of local facilitators and trainees. We also estimated theoretical financial costs, keeping all contextual variables constant across the two models. Resource needs for future national expansion were estimated based on a two-phase implementation plan, in which 12 laboratories in each of five provinces would implement SLMTA per phase; for the internal facilitator model, 20 facilitators would be trained at the beginning of each phase. The average expenditure to implement SLMTA in 11 laboratories using external facilitators was approximately US$5800 per laboratory; expenditure in 19 laboratories using internal facilitators was approximately $6000 per laboratory. The theoretical financial cost of implementing a 12-laboratory SLMTA cohort, keeping all contextual variables constant, would be approximately $58 000 using external facilitators, or $15 000 using internal facilitators plus $86 000 to train 20 facilitators. The financial cost for subsequent SLMTA cohorts using the previously-trained internal facilitators would be approximately $15 000, yielding a break-even point of 2 cohorts, at $116 000 for either model. Estimated resources required for national implementation in 120 laboratories would therefore be $580 000 using external facilitators ($58 000 per province) and $322 000 using internal facilitators ($86 000 for facilitator training in each of two phases plus $15 000 for SLMTA implementation in each province). Investing in the training of internal facilitators will result in substantial savings over the scale-up of the programme. Our study provides information to assist policy makers in developing strategic plans for investing in laboratory strengthening.
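The quoted scale-up figures can be re-derived directly from the per-cohort costs given in the abstract (an illustrative re-computation of the published numbers, not new data):

```python
# Re-computation of the SLMTA scale-up costs quoted in the abstract.
provinces, phases = 10, 2
external_per_province = 58_000   # one 12-laboratory cohort, external facilitators
internal_per_province = 15_000   # one 12-laboratory cohort, internal facilitators
training_per_phase = 86_000      # training 20 internal facilitators per phase

external_total = provinces * external_per_province                                # 580,000
internal_total = phases * training_per_phase + provinces * internal_per_province  # 322,000
breakeven_external = 2 * external_per_province                                    # 116,000
breakeven_internal = training_per_phase + 2 * internal_per_province               # 116,000
print(external_total, internal_total, breakeven_external, breakeven_internal)
```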
2017 GTO Project review Laboratory Evaluation of EGS Shear Stimulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauer, Stephen J.
The objective and purpose of this research have been to produce laboratory-based experimental and numerical analyses to provide a physics-based understanding of shear stimulation phenomena (hydroshearing) and their evolution during stimulation. Water was flowed along fractures in hot, stressed fractured rock to promote slip. The controlled laboratory experiments provide a high-resolution, high-quality data resource for evaluation of analysis methods developed by DOE to assess EGS “behavior” during this stimulation process. Some segments of the experimental program provide data sets for model input parameters (i.e., material properties), and other segments represent small-scale physical models of an EGS system, which may themselves be modeled. The coupled lab/analysis project has been a study of the response of a fracture in hot, water-saturated fractured rock to shear stress while experiencing fluid flow. Under this condition, the fracture experiences a combination of potential pore pressure changes and fracture surface cooling, resulting in slip along the fracture. The laboratory work provides a means to assess the role of “hydroshearing” on permeability enhancement in reservoir stimulation. Using the laboratory experiments and results to define boundary and input/output conditions of pore pressure, thermal stress, fracture shear deformation and fluid flow, models were developed and simulations completed by the University of Oklahoma team. The analysis methods are ones used on field-scale problems. The sophisticated numerical models developed contain parameters present in the field. The analysis results provide insight into the role of fracture slip in permeability enhancement (“hydroshear”). The work will provide valuable input data to evaluate stimulation models, thus helping design effective EGS.
NASA Astrophysics Data System (ADS)
Bultreys, Tom; Boone, Marijn A.; Boone, Matthieu N.; De Schryver, Thomas; Masschaele, Bert; Van Hoorebeke, Luc; Cnudde, Veerle
2016-09-01
Over the past decade, the wide-spread implementation of laboratory-based X-ray micro-computed tomography (micro-CT) scanners has revolutionized both the experimental and numerical research on pore-scale transport in geological materials. The availability of these scanners has opened up the possibility to image a rock's pore space in 3D almost routinely to many researchers. While challenges do persist in this field, we treat the next frontier in laboratory-based micro-CT scanning: in-situ, time-resolved imaging of dynamic processes. Extremely fast (even sub-second) micro-CT imaging has become possible at synchrotron facilities over the last few years, however, the restricted accessibility of synchrotrons limits the amount of experiments which can be performed. The much smaller X-ray flux in laboratory-based systems bounds the time resolution which can be attained at these facilities. Nevertheless, progress is being made to improve the quality of measurements performed on the sub-minute time scale. We illustrate this by presenting cutting-edge pore scale experiments visualizing two-phase flow and solute transport in real-time with a lab-based environmental micro-CT set-up. To outline the current state of this young field and its relevance to pore-scale transport research, we critically examine its current bottlenecks and their possible solutions, both on the hardware and the software level. Further developments in laboratory-based, time-resolved imaging could prove greatly beneficial to our understanding of transport behavior in geological materials and to the improvement of pore-scale modeling by providing valuable validation.
Sensitivity of CEAP cropland simulations to the parameterization of the APEX model
USDA-ARS?s Scientific Manuscript database
For large scale applications like the U.S. National Scale Conservation Effects Assessment Project (CEAP), soil hydraulic characteristics data are not readily available and therefore need to be estimated. Field soil water properties are commonly approximated using laboratory soil water retention meas...
Filtration of micron-sized particles for coal liquids: carbonaceous precoats. [5 refs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, B.R.
Carbonaceous precoats, such as bituminous coal and char from hydrocarbonization, are shown to be effective, inexpensive substitutes for traditional diatomaceous earth materials, both at laboratory-scale and bench-scale. Model equations are developed for filtration of Solvent Refined Coal-Unfiltered Oil (SRC-UFO).
Scale-model charge-transfer technique for measuring enhancement factors
NASA Technical Reports Server (NTRS)
Kositsky, J.; Nanevicz, J. E.
1991-01-01
Determination of aircraft electric field enhancement factors is crucial when using airborne field mill (ABFM) systems to accurately measure electric fields aloft. SRI used the scale model charge transfer technique to determine enhancement factors of several canonical shapes and a scale model Learjet 36A. The measured values for the canonical shapes agreed with known analytic solutions within about 6 percent. The laboratory determined enhancement factors for the aircraft were compared with those derived from in-flight data gathered by a Learjet 36A outfitted with eight field mills. The values agreed to within experimental error (approx. 15 percent).
Gawande, Nitin A; Reinhart, Debra R; Yeh, Gour-Tsyh
2010-02-01
Biodegradation process modeling of municipal solid waste (MSW) bioreactor landfills requires the knowledge of various process reactions and corresponding kinetic parameters. Mechanistic models available to date are able to simulate biodegradation processes with the help of pre-defined species and reactions. Some of these models consider the effect of critical parameters such as moisture content, pH, and temperature. Biomass concentration is a vital parameter for any biomass growth model and often not compared with field and laboratory results. A more complex biodegradation model includes a large number of chemical and microbiological species. Increasing the number of species and user defined process reactions in the simulation requires a robust numerical tool. A generalized microbiological and chemical model, BIOKEMOD-3P, was developed to simulate biodegradation processes in three-phases (Gawande et al. 2009). This paper presents the application of this model to simulate laboratory-scale MSW bioreactors under anaerobic conditions. BIOKEMOD-3P was able to closely simulate the experimental data. The results from this study may help in application of this model to full-scale landfill operation.
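For orientation, biomass-growth models of this kind are typically built on Monod-type kinetics (a generic form, shown only to illustrate the sort of user-defined reactions BIOKEMOD-3P encodes):

$$ \frac{dX}{dt} = \mu_{\max}\frac{S}{K_{s}+S}\,X - k_{d}X, \qquad \frac{dS}{dt} = -\frac{1}{Y}\,\mu_{\max}\frac{S}{K_{s}+S}\,X , $$

where $X$ is biomass, $S$ the limiting substrate, $Y$ the yield coefficient and $k_d$ a decay rate; effects of moisture content, pH and temperature are commonly introduced as multiplicative modifiers on the growth term.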
A Large Scale, High Resolution Agent-Based Insurgency Model
2013-09-30
Compute Unified Device Architecture (CUDA) is NVIDIA Corporation’s software development model for General Purpose Programming on Graphics Processing Units (GPGPU); cited references include the NVIDIA CUDA Programming Guide 2.0 and proceedings from a 2005 conference at Argonne National Laboratory, Argonne, IL.
Drift of continental rafts with asymmetric heating.
Knopoff, L; Poehls, K A; Smith, R C
1972-06-02
A laboratory model of a lithospheric raft is propelled through a viscous asthenospheric layer with constant velocity of scaled magnitude appropriate to continental drift. The propulsion is due to differential heat concentration in the model oceanic and continental crusts.
Los Alamos National Laboratory Economic Analysis Capability Overview
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boero, Riccardo; Edwards, Brian Keith; Pasqualini, Donatella
Los Alamos National Laboratory has developed two types of models to compute the economic impact of infrastructure disruptions. FastEcon is a fast running model that estimates first-order economic impacts of large scale events such as hurricanes and floods and can be used to identify the amount of economic activity that occurs in a specific area. LANL’s Computable General Equilibrium (CGE) model estimates more comprehensive static and dynamic economic impacts of a broader array of events and captures the interactions between sectors and industries when estimating economic impacts.
Laboratory meter-scale seismic monitoring of varying water levels in granular media
NASA Astrophysics Data System (ADS)
Pasquet, S.; Bodet, L.; Bergamo, P.; Guérin, R.; Martin, R.; Mourgues, R.; Tournat, V.
2016-12-01
Laboratory physical modelling and non-contacting ultrasonic techniques are frequently proposed to tackle theoretical and methodological issues related to geophysical prospecting. Following recent developments illustrating the ability of seismic methods to image spatial and/or temporal variations of water content in the vadose zone, we developed laboratory experiments aimed at testing the sensitivity of seismic measurements (i.e., pressure-wave travel times and surface-wave phase velocities) to water saturation variations. Ultrasonic techniques were used to simulate typical seismic acquisitions on small-scale controlled granular media presenting different water levels. Travel times and phase velocity measurements obtained at the dry state were validated with both theoretical models and numerical simulations and serve as reference datasets. The increasing water level clearly affects the recorded wave field in both its phase and amplitude, but the collected data cannot yet be inverted in the absence of a comprehensive theoretical model for such partially saturated and unconsolidated granular media. The differences in travel time and phase velocity observed between the dry and wet models show patterns that are interestingly coincident with the observed water level and depth of the capillary fringe, thus offering attractive perspectives for studying soil water content variations in the field.
Perturbations and gradients as fundamental tests for modeling the soil carbon cycle
NASA Astrophysics Data System (ADS)
Bond-Lamberty, B. P.; Bailey, V. L.; Becker, K.; Fansler, S.; Hinkle, C.; Liu, C.
2013-12-01
An important step in matching process-level knowledge to larger-scale measurements and model results is to challenge those models with site-specific perturbations and/or changing environmental conditions. Here we subject modified versions of an ecosystem process model to two stringent tests: replicating a long-term climate change dryland experiment (Rattlesnake Mountain) and partitioning the carbon fluxes of a soil drainage gradient in the northern Everglades (Disney Wilderness Preserve). For both sites, on-site measurements were supplemented by laboratory incubations of soil columns. We used a parameter-space search algorithm to optimize, within observational limits, the model's influential inputs, so that the spun-up carbon stocks and fluxes matched observed values. Modeled carbon fluxes (net primary production and net ecosystem exchange) agreed with measured values, within observational error limits, but the model's partitioning of soil fluxes (autotrophic versus heterotrophic) did not match laboratory measurements from either site. Accounting for site heterogeneity at DWP, modeled carbon exchange was reasonably consistent with values from eddy covariance. We discuss the implications of this work for ecosystem- to global-scale modeling of ecosystems in a changing climate.
SRNL PARTICIPATION IN THE MULTI-SCALE ENSEMBLE EXERCISES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, R
2007-10-29
Consequence assessment during emergency response often requires atmospheric transport and dispersion modeling to guide decision making. A statistical analysis of the ensemble of results from several models is a useful way of estimating the uncertainty for a given forecast. ENSEMBLE is a European Union program that utilizes an internet-based system to ingest transport results from numerous modeling agencies. A recent set of exercises required output on three distinct spatial and temporal scales. The Savannah River National Laboratory (SRNL) uses a regional prognostic model nested within a larger-scale synoptic model to generate the meteorological conditions which are in turn used in a Lagrangian particle dispersion model. A discussion of SRNL participation in these exercises is given, with particular emphasis on requirements for provision of results in a timely manner with regard to the various spatial scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebrahimi, Fatima
2014-07-31
Large-scale magnetic fields have been observed in widely different types of astrophysical objects. These magnetic fields are believed to be caused by the so-called dynamo effect. Could a large-scale magnetic field grow out of turbulence (i.e. the alpha dynamo effect)? How could the topological properties and the complexity of magnetic field as a global quantity, the so called magnetic helicity, be important in the dynamo effect? In addition to understanding the dynamo mechanism in astrophysical accretion disks, anomalous angular momentum transport has also been a longstanding problem in accretion disks and laboratory plasmas. To investigate both dynamo and momentum transport, we have performed both numerical modeling of laboratory experiments that are intended to simulate nature and modeling of configurations with direct relevance to astrophysical disks. Our simulations use fluid approximations (Magnetohydrodynamics - MHD model), where plasma is treated as a single fluid, or two fluids, in the presence of electromagnetic forces. Our major physics objective is to study the possibility of magnetic field generation (so called MRI small-scale and large-scale dynamos) and its role in Magneto-rotational Instability (MRI) saturation through nonlinear simulations in both MHD and Hall regimes.
NASA Astrophysics Data System (ADS)
Coddington, Odele; Lean, Judith; Rottman, Gary; Pilewskie, Peter; Snow, Martin; Lindholm, Doug
2016-04-01
We present a climate data record of Total Solar Irradiance (TSI) and Solar Spectral Irradiance (SSI), with associated time and wavelength dependent uncertainties, from 1610 to the present. The data record was developed jointly by the Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado Boulder and the Naval Research Laboratory (NRL) as part of the National Oceanic and Atmospheric Administration's (NOAA) National Centers for Environmental Information (NCEI) Climate Data Record (CDR) Program, where the data record, source code, and supporting documentation are archived. TSI and SSI are constructed from models that determine the changes from quiet Sun conditions arising from bright faculae and dark sunspots on the solar disk using linear regression of proxies of solar magnetic activity with observations from the SOlar Radiation and Climate Experiment (SORCE) Total Irradiance Monitor (TIM), Spectral Irradiance Monitor (SIM), and SOlar Stellar Irradiance Comparison Experiment (SOLSTICE). We show that TSI can be separately modeled to within TIM's measurement accuracy from solar rotational to solar cycle time scales and we assume that SSI measurements are reliable on solar rotational time scales. We discuss the model formulation, uncertainty estimates, and operational implementation and present comparisons of the modeled TSI and SSI with the measurement record and with other solar irradiance models. We also discuss ongoing work to assess the sensitivity of the modeled irradiances to model assumptions, namely, the scaling of solar variability from rotational-to-cycle time scales and the representation of the sunspot darkening index.
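As an illustration of the proxy-regression idea described above (not the operational LASP/NRL code), the sketch below fits TSI departures from an assumed quiet-Sun level as a linear combination of a facular brightening index and a sunspot darkening index; all inputs are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
facular = rng.gamma(2.0, 0.5, n)          # synthetic facular brightening proxy
sunspot = rng.gamma(1.5, 0.4, n)          # synthetic sunspot darkening proxy
tsi_quiet = 1360.5                        # assumed quiet-Sun TSI level, W/m^2
tsi_obs = tsi_quiet + 0.9 * facular - 1.3 * sunspot + rng.normal(0.0, 0.02, n)

# Ordinary least squares for an offset plus the two proxy coefficients
A = np.column_stack([np.ones(n), facular, sunspot])
coef, *_ = np.linalg.lstsq(A, tsi_obs, rcond=None)
offset, a_fac, b_spot = coef
tsi_model = A @ coef                      # reconstructed TSI time series
print(f"offset = {offset:.2f} W/m^2, facular coeff = {a_fac:.3f}, sunspot coeff = {b_spot:.3f}")
```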
Customer Satisfaction Assessment at the Pacific Northwest National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale N.; Sours, Mardell L.
2000-03-20
The Pacific Northwest National Laboratory (PNNL) is developing and implementing a customer satisfaction assessment program (CSAP) to assess the quality of research and development provided by the laboratory. We present the customer survey component of the PNNL CSAP. The customer survey questionnaire is composed of two major sections: Strategic Value and Project Performance. The Strategic Value section of the questionnaire consists of 5 questions that can be answered with a 5-point Likert scale response. These questions are designed to determine if a project is directly contributing to critical future national needs. The Project Performance section of the questionnaire consists of 9 questions that can be answered with a 5-point Likert scale response. These questions determine PNNL performance in meeting customer expectations. Many approaches could be used to analyze customer survey data. We present a statistical model that can accurately capture the random behavior of customer survey data. The properties of this statistical model can be used to establish a "gold standard" or performance expectation for the laboratory, and then assess progress. The gold standard is defined from input from laboratory management: answers to four simple questions, framed in terms of the information obtained from the CSAP customer survey, define the standard: *What should the average Strategic Value be for the laboratory project portfolio? *What Strategic Value interval should include most of the projects in the laboratory portfolio? *What should average Project Performance be for projects with a Strategic Value of about 2? *What should average Project Performance be for projects with a Strategic Value of about 4? We discuss how to analyze CSAP customer survey data with this model. Our discussion will include "lessons learned" and issues that can invalidate this type of assessment.
Dogan, Eda; Hearst, R. Jason; Ganapathisubramani, Bharathram
2017-01-01
A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to ‘simulate’ high Reynolds number wall–turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167584
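One common way to quantify the modulation of near-wall small scales by the large scales, sketched below purely for illustration, is to low-pass filter the velocity signal, take the Hilbert envelope of the small-scale residual, low-pass that envelope, and correlate it with the large-scale signal. The synthetic signal and the cutoff frequency are assumptions, not the processing choices of the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                       # sampling frequency, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(0)
large = np.sin(2.0 * np.pi * 2.0 * t)             # synthetic large-scale motion
small = (1.0 + 0.5 * large) * rng.normal(0.0, 1.0, t.size)   # modulated small scales
u = large + 0.2 * small                           # synthetic near-wall velocity signal

def lowpass(x, fc, fs, order=4):
    b, a = butter(order, fc / (0.5 * fs), btype="low")
    return filtfilt(b, a, x)

u_large = lowpass(u, 20.0, fs)                    # large-scale component
u_small = u - u_large                             # small-scale residual
envelope = np.abs(hilbert(u_small))               # instantaneous small-scale amplitude
env_large = lowpass(envelope, 20.0, fs)           # keep large-scale content of the envelope

R = np.corrcoef(u_large, env_large)[0, 1]         # amplitude-modulation coefficient
print(f"amplitude-modulation coefficient R = {R:.2f}")
```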
The United States Environmental Protection Agency's (EPA) National Exposure Research Laboratory (NERL) has initiated a project to improve the methodology for modeling urban-scale human exposure to mobile source emissions. The modeling project has started by considering the nee...
Modeling Supernova Shocks with Intense Lasers.
NASA Astrophysics Data System (ADS)
Blue, Brent
2006-04-01
Large-scale directional outflows of supersonic plasma are ubiquitous phenomena in astrophysics, with specific application to supernovae. The traditional approach to understanding such phenomena is through theoretical analysis and numerical simulations. However, theoretical analysis might not capture all the relevant physics and numerical simulations have limited resolution and fail to scale correctly in Reynolds number and perhaps other key dimensionless parameters. Recent advances in high energy density physics using large inertial confinement fusion devices now allow controlled laboratory experiments on macroscopic volumes of plasma of direct relevance to astrophysics. This talk will present an overview of these facilities as well as results from current laboratory astrophysics experiments designed to study hydrodynamic jets and Rayleigh-Taylor mixing. This work is performed under the auspices of the U. S. DOE by Lawrence Livermore National Laboratory under Contract No. W-7405-ENG-48, Los Alamos National Laboratory under Contract No. W-7405-ENG-36, and the Laboratory for Laser Energetics under Contract No. DE-FC03-92SF19460.
Accuracy of finite-difference modeling of seismic waves : Simulation versus laboratory measurements
NASA Astrophysics Data System (ADS)
Arntsen, B.
2017-12-01
The finite-difference technique for numerical modeling of seismic waves is still important and, in some areas, extensively used. For exploration purposes, finite-difference simulation is at the core of both traditional imaging techniques such as reverse-time migration and more elaborate Full-Waveform Inversion techniques. The accuracy and fidelity of finite-difference simulation of seismic waves are hard to quantify, and meaningful error analysis is really only easily available for simplistic media. A possible alternative to theoretical error analysis is provided by comparing finite-difference simulated data with laboratory data created using a scale model. The advantage of this approach is the accurate knowledge of the model, within measurement precision, and of the location of sources and receivers. We use a model made of PVC immersed in water and containing horizontal and tilted interfaces together with several spherical objects to generate ultrasonic pressure reflection measurements. The physical dimensions of the model are of the order of a meter, which after scaling represents a model with dimensions of the order of 10 kilometers and frequencies in the range of one to thirty hertz. We find that for plane horizontal interfaces the laboratory data can be reproduced by the finite-difference scheme with relatively small error, but for steeply tilted interfaces the error increases. For spherical interfaces the discrepancy between laboratory data and simulated data is sometimes much more severe, to the extent that it is not possible to simulate reflections from parts of highly curved bodies. The results are important in view of the fact that finite-difference modeling is often at the core of imaging and inversion algorithms tackling complicated geological areas with highly curved interfaces.
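A minimal 1-D acoustic finite-difference scheme of the kind being benchmarked is sketched below; the grid, velocity model, and source wavelet are illustrative assumptions rather than the parameters of the PVC scale-model study.

```python
import numpy as np

nx, dx = 600, 5.0                  # grid points and spacing (m)
c = np.full(nx, 2000.0)            # assumed velocity model (m/s)
c[300:] = 3000.0                   # one horizontal interface
dt = 0.4 * dx / c.max()            # CFL-stable time step
nt = 1500
f0, t0 = 15.0, 0.08                # Ricker-type source: peak frequency (Hz) and delay (s)
src, rec = 50, 100                 # source and receiver grid indices

p_prev = np.zeros(nx)
p = np.zeros(nx)
seismogram = np.zeros(nt)

for it in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2   # 2nd-order spatial derivative
    p_next = 2.0 * p - p_prev + (c * dt) ** 2 * lap         # 2nd-order time stepping
    arg = (np.pi * f0 * (it * dt - t0)) ** 2                # inject the source wavelet
    p_next[src] += (1.0 - 2.0 * arg) * np.exp(-arg) * dt**2
    p_prev, p = p, p_next
    seismogram[it] = p[rec]

print("peak recorded amplitude:", seismogram.max())
```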
Full-Scale Test Program for a Shower Wastewater Recycling System: Technical Evaluation
1987-01-01
taken, each using an average 10.6 gal of water. Bathers were permitted to use their own choice of soap and shampoo, a condition expected to exist in a... Laboratories in Richmond, VA, using a Beckman Total Organic Carbon Analyzer, Model 915. Sulfate: the Hach SulfaVer 4 Sulfate Reagent was used to measure... sulfate content by the turbidimetric method. Turbidity: a Hach Laboratory Turbidimeter, Model 2100A, was used to measure turbidity. pH: a Photovolt pH meter
Emergent dynamics of laboratory insect swarms
NASA Astrophysics Data System (ADS)
Kelley, Douglas H.; Ouellette, Nicholas T.
2013-01-01
Collective animal behaviour occurs at nearly every biological size scale, from single-celled organisms to the largest animals on earth. It has long been known that models with simple interaction rules can reproduce qualitative features of this complex behaviour. But determining whether these models accurately capture the biology requires data from real animals, which has historically been difficult to obtain. Here, we report three-dimensional, time-resolved measurements of the positions, velocities, and accelerations of individual insects in laboratory swarms of the midge Chironomus riparius. Even though the swarms do not show an overall polarisation, we find statistical evidence for local clusters of correlated motion. We also show that the swarms display an effective large-scale potential that keeps individuals bound together, and we characterize the shape of this potential. Our results provide quantitative data against which the emergent characteristics of animal aggregation models can be benchmarked.
FILTRATION MODEL FOR COAL FLY ASH WITH GLASS FABRICS
The report describes a new mathematical model for predicting woven glass filter performance with coal fly ash aerosols from utility boilers. Its data base included: an extensive bench- and pilot-scale laboratory investigation of several dust/fabric combinations; field data from t...
COTHERM: Geophysical Modeling of High Enthalpy Geothermal Systems
NASA Astrophysics Data System (ADS)
Grab, Melchior; Maurer, Hansruedi; Greenhalgh, Stewart
2014-05-01
In recent years geothermal heating and electricity generation have become an attractive alternative energy resource, especially natural high enthalpy geothermal systems such as in Iceland. However, the financial risk of installing and operating geothermal power plants is still high and more needs to be known about the geothermal processes and state of the reservoir in the subsurface. A powerful tool for probing the underground system structure is provided by geophysical techniques, which are able to detect flow paths and fracture systems without drilling. It has been amply demonstrated that small-scale features can be well imaged at shallow depths, but only gross structures can be delineated for depths of several kilometers, where most high enthalpy systems are located. Therefore a major goal of our study is to improve geophysical mapping strategies by multi-method geophysical simulations and synthetic data inversions, to better resolve structures at greater depth, characterize the reservoir and monitor any changes within it. The investigation forms part of project COTHERM - COmbined hydrological, geochemical and geophysical modeling of geoTHERMal systems - in which a holistic and synergistic approach is being adopted to achieve multidisciplinary cooperation and mutual benefit. The geophysical simulations are being performed in combination with hydrothermal fluid flow modeling and chemical fluid rock interaction modeling, to provide realistic constraints on lithology, pressure, temperature and fluid conditions of the subsurface. Two sites in Iceland have been selected for the study, Krafla and Reykjanes. As a starting point for the geophysical modeling, we seek to establish petrophysical relations, connecting rock properties and reservoir conditions with geophysical parameters such as seismic wave speed, attenuation, electrical conductivity and magnetic susceptibility with a main focus on seismic properties. Therefore, we follow a comprehensive approach involving three components: (1) A literature study to find relevant, existing theoretical models, (2) laboratory determinations to confirm their validity for Icelandic rocks of interest and (3) a field campaign to obtain in-situ, shallow rock properties from seismic and resistivity tomography surveys over a fossilized and exhumed geothermal system. Theoretical models describing physical behavior for rocks with strong inhomogeneities, complex pore structure and complicated fluid-rock interaction mechanisms are often poorly constrained and require the knowledge about a wide range of parameters that are difficult to quantify. Therefore we calibrate the theoretical models by laboratory measurements on samples of rocks, forming magmatic geothermal reservoirs. Since the samples used in the laboratory are limited in size, and laboratory equipment operates at much higher frequency than the instruments used in the field, the results need to be up-scaled from the laboratory scale to field scale. This is not a simple process and entails many uncertainties.
Numerical assessment of Bureau of Mines electric arc melter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paik, S.; Hawkes, G.; Nguyen, H.D.
1994-12-31
An electric arc melter used for the waste treatment process at Idaho National Engineering Laboratory (INEL) in cooperation with the U.S. Bureau of Mines (USBM) has been numerically studied. The arc melter is being used for vitrification of thermally oxidized, buried, transuranic (TRU) contaminated wastes by INEL in conjunction with the USBM as a part of the Buried Waste Integrated Demonstration project. The purpose of this study is to numerically investigate the performance of the laboratory-scale arc melter simulating the USBM arc melter. Initial results of modeling the full-scale USBM arc melter are also reported in this paper.
Multiscale Computation. Needs and Opportunities for BER Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheibe, Timothy D.; Smith, Jeremy C.
2015-01-01
The Environmental Molecular Sciences Laboratory (EMSL), a scientific user facility managed by Pacific Northwest National Laboratory for the U.S. Department of Energy, Office of Biological and Environmental Research (BER), conducted a one-day workshop on August 26, 2014 on the topic of "Multiscale Computation: Needs and Opportunities for BER Science." Twenty invited participants, from various computational disciplines within the BER program research areas, were charged with the following objectives: (1) identify BER-relevant models and their potential cross-scale linkages that could be exploited to better connect molecular-scale research to BER research at larger scales; and (2) identify critical science directions that will motivate EMSL decisions regarding future computational (hardware and software) architectures.
Multiscale Modeling in the Clinic: Drug Design and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, Colleen E.; An, Gary; Cannon, William R.
A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.
NASA Astrophysics Data System (ADS)
Cassani, Mary Kay Kuhr
The objective of this study was to evaluate the effect of two pedagogical models used in general education science on non-majors' science teaching self-efficacy. Science teaching self-efficacy can be influenced by inquiry and cooperative learning, through cognitive mechanisms described by Bandura (1997). The Student Centered Activities for Large Enrollment Undergraduate Programs (SCALE-UP) model of inquiry and cooperative learning incorporates cooperative learning and inquiry-guided learning in large enrollment combined lecture-laboratory classes (Oliver-Hoyo & Beichner, 2004). SCALE-UP was adopted by a small but rapidly growing public university in the southeastern United States in three undergraduate, general education science courses for non-science majors in the Fall 2006 and Spring 2007 semesters. Students in these courses were compared with students in three other general education science courses for non-science majors taught with the standard teaching model at the host university. The standard model combines lecture and laboratory in the same course, with smaller enrollments and utilizes cooperative learning. Science teaching self-efficacy was measured using the Science Teaching Efficacy Belief Instrument - B (STEBI-B; Bleicher, 2004). A science teaching self-efficacy score was computed from the Personal Science Teaching Efficacy (PTSE) factor of the instrument. Using non-parametric statistics, no significant difference was found between teaching models, between genders, within models, among instructors, or among courses. The number of previous science courses was significantly correlated with PTSE score. Student responses to open-ended questions indicated that students felt the larger enrollment in the SCALE-UP room reduced individual teacher attention but that the large round SCALE-UP tables promoted group interaction. Students responded positively to cooperative and hands-on activities, and would encourage inclusion of more such activities in all of the courses. The large enrollment SCALE-UP model as implemented at the host university did not increase science teaching self-efficacy of non-science majors, as hypothesized. This was likely due to limited modification of standard cooperative activities according to the inquiry-guided SCALE-UP model. It was also found that larger SCALE-UP enrollments did not decrease science teaching self-efficacy when standard cooperative activities were used in the larger class.
Laboratory simulation of space plasma phenomena*
NASA Astrophysics Data System (ADS)
Amatucci, B.; Tejero, E. M.; Ganguli, G.; Blackwell, D.; Enloe, C. L.; Gillman, E.; Walker, D.; Gatling, G.
2017-12-01
Laboratory devices, such as the Naval Research Laboratory's Space Physics Simulation Chamber, are large-scale experiments dedicated to the creation of large-volume plasmas with parameters realistically scaled to those found in various regions of the near-Earth space plasma environment. Such devices make valuable contributions to the understanding of space plasmas by investigating phenomena under carefully controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. By working in collaboration with in situ experimentalists to create realistic conditions scaled to those found during the observations of interest, the microphysics responsible for the observed events can be investigated in detail not possible in space. To date, numerous investigations of phenomena such as plasma waves, wave-particle interactions, and particle energization have been successfully performed in the laboratory. In addition to investigations such as plasma wave and instability studies, the laboratory devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this presentation, we will describe several examples of the laboratory investigation of space plasma waves and instabilities and diagnostic development. *This work supported by the NRL Base Program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bateman, K. J.; Capson, D. D.
2004-03-29
Argonne National Laboratory (ANL) has developed a process to immobilize waste salt containing fission products, uranium, and transuranic elements as chlorides in a glass-bonded ceramic waste form. This salt was generated in the electrorefining operation used in the electrometallurgical treatment of spent Experimental Breeder Reactor-II (EBR-II) fuel. The ceramic waste process culminates with an elevated temperature operation. The processing conditions used by the furnace, for demonstration scale and production scale operations, are to be developed at Argonne National Laboratory-West (ANL-West). To assist in selecting the processing conditions of the furnace and to reduce the number of costly experiments, a finite-difference model was developed to predict the consolidation of the ceramic waste. The model accurately predicted the heating as well as the bulk density of the ceramic waste form. The methodology used to develop the computer model and a comparison of the analysis to experimental data is presented.
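A generic sketch of the type of thermal finite-difference calculation described, here reduced to a 1-D explicit transient-conduction model with an assumed furnace ramp, is shown below; material properties and the schedule are illustrative placeholders, not the ANL process values.

```python
import numpy as np

L, n = 0.10, 51                         # slab thickness (m) and number of grid points
dx = L / (n - 1)
k, rho, cp = 1.5, 2500.0, 900.0          # assumed conductivity, density, heat capacity (SI)
alpha = k / (rho * cp)                   # thermal diffusivity, m^2/s
dt = 0.4 * dx**2 / alpha                 # explicit-scheme stable time step

def furnace(t):
    """Assumed furnace schedule: 5 C/min ramp from 25 C, then held at 850 C."""
    return min(25.0 + 5.0 * t / 60.0, 850.0)

T = np.full(n, 25.0)                     # initial temperature profile, deg C
t, t_end = 0.0, 4.0 * 3600.0             # simulate four hours
while t < t_end:
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T_new[0] = furnace(t)                # furnace-side boundary follows the schedule
    T_new[-1] = T_new[-2]                # insulated far side (zero-flux boundary)
    T, t = T_new, t + dt

print(f"far-side temperature after {t_end/3600:.0f} h: {T[-1]:.0f} C")
```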
NASA Technical Reports Server (NTRS)
Ivanov, B. A.
1986-01-01
Main concepts and theoretical models which are used for studying the mechanics of cratering are discussed. Numerical two-dimensional calculations are made of explosions near a surface and high-speed impact. Models are given for the motion of a medium during cratering. Data from laboratory modeling are given. The effect of gravitational force and scales of cratering phenomena is analyzed.
NASA Astrophysics Data System (ADS)
Hadadzadeh, Amir; Wells, Mary
Although the Twin Roll Casting (TRC) process has been used in the aluminum sheet production industry for more than 60 years, the usage of this process to fabricate magnesium sheets is still at its early stages. Similar to other manufacturing processes, the development of the TRC process for magnesium alloys has followed a typical route of preliminary studies using a laboratory-scale facility, followed by pilot-scale testing and, most recently, attempts to use an industrial-scale twin roll caster. A powerful tool to understand and quantify the trends of the processing conditions and the effects of scaling up from a laboratory-size TRC machine to an industrial-scale one is to develop a mathematical model of the process. This can elucidate the coupled fluid-thermo-mechanical behavior of the cast strip during the solidification and subsequent deformation stages of the process. In the present study a Thermal-Fluid-Stress model has been developed for TRC of AZ31 magnesium alloy for three roll diameters by employing the FEM commercial package ALSIM. The roll diameters were chosen as 355 mm, 600 mm, and 1150 mm. The effect of casting speed for each diameter was studied in terms of fluid flow, thermal history and stress-strain evolution in the cast strip in the roll bite region.
Impact Flash Physics: Modeling and Comparisons With Experimental Results
NASA Astrophysics Data System (ADS)
Rainey, E.; Stickle, A. M.; Ernst, C. M.; Schultz, P. H.; Mehta, N. L.; Brown, R. C.; Swaminathan, P. K.; Michaelis, C. H.; Erlandson, R. E.
2015-12-01
Hypervelocity impacts frequently generate an observable "flash" of light with two components: a short-duration spike due to emissions from vaporized material, and a long-duration peak due to thermal emissions from expanding hot debris. The intensity and duration of these peaks depend on the impact velocity, angle, and the target and projectile mass and composition. Thus remote sensing measurements of planetary impact flashes have the potential to constrain the properties of impacting meteors and improve our understanding of impact flux and cratering processes. Interpreting impact flash measurements requires a thorough understanding of how flash characteristics correlate with impact conditions. Because planetary-scale impacts cannot be replicated in the laboratory, numerical simulations are needed to provide this insight for the solar system. Computational hydrocodes can produce detailed simulations of the impact process, but they lack the radiation physics required to model the optical flash. The Johns Hopkins University Applied Physics Laboratory (APL) developed a model to calculate the optical signature from the hot debris cloud produced by an impact. While the phenomenology of the optical signature is understood, the details required to accurately model it are complicated by uncertainties in material and optical properties and the simplifications required to numerically model radiation from large-scale impacts. Comparisons with laboratory impact experiments allow us to validate our approach and to draw insight regarding processes that occur at all scales in impact events, such as melt generation. We used Sandia National Lab's CTH shock physics hydrocode along with the optical signature model developed at APL to compare with a series of laboratory experiments conducted at the NASA Ames Vertical Gun Range. The experiments used Pyrex projectiles to impact pumice powder targets with velocities ranging from 1 to 6 km/s at angles of 30 and 90 degrees with respect to horizontal. High-speed radiometer measurements were made of the time-dependent impact flash at wavelengths of 350-1100 nm. We will present comparisons between these measurements and the output of APL's model. The results of this validation allow us to determine basic relationships between observed optical signatures and impact conditions.
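The thermal-emission component of such a flash can be illustrated, under strong simplifications, by treating the debris cloud as a graybody and integrating Planck's law over the 350-1100 nm radiometer band. The temperatures, emitting area, and emissivity below are assumptions, and this sketch is not the APL signature model.

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, light speed, Boltzmann constant (SI)

def planck(lam, T):
    """Spectral radiance of a blackbody, W m^-2 sr^-1 m^-1."""
    return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

lam = np.linspace(350e-9, 1100e-9, 2000)   # radiometer band, m
emissivity, area = 0.9, 1e-2               # assumed graybody emissivity and emitting area (m^2)

for T in (2000.0, 3000.0, 4000.0):         # assumed debris-cloud temperatures (K)
    B = planck(lam, T)
    band_radiance = np.sum(0.5 * (B[1:] + B[:-1]) * np.diff(lam))   # trapezoid rule over the band
    power = np.pi * emissivity * area * band_radiance               # W emitted into a hemisphere
    print(f"T = {T:4.0f} K  ->  in-band radiant power ~ {power:10.2f} W")
```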
A Method for Estimating Noise from Full-Scale Distributed Exhaust Nozzles
NASA Technical Reports Server (NTRS)
Kinzie, Kevin W.; Schein, David B.
2004-01-01
A method to estimate the full-scale noise suppression from a scale model distributed exhaust nozzle (DEN) is presented. For a conventional scale model exhaust nozzle, Strouhal number scaling using a scale factor related to the nozzle exit area is typically applied, shifting model-scale frequencies in proportion to the geometric scale factor. However, model scale DEN designs have two inherent length scales. One is associated with the mini-nozzles, whose size does not change in going from model scale to full scale. The other is associated with the overall nozzle exit area, which is much smaller than full scale. Consequently, lower frequency energy that is generated by the coalesced jet plume should scale to lower frequency, but higher frequency energy generated by individual mini-jets does not shift frequency. In addition, jet-jet acoustic shielding by the array of mini-nozzles is a significant noise reduction effect that may change with DEN model size. A technique has been developed to scale laboratory model spectral data based on the premise that high and low frequency content must be treated differently during the scaling process. The model-scale distributed exhaust spectra are divided into low and high frequency regions that are then adjusted to full scale separately based on different physics-based scaling laws. The regions are then recombined to create an estimate of the full-scale acoustic spectra. These spectra can then be converted to perceived noise levels (PNL). The paper presents the details of this methodology and provides an example of the estimated noise suppression by a distributed exhaust nozzle compared to a round conic nozzle.
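The two-length-scale premise can be illustrated with a short sketch that splits a model-scale spectrum at an assumed crossover frequency, Strouhal-shifts only the low-frequency (coalesced-plume) part by the geometric scale factor, and leaves the high-frequency (mini-jet) part unshifted; the spectrum and crossover frequency here are invented for illustration and amplitude corrections are omitted.

```python
import numpy as np

scale_factor = 8.0                 # assumed model-to-full-scale geometric factor
f_model   = np.array([500, 1000, 2000, 4000, 8000, 16000, 31500], dtype=float)  # Hz
spl_model = np.array([ 78,   82,   85,   84,   80,    76,    70], dtype=float)  # dB
f_crossover = 4000.0               # assumed split between plume and mini-jet noise

low = f_model <= f_crossover
f_full = np.where(low, f_model / scale_factor, f_model)   # shift only the low band
spl_full = spl_model.copy()        # amplitude/distance corrections omitted in this sketch

for f, spl in sorted(zip(f_full, spl_full)):
    print(f"{f:8.0f} Hz : {spl:5.1f} dB")
```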
Memory Transmission in Small Groups and Large Networks: An Agent-Based Model.
Luhmann, Christian C; Rajaram, Suparna
2015-12-01
The spread of social influence in large social networks has long been an interest of social scientists. In the domain of memory, collaborative memory experiments have illuminated cognitive mechanisms that allow information to be transmitted between interacting individuals, but these experiments have focused on small-scale social contexts. In the current study, we took a computational approach, circumventing the practical constraints of laboratory paradigms and providing novel results at scales unreachable by laboratory methodologies. Our model embodied theoretical knowledge derived from small-group experiments and replicated foundational results regarding collaborative inhibition and memory convergence in small groups. Ultimately, we investigated large-scale, realistic social networks and found that agents are influenced by the agents with which they interact, but we also found that agents are influenced by nonneighbors (i.e., the neighbors of their neighbors). The similarity between these results and the reports of behavioral transmission in large networks offers a major theoretical insight by linking behavioral transmission to the spread of information. © The Author(s) 2015.
Scaling of coupled dilatancy-diffusion processes in space and time
NASA Astrophysics Data System (ADS)
Main, I. G.; Bell, A. F.; Meredith, P. G.; Brantut, N.; Heap, M.
2012-04-01
Coupled dilatancy-diffusion processes resulting from microscopically brittle damage due to precursory cracking have been observed in the laboratory and suggested as a mechanism for earthquake precursors. One reason precursors have proven elusive may be the scaling in space: recent geodetic and seismic data place strong limits on the spatial extent of the nucleation zone for recent earthquakes. Another may be the scaling in time: recent laboratory results on axi-symmetric samples show both a systematic decrease in circumferential extensional strain at failure and a delayed and sharper acceleration of acoustic emission event rate as strain rate is decreased. Here we examine the scaling of such processes in time from laboratory to field conditions using brittle creep (constant stress loading) to failure tests, in an attempt to bridge part of the strain rate gap to natural conditions, and discuss the implications for forecasting the failure time. Dilatancy rate is strongly correlated with strain rate, and decreases to zero in the steady-rate creep phase at strain rates around 10^-9 s^-1 for a basalt from Mount Etna. The data are well described by a creep model based on the linear superposition of transient (decelerating) and accelerating micro-crack growth due to stress corrosion. The model produces good fits to the failure time in retrospect using the accelerating acoustic emission event rate, but in prospective tests on synthetic data with the same properties we find failure-time forecasting is subject to systematic epistemic and aleatory uncertainties that degrade predictability. The next stage is to use the technology developed to attempt failure forecasting in real time, using live streamed data and a public web-based portal to quantify the prospective forecast quality under such controlled laboratory conditions.
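One common retrospective approach to failure-time forecasting from an accelerating event rate, shown here only as an illustration and not necessarily the authors' exact method, is to fit an inverse power-law singularity rate(t) = k (tf - t)^(-p) and read off the failure time tf; the synthetic data and functional form are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
tf_true, k_true, p_true = 100.0, 50.0, 0.8       # "true" values used to make synthetic data
t = np.linspace(0.0, 95.0, 200)                   # observation window (s), stops before failure
rate_obs = k_true * (tf_true - t) ** (-p_true) * rng.lognormal(0.0, 0.2, t.size)

def rate_model(t, k, tf, p):
    """Accelerating event rate with a power-law singularity at the failure time tf."""
    return k * np.clip(tf - t, 1e-6, None) ** (-p)

p0 = [10.0, t[-1] + 10.0, 1.0]                    # initial guess: failure shortly after the data end
popt, _ = curve_fit(rate_model, t, rate_obs, p0=p0, maxfev=20000)
k_fit, tf_fit, p_fit = popt
print(f"forecast failure time tf = {tf_fit:.1f} s (true {tf_true:.0f} s), exponent p = {p_fit:.2f}")
```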
Role of Laboratory Plasma Experiments in exploring the Physics of Solar Eruptions
NASA Astrophysics Data System (ADS)
Tripathi, S.
2017-12-01
Solar eruptive events are triggered over a broad range of spatio-temporal scales by a variety of fundamental processes (e.g., force-imbalance, magnetic-reconnection, electrical-current driven instabilities) associated with arched magnetoplasma structures in the solar atmosphere. Contemporary research on solar eruptive events is at the forefront of solar and heliospheric physics due to its relevance to space weather. Details on the formation of magnetized plasma structures on the Sun, storage of magnetic energy in such structures over a long period (several Alfven transit times), and their impulsive eruptions have been recorded in numerous observations and simulated in computer models. Inherent limitations of space observations and the uncontrolled nature of solar eruptions pose significant challenges in testing theoretical models and developing the predictive capability for space-weather. The pace of scientific progress in this area can be significantly boosted by tapping the potential of appropriately scaled laboratory plasma experiments to complement solar observations, theoretical models, and computer simulations. To give an example, recent results from a laboratory plasma experiment on arched magnetic flux ropes will be presented and future challenges will be discussed. (Work supported by National Science Foundation, USA under award number 1619551)
NASA Astrophysics Data System (ADS)
Saif, S.; Brownlee, S. J.
2017-12-01
Compositional and structural heterogeneity in the continental crust are factors that contribute to the complex expression of crustal seismic anisotropy. Understanding deformation and flow in the crust using seismic anisotropy has thus proven difficult. Seismic anisotropy is affected by rock microstructure and mineralogy, and a number of studies have begun to characterize the full elastic tensors of crustal rocks in an attempt to increase our understanding of these intrinsic factors. However, there is still a large gap in length-scale between laboratory characterization on the scale of centimeters and seismic wavelengths on the order of kilometers. To address this length-scale gap we are developing a 3D crustal model that will help us determine the effects of rotating laboratory-scale elastic tensors into field-scale structures. The Chester gneiss dome in southeast Vermont is our primary focus. The model combines over 2000 structural data points from field measurements and published USGS structural data with elastic tensors of Chester dome rocks derived from electron backscatter diffraction data. We created a uniformly spaced grid by averaging structural measurements together in equally spaced grid boxes. The surface measurements are then projected into the third dimension using existing subsurface interpretations. A measured elastic tensor for the specific rock type is rotated according to its unique structural input at each point in the model. The goal is to use this model to generate artificial seismograms using existing numerical wave propagation codes. Once completed, the model input can be varied to examine the effects of different subsurface structure interpretations, as well as heterogeneity in rock composition and elastic tensors. Our goal is to be able to make predictions for how specific structures will appear in seismic data, and how that appearance changes with variations in rock composition.
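The tensor-rotation step described above can be sketched as follows: expand a 6x6 Voigt stiffness matrix to the full fourth-rank tensor, rotate it with C'_{ijkl} = R_ia R_jb R_kc R_ld C_abcd, and collapse it back to Voigt form. The example stiffness and the dip-style rotation below are illustrative, not Chester dome values.

```python
import numpy as np

VOIGT = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]   # Voigt index-pair convention

def voigt_to_full(C6):
    """Expand a symmetric 6x6 stiffness matrix to the 3x3x3x3 tensor."""
    C = np.zeros((3, 3, 3, 3))
    for I, (i, j) in enumerate(VOIGT):
        for J, (k, l) in enumerate(VOIGT):
            C[i, j, k, l] = C[j, i, k, l] = C[i, j, l, k] = C[j, i, l, k] = C6[I, J]
    return C

def full_to_voigt(C):
    return np.array([[C[i, j, k, l] for (k, l) in VOIGT] for (i, j) in VOIGT])

def rotate_voigt(C6, R):
    """Rotate a stiffness tensor given in Voigt form by rotation matrix R."""
    Crot = np.einsum("ia,jb,kc,ld,abcd->ijkl", R, R, R, R, voigt_to_full(C6))
    return full_to_voigt(Crot)

def rotation_y(angle_deg):
    """Rotation about the y-axis, e.g. a dip-style tilt of the symmetry axis."""
    a = np.radians(angle_deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

# Illustrative transversely isotropic stiffness (GPa), symmetry axis along z:
C6 = np.array([[120,  50,  45,   0,   0,   0],
               [ 50, 120,  45,   0,   0,   0],
               [ 45,  45,  90,   0,   0,   0],
               [  0,   0,   0,  30,   0,   0],
               [  0,   0,   0,   0,  30,   0],
               [  0,   0,   0,   0,   0,  35]], dtype=float)

C_rotated = rotate_voigt(C6, rotation_y(40.0))   # tilt by an assumed 40-degree dip
print(np.round(C_rotated, 1))
```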
Lausch, Angela; Pause, Marion; Merbach, Ines; Zacharias, Steffen; Doktor, Daniel; Volk, Martin; Seppelt, Ralf
2013-02-01
Remote sensing is an important tool for studying patterns in surface processes on different spatiotemporal scales. However, differences in the spatiospectral and temporal resolution of remote sensing data as well as sensor-specific surveying characteristics very often hinder comparative analyses and effective up- and downscaling analyses. This paper presents a new methodical framework for combining hyperspectral remote sensing data on different spatial and temporal scales. We demonstrate the potential of using the "One Sensor at Different Scales" (OSADIS) approach for the laboratory (plot), field (local), and landscape (regional) scales. By implementing the OSADIS approach, we are able (1) to develop suitable stress-controlled vegetation indices for selected variables such as the Leaf Area Index (LAI), chlorophyll, photosynthesis, water content, nutrient content, etc. over a whole vegetation period. Focused laboratory monitoring can help to document additive and counteractive factors and processes of the vegetation and to correctly interpret their spectral response; (2) to transfer the models obtained to the landscape level; (3) to record imaging hyperspectral information on different spatial scales, achieving a true comparison of the structure and process results; (4) to minimize existing errors from geometrical, spectral, and temporal effects due to sensor- and time-specific differences; and (5) to carry out a realistic top- and downscaling by determining scale-dependent correction factors and transfer functions. The first results of OSADIS experiments are provided by controlled whole vegetation experiments on barley under water stress on the plot scale to model LAI using the vegetation indices Normalized Difference Vegetation Index (NDVI) and green NDVI (GNDVI). The regression model ascertained from imaging hyperspectral AISA-EAGLE/HAWK (DUAL) data was used to model LAI. This was done by using the vegetation index GNDVI with an R^2 of 0.83, which was transferred to airborne hyperspectral data on the local and regional scales. For this purpose, hyperspectral imagery was collected at three altitudes over a land cover gradient of 25 km within a timeframe of a few minutes, yielding a spatial resolution from 1 to 3 m. For all recorded spatial scales, both the LAI and the NDVI were determined. The spatial properties of LAI and NDVI of all recorded hyperspectral images were compared using semivariance metrics derived from the variogram. The first results show spatial differences in the heterogeneity of LAI and NDVI from 1 to 3 m with the recorded hyperspectral data. That means that differently recorded data on different scales might not sufficiently maintain the spatial properties of high spatial resolution hyperspectral images.
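The index-based LAI modeling step can be illustrated with a short sketch that computes NDVI and GNDVI from reflectance bands and regresses LAI on GNDVI; the reflectances and the linear form of the regression are synthetic assumptions, not the AISA-EAGLE/HAWK retrieval.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
nir   = rng.uniform(0.30, 0.60, n)        # synthetic near-infrared reflectance
red   = rng.uniform(0.03, 0.12, n)        # synthetic red reflectance
green = rng.uniform(0.05, 0.15, n)        # synthetic green reflectance

ndvi  = (nir - red)   / (nir + red)       # NDVI, computed for comparison
gndvi = (nir - green) / (nir + green)     # GNDVI, used as the LAI predictor here

lai_obs = 6.0 * gndvi - 1.5 + rng.normal(0.0, 0.2, n)    # synthetic "measured" LAI

slope, intercept = np.polyfit(gndvi, lai_obs, 1)          # simple linear LAI-GNDVI model
lai_pred = slope * gndvi + intercept
r2 = 1.0 - np.sum((lai_obs - lai_pred) ** 2) / np.sum((lai_obs - lai_obs.mean()) ** 2)
print(f"LAI = {slope:.2f} * GNDVI + {intercept:.2f}   (R^2 = {r2:.2f})")
```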
The Use of Experiments and Modeling to Evaluate ...
Symposium Paper This paper reports on a study to examine the thermal decomposition of surrogate CWAs (in this case, Malathion) in a laboratory reactor, analysis of the results using reactor design theory, and subsequent scale-up of the results to a computer simulation of a full-scale commercial hazardous waste incinerator processing ceiling tile contaminated with residual Malathion.
Hydrodynamic Scalings: from Astrophysics to Laboratory
NASA Astrophysics Data System (ADS)
Ryutov, D. D.; Remington, B. A.
2000-05-01
A surprisingly general hydrodynamic similarity has been recently described in Refs. [1,2]. One can call it the Euler similarity because it works for the Euler equations (with MHD effects included). Although the dissipation processes are assumed to be negligible, the presence of shocks is allowed. For the polytropic medium (i.e., the medium where the energy density is proportional to the pressure), an evolution of an arbitrarily chosen 3D initial state can be scaled to another system, if a single dimensionless parameter (the Euler number) is the same for both initial states. The Euler similarity allows one to properly design laboratory experiments modeling astrophysical phenomena. We discuss several examples of such experiments related to the physics of supernovae [3]. For the problems with a single spatial scale, the condition of the smallness of dissipative processes can be adequately described in terms of the Reynolds, Peclet, and magnetic Reynolds numbers related to this scale (all three numbers must be large). However, if the system develops small-scale turbulence, dissipation may become important at these smaller scales, thereby affecting the gross behavior of the system. We analyze the corresponding constraints. We discuss also constraints imposed by the presence of interfaces between the substances with different polytropic index. Another set of similarities governs evolution of photoevaporation fronts in astrophysics. Convenient scaling laws exist in situations where the density of the ablated material is very low compared to the bulk density. We conclude that a number of hydrodynamical problems related to such objects as the Eagle Nebula can be adequately simulated in the laboratory. We discuss also possible scalings for radiative astrophysical jets (see Ref. [3] and references therein). This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract W-7405-Eng-48. 1. D.D. Ryutov, R.P. Drake, J. Kane, E. Liang, B. A. Remington, and W.M. Wood-Vasey. "Similarity criteria for the laboratory simulation of supernova hydrodynamics." Astrophysical Journal, v. 518, p. 821 (1999). 2. D.D. Ryutov, R.P. Drake, B.A. Remington. "Criteria for scaled laboratory simulations of astrophysical MHD phenomena." To appear in Astrophysical Journal - Supplement, April 2000. 3. Remington, B.A., Phys. Plasmas, 7, # 5 (2000).
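A similarity check in the spirit of Refs. [1,2] can be sketched as below, taking the Euler number in the commonly quoted form Eu = v (ρ/p)^{1/2} (see Ref. [1] for the exact definition used) and requiring large Reynolds numbers in both systems; all numbers are rough order-of-magnitude illustrations, not measured values.

```python
import math

def euler_number(v, rho, p):
    """Eu = v * sqrt(rho / p); a commonly quoted form of the similarity parameter."""
    return v * math.sqrt(rho / p)

def reynolds(v, L, nu):
    return v * L / nu

# Hypothetical laser-target experiment (SI units: m/s, kg/m^3, Pa, m, m^2/s)
lab   = dict(v=3e4, rho=1e3,  p=1e11, L=1e-4, nu=1e-3)
# Hypothetical supernova-ejecta region, chosen so the Euler numbers match
astro = dict(v=3e6, rho=1e-9, p=1e3,  L=1e13, nu=1e7)

for name, s in (("laboratory", lab), ("astrophysical", astro)):
    print(f"{name:14s} Eu = {euler_number(s['v'], s['rho'], s['p']):.2f}   "
          f"Re = {reynolds(s['v'], s['L'], s['nu']):.1e}")
```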
Effect of nacelle on wake meandering in a laboratory scale wind turbine using LES
NASA Astrophysics Data System (ADS)
Foti, Daniel; Yang, Xiaolei; Guala, Michele; Sotiropoulos, Fotis
2015-11-01
Wake meandering, large scale motion in the wind turbine wakes, has considerable effects on the velocity deficit and turbulence intensity in the turbine wake from the laboratory scale to utility scale wind turbines. In the dynamic wake meandering model, the wake meandering is assumed to be caused by large-scale atmospheric turbulence. On the other hand, Kang et al. (J. Fluid Mech., 2014) demonstrated that the nacelle geometry has a significant effect on the wake meandering of a hydrokinetic turbine, through the interaction of the inner wake of the nacelle vortex with the outer wake of the tip vortices. In this work, the significance of the nacelle on the wake meandering of a miniature wind turbine previously used in experiments (Howard et al., Phys. Fluid, 2015) is demonstrated with large eddy simulations (LES) using immersed boundary method with fine enough grids to resolve the turbine geometric characteristics. The three dimensionality of the wake meandering is analyzed in detail through turbulent spectra and meander reconstruction. The computed flow fields exhibit wake dynamics similar to those observed in the wind tunnel experiments and are analyzed to shed new light into the role of the energetic nacelle vortex on wake meandering. This work was supported by Department of Energy DOE (DE-EE0002980, DE-EE0005482 and DE-AC04-94AL85000), and Sandia National Laboratories. Computational resources were provided by Sandia National Laboratories and the University of Minnesota Supercomputing.
Motion sickness in cats - A symptom rating scale used in laboratory and flight tests
NASA Technical Reports Server (NTRS)
Suri, K. B.; Daunton, N. G.; Crampton, G. H.
1979-01-01
The cat is proposed as a model for the study of motion and space sickness. Development of a scale for rating the motion sickness severity in the cat is described. The scale is used to evaluate an antimotion sickness drug, d-amphetamine plus scopolamine, and to determine whether it is possible to predict sickness susceptibility during parabolic flight, including zero-G maneuvers, from scores obtained during ground based trials.
Probing the frontiers of particle physics with tabletop-scale experiments.
DeMille, David; Doyle, John M; Sushkov, Alexander O
2017-09-08
The field of particle physics is in a peculiar state. The standard model of particle theory successfully describes every fundamental particle and force observed in laboratories, yet fails to explain properties of the universe such as the existence of dark matter, the amount of dark energy, and the preponderance of matter over antimatter. Huge experiments, of increasing scale and cost, continue to search for new particles and forces that might explain these phenomena. However, these frontiers also are explored in certain smaller, laboratory-scale "tabletop" experiments. This approach uses precision measurement techniques and devices from atomic, quantum, and condensed-matter physics to detect tiny signals due to new particles or forces. Discoveries in fundamental physics may well come first from small-scale experiments of this type. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
ERIC Educational Resources Information Center
DePino, Andrew, Jr.
1994-01-01
Describes the relationships a high school built with neighborhood industry, a national laboratory, a national museum, and a large university while trying to build a scale model of the original atomic pile. Provides suggestions for teachers. (MVL)
NASA Astrophysics Data System (ADS)
Light, B.; Krembs, C.
2003-12-01
Laboratory-based studies of the physical and biological properties of sea ice are an essential link between high latitude field observations and existing numerical models. Such studies promote improved understanding of climatic variability and its impact on sea ice and the structure of ice-dependent marine ecosystems. Controlled laboratory experiments can help identify feedback mechanisms between physical and biological processes and their response to climate fluctuations. Climatically sensitive processes occurring between sea ice and the atmosphere and sea ice and the ocean determine surface radiative energy fluxes and the transfer of nutrients and mass across these boundaries. High temporally and spatially resolved analyses of sea ice under controlled environmental conditions lend insight to the physics that drive these transfer processes. Techniques such as optical probing, thin section photography, and microscopy can be used to conduct experiments on natural sea ice core samples and laboratory-grown ice. Such experiments yield insight on small scale processes from the microscopic to the meter scale and can be powerful interdisciplinary tools for education and model parameterization development. Examples of laboratory investigations by the authors include observation of the response of sea ice microstructure to changes in temperature, assessment of the relationships between ice structure and the partitioning of solar radiation by first-year sea ice covers, observation of pore evolution and interfacial structure, and quantification of the production and impact of microbial metabolic products on the mechanical, optical, and textural characteristics of sea ice.
NASA Astrophysics Data System (ADS)
Chen, X.; Zachara, J. M.; Vermeul, V. R.; Freshley, M.; Hammond, G. E.
2015-12-01
The behavior of a persistent uranium plume in an extended groundwater- river water (GW-SW) interaction zone at the DOE Hanford site is dominantly controlled by river stage fluctuations in the adjacent Columbia River. The plume behavior is further complicated by substantial heterogeneity in physical and geochemical properties of the host aquifer sediments. Multi-scale field and laboratory experiments and reactive transport modeling were integrated to understand the complex plume behavior influenced by highly variable hydrologic and geochemical conditions in time and space. In this presentation we (1) describe multiple data sets from field-scale uranium adsorption and desorption experiments performed at our experimental well-field, (2) develop a reactive transport model that incorporates hydrologic and geochemical heterogeneities characterized from multi-scale and multi-type datasets and a surface complexation reaction network based on laboratory studies, and (3) compare the modeling and observation results to provide insights on how to refine the conceptual model and reduce prediction uncertainties. The experimental results revealed significant spatial variability in uranium adsorption/desorption behavior, while modeling demonstrated that ambient hydrologic and geochemical conditions and heterogeneities in sediment physical and chemical properties both contributed to complex plume behavior and its persistence. Our analysis provides important insights into the characterization, understanding, modeling, and remediation of groundwater contaminant plumes influenced by surface water and groundwater interactions.
Carlyle, Harriet F; Tellam, John H; Parker, Karen E
2004-01-01
An attempt has been made to estimate quantitatively cation concentration changes as estuary water invades a Triassic Sandstone aquifer in northwest England. Cation exchange capacities and selectivity coefficients for Na+, K+, Ca2+, and Mg2+ were measured in the laboratory using standard techniques. Selectivity coefficients were also determined using a method involving optimized back-calculation from flushing experiments, thus permitting better representation of field conditions; in all cases, the Gaines-Thomas/constant cation exchange capacity (CEC) model was found to be a reasonable, though not perfect, first description. The exchange parameters interpreted from the laboratory experiments were used in a one-dimensional reactive transport mixing cell model, and predictions compared with field pumping well data (Cl and hardness spanning a period of around 40 years, and full major ion analyses in approximately 1980). The concentration patterns predicted using Gaines-Thomas exchange with calcite equilibrium were similar to the observed patterns, but the concentrations of the divalent ions were significantly overestimated, as were 1980 sulphate concentrations, and 1980 alkalinity concentrations were underestimated. Including representation of sulphate reduction in the estuarine alluvium failed to replicate 1980 HCO3 and pH values. However, by including partial CO2 degassing following sulphate reduction, a process for which there is 34S and 18O evidence from a previous study, a good match for SO4, HCO3, and pH was attained. Using this modified estuary water and averaged values from the laboratory ion exchange parameter determinations, good predictions for the field cation data were obtained. It is concluded that the Gaines-Thomas/constant exchange capacity model with averaged parameter values can be used successfully in ion exchange predictions in this aquifer at a regional scale and over extended time scales, despite the numerous assumptions inherent in the approach; this has also been found to be the case in the few other published studies of regional ion exchanging flow.
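As an illustration of the exchange formulation referred to above, the sketch below solves a binary Na-Ca Gaines-Thomas equilibrium with a constant CEC; the concentrations, CEC, selectivity coefficient, and the neglect of activity corrections are assumptions for illustration, not the calibrated aquifer values.

```python
import math
from scipy.optimize import brentq

CEC = 5.0          # cation exchange capacity, meq per 100 g (illustrative)
K_gt = 0.4         # assumed selectivity for Na+ + 0.5 CaX2 = NaX + 0.5 Ca2+
na = 0.30          # dissolved Na+, mol/L (estuary-water-like, illustrative)
ca = 0.01          # dissolved Ca2+, mol/L (illustrative)

def residual(beta_na):
    """Gaines-Thomas relation written in exchanger equivalent fractions."""
    beta_ca = 1.0 - beta_na
    return K_gt - beta_na * math.sqrt(ca) / (math.sqrt(beta_ca) * na)

beta_na = brentq(residual, 1e-9, 1.0 - 1e-9)      # root gives the Na equivalent fraction
print(f"exchanger composition: {beta_na:.2f} Na ({beta_na * CEC:.2f} meq/100 g), "
      f"{1.0 - beta_na:.2f} Ca")
```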
Bounds on low scale gravity from RICE data and cosmogenic neutrino flux models
NASA Astrophysics Data System (ADS)
Hussain, Shahid; McKay, Douglas W.
2006-03-01
We explore limits on low scale gravity models set by results from the Radio Ice Cherenkov Experiment's (RICE) ongoing search for cosmic ray neutrinos in the cosmogenic, or GZK, energy range. The bound on M, the fundamental scale of gravity, depends upon cosmogenic flux model, black hole formation and decay treatments, inclusion of graviton mediated elastic neutrino processes, and the number of large extra dimensions, d. Assuming proton-based cosmogenic flux models that cover a broad range of flux possibilities, we find bounds in the interval 0.9 TeV
2014-09-01
semiempirical and ray-optical models. For example, the semiempirical COST-Walfisch-Ikegami model estimates the received power predominantly on the ... (cited references include Rick, T.; Mathur, R., Fast Edge-Diffraction-Based Radio Wave Propagation Model for Graphics Hardware.)
Traveling Crossflow Instability for HIFiRE-5 in a Quiet Hypersonic Wind Tunnel (Postprint)
2013-06-01
A scale model of the 2:1 elliptic cone HIFiRE-5 flight vehicle was used to investigate the traveling crossflow instability at Mach 6 in Purdue University's Mach-6 quiet wind tunnel. (Air Force Research Laboratory, Air Vehicles Directorate, 2130 8th St., WPAFB, OH 45433-7542, USA)
NASA Astrophysics Data System (ADS)
Buijze, Loes; Guo, Yanhuang; Niemeijer, André R.; Ma, Shengli; Spiers, Christopher J.
2017-04-01
Faults in the upper crust cross-cut many different lithologies, which causes the composition of the fault rocks to vary. Each fault rock segment may have specific mechanical properties, e.g. there may be stronger and weaker segments, and segments prone to unstable slip or to creep. This leads to heterogeneous deformation and stresses along such faults, and to a heterogeneous distribution of seismic events. We address the influence of fault variability on stress, deformation, and seismicity using a combination of scaled laboratory fault experiments and numerical modeling. A vertical fault was created along the diagonal of a 30 x 20 x 5 cm block of PMMA, along which a 2 mm thick gouge layer was deposited. Gouge materials of different characteristics were used to create various segments along the fault: quartz (average strength, stable sliding), kaolinite (weak, stable sliding), and gypsum (average strength, unstable sliding). The sample assembly was placed in a horizontal biaxial deformation apparatus, and shear displacement was enforced along the vertical fault. Multiple observations were made: 1) acoustic emissions were continuously recorded at 3 MHz to observe the occurrence of stick-slips (micro-seismicity), 2) photo-elastic effects (indicative of the differential stress) were recorded in a transparent set of PMMA wall-rocks using a high-speed camera, and 3) particle tracking was conducted on a speckle-painted set of PMMA wall-rocks to study the deformation in the wall-rocks flanking the fault. All three observation methods show how the heterogeneous fault gouge exerts a strong control on the fault behavior. For example, for a strong, unstable segment of gypsum flanked by two weaker kaolinite segments, strong stress concentrations develop near the edges of the strong segment, while at the same time most of the acoustic emissions are located at the edges of this strong segment. The measurements of differential stress, strain and acoustic emissions provide a strong means to compare the scaled experiment to modeling results. In a finite-element model we reproduce the laboratory experiments, compare the modeled stresses and strains to the observations, and compare the nucleation of seismic instability to the locations of the acoustic emissions. The model aids in understanding how the stresses and strains may vary as a result of fault heterogeneity, but also as a result of the boundary conditions inherent to a laboratory setup. The scaled experimental setup and modeling results also provide a means to explain and compare with observations made at a larger scale, for example geodetic and seismological measurements along crustal-scale faults.
Behavior of U3Si2 Fuel and FeCrAl Cladding under Normal Operating and Accident Reactor Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamble, Kyle Allan Lawrence; Hales, Jason Dean; Barani, Tommaso
2016-09-01
As part of the Department of Energy's Nuclear Energy Advanced Modeling and Simulation program, an Accident Tolerant Fuel High Impact Problem was initiated at the beginning of fiscal year 2015 to investigate the behavior of U3Si2 fuel and iron-chromium-aluminum (FeCrAl) claddings under normal operating and accident reactor conditions. The High Impact Problem was created in response to the United States Department of Energy's renewed interest in accident tolerant materials after the events that occurred at the Fukushima Daiichi Nuclear Power Plant in 2011. The High Impact Problem is a multi-laboratory and university collaborative research effort between Idaho National Laboratory, Los Alamos National Laboratory, Argonne National Laboratory, and the University of Tennessee, Knoxville. This report primarily focuses on the engineering scale research in fiscal year 2016 with brief summaries of the lower length scale developments in the areas of density functional theory, cluster dynamics, rate theory, and phase field being presented.
Shock Waves and Defects in Energetic Materials, a Match Made in MD Heaven
NASA Astrophysics Data System (ADS)
Wood, Mitchell; Kittell, David; Yarrington, Cole; Thompson, Aidan
2017-06-01
Shock wave interactions with defects, such as pores, are known to play a key role in the chemical initiation of energetic materials. In this talk the shock response of Hexanitrostilbene (HNS) is studied through large scale reactive molecular dynamics (RMD) simulations. These RMD simulations provide a unique opportunity to elucidate mechanisms of viscoplastic pore collapse which are often neglected in larger scale hydrodynamic models. A discussion of the macroscopic effects of this viscoplastic material response, such as its role in hot spot formation and eventual initiation, will be provided. Through this work we have been able to map a transition from purely viscoplastic to fluid-like pore collapse that is a function of shock strength, pore size and material strength. In addition, these findings are important reference data for the validation of future multi-scale modeling efforts of the shock response of heterogeneous materials. Examples of how these RMD results are translated into mesoscale models will also be addressed. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US DOE NNSA under Contract No. DE-AC04-94AL85000.
Model Scaling of Hydrokinetic Ocean Renewable Energy Systems
NASA Astrophysics Data System (ADS)
von Ellenrieder, Karl; Valentine, William
2013-11-01
Numerical simulations are performed to validate a non-dimensional dynamic scaling procedure that can be applied to subsurface and deeply moored systems, such as hydrokinetic ocean renewable energy devices. The prototype systems are moored in water 400 m deep and include: subsurface spherical buoys moored in a shear current and excited by waves; an ocean current turbine excited by waves; and a deeply submerged spherical buoy in a shear current excited by strong current fluctuations. The corresponding model systems, which are scaled based on relative water depths of 10 m and 40 m, are also studied. For each case examined, the response of the model system closely matches the scaled response of the corresponding full-sized prototype system. The results suggest that laboratory-scale testing of complete ocean current renewable energy systems moored in a current is possible. This work was supported by the U.S. Southeast National Marine Renewable Energy Center (SNMREC).
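The abstract does not state which similitude law was used for the 1:40 and 1:10 depth ratios; for wave-excited moored systems Froude scaling is the usual choice, and the sketch below shows the corresponding scale factors under that assumption (function names and values are illustrative).

```python
# Hedged sketch: Froude scaling factors for a wave-excited moored system.
# The abstract does not state the scaling law used; Froude similitude is the
# common choice for gravity-wave-dominated problems and is assumed here.

def froude_scale_factors(depth_model, depth_prototype):
    lam = depth_model / depth_prototype      # geometric scale ratio
    return {
        "length": lam,
        "velocity": lam**0.5,                # U_m / U_p
        "time_or_wave_period": lam**0.5,     # T_m / T_p
        "force": lam**3,                     # same fluid density assumed
    }

for d_m in (10.0, 40.0):                     # model depths from the abstract
    print(d_m, froude_scale_factors(d_m, 400.0))
```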
ERIC Educational Resources Information Center
Kurbanoglu, N. Izzet; Akin, Ahmet
2010-01-01
The aim of this study is to examine the relationships between chemistry laboratory anxiety, chemistry attitudes, and self-efficacy. Participants were 395 university students. Participants completed the Chemistry Laboratory Anxiety Scale, the Chemistry Attitudes Scale, and the Self-efficacy Scale. Results showed that chemistry laboratory anxiety…
Rathfelder, K M; Abriola, L M; Taylor, T P; Pennell, K D
2001-04-01
A numerical model of surfactant enhanced solubilization was developed and applied to the simulation of nonaqueous phase liquid recovery in two-dimensional heterogeneous laboratory sand tank systems. Model parameters were derived from independent, small-scale, batch and column experiments. These parameters included viscosity, density, solubilization capacity, surfactant sorption, interfacial tension, permeability, capillary retention functions, and interphase mass transfer correlations. Model predictive capability was assessed for the evaluation of the micellar solubilization of tetrachloroethylene (PCE) in the two-dimensional systems. Predicted effluent concentrations and mass recovery agreed reasonably well with measured values. Accurate prediction of enhanced solubilization behavior in the sand tanks was found to require the incorporation of pore-scale, system-dependent, interphase mass transfer limitations, including an explicit representation of specific interfacial contact area. Predicted effluent concentrations and mass recovery were also found to depend strongly upon the initial NAPL entrapment configuration. Numerical results collectively indicate that enhanced solubilization processes in heterogeneous, laboratory sand tank systems can be successfully simulated using independently measured soil parameters and column-measured mass transfer coefficients, provided that permeability and NAPL distributions are accurately known. This implies that the accuracy of model predictions at the field scale will be constrained by our ability to quantify soil heterogeneity and NAPL distribution.
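The rate-limited interphase mass transfer that the model had to include can be illustrated, in hedged form, with the generic linear-driving-force expression dC/dt = k_f a_nw (C_s - C); the paper's actual correlations for the mass transfer coefficient and interfacial area are not reproduced, and the numbers below are illustrative.

```python
# Generic linear-driving-force sketch of rate-limited NAPL solubilization,
# dC/dt = k_f * a_nw * (C_s - C). The actual correlations for k_f and the
# specific interfacial area a_nw used in the paper are not reproduced here;
# all values are illustrative only.

def solubilization_step(c, c_s, k_f, a_nw, dt):
    """Advance aqueous concentration c (mg/L) by one explicit time step."""
    return c + dt * k_f * a_nw * (c_s - c)

c, c_s = 0.0, 240.0        # aqueous and micellar-solubility concentrations (mg/L)
k_f, a_nw = 1e-4, 50.0     # mass transfer coefficient (cm/s), specific area (1/cm)
dt = 10.0                  # time step (s)
for _ in range(100):
    c = solubilization_step(c, c_s, k_f, a_nw, dt)
print(round(c, 1))
```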
Scale problems in assessment of hydrogeological parameters of groundwater flow models
NASA Astrophysics Data System (ADS)
Nawalany, Marek; Sinicyn, Grzegorz
2015-09-01
An overview is presented of scale problems in groundwater flow, with emphasis on the upscaling of hydraulic conductivity; it is a brief summary of the conventional upscaling approach, with some attention paid to recently emerged approaches. The focus is on the essential aspects, in contrast to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of the hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system, and (iv) continuity/granularity of natural and man-related variables of the groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - the scale of pores; meso-scale - the scale of a laboratory sample; macro-scale - the scale of typical blocks in numerical models of groundwater flow; local-scale - the scale of an aquifer/aquitard; and regional-scale - the scale of a series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample-scale are briefly referred to.
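As a concrete illustration of the sample-scale to block-scale transition discussed above, the sketch below computes the classical arithmetic, harmonic and geometric means of sample-scale conductivities, which bracket the conventional upscaled block values (the sample values are made up).

```python
# Sketch of the classical bounds used when upscaling hydraulic conductivity
# from sample-scale measurements to a numerical-model block: the arithmetic
# mean (flow parallel to layering), harmonic mean (flow across layering), and
# geometric mean (often used for 2-D statistically isotropic media).

import math

def arithmetic_mean(k):  return sum(k) / len(k)
def harmonic_mean(k):    return len(k) / sum(1.0 / ki for ki in k)
def geometric_mean(k):   return math.exp(sum(math.log(ki) for ki in k) / len(k))

k_samples = [1e-5, 5e-5, 2e-4, 8e-6]   # illustrative sample-scale K values (m/s)
print(arithmetic_mean(k_samples), harmonic_mean(k_samples), geometric_mean(k_samples))
```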
Isazadeh, Siavash; Feng, Min; Urbina Rivas, Luis Enrique; Frigon, Dominic
2014-04-15
Two pilot-scale activated sludge reactors were operated for 98 days to provide the necessary data to develop and validate a new mathematical model predicting the reduction of biosolids production by ozonation of the return activated sludge (RAS). Three ozone doses were tested during the study. In addition to the pilot-scale study, laboratory-scale experiments were conducted with mixed liquor suspended solids and with pure cultures to parameterize the biomass inactivation process during exposure to ozone. The experiments revealed that biomass inactivation occurred even at the lowest doses, but that it was not associated with extensive COD solubilization. For validation, the model was used to simulate the temporal dynamics of the pilot-scale operational data. Increasing the description accuracy of the inactivation process improved the precision of the model in predicting the operational data. Copyright © 2014 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Iniguez, J.; Raposo, V.
2009-01-01
In this paper we analyse the behaviour of a small-scale model of a magnetic levitation system based on the Inductrack concept. Drag and lift forces acting on our prototype, moving above a continuous copper track, are studied analytically following a simple low-speed approach. The experimental results are in good agreement with the theoretical…
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Marchand, R.; Ackerman, T. P.
2016-12-01
Satellite instrument simulators have emerged as a means to reduce errors in model evaluation by producing simulated or pseudo-retrievals from model fields, which account for limitations in the satellite retrieval process. Because of the mismatch in resolved scales between satellite retrievals and large-scale models, model cloud fields must first be downscaled to scales consistent with satellite retrievals. This downscaling is analogous to that required for model radiative transfer calculations. The assumption is often made in both model radiative transfer codes and satellite simulators that the unresolved clouds follow maximum-random overlap with horizontally homogeneous cloud condensate amounts. We examine errors in simulated MISR and CloudSat retrievals that arise due to these assumptions by applying the MISR and CloudSat simulators to cloud resolving model (CRM) output generated by the Super-parameterized Community Atmosphere Model (SP-CAM). Errors are quantified by comparing simulated retrievals performed directly on the CRM fields with those simulated by first averaging the CRM fields to approximately 2-degree resolution, applying a "subcolumn generator" to regenerate pseudo-resolved cloud and precipitation condensate fields, and then applying the MISR and CloudSat simulators on the regenerated condensate fields. We show that errors due to both assumptions of maximum-random overlap and homogeneous condensate are significant (relative to uncertainties in the observations and other simulator limitations). The treatment of precipitation is particularly problematic for CloudSat-simulated radar reflectivity. We introduce an improved subcolumn generator for use with the simulators, and show that these errors can be greatly reduced by replacing the maximum-random overlap assumption with the more realistic generalized overlap and incorporating a simple parameterization of subgrid-scale cloud and precipitation condensate heterogeneity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. SAND2016-7485 A
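A minimal sketch of a maximum-random overlap subcolumn generator of the kind discussed above is given below; it follows the commonly used rank-based algorithm rather than the authors' exact code, and treats condensate as horizontally homogeneous within each layer.

```python
# Hedged sketch of a maximum-random overlap "subcolumn generator" of the kind
# referred to in the abstract (a rank-based algorithm in common use; not the
# authors' exact code). Cloud condensate is treated as horizontally
# homogeneous within each layer.

import random

def maxrand_subcolumn(cloud_frac, rng=random.random):
    """Return a list of booleans (cloudy/clear) per layer, top to bottom."""
    sub = []
    x_prev, c_prev = None, None
    for c in cloud_frac:
        if x_prev is not None and x_prev > 1.0 - c_prev:
            x = x_prev                            # maximum overlap with cloudy layer above
        else:
            x = rng() * (1.0 - (c_prev or 0.0))   # random overlap within the clear part
        sub.append(x > 1.0 - c)
        x_prev, c_prev = x, c
    return sub

profile = [0.2, 0.5, 0.5, 0.0, 0.3]               # illustrative layer cloud fractions
columns = [maxrand_subcolumn(profile) for _ in range(1000)]
print(sum(col[1] for col in columns) / 1000)      # ~0.5, layer-2 cloud fraction recovered
```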
Web-Based Virtual Laboratory for Food Analysis Course
NASA Astrophysics Data System (ADS)
Handayani, M. N.; Khoerunnisa, I.; Sugiarti, Y.
2018-02-01
Implementation of learning in the food analysis course in the Program Study of Agro-industrial Technology Education faced problems. These problems include the availability of space and tools in the laboratory, which is not commensurate with the number of students, as well as a lack of interactive learning tools. On the other hand, the information technology literacy of students is quite high and the internet network is quite easily accessible on campus. This is a challenge as well as an opportunity for the development of learning media that can help optimize learning in the laboratory. This study aims to develop a web-based virtual laboratory as an alternative learning medium for the food analysis course. This research is R & D (research and development) following the Borg & Gall model. The results showed that, based on expert assessment of the web-based virtual laboratory developed, in terms of software engineering aspects, visual communication, material relevance, usefulness, and language used, it is feasible as a learning medium. The results of the small-scale and wide-scale tests show that students strongly agree with the development of the web-based virtual laboratory. The response of students to this virtual laboratory was positive. Suggestions from students provide further opportunities for improvement of the web-based virtual laboratory and should be considered in further research.
Upscaling of reaction rates in reactive transport using pore-scale reactive transport model
NASA Astrophysics Data System (ADS)
Yoon, H.; Dewers, T. A.; Arnold, B. W.; Major, J. R.; Eichhubl, P.; Srinivasan, S.
2013-12-01
Dissolved CO2 during geological CO2 storage may react with minerals in fractured rocks, confined aquifers, or faults, resulting in mineral precipitation and dissolution. The overall rate of reaction can be affected by coupled processes among hydrodynamics, transport, and reactions at the (sub) pore-scale. In this research pore-scale modeling of coupled fluid flow, reactive transport, and heterogeneous reaction at the mineral surface is applied to account for permeability alterations caused by precipitation-induced pore-blocking. This work is motivated by the observed CO2 seeps from a natural analog to geologic CO2 sequestration at Crystal Geyser, Utah. A key observation is the lateral migration of CO2 seep sites at a scale of ~ 100 meters over time. A pore-scale model provides fundamental mechanistic explanations of how calcite precipitation alters flow paths by pore plugging under different geochemical compositions and pore configurations. In addition, response functions of reaction rates will be constructed from pore-scale simulations which account for a range of reaction regimes characterized by the Damkohler and Peclet numbers. Newly developed response functions will be used in a continuum scale model that may account for large-scale phenomena mimicking lateral migration of surface CO2 seeps. Comparison of field observations and simulation results will provide mechanistic explanations of the lateral migration and enhance our understanding of subsurface processes associated with CO2 injection. This work is supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
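For reference, the sketch below shows one common convention for the Peclet and Damkohler numbers used to classify the reaction regimes mentioned in the abstract; definitions and characteristic scales vary between studies, and the values are illustrative.

```python
# Sketch of the dimensionless groups used to classify the reaction regimes
# mentioned in the abstract. Definitions vary between studies; one common
# convention is used here (characteristic length L, pore velocity v,
# molecular diffusivity D, first-order surface-reaction rate constant k).

def peclet(v, L, D):
    return v * L / D            # advection rate / diffusion rate

def damkohler_I(k, L, v):
    return k * L / v            # reaction rate / advection rate

def damkohler_II(k, L, D):
    return k * L**2 / D         # reaction rate / diffusion rate

v, L, D, k = 1e-5, 1e-4, 1e-9, 1e-3   # illustrative pore-scale values (SI units)
print(peclet(v, L, D), damkohler_I(k, L, v), damkohler_II(k, L, D))
```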
Space Laboratory on a Table Top: A Next Generative ECLSS design and diagnostic tool
NASA Technical Reports Server (NTRS)
Ramachandran, N.
2005-01-01
This paper describes the development plan for a comprehensive research and diagnostic tool for aspects of advanced life support systems in space-based laboratories. Specifically it aims to build a high fidelity tabletop model that can be used for the purpose of risk mitigation, failure mode analysis, contamination tracking, and testing reliability. We envision a comprehensive approach involving experimental work coupled with numerical simulation to develop this diagnostic tool. It envisions a 10% scale transparent model of a space platform such as the International Space Station that operates with water or a specific matched index of refraction liquid as the working fluid. This allows the scaling of a 10 ft x 10 ft x 10 ft room with air flow to a 1 ft x 1 ft x 1 ft tabletop model with water/liquid flow. Dynamic similitude for this length scale dictates model velocities to be 67% of full scale, and thereby the time scale of the model to represent 15% of the full-scale system, meaning identical processes in the model are completed in 15% of the full-scale time. The use of an index matching fluid (fluid that matches the refractive index of cast acrylic, the model material) allows making the entire model (with complex internal geometry) transparent and hence conducive to non-intrusive optical diagnostics. Using such a system, one can test environment control parameters such as core flows (axial flows) and cross flows (from registers and diffusers); investigate potential problem areas such as flow short circuits, inadequate oxygen content, and build-up of other gases beyond desirable levels; test mixing processes within the system at local nodes or compartments; and assess the overall system performance. The system allows quantitative measurements of contaminants introduced in the system and allows testing and optimizing the tracking process and removal of contaminants. The envisaged system will be modular and hence flexible for quick configuration change and subsequent testing. The data and inferences from the tests will allow for improvements in the development and design of next generation life support systems and configurations. Preliminary experimental and modeling work in this area will be presented. This involves testing of a single inlet-exit model with detailed 3-D flow visualization and quantitative diagnostics and computational modeling of the system.
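The 67% velocity and 15% time scalings quoted above follow from Reynolds-number similitude between a 1:10 water model and the full-scale air-filled volume; the sketch below reproduces that arithmetic under nominal viscosity values (an assumption, since the abstract does not state them).

```python
# Sketch of the Reynolds-number similitude behind the scaling quoted in the
# abstract: a 1:10 water model of an air-filled cabin. With nu_water/nu_air of
# roughly 1/15, matching Re gives model velocities ~67% of full scale and a
# model time scale ~15% of full scale. Viscosity values are nominal.

NU_AIR = 1.5e-5     # kinematic viscosity of air, m^2/s (nominal, ~20 C)
NU_WATER = 1.0e-6   # kinematic viscosity of water, m^2/s (nominal, ~20 C)

def reynolds_scaling(length_ratio, nu_model=NU_WATER, nu_full=NU_AIR):
    """length_ratio = L_model / L_full. Returns velocity and time ratios."""
    velocity_ratio = (1.0 / length_ratio) * (nu_model / nu_full)  # match Re
    time_ratio = length_ratio / velocity_ratio
    return velocity_ratio, time_ratio

v_ratio, t_ratio = reynolds_scaling(0.1)
print(f"V_model/V_full = {v_ratio:.2f}, t_model/t_full = {t_ratio:.2f}")  # ~0.67, ~0.15
```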
EDITORIAL: Interrelationship between plasma phenomena in the laboratory and in space
NASA Astrophysics Data System (ADS)
Koepke, Mark
2008-07-01
The premise of investigating basic plasma phenomena relevant to space is that an alliance exists between both basic plasma physicists, using theory, computer modelling and laboratory experiments, and space science experimenters, using different instruments, either flown on different spacecraft in various orbits or stationed on the ground. The intent of this special issue on interrelated phenomena in laboratory and space plasmas is to promote the interpretation of scientific results in a broader context by sharing data, methods, knowledge, perspectives, and reasoning within this alliance. The desired outcomes are practical theories, predictive models, and credible interpretations based on the findings and expertise available. Laboratory-experiment papers that explicitly address a specific space mission or a specific manifestation of a space-plasma phenomenon, space-observation papers that explicitly address a specific laboratory experiment or a specific laboratory result, and theory or modelling papers that explicitly address a connection between both laboratory and space investigations were encouraged. Attention was given to the utility of the references for readers who seek further background, examples, and details. With the advent of instrumented spacecraft, the observation of waves (fluctuations), wind (flows), and weather (dynamics) in space plasmas was approached within the framework provided by theory with intuition provided by the laboratory experiments. Ideas on parallel electric field, magnetic topology, inhomogeneity, and anisotropy have been refined substantially by laboratory experiments. Satellite and rocket observations, theory and simulations, and laboratory experiments have contributed to the revelation of a complex set of processes affecting the accelerations of electrons and ions in the geospace plasma. The processes range from meso-scale of several thousands of kilometers to micro-scale of a few meters to kilometers. Papers included in this special issue serve to synthesise our current understanding of processes related to the coupling and feedback at disparate scales. Categories of topics included here are (1) ionospheric physics and (2) Alfvén-wave physics, both of which are related to the particle acceleration responsible for auroral displays, (3) whistler-mode triggering mechanism, which is relevant to radiation-belt dynamics, (4) plasmoid encountering a barrier, which has applications throughout the realm of space and astrophysical plasmas, and (5) laboratory investigations of the entire magnetosphere or the plasma surrounding the magnetosphere. The papers are ordered from processes that take place nearest the Earth to processes that take place at increasing distances from Earth. Many advances in understanding space plasma phenomena have been linked to insight derived from theoretical modeling and/or laboratory experiments. Observations from space-borne instruments are typically interpreted using theoretical models developed to predict the properties and dynamics of space and astrophysical plasmas. The usefulness of customized laboratory experiments for providing confirmation of theory by identifying, isolating, and studying physical phenomena efficiently, quickly, and economically has been demonstrated in the past. The benefits of laboratory experiments to investigating space-plasma physics are their reproducibility, controllability, diagnosability, reconfigurability, and affordability compared to a satellite mission or rocket campaign. 
Certainly, the plasma being investigated in a laboratory device is quite different from that being measured by a spaceborne instrument; nevertheless, laboratory experiments discover unexpected phenomena, benchmark theoretical models, develop physical insight, establish observational signatures, and pioneer diagnostic techniques. Explicit reference to such beneficial laboratory contributions is occasionally left out of the citations in the space-physics literature in favor of theory-paper counterparts and, thus, the scientific support that laboratory results can provide to the development of space-relevant theoretical models is often under-recognized. It is unrealistic to expect the dimensional parameters corresponding to space plasma to be matchable in the laboratory. However, a laboratory experiment is considered well designed if the subset of parameters relevant to a specific process shares the same phenomenological regime as the subset of analogous space parameters, even if less important parameters are mismatched. Regime boundaries are assigned by normalizing a dimensional parameter to an appropriate reference or scale value to make it dimensionless and noting the values at which transitions occur in the physical behavior or approximations. An example of matching regimes for cold-plasma waves is finding a 45° diagonal line on the log--log CMA diagram along which lie both a laboratory-observed wave and a space-observed wave. In such a circumstance, a space plasma and a lab plasma will support the same kind of modes if the dimensionless parameters are scaled properly (Bellan 2006 Fundamentals of Plasma Physics (Cambridge: Cambridge University Press) p 227). The plasma source, configuration geometry, and boundary conditions associated with a specific laboratory experiment are characteristic elements that affect the plasma and plasma processes that are being investigated. Space plasma is not exempt from an analogous set of constraining factors that likewise influence the phenomena that occur. Typically, each morphologically distinct region of space has associated with it plasma that is unique by virtue of the various mechanisms responsible for the plasma's presence there, as if the plasma were produced by a unique source. Boundary effects that typically constrain the possible parameter values to lie within one or more restricted ranges are inescapable in laboratory plasma. The goal of a laboratory experiment is to examine the relevant physics within these ranges and extrapolate the results to space conditions that may or may not be subject to any restrictions on the values of the plasma parameters. The interrelationship between laboratory and space plasma experiments has been cultivated at a low level and the potential scientific benefit in this area has yet to be realized. The few but excellent examples of joint papers, joint experiments, and directly relevant cross-disciplinary citations are a direct result of the emphasis placed on this interrelationship two decades ago. Building on this special issue Plasma Physics and Controlled Fusion plans to create a dedicated webpage to highlight papers directly relevant to this field published either in the recent past or in the future. It is hoped that this resource will appeal to the readership in the laboratory-experiment and space-plasma communities and improve the cross-fertilization between them.
Developments in a methodology for the design of engineered invert traps in combined sewer systems.
Buxton, A; Tait, S; Stovin, V; Saul, A
2002-01-01
Sediments within sewers can have a significant effect on the operation of the sewer system and on the surrounding natural and urban environment. One possible method for the management of sewer sediments is the use of slotted invert traps. Although invert traps can be used to selectively trap only inorganic bedload material, little is known with regard to the design of these structures. This paper presents results from a laboratory investigation comparing the trapping performance of three slot size configurations of a laboratory-scale invert trap. The paper also presents comparative results from a two-dimensional computational model utilising stochastic particle tracking. This investigation shows that particle tracking consistently over-predicts sediment retention efficiencies observed within the laboratory model.
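The stochastic particle tracking named above can be sketched, in hedged form, as advection by the local velocity field plus a random-walk dispersive step; the actual 2-D CFD flow field and trap geometry are not reproduced, and a uniform flow is used purely for illustration.

```python
# Sketch of the stochastic (random-walk) particle tracking technique named in
# the abstract: each sediment particle is advected by the local flow field and
# perturbed by a diffusive step. The 2-D CFD velocity field itself is not
# reproduced; a uniform flow is used for illustration.

import random, math

def track_particle(x, y, u, v, D, dt, n_steps):
    """Advect a particle with velocities (u, v) plus random-walk dispersion D."""
    for _ in range(n_steps):
        x += u * dt + math.sqrt(2.0 * D * dt) * random.gauss(0.0, 1.0)
        y += v * dt + math.sqrt(2.0 * D * dt) * random.gauss(0.0, 1.0)
    return x, y

print(track_particle(0.0, 0.05, u=0.5, v=0.0, D=1e-4, dt=0.01, n_steps=200))
```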
Constructing constitutive relationships for seismic and aseismic fault slip
Beeler, N.M.
2009-01-01
For the purpose of modeling natural fault slip, a useful result from an experimental fault mechanics study would be a physically-based constitutive relation that well characterizes all the relevant observations. This report describes an approach for constructing such equations. Where possible the construction intends to identify or, at least, attribute physical processes and contact scale physics to the observations such that the resulting relations can be extrapolated in conditions and scale between the laboratory and the Earth. The approach is developed as an alternative but is based on Ruina (1983) and is illustrated initially by constructing a couple of relations from that study. In addition, two example constitutive relationships are constructed; these describe laboratory observations not well-modeled by Ruina's equations: the unexpected shear-induced weakening of silica-rich rocks at high slip speed (Goldsby and Tullis, 2002) and fault strength in the brittle ductile transition zone (Shimamoto, 1986). The examples, provided as illustration, may also be useful for quantitative modeling.
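As a reminder of the starting point the report builds on, the sketch below writes out the Ruina (1983) rate-and-state friction formulation with the one-state-variable "slip" evolution law; the parameter values are illustrative and are not taken from the report.

```python
# Sketch of the rate-and-state friction framework of Ruina (1983) that the
# report takes as its starting point: velocity-dependent friction plus one
# state variable evolving under the "slip" law. Parameter values are
# illustrative, not taken from the report.

import math

def friction(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=1e-5):
    """Friction coefficient for slip speed v (m/s) and state theta (s)."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def dtheta_dt_slip_law(v, theta, dc=1e-5):
    """Ruina 'slip' evolution law for the state variable."""
    x = v * theta / dc
    return -x * math.log(x)

# Steady state at v0: theta_ss = dc / v, and friction returns mu0.
theta_ss = 1e-5 / 1e-6
print(friction(1e-6, theta_ss), dtheta_dt_slip_law(1e-6, theta_ss))  # 0.6, 0.0
```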
2009-11-04
Air, low-temperature plasma chemistry kinetic model (Nonequilibrium Thermodynamics Laboratories, The Ohio State University). These problems require separate analysis: nanosecond-pulse plasma/sheath models cannot incorporate detailed reactive plasma chemistry (too many species, ~100, and reactions, ~1000), while detailed quasi-neutral plasma chemistry models cannot incorporate repetitive, nanosecond-time-scale sheath dynamics and plasma ...
Biogeochemical metabolic modeling of methanogenesis by Methanosarcina barkeri
NASA Astrophysics Data System (ADS)
Jensvold, Z. D.; Jin, Q.
2015-12-01
Methanogenesis, the biological process of methane production, is the final step of natural organic matter degradation. In studying natural methanogenesis, important questions include how fast methanogenesis proceeds and how methanogens adapt to the environment. To address these questions, we propose a new approach - biogeochemical reaction modeling - by simulating the metabolic networks of methanogens. Biogeochemical reaction modeling combines geochemical reaction modeling and genome-scale metabolic modeling. Geochemical reaction modeling focuses on the speciation of electron donors and acceptors in the environment, and therefore the energy available to methanogens. Genome-scale metabolic modeling predicts microbial rates and metabolic strategies. Specifically, this approach describes methanogenesis using an enzyme network model, and computes enzyme rates by accounting for both the kinetics and thermodynamics. The network model is simulated numerically to predict enzyme abundances and rates of methanogen metabolism. We applied this new approach to Methanosarcina barkeri strain fusaro, a model methanogen that makes methane by reducing carbon dioxide and oxidizing dihydrogen. The simulation results match well with the results of previous laboratory experiments, including the magnitude of proton motive force and the kinetic parameters of Methanosarcina barkeri. The results also predict that in natural environments, the configuration of methanogenesis network, including the concentrations of enzymes and metabolites, differs significantly from that under laboratory settings.
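A hedged sketch of the kind of rate law that couples kinetics and thermodynamics, as described above, is a Monod-type kinetic factor multiplied by a thermodynamic potential factor; the enzyme-network model of the study is not reproduced, and the parameters below are illustrative.

```python
# Hedged sketch of a rate law that combines kinetics and thermodynamics, as
# described in the abstract: a Monod-type kinetic factor multiplied by a
# thermodynamic factor F_T = 1 - exp((dG + m*dG_P)/(chi*R*T)). Parameter
# values are illustrative, not taken from the study.

import math

R = 8.314e-3   # gas constant, kJ/(mol K)

def methanogenesis_rate(k, h2, k_h2, dG_rxn, dG_atp=45.0, m=0.5, chi=2.0, T=298.15):
    """Rate = kinetic factor (Monod in H2) * thermodynamic factor."""
    kinetic = h2 / (k_h2 + h2)
    f_t = 1.0 - math.exp((dG_rxn + m * dG_atp) / (chi * R * T))
    return k * kinetic * max(f_t, 0.0)   # rate goes to zero near equilibrium

print(methanogenesis_rate(k=1.0, h2=1e-7, k_h2=1e-6, dG_rxn=-40.0))
```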
A perspective on modeling the multiscale response of energetic materials
NASA Astrophysics Data System (ADS)
Rice, Betsy M.
2017-01-01
The response of an energetic material to insult is perhaps one of the most difficult processes to model due to concurrent chemical and physical phenomena occurring over scales ranging from atomistic to continuum. Unraveling the interdependencies of these complex processes across the scales through modeling can only be done within a multiscale framework. In this paper, I will describe progress in the development of a predictive, experimentally validated multiscale reactive modeling capability for energetic materials at the Army Research Laboratory. I will also describe new challenges and research opportunities that have arisen in the course of our development which should be pursued in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutierrez, Marte
The research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: 1) Develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation. 2) Perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator. 3) Perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport. 4) Test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production. 5) Develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: 1) A true-3D hydro-thermal fracturing computer code that is particularly suited to EGS, 2) Documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock, 3) Documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications, and 4) Database of monitoring data, with focus on Acoustic Emissions (AE) from lab scale modeling and field case histories of EGS reservoir creation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutierrez, Marte
2013-12-31
This research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation; perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator; perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport; test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production; and develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: a true-3D hydro-thermal fracturing computer code that is particularly suited to EGS; documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock; documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications; and a database of monitoring data, with focus on Acoustic Emissions (AE) from lab scale modeling and field case histories of EGS reservoir creation.
Prediction of Gas Injection Performance for Heterogeneous Reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blunt, Martin J.; Orr, Franklin M.
This report describes research carried out in the Department of Petroleum Engineering at Stanford University from September 1997 - September 1998 under the second year of a three-year grant from the Department of Energy on the "Prediction of Gas Injection Performance for Heterogeneous Reservoirs." The research effort is an integrated study of the factors affecting gas injection, from the pore scale to the field scale, and involves theoretical analysis, laboratory experiments, and numerical simulation. The original proposal described research in four areas: (1) Pore scale modeling of three phase flow in porous media; (2) Laboratory experiments and analysis of factors influencing gas injection performance at the core scale with an emphasis on the fundamentals of three phase flow; (3) Benchmark simulations of gas injection at the field scale; and (4) Development of a streamline-based reservoir simulator. Each stage of the research is planned to provide input and insight into the next stage, such that at the end we should have an integrated understanding of the key factors affecting field scale displacements.
Computing the universe: how large-scale simulations illuminate galaxies and dark energy
NASA Astrophysics Data System (ADS)
O'Shea, Brian
2015-04-01
High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.
Effects of pore-scale physics on uranium geochemistry in Hanford sediments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Qinhong; Ewing, Robert P.
Overall, this work examines a key scientific issue, mass transfer limitations at the pore-scale, using both new instruments with high spatial resolution, and new conceptual and modeling paradigms. The complementary laboratory and numerical approaches connect pore-scale physics to macroscopic measurements, providing a previously elusive scale integration. This Exploratory research project produced five peer-reviewed journal publications and eleven scientific presentations. This work provides new scientific understanding, allowing the DOE to better incorporate coupled physical and chemical processes into decision making for environmental remediation and long-term stewardship.
NASA Astrophysics Data System (ADS)
Sobolik, S. R.; Gomez, S. P.; Matteo, E. N.; Stormont, J.
2014-12-01
This paper will present the results of large-scale three-dimensional calculations simulating the hydrological-mechanical behavior of a CO2 injection reservoir and the resulting effects on wellbore casings and sealant repair materials. A critical aspect of designing effective wellbore seal repair materials is predicting thermo-mechanical perturbations in local stress that can compromise seal integrity. The DOE-NETL project "Wellbore Seal Repair Using Nanocomposite Materials," is interested in the stress-strain history of abandoned wells, as well as changes in local pressure, stress, and temperature conditions that accompany carbon dioxide injection or brine extraction. Two distinct computational models comprise the current modeling effort. The first is a field scale model that uses the stratigraphy, material properties, and injection history from a pilot CO2 injection operation in Cranfield, MS to develop a stress-strain history for wellbore locations from 100 to 400 meters from an injection well. The results from the field scale model are used as input to a more detailed model of a wellbore casing. The 3D wellbore model examines the impacts of various loading scenarios on a casing structure. This model has been developed in conjunction with bench-top experiments of an integrated seal system in an idealized scaled wellbore mock-up being used to test candidate seal repair materials. The results from these models will be used to estimate the necessary mechanical properties needed for a successful repair material. This material is based upon work supported by the US Department of Energy (DOE) National Energy Technology Laboratory (NETL) under Grant Number DE-FE0009562. This project is managed and administered by the Storage Division of the NETL and funded by DOE/NETL and cost-sharing partners. This work was funded in part by the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the US Department of Energy, Office of Science, Office of Basic Energy Sciences under Award DE-SC-0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Sobolik, S. R.; Matteo, E. N.; Dewers, T. A.; Newell, P.; Gomez, S. P.; Stormont, J.
2014-12-01
This paper will present the results of large-scale three-dimensional calculations simulating the hydrological-mechanical behavior of a CO2 injection reservoir and the resulting effects on wellbore casings and sealant repair materials. A critical aspect of designing effective wellbore seal repair materials is predicting thermo-mechanical perturbations in local stress that can compromise seal integrity. The DOE-NETL project "Wellbore Seal Repair Using Nanocomposite Materials," is interested in the stress-strain history of abandoned wells, as well as changes in local pressure, stress, and temperature conditions that accompany carbon dioxide injection or brine extraction. Two distinct computational models comprise the current modeling effort. The first is a field scale model that uses the stratigraphy, material properties, and injection history from a pilot CO2 injection operation in Cranfield, MS to develop a stress-strain history for wellbore locations from 100 to 400 meters from an injection well. The results from the field scale model are used as input to a more detailed model of a wellbore casing. The 3D wellbore model examines the impacts of various loading scenarios on a casing structure. This model has been developed in conjunction with bench-top experiments of an integrated seal system in an idealized scaled wellbore mock-up being used to test candidate seal repair materials. The results from these models will be used to estimate the necessary mechanical properties needed for a successful repair material. This material is based upon work supported by the U.S. Department of Energy (DOE) National Energy Technology Laboratory (NETL) under Grant Number DE-FE0009562. This project is managed and administered by the University of New Mexico and funded by DOE/NETL and cost-sharing partners. This work was funded in part by the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award DE-SC-0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Sobolik, S. R.; Gomez, S. P.; Matteo, E. N.; Stormont, J.
2015-12-01
This paper will present the results of large-scale three-dimensional calculations simulating the hydrological-mechanical behavior of a CO2 injection reservoir and the resulting effects on wellbore casings and sealant repair materials. A critical aspect of designing effective wellbore seal repair materials is predicting thermo-mechanical perturbations in local stress that can compromise seal integrity. The DOE-NETL project "Wellbore Seal Repair Using Nanocomposite Materials," is interested in the stress-strain history of abandoned wells, as well as changes in local pressure, stress, and temperature conditions that accompany carbon dioxide injection or brine extraction. Two distinct computational models comprise the current modeling effort. The first is a field scale model that uses the stratigraphy, material properties, and injection history from a pilot CO2 injection operation in Cranfield, MS to develop a stress-strain history for wellbore locations from 100 to 400 meters from an injection well. The results from the field scale model are used as input to a more detailed model of a wellbore casing. The 3D wellbore model examines the impacts of various loading scenarios on a casing structure. This model has been developed in conjunction with bench-top experiments of an integrated seal system in an idealized scaled wellbore mock-up being used to test candidate seal repair materials. The results from these models will be used to estimate the necessary mechanical properties needed for a successful repair material. This material is based upon work supported by the US Department of Energy (DOE) National Energy Technology Laboratory (NETL) under Grant Number DE-FE0009562. This project is managed and administered by the Storage Division of the NETL and funded by DOE/NETL and cost-sharing partners. This work was funded in part by the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the US Department of Energy, Office of Science, Office of Basic Energy Sciences under Award DE-SC-0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clauss, D.B.
Analyses of a 1:6-scale reinforced concrete containment model that will be tested to failure at Sandia National Laboratories in the spring of 1987 were conducted by the following organizations in the United States and Europe: Sandia National Laboratories (USA), Argonne National Laboratory (USA), Electric Power Research Institute (USA), Commissariat a L'Energie Atomique (France), HM Nuclear Installations Inspectorate (UK), Comitato Nazionale per la ricerca e per lo sviluppo dell'Energia Nucleare e delle Energie Alternative (Italy), UK Atomic Energy Authority, Safety and Reliability Directorate (UK), Gesellschaft fuer Reaktorsicherheit (FRG), Brookhaven National Laboratory (USA), and Central Electricity Generating Board (UK). Each organization was supplied with a standard information package, which included construction drawings and actual material properties for most of the materials used in the model. Each organization worked independently using their own analytical methods. This report includes descriptions of the various analytical approaches and pretest predictions submitted by each organization. Significant milestones that occur with increasing pressure, such as damage to the concrete (cracking and crushing) and yielding of the steel components, and the failure pressure (capacity) and failure mechanism are described. Analytical predictions for pressure histories of strain in the liner and rebar and displacements are compared at locations where experimental results will be available after the test. Thus, these predictions can be compared to one another and to experimental results after the test.
High subsonic flow tests of a parallel pipe followed by a large area ratio diffuser
NASA Technical Reports Server (NTRS)
Barna, P. S.
1975-01-01
Experiments were performed on a pilot model duct system in order to explore its aerodynamic characteristics. The model was scaled from a design projected for the high speed operation mode of the Aircraft Noise Reduction Laboratory. The test results show that the model performed satisfactorily and therefore the projected design will most likely meet the specifications.
NASA Astrophysics Data System (ADS)
Lin, Shian-Jiann; Harris, Lucas; Chen, Jan-Huey; Zhao, Ming
2014-05-01
A multi-scale High-Resolution Atmosphere Model (HiRAM) is being developed at NOAA/Geophysical Fluid Dynamics Laboratory. The model's dynamical framework is the non-hydrostatic extension of the vertically Lagrangian finite-volume dynamical core (Lin 2004, Monthly Wea. Rev.) constructed on a stretchable (via Schmidt transformation) cubed-sphere grid. Physical parametrizations originally designed for IPCC-type climate predictions are in the process of being modified and made more "scale-aware", in an effort to make the model suitable for multi-scale weather-climate applications, with horizontal resolution ranging from 1 km (near the target high-resolution region) to as coarse as 400 km (near the antipodal point). One of the main goals of this development is to enable simulation of high impact weather phenomena (such as tornadoes, thunderstorms, category-5 hurricanes) within an IPCC-class climate modeling system previously thought impossible. We will present preliminary results, covering a very wide spectrum of temporal-spatial scales, ranging from simulation of tornado genesis (hours), Madden-Julian Oscillations (intra-seasonal), tropical cyclones (seasonal), to Quasi Biennial Oscillations (intra-decadal), using the same global multi-scale modeling system.
NASA Astrophysics Data System (ADS)
Bonde, Jeffrey
2018-04-01
The dynamics of a magnetized, expanding plasma with a high ratio of kinetic energy density to ambient magnetic field energy density, or β, are examined by adapting a model of gaseous bubbles expanding in liquids as developed by Lord Rayleigh. New features include scale magnitudes and evolution of the electric fields in the system. The collisionless coupling between the expanding and ambient plasma due to these fields is described as well as the relevant scaling relations. Several different responses of the ambient plasma to the expansion are identified in this model, and for most laboratory experiments, ambient ions should be pulled inward, against the expansion due to the dominance of the electrostatic field.
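The classical Rayleigh equation for a gaseous bubble expanding in a liquid, which the abstract adapts, can be integrated with a few lines of code; the plasma-specific extensions (electric-field scaling, beta dependence, collisionless coupling) are not reproduced here, and the values are illustrative.

```python
# Hedged sketch of the classical Rayleigh equation for a gaseous bubble
# expanding in a liquid, R*R'' + (3/2)*R'^2 = (p_in - p_inf)/rho, which the
# abstract adapts to a magnetized plasma expansion. The plasma-specific terms
# (electric fields, beta scaling) are not reproduced here; values are illustrative.

def rayleigh_step(R, Rdot, p_in, p_inf, rho, dt):
    """One explicit time step of the Rayleigh bubble equation."""
    Rddot = ((p_in - p_inf) / rho - 1.5 * Rdot**2) / R
    return R + dt * Rdot, Rdot + dt * Rddot

R, Rdot = 1e-3, 0.0            # initial radius (m) and wall speed (m/s)
p_in, p_inf, rho = 2e5, 1e5, 1000.0
dt = 1e-7
for _ in range(1000):
    R, Rdot = rayleigh_step(R, Rdot, p_in, p_inf, rho, dt)
print(R, Rdot)
```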
NASA Astrophysics Data System (ADS)
Mount, G. J.; Comas, X.
2015-12-01
Subsurface water flow within the Biscayne aquifer is controlled by the heterogeneous distribution of porosity and permeability in the karst Miami Limestone and the presence of numerous dissolution and mega-porous features. The dissolution features and other high porosity areas can create preferential flow paths and direct recharge to the aquifer, which may not be accurately conceptualized in groundwater flow models. As hydrologic conditions are undergoing restoration in the Everglades, understanding the distribution of these high porosity areas within the subsurface would create a better understanding of subsurface flow. This research utilizes ground penetrating radar to estimate the spatial variability of porosity and dielectric permittivity of the Miami Limestone at centimeter scale resolution at the laboratory scale. High frequency GPR antennas were used to measure changes in electromagnetic wave velocity through limestone samples under varying volumetric water contents. The Complex Refractive Index Model (CRIM) was then applied in order to estimate porosity and dielectric permittivity of the solid phase of the limestone. Porosity estimates ranged from 45.2-66.0% from the CRIM model and correspond well with estimates of porosity from analytical and digital image techniques. Dielectric permittivity values of the limestone solid phase ranged from 7.0 and 13.0, which are similar to values in the literature. This research demonstrates the ability of GPR to identify the cm scale spatial variability of aquifer properties that influence subsurface water flow which could have implications for groundwater flow models in the Biscayne and potentially other shallow karst aquifers.
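The CRIM relations used above can be sketched as follows: bulk permittivity is obtained from the measured electromagnetic wave velocity, and porosity is recovered by inverting the volumetric mixing rule, shown here for the dry-sample case with an assumed solid-phase permittivity.

```python
# Sketch of the CRIM (Complex Refractive Index Model) relations used in the
# abstract: bulk permittivity from GPR wave velocity, and porosity recovered
# by inverting the volumetric mixing rule. Shown for the dry-sample case;
# permittivity values are illustrative.

C = 0.3   # speed of light in vacuum, m/ns

def bulk_permittivity(velocity_m_per_ns):
    """kappa_eff from measured EM wave velocity, v = c / sqrt(kappa_eff)."""
    return (C / velocity_m_per_ns) ** 2

def porosity_dry_crim(kappa_eff, kappa_solid, kappa_air=1.0):
    """Invert sqrt(k_eff) = (1 - phi)*sqrt(k_s) + phi*sqrt(k_air) for phi."""
    return (kappa_solid**0.5 - kappa_eff**0.5) / (kappa_solid**0.5 - kappa_air**0.5)

k_eff = bulk_permittivity(0.145)            # velocity picked from a GPR trace
print(round(porosity_dry_crim(k_eff, kappa_solid=9.0), 2))
```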
The U.S. EPA National Health and Environmental Effects Research Laboratory's (NHEERL) Wildlife Research Strategy was developed to provide methods, models and data to address concerns related to toxic chemicals and habitat alteration in the context of wildlife risk assessment and ...
NASA Technical Reports Server (NTRS)
Lugo, Rafael A.; Tolson, Robert H.; Schoenenberger, Mark
2013-01-01
As part of the Mars Science Laboratory (MSL) trajectory reconstruction effort at NASA Langley Research Center, free-flight aeroballistic experiments of instrumented MSL scale models were conducted at Aberdeen Proving Ground in Maryland. The models carried an inertial measurement unit (IMU) and a flush air data system (FADS) similar to the MSL Entry Atmospheric Data System (MEADS) that provided data types similar to those from the MSL entry. Multiple sources of redundant data were available, including tracking radar and on-board magnetometers. These experimental data enabled the testing and validation of the various tools and methodologies that will be used for MSL trajectory reconstruction. The aerodynamic parameters Mach number, angle of attack, and sideslip angle were estimated using minimum variance with a priori to combine the pressure data and pre-flight computational fluid dynamics (CFD) data. Both linear and non-linear pressure model terms were also estimated for each pressure transducer as a measure of the errors introduced by CFD and transducer calibration. Parameter uncertainties were estimated using a "consider parameters" approach.
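The minimum variance estimator with a priori information named above can be sketched for a linear measurement model as below; the actual MEADS-like pressure model is nonlinear and is not reproduced, and the state, measurement matrix and covariances are illustrative.

```python
# Hedged sketch of a minimum-variance update with a priori information, the
# general estimator named in the abstract (shown here for a linear measurement
# model; the actual MSL/MEADS pressure model is nonlinear and not reproduced).

import numpy as np

def min_variance_update(x_a, P_a, y, H, R):
    """x_a, P_a: a priori state and covariance; y = H x + noise with covariance R."""
    S = H @ P_a @ H.T + R                     # innovation covariance
    K = P_a @ H.T @ np.linalg.inv(S)          # gain
    x_hat = x_a + K @ (y - H @ x_a)
    P_hat = (np.eye(len(x_a)) - K @ H) @ P_a
    return x_hat, P_hat

# Illustrative 2-state example (e.g. angle of attack and sideslip):
x_a = np.array([0.0, 0.0]); P_a = np.diag([4.0, 4.0])
H = np.array([[1.0, 0.3], [0.2, 1.0]]); R = np.diag([0.5, 0.5])
y = np.array([2.0, -1.0])
print(min_variance_update(x_a, P_a, y, H, R)[0])
```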
Integrated Energy Solutions | NREL
[Page images: a large, color 3D visualization screen; a woman and a man testing a scaled model of a microgrid controller in a laboratory setting.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srivastava, V.; Fannin, K.F.; Biljetina, R.
1986-07-01
The Institute of Gas Technology (IGT) conducted a comprehensive laboratory-scale research program to develop and optimize the anaerobic digestion process for producing methane from water hyacinth and sludge blends. This study focused on digester design and operating techniques, which gave improved methane yields and production rates over those observed using conventional digesters. The final digester concept and the operating experience were utilized to design and operate a large-scale experimental test unit (ETU) at Walt Disney World, Florida. This paper describes the novel digester design, operating techniques, and the results obtained in the laboratory. The paper also discusses a kinetic model which predicts methane yield, methane production rate, and digester effluent solids as a function of retention time. This model was successfully utilized to predict the performance of the ETU. 15 refs., 6 figs., 6 tabs.
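The paper's kinetic model is not reproduced here; as a hedged stand-in, the sketch below uses the widely cited Chen-Hashimoto form, which likewise predicts methane yield and volumetric production rate as functions of retention time, with purely illustrative parameters.

```python
# Hedged sketch of a retention-time kinetic model of the general type described
# (methane yield and production rate as functions of retention time). The
# paper's own model is not reproduced; the widely used Chen-Hashimoto form is
# shown instead, with illustrative parameters.

def methane_volumetric_rate(hrt_days, b0=0.3, s0=50.0, mu_max=0.35, K=0.8):
    """Volumetric CH4 productivity (L CH4 / L digester / day).
    b0: ultimate yield (L CH4/g VS), s0: influent VS (g/L)."""
    return (b0 * s0 / hrt_days) * (1.0 - K / (hrt_days * mu_max - 1.0 + K))

def methane_yield(hrt_days, b0=0.3, mu_max=0.35, K=0.8):
    """Specific yield (L CH4 / g VS fed)."""
    return b0 * (1.0 - K / (hrt_days * mu_max - 1.0 + K))

for hrt in (5, 10, 20, 40):   # retention times in days
    print(hrt, round(methane_yield(hrt), 3), round(methane_volumetric_rate(hrt), 2))
```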
NASA Astrophysics Data System (ADS)
Cox, S. J.; Wyborn, L. A.; Fraser, R.; Rankine, T.; Woodcock, R.; Vote, J.; Evans, B.
2012-12-01
The Virtual Geophysics Laboratory (VGL) is a web portal that provides geoscientists with an integrated online environment that: seamlessly accesses geophysical and geoscience data services from the AuScope national geoscience information infrastructure; loosely couples these data to a variety of geoscience software tools; and provides large-scale processing facilities via cloud computing. VGL is a collaboration between CSIRO, Geoscience Australia, National Computational Infrastructure, Monash University, Australian National University and the University of Queensland. The VGL provides a distributed system whereby a user can enter an online virtual laboratory to seamlessly connect to OGC web services for geoscience data. The data are supplied in open standards formats using international standards like GeoSciML. A VGL user uses a web mapping interface to discover and filter the data sources using spatial and attribute filters to define a subset. Once the data are selected the user is not required to download them. VGL collates the service query information for later in the processing workflow, where it will be staged directly to the computing facilities. The combination of deferring data download and access to cloud computing enables VGL users to access their data at higher resolutions and to undertake larger-scale inversions, more complex models and simulations than their own local computing facilities might allow. Inside the Virtual Geophysics Laboratory, the user has access to a library of existing models, complete with exemplar workflows for specific scientific problems based on those models. For example, the user can load a geological model published by Geoscience Australia, apply a basic deformation workflow provided by a CSIRO scientist, and have it run in a scientific code from Monash. Finally the user can publish these results to share with a colleague or cite in a paper. This opens new opportunities for access and collaboration as all the resources (models, code, data, processing) are shared in the one virtual laboratory. VGL provides end users with access to an intuitive, user-centered interface that leverages cloud storage and cloud and cluster processing from both the research communities and commercial suppliers (e.g. Amazon). As the underlying data and information services are agnostic of the scientific domain, they can support many other data types. This fundamental characteristic results in a highly reusable virtual laboratory infrastructure that could also be used, for example, for natural hazards, satellite processing, soil geochemistry, climate modeling, and agricultural crop modeling.
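Since the portal builds on standard OGC web services, the data-discovery step can be illustrated with a plain WFS GetFeature request; the endpoint URL, feature type name, and bounding box below are placeholders, not the actual VGL or AuScope services.

```python
import requests

# Hypothetical WFS endpoint and GeoSciML feature type; both are placeholders.
WFS_URL = "https://example.org/geoserver/wfs"

params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "gsml:GeologicUnit",                 # illustrative feature type
    "bbox": "130.0,-32.0,140.0,-26.0,EPSG:4326",     # spatial filter
    "maxFeatures": "50",
}

resp = requests.get(WFS_URL, params=params, timeout=60)
resp.raise_for_status()
with open("geologic_units.gml", "wb") as f:
    f.write(resp.content)   # GML/GeoSciML payload, to be staged to later processing steps
```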
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siranosian, Antranik Antonio; Schembri, Philip Edward; Luscher, Darby Jon
The Los Alamos National Laboratory's Weapon Systems Engineering division's Advanced Engineering Analysis group employs material constitutive models of composites for use in simulations of components and assemblies of interest. Experimental characterization, modeling and prediction of the macro-scale (i.e. continuum) behaviors of these composite materials is generally difficult because they exhibit nonlinear behaviors on the meso- (e.g. micro-) and macro-scales. Furthermore, it can be difficult to measure and model the mechanical responses of the individual constituents and constituent interactions in the composites of interest. Current efforts to model such composite materials rely on semi-empirical models in which meso-scale properties are inferred from continuum level testing and modeling. The proposed approach involves removing the difficulties of interrogating and characterizing micro-scale behaviors by scaling-up the problem to work with macro-scale composites, with the intention of developing testing and modeling capabilities that will be applicable to the mesoscale. This approach assumes that the physical mechanisms governing the responses of the composites on the meso-scale are reproducible on the macro-scale. Working on the macro-scale simplifies the quantification of composite constituents and constituent interactions so that efforts can be focused on developing material models and the testing techniques needed for calibration and validation. Other benefits to working with macro-scale composites include the ability to engineer and manufacture—potentially using additive manufacturing techniques—composites that will support the application of advanced measurement techniques such as digital volume correlation and three-dimensional computed tomography imaging, which would aid in observing and quantifying complex behaviors that are exhibited in the macro-scale composites of interest. Ultimately, the goal of this new approach is to develop a meso-scale composite modeling framework, applicable to many composite materials, and the corresponding macroscale testing and test data interrogation techniques to support model calibration.
Evaluation of the laboratory mouse model for screening topical mosquito repellents.
Rutledge, L C; Gupta, R K; Wirtz, R A; Buescher, M D
1994-12-01
Eight commercial repellents were tested against Aedes aegypti 0 and 4 h after application in serial dilution to volunteers and laboratory mice. Results were analyzed by multiple regression of percentage of biting (probit scale) on dose (logarithmic scale) and time. Empirical correction terms for conversion of values obtained in tests on mice to values expected in tests on human volunteers were calculated from data obtained on 4 repellents and evaluated with data obtained on 4 others. Corrected values from tests on mice did not differ significantly from values obtained in tests on volunteers. Test materials used in the study were dimethyl phthalate, butopyronoxyl, butoxy polypropylene glycol, MGK Repellent 11, deet, ethyl hexanediol, Citronyl, and dibutyl phthalate.
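A hedged sketch of the kind of analysis described (multiple regression of biting percentage on a probit scale against log dose and time) is given below; the dose-response data, group sizes, and starting values are invented for illustration only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical grouped data: dose (mg/cm^2), hours after application,
# number biting, number exposed.
dose  = np.array([0.01, 0.05, 0.25, 1.25, 0.01, 0.05, 0.25, 1.25])
time  = np.array([0.0,  0.0,  0.0,  0.0,  4.0,  4.0,  4.0,  4.0])
bites = np.array([38,   22,   9,    2,    45,   30,   15,   5])
n     = np.full_like(bites, 50)

# Design matrix: intercept, log10(dose), time
X = np.column_stack([np.ones_like(dose), np.log10(dose), time])

def neg_log_lik(beta):
    p = norm.cdf(X @ beta)                       # probit link
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(bites * np.log(p) + (n - bites) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=np.zeros(3), method="Nelder-Mead")
b0, b_dose, b_time = fit.x
print("probit slope on log10(dose):", b_dose, " time effect:", b_time)
```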
Scientific management and implementation of the geophysical fluid flow cell for Spacelab missions
NASA Technical Reports Server (NTRS)
Hart, J.; Toomre, J.
1980-01-01
Scientific support for the spherical convection experiment to be flown on Spacelab 3 was developed. This experiment takes advantage of the zero gravity environment of the orbiting space laboratory to conduct fundamental fluid flow studies concerned with thermally driven motions inside a rotating spherical shell with radial gravity. Such a system is a laboratory analog of large scale atmospheric and solar circulations. The radial body force necessary to model gravity correctly is obtained by using dielectric polarization forces in a radially varying electric field to produce radial accelerations proportional to temperature. This experiment will answer fundamental questions concerned with establishing the preferred modes of large scale motion in planetary and stellar atmospheres.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2018-01-23
Deploying an ADMS or looking to optimize its value? NREL offers a low-cost, low-risk evaluation platform for assessing ADMS performance. The National Renewable Energy Laboratory (NREL) has developed a vendor-neutral advanced distribution management system (ADMS) evaluation platform and is expanding its capabilities. The platform uses actual grid-scale hardware, large-scale distribution system models, and advanced visualization to simulate real-world conditions for the most accurate ADMS evaluation and experimentation.
1992-04-01
Manager: ARDE, Inc. as prime contractor/principal investigator, with Physics International as subcontractor for failure-modeling computer calculations and the Air Force Astronautics Laboratory (EAFB) for full-scale composite vessel testing. In addition, important contributions were made by Dr. Yen Pan... Appendix E - 16" and 22" diameter PSC vessel tests: E.1 - full-scale composite vessel tests, instrumentation requirements/procedures; E.2 - 16" and 22" diameter PSC...
Advanced core-analyses for subsurface characterization
NASA Astrophysics Data System (ADS)
Pini, R.
2017-12-01
The heterogeneity of geological formations varies over a wide range of length scales and represents a major challenge for predicting the movement of fluids in the subsurface. Although they are inherently limited in the accessible length-scale, laboratory measurements on reservoir core samples still represent the only way to make direct observations on key transport properties. Yet, properties derived on these samples are of limited use and should be regarded as sample-specific (or `pseudos'), if the presence of sub-core scale heterogeneities is not accounted for in data processing and interpretation. The advent of imaging technology has significantly reshaped the landscape of so-called Special Core Analysis (SCAL) by providing unprecedented insight into rock structure and processes down to the scale of a single pore throat (i.e. the scale at which all reservoir processes operate). Accordingly, improved laboratory workflows are needed that make use of such a wealth of information by, e.g., referring to the internal structure of the sample and in-situ observations, to obtain accurate parameterisation of both rock- and flow-properties that can be used to populate numerical models. We report here on the development of such a workflow for the study of solute mixing and dispersion during single- and multi-phase flows in heterogeneous porous systems through a unique combination of two complementary imaging techniques, namely X-ray Computed Tomography (CT) and Positron Emission Tomography (PET). The experimental protocol is applied to both synthetic and natural porous media, and it integrates (i) macroscopic observations (tracer effluent curves), (ii) sub-core scale parameterisation of rock heterogeneities (e.g., porosity, permeability and capillary pressure), and direct 3D observation of (iii) fluid saturation distribution and (iv) the dynamic spreading of the solute plumes. Suitable mathematical models are applied to reproduce experimental observations, including both 1D and 3D numerical schemes populated with the parameterisation above. While it validates the core-flooding experiments themselves, the calibrated mathematical model represents a key element for extending them to conditions prevalent in the subsurface, which would otherwise not be attainable in the laboratory.
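To make the macroscopic observation step concrete, the sketch below fits a 1D advection-dispersion breakthrough solution to a tracer effluent curve; the core length, observation times, and concentrations are illustrative numbers, not data from the study, and the 3D heterogeneous case would require the imaging-based parameterisation described above.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

L = 0.10  # core length in metres (hypothetical)

def breakthrough(t, v, D):
    """Approximate 1D advection-dispersion solution for a step tracer input."""
    return 0.5 * erfc((L - v * t) / (2.0 * np.sqrt(D * t)))

# Hypothetical effluent tracer curve: time in seconds, normalised concentration
t_obs = np.array([1000, 2000, 3000, 4000, 5000, 6000, 8000], dtype=float)
c_obs = np.array([0.01, 0.08, 0.35, 0.62, 0.82, 0.93, 0.99])

(v_fit, D_fit), _ = curve_fit(breakthrough, t_obs, c_obs,
                              p0=[L / 4000.0, 1e-7], maxfev=10000)
dispersivity = D_fit / v_fit   # core-averaged dispersivity, alpha = D/v
print(f"pore velocity ~ {v_fit:.2e} m/s, dispersion coeff ~ {D_fit:.2e} m^2/s")
```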
Sediment Scaling for Mud Mountain Fish Barrier Structure
2017-06-28
By Jeremy A. Sharp, Gary L. Brown, and Gary L. Bell. PURPOSE: This Coastal and Hydraulics Laboratory technical note describes the process of... Reference: 2nd Int. Conf. on the Application of Physical Modeling to Port and Coastal Protection – Coastlab '08, International Association for Hydro... Questions about this technical note can be addressed to Mr. Sharp at 601-634-4212 or Jeremy.A.Sharp@usace.army.mil.
Numerical modeling of seismic anomalies at impact craters on a laboratory scale
NASA Astrophysics Data System (ADS)
Wuennemann, K.; Grosse, C. U.; Hiermaier, S.; Gueldemeister, N.; Moser, D.; Durr, N.
2011-12-01
Almost all terrestrial impact craters exhibit a typical geophysical signature. The usually observed circular negative gravity anomaly and reduced seismic velocities in the vicinity of crater structures are presumably related to an approximately hemispherical zone underneath craters where rocks have experienced intense brittle plastic deformation and fracturing during formation (see Fig. 1). In the framework of the "MEMIN" (multidisciplinary experimental and modeling impact crater research network) project we carried out hypervelocity cratering experiments at the Fraunhofer Institute for High-Speed Dynamics on a decimeter scale to study the spatiotemporal evolution of the damage zone using ultrasound, acoustic emission techniques, and numerical modeling of crater formation. Iron projectiles of 2.5-10 mm were shot at 2-5.5 km/s at dry and water-saturated sandstone targets. The target material was characterized before, during and after the impact with high-spatial-resolution acoustic techniques to detect the extent of the damage zone and the state of rocks therein, and to record the growth of cracks. The ultrasound measurements are applied analogously to seismic surveys at natural craters, but on a much smaller scale. We compare the measured data with dynamic models of crater formation, shock, plastic and elastic wave propagation, and tensile/shear failure of rocks in the impacted sandstone blocks. The presence of porosity and pore water significantly affects the propagation of waves. In particular the crushing of pores due to shock compression has to be taken into account. We present preliminary results showing good agreement between the experiments and the numerical models. In a next step we plan to use the numerical models to upscale the results from laboratory dimensions to the scale of natural impact craters.
A finite-element model for moving contact line problems in immiscible two-phase flow
NASA Astrophysics Data System (ADS)
Kucala, Alec
2017-11-01
Accurate modeling of moving contact line (MCL) problems is imperative in predicting capillary pressure vs. saturation curves, permeability, and preferential flow paths for a variety of applications, including geological carbon storage (GCS) and enhanced oil recovery (EOR). The macroscale movement of the contact line is dependent on the molecular interactions occurring at the three-phase interface, however most MCL problems require resolution at the meso- and macro-scale. A phenomenological model must be developed to account for the microscale interactions, as resolving both the macro- and micro-scale would render most problems computationally intractable. Here, a model for the moving contact line is presented as a weak forcing term in the Navier-Stokes equation and applied directly at the location of the three-phase interface point. The moving interface is tracked with the level set method and discretized using the conformal decomposition finite element method (CDFEM), allowing for the surface tension and the wetting model to be computed at the exact interface location. A variety of verification test cases for simple two- and three-dimensional geometries are presented to validate the current MCL model, which can exhibit grid independence when a proper scaling for the slip length is chosen. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.
NASA Astrophysics Data System (ADS)
Wietsma, T. W.; Oostrom, M.; Foster, N. S.
2003-12-01
Intermediate-scale experiments (ISEs) for flow and transport are a valuable tool for simulating subsurface features and conditions encountered in the field at government and private sites. ISEs offer the ability to study, under controlled laboratory conditions, complicated processes characteristic of mixed wastes and heterogeneous subsurface environments, in multiple dimensions and at different scales. ISEs may, therefore, result in major cost savings if employed prior to field studies. A distinct advantage of ISEs is that researchers can design physical and/or chemical heterogeneities in the porous media matrix that better approximate natural field conditions and therefore address research questions that contain the additional complexity of processes often encountered in the natural environment. A new Subsurface Flow and Transport Laboratory (SFTL) has been developed for ISE users in the Environmental Spectroscopy & Biogeochemistry Facility in the Environmental Molecular Sciences Laboratory (EMSL) at Pacific Northwest National Laboratory (PNNL). The SFTL offers a variety of columns and flow cells, a new state-of-the-art dual-energy gamma system, a fully automated saturation-pressure apparatus, and analytical equipment for sample processing. The new facility, including qualified staff, is available for scientists interested in collaboration on conducting high-quality flow and transport experiments, including contaminant remediation. Close linkages exist between the SFTL and numerical modelers to aid in experimental design and interpretation. This presentation will discuss the facility and outline the procedures required to submit a proposal to use this unique facility for research purposes. The W. R. Wiley Environmental Molecular Sciences Laboratory, a national scientific user facility, is sponsored by the U.S. Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory.
NASA Astrophysics Data System (ADS)
Miller, M. A.; Miller, N. L.; Sale, M. J.; Springer, E. P.; Wesely, M. L.; Bashford, K. E.; Conrad, M. E.; Costigan, K. R.; Kemball-Cook, S.; King, A. W.; Klazura, G. E.; Lesht, B. M.; Machavaram, M. V.; Sultan, M.; Song, J.; Washington-Allen, R.
2001-12-01
A multi-laboratory Department of Energy (DOE) team (Argonne National Laboratory, Brookhaven National Laboratory, Los Alamos National Laboratory, Lawrence Berkeley National Laboratory, Oak Ridge National Laboratory) has begun an investigation of hydrometeorological processes at the Whitewater subbasin of the Walnut River Watershed in Kansas. The Whitewater sub-basin is viewed as a DOE long-term hydrologic research watershed and resides within the well-instrumented Atmospheric Radiation Measurement/Cloud Radiation Atmosphere Testbed (ARM/CART) and the proposed Arkansas-Red River regional hydrologic testbed. The focus of this study is the development and evaluation of coupled regional to watershed scale models that simulate atmospheric, land surface, and hydrologic processes as systems with linkages and feedback mechanisms. This pilot is the precursor to the proposed DOE Water Cycle Dynamics Prediction Program. An important new element is the introduction of water isotope budget equations into mesoscale and hydrologic modeling. Two overarching hypotheses are part of this pilot study: (1) Can the predictability of the regional water balance be improved using high-resolution model simulations that are constrained and validated using new water isotope and hydrospheric water measurements? (2) Can water isotopic tracers be used to segregate different pathways through the water cycle and predict a change in regional climate patterns? Initial results of the pilot will be presented along with a description and copies of the proposed DOE Water Cycle Dynamics Prediction Program.
Size Comparison: Three Generations of Mars Rovers
2008-11-19
Full-scale models of three generations of NASA Mars rovers show the increase in size from the Sojourner rover of the Mars Pathfinder project, to the twin Mars Exploration Rovers Spirit and Opportunity, to the Mars Science Laboratory rover.
ERIC Educational Resources Information Center
Eaton, Bruce G., Ed.
1977-01-01
Presents four short articles on: a power supply for the measurement of the charge-to-mass ratio of the electron; a modified centripetal force apparatus; a black box electronic unknown for the scientific instruments laboratory; and a simple scaling model for biological systems. (MLH)
NASA Astrophysics Data System (ADS)
Du, Qiang; Li, Yanjun
2015-06-01
In this paper, a multi-scale as-cast grain size prediction model is proposed to predict the as-cast grain size of inoculated aluminum alloy melts solidified under non-isothermal conditions, i.e., in the presence of a temperature gradient. Given the melt composition, inoculation and heat extraction boundary conditions, the model is able to predict the maximum nucleation undercooling, cooling curve, primary phase solidification path and final as-cast grain size of binary alloys. The proposed model has been applied to two Al-Mg alloys, and comparisons with laboratory and industrial solidification experimental results have been carried out. The preliminary conclusion is that the proposed model is a promising microscopic model for use within a multi-scale casting simulation modelling framework.
NASA Astrophysics Data System (ADS)
Therssen, E.; Delfosse, L.
1995-08-01
The design and setting up of a pulverized solid injection system for use in laboratory burners is presented. The original dual system consists of a screw feeder coupled to an acoustic sower. This laboratory device provides good regularity and stability of the particle-gas mixture transported to the burner over a wide range of powder mass flow and carrier-gas flow rates. The thermal history of the particles has been followed by optical measurements. The quality of the particle cloud injected into the burner has been validated by the good agreement between experimental and modelled particle temperatures.
RANS Simulation (Rotating Reference Frame Model [RRF]) of Single Lab-Scaled DOE RM1 MHK Turbine
Javaherchi, Teymour; Stelzenmuller, Nick; Aliseda, Alberto; Seydel, Joseph
2014-04-15
Attached are the .cas and .dat files for the Reynolds Averaged Navier-Stokes (RANS) simulation of a single lab-scaled DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a re-designed geometry, based on the full-scale DOE RM1 design, producing the same power output as the full-scale model while operating at matched Tip Speed Ratio values at reachable laboratory Reynolds numbers (see attached paper). In this case study, taking advantage of the symmetry of the lab-scaled DOE RM1 geometry, only half of the geometry is modeled using the (single) Rotating Reference Frame [RRF] model. In this model the RANS equations, coupled with the k-ω turbulence closure model, are solved in the rotating reference frame. The actual geometry of the turbine blade is included and the turbulent boundary layer along the blade span is simulated using a wall-function approach. The rotation of the blade is modeled by applying periodic boundary conditions to the planes of symmetry. This case study simulates the performance and flow field in the near and far wake of the device at the desired operating conditions. The results of these simulations were validated against in-house experimental data. Please see the attached paper.
Morris, Chris; Pajon, Anne; Griffiths, Susanne L.; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M.; Wilter da Silva, Alan; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S.; Stuart, David I.; Henrick, Kim; Esnouf, Robert M.
2011-01-01
The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service. PMID:21460443
Persistence in soil of Miscanthus biochar in laboratory and field conditions
Budai, Alice; O’Toole, Adam; Ma, Xingzhu; Rumpel, Cornelia; Abiven, Samuel
2017-01-01
Evaluating biochars for their persistence in soil under field conditions is an important step towards their implementation for carbon sequestration. Current evaluations might be biased because the vast majority of studies are short-term laboratory incubations of biochars produced in laboratory-scale pyrolyzers. Here our objective was to investigate the stability of a biochar produced with a medium-scale pyrolyzer, first through laboratory characterization and stability tests and then through a field experiment. We also aimed at relating properties of this medium-scale biochar to those of a laboratory-made biochar with the same feedstock. Biochars were made of Miscanthus biomass for isotopic C-tracing purposes and produced at temperatures between 600 and 700°C. The aromaticity and degree of condensation of aromatic rings of the medium-scale biochar was high, as was its resistance to chemical oxidation. In a 90-day laboratory incubation, cumulative mineralization was 0.1% for the medium-scale biochar vs. 45% for the Miscanthus feedstock, pointing to the absence of a labile C pool in the biochar. These stability results were very close to those obtained for biochar produced at laboratory scale, suggesting that upscaling from laboratory to medium-scale pyrolyzers had little effect on biochar stability. In the field, the medium-scale biochar applied at up to 25 t C ha-1 decomposed at an estimated 0.8% per year. In conclusion, our biochar scored high on stability indices in the laboratory and displayed a mean residence time > 100 years in the field, which is the threshold for permanent removal in C sequestration projects. PMID:28873471
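The step from the reported field decomposition rate to the stated mean residence time follows from a simple first-order decay assumption (an assumption added here; the abstract does not specify the decay model):

```latex
% Assuming first-order decay, dC/dt = -kC, with k = 0.008 per year (0.8% loss per year):
\[
  \tau = \frac{1}{k} = \frac{1}{0.008\;\mathrm{yr}^{-1}} = 125\;\mathrm{yr} \;>\; 100\;\mathrm{yr},
\]
% i.e. the estimated mean residence time exceeds the 100-year threshold cited
% for permanent removal in carbon-sequestration projects.
```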
Syntrophic acetate oxidation in two-phase (acid-methane) anaerobic digesters.
Shimada, T; Morgenroth, E; Tandukar, M; Pavlostathis, S G; Smith, A; Raskin, L; Kilian, R E
2011-01-01
The microbial processes involved in two-phase anaerobic digestion were investigated by operating a laboratory-scale acid-phase (AP) reactor and analyzing two full-scale, two-phase anaerobic digesters operated under mesophilic (35 °C) conditions. The digesters received a blend of primary sludge and waste activated sludge (WAS). Methane levels of 20% in the laboratory-scale reactor indicated the presence of methanogenic activity in the AP. A phylogenetic analysis of an archaeal 16S rRNA gene clone library of one of the full-scale AP digesters showed that 82% and 5% of the clones were affiliated with the orders Methanobacteriales and Methanosarcinales, respectively. These results indicate that substantial levels of aceticlastic methanogens (order Methanosarcinales) were not maintained at the low solids retention times and acidic conditions (pH 5.2-5.5) of the AP, and that methanogenesis was carried out by hydrogen-utilizing methanogens of the order Methanobacteriales. Approximately 43, 31, and 9% of the archaeal clones from the methanogenic phase (MP) digester were affiliated with the orders Methanosarcinales, Methanomicrobiales, and Methanobacteriales, respectively. A phylogenetic analysis of a bacterial 16S rRNA gene clone library suggested the presence of acetate-oxidizing bacteria (close relatives of Thermacetogenium phaeum, 'Syntrophaceticus schinkii,' and Clostridium ultunense). The high abundance of hydrogen consuming methanogens and the presence of known acetate-oxidizing bacteria suggest that acetate utilization by acetate oxidizing bacteria in syntrophic interaction with hydrogen-utilizing methanogens was an important pathway in the second-stage of the two-phase digestion, which was operated at high ammonium-N concentrations (1.0 and 1.4 g/L). A modified version of the IWA Anaerobic Digestion Model No. 1 (ADM1) with extensions for syntrophic acetate oxidation and weak-acid inhibition adequately described the dynamic profiles of volatile acid production/degradation and methane generation observed in the laboratory-scale AP reactor. The model was validated with historical data from the full-scale digesters.
Using standard and institutional mentorship models to implement SLMTA in Kenya
Mwalili, Samuel; Basiye, Frank L.; Zeh, Clement; Emonyi, Wilfred I.; Langat, Raphael; Luman, Elizabeth T.; Mwangi, Jane
2014-01-01
Background Kenya is home to several high-performing internationally-accredited research laboratories, whilst most public sector laboratories have historically lacked functioning quality management systems. In 2010, Kenya enrolled an initial eight regional and four national laboratories into the Strengthening Laboratory Management Toward Accreditation (SLMTA) programme. To address the challenge of a lack of mentors for the regional laboratories, three were paired, or ‘twinned’, with nearby accredited research laboratories to provide institutional mentorship, whilst the other five received standard mentorship. Objectives This study examines results from the eight regional laboratories in the initial SLMTA group, with a focus on mentorship models. Methods Three SLMTA workshops were interspersed with three-month periods of improvement project implementation and mentorship. Progress was evaluated at baseline, mid-term, and exit using the Stepwise Laboratory Quality Improvement Process Towards Accreditation (SLIPTA) audit checklist and scores were converted into a zero- to five-star scale. Results At baseline, the mean score for the eight laboratories was 32%; all laboratories were below the one-star level. At mid-term, all laboratories had measured improvements. However, the three twinned laboratories had increased an average of 32 percentage points and reached one to three stars; whilst the five non-twinned laboratories increased an average of 10 percentage points and remained at zero stars. At exit, twinned laboratories had increased an average 12 additional percentage points (44 total), reaching two to four stars; non-twinned laboratories increased an average of 28 additional percentage points (38 total), reaching one to three stars. Conclusion The partnership used by the twinning model holds promise for future collaborations between ministries of health and state-of-the-art research laboratories in their regions for laboratory quality improvement. Where they exist, such laboratories may be valuable resources to be used judiciously so as to accelerate sustainable quality improvement initiated through SLMTA. PMID:29043191
Innovative mathematical modeling in environmental remediation.
Yeh, Gour-Tsyh; Gwo, Jin-Ping; Siegel, Malcolm D; Li, Ming-Hsu; Fang, Yilin; Zhang, Fan; Luo, Wensui; Yabusaki, Steve B
2013-05-01
There are two different ways to model reactive transport: ad hoc and innovative reaction-based approaches. The former, such as the Kd simplification of adsorption, has been widely employed by practitioners, while the latter has been mainly used in scientific communities for elucidating mechanisms of biogeochemical transport processes. It is believed that innovative mechanistic-based models could serve as protocols for environmental remediation as well. This paper reviews the development of a mechanistically coupled fluid flow, thermal transport, hydrologic transport, and reactive biogeochemical model and example applications to environmental remediation problems. Theoretical bases are sufficiently described. Four example problems previously carried out are used to demonstrate how numerical experimentation can be used to evaluate the feasibility of different remediation approaches. The first one involved the application of a 56-species uranium tailing problem to the Melton Branch Subwatershed at Oak Ridge National Laboratory (ORNL) using the parallel version of the model. Simulations were made to demonstrate the potential mobilization of uranium and other chelating agents in the proposed waste disposal site. The second problem simulated a laboratory-scale system to investigate the role of natural attenuation in potential off-site migration of uranium from uranium mill tailings after restoration. It showed the inadequacy of using a single Kd even for a homogeneous medium. The third example simulated laboratory experiments involving extremely high concentrations of uranium, technetium, aluminum, nitrate, and toxic metals (e.g., Ni, Cr, Co). The fourth example modeled microbially-mediated immobilization of uranium in an unconfined aquifer using acetate amendment in a field-scale experiment. The purposes of these modeling studies were to simulate various mechanisms of mobilization and immobilization of radioactive wastes and to illustrate how to apply reactive transport models for environmental remediation.
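As a minimal illustration of why a single Kd can be inadequate, the sketch below converts Kd values into linear-isotherm retardation factors; the bulk density, porosity, and Kd values are hypothetical and chosen only to show how strongly the retarded transport velocity depends on the assumed Kd.

```python
def retardation_factor(kd_mL_per_g, bulk_density_g_per_mL=1.6, porosity=0.35):
    """Linear-isotherm (Kd) retardation factor: R = 1 + (rho_b / theta) * Kd."""
    return 1.0 + (bulk_density_g_per_mL / porosity) * kd_mL_per_g

# Hypothetical uranium Kd values measured under different geochemical conditions
# (e.g. varying carbonate or pH); no single value represents all of them.
for kd in [0.5, 5.0, 50.0]:
    print(f"Kd = {kd:5.1f} mL/g  ->  R = {retardation_factor(kd):7.1f}")
```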
Data Services and Transnational Access for European Geosciences Multi-Scale Laboratories
NASA Astrophysics Data System (ADS)
Funiciello, Francesca; Rosenau, Matthias; Sagnotti, Leonardo; Scarlato, Piergiorgio; Tesei, Telemaco; Trippanera, Daniele; Spires, Chris; Drury, Martyn; Kan-Parker, Mirjam; Lange, Otto; Willingshofer, Ernst
2016-04-01
The EC policy for research in the new millennium supports the development of European-scale research infrastructures. In this perspective, the existing research infrastructures are going to be integrated with the objective of increasing their accessibility and enhancing the usability of their multidisciplinary data. Building up integrated Earth Sciences infrastructures in Europe is the mission of the Implementation Phase (IP) of the European Plate Observing System (EPOS) project (2015-2019). The integration of European multi-scale laboratories - analytical, experimental petrology and volcanology, magnetic and analogue laboratories - plays a key role in this context and represents a specific task of EPOS IP. In the frame of EPOS IP work package 16 (WP16), European geosciences multi-scale laboratories aim to be linked, merging local infrastructures into a coherent and collaborative network. In particular, EPOS IP WP16 task 4 "Data services" aims at standardizing data and data products, already existing and newly produced by the participating laboratories, and making them available through a new digital platform. The following data and repositories have been selected for the purpose: 1) analytical and properties data a) on volcanic ash from explosive eruptions, of interest to the aviation industry, meteorological and government institutes, b) on magmas in the context of eruption and lava flow hazard evaluation, and c) on rock systems of key importance in mineral exploration and mining operations; 2) experimental data describing: a) rock and fault properties of importance for modelling and forecasting natural and induced subsidence, seismicity and associated hazards, b) rock and fault properties relevant for modelling the containment capacity of rock systems for CO2, energy sources and wastes, c) crustal and upper mantle rheology as needed for modelling sedimentary basin formation and crustal stress distributions, d) the composition, porosity, permeability, and frackability of reservoir rocks of interest in relation to unconventional resources and geothermal energy; 3) a repository of analogue models on tectonic processes, from the plate to the reservoir scale, relevant to the understanding of Earth dynamics, geo-hazards and geo-energy; 4) paleomagnetic data, which are crucial a) for understanding the evolution of sedimentary basins and associated resources, and b) for charting geo-hazard frequency. EPOS IP WP16 task 5 aims to create mechanisms and procedures for easy trans-national access to multi-scale laboratory facilities. Moreover, the same task will coordinate all the activities in a pilot phase to test, validate and consolidate the aforementioned services and to provide a proof of concept for what will be offered beyond the completion of EPOS IP.
Scaled laboratory experiments explain the kink behaviour of the Crab Nebula jet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, C. K.; Tzeferacos, P.; Lamb, D.
X-ray images from the Chandra X-ray Observatory show that the South-East jet in the Crab nebula changes direction every few years. This remarkable phenomenon is also observed in jets associated with pulsar wind nebulae and other astrophysical objects, and therefore is a fundamental feature of astrophysical jet evolution that needs to be understood. Theoretical modeling and numerical simulations have suggested that this phenomenon may be a consequence of magnetic fields (B) and current-driven magnetohydrodynamic (MHD) instabilities taking place in the jet, but until now there has been no verification of this process in a controlled laboratory environment. Here we report the first such experiments, using scaled laboratory plasma jets generated by high-power lasers to model the Crab jet and monoenergetic-proton radiography to provide direct visualization and measurement of magnetic fields and their behavior. The toroidal magnetic field embedded in the supersonic jet triggered plasma instabilities and resulted in considerable deflections throughout the jet propagation, mimicking the kinks in the Crab jet. We also demonstrated that these kinks are stabilized by high jet velocity, consistent with the observation that instabilities alter the jet orientation but do not disrupt the overall jet structure. We successfully modeled these laboratory experiments with a validated three-dimensional (3D) numerical simulation, which in conjunction with the experiments provide compelling evidence that we have an accurate model of the most important physics of magnetic fields and MHD instabilities in the observed, kinked jet in the Crab nebula. The experiments initiate a novel approach in the laboratory for visualizing fields and instabilities associated with jets observed in various astrophysical objects, ranging from stellar to extragalactic systems. We expect that future work along this line will have important impact on the study and understanding of such fundamental astrophysical phenomena.
Laboratory and theoretical models of planetary-scale instabilities and waves
NASA Technical Reports Server (NTRS)
Hart, John E.; Toomre, Juri
1991-01-01
Meteorologists and planetary astronomers interested in large-scale planetary and solar circulations recognize the importance of rotation and stratification in determining the character of these flows. The two outstanding problems of interest are: (1) the origins and nature of chaos in baroclinically unstable flows; and (2) the physical mechanisms responsible for high speed zonal winds and banding on the giant planets. The methods used to study these problems, and the insights gained, are useful in more general atmospheric and climate dynamic settings. Because the planetary curvature or beta-effect is crucial in the large scale nonlinear dynamics, the motions of rotating convecting liquids in spherical shells were studied using electrohydrodynamic polarization forces to generate radial gravity and centrally directed buoyancy forces in the laboratory. The Geophysical Fluid Flow Cell (GFFC) experiments performed on Spacelab 3 in 1985 were analyzed. The interpretation and extension of these results have led to the construction of efficient numerical models of rotating convection with an aim to understand the possible generation of zonal banding on Jupiter and the fate of banana cells in rapidly rotating convection as the heating is made strongly supercritical. Efforts to pose baroclinic wave experiments for future space missions using a modified version of the 1985 instrument have led us to develop theoretical and numerical models of baroclinic instability. Some surprising properties of both these models were discovered.
NASA Astrophysics Data System (ADS)
Potirakis, Stelios M.; Contoyiannis, Yiannis; Kopanas, John; Kalimeris, Anastasios; Antonopoulos, George; Peratzakis, Athanasios; Eftaxias, Konstantinos; Nomicos, Constantinos
2014-05-01
Under natural conditions, it is practically impossible to install an experimental network at the geophysical scale using the same instrumentation as in laboratory experiments for understanding, through the states of stress and strain and their time variation, the laws that govern friction during the last stages of EQ generation, or to monitor (much less to control) the principal characteristics of a fracture process. Fracture-induced electromagnetic emissions (EME) in a wide range of frequency bands are sensitive to micro-structural changes. Thus, their study constitutes a nondestructive method for monitoring the evolution of the damage process at the laboratory scale. It has been suggested that fracture-induced MHz-kHz electromagnetic (EM) emissions, which emerge from a few days up to a few hours before the main seismic shock occurrence, permit real-time monitoring of the damage process during the last stages of earthquake preparation, as happens at the laboratory scale. Since the EME are produced both in the case of laboratory-scale fracture and in the EQ preparation process (geophysical-scale fracture), they should present similar characteristics at these two scales. Therefore, both laboratory experimentalists and scientists studying pre-earthquake EME could benefit from each other's results. Importantly, it is noted that when studying the fracture process by means of laboratory experiments, the fault growth process normally occurs violently in a fraction of a second. However, a major difference between the laboratory and natural processes is the order-of-magnitude difference in scale (in space and time), allowing the possibility of experimental observation at the geophysical scale for a range of physical processes which are not observable at the laboratory scale. Therefore, the study of fracture-induced EME is expected to reveal more information, especially for the last stages of the fracture process, when it is conducted at the geophysical scale. As a characteristic example, we discuss the case of electromagnetic silence before the global rupture, which was first observed in preseismic EME and has recently also been observed in the EME measured during laboratory fracture experiments, completely revising the earlier views about fracture-induced electromagnetic emissions.
Updated RICE Bounds on Ultrahigh Energy Neutrino fluxes and interactions
NASA Astrophysics Data System (ADS)
Hussain, Shahid; McKay, Douglas
2006-04-01
We explore limits on low-scale gravity models set by results from the Radio Ice Cherenkov Experiment's (RICE) ongoing search for cosmic ray neutrinos in the cosmogenic, or GZK, energy range. The bound on MD, the fundamental scale of gravity, depends upon the cosmogenic flux model, the black hole formation and decay treatments, the inclusion of graviton-mediated elastic neutrino processes, and the number of large extra dimensions, d. We find bounds in the interval 0.9 TeV < MD < 10 TeV. Values d = 5, 6 and 7, for which laboratory and astrophysical bounds on LSG models are less restrictive, lead to essentially the same limits on MD.
NASA Astrophysics Data System (ADS)
Mistrík, Pavel; Ashmore, Jonathan
2009-02-01
We describe a large-scale computational model of electrical current flow in the cochlea which is constructed by a flexible Modified Nodal Analysis algorithm to incorporate electrical components representing hair cells and the intercellular radial and longitudinal current flow. The model is used as a laboratory to study the effects of changing longitudinal gap-junctional coupling, and shows the way in which cochlear microphonic spread and tuning are affected. The process for incorporating mechanical longitudinal coupling and feedback is described. We find a difference in tuning and attenuation depending on whether longitudinal or radial couplings are altered.
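Modified Nodal Analysis assembles node-voltage equations plus extra unknowns for voltage-source currents into one linear system. The sketch below is a generic resistive MNA solver applied to a toy three-node ladder, not the cochlear model itself; in a cochlear network, hair-cell elements and the radial/longitudinal gap-junction couplings would enter as additional stamps of the same kind.

```python
import numpy as np

def mna_solve(num_nodes, resistors, vsources):
    """Assemble and solve a resistive Modified Nodal Analysis system.

    resistors: list of (node_a, node_b, R) with node 0 = ground
    vsources : list of (node_plus, node_minus, V)
    Unknowns: node voltages 1..num_nodes, then one current per voltage source.
    """
    n, m = num_nodes, len(vsources)
    A = np.zeros((n + m, n + m))
    z = np.zeros(n + m)

    def stamp_conductance(a, b, g):
        # Standard MNA conductance stamp; ground (node 0) rows/columns are dropped.
        for i, j, s in [(a, a, +g), (b, b, +g), (a, b, -g), (b, a, -g)]:
            if i > 0 and j > 0:
                A[i - 1, j - 1] += s

    for a, b, R in resistors:
        stamp_conductance(a, b, 1.0 / R)

    for k, (p, q, V) in enumerate(vsources):
        if p > 0:
            A[p - 1, n + k] += 1.0
            A[n + k, p - 1] += 1.0
        if q > 0:
            A[q - 1, n + k] -= 1.0
            A[n + k, q - 1] -= 1.0
        z[n + k] = V

    x = np.linalg.solve(A, z)
    return x[:n], x[n:]          # node voltages, voltage-source currents

# Toy radial/longitudinal ladder: 3 nodes, a 1 V source driving four resistors.
voltages, currents = mna_solve(
    num_nodes=3,
    resistors=[(1, 2, 100.0), (2, 0, 200.0), (2, 3, 100.0), (3, 0, 200.0)],
    vsources=[(1, 0, 1.0)],
)
print("node voltages:", voltages, "source branch current:", currents)
```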
Newest is Biggest: Three Generations of NASA Mars Rovers
2008-11-19
Full-scale models of three generations of NASA Mars rovers show the increase in size from the Sojourner rover of the Mars Pathfinder project, to the twin Mars Exploration Rovers Spirit and Opportunity, to the Mars Science Laboratory rover.
Developing integral projection models for aquatic ecotoxicology
Extrapolating laboratory measured effects of chemicals to ecologically relevant scales is a fundamental challenge in ecotoxicology. In addition to influencing survival in the wild (e.g., over-winter survival) size has been shown to control onset of reproduction for the toxicologi...
NASA Technical Reports Server (NTRS)
1972-01-01
Details are provided for scheduling, cost estimates, and support research and technology requirements for a space shuttle supported manned research laboratory to conduct selected communication and navigation experiments. A summary of the candidate program and its time phasing is included, as well as photographs of the 1/20 scale model of the shuttle supported Early Comm/Nav Research Lab showing the baseline, in-bay arrangement and the out-of-bay configuration.
Bertucco, Alberto; Beraldi, Mariaelena; Sforza, Eleonora
2014-08-01
In this work, the production of Scenedesmus obliquus in a continuous flat-plate laboratory-scale photobioreactor (PBR) under alternated day-night cycles was tested both experimentally and theoretically. Variations of light intensity according to the four seasons of the year were simulated experimentally with a tunable LED lamp, and the effects on microalgal growth and productivity were measured to evaluate the conversion efficiency of light energy into biomass during the different seasons. These results were used to validate a mathematical model for algae growth that can be applied to simulate a large-scale production unit, carried out in a flat-plate PBR of similar geometry. The cellular concentration in the PBR was calculated in both steady-state and transient conditions, and the value of the maintenance kinetic term was correlated to experimental profiles. The relevance of this parameter was finally outlined.
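A minimal sketch of this kind of growth model is shown below, assuming a Monod-type light response on the depth-averaged (Beer-Lambert) irradiance, a maintenance loss term, and continuous dilution under an idealised day-night cycle; all parameter values are invented placeholders, not the calibrated values of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters for a continuous flat-plate PBR sketch
mu_max, K_I, m = 1.5, 100.0, 0.1      # 1/d, umol m^-2 s^-1, maintenance 1/d
D, k_abs, depth = 0.35, 0.2, 0.02     # dilution 1/d, absorption m^2/g, panel depth m

def incident_light(t_days, I_peak=800.0, day_fraction=0.5):
    """Idealised day-night cycle: half-sinusoid during daylight, dark otherwise."""
    frac = t_days % 1.0
    return I_peak * np.sin(np.pi * frac / day_fraction) if frac < day_fraction else 0.0

def depth_averaged_light(I0, X_gL):
    """Beer-Lambert average over the panel depth; X in g/L converted to g/m^3."""
    tau = k_abs * X_gL * 1000.0 * depth
    return I0 if tau < 1e-9 else I0 * (1.0 - np.exp(-tau)) / tau

def dXdt(t, X):
    I_avg = depth_averaged_light(incident_light(t), X[0])
    mu = mu_max * I_avg / (K_I + I_avg)          # Monod-type light response
    return [(mu - m - D) * X[0]]                 # growth minus maintenance and dilution

sol = solve_ivp(dXdt, (0.0, 40.0), [0.05], max_step=0.01)
print("biomass after 40 d of simulated day-night operation: %.2f g/L" % sol.y[0, -1])
```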
Modeling Trends in Air Pollutant Concentrations over the ...
Regional model calculations over annual cycles have pointed to the need for accurately representing the impacts of long-range transport. Linking regional and global scale models has met with mixed success, as biases in the global model can propagate and influence regional calculations and often confound interpretation of model results. Since transport is efficient in the free troposphere, and since simulations over continental scales and annual cycles provide sufficient opportunity for "atmospheric turn-over", i.e., exchange between the free troposphere and the boundary layer, a conceptual framework is needed wherein interactions between processes occurring at various spatial and temporal scales can be consistently examined. The coupled WRF-CMAQ model is expanded to hemispheric scales and model simulations over the period spanning 1990 to the present are analyzed to examine changes in hemispheric air pollution resulting from changes in emissions over this period. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for pr
SCALE TSUNAMI Analysis of Critical Experiments for Validation of 233U Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Don; Rearden, Bradley T
2009-01-01
Oak Ridge National Laboratory (ORNL) staff used the SCALE TSUNAMI tools to provide a demonstration evaluation of critical experiments considered for use in validation of current and anticipated operations involving 233U at the Radiochemical Development Facility (RDF). This work was reported in ORNL/TM-2008/196, issued in January 2009. This paper presents the analysis of two representative safety analysis models provided by RDF staff.
Upscaled soil-water retention using van Genuchten's function
Green, T.R.; Constantz, J.E.; Freyberg, D.L.
1996-01-01
Soils are often layered at scales smaller than the block size used in numerical and conceptual models of variably saturated flow. Consequently, the small-scale variability in water content within each block must be homogenized (upscaled). Laboratory results have shown that a linear volume average (LVA) of water content at a uniform suction is a good approximation to measured water contents in heterogeneous cores. Here, we upscale water contents using van Genuchten's function for both the local and upscaled soil-water-retention characteristics. The van Genuchten (vG) function compares favorably with LVA results, laboratory experiments under hydrostatic conditions in 3-cm cores, and numerical simulations of large-scale gravity drainage. Our method yields upscaled vG parameter values by fitting the vG curve to the LVA of water contents at various suction values. In practice, it is more efficient to compute direct averages of the local vG parameter values. Nonlinear power averages quantify a feasible range of values for each upscaled vG shape parameter; upscaled values of N are consistently less than the harmonic means, reflecting broad pore-size distributions of the upscaled soils. The vG function is useful for modeling soil-water retention at large scales, and these results provide guidance for its application.
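A hedged sketch of the workflow described (linear volume averaging of local van Genuchten retention curves followed by a fit of upscaled van Genuchten parameters) is given below; the two-layer parameter set and suction range are hypothetical and only illustrate the procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def theta_vg(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention; h is suction (positive, units of 1/alpha)."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * h) ** n) ** (-m)
    return theta_r + (theta_s - theta_r) * Se

# Hypothetical two-layer block: local vG parameters and layer volume fractions
layers = [
    dict(theta_r=0.05, theta_s=0.40, alpha=0.5,  n=3.0, frac=0.6),   # sand-like
    dict(theta_r=0.10, theta_s=0.45, alpha=0.05, n=1.6, frac=0.4),   # loam-like
]

h = np.logspace(-2, 3, 60)   # suctions spanning wet to dry conditions

# Linear volume average (LVA) of water content at each suction
theta_lva = sum(lay["frac"] * theta_vg(h, lay["theta_r"], lay["theta_s"],
                                       lay["alpha"], lay["n"]) for lay in layers)

# Fit a single upscaled van Genuchten curve to the LVA retention data
p0 = [0.07, 0.42, 0.2, 2.0]
(theta_r_up, theta_s_up, alpha_up, n_up), _ = curve_fit(
    theta_vg, h, theta_lva, p0=p0,
    bounds=([0.0, 0.0, 1e-4, 1.01], [0.3, 0.6, 10.0, 10.0]))
print(f"upscaled vG: theta_r={theta_r_up:.3f}, theta_s={theta_s_up:.3f}, "
      f"alpha={alpha_up:.3f}, n={n_up:.2f}")
```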
Assessing sorbent injection mercury control effectiveness in flue gas streams
Carey, T.R.; Richardson, C.F.; Chang, R.; Meserole, F.B.; Rostam-Abadi, M.; Chen, S.
2000-01-01
One promising approach for removing mercury from coal-fired, utility flue gas involves the direct injection of mercury sorbents. Although this method has been effective at removing mercury in municipal waste incinerators, tests conducted to date on utility coal-fired boilers show that mercury removal is much more difficult in utility flue gas. EPRI is conducting research to investigate mercury removal using sorbents in this application. Bench-scale, pilot-scale, and field tests have been conducted to determine the ability of different sorbents to remove mercury in simulated and actual flue gas streams. This paper focuses on recent bench-scale and field test results evaluating the adsorption characteristics of activated carbon and fly ash and the use of these results to develop a predictive mercury removal model. Field tests with activated carbon show that adsorption characteristics measured in the lab agree reasonably well with characteristics measured in the field. However, more laboratory and field data will be needed to identify other gas phase components which may impact performance. This will allow laboratory tests to better simulate field conditions and provide improved estimates of sorbent performance for specific sites. In addition to activated carbon results, bench-scale and modeling results using fly ash are presented which suggest that certain fly ashes are capable of adsorbing mercury.
Scale-up synthesis of zinc borate from the reaction of zinc oxide and boric acid in aqueous medium
NASA Astrophysics Data System (ADS)
Kılınç, Mert; Çakal, Gaye Ö.; Yeşil, Sertan; Bayram, Göknur; Eroğlu, İnci; Özkar, Saim
2010-11-01
Synthesis of zinc borate was conducted in a laboratory and a pilot scale batch reactor to see the influence of process variables on the reaction parameters and the final product, 2ZnO·3B 2O 3·3.5H 2O. Effects of stirring speed, presence of baffles, amount of seed, particle size and purity of zinc oxide, and mole ratio of H 3BO 3:ZnO on the zinc borate formation reaction were examined at a constant temperature of 85 °C in a laboratory (4 L) and a pilot scale (85 L) reactor. Products obtained from the reaction in both reactors were characterized by chemical analysis, X-ray diffraction, particle size distribution analysis, thermal gravimetric analysis and scanning electron microscopy. The kinetic data for the zinc borate production reaction was fit by using the logistic model. The results revealed that the specific reaction rate, a model parameter, decreases with increase in particle size of zinc oxide and the presence of baffles, but increases with increase in stirring speed and purity of zinc oxide; however, it is unaffected with the changes in the amount of seed and reactants ratio. The reaction completion time is unaffected by scaling-up.
Earthquake source properties from instrumented laboratory stick-slip
Kilgore, Brian D.; McGarr, Arthur F.; Beeler, Nicholas M.; Lockner, David A.; Thomas, Marion Y.; Mitchell, Thomas M.; Bhat, Harsha S.
2017-01-01
Stick-slip experiments were performed to determine the influence of the testing apparatus on source properties, develop methods to relate stick-slip to natural earthquakes and examine the hypothesis of McGarr [2012] that the product of stiffness, k, and slip duration, Δt, is scale-independent and the same order as for earthquakes. The experiments use the double-direct shear geometry, Sierra White granite at 2 MPa normal stress and a remote slip rate of 0.2 µm/sec. To determine apparatus effects, disc springs were added to the loading column to vary k. Duration, slip, slip rate, and stress drop decrease with increasing k, consistent with a spring-block slider model. However, neither for the data nor model is kΔt constant; this results from varying stiffness at fixed scale. In contrast, additional analysis of laboratory stick-slip studies from a range of standard testing apparatuses is consistent with McGarr's hypothesis. kΔt is scale-independent, similar to that of earthquakes, equivalent to the ratio of static stress drop to average slip velocity, and similar to the ratio of shear modulus to wavespeed of rock. These properties result from conducting experiments over a range of sample sizes, using rock samples with the same elastic properties as the Earth, and scale-independent design practices.
RANS Simulation (Virtual Blade Model [VBM]) of Single Lab Scaled DOE RM1 MHK Turbine
Javaherchi, Teymour; Stelzenmuller, Nick; Aliseda, Alberto; Seydel, Joseph
2014-04-15
Attached are the .cas and .dat files for the Reynolds-Averaged Navier-Stokes (RANS) simulation of a single lab-scaled DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a redesigned geometry, based on the full-scale DOE RM1 design, that produces the same power output as the full-scale model while operating at matched Tip Speed Ratio values at a laboratory-achievable Reynolds number (see attached paper). In this case study the flow field around and in the wake of the lab-scaled DOE RM1 turbine is simulated using the Blade Element Model (a.k.a. Virtual Blade Model) by solving the RANS equations coupled with the k-ω turbulence closure model. It should be highlighted that the actual geometry of the rotor blade is not modeled in this simulation; the effect of the rotating turbine blades is represented using Blade Element Theory. The simulation provides an accurate estimate of the performance of the device and of the structure of its turbulent far wake. Because of the simplifications used to model the rotating blades, the VBM cannot capture details of the flow field in the near-wake region of the device. The required User Defined Functions (UDFs) and the look-up table of lift and drag coefficients are included along with the .cas and .dat files.
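The Virtual Blade Model represents the rotor through blade-element source terms rather than resolved blade geometry: tabulated lift and drag coefficients are interpolated at the local angle of attack and converted to sectional thrust and torque. The fragment below sketches only that bookkeeping, without induction corrections; the chord, twist, polar table and operating point are placeholders, not the DOE RM1 inputs or the FLUENT UDF implementation.

```python
# Minimal blade-element sketch (no induction correction): sectional thrust and
# torque from a lift/drag look-up table. All geometry and polar values are
# placeholders, not the lab-scaled DOE RM1 inputs.
import numpy as np

U, omega, B, rho = 1.0, 10.0, 2, 1000.0        # inflow (m/s), rotor speed (rad/s), blades, water density
r     = np.linspace(0.1, 0.5, 20)              # radial stations, m (assumed)
chord = np.full_like(r, 0.05)                  # chord, m (assumed)
twist = np.deg2rad(np.linspace(20, 5, r.size)) # twist, rad (assumed)

alpha_tab = np.deg2rad(np.arange(-5, 21))      # hypothetical polar table
cl_tab    = 0.1 * np.rad2deg(alpha_tab)        # crude linear lift curve
cd_tab    = 0.01 + 0.02 * np.rad2deg(alpha_tab)**2 / 400.0

phi   = np.arctan2(U, omega * r)               # local inflow angle
alpha = phi - twist                            # local angle of attack
W2    = U**2 + (omega * r)**2                  # relative speed squared
cl    = np.interp(alpha, alpha_tab, cl_tab)
cd    = np.interp(alpha, alpha_tab, cd_tab)

dT = 0.5 * rho * W2 * chord * (cl * np.cos(phi) + cd * np.sin(phi))      # N/m per blade
dQ = 0.5 * rho * W2 * chord * (cl * np.sin(phi) - cd * np.cos(phi)) * r  # N*m/m per blade
T, Q = B * np.trapz(dT, r), B * np.trapz(dQ, r)
print(f"thrust ~ {T:.1f} N, torque ~ {Q:.2f} N*m, power ~ {Q*omega:.1f} W")
```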
Virus elimination in activated sludge systems: from batch tests to mathematical modeling.
Haun, Emma; Ulbricht, Katharina; Nogueira, Regina; Rosenwinkel, Karl-Heinz
2014-01-01
A virus tool based on Activated Sludge Model No. 3 for modeling virus elimination in activated sludge systems was developed and calibrated with the results from laboratory-scale batch tests and from measurements in a municipal wastewater treatment plant (WWTP). The somatic coliphages were used as an indicator for human pathogenic enteric viruses. The extended model was used to simulate the virus concentration in batch tests and in a municipal full-scale WWTP under steady-state and dynamic conditions. The experimental and modeling results suggest that both adsorption and inactivation processes, modeled as reversible first-order reactions, contribute to virus elimination in activated sludge systems. The model should be a useful tool to estimate the number of viruses entering water bodies from the discharge of treated effluents.
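The two removal pathways identified above, reversible adsorption to activated sludge flocs and inactivation, both modeled as first-order reactions, can be written as a small ODE system. The sketch below is generic; the rate constants and initial phage concentration are placeholders, not the calibrated values of the ASM3 virus tool.

```python
# Minimal sketch of virus elimination as reversible first-order adsorption to
# sludge plus first-order inactivation. Rate constants are placeholders, not
# the calibrated values of the ASM3 extension.
import numpy as np
from scipy.integrate import solve_ivp

k_ads, k_des, k_inact = 2.0, 0.2, 0.1   # 1/h (assumed)

def rhs(t, y):
    c_free, c_sorbed = y                 # phage in bulk liquid and on flocs
    exchange = k_ads * c_free - k_des * c_sorbed
    return [-exchange - k_inact * c_free,
            +exchange - k_inact * c_sorbed]

sol = solve_ivp(rhs, (0.0, 24.0), [1e6, 0.0], t_eval=np.linspace(0, 24, 7))
for t, c in zip(sol.t, sol.y[0]):
    print(f"t = {t:4.1f} h   free phage ~ {c:10.1f} PFU/mL")
```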
On the use of a laser ablation as a laboratory seismic source
NASA Astrophysics Data System (ADS)
Shen, Chengyi; Brito, Daniel; Diaz, Julien; Zhang, Deyuan; Poydenot, Valier; Bordes, Clarisse; Garambois, Stéphane
2017-04-01
Near-surface seismic imaging mimicked under well-controlled laboratory conditions is potentially a powerful tool for studying large-scale wave propagation in geological media by means of upscaling. Laboratory measurements are indeed particularly suited for testing theoretical modelling and for comparison with numerical approaches. We have developed an automated Laser Doppler Vibrometer (LDV) platform, which is able to detect and register broadband nano-scale displacements on the surface of various materials. This laboratory equipment has already been validated in experiments where piezoelectric transducers were used as seismic sources. We are currently exploring a new seismic source in our experiments, laser ablation, in order to compensate for some drawbacks encountered with piezoelectric sources. The laser ablation source has been considered an interesting ultrasound generator since the 1960s, with numerous potential applications such as Non-Destructive Testing (NDT) and the measurement of velocities and attenuations in solid samples. We aim to adapt and develop this technique for geophysical experimental investigations in order to produce and explore complete micro-seismic data sets in the laboratory. We will first present the laser characteristics, including its mechanism, stability and reproducibility, and will in particular evaluate the directivity patterns of such a seismic source. We have started by applying the laser ablation source to the surfaces of multi-scale homogeneous aluminum samples and are now testing it on heterogeneous and fractured limestone cores. Some results of data processing will also be shown, especially the 2D-slice VP and VS tomographic images obtained in limestone samples. Apart from the experimental records, numerical simulations will be carried out for both the laser source modelling and the wave propagation in different media. First attempts will be made to compare the experimental data quantitatively with simulations. Meanwhile, CT-scan X-ray images of these limestone cores will be used to check the relative pertinence of the velocity tomography images produced by this newly developed laser ablation seismic source.
NASA Astrophysics Data System (ADS)
Saari, Markus; Rossi, Pekka; Blomberg von der Geest, Kalle; Mäkinen, Ari; Postila, Heini; Marttila, Hannu
2017-04-01
High metal concentrations in natural waters are among the key environmental and health problems globally. Continuous in-situ analysis of metals in runoff water is technically challenging but essential for a better understanding of the processes that lead to pollutant transport. Currently, typical analytical methods for monitoring elements in liquids are off-line laboratory methods such as ICP-OES (Inductively Coupled Plasma Optical Emission Spectroscopy) and ICP-MS (ICP combined with a mass spectrometer). A disadvantage of both techniques is the time-consuming sample collection, preparation, and off-line analysis under laboratory conditions; these techniques therefore do not allow real-time monitoring of element transport. We combined novel high-resolution on-line metal concentration monitoring with catchment-scale physical hydrological modelling of the Mustijoki river in Southern Finland in order to study process dynamics and to develop a predictive warning system for the leaching of metals. A novel on-line measurement technique based on micro plasma emission spectroscopy (MPES) is tested for on-line detection of selected elements (e.g. Na, Mg, Al, K, Ca, Fe, Ni, Cu, Cd and Pb) in runoff waters. The preliminary results indicate that MPES can adequately detect and monitor metal concentrations in river water. The Soil and Water Assessment Tool (SWAT) catchment-scale model was further calibrated with the high-resolution metal concentration data. We show that by combining high-resolution monitoring and catchment-scale physically based modelling, further process studies and the creation of early warning systems, for example for optimizing drinking water intake from rivers, can be achieved.
Analytical Solution for Reactive Solute Transport Considering Incomplete Mixing
NASA Astrophysics Data System (ADS)
Bellin, A.; Chiogna, G.
2013-12-01
The laboratory experiments of Gramling et al. (2002) showed that incomplete mixing at the pore scale exerts a significant impact on the transport of reactive solutes and that assuming complete mixing leads to overestimation of product concentration in bimolecular reactions. We consider here the family of equilibrium reactions for which the concentrations of the reactants and the product can be expressed as a function of the mixing ratio, the concentration of a fictitious non-reactive solute. For this type of reaction we propose, in agreement with previous studies, to model the effect of incomplete mixing at scales smaller than the Darcy scale by assuming that the mixing ratio is distributed within an REV according to a Beta distribution. We compute the parameters of the Beta model by imposing that the mean concentration is equal to the value that the concentration assumes at the continuum Darcy scale, while the variance decays with time as a power law. We show that our model reproduces the concentration profiles of the reaction product measured in the Gramling et al. (2002) experiments using the transport parameters obtained from conservative experiments and instantaneous reaction kinetics. The results are obtained by applying analytical solutions both for conservative and for reactive solute transport, thereby providing a method to handle the effect of incomplete mixing on multispecies reactive solute transport that is simpler than other previously developed methods. Gramling, C. M., C. F. Harvey, and L. C. Meigs (2002), Reactive transport in porous media: A comparison of model prediction with laboratory visualization, Environ. Sci. Technol., 36(11), 2508-2514.
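For an instantaneous bimolecular reaction, the product concentration is a concave function of the mixing ratio, so averaging over a Beta-distributed mixing ratio always yields less product than evaluating at the mean (the complete-mixing assumption). The sketch below illustrates that inequality with arbitrary Beta shape parameters; it is not the calibrated variance model of the paper.

```python
# Minimal sketch: effect of incomplete mixing (Beta-distributed mixing ratio)
# on the product of an instantaneous bimolecular reaction A + B -> C.
# Beta parameters and C0 are arbitrary, not the paper's calibrated values.
import numpy as np
from scipy.stats import beta
from scipy.integrate import quad

C0 = 1.0                       # initial reactant concentration (normalized)
a_par, b_par = 2.0, 2.0        # Beta shape parameters (assumed); mean mixing ratio = 0.5

def product(x):
    """Product concentration for an instantaneous reaction at mixing ratio x."""
    return C0 * np.minimum(x, 1.0 - x)

mean_x = a_par / (a_par + b_par)
c_complete = product(mean_x)                                              # complete-mixing estimate
c_incomplete, _ = quad(lambda x: product(x) * beta.pdf(x, a_par, b_par), 0.0, 1.0)

print(f"complete mixing:   C_product = {c_complete:.3f}")
print(f"incomplete mixing: C_product = {c_incomplete:.3f}  (always <= complete mixing)")
```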
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Lee, Seungsoo; An, Hyunuk; Kawaike, Kenji; Nakagawa, Hajime
2016-11-01
An urban flood is an integrated phenomenon that is affected by various uncertainty sources such as input forcing, model parameters, complex geometry, and exchanges of flow among different domains in surfaces and subsurfaces. Despite considerable advances in urban flood modeling techniques, limited knowledge is currently available with regard to the impact of dynamic interaction among different flow domains on urban floods. In this paper, an ensemble method for urban flood modeling is presented to consider the parameter uncertainty of interaction models among a manhole, a sewer pipe, and surface flow. Laboratory-scale experiments on urban flood and inundation are performed under various flow conditions to investigate the parameter uncertainty of interaction models. The results show that ensemble simulation using interaction models based on weir and orifice formulas reproduces experimental data with high accuracy and detects the identifiability of model parameters. Among interaction-related parameters, the parameters of the sewer-manhole interaction show lower uncertainty than those of the sewer-surface interaction. Experimental data obtained under unsteady-state conditions are more informative than those obtained under steady-state conditions to assess the parameter uncertainty of interaction models. Although the optimal parameters vary according to the flow conditions, the difference is marginal. Simulation results also confirm the capability of the interaction models and the potential of the ensemble-based approaches to facilitate urban flood simulation.
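The interaction models referred to above exchange flow between the surface, the manhole and the sewer through standard weir and orifice relations; the uncertain quantities are essentially the discharge coefficients. The sketch below shows the usual functional forms with placeholder coefficients and geometry, not the calibrated values of the ensemble study.

```python
# Minimal sketch of manhole/sewer-surface exchange using standard weir and
# orifice formulas. Discharge coefficients and geometry are placeholders,
# not the calibrated values of the study.
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def weir_discharge(h, crest_length, c_w=0.6):
    """Free weir flow over a manhole rim; h = water depth above the crest (m)."""
    h = max(h, 0.0)
    return c_w * crest_length * np.sqrt(2.0 * G) * h**1.5          # m^3/s

def orifice_discharge(head_diff, area, c_d=0.6):
    """Submerged orifice exchange; sign follows the head difference (m)."""
    return np.sign(head_diff) * c_d * area * np.sqrt(2.0 * G * abs(head_diff))

print(f"weir:    {weir_discharge(0.05, crest_length=1.9):.4f} m^3/s")
print(f"orifice: {orifice_discharge(0.30, area=0.28):.4f} m^3/s")
```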
Black carbon absorption at the global scale is affected by particle-scale diversity in composition.
Fierce, Laura; Bond, Tami C; Bauer, Susanne E; Mena, Francisco; Riemer, Nicole
2016-09-01
Atmospheric black carbon (BC) exerts a strong, but uncertain, warming effect on the climate. BC that is coated with non-absorbing material absorbs more strongly than the same amount of BC in an uncoated particle, but the magnitude of this absorption enhancement (Eabs) is not well constrained. Modelling studies and laboratory measurements have found stronger absorption enhancement than has been observed in the atmosphere. Here, using a particle-resolved aerosol model to simulate diverse BC populations, we show that absorption is overestimated by as much as a factor of two if diversity is neglected and population-averaged composition is assumed across all BC-containing particles. If, instead, composition diversity is resolved, we find Eabs=1-1.5 at low relative humidity, consistent with ambient observations. This study offers not only an explanation for the discrepancy between modelled and observed absorption enhancement, but also demonstrates how particle-scale simulations can be used to develop relationships for global-scale models.
Large-scale functional models of visual cortex for remote sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E
Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to massively large opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.
Black Carbon Absorption at the Global Scale Is Affected by Particle-Scale Diversity in Composition
NASA Technical Reports Server (NTRS)
Fierce, Laura; Bond, Tami C.; Bauer, Susanne E.; Mena, Francisco; Riemer, Nicole
2016-01-01
Atmospheric black carbon (BC) exerts a strong, but uncertain, warming effect on the climate. BC that is coated with non-absorbing material absorbs more strongly than the same amount of BC in an uncoated particle, but the magnitude of this absorption enhancement (Eabs) is not well constrained. Modelling studies and laboratory measurements have found stronger absorption enhancement than has been observed in the atmosphere. Here, using a particle-resolved aerosol model to simulate diverse BC populations, we show that absorption is overestimated by as much as a factor of two if diversity is neglected and population-averaged composition is assumed across all BC-containing particles. If, instead, composition diversity is resolved, we find Eabs = 1-1.5 at low relative humidity, consistent with ambient observations. This study offers not only an explanation for the discrepancy between modelled and observed absorption enhancement, but also demonstrates how particle-scale simulations can be used to develop relationships for global-scale models.
Particle-In-Cell Modeling For MJ Dense Plasma Focus with Varied Anode Shape
NASA Astrophysics Data System (ADS)
Link, A.; Halvorson, C.; Schmidt, A.; Hagen, E. C.; Rose, D.; Welch, D.
2014-10-01
Megajoule-scale dense plasma focus (DPF) Z-pinches with a deuterium gas fill are compact devices capable of producing 10^12 neutrons per shot, but past predictive models of large-scale DPFs have not included kinetic effects such as ion beam formation or anomalous resistivity. We report on progress in developing a predictive DPF model by extending our 2D axisymmetric collisional kinetic particle-in-cell (PIC) simulations to the 1 MJ, 2 MA Gemini DPF using the PIC code LSP. These new simulations incorporate electrodes and an external pulsed-power driver circuit, and model the plasma from insulator lift-off through the pinch phase. The simulations were performed using a new hybrid fluid-to-kinetic model, transitioning from a fluid description to a fully kinetic PIC description during the run-in phase. Simulations are advanced through the final pinch phase using an adaptive variable time step to capture the fs and sub-mm scales of the kinetic instabilities involved in ion beam formation and neutron production. Results will be presented on the predicted effects of different anode configurations. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory (LLNL) under Contract DE-AC52-07NA27344 and supported by the Laboratory Directed Research and Development Program (11-ERD-063) and the Computing Grand Challenge program at LLNL. This work was also supported by the Office of Defense Nuclear Nonproliferation Research and Development within the U.S. Department of Energy's National Nuclear Security Administration.
30 CFR 14.21 - Laboratory-scale flame test apparatus.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Laboratory-scale flame test apparatus. 14.21 Section 14.21 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING... Technical Requirements § 14.21 Laboratory-scale flame test apparatus. The principal parts of the apparatus...
30 CFR 14.21 - Laboratory-scale flame test apparatus.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Laboratory-scale flame test apparatus. 14.21 Section 14.21 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING... Technical Requirements § 14.21 Laboratory-scale flame test apparatus. The principal parts of the apparatus...
Quality Assessment of Physical and Organoleptic Instant Corn Rice on Scale-Up Process
NASA Astrophysics Data System (ADS)
Kumalasari, R.; Ekafitri, R.; Indrianti, N.
2017-12-01
Development of an instant corn rice product has been successfully conducted at laboratory scale. Corn is high in carbohydrate but low in fiber. The addition of fiber to instant corn rice is intended to improve the functional properties of the product and to replace fiber lost during processing. Scaling up the instant corn rice process is required to increase production capacity; scale-up is the process of obtaining identical output at a larger, predetermined production scale. This study aimed to assess the changes and differences in the quality of instant corn rice during scale-up. Scale-up of instant corn rice was carried out at production capacities of 3 kg, 4 kg and 5 kg. The results showed that the scaled-up instant corn rice had a rehydration ratio of 514%-570%, an absorption rate of 414%-470%, a swelling rate of 119%-134%, a bulk density of 0.3661-0.4745 g/ml, and a porosity of 30-37%. The physical quality of instant corn rice on scale-up was comparable to that at laboratory scale for swelling rate, rehydration ratio and absorption rate, but not for bulk density and porosity. Organoleptic qualities were stable at the increased scale compared with the laboratory scale. Bulk density was higher, and porosity lower, than at laboratory scale.
Study and Development of an Air Conditioning System Operating on a Magnetic Heat Pump Cycle
NASA Technical Reports Server (NTRS)
Wang, Pao-Lien
1991-01-01
This report describes the design of a laboratory scale demonstration prototype of an air conditioning system operating on a magnetic heat pump cycle. Design parameters were selected through studies performed by a Kennedy Space Center (KSC) System Simulation Computer Model. The heat pump consists of a rotor turning through four magnetic fields that are created by permanent magnets. Gadolinium was selected as the working material for this demonstration prototype. The rotor was designed to be constructed of flat parallel disks of gadolinium with very little space in between. The rotor rotates in an aluminum housing. The laboratory scale demonstration prototype is designed to provide a theoretical Carnot Cycle efficiency of 62 percent and a Coefficient of Performance of 16.55.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osterman, Gordon; Keating, Kristina; Binley, Andrew
2016-03-18
Here, we estimate parameters from the Katz and Thompson permeability model using laboratory complex electrical conductivity (CC) and nuclear magnetic resonance (NMR) data to build permeability models parameterized with geophysical measurements. We use the Katz and Thompson model based on the characteristic hydraulic length scale, determined from mercury injection capillary pressure estimates of pore throat size, and the intrinsic formation factor, determined from multisalinity conductivity measurements, for this purpose. Two new permeability models are tested, one based on CC data and another that incorporates CC and NMR data. From measurements made on forty-five sandstone cores collected from fifteen different formations, we evaluate how well the CC relaxation time and the NMR transverse relaxation times compare to the characteristic hydraulic length scale and how well the formation factor estimated from CC parameters compares to the intrinsic formation factor. We find: (1) the NMR transverse relaxation time models the characteristic hydraulic length scale more accurately than the CC relaxation time (R² of 0.69 and 0.33 and normalized root mean square errors (NRMSE) of 0.16 and 0.21, respectively); (2) the CC-estimated formation factor is well correlated with the intrinsic formation factor (NRMSE = 0.23). We demonstrate that permeability estimates from the joint NMR-CC model (NRMSE = 0.13) compare favorably to estimates from the Katz and Thompson model (NRMSE = 0.074). Lastly, this model advances the capability of the Katz and Thompson model by employing parameters measurable in the field, giving it the potential to estimate permeability from geophysical measurements more accurately than is currently possible.
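The Katz and Thompson relation used above predicts permeability from a characteristic pore-throat length and the intrinsic formation factor, k = c·l_c²/F, with c a constant commonly quoted as about 1/226; the geophysical variants replace l_c with an NMR- or CC-derived proxy. The sketch below shows that arithmetic with placeholder inputs; the surface relaxivity tying T2 to l_c is hypothetical, not the paper's fitted regression.

```python
# Minimal sketch of a Katz-Thompson style permeability estimate,
# k = c * l_c**2 / F, with l_c taken either from MICP or from an NMR proxy.
# All numerical inputs and the T2-to-length scaling are placeholders, not the
# paper's fitted regressions.
import numpy as np

C_KT = 1.0 / 226.0          # commonly quoted Katz-Thompson constant

def katz_thompson_k(l_c, formation_factor):
    """Permeability (m^2) from characteristic length l_c (m) and formation factor F."""
    return C_KT * l_c**2 / formation_factor

# MICP-derived characteristic pore-throat size and multisalinity formation factor
k_lab = katz_thompson_k(l_c=20e-6, formation_factor=15.0)

# NMR proxy: assume l_c ~ rho2 * T2 with a hypothetical surface relaxivity rho2
rho2, T2 = 30e-6, 0.8       # m/s and s (assumed)
k_nmr = katz_thompson_k(l_c=rho2 * T2, formation_factor=15.0)

print(f"k (MICP)      ~ {k_lab:.2e} m^2 ({k_lab / 9.87e-13 * 1000:.0f} mD)")
print(f"k (NMR proxy) ~ {k_nmr:.2e} m^2")
```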
NASA Astrophysics Data System (ADS)
Rathi, Bhasker; Siade, Adam J.; Donn, Michael J.; Helm, Lauren; Morris, Ryan; Davis, James A.; Berg, Michael; Prommer, Henning
2017-12-01
Coal seam gas production involves generation and management of large amounts of co-produced water. One of the most suitable methods of management is injection into deep aquifers. Field injection trials may be used to support the predictions of anticipated hydrological and geochemical impacts of injection. The present work employs reactive transport modeling (RTM) for a comprehensive analysis of data collected from a trial where arsenic mobilization was observed. Arsenic sorption behavior was studied through laboratory experiments, accompanied by the development of a surface complexation model (SCM). A field-scale RTM that incorporated the laboratory-derived SCM was used to simulate the data collected during the field injection trial and then to predict the long-term fate of arsenic. We propose a new practical procedure which integrates laboratory and field-scale models using a Monte Carlo type uncertainty analysis and alleviates a significant proportion of the computational effort required for predictive uncertainty quantification. The results illustrate that both arsenic desorption under alkaline conditions and pyrite oxidation have likely contributed to the arsenic mobilization that was observed during the field trial. The predictive simulations show that arsenic concentrations would likely remain very low if the potential for pyrite oxidation is minimized through complete deoxygenation of the injectant. The proposed modeling and predictive uncertainty quantification method can be implemented for a wide range of groundwater studies that investigate the risks of metal(loid) or radionuclide contamination.
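The practical Monte Carlo step described above amounts to drawing surface-complexation and transport parameters within their calibrated bounds, propagating each draw through the field-scale prediction, and summarizing the spread in the quantity of interest. The sketch below shows only that sampling scaffold around a stand-in predictor; the parameter names, ranges and response function are hypothetical, not the study's SCM or reactive transport model.

```python
# Minimal scaffold for Monte Carlo predictive uncertainty: sample uncertain
# parameters within bounds, run a forward predictor for each draw, summarize
# the spread. Parameter ranges and the predictor are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_draws = 500

# Hypothetical uncertain inputs: sorption-site density, log K of an arsenic
# surface complex, and residual dissolved O2 in the injectant.
site_density = rng.uniform(1.0, 4.0, n_draws)     # umol/m^2
log_k        = rng.normal(5.0, 0.5, n_draws)
o2_injectant = rng.uniform(0.0, 0.3, n_draws)     # mg/L

def predicted_peak_arsenic(sites, logk, o2):
    """Stand-in forward model: more sorption lowers the peak, O2 raises it."""
    return 2.0 * o2 + 5.0 / (sites * 10**(logk - 5.0))

peaks = predicted_peak_arsenic(site_density, log_k, o2_injectant)
print(f"median peak As ~ {np.median(peaks):.2f} ug/L, "
      f"95th percentile ~ {np.percentile(peaks, 95):.2f} ug/L")
```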
NASA Technical Reports Server (NTRS)
Marshall, B. A.; Nichols, M. E.
1984-01-01
An experimental investigation (Test OA-309) was conducted using 0.0405-scale Space Shuttle Orbiter Model 16-0 in the North American Aerodynamics Laboratory 7.75 x 11.00-foot Lowspeed Wind Tunnel. The primary purpose was to locate and study any flow conditions or vortices that might have caused damage to the Advanced Flexible Reusable Surface Insulation (AFRSI) during the Space Transportation System STS-6 mission. A secondary objective was to evaluate vortex generators to be used for Wind Tunnel Test OS-314. Flowfield visualization was obtained by means of smoke, tufts, and oil flow. The test was conducted at Mach numbers between 0.07 and 0.23 and at dynamic pressures between 7 and 35 pounds per square foot. The angle-of-attack range of the model was -5 degrees through 35 degrees at 0 or 2 degrees of sideslip, while roll angle was held constant at zero degrees. The vortex generators were studied at angles of 0, 5, 10, and 15 degrees.
LABORATORY-SCALE SIMULATION OF RUNOFF RESPONSE FROM PERVIOUS-IMPERVIOUS SYSTEMS
Urban development yields landscapes that are composites of impervious and pervious areas, with a consequent reduction in infiltration and increase in stormwater runoff. Although basic rainfall-runoff models are used in the vast majority of runoff prediction in urban landscapes, t...
NASA Astrophysics Data System (ADS)
Hall, Carlton Raden
A major objective of remote sensing is the determination of biochemical and biophysical characteristics of plant canopies using high spectral resolution sensors. Canopy reflectance signatures depend on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge of the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(λ) (m⁻¹), diffuse backscatter b(λ) (m⁻¹), beam attenuation α(λ) (m⁻¹), and beam-to-diffuse conversion c(λ) (m⁻¹) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled at Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index LVCI, was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle-adjusted leaf thickness Ltadj, LAI, and h (m). Its function is to translate leaf-level estimates of diffuse absorption and backscatter to the canopy scale, allowing the leaf optical properties to directly influence above-canopy estimates of reflectance. The model was successfully modified and parameterized to operate in a canopy-scale and a leaf-scale mode. Canopy-scale model simulations produced the best results. Simulations based on leaf-derived coefficients produced calculated above-canopy reflectance errors of 15% to 18%. A comprehensive sensitivity analysis indicated that the most important parameters were the beam-to-diffuse conversion c(λ), diffuse absorption a(λ), diffuse backscatter b(λ), h (m), Q, and direct and diffuse irradiance. Sources of error include the estimation procedure for the direct-beam-to-diffuse conversion and attenuation coefficients and other field and laboratory measurement and analysis errors. Applications of the model include creation of synthetic reflectance data sets for remote sensing algorithm development, simulation of stress and drought effects on vegetation reflectance signatures, and the potential to estimate leaf moisture and chemical status.
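As a compact illustration of how diffuse absorption and backscatter coefficients control above-canopy reflectance, the sketch below evaluates the closed-form two-flow (Kubelka-Munk type) reflectance of an optically thick layer. The coefficient values are arbitrary, and the full model used in the study additionally carries beam attenuation and beam-to-diffuse conversion terms plus a bottom-reflectance boundary, which this simplified form omits.

```python
# Illustrative two-flow (Kubelka-Munk type) reflectance of an optically thick
# layer from diffuse absorption a and diffuse backscatter b. Coefficient values
# are arbitrary; the study's model also includes beam terms and a bottom boundary.
import numpy as np

def r_infinity(a, b):
    """Reflectance of a semi-infinite layer for the two-flow equations."""
    ratio = a / b
    return 1.0 + ratio - np.sqrt(ratio**2 + 2.0 * ratio)

for a, b in [(0.9, 0.05), (0.3, 0.10), (0.05, 0.20)]:   # strong to weak absorption
    print(f"a = {a:.2f} 1/m, b = {b:.2f} 1/m  ->  R_inf = {r_infinity(a, b):.3f}")
```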
Analogue modeling for science outreach: glacier flows at Antarctic National Museum, Italy
NASA Astrophysics Data System (ADS)
Zeoli, A.; Corti, G.; Folco, L.; Ossola, C.
2012-12-01
Comprehension of internal deformation and ice flow in the Antarctic ice sheet, in relation to bedrock topography and to thickness variations induced by climatic change, represents an important target for the scientific community. The analogue modelling technique analyzes geological or geomorphological processes through physical models built at a reduced geometrical scale in the laboratory and deformed over reasonable time scales. Corti et al. (2003 and 2008) have shown that this technique can also be used successfully for ice flow dynamics. Moreover, this technique gives a three-dimensional view of the processes. The models, which obviously simplify the geometry and rheology of natural processes, represent a geometrically, kinematically, dynamically and rheologically scaled analogue of the natural glacial environment. Following a procedure described in previous papers, proper materials have been selected to simulate the rheological behaviour of ice. In particular, polydimethylsiloxane (PDMS) has been used in the experiments to simulate glacial flow. PDMS is a transparent Newtonian silicone with a viscosity of 1.4 × 10⁴ Pa s and a density of 965 kg m⁻³ (see material properties in Weijermars, 1986). Scaling of the model to natural conditions allows reliable results to be obtained for a correct comparison with the glacial processes under investigation. Models have been built with a geometrical scaling ratio of ~1.5 × 10⁻⁵, such that 1 cm in the model represents ~700 m in nature. The physical models have been deformed in the terrestrial gravity field by allowing the PDMS to flow inside a Plexiglas box. In particular, the silicone was poured inside the Plexiglas box and allowed to settle in order to obtain a flat free surface; the box was then inclined by a few degrees to allow the silicone to flow. Several boxes illustrating different glacial processes have been realized; each can be performed easily, in a short time and in standard laboratories. One of the main aims of the Antarctic National Museum in Siena (Italy) is to establish a strategy to deliver results to a broader scientific community. The small temporal and spatial scales of the experiments make the analogue modelling technique easy to demonstrate to non-technical audiences through direct participation during Museum visits. These experiments engage teachers and students from primary and secondary schools as well as the general public.
Modeling fast and slow earthquakes at various scales
IDE, Satoshi
2014-01-01
Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138
Modeling fast and slow earthquakes at various scales.
Ide, Satoshi
2014-01-01
Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.
Scaling methane oxidation: From laboratory incubation experiments to landfill cover field conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abichou, Tarek, E-mail: abichou@eng.fsu.edu; Mahieu, Koenraad; Chanton, Jeff
2011-05-15
Evaluating field-scale methane oxidation in landfill cover soils using numerical models is gaining interest in the solid waste industry, as research has made it clear that methane oxidation in the field is a complex function of climatic conditions, soil type, cover design, and the incoming flux of landfill gas from the waste mass. Numerical models can account for these parameters as they change with time and space under field conditions. In this study, we developed temperature and water content correction factors for the methane oxidation parameters. We also introduced a possible correction to account for the different soil structure under field conditions. These parameters were defined in laboratory incubation experiments performed on homogenized soil specimens and were used to predict the actual methane oxidation rates to be expected under field conditions. Water content and temperature correction factors were obtained for the methane oxidation rate parameter to be used when modeling methane oxidation in the field. To predict in situ measured methane oxidation rates with the model, it was necessary to set the half-saturation constant of methane and oxygen, Km, to 5%, approximately five times larger than laboratory-measured values. We hypothesize that this discrepancy reflects differences in soil structure between the homogenized soil conditions in the lab and the actual aggregated soil structure in the field. When all of these correction factors were re-introduced into the oxidation module of our model, it was able to reproduce surface emissions (as measured by static flux chambers) and percent oxidation (as measured by stable isotope techniques) within the range measured in the field.
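The corrected oxidation rate described above can be summarized as dual Michaelis-Menten kinetics multiplied by temperature and water-content factors, with the half-saturation constant raised to roughly 5% v/v to represent aggregated field soil. The sketch below writes that expression with placeholder correction functions; the Gaussian shapes and Vmax are illustrative assumptions, not the paper's fitted curves.

```python
# Minimal sketch of a field-corrected methane oxidation rate: dual
# Michaelis-Menten kinetics times temperature and water-content correction
# factors, with Km raised to ~5% v/v as discussed above. Vmax and the shapes
# of the correction functions are illustrative, not the paper's fits.
import numpy as np

V_MAX = 100.0      # umol CH4 / (g soil * d), assumed lab-derived maximum
KM    = 5.0        # % v/v half-saturation for both CH4 and O2 (field-adjusted)

def f_temperature(t_c, t_opt=30.0, width=12.0):
    """Hypothetical Gaussian temperature correction, equal to 1 at the optimum."""
    return np.exp(-((t_c - t_opt) / width) ** 2)

def f_water(w, w_opt=0.20, width=0.10):
    """Hypothetical Gaussian water-content correction (gravimetric, g/g)."""
    return np.exp(-((w - w_opt) / width) ** 2)

def oxidation_rate(ch4, o2, t_c, w):
    """CH4 and O2 given in % v/v; returns umol CH4 / (g soil * d)."""
    return (V_MAX * f_temperature(t_c) * f_water(w)
            * ch4 / (KM + ch4) * o2 / (KM + o2))

print(f"rate ~ {oxidation_rate(ch4=10.0, o2=15.0, t_c=22.0, w=0.15):.1f} umol/(g*d)")
```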
Logarithmic scaling for fluctuations of a scalar concentration in wall turbulence.
Mouri, Hideaki; Morinaga, Takeshi; Yagi, Toshimasa; Mori, Kazuyasu
2017-12-01
Within wall turbulence, there is a sublayer where the mean velocity and the variance of velocity fluctuations vary logarithmically with the height from the wall. This logarithmic scaling is also known for the mean concentration of a passive scalar. By using heat as such a scalar in a laboratory experiment of a turbulent boundary layer, the existence of the logarithmic scaling is shown here for the variance of fluctuations of the scalar concentration. It is reproduced by a model of energy-containing eddies that are attached to the wall.
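The logarithmic scaling reported above can be checked by regressing the measured variance against the logarithm of wall distance inside the sublayer, var(θ) = B − A ln(z/δ). The short sketch below does this with synthetic data; the slope, intercept and noise level are invented for illustration and are not the experimental values.

```python
# Minimal sketch: fitting the logarithmic law  var(theta) = B - A*ln(z/delta)
# to profile data inside the log sublayer. The synthetic data and the values
# of A and B below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(1)
delta = 1.0                                  # boundary-layer thickness (normalized)
z = np.logspace(-2.3, -0.7, 15) * delta      # heights within the sublayer
A_true, B_true = 1.8, 0.5
var_theta = B_true - A_true * np.log(z / delta) + rng.normal(0.0, 0.05, z.size)

# Linear regression in ln(z/delta)
slope, intercept = np.polyfit(np.log(z / delta), var_theta, 1)
print(f"fitted slope A = {-slope:.2f} (true {A_true}), intercept B = {intercept:.2f}")
```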
Methodology for calculating shear stress in a meandering channel
Kyung-Seop Sin
2010-01-01
Shear stress in meandering channels is the key parameter to predict bank erosion and bend migration. A representative study reach of the Rio Grande River in central New Mexico has been modeled in the Hydraulics Laboratory at CSU. To determine the shear stress distribution in a meandering channel, the large scale (1:12) physical modeling study was conducted in the...
Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia
2016-09-01
In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. This is a crucial step in developing multi-scale models which explain multi-scale data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vigil,Benny Manuel; Ballance, Robert; Haskell, Karen
Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.
Pan, Xin; Qi, Jian-cheng; Long, Ming; Liang, Hao; Chen, Xiao; Li, Han; Li, Guang-bo; Zheng, Hao
2010-01-01
The close phylogenetic relationship between humans and non-human primates makes non-human primates an irreplaceable model for the study of human infectious diseases. In this study, we describe the development of a large-scale automatic multi-functional isolation chamber for use with medium-sized laboratory animals carrying infectious diseases. The isolation chamber, including the transfer chain, disinfection chain, negative air pressure isolation system, animal welfare system, and the automated system, is designed to meet all biological safety standards. To create an internal chamber environment that is completely isolated from the exterior, variable frequency drive blowers are used in the air-intake and air-exhaust system, precisely controlling the filtered air flow and providing an air-barrier protection. A double door transfer port is used to transfer material between the interior of the isolation chamber and the outside. A peracetic acid sterilizer and its associated pipeline allow for complete disinfection of the isolation chamber. All of the isolation chamber parameters can be automatically controlled by a programmable computerized menu, allowing for work with different animals in different-sized cages depending on the research project. The large-scale multi-functional isolation chamber provides a useful and safe system for working with infectious medium-sized laboratory animals in high-level bio-safety laboratories. PMID:20872984
A Future State for NASA Laboratories - Working in the 21st Century
NASA Technical Reports Server (NTRS)
Kegelman, Jerome T.; Harris, Charles E.; Antcliff, Richard R.; Bushnell, Dennis M.; Dwoyer, Douglas L.
2009-01-01
The name "21 st Century Laboratory" is an emerging concept of how NASA (and the world) will conduct research in the very near future. Our approach is to carefully plan for significant technological changes in products, organization, and society. The NASA mission can be the beneficiary of these changes, provided the Agency prepares for the role of 21st Century laboratories in research and technology development and its deployment in this new age. It has been clear for some time now that the technology revolutions, technology "mega-trends" that we are in the midst of now, all have a common element centered around advanced computational modeling of small scale physics. Whether it is nano technology, bio technology or advanced computational technology, all of these megatrends are converging on science at the very small scale where it is profoundly important to consider the quantum effects at play with physics at that scale. Whether it is the bio-technology creation of "nanites" designed to mimic our immune system or the creation of nanoscale infotechnology devices, allowing an order of magnitude increase in computational capability, all involve quantum physics that serves as the heart of these revolutionary changes.
NASA Astrophysics Data System (ADS)
Schlegel, N.; Seroussi, H. L.; Boening, C.; Larour, E. Y.; Limonadi, D.; Schodlok, M.; Watkins, M. M.
2017-12-01
The Jet Propulsion Laboratory-University of California at Irvine Ice Sheet System Model (ISSM) is a thermo-mechanical 2D/3D parallelized finite element software used to physically model the continental-scale flow of ice at high resolutions. Embedded into ISSM are uncertainty quantification (UQ) tools, based on the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) software. ISSM-DAKOTA offers various UQ methods for the investigation of how errors in model input impact uncertainty in simulation results. We utilize these tools to regionally sample model input and key parameters, based on specified bounds of uncertainty, and run a suite of continental-scale 100-year ISSM forward simulations of the Antarctic Ice Sheet. Resulting diagnostics (e.g., spread in local mass flux and regional mass balance) inform our conclusion about which parameters and/or forcing has the greatest impact on century-scale model simulations of ice sheet evolution. The results allow us to prioritize the key datasets and measurements that are critical for the minimization of ice sheet model uncertainty. Overall, we find that Antarctica's total sea level contribution is strongly affected by grounding line retreat, which is driven by the magnitude of ice shelf basal melt rates and by errors in bedrock topography. In addition, results suggest that after 100 years of simulation, Thwaites Glacier is the most significant source of model uncertainty, and its drainage basin has the largest potential for future sea level contribution. This work is performed at and supported by the California Institute of Technology's Jet Propulsion Laboratory. Supercomputing time is also supported through a contract with the National Aeronautics and Space Administration's Cryosphere program.
Compliant Robotic Structures. Part 2
1986-07-01
[Table-of-contents fragments: Nonaxially Homogeneous Stresses and Strains; Parametric Studies; References; III. Large Deflections of Continuous Elastic Structures; ... Appendix C: Computer Program for the Element String.] SUMMARY: This is the second-year report of a three-year study on compliant ... ratios as high as 10/1 for laboratory-scale models and up to 3/1 for full-scale prototype arms. The first two years of this study have involved the ...
NASA Astrophysics Data System (ADS)
Guan, Mingfu; Ahilan, Sangaralingam; Yu, Dapeng; Peng, Yong; Wright, Nigel
2018-01-01
Fine sediment plays crucial and multiple roles in the hydrological, ecological and geomorphological functioning of river systems. This study employs a two-dimensional (2D) numerical model to track the hydro-morphological processes dominated by fine suspended sediment, including the prediction of sediment concentration in flow bodies, and the erosion and deposition caused by sediment transport. The model is governed by the 2D full shallow water equations with which an advection-diffusion equation for fine sediment is coupled. Bed erosion and sedimentation are updated by a bed deformation model based on the local sediment entrainment and settling fluxes in the flow. The model is initially validated against three laboratory-scale experimental events in which suspended load plays a dominant role. Satisfactory simulation results confirm the model's capability of capturing hydro-morphodynamic processes dominated by fine suspended sediment at laboratory scale. Applications to sedimentation in a stormwater pond are then conducted to develop a process-based understanding of fine sediment dynamics over a variety of flow conditions. Urban flows with 5-year, 30-year and 100-year return periods and the extreme flood event of 2012 are simulated. The modelled results deliver a step change in understanding fine sediment dynamics in stormwater ponds. The model is capable of quantitatively simulating and qualitatively assessing the performance of a stormwater pond in managing urban water quantity and quality.
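The coupling described above, an advection-diffusion equation for suspended sediment with entrainment and settling fluxes that also update the bed, can be illustrated in one dimension. The explicit scheme, parameter values and the simple linear entrainment/settling closures below are a sketch under stated assumptions, not the 2D shallow-water model of the study.

```python
# 1D illustration of the coupling used above: advection-diffusion of suspended
# sediment with exchange fluxes (entrainment and deposition) that also update
# the bed. Explicit scheme and closures are a simplified sketch, not the 2D model.
import numpy as np

nx, dx = 200, 1.0                  # grid cells and spacing (m)
u, diff = 0.5, 0.05                # flow velocity (m/s) and diffusivity (m^2/s), assumed
h = 1.0                            # flow depth (m), assumed constant
w_s, m_e = 0.01, 1e-5              # settling velocity (m/s), entrainment rate (kg/m^2/s), assumed
dt = 0.4 * min(dx / u, dx**2 / (2.0 * diff))   # respect advective and diffusive stability limits

c = np.zeros(nx)                   # depth-averaged concentration (kg/m^3)
bed = np.zeros(nx)                 # cumulative bed mass change (kg/m^2)
c[0] = 1.0                         # sediment-laden inflow boundary

for _ in range(300):
    adv = np.zeros(nx); dif = np.zeros(nx)
    adv[1:]   = -u * (c[1:] - c[:-1]) / dx                       # upwind advection (u > 0)
    dif[1:-1] = diff * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2  # central diffusion
    deposition  = w_s * c                                        # settling flux, kg/m^2/s
    entrainment = m_e                                            # constant pickup, kg/m^2/s
    c   += dt * (adv + dif + (entrainment - deposition) / h)
    bed += dt * (deposition - entrainment)                       # bed gains what the water column loses
    c[0] = 1.0                                                   # hold the inflow concentration

print(f"max concentration {c.max():.3f} kg/m^3, net bed change {bed.sum() * dx:.3f} kg per m width")
```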
Liu, Jianjun; Song, Rui; Cui, Mengmeng
2014-01-01
A novel approach to simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore pressures and confining pressures, are tested at laboratory scale. A micro-CT scanner is employed to scan the samples for three-dimensional images, used as input to construct the model. Accordingly, four physical models possessing the same pore and rock matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, three-dimensional finite element models of both the rock matrix and the pore space are established on the MIMICS and ICEM software platforms. The Navier-Stokes equation and an elastic constitutive equation are used as the mathematical model for simulation. A hydromechanical coupling analysis in the pore-scale finite element model of porous media is then performed with the ANSYS and CFX software. Hereby, the permeability of sandstone samples under different pore pressures and confining pressures is predicted. The simulation results agree well with the benchmark data. By reproducing the stress state underground, the prediction accuracy of the porous rock permeability in pore-scale simulation is improved. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view.
Liu, Jianjun; Song, Rui; Cui, Mengmeng
2014-01-01
A novel approach to simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore pressures and confining pressures, are tested at laboratory scale. A micro-CT scanner is employed to scan the samples for three-dimensional images, used as input to construct the model. Accordingly, four physical models possessing the same pore and rock matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, three-dimensional finite element models of both the rock matrix and the pore space are established on the MIMICS and ICEM software platforms. The Navier-Stokes equation and an elastic constitutive equation are used as the mathematical model for simulation. A hydromechanical coupling analysis in the pore-scale finite element model of porous media is then performed with the ANSYS and CFX software. Hereby, the permeability of sandstone samples under different pore pressures and confining pressures is predicted. The simulation results agree well with the benchmark data. By reproducing the stress state underground, the prediction accuracy of the porous rock permeability in pore-scale simulation is improved. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view. PMID:24955384
AQMEII3: the EU and NA regional scale program of the ...
The presentation builds on the work presented last year at the 14th CMAS meeting and is applied to the work performed in the context of the AQMEII-HTAP collaboration. The analysis is conducted within the framework of the third phase of AQMEII (Air Quality Model Evaluation International Initiative) and encompasses the gauging of model performance through measurement-to-model comparison, error decomposition and time-series analysis of the model biases. Through the comparison of several regional-scale chemistry transport modelling systems applied to simulate meteorology and air quality over two continental areas, this study aims to i) apportion the error to the responsible processes through time-scale analysis, ii) help detect the causes of model error, and iii) identify the processes and scales most urgently requiring dedicated investigation. The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while the apportioning of the error into its constituent parts (bias, variance and covariance) can help assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the previous phases of AQMEII. The National Exposure Research Laboratory (NERL) Computational Exposur
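The bias/variance/covariance apportionment mentioned above is the standard decomposition of mean-square error into a bias term, a term for the mismatch in standard deviations, and a term controlled by the correlation: MSE = (m̄ − ō)² + (σm − σo)² + 2σmσo(1 − r). The sketch below applies it to synthetic modelled and observed series; the numbers are illustrative only, not AQMEII results.

```python
# Minimal sketch of the MSE decomposition into bias, variance and covariance
# parts: MSE = (mean_m - mean_o)**2 + (sd_m - sd_o)**2 + 2*sd_m*sd_o*(1 - r).
# Synthetic hourly series, for illustration only.
import numpy as np

rng = np.random.default_rng(2)
obs = 30.0 + 10.0 * np.sin(np.linspace(0, 8 * np.pi, 720)) + rng.normal(0, 2, 720)
mod = 0.9 * obs + 5.0 + rng.normal(0, 3, 720)     # biased, slightly damped "model"

bias2    = (mod.mean() - obs.mean()) ** 2
var_part = (mod.std() - obs.std()) ** 2
r        = np.corrcoef(mod, obs)[0, 1]
cov_part = 2.0 * mod.std() * obs.std() * (1.0 - r)

mse = np.mean((mod - obs) ** 2)
print(f"MSE = {mse:.2f} ~= bias^2 {bias2:.2f} + variance {var_part:.2f} + covariance {cov_part:.2f}")
```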
Javaherchi, Teymour
2016-06-08
Attached are the .cas and .dat files, along with the required User Defined Functions (UDFs) and the look-up table of lift and drag coefficients, for the Reynolds-Averaged Navier-Stokes (RANS) simulation of three coaxially located lab-scaled DOE RM1 turbines implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a redesigned geometry, based on the full-scale DOE RM1 design, that produces the same power output as the full-scale model while operating at matched Tip Speed Ratio values at a laboratory-achievable Reynolds number (see attached paper). In this case study the flow field around and in the wake of the lab-scaled DOE RM1 turbines in a coaxial array is simulated using the Blade Element Model (a.k.a. Virtual Blade Model) by solving the RANS equations coupled with the k-ω turbulence closure model. It should be highlighted that the actual geometry of the rotor blade is not modeled in this simulation; the effect of the rotating turbine blades is represented using Blade Element Theory. The simulation provides an accurate estimate of the performance of each device and of the structure of their turbulent far wakes. The results of these simulations were validated against in-house experimental data. Simulations for other turbine configurations are available upon request.
Sub-scale Inverse Wind Turbine Blade Design Using Bound Circulation
NASA Astrophysics Data System (ADS)
Kelley, Christopher; Berg, Jonathan
2014-11-01
A goal of the National Rotor Testbed project at Sandia is to design a sub-scale wind turbine blade that has similitude to a modern, commercial-size blade. However, a smaller-diameter wind turbine operating at the same tip-speed ratio exhibits a different range of operating Reynolds numbers across the blade span, thus changing the local lift and drag coefficients. Differences in load distribution also affect the wake dynamics and stability. An inverse wind turbine blade design tool has been implemented which uses a target, dimensionless circulation distribution from a full-scale blade to find the chord and twist along a sub-scale blade. In addition, airfoil polar data are interpolated from a few specified span stations, leading to a smooth, manufacturable blade. The iterative process perturbs chord and twist, after running a blade element momentum theory code, to reduce the residual sum of squares between the modeled sub-scale circulation and the target full-scale circulation. It is shown that the converged sub-scale design also leads to performance similarity in thrust and power coefficients. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy under Contract DE-AC04-94AL85000.
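At its core, the inverse step converts a target spanwise circulation into chord, since the bound circulation of a blade section is Γ = ½ W c Cl; holding a design angle of attack then gives c = 2Γ/(W Cl) and the twist from the inflow angle. The sketch below shows that conversion in isolation (no induction, no iteration); the target circulation, design lift coefficient and operating point are placeholders, not the National Rotor Testbed values.

```python
# Minimal sketch of the inverse step: chord from a target circulation
# distribution via Gamma = 0.5 * W * c * Cl  =>  c = 2*Gamma / (W * Cl).
# Target circulation, design Cl and operating point are placeholders, not the
# National Rotor Testbed values, and induction is neglected for brevity.
import numpy as np

U, tsr, R = 8.0, 7.0, 10.0                      # wind speed (m/s), tip-speed ratio, radius (m), assumed
r = np.linspace(0.2, 1.0, 9) * R                # radial stations
omega = tsr * U / R

gamma_target = 8.0 * np.sqrt(1.0 - ((r / R) - 0.6) ** 2)   # hypothetical target circulation (m^2/s)
cl_design = 1.0                                            # design lift coefficient (assumed)

W = np.sqrt(U**2 + (omega * r) ** 2)                 # local relative speed
chord = 2.0 * gamma_target / (W * cl_design)         # chord that delivers the target circulation
twist = np.rad2deg(np.arctan2(U, omega * r)) - 6.0   # inflow angle minus a 6 deg design alpha

for ri, ci, ti in zip(r, chord, twist):
    print(f"r = {ri:5.1f} m   chord = {ci:5.3f} m   twist = {ti:5.1f} deg")
```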
De Bartolo, Samuele; Fallico, Carmine; Veltri, Massimo
2013-01-01
Hydraulic conductivity and effective porosity values for the confined sandy loam aquifer of the Montalto Uffugo (Italy) test field were obtained by laboratory and field measurements; the former were carried out on undisturbed soil samples and the latter by slug and aquifer tests. A direct simple-scaling analysis was performed for the whole range of measurements, and a comparison was made among the different types of fractal models describing the scaling behavior. Some indications are given about the largest pore size to use in the fractal models. The results obtained for a sandy loam soil show that it is possible to obtain global indications on the behavior of hydraulic conductivity versus porosity by utilizing a simple scaling relation and a fractal model in a coupled manner. PMID:24385876
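A simple-scaling analysis of this kind amounts to checking whether hydraulic conductivity and effective porosity collapse onto a power law, K ∝ φ^β, i.e. a straight line in log-log space. The sketch below fits such a relation to synthetic (φ, K) pairs; the exponent and data are illustrative, not the Montalto Uffugo laboratory or field measurements.

```python
# Minimal sketch of a simple-scaling (power-law) fit K = K0 * phi**beta in
# log-log space. The synthetic (phi, K) pairs and the exponent are
# illustrative, not the Montalto Uffugo data.
import numpy as np

rng = np.random.default_rng(3)
phi = rng.uniform(0.25, 0.45, 30)                              # effective porosity (-)
K = 1e-5 * phi**3.5 * np.exp(rng.normal(0.0, 0.3, phi.size))   # hydraulic conductivity (m/s)

slope, intercept = np.polyfit(np.log(phi), np.log(K), 1)
print(f"fitted exponent beta = {slope:.2f}, prefactor K0 = {np.exp(intercept):.2e} m/s")
```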
REDUCING ENERGY AND SPACE REQUIREMENTS BY ELECTROSTATIC AUGMENTATION OF A PULSE-JET FABRIC FILTER
In work performed several years ago by EPA's research lab, then known as the Air and Energy Engineering Research Laboratory (EPA/AEERL), small-scale testing and modeling of electrostatically stimulated fabric filtration (ESFF) indicated that substantial performance benefits could ...
Soil mixing of stratified contaminated sands.
Al-Tabba, A; Ayotamuno, M J; Martin, R J
2000-02-01
Validation of soil mixing for the treatment of contaminated ground is needed across a wide range of site conditions to widen the application of the technology and to understand the mechanisms involved. Since very limited work has been carried out in heterogeneous ground conditions, this paper investigates the effectiveness of soil mixing in stratified sands using laboratory-scale augers. This enabled a low-cost investigation of the effects of factors such as grout type and form, auger design, installation procedure, mixing mode, curing period, thickness of soil layers and natural moisture content on the unconfined compressive strength, leachability and leachate pH of the soil-grout mixes. The results showed that auger design plays a very important part in the mixing process in heterogeneous sands. The variability of the properties measured in the stratified soils, and the measurable variations caused by the various factors considered, highlighted the importance of duplicating appropriate in situ conditions, the usefulness of laboratory-scale modelling of in situ conditions, and the importance of modelling soil and contaminant heterogeneities at the treatability study stage.
NASA Astrophysics Data System (ADS)
Balt, C.; Kincaid, C. R.; Ullman, D. S.
2010-12-01
Greenwich Bay and the Providence River represent two subsystems of the Narragansett Bay (RI) estuary with chronic water quality problems. Both underway and moored Acoustic Doppler Current Profiler (ADCP) observations have shown the presence of large-scale, subtidal gyres within these subsystems. Prior numerical models of Narragansett Bay, developed using the Regional Ocean Modeling System (ROMS), indicate that prevailing summer sea breeze conditions are favorable to the evolution of stable circulation gyres, which increase retention times within each subsystem. Fluid dynamics laboratory models of the Providence River, conducted in the Geophysical Fluid Dynamics Laboratory of the Research School of Earth Sciences (Australian National University), reproduce gyres that match first order features of the ADCP data. These laboratory models also reveal details of small-scale eddies along the edges of the retention gyre. We report results from spatially and temporally detailed current meter deployments (using SeaHorse Tilt Current Meters) in both subsystems, which reveal details on the growth and decay of gyres under various spring-summer forcing conditions. In particular, current meters were deployed during the severe flooding events in the Narragansett Bay watershed during March, 2010. A combination of current meter data and high-resolution ROMS modeling is used to show how gyres effectively limit subtidal exchange from the Providence River and Greenwich Bay and to understand the forcing conditions that favor efficient flushing. The residence times of stable gyres within these regions can be an order of magnitude larger than values predicted by fraction of water methods. ROMS modeling is employed to characterize gyre energy, stability, and flushing rates for a wide range of seasonal, wind and runoff scenarios.
Three experiments investigating larval stocking densities of summer flounder from hatch to metamorphosis, Paralichthys dentatus, were conducted at laboratory-scale (75-L aquaria) and at commercial scale (1,000-L tanks). Experiments 1 and 2 at commercial scale tested the densities...
An overview of Laser-Produced Relativistic Positrons in the Laboratory
NASA Astrophysics Data System (ADS)
Edghill, Brandon; Williams, Gerald; Chen, Hui; Beg, Farhat
2017-10-01
The production of relativistic positrons using ultraintense lasers can facilitate studies of fundamental pair plasma science in the relativistic regime and laboratory studies of scaled energetic astrophysical mechanisms such as gamma ray bursts. The positron densities and spatial scales required for these applications, however, are larger than current capabilities. Here, we present an overview of the experimental laser-produced positron results and their respective modeling for both the direct laser-irradiated process and the indirect process (laser wakefield accelerated electrons irradiating a high-Z converter). Conversion efficiency into positrons and positron beam characteristics are compared, including total pair yield, mean energy, angular divergence, and inferred pair density for various laser and target conditions. Prospects towards increasing positron densities and beam repetition rates will also be discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and funded by LDRD (#17-ERD-010).
Quantum Entanglement of Matter and Geometry in Large Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig J.
2014-12-04
Standard quantum mechanics and gravity are used to estimate the mass and size of idealized gravitating systems where position states of matter and geometry become indeterminate. It is proposed that well-known inconsistencies of standard quantum field theory with general relativity on macroscopic scales can be reconciled by nonstandard, nonlocal entanglement of field states with quantum states of geometry. Wave functions of particle world lines are used to estimate scales of geometrical entanglement and emergent locality. Simple models of entanglement predict coherent fluctuations in position of massive bodies, of Planck scale origin, measurable on a laboratory scale, and may account for the fact that the information density of long lived position states in Standard Model fields, which is determined by the strong interactions, is the same as that determined holographically by the cosmological constant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Schwen, Daniel; Ghassemi, Pedram
This work presents a multi-physics, multi-scale approach to modeling the Transient Test Reactor (TREAT) currently being prepared for restart at the Idaho National Laboratory. TREAT fuel is made up of microscopic fuel grains (r ~ 20 µm) dispersed in a graphite matrix. The novelty of this work is in coupling a binary collision Monte-Carlo (BCMC) model to the finite-element-based code MOOSE for solving a microscopic heat-conduction problem whose driving source is provided by the BCMC model tracking fission fragment energy deposition. This microscopic model is driven by a transient, engineering-scale neutronics model coupled to an adiabatic heating model. The macroscopic model provides local power densities and neutron energy spectra to the microscopic model. Currently, no feedback from the microscopic to the macroscopic model is considered. TREAT transient 15 is used to exemplify the capabilities of the multi-physics, multi-scale model, and it is found that the average fuel grain temperature differs from the average graphite temperature by 80 K despite the low-power transient. The large temperature difference has strong implications for the Doppler feedback a potential LEU TREAT core would see, and it underpins the need for multi-physics, multi-scale modeling of a TREAT LEU core.
Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame
NASA Astrophysics Data System (ADS)
Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank
2017-10-01
This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using six degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, additional test data are required to draw general conclusions.
An Exponential Luminous Efficiency Model for Hypervelocity Impact into Regolith
NASA Technical Reports Server (NTRS)
Swift, Wesley R.; Moser, D.E.; Suggs, Robb M.; Cooke, W.J.
2010-01-01
The flash of thermal radiation produced as part of the impact-crater forming process can be used to determine the energy of the impact if the luminous efficiency is known. From this energy the mass and, ultimately, the mass flux of similar impactors can be deduced. The luminous efficiency, η, is a unique function of velocity with an extremely large variation in the laboratory range of under 8 km/s but a necessarily small variation with velocity in the meteoric range of 20 to 70 km/s. Impacts into granular or powdery regolith, such as that on the Moon, differ from impacts into solid materials in that the energy is deposited via a serial impact process which affects the rate of deposition of internal (thermal) energy. An exponential model of the process is developed which differs from the usual polynomial models of crater formation. The model is valid for the early time portion of the process and focuses on the deposition of internal energy into the regolith. The model is successfully compared with experimental luminous efficiency data from laboratory impacts and from astronomical determinations, and scaling factors are estimated. Further work is proposed to clarify the effects of mass and density upon the luminous efficiency scaling factors.
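To make the basic relationship concrete, the sketch below inverts the definition of luminous efficiency, E_rad = η · (1/2) m v², to recover an impactor mass from an observed flash energy. The numerical values are purely illustrative.

```python
def impactor_mass(radiated_energy_j, velocity_m_s, eta):
    """Kinetic mass implied by an observed flash: m = 2 * E_rad / (eta * v**2)."""
    return 2.0 * radiated_energy_j / (eta * velocity_m_s ** 2)

# Illustrative numbers only: a 1e6 J flash at 24 km/s with eta = 2e-3.
print(impactor_mass(1.0e6, 24.0e3, 2.0e-3), "kg")
```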
Study on water evaporation rate from indoor swimming pools
NASA Astrophysics Data System (ADS)
Rzeźnik, Ilona
2017-11-01
The air relative humidity in the enclosed spaces of indoor swimming pools significantly influences users' thermal comfort and the stability of the building structure, so maintaining it at a suitable level is very important. For this purpose, buildings are equipped with HVAC systems which provide an adequate level of humidity. The selection of devices and their technical parameters is made using mathematical models of the water evaporation rate in unoccupied and occupied indoor swimming pools. In the literature there are many papers describing this phenomenon, but their results differ from each other. The aim of the study was the experimental verification of published models of the evaporation rate in the pool. The tests were carried out at laboratory scale, using a model of an indoor swimming pool measuring 99 cm × 68 cm × 22 cm. The model was equipped with a water spray installation with six nozzles to simulate conditions during use of the swimming pool. The measurements were made for the conditions of sports pools (water temperature 24°C) and recreational swimming pools (water temperature 34°C). Following the recommendations, the air temperature was about 2°C higher than the water temperature, and the relative humidity ranged from 40% to 55%. The Shah and Biasin & Krumm models showed the best fit to the results of the laboratory-scale measurements.
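For orientation, correlations of the type compared in such studies typically relate the evaporation rate to the pool area, the air speed over the water, and the difference between the water-surface and room-air vapour pressures. The sketch below implements a Carrier/ASHRAE-type form; the coefficients are the values commonly quoted for this family of correlations and, together with the example operating point, should be treated as assumptions here rather than the calibrated parameters of the Shah or Biasin & Krumm models.

```python
def evaporation_rate(area_m2, p_w_kpa, p_a_kpa, air_speed_m_s,
                     c1=0.089, c2=0.0782, latent_heat_kj_kg=2430.0):
    """Carrier-type correlation (kg/s): E = A * (c1 + c2*V) * (p_w - p_a) / h_fg.
    c1 and c2 are commonly quoted empirical constants; substitute the
    coefficients of whichever model is being verified."""
    return area_m2 * (c1 + c2 * air_speed_m_s) * (p_w_kpa - p_a_kpa) / latent_heat_kj_kg

# Lab-scale pool from the study: 0.99 m x 0.68 m, water at 24 °C (p_w ≈ 2.98 kPa),
# air at 26 °C and 50 % RH (p_a ≈ 1.68 kPa), nearly still air (assumed 0.05 m/s).
print(evaporation_rate(0.99 * 0.68, 2.98, 1.68, 0.05), "kg/s")
```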
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarthy, J.M.; Arnett, R.C.; Neupauer, R.M.
This report documents a study conducted to develop a regional groundwater flow model for the Eastern Snake River Plain Aquifer in the area of the Idaho National Engineering Laboratory. The model was developed to support Waste Area Group 10, Operable Unit 10-04 groundwater flow and transport studies. The products of this study are this report and a set of computational tools designed to numerically model the regional groundwater flow in the Eastern Snake River Plain aquifer. The objective of developing the current model was to create a tool for defining the regional groundwater flow at the INEL. The model was developed to (a) support future transport modeling for WAG 10-04 by providing the regional groundwater flow information needed for the WAG 10-04 risk assessment, (b) define the regional groundwater flow setting for modeling groundwater contaminant transport at the scale of the individual WAGs, (c) provide a tool for improving the understanding of the groundwater flow system below the INEL, and (d) consolidate the existing regional groundwater modeling information into one usable model. The current model is appropriate for defining the regional flow setting for flow submodels as well as hypothesis testing to better understand the regional groundwater flow in the area of the INEL. The scale of the submodels must be chosen based on accuracy required for the study.
NASA Astrophysics Data System (ADS)
Yoon, H.; Dewers, T. A.; Valocchi, A. J.; Werth, C. J.
2011-12-01
Dissolved CO2 during geological CO2 storage may react with minerals in fractured rocks or confined aquifers and cause mineral precipitation. The overall rate of reaction can be affected by coupled processes among hydrodynamics, transport, and reactions at the pore scale. Pore-scale models of coupled fluid flow, reactive transport, and CaCO3 precipitation and dissolution are applied to account for transient experimental results of CaCO3 precipitation and dissolution under highly supersaturated conditions in a microfluidic pore network (i.e., micromodel). Pore-scale experiments in the micromodel are used as a basis for understanding the coupled physics of systems perturbed by geological CO2 injection. In the micromodel, precipitation is induced by transverse mixing along the centerline in pore bodies. Overall, the pore-scale model qualitatively captured the governing physics of the reactions, such as precipitate morphology, precipitation rate, and maximum precipitation area in the first few pore spaces. In particular, we found that proper estimation of the effective diffusion coefficient and the reactive surface area is necessary to adequately simulate precipitation and dissolution rates. As the model domain increases, the effect of flow patterns altered by precipitation on the overall reaction rate also increases. The model is also applied to account for the effect of different reaction rate laws on mineral precipitation and dissolution at the pore scale. Reaction rate laws tested include the linear rate law, nonlinear power law, and a newly developed rate law based on in-situ measurements at the nanoscale in the literature. Progress on novel methods for upscaling pore-scale models for reactive transport is discussed, and these methods are being applied to mineral precipitation patterns observed in natural analogues. H.Y. and T. D. were supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
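The rate laws mentioned above are typically written in terms of the saturation state Ω (ion activity product divided by the solubility product). The sketch below places a linear, transition-state-theory-type law next to a nonlinear power law; the rate constant, surface area, and exponents are illustrative placeholders, and the newly developed nanoscale-based law from the study is not reproduced.

```python
import numpy as np

def rate_linear(k, area, omega):
    """Linear (TST-type) rate law: R = k * A_s * (1 - omega)."""
    return k * area * (1.0 - omega)

def rate_power(k, area, omega, p=1.0, q=2.0):
    """Nonlinear power law: R = k * A_s * (1 - omega**p)**q, with the sign kept
    so that omega > 1 (supersaturation) gives the opposite sense to dissolution.
    The exponents p and q are illustrative only."""
    driving = 1.0 - omega ** p
    return k * area * np.sign(driving) * np.abs(driving) ** q

# omega = ion activity product / Ksp; omega > 1 corresponds to precipitation.
omega = np.array([0.2, 1.0, 5.0, 20.0])
print(rate_linear(1e-6, 0.01, omega))
print(rate_power(1e-6, 0.01, omega))
```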
NASA Astrophysics Data System (ADS)
Cassiani, G.; dalla, E.; Brovelli, A.; Pitea, D.; Binley, A. M.
2003-04-01
The development of reliable constitutive laws to translate geophysical properties into hydrological ones is the fundamental step for successful applications of hydrogeophysical techniques. Many such laws have been proposed and applied, particularly with regard to two types of relationships: (a) between moisture content and dielectric properties, and (b) between electrical resistivity, rock structure and water saturation. The classical Archie's law belongs to this latter category. Archie's relationship has been widely used, starting from borehole log applications, to translate geoelectrical measurements into estimates of saturation. However, in spite of its popularity, it remains an empirical relationship, the parameters of which must be calibrated case by case, e.g. on laboratory data. Pore-scale models have recently been recognized and used as powerful tools to investigate the constitutive relations of multiphase soils from a pore-scale point of view, because they bridge the microscopic and macroscopic scales. In this project, we develop and validate a three-dimensional pore-scale method to compute electrical properties of unsaturated and saturated porous media. First we simulate a random packing of spheres [1] that obeys the grain-size distribution and porosity of an experimental porous medium system; then we simulate primary drainage with a morphological approach [2]; finally, for each state of saturation during the drainage process, we solve the electrical conduction equation within the grain structure with a new numerical model and compute the apparent electrical resistivity of the porous medium. We apply the new method to a semi-consolidated Permo-Triassic Sandstone from the UK (Sherwood Sandstone) for which both pressure-saturation (Van Genuchten) and Archie's law parameters have been measured on laboratory samples. A comparison between simulated and measured relationships has been performed.
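For reference, Archie's relationship referred to above is commonly written as ρ = a·ρ_w·φ^(−m)·S_w^(−n), with the empirical parameters a, m and n calibrated case by case. The sketch below evaluates the forward law and its inversion for water saturation; the parameter values and example numbers are generic defaults for illustration, not the Sherwood Sandstone calibration.

```python
def archie_resistivity(rho_w, phi, sw, a=1.0, m=2.0, n=2.0):
    """Archie's law: rho = a * rho_w * phi**(-m) * sw**(-n).
    a, m, n are the empirical parameters that must be calibrated, e.g. on
    laboratory samples."""
    return a * rho_w * phi ** (-m) * sw ** (-n)

def archie_saturation(rho, rho_w, phi, a=1.0, m=2.0, n=2.0):
    """Invert Archie's law for water saturation given a resistivity estimate."""
    return (a * rho_w * phi ** (-m) / rho) ** (1.0 / n)

# Illustrative: rho_w = 20 ohm.m, phi = 0.25, measured bulk rho = 600 ohm.m.
print(archie_saturation(600.0, 20.0, 0.25))
```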
Ataman, Meric
2017-01-01
Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks, as they encapsulate all known metabolic capabilities of the organisms from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in various practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across different studies and laboratories due to differing criteria and levels of detail, which can compromise the transferability of findings and also the integration of experimental data from different groups. In this study, we have developed a systematic semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner, focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to be able to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields, flux and concentration variability and gene essentiality. The development of these “consistently-reduced” models will help to clarify and facilitate integration of different experimental data to draw new understanding that can be directly extended to genome-scale models. PMID:28727725
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ping
Controlling metallic nanoparticle (NP) interactions plays a vital role in the development of new joining techniques (nanosolder) that bond at lower processing temperatures but remain viable at higher temperatures. The primary objective of this project is to develop a fundamental understanding of the actual reaction processes, associated atomic mechanisms, and the resulting microstructure that occur during thermally-driven bond formation at metal-metal nano-scale (<50 nm) interfaces. In this LDRD project, we have studied metallic NP interactions at elevated temperatures by combining in-situ transmission electron microscopy (TEM) using an aberration-corrected scanning transmission electron microscope (AC-STEM) with atomic-scale modeling such as molecular dynamics (MD) simulations. Various metallic NPs such as Ag, Cu and Au were synthesized by chemical routes. Numerous in-situ experiments were carried out, with the focus of the research on the Ag-Cu system. For the first time, using in-situ STEM heating experiments, we directly observed the formation of a 3-dimensional (3-D) epitaxial Cu-Ag core-shell nanoparticle during the thermal interaction of Cu and Ag NPs at elevated temperatures (150-300 °C). The reaction takes place at temperatures as low as 150 °C and was only observed when care was taken to circumvent the effects of electron beam irradiation during STEM imaging. Atomic-scale modeling verified that the Cu-Ag core-shell structure is energetically favored, and indicated that this phenomenon is a nano-scale effect related to the large surface-to-volume ratio of the NPs. The observation can potentially be used for developing new nanosolder technology that uses the Ag shell as the "glue" that sticks the Cu particles together. The LDRD has led to several journal publications and numerous conference presentations, and a TA. In addition, we have developed new TEM characterization techniques and phase-field modeling tools that can be used for future materials research at Sandia. Acknowledgements: This work was supported by the Laboratory Directed Research and Development (LDRD) program of Sandia National Laboratories. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Su, Bin-Guang; Chen, Shao-Fen; Yeh, Shu-Hsing; Shih, Po-Wen; Lin, Ching-Chiang
2016-11-01
To cope with the government's policies to reduce medical costs, Taiwan's healthcare service providers are striving to survive by pursuing profit maximization through cost control. This article aimed to present the results of a cost evaluation using activity-based costing performed in the laboratory, in order to throw light on the differences between costs and the payment system of National Health Insurance (NHI). This study analyzed the cost and income data of the clinical laboratory. Direct costs belong to their respective sections of the department. The department's shared costs, including public expenses and administrative assigned costs, were allocated to the department's respective sections. A simple regression equation was created to predict profit and loss, and to evaluate the department's break-even point, fixed cost, and contribution margin ratio. In the clinical chemistry and seroimmunology sections, the cost per test was lower than the NHI payment and their major laboratory tests generated revenue with a profitability ratio of 8.7%, while the other sections had a higher cost per test than the NHI payment and their major tests were in deficit. The study found a simple linear regression model as follows: Balance = −84,995 + 0.543 × income (R² = 0.544). In order to avoid deficits, laboratories are advised to increase test volumes, enhance laboratory test specialization, and reach marginal scale. A hospital could integrate with regional medical institutions through alliances or OEM methods to increase volumes to reach marginal scale and reduce laboratory costs, enhancing the level and quality of laboratory medicine.
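The reported regression already contains the break-even analysis: the intercept magnitude is the estimated fixed cost and the slope is the contribution margin ratio, so the break-even income is simply their quotient. A minimal check, using only the coefficients quoted above (income and balance in the study's currency units):

```python
# Regression reported in the study: Balance = -84,995 + 0.543 * income
FIXED_COST = 84_995            # intercept magnitude = estimated fixed cost
CONTRIBUTION_MARGIN = 0.543    # slope = contribution margin ratio

def balance(income):
    return -FIXED_COST + CONTRIBUTION_MARGIN * income

break_even_income = FIXED_COST / CONTRIBUTION_MARGIN
print(round(break_even_income))   # income at which balance(income) = 0, ~156,529
```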
Implementation of In-Situ Impedance Techniques on a Full Scale Aero-Engine System
NASA Technical Reports Server (NTRS)
Gaeta, R. J.; Mendoza, J. M.; Jones, M. G.
2007-01-01
Determination of acoustic liner impedance for jet engine applications remains a challenge for the designer. Although suitable models have been developed that take account of source amplitude and the local flow environment experienced by the liner, experimental validation of these models has been difficult. This is primarily due to the inability of researchers to faithfully mimic the environment in jet engine nacelles in the laboratory. An in-situ measurement technique, one that can be implemented in an actual engine, is desirable so that an accurate impedance can be determined for future modeling and quality control. This paper documents the implementation of such a local acoustic impedance measurement technique, used under controlled laboratory conditions as well as on a full-scale turbine engine liner test article. The objective of this series of in-situ measurements is to substantiate treatment design, provide understanding of flow effects on installed liner performance, and provide modeling input for fan noise propagation computations. A series of acoustic liner evaluation tests was performed, including normal incidence tube, grazing incidence tube, and finally testing on a full-scale engine on a static test stand. Lab tests were intended to provide insight and guidance for accurately measuring the impedance of the liner housed in the inlet of a Honeywell Tech7000 turbofan. Results have shown that one can acquire very reasonable liner impedance data for a full-scale engine under realistic test conditions. Furthermore, higher-fidelity results can be obtained by using a three-microphone coherence technique that can enhance the signal-to-noise ratio at high engine power settings. This research has also confirmed the limitations of this particular type of in-situ measurement. This is most evident in the installation of instrumentation and its effect on what is being measured.
WEC-SIM Phase 1 Validation Testing -- Numerical Modeling of Experiments: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruehl, Kelley; Michelen, Carlos; Bosma, Bret
2016-08-01
The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation is necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing, and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.
NASA Astrophysics Data System (ADS)
Gilbert, Lisa A.; Salisbury, Matthew H.
2011-09-01
Drilling and logging of Integrated Ocean Drilling Program (IODP) Hole 1256D have provided a unique opportunity for systematically studying a fundamental problem in marine geophysics: What influences the seismic structure of oceanic crust, porosity or composition? Compressional wave velocities (Vp) logged in open hole or from regional refraction measurements integrate both the host rock and cracks in the crust. To determine the influence of cracks on Vp at several scales, we first need an accurate ground truth in the form of laboratory Vp on crack-free, or nearly crack-free, samples. We measured Vp on 46 water-saturated samples at in situ pressures to determine the baseline velocities of the host rock. These new results match or exceed Vp logs throughout most of the hole, especially in the lower dikes and gabbros, where porosities are low. In contrast, samples measured at sea under ambient laboratory conditions had consistently lower Vp than the Vp logs, even after correction to in situ pressures. Crack-free Vp calculated from simple models of logging and laboratory porosity data for different lithologies and facies suggests that crustal velocities in the lavas and upper dikes are controlled by porosity. In particular, the models demonstrate significant large-scale porosity in the lavas, especially in the sections identified as fractured flows and breccias. However, crustal velocities in the lower dikes and gabbros are increasingly controlled by petrology as the layer 2-3 boundary is approached.
A mathematical model was developed to predict changes in contaminant concentrations with time, and to estimate contaminant fluxes due to migration, diffusion, and convection in a laboratory-scale batch electrolysis cell for the regeneration of contaminated har...
Glass Bubbles Insulation for Liquid Hydrogen Storage Tanks
NASA Astrophysics Data System (ADS)
Sass, J. P.; Cyr, W. W. St.; Barrett, T. M.; Baumgartner, R. G.; Lott, J. W.; Fesmire, J. E.
2010-04-01
A full-scale field application of glass bubbles insulation has been demonstrated in a 218,000 L liquid hydrogen storage tank. This work is the evolution of extensive materials testing, laboratory scale testing, and system studies leading to the use of glass bubbles insulation as a cost efficient and high performance alternative in cryogenic storage tanks of any size. The tank utilized is part of a rocket propulsion test complex at the NASA Stennis Space Center and is a 1960's vintage spherical double wall tank with an evacuated annulus. The original perlite that was removed from the annulus was in pristine condition and showed no signs of deterioration or compaction. Test results show a significant reduction in liquid hydrogen boiloff when compared to recent baseline data prior to removal of the perlite insulation. The data also validates the previous laboratory scale testing (1000 L) and full-scale numerical modeling (3,200,000 L) of boiloff in spherical cryogenic storage tanks. The performance of the tank will continue to be monitored during operation of the tank over the coming years.
Glass Bubbles Insulation for Liquid Hydrogen Storage Tanks
NASA Technical Reports Server (NTRS)
Sass, J. P.; SaintCyr, W. W.; Barrett, T. M.; Baumgartner, R. G.; Lott, J. W.; Fesmire, J. E.
2009-01-01
A full-scale field application of glass bubbles insulation has been demonstrated in a 218,000 L liquid hydrogen storage tank. This work is the evolution of extensive materials testing, laboratory scale testing, and system studies leading to the use of glass bubbles insulation as a cost efficient and high performance alternative in cryogenic storage tanks of any size. The tank utilized is part of a rocket propulsion test complex at the NASA Stennis Space Center and is a 1960's vintage spherical double wall tank with an evacuated annulus. The original perlite that was removed from the annulus was in pristine condition and showed no signs of deterioration or compaction. Test results show a significant reduction in liquid hydrogen boiloff when compared to recent baseline data prior to removal of the perlite insulation. The data also validates the previous laboratory scale testing (1000 L) and full-scale numerical modeling (3,200,000 L) of boiloff in spherical cryogenic storage tanks. The performance of the tank will continue to be monitored during operation of the tank over the coming years. KEYWORDS: Glass bubble, perlite, insulation, liquid hydrogen, storage tank.
Project Fire Model: Summary Progress Report, Period November 1, 1958 to April 30, 1960
W.L. Fons; H.D. Bruce; W.Y. Pong; S.S. Richards
1960-01-01
This report summarizes progress from November 1, 1958, to April 30, 1960, in a study conducted by the Pacific Southwest Forest and Range Experiment Station of the Forest Service in cooperation with the Office of Civil and Defense Mobilization. Called PROJECT FIRE MODEL for convenience, the project sought to develop and study a laboratory-scale fire which would...
Linking the Grain Scale to Experimental Measurements and Other Scales
NASA Astrophysics Data System (ADS)
Vogler, Tracy
2017-06-01
A number of physical processes occur at the scale of grains that can have a profound influence on the behavior of materials under shock loading. Examples include inelastic deformation, pore collapse, fracture, friction, and internal wave reflections. In some cases, such as the initiation of energetics and brittle fracture, these processes can have first-order effects on the behavior of materials: the emergent behavior from the grain scale is the dominant one. In other cases, many aspects of the bulk behavior can be described by a continuum description, but some details of the behavior are missed by continuum descriptions. The multi-scale model paradigm envisions flow of information from smaller scales (atomic, dislocation, etc.) to the grain or mesoscale and then up to the continuum scale. A significant challenge in this approach is the need to validate each step. For the grain scale, diagnosing behavior is challenging because of the small spatial and temporal scales involved. Spatially resolved diagnostics have begun to shed light on these processes, and, more recently, advanced light sources have started to be used to probe behavior at the grain scale. In this talk, I will discuss some interesting phenomena that occur at the grain scale in shock loading, experimental approaches to probe the grain scale, and efforts to link the grain scale to smaller and larger scales. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE.
Multiscale properties of unconventional reservoir rocks
NASA Astrophysics Data System (ADS)
Woodruff, W. F.
A multidisciplinary study of unconventional reservoir rocks is presented, providing the theory, forward modeling and Bayesian inverse modeling approaches, and laboratory protocols to characterize clay-rich, low porosity and permeability shales and mudstones within an anisotropic framework. Several physical models characterizing oil and gas shales are developed across multiple length scales, ranging from microscale phenomena, e.g. the effect of the cation exchange capacity of reactive clay mineral surfaces on water adsorption isotherms, and the effects of infinitesimal porosity compaction on elastic and electrical properties, to meso-scale phenomena, e.g. the role of mineral foliations, tortuosity of conduction pathways and the effects of organic matter (kerogen and hydrocarbon fractions) on complex conductivity and their connections to intrinsic electrical anisotropy, as well as the macro-scale electrical and elastic properties including formulations for the complex conductivity tensor and undrained stiffness tensor within the context of effective stress and poroelasticity. Detailed laboratory protocols are described for sample preparation and measurement of these properties using spectral induced polarization (SIP) and ultrasonics for the anisotropic characterization of shales for both unjacketed samples under benchtop conditions and jacketed samples under differential loading. An ongoing study of the effects of kerogen maturation through hydrous pyrolysis on the complex conductivity is also provided in review. Experimental results are catalogued and presented for various unconventional formations in North America including the Haynesville, Bakken, and Woodford shales.
Jay, Kenneth; Friborg, Maria Kristine; Sjøgaard, Gisela; Jakobsen, Markus Due; Sundstrup, Emil; Brandt, Mikkel; Andersen, Lars Louis
2015-12-11
Musculoskeletal pain and stress-related disorders are leading causes of impaired work ability, sickness absences and disability pensions. However, knowledge about the combined detrimental effect of pain and stress on work ability is lacking. This study investigates the association between pain in the neck-shoulders, perceived stress, and work ability. In a cross-sectional survey at a large pharmaceutical company in Denmark 473 female laboratory technicians replied to questions about stress (Perceived Stress Scale), musculoskeletal pain intensity (scale 0-10) of the neck and shoulders, and work ability (Work Ability Index). General linear models tested the association between variables. In the multi-adjusted model, stress (p < 0.001) and pain (p < 0.001) had independent main effects on the work ability index score, and there was no significant stress by pain interaction (p = 0.32). Work ability decreased gradually with both increased stress and pain. Workers with low stress and low pain had the highest Work Ability Index score (44.6 (95% CI 43.9-45.3)) and workers with high stress and high pain had the lowest score (32.7 (95% CI 30.6-34.9)). This cross-sectional study indicates that increased stress and musculoskeletal pain are independently associated with lower work ability in female laboratory technicians.
De la Cruz, Florentino B; Barlaz, Morton A
2010-06-15
The current methane generation model used by the U.S. EPA (Landfill Gas Emissions Model) treats municipal solid waste (MSW) as a homogeneous waste with one decay rate. However, component-specific decay rates are required to evaluate the effects of changes in waste composition on methane generation. Laboratory-scale rate constants, k(lab), for the major biodegradable MSW components were used to derive field-scale decay rates (k(field)) for each waste component using the assumption that the average of the field-scale decay rates for each waste component, weighted by its composition, is equal to the bulk MSW decay rate. For an assumed bulk MSW decay rate of 0.04 yr(-1), k(field) was estimated to be 0.298, 0.171, 0.015, 0.144, 0.033, 0.02, 0.122, and 0.029 yr(-1), for grass, leaves, branches, food waste, newsprint, corrugated containers, coated paper, and office paper, respectively. The effect of landfill waste diversion programs on methane production was explored to illustrate the use of component-specific decay rates. One hundred percent diversion of yard waste and food waste reduced the year 20 methane production rate by 45%. When a landfill gas collection schedule was introduced, collectable methane was most influenced by food waste diversion at years 10 and 20 and paper diversion at year 40.
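The defining constraint of this approach, that the composition-weighted average of the component field-scale decay rates equals the bulk MSW rate, is easy to check numerically. In the sketch below the component rates are those quoted above, while the waste-composition fractions are hypothetical placeholders, since the study's composition is not given in the abstract.

```python
# Field-scale decay rates (1/yr) reported above for each degradable component.
k_field = {"grass": 0.298, "leaves": 0.171, "branches": 0.015, "food": 0.144,
           "newsprint": 0.033, "corrugated": 0.02, "coated_paper": 0.122,
           "office_paper": 0.029}

# Hypothetical wet-mass fractions of bulk MSW (remainder treated as
# non-degradable); with the study's actual composition this weighted sum is
# constructed to equal 0.04 1/yr, whereas these placeholders only come close.
frac = {"grass": 0.04, "leaves": 0.03, "branches": 0.02, "food": 0.14,
        "newsprint": 0.04, "corrugated": 0.10, "coated_paper": 0.03,
        "office_paper": 0.04}

k_bulk = sum(frac[c] * k_field[c] for c in k_field)
print(round(k_bulk, 3))   # composition-weighted bulk MSW decay rate, 1/yr
```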
EPOS Multi-Scale Laboratory platform: a long-term reference tool for experimental Earth Sciences
NASA Astrophysics Data System (ADS)
Trippanera, Daniele; Tesei, Telemaco; Funiciello, Francesca; Sagnotti, Leonardo; Scarlato, Piergiorgio; Rosenau, Matthias; Elger, Kirsten; Ulbricht, Damian; Lange, Otto; Calignano, Elisa; Spiers, Chris; Drury, Martin; Willingshofer, Ernst; Winkler, Aldo
2017-04-01
With continuous progress in scientific research, a large amount of data has been and will be produced. Data access and sharing, along with data storage and homogenization within a unique and coherent framework, is a new challenge for the whole scientific community. This is particularly emphasized for geo-scientific laboratories, which encompass the most diverse Earth Science disciplines and typologies of data. To this aim the "Multiscale Laboratories" Work Package (WP16), operating in the framework of the European Plate Observing System (EPOS), is developing a virtual platform of geo-scientific data and services for the worldwide community of laboratories. This long-term project aims at merging the top-class multidisciplinary laboratories in Geoscience into a coherent and collaborative network, facilitating the standardization of virtual access to data, data products and software. This will help our community to evolve beyond the stage in which most of the data produced by the different laboratories are available only within the related scholarly publications (often as print-version only) or remain unpublished and inaccessible on local devices. The EPOS multi-scale laboratory platform will provide the possibility to easily share and discover data by means of open-access, DOI-referenced, online data publication, including long-term storage, managing and curation services, and to set up a cohesive community of laboratories. WP16 is starting with three pilot-case laboratory types: (1) rock physics, (2) palaeomagnetic, and (3) analogue modelling. As a proof of concept, the first analogue modelling datasets have been published via GFZ Data Services (http://doidb.wdc-terra.org/search/public/ui?&sort=updated+desc&q=epos). The datasets include rock analogue material properties (e.g. friction data, rheology data, SEM imagery), as well as supplementary figures, images and movies from experiments on tectonic processes. A metadata catalogue tailored to the specific communities will link the growing number of datasets to a centralized EPOS hub. Acknowledging the fact that we are dealing with a variety of levels of maturity regarding available data infrastructures within the different labs, we have set up an architecture that provides different scenarios for participation. Thus, research groups which do not have access to localized repositories and catalogues for sustainable storage of data and metadata can rely on shared services within the Multi-scale Laboratories community. As an example of the usage of data retrieved through the community, an experimentalist deciding which material is suitable for an experimental setup can get "virtual lab access" to retrieve information about material parameters with minimum effort and may then decide to move to a specific laboratory equipped with the instruments needed. The currently participating and collaborating laboratories (Utrecht University, GFZ, Roma Tre University, INGV, NERC, CSIC-ICTJA, CNRS, LMU, UBI, ETH, CNR) warmly welcome everyone who is interested in participating in the development of this project.
Do swimming animals mix the ocean?
NASA Astrophysics Data System (ADS)
Dabiri, John
2013-11-01
Perhaps. The oceans are teeming with billions of swimming organisms, from bacteria to blue whales. Current research efforts in biological oceanography typically focus on the impact of the marine environment on the organisms within. We ask the opposite question: can organisms in the ocean, especially those that migrate vertically every day and regionally every year, change the physical structure of the water column? The answer has potentially important implications for ecological models at local scale and climate modeling at global scales. This talk will introduce the still-controversial prospect of biogenic ocean mixing, beginning with evidence from measurements in the field. More recent laboratory-scale experiments, in which we create controlled vertical migrations of plankton aggregations using laser signaling, provide initial clues toward a mechanism to achieve efficient mixing at scales larger than the individual organisms. These results are compared and contrasted with theoretical models, and they highlight promising avenues for future research in this area. Funding from the Office of Naval Research and the National Science Foundation is gratefully acknowledged.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.
The additivity model assumes that field-scale reaction properties in a sediment, including surface area, reactive site concentration, and reaction rate, can be predicted from the field-scale grain-size distribution by linearly adding reaction properties estimated in the laboratory for individual grain-size fractions. This study evaluated the additivity model in scaling mass transfer-limited, multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the rate constants for individual grain-size fractions, which were then used to predict rate-limited U(VI) desorption in the composite sediment. The result indicated that the additivity model with respect to the rate of U(VI) desorption provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The result showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel-size fraction (2 to 8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
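A minimal numerical illustration of the two scaling options discussed above is sketched below: fraction-wise first-order desorption curves are summed with mass-fraction weights (additivity applied to the rate), and this is contrasted with a single lumped rate constant obtained by weighting the fraction-wise rate constants directly (a naive stand-in, not necessarily the study's approximate model). The mass fractions and rate constants are hypothetical.

```python
import numpy as np

t = np.linspace(0.0, 200.0, 401)             # time, hours
f = np.array([0.35, 0.25, 0.25, 0.15])       # hypothetical grain-size mass fractions
k = np.array([0.12, 0.07, 0.03, 0.01])       # hypothetical lab-derived rate constants (1/h)
q0 = 1.0                                     # normalized initially sorbed U(VI)

# Additivity applied to the desorption *rate*: sum the fraction-wise
# first-order release curves, weighted by mass fraction.
q_additive = (f[:, None] * q0 * np.exp(-np.outer(k, t))).sum(axis=0)

# Naive lumped alternative: weight the rate constants themselves and use a
# single first-order curve.
k_lumped = float(np.dot(f, k))
q_lumped = q0 * np.exp(-k_lumped * t)

# The two curves diverge at late time, consistent with the finding that rate
# constants are not directly scalable even when the rates are additive.
print(q_additive[-1], q_lumped[-1])
```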
Simulation of Anisotropic Rock Damage for Geologic Fracturing
NASA Astrophysics Data System (ADS)
Busetti, S.; Xu, H.; Arson, C. F.
2014-12-01
A continuum damage model for differential stress-induced anisotropic crack formation and stiffness degradation is used to study geologic fracturing in rocks. The finite element-based model solves for deformation in the quasi-linear elastic domain and determines the six component damage tensor at each deformation increment. The model permits an isotropic or anisotropic intact or pre-damaged reference state, and the elasticity tensor evolves depending on the stress path. The damage variable, similar to Oda's fabric tensor, grows when the surface energy dissipated by three-dimensional opened cracks exceeds a threshold defined at the appropriate scale of the representative elementary volume (REV). At the laboratory or wellbore scale (<1m) brittle continuum damage reflects microcracking, grain boundary separation, grain crushing, or fine delamination, such as in shale. At outcrop (1m-100m), seismic (10m-1000m), and tectonic (>1000m) scales the damaged REV reflects early natural fracturing (background or tectonic fracturing) or shear strain localization (fault process zone, fault-tip damage, etc.). The numerical model was recently benchmarked against triaxial stress-strain data from laboratory rock mechanics tests. However, the utility of the model to predict geologic fabric such as natural fracturing in hydrocarbon reservoirs was not fully explored. To test the ability of the model to predict geological fracturing, finite element simulations (Abaqus) of common geologic scenarios with known fracture patterns (borehole pressurization, folding, faulting) are simulated and the modeled damage tensor is compared against physical fracture observations. Simulated damage anisotropy is similar to that derived using fractured rock-mass upscaling techniques for pre-determined fracture patterns. This suggests that if model parameters are constrained with local data (e.g., lab, wellbore, or reservoir domain), forward modeling could be used to predict mechanical fabric at the relevant REV scale. This reference fabric also can be used as the starting material property to pre-condition subsequent deformation or fluid flow. Continuing efforts are to expand the present damage model to couple damage evolution with plasticity and with permeability for more geologically realistic simulation.
NASA Astrophysics Data System (ADS)
Bader, D. C.
2015-12-01
The Accelerated Climate Modeling for Energy (ACME) Project is concluding its first year. Supported by the Office of Science in the U.S. Department of Energy (DOE), its vision is to be "an ongoing, state-of-the-science Earth system modeling, simulation and prediction project that optimizes the use of DOE laboratory resources to meet the science needs of the nation and the mission needs of DOE." Included in the "laboratory resources" is a large investment in computational, network and information technologies that will be utilized both to build better and more accurate climate models and to broadly disseminate the data they generate. Current model diagnostic analysis and data dissemination technologies will not scale to the size of the simulations and the complexity of the models envisioned by ACME and other top-tier international modeling centers. In this talk, the ACME Workflow component's plans to meet these future needs will be described and early implementation examples will be highlighted.
A Mass Diffusion Model for Dry Snow Utilizing a Fabric Tensor to Characterize Anisotropy
NASA Astrophysics Data System (ADS)
Shertzer, Richard H.; Adams, Edward E.
2018-03-01
A homogenization algorithm for randomly distributed microstructures is applied to develop a mass diffusion model for dry snow. Homogenization is a multiscale approach linking constituent behavior at the microscopic level—among ice and air—to the macroscopic material—snow. Principles of continuum mechanics at the microscopic scale describe water vapor diffusion across an ice grain's surface to the air-filled pore space. Volume averaging and a localization assumption scale up and down, respectively, between microscopic and macroscopic scales. The model yields a mass diffusivity expression at the macroscopic scale that is, in general, a second-order tensor parameterized by both bulk and microstructural variables. The model predicts a mass diffusivity of water vapor through snow that is less than that through air. Mass diffusivity is expected to decrease linearly with ice volume fraction. Potential anisotropy in snow's mass diffusivity is captured due to the tensor representation. The tensor is built from directional data assigned to specific, idealized microstructural features. Such anisotropy has been observed in the field and laboratories in snow morphologies of interest such as weak layers of depth hoar and near-surface facets.
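To illustrate the qualitative behaviour described above, the sketch below builds a macroscopic diffusivity tensor whose magnitude decreases linearly with ice volume fraction and whose directionality follows a normalized second-order fabric tensor. The functional form, the fabric examples, and the normalization are illustrative assumptions, not the paper's closed-form expression.

```python
import numpy as np

D_AIR = 2.2e-5   # water-vapor diffusivity in air, m^2/s (approximate value)

def snow_diffusivity(phi_ice, fabric):
    """Illustrative macroscopic diffusivity tensor: magnitude decreases
    linearly with ice volume fraction; anisotropy follows a normalized
    second-order fabric tensor (trace scaled to 3, so identity = isotropic)."""
    fabric = np.asarray(fabric, dtype=float)
    fabric = 3.0 * fabric / np.trace(fabric)
    return D_AIR * (1.0 - phi_ice) * fabric

# Isotropic snow vs. a vertically elongated (depth-hoar-like) fabric.
iso = np.eye(3)
aniso = np.diag([0.8, 0.8, 1.4])
print(np.diag(snow_diffusivity(0.3, iso)))
print(np.diag(snow_diffusivity(0.3, aniso)))
```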
Granular activated carbon adsorption of MIB in the presence of dissolved organic matter.
Summers, R Scott; Kim, Soo Myung; Shimabuku, Kyle; Chae, Seon-Ha; Corwin, Christopher J
2013-06-15
Based on the results of over twenty laboratory granular activated carbon (GAC) column runs, models were developed and utilized for the prediction of 2-methylisoborneol (MIB) breakthrough behavior at parts-per-trillion levels and verified with pilot-scale data. The influent MIB concentration was found not to impact the concentration-normalized breakthrough. Increasing the influent background dissolved organic matter (DOM) concentration was found to systematically decrease the GAC adsorption capacity for MIB. A series of empirical models was developed that related the throughput in bed volumes for a range of MIB breakthrough targets to the influent DOM concentration. The proportional diffusivity (PD) designed rapid small-scale column test (RSSCT) could be used directly to scale up MIB breakthrough performance below 15% breakthrough. The empirical model predicting the throughput to 50% breakthrough from the influent DOM concentration served as input to the pore diffusion model (PDM) and predicted the MIB breakthrough performance below 50% breakthrough well. The PDM predictions of throughput to 10% breakthrough closely matched the PD-RSSCT and pilot-scale 10% MIB breakthrough. Copyright © 2013 Elsevier Ltd. All rights reserved.
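As a sketch of how such an empirical throughput model can be built and then fed to a pore diffusion model, the snippet below fits a relation between influent DOC and bed volumes to 50% MIB breakthrough. The calibration pairs and the power-law functional form are assumptions for illustration; the study's fitted coefficients and exact model form are not reproduced.

```python
import numpy as np

# Hypothetical calibration pairs: influent DOM (mg/L as DOC) vs. throughput in
# bed volumes to 50 % MIB breakthrough.
doc = np.array([1.5, 2.5, 3.5, 5.0])
bv50 = np.array([60000.0, 38000.0, 27000.0, 19000.0])

# One plausible empirical form: BV50 = a * DOC**b with b < 0, fitted in
# log-log space (the paper's actual functional form may differ).
b, log_a = np.polyfit(np.log(doc), np.log(bv50), 1)
a = np.exp(log_a)

def bv_to_50pct_breakthrough(doc_mg_l):
    """Predicted throughput (bed volumes) to 50 % MIB breakthrough; this value
    would then serve as input to a pore diffusion model run."""
    return a * doc_mg_l ** b

print(round(bv_to_50pct_breakthrough(3.0)))
```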
DOT National Transportation Integrated Search
2017-03-01
A number of full-scale tests have been carried out in the laboratory focused on the shear : performance of simulated precast concrete deck panels (PCP). Shear tests were carried out to : simulate the type of loading that will be applied to the deck p...
DOT National Transportation Integrated Search
1980-04-01
In the report, procedures to reduce the propulsion system noise of urban rail transit vehicles on elevated structures are studied. Experiments in a laboratory use a scale model transit vehicle to evaluate the acoustical effectiveness of noise barrier...
Simple Statistical Model to Quantify Maximum Expected EMC in Spacecraft and Avionics Boxes
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Bremner, Paul
2014-01-01
This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and by laboratory testing. Test and model data correlation is shown. In addition, this presentation shows application of the power balance method and extension of this method to predict the variance and maximum expected mean of the E-field data. This is valuable for large-scale evaluations of transmission inside cavities.
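A basic ingredient of such a comparison is the empirical CDF of the field samples from each source. The sketch below builds empirical CDFs for two hypothetical sample sets and reports their maximum vertical separation as a simple agreement metric; the exponential draws merely stand in for squared field magnitudes (which power-balance/reverberation arguments suggest are approximately exponentially distributed in a well-stirred cavity), and real test and model data would be substituted.

```python
import numpy as np

def ecdf(samples, grid):
    """Empirical CDF of `samples` evaluated on `grid`."""
    x = np.sort(np.asarray(samples, dtype=float))
    return np.searchsorted(x, grid, side="right") / x.size

# Hypothetical |E|^2 samples from test points and from a 3-D full-wave model
# of the fairing cavity; real data sets would be substituted here.
rng = np.random.default_rng(0)
test_data = rng.exponential(scale=1.0, size=200)
model_data = rng.exponential(scale=1.1, size=200)

grid = np.union1d(test_data, model_data)
gap = np.max(np.abs(ecdf(test_data, grid) - ecdf(model_data, grid)))
print(f"max CDF separation (KS-type statistic): {gap:.3f}")
```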
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calderer, Antoni; Yang, Xiaolei; Angelidis, Dionysios
2015-10-30
The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.
Johnson, Benjamin N.; Ashe, Melinda L.; Wilson, Stephen J.
2017-01-01
Borderline personality disorder (BPD) and alcohol use disorder (AUD) share impulsivity as an etiological factor. However, impulsivity is ill-defined, often overlapping with self-control capacity. This study attempts to disentangle these constructs and their associations with alcohol use and BPD. Undergraduates (N = 192) completed the Five Factor Model Rating Form, which generated two dimensional scales of BPD, the Self-Control Scale, the UPPS-P (self-reported impulsivity), and the Stop-signal and delay discounting tasks (laboratory-measured impulsivity). Self-control appeared as a major predictor of BPD features and drinking, explaining as much or more variance in outcome than impulsivity. Co-occurrence of elevated BPD features and problem drinking was also best explained by self-control. Laboratory measures of impulsivity were not correlated with BPD scales or alcohol use. Self-regulatory capacity may be an important but overlooked factor in BPD and alcohol use and should be considered alongside impulsivity in future research. PMID:27064849
Stegen, James C
2018-01-01
To improve predictions of ecosystem function in future environments, we need to integrate the ecological and environmental histories experienced by microbial communities with hydrobiogeochemistry across scales. A key issue is whether we can derive generalizable scaling relationships that describe this multiscale integration. There is a strong foundation for addressing these challenges. We have the ability to infer ecological history with null models and reveal impacts of environmental history through laboratory and field experimentation. Recent developments also provide opportunities to inform ecosystem models with targeted omics data. A major next step is coupling knowledge derived from such studies with multiscale modeling frameworks that are predictive under non-steady-state conditions. This is particularly true for systems spanning dynamic interfaces, which are often hot spots of hydrobiogeochemical function. We can advance predictive capabilities through a holistic perspective focused on the nexus of history, ecology, and hydrobiogeochemistry.
Interval Analysis Approach to Prototype the Robust Control of the Laboratory Overhead Crane
NASA Astrophysics Data System (ADS)
Smoczek, J.; Szpytko, J.; Hyla, P.
2014-07-01
The paper describes the software-hardware equipment and control-measurement solutions elaborated to prototype the laboratory-scale overhead crane control system. A novel approach to crane dynamic system modelling and fuzzy robust control scheme design is presented. The iterative procedure for designing a fuzzy scheduling control scheme is developed based on interval analysis of the discrete-time closed-loop system characteristic polynomial coefficients in the presence of rope length and payload mass variation, to select the minimum set of operating points corresponding to the midpoints of membership functions at which the linear controllers are determined through desired pole assignment. The experimental results obtained on the laboratory stand are presented.
Seismic and geodetic signatures of fault slip at the Slumgullion Landslide Natural Laboratory
Gomberg, J.; Schulz, W.; Bodin, P.; Kean, J.
2011-01-01
We tested the hypothesis that the Slumgullion landslide is a useful natural laboratory for observing fault slip, specifically that slip along its basal surface and side-bounding strike-slip faults occurs with comparable richness of aseismic and seismic modes as along crustal- and plate-scale boundaries. Our study provides new constraints on models governing landslide motion. We monitored landslide deformation with temporary deployments of a 29-element prism array surveyed by a robotic theodolite and an 88-station seismic network that complemented permanent extensometers and environmental instrumentation. Aseismic deformation observations show that large blocks of the landslide move steadily at approximately centimeters per day, possibly punctuated by variations of a few millimeters, while localized transient slip episodes of blocks less than a few tens of meters across occur frequently. We recorded a rich variety of seismic signals, nearly all of which originated outside the monitoring network boundaries or from the side-bounding strike-slip faults. The landslide basal surface beneath our seismic network likely slipped almost completely aseismically. Our results provide independent corroboration of previous inferences that dilatant strengthening along sections of the side-bounding strike-slip faults controls the overall landslide motion, acting as seismically radiating brakes that limit acceleration of the aseismically slipping basal surface. Dilatant strengthening has also been invoked in recent models of transient slip and tremor sources along crustal- and plate-scale faults suggesting that the landslide may indeed be a useful natural laboratory for testing predictions of specific mechanisms that control fault slip at all scales.
Magliocca, Nicholas R; Brown, Daniel G; Ellis, Erle C
2014-01-01
Local changes in land use result from the decisions and actions of land-users within land systems, which are structured by local and global environmental, economic, political, and cultural contexts. Such cross-scale causation presents a major challenge for developing a general understanding of how local decision-making shapes land-use changes at the global scale. This paper implements a generalized agent-based model (ABM) as a virtual laboratory to explore how global and local processes influence the land-use and livelihood decisions of local land-users, operationalized as settlement-level agents, across the landscapes of six real-world test sites. Test sites were chosen in USA, Laos, and China to capture globally-significant variation in population density, market influence, and environmental conditions, with land systems ranging from swidden to commercial agriculture. Publicly available global data were integrated into the ABM to model cross-scale effects of economic globalization on local land-use decisions. A suite of statistics was developed to assess the accuracy of model-predicted land-use outcomes relative to observed and random (i.e. null model) landscapes. At four of six sites, where environmental and demographic forces were important constraints on land-use choices, modeled land-use outcomes were more similar to those observed across sites than the null model. At the two sites in which market forces significantly influenced land-use and livelihood decisions, the model was a poorer predictor of land-use outcomes than the null model. Model successes and failures in simulating real-world land-use patterns enabled the testing of hypotheses on land-use decision-making and yielded insights on the importance of missing mechanisms. The virtual laboratory approach provides a practical framework for systematic improvement of both theory and predictive skill in land change science based on a continual process of experimentation and model enhancement.
NASA Astrophysics Data System (ADS)
Guenet, Bertrand; Esteban Moyano, Fernando; Peylin, Philippe; Ciais, Philippe; Janssens, Ivan A.
2016-03-01
Priming of soil carbon decomposition encompasses different processes through which the decomposition of native (already present) soil organic matter is amplified through the addition of new organic matter, with new inputs typically being more labile than the native soil organic matter. Evidence for priming comes from laboratory and field experiments, but to date there is no estimate of its impact at global scale and under the current anthropogenic perturbation of the carbon cycle. Current soil carbon decomposition models do not include priming mechanisms, thereby introducing uncertainty when extrapolating short-term local observations to ecosystem and regional to global scale. In this study we present a simple conceptual model of decomposition priming, called PRIM, able to reproduce laboratory (incubation) and field (litter manipulation) priming experiments. Parameters for this model were first optimized against data from 20 soil incubation experiments using a Bayesian framework. The optimized parameter values were evaluated against another set of soil incubation data independent from the ones used for calibration and the PRIM model reproduced the soil incubations data better than the original, CENTURY-type soil decomposition model, whose decomposition equations are based only on first-order kinetics. We then compared the PRIM model and the standard first-order decay model incorporated into the global land biosphere model ORCHIDEE (Organising Carbon and Hydrology In Dynamic Ecosystems). A test of both models was performed at ecosystem scale using litter manipulation experiments from five sites. Although both versions were equally able to reproduce observed decay rates of litter, only ORCHIDEE-PRIM could simulate the observed priming (R2 = 0.54) in cases where litter was added or removed. This result suggests that a conceptually simple and numerically tractable representation of priming adapted to global models is able to capture the sign and magnitude of the priming of litter and soil organic matter.
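The priming idea above can be illustrated with a decomposition term in which the decay rate of native soil organic carbon increases with the amount of fresh organic carbon. The exponential saturation form and all parameter values below are assumptions chosen for illustration, not the calibrated PRIM parameters.

```python
import numpy as np

def soc_decay_rate(soc, foc, k=0.002, c=0.05):
    """Priming-type decomposition: the decay of native soil organic carbon (soc)
    is amplified by fresh organic carbon (foc) and saturates at rate k.
    Units are arbitrary; k and c are illustrative values only."""
    return k * soc * (1.0 - np.exp(-c * foc))

soc = 5000.0                      # native soil organic carbon stock
for foc in (0.0, 10.0, 50.0, 200.0):
    print(f"fresh C = {foc:6.1f} -> SOC loss rate = {soc_decay_rate(soc, foc):.3f}")
```

In contrast to a first-order model, whose decay rate is independent of fresh inputs, a term of this form responds to litter addition or removal, which is the behaviour the litter manipulation experiments test.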
NASA Astrophysics Data System (ADS)
Guenet, B.; Moyano, F. E.; Peylin, P.; Ciais, P.; Janssens, I. A.
2015-10-01
Priming of soil carbon decomposition encompasses different processes through which the decomposition of native (already present) soil organic matter is amplified through the addition of new organic matter, with new inputs typically being more labile than the native soil organic matter. Evidence for priming comes from laboratory and field experiments, but to date there is no estimate of its impact at global scale and under the current anthropogenic perturbation of the carbon cycle. Current soil carbon decomposition models do not include priming mechanisms, thereby introducing uncertainty when extrapolating short-term local observations to ecosystem and regional to global scale. In this study we present a simple conceptual model of decomposition priming, called PRIM, able to reproduce laboratory (incubation) and field (litter manipulation) priming experiments. Parameters for this model were first optimized against data from 20 soil incubation experiments using a Bayesian framework. The optimized parameter values were evaluated against another set of soil incubation data independent from the ones used for calibration and the PRIM model reproduced the soil incubations data better than the original, CENTURY-type soil decomposition model, whose decomposition equations are based only on first order kinetics. We then compared the PRIM model and the standard first order decay model incorporated into the global land biosphere model ORCHIDEE. A test of both models was performed at ecosystem scale using litter manipulation experiments from 5 sites. Although both versions were equally able to reproduce observed decay rates of litter, only ORCHIDEE-PRIM could simulate the observed priming (R2 = 0.54) in cases where litter was added or removed. This result suggests that a conceptually simple and numerically tractable representation of priming adapted to global models is able to capture the sign and magnitude of the priming of litter and soil organic matter.
Magliocca, Nicholas R.; Brown, Daniel G.; Ellis, Erle C.
2014-01-01
Local changes in land use result from the decisions and actions of land-users within land systems, which are structured by local and global environmental, economic, political, and cultural contexts. Such cross-scale causation presents a major challenge for developing a general understanding of how local decision-making shapes land-use changes at the global scale. This paper implements a generalized agent-based model (ABM) as a virtual laboratory to explore how global and local processes influence the land-use and livelihood decisions of local land-users, operationalized as settlement-level agents, across the landscapes of six real-world test sites. Test sites were chosen in USA, Laos, and China to capture globally-significant variation in population density, market influence, and environmental conditions, with land systems ranging from swidden to commercial agriculture. Publicly available global data were integrated into the ABM to model cross-scale effects of economic globalization on local land-use decisions. A suite of statistics was developed to assess the accuracy of model-predicted land-use outcomes relative to observed and random (i.e. null model) landscapes. At four of six sites, where environmental and demographic forces were important constraints on land-use choices, modeled land-use outcomes were more similar to those observed across sites than the null model. At the two sites in which market forces significantly influenced land-use and livelihood decisions, the model was a poorer predictor of land-use outcomes than the null model. Model successes and failures in simulating real-world land-use patterns enabled the testing of hypotheses on land-use decision-making and yielded insights on the importance of missing mechanisms. The virtual laboratory approach provides a practical framework for systematic improvement of both theory and predictive skill in land change science based on a continual process of experimentation and model enhancement. PMID:24489696
Photometry of icy satellites: How important is multiple scattering in diluting shadows?
NASA Technical Reports Server (NTRS)
Buratti, B.; Veverka, J.
1984-01-01
Voyager observations have shown that the photometric properties of icy satellites are influenced significantly by large-scale roughness elements on their surfaces. Although recent progress has been made in treating the photometric effects of macroscopic roughness, even the most complete models do not fully account for the effects of multiple scattering. Multiple scattering dilutes shadows caused by large-scale features, yet for any specific model it is difficult to calculate the amount of dilution as a function of albedo. Accordingly, laboratory measurements were undertaken using the Cornell Goniometer to evaluate the magnitude of the effect.
Identification of possible non-stationary effects in a new type of vortex furnace
NASA Astrophysics Data System (ADS)
Shadrin, Evgeniy Yu.; Anufriev, Igor S.; Papulov, Anatoly P.
2017-10-01
The article presents the results of an experimental study of pressure and velocity pulsations in a model of an improved vortex furnace with distributed air supply and vertically oriented secondary-blast nozzles. The aerodynamic characteristics of the swirling flow under different regime parameters were investigated in an isothermal laboratory model (1:25 scale) of the vortex furnace using a laser Doppler measuring system and a pressure pulsation analyzer. The results reveal a number of features of the flow structure, and the spectral analysis of the pressure and velocity pulsations indicates the absence of large-scale unsteady vortical structures in the studied design.
Posttest Analyses of the Steel Containment Vessel Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costello, J.F.; Hessheimer, M.F.; Ludwigsen, J.S.
A high pressure test of a scale model of a steel containment vessel (SCV) was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scale model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. This test is part of a program to investigate the response of representative models of nuclear containment structures to pressure loads beyond the design basis accident. The posttest analyses of this test focused on three areas where the pretest analysis effort did not adequately predict the model behavior during the test. These areas are the onset of global yielding, the strain concentrations around the equipment hatch, and the strain concentrations that led to a small tear near a weld relief opening that was not modeled in the pretest analysis.
WRF/CMAQ AQMEII3 Simulations of US Regional-Scale ...
Chemical boundary conditions are a key input to regional-scale photochemical models. In this study, performed during the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3), we perform annual simulations over North America with chemical boundary conditions prepared from four different global models. Results indicate that the impacts of different boundary conditions are significant for ozone throughout the year and most pronounced outside the summer season. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
Next-generation genome-scale models for metabolic engineering.
King, Zachary A; Lloyd, Colton J; Feist, Adam M; Palsson, Bernhard O
2015-12-01
Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict optimal genetic modifications that improve the rate and yield of chemical production. A new generation of COBRA models and methods is now being developed, encompassing many biological processes and simulation strategies, and these next-generation models enable new types of predictions. Here, three key examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering. Copyright © 2014 Elsevier Ltd. All rights reserved.
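As a hedged illustration of the constraint-based workflow described above, the sketch below uses the open-source cobrapy package to load a genome-scale model, constrain a substrate uptake rate, and run flux balance analysis. The SBML file name and the reaction identifiers are assumptions that depend on the model used; this is not the authors' pipeline.

```python
# A minimal COBRA / flux balance analysis sketch using cobrapy (pip install cobra).
from cobra.io import read_sbml_model

model = read_sbml_model("e_coli_core.xml")           # assumed local SBML file

# Constrain glucose uptake (exchange reaction id assumed for this model).
model.reactions.get_by_id("EX_glc__D_e").lower_bound = -10.0   # mmol/gDW/h

solution = model.optimize()                           # maximize the biomass objective
print("growth rate:", solution.objective_value)

# Simple in silico knockout to probe a candidate genetic modification;
# the context manager reverts the change afterwards.
with model:
    model.reactions.get_by_id("PGI").knock_out()      # reaction id assumed
    print("growth after PGI knockout:", model.optimize().objective_value)
```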
NASA Astrophysics Data System (ADS)
Javaherchi, Teymour; Stelzenmuller, Nick; Seydel, Joseph; Aliseda, Alberto
2013-11-01
We investigate, through a combination of scale model experiments and numerical simulations, the evolution of the flow field around the rotor and in the wake of Marine Hydrokinetic (MHK) turbines. Understanding the dynamics of this flow field is the key to optimizing the energy conversion of single devices and the arrangement of turbines in commercially viable arrays. This work presents a comparison between numerical and experimental results from two different case studies of scaled horizontal axis MHK turbines (45:1 scale). In the first case study, we investigate the effect of Reynolds number (Re = 40,000 to 100,000) and Tip Speed Ratio (TSR = 5 to 12) variation on the performance and wake structure of a single turbine. In the second case, we study the effect of the turbine downstream spacing (5d to 14d) on the performance and wake development in a coaxial configuration of two turbines. These results provide insights into the dynamics of Horizontal Axis Hydrokinetic Turbines, and by extension to Horizontal Axis Wind Turbines in close proximity to each other, and highlight the capabilities and limitations of the numerical models. Once validated at laboratory scale, the numerical model can be used to address other aspects of MHK turbines at full scale. Supported by DOE through the National Northwest Marine Renewable Energy Center.
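A short worked calculation of the tip speed ratio and a chord-based Reynolds number for a laboratory-scale rotor, consistent in spirit with the ranges quoted in the abstract above. The rotor diameter, blade chord, inflow speed, and rotation rate below are illustrative assumptions, not the actual experimental values.

```python
import math

# Illustrative numbers only, chosen to land near the ranges quoted above.
nu_water = 1.0e-6        # kinematic viscosity of water [m^2/s]
D = 0.45                 # model rotor diameter [m]
c_tip = 0.025            # blade chord near the tip [m]
U = 0.8                  # inflow speed [m/s]
rpm = 170.0              # rotor speed [rev/min]

omega = 2.0 * math.pi * rpm / 60.0
tip_speed = omega * D / 2.0
tsr = tip_speed / U                                  # tip speed ratio
w_tip = math.hypot(U, tip_speed)                     # relative flow speed at the tip
re_chord = w_tip * c_tip / nu_water                  # chord-based Reynolds number

print(f"TSR      = {tsr:.1f}")
print(f"Re_chord = {re_chord:.2e}")
```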
A comparison of refuse attenuation in laboratory and field scale lysimeters.
Youcai, Zhao; Luochun, Wang; Renhua, Hua; Dimin, Xu; Guowei, Gu
2002-01-01
For this study, small and middle scale laboratory lysimeters and a large scale field lysimeter in situ at the Shanghai Refuse Landfill, with refuse weights of 187, 600, and 10,800,000 kg, respectively, were created. These lysimeters are compared in terms of leachate quality (pH, concentrations of COD, BOD and NH3-N), refuse composition (biodegradable matter and volatile solid) and surface settlement over a monitoring period of 0-300 days. The objectives of this study were to explore both the similarities and disparities between laboratory and field scale lysimeters, and to compare degradation behaviors of refuse at the intensive reaction phase in the different scale lysimeters. Quantitative relationships of leachate quality and refuse composition with placement time show that degradation behaviors of refuse seem to depend heavily on the scale of the lysimeters and the parameters of concern, especially in the starting period of 0-6 months. However, some similarities exist between laboratory and field lysimeters after 4-6 months of placement, because COD and BOD concentrations in leachate in the field lysimeter decrease regularly in a pattern parallel to those in the laboratory lysimeters. NH3-N, volatile solid (VS) and biodegradable matter (BDM) also gradually decrease in parallel in this intensive reaction phase for all scale lysimeters as the refuse ages. Although the specific values differ among the different scale lysimeters, laboratory lysimeters of sufficient scale may be considered broadly applicable for a rough simulation of a real landfill, especially for illustrating the degradation pattern and mechanism. Settlement of the refuse surface is roughly proportional to the initial refuse height.
Whelan, Jessica; Craven, Stephen; Glennon, Brian
2012-01-01
In this study, the application of Raman spectroscopy to the simultaneous quantitative determination of glucose, glutamine, lactate, ammonia, glutamate, total cell density (TCD), and viable cell density (VCD) in a CHO fed-batch process was demonstrated in situ in 3 L and 15 L bioreactors. Spectral preprocessing and partial least squares (PLS) regression were used to correlate spectral data with off-line reference data. Separate PLS calibration models were developed for each analyte at the 3 L laboratory bioreactor scale before assessing its transferability to the same bioprocess conducted at the 15 L pilot scale. PLS calibration models were successfully developed for all analytes bar VCD and transferred to the 15 L scale. Copyright © 2012 American Institute of Chemical Engineers (AIChE).
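A hedged sketch of the calibration step described above, using scikit-learn's PLSRegression to relate (synthetic) spectra to an off-line reference analyte. The spectral dimensions, number of latent variables, and data are placeholders rather than the authors' calibration models.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Synthetic stand-in for preprocessed Raman spectra (100 samples x 500 channels)
# and an off-line reference measurement (e.g. glucose concentration).
X = rng.normal(size=(100, 500))
true_loadings = rng.normal(size=500)
y = X @ true_loadings + rng.normal(scale=0.5, size=100)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=5)      # number of latent variables is a modelling choice
pls.fit(X_cal, y_cal)
print("validation R^2:", r2_score(y_val, pls.predict(X_val).ravel()))
```

In practice, a model calibrated at one scale would then be applied to spectra collected at the other scale to test transferability, as the abstract describes for the 3 L and 15 L bioreactors.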
NASA Technical Reports Server (NTRS)
Adams, J. B.; Smith, M. O.; Johnson, P. E.
1986-01-01
A Viking Lander 1 image was modeled as mixtures of reflectance spectra of palagonite dust, gray andesitelike rock, and a coarse rocklike soil. The rocks are covered to varying degrees by dust but otherwise appear unweathered. Rocklike soil occurs as lag deposits in deflation zones around stones and on top of a drift and as a layer in a trench dug by the lander. This soil probably is derived from the rocks by wind abrasion and/or spallation. Dust is the major component of the soil and covers most of the surface. The dust is unrelated spectrally to the rock but is equivalent to the global-scale dust observed telescopically. A new method was developed to model a multispectral image as mixtures of end-member spectra and to compare image spectra directly with laboratory reference spectra. The method for the first time uses shade and secondary illumination effects as spectral end-members; thus the effects of topography and illumination on all scales can be isolated or removed. The image was calibrated absolutely from the laboratory spectra, in close agreement with direct calibrations. The method has broad applications to interpreting multispectral images, including satellite images.
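A minimal sketch of the linear spectral mixture modeling described above: a measured spectrum is expressed as a non-negative combination of end-member spectra, including a "shade" end-member, using non-negative least squares. The end-member spectra here are synthetic placeholders, not the Viking Lander or laboratory spectra.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_bands = 6

# Columns = end-member spectra: dust, rock, rock-like soil, and shade (near-zero
# reflectance is a common idealization of the shade end-member).
dust, rock, soil = rng.uniform(0.1, 0.6, size=(3, n_bands))
shade = np.full(n_bands, 1e-3)
E = np.column_stack([dust, rock, soil, shade])

# A "measured" pixel spectrum: 60% dust, 25% rock, 10% soil, 5% shade, plus noise.
true_f = np.array([0.60, 0.25, 0.10, 0.05])
pixel = E @ true_f + rng.normal(scale=0.005, size=n_bands)

fractions, residual = nnls(E, pixel)
fractions /= fractions.sum()            # normalize to unit sum for interpretation
print("estimated fractions:", np.round(fractions, 3))
```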
Petterson, S R; Stenström, T A
2015-09-01
To support the implementation of quantitative microbial risk assessment (QMRA) for managing infectious risks associated with drinking water systems, a simple modeling approach for quantifying Log10 reduction across a free chlorine disinfection contactor was developed. The study was undertaken in three stages: firstly, review of the laboratory studies published in the literature; secondly, development of a conceptual approach to apply the laboratory studies to full-scale conditions; and finally implementation of the calculations for a hypothetical case study system. The developed model explicitly accounted for variability in residence time and pathogen specific chlorine sensitivity. Survival functions were constructed for a range of pathogens relying on the upper bound of the reported data transformed to a common metric. The application of the model within a hypothetical case study demonstrated the importance of accounting for variable residence time in QMRA. While the overall Log10 reduction may appear high, small parcels of water with short residence time can compromise the overall performance of the barrier. While theoretically simple, the approach presented is of great value for undertaking an initial assessment of a full-scale disinfection contactor based on limited site-specific information.
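A hedged sketch of the modeling approach described above: Chick-Watson-type log inactivation combined with a distribution of residence times, so that short-circuiting parcels of water limit the net Log10 reduction. The rate constant, chlorine residual, and residence time distribution are illustrative assumptions, not the study's pathogen-specific survival functions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Chick-Watson style inactivation: log10 reduction = k * C * t.
k = 0.3            # assumed lethality coefficient [L/(mg.min)] for one pathogen
C = 0.5            # free chlorine residual [mg/L]

# Variable residence time across water parcels (log-normal, median ~30 min).
t = rng.lognormal(mean=np.log(30.0), sigma=0.6, size=100_000)   # [min]

log10_reduction = k * C * t
surviving_fraction = 10.0 ** (-log10_reduction)

# Net barrier performance is set by the mean surviving fraction, not the mean LR.
net_lr = -np.log10(surviving_fraction.mean())
print(f"mean parcel Log10 reduction  : {log10_reduction.mean():.1f}")
print(f"net contactor Log10 reduction: {net_lr:.1f}")
```

The mean parcel reduction overstates the barrier's performance because a small fraction of fast-moving parcels dominates the surviving concentration, which is the behaviour the abstract highlights.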
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Lake, L.W.; Sepehrnoori, K.
1988-11-01
The objective of this research is to develop, validate, and apply a comprehensive chemical flooding simulator for chemical recovery processes involving surfactants, polymers, and alkaline chemicals in various combinations. This integrated program includes components of laboratory experiments, physical property modelling, scale-up theory, and numerical analysis as necessary and integral components of the simulation activity. Development, testing, and application of the flooding simulator (UTCHEM) to a wide variety of laboratory and reservoir problems involving tracers, polymers, polymer gels, surfactants, and alkaline agents has continued. Improvements in both the physical-chemical and numerical aspects of UTCHEM have been made which enhance its versatility, accuracy, and speed. Supporting experimental studies during the past year include relative permeability and trapping of microemulsion, tracer flow studies, oil recovery in cores using alcohol-free surfactant slugs, and microemulsion viscosity measurements. These have enabled model improvement and simulator testing. Another code called PROPACK has also been developed which is used as a preprocessor for UTCHEM. Specifically, it is used to evaluate input to UTCHEM by computing and plotting key physical properties such as phase behavior and interfacial tension.
Harnessing Big Data to Represent 30-meter Spatial Heterogeneity in Earth System Models
NASA Astrophysics Data System (ADS)
Chaney, N.; Shevliakova, E.; Malyshev, S.; Van Huijgevoort, M.; Milly, C.; Sulman, B. N.
2016-12-01
Terrestrial land surface processes play a critical role in the Earth system; they have a profound impact on the global climate, food and energy production, freshwater resources, and biodiversity. One of the most fascinating yet challenging aspects of characterizing terrestrial ecosystems is their field-scale (˜30 m) spatial heterogeneity. It has been observed repeatedly that the water, energy, and biogeochemical cycles at multiple temporal and spatial scales have deep ties to an ecosystem's spatial structure. Current Earth system models largely disregard this important relationship leading to an inadequate representation of ecosystem dynamics. In this presentation, we will show how existing global environmental datasets can be harnessed to explicitly represent field-scale spatial heterogeneity in Earth system models. For each macroscale grid cell, these environmental data are clustered according to their field-scale soil and topographic attributes to define unique sub-grid tiles. The state-of-the-art Geophysical Fluid Dynamics Laboratory (GFDL) land model is then used to simulate these tiles and their spatial interactions via the exchange of water, energy, and nutrients along explicit topographic gradients. Using historical simulations over the contiguous United States, we will show how a robust representation of field-scale spatial heterogeneity impacts modeled ecosystem dynamics including the water, energy, and biogeochemical cycles as well as vegetation composition and distribution.
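A hedged sketch of the tiling idea described above: field-scale soil and topographic attributes within one macroscale grid cell are clustered to define a small number of sub-grid tiles, with cluster means serving as tile properties. The attribute fields and the choice of k-means are illustrative assumptions, not the GFDL implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Synthetic 30 m fields for one macroscale grid cell (300 x 300 pixels):
# slope, topographic wetness index, and sand fraction.
n = 300 * 300
attrs = np.column_stack([
    rng.gamma(2.0, 2.0, n),        # slope [deg]
    rng.normal(8.0, 2.0, n),       # topographic wetness index [-]
    rng.uniform(0.1, 0.9, n),      # sand fraction [-]
])

# Standardize, then cluster pixels into a fixed number of sub-grid tiles.
z = (attrs - attrs.mean(axis=0)) / attrs.std(axis=0)
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(z)

tile_fraction = np.bincount(km.labels_, minlength=8) / n
tile_means = np.array([attrs[km.labels_ == k].mean(axis=0) for k in range(8)])
print("tile areal fractions:", np.round(tile_fraction, 3))
print("tile mean attributes (slope, TWI, sand):\n", np.round(tile_means, 2))
```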
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T.; Jessee, Matthew Anderson
The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules, including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.
NASA Astrophysics Data System (ADS)
Ishibashi, Takuya; Watanabe, Noriaki; Hirano, Nobuo; Okamoto, Atsushi; Tsuchiya, Noriyoshi
2015-01-01
The present study evaluates aperture distributions and fluid flow characteristics for variously sized laboratory-scale granite fractures under confining stress. As a significant result of the laboratory investigation, the contact area in the fracture plane was found to be virtually independent of scale. By combining this characteristic with the self-affine fractal nature of fracture surfaces, a novel method for predicting fracture aperture distributions beyond laboratory scale is developed. The validity of this method is demonstrated by reproducing the results of the laboratory investigation and the maximum aperture-fracture length relations reported in the literature for natural fractures. The present study finally predicts conceivable scale dependencies of fluid flow through joints (fractures without shear displacement) and faults (fractures with shear displacement). Both joint and fault aperture distributions are characterized by a scale-independent contact area, a scale-dependent geometric mean, and a scale-independent geometric standard deviation of aperture. The contact areas for joints and faults are approximately 60% and 40%. Changes in the geometric means of joint and fault apertures (µm), e_m,joint and e_m,fault, with fracture length (m), l, are approximated by e_m,joint = 1 × 10^2 l^0.1 and e_m,fault = 1 × 10^3 l^0.7, whereas the geometric standard deviations of both joint and fault apertures are approximately 3. Fluid flows through both joints and faults are characterized by the formation of preferential flow paths (i.e., channeling flows) with scale-independent flow areas of approximately 10%, whereas the joint and fault permeabilities (m^2), k_joint and k_fault, are scale dependent and are approximated as k_joint = 1 × 10^-12 l^0.2 and k_fault = 1 × 10^-8 l^1.1.
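The scaling relations reported above can be evaluated directly; the short sketch below tabulates the predicted geometric mean aperture and permeability for joints and faults over a range of fracture lengths, using only the coefficients and exponents quoted in the abstract.

```python
# Scale dependence of fracture aperture and permeability, from the relations above:
# e_m,joint = 1e2 * l**0.1 [um], e_m,fault = 1e3 * l**0.7 [um],
# k_joint  = 1e-12 * l**0.2 [m^2], k_fault  = 1e-8 * l**1.1 [m^2].
for l in (0.1, 1.0, 10.0, 100.0, 1000.0):        # fracture length [m]
    e_joint = 1e2 * l**0.1
    e_fault = 1e3 * l**0.7
    k_joint = 1e-12 * l**0.2
    k_fault = 1e-8 * l**1.1
    print(f"l = {l:7.1f} m | e_joint = {e_joint:7.1f} um, e_fault = {e_fault:9.1f} um"
          f" | k_joint = {k_joint:.2e} m^2, k_fault = {k_fault:.2e} m^2")
```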
Peng, Yi; Xiong, Xiong; Adhikari, Kabindra; Knadel, Maria; Grunwald, Sabine; Greve, Mogens Humlekrog
2015-01-01
There is a great challenge in combining soil proximal spectra and remote sensing spectra to improve the accuracy of soil organic carbon (SOC) models. This is primarily because mixing of spectral data from different sources and technologies to improve soil models is still in its infancy. The first objective of this study was to integrate information of SOC derived from visible near-infrared reflectance (Vis-NIR) spectra in the laboratory with remote sensing (RS) images to improve predictions of topsoil SOC in the Skjern river catchment, Denmark. The second objective was to improve SOC prediction results by separately modeling uplands and wetlands. A total of 328 topsoil samples were collected and analyzed for SOC. Satellite Pour l'Observation de la Terre (SPOT5), Landsat Data Continuity Mission (Landsat 8) images, laboratory Vis-NIR and other ancillary environmental data including terrain parameters and soil maps were compiled to predict topsoil SOC using Cubist regression and Bayesian kriging. The results showed that the model developed from RS data, ancillary environmental data and laboratory spectral data yielded a lower root mean square error (RMSE) (2.8%) and higher R2 (0.59) than the model developed from only RS data and ancillary environmental data (RMSE: 3.6%, R2: 0.46). Plant-available water (PAW) was the most important predictor for all the models because of its close relationship with soil organic matter content. Moreover, vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI), were very important predictors in SOC spatial models. Furthermore, the 'upland model' was able to more accurately predict SOC compared with the 'upland & wetland model'. However, the separately calibrated 'upland and wetland model' did not improve the prediction accuracy for wetland sites, since it was not possible to adequately discriminate the vegetation in the RS summer images. We conclude that laboratory Vis-NIR spectroscopy adds critical information that significantly improves the prediction accuracy of SOC compared to using RS data alone. We recommend the incorporation of laboratory spectra with RS data and other environmental data to improve soil spatial modeling and digital soil mapping (DSM).
Virtual Patterson Experiment - A Way to Access the Rheology of Aggregates and Melanges
NASA Astrophysics Data System (ADS)
Delannoy, Thomas; Burov, Evgueni; Wolf, Sylvie
2014-05-01
Understanding the mechanisms of lithospheric deformation requires bridging the gap between human-scale laboratory experiments and the huge geological objects they represent. Those experiments are limited in spatial and time scale as well as in choice of materials (e.g., mono-phase minerals, exaggerated temperatures and strain rates), which means that the resulting constitutive laws may not fully represent real rocks at geological spatial and temporal scales. We use the thermo-mechanical numerical modelling approach as a tool to link both experiments and nature and hence better understand the rheology of the lithosphere, by enabling us to study the behavior of polymineralic aggregates and their impact on the localization of the deformation. We have adapted the large strain visco-elasto-plastic Flamar code to allow it to operate at all spatial and temporal scales, from sub-grain to geodynamic scale, and from seismic time scales to millions of years. Our first goal was to reproduce real rock mechanics experiments on deformation of mono and polymineralic aggregates in Patterson's load machine in order to deepen our understanding of the rheology of polymineralic rocks. In particular, we studied in detail the deformation of a 15x15 mm mica-quartz sample at 750 °C and 300 MPa. This mixture includes a molten phase and a solid phase in which shear bands develop as a result of interactions between ductile and brittle deformation and stress concentration at the boundaries between weak and strong phases. We used digitized x-ray scans of real samples as initial configuration for the numerical models so the model-predicted deformation and stress-strain behavior can match those observed in the laboratory experiment. Analyzing the numerical experiments providing the best match with the press experiments and making other complementary models by changing different parameters in the initial state (strength contrast between the phases, proportions, microstructure, etc.) provides a number of new elements of understanding of the mechanisms governing the localization of the deformation across the aggregates. We next used stress-strain curves derived from the numerical experiments to study in detail the evolution of the rheological behavior of each mineral phase as well as that of the mixtures in order to formulate constitutive relations for mélanges and polymineralic aggregates. The next step of our approach would be to link the constitutive laws obtained at small scale (laws that govern the rheology of a polymineralic aggregate, the effect of the presence of a molten phase, etc.) to the large-scale behavior of the Earth by implementing them in lithosphere-scale models.
NASA Technical Reports Server (NTRS)
Mennell, R.; Vaughn, J. E.; Singellton, R.
1973-01-01
Experimental aerodynamic investigations were conducted on a scale model space shuttle vehicle (SSV) orbiter. The purpose of the test was to investigate the longitudinal and lateral-directional aerodynamic characteristics. Emphasis was placed on model component, wing-glove, and wing-body fairing effects, as well as elevon, aileron, and rudder control effectiveness. Angles of attack from - 5 deg to + 30 deg and angles of sideslip from - 5 deg to + 10 deg were tested. Static pressures were recorded on base, fuselage, and wing surfaces. Tufts and talc-kerosene flow visualization techniques were also utilized. The aerodynamic force balance results are presented in plotted and tabular form.
Multitemporal Three Dimensional Imaging of Volcanic Products on the Macro- and Micro- Scale
NASA Astrophysics Data System (ADS)
Carter, A. J.; Ramsey, M. S.; Durant, A. J.; Skilling, I. P.
2006-12-01
Satellite data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) can be processed using a nadir- and backward-viewing band at the same wavelength to generate a Digital Elevation Model (DEM) at a maximum spatial resolution of 15 metres. Bezymianny Volcano (Kamchatka Peninsula, Russia) was chosen as a test target for multitemporal DEM generation. DEMs were used to generate a layer stack and calculate coarse topographic changes from 2000 to 2006, the most significant of which was a new crater that formed in spring 2005. The eruption that occurred on 11 January 2005 produced a pyroclastic deposit on the east flank, which was mapped and from which samples were collected in August 2005. A comparison was made between field-based observations of the deposit and micron-scale roughness (analogous to vesicularity) derived from ASTER thermal infrared data following the model described in Ramsey and Fink (1999) on lava domes. In order to investigate applying this technique to the pyroclastic deposits, 18 small samples from Bezymianny were selected for Scanning Electron Microscope (SEM) micron-scale analysis. The SEM image data were processed using software capable of calculating surface roughness and vesicle volume from stereo pairs: a statistical analysis of samples is presented using a high resolution grid of surface profiles. The results allow for a direct comparison to field, laboratory, and satellite-based estimates of micron-scale roughness. Prior to SEM processing, laboratory thermal emission spectra of the microsamples were collected and modelled to estimate vesicularity. Each data set was compared and assessed for coherence within the limitations of each technique. This study outlines the value of initially imaging at the macro-scale to assess major topographic changes over time at the volcano. This is followed by an example of the application of micro-scale SEM imaging and spectral deconvolution, highlighting the advantages of using multiple resolutions to analyse frequently overlapping products at Bezymianny.
Perkins, Kim S.
2008-01-01
Sediments are believed to comprise as much as 50 percent of the Snake River Plain aquifer thickness in some locations within the Idaho National Laboratory. However, the hydraulic properties of these deep sediments have not been well characterized, and they are not represented explicitly in the current conceptual model of subregional-scale ground-water flow. The purpose of this study is to evaluate the nature of the sedimentary material within the aquifer and to test the applicability of a site-specific property-transfer model developed for the sedimentary interbeds of the unsaturated zone. Saturated hydraulic conductivity (Ksat) was measured for 10 core samples from sedimentary interbeds within the Snake River Plain aquifer and also estimated using the property-transfer model. The property-transfer model for predicting Ksat was previously developed using a multiple linear-regression technique with bulk physical-property measurements (bulk density [ρ_bulk], the median particle diameter, and the uniformity coefficient) as the explanatory variables. The model systematically underestimates Ksat, typically by about a factor of 10, which likely is due to higher bulk-density values for the aquifer samples compared to the samples from the unsaturated zone upon which the model was developed. Linear relations between the logarithm of Ksat and ρ_bulk also were explored for comparison.
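A minimal sketch of a property-transfer model of the kind described above: a multiple linear regression of log10(Ksat) on bulk density, median particle diameter, and uniformity coefficient. The synthetic data and fitted coefficients are placeholders; the actual site-specific model coefficients are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40

# Explanatory variables (synthetic, for illustration): bulk density [g/cm^3],
# median particle diameter d50 [mm], uniformity coefficient Cu [-].
rho_bulk = rng.uniform(1.3, 2.0, n)
d50 = rng.lognormal(np.log(0.2), 0.6, n)
cu = rng.uniform(2.0, 15.0, n)

# Synthetic "measured" log10(Ksat) with an assumed dependence plus noise.
log_ksat = (-3.0 - 2.0 * (rho_bulk - 1.5) + 1.2 * np.log10(d50) - 0.02 * cu
            + rng.normal(scale=0.3, size=n))

# Ordinary least squares fit with numpy.linalg.lstsq.
X = np.column_stack([np.ones(n), rho_bulk, np.log10(d50), cu])
coef, *_ = np.linalg.lstsq(X, log_ksat, rcond=None)
print("intercept, b(rho_bulk), b(log10 d50), b(Cu):", np.round(coef, 3))

# A systematic bias of ~1 log unit, as reported above, corresponds to
# predictions roughly a factor of 10 too low: predicted_ksat = 10**(X @ coef).
```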
Integrated low emissions cleanup system for direct coal-fueled turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lippert, T.E.; Newby, R.A.; Alvin, M.A.
1992-01-01
The Westinghouse Electric Corporation, Science & Technology Center (W-STC) is developing an Integrated Low Emissions Cleanup (ILEC) concept for high-temperature gas cleaning to meet environmental standards as well as to ensure economical gas turbine life. The ILEC concept simultaneously controls sulfur, particulate, and alkali contaminants in high-pressure fuel gases or combustion gases at temperatures up to 1850°F for advanced power generation systems (PFBC, APFBC, IGCC, DCFT). The objective of this program is to demonstrate, at a bench scale, the conceptual and technical feasibility of the ILEC concept. The ILEC development program has a three-phase structure: Phase 1 - laboratory-scale testing; Phase 2 - bench-scale equipment design and fabrication; and Phase 3 - bench-scale testing. Phase 1 laboratory testing has been completed. In Phase 1, entrained sulfur and alkali sorbent kinetics were measured and evaluated, and commercial-scale performance was projected. Related cold flow model testing has shown that gas-particle contacting within the ceramic barrier filter vessel will provide a good reactor environment. The Phase 1 test results and the commercial evaluation conducted in the Phase 1 program support the bench-scale facility testing to be performed in Phase 3. Phase 2 is nearing completion with the design and assembly of a modified, bench-scale test facility to demonstrate the technical feasibility of the ILEC features. This feasibility testing will be conducted in Phase 3.
Integrated low emissions cleanup system for direct coal-fueled turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lippert, T.E.; Newby, R.A.; Alvin, M.A.
1992-12-31
The Westinghouse Electric Corporation, Science & Technology Center (W-STC) is developing an Integrated Low Emissions Cleanup (ILEC) concept for high-temperature gas cleaning to meet environmental standards as well as to ensure economical gas turbine life. The ILEC concept simultaneously controls sulfur, particulate, and alkali contaminants in high-pressure fuel gases or combustion gases at temperatures up to 1850°F for advanced power generation systems (PFBC, APFBC, IGCC, DCFT). The objective of this program is to demonstrate, at a bench scale, the conceptual and technical feasibility of the ILEC concept. The ILEC development program has a three-phase structure: Phase 1 - laboratory-scale testing; Phase 2 - bench-scale equipment design and fabrication; and Phase 3 - bench-scale testing. Phase 1 laboratory testing has been completed. In Phase 1, entrained sulfur and alkali sorbent kinetics were measured and evaluated, and commercial-scale performance was projected. Related cold flow model testing has shown that gas-particle contacting within the ceramic barrier filter vessel will provide a good reactor environment. The Phase 1 test results and the commercial evaluation conducted in the Phase 1 program support the bench-scale facility testing to be performed in Phase 3. Phase 2 is nearing completion with the design and assembly of a modified, bench-scale test facility to demonstrate the technical feasibility of the ILEC features. This feasibility testing will be conducted in Phase 3.
Building a Laboratory-Scale Biogas Plant and Verifying its Functionality
NASA Astrophysics Data System (ADS)
Boleman, Tomáš; Fiala, Jozef; Blinová, Lenka; Gerulová, Kristína
2011-01-01
The paper deals with the process of building a laboratory-scale biogas plant and verifying its functionality. The laboratory-scale prototype was constructed at the Department of Safety and Environmental Engineering of the Faculty of Materials Science and Technology in Trnava, Slovak University of Technology. The Department has already built a solar laboratory to promote and utilise solar energy, and designed the SETUR hydro engine. The biogas laboratory is the next step in the Department's activities in the field of renewable energy sources and biomass. The Department is also involved in a European Union project whose goal is to upgrade all existing renewable energy sources used in the Department.
NASA Astrophysics Data System (ADS)
Wee, Loo Kang; Tiang Ning, Hwee
2014-09-01
This paper presents the customization of Easy Java Simulation models, used with actual laboratory instruments, to create active experiential learning for measurements. The laboratory instruments are the vernier caliper and the micrometer. Three computer model design ideas that complement real equipment are discussed. These ideas involve (1) a simple two-dimensional view for learning from pen and paper questions and the real world; (2) hints, answers, different scale options and the inclusion of zero error; (3) assessment for learning feedback. The initial positive feedback from Singaporean students and educators indicates that these tools could be successfully shared and implemented in learning communities. Educators are encouraged to change the source code for these computer models to suit their own purposes; they have creative commons attribution licenses for the benefit of all.
Customer satisfaction assessment at the Pacific Northwest National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
DN Anderson; ML Sours
2000-03-23
The Pacific Northwest National Laboratory (PNNL) is developing and implementing a customer satisfaction assessment program (CSAP) to assess the quality of research and development provided by the laboratory. This report presents the customer survey component of the PNNL CSAP. The customer survey questionnaire is composed of two major sections: Strategic Value and Project Performance. Both sections contain a set of questions that can be answered with a 5-point Likert scale response. The Strategic Value section consists of five questions that are designed to determine if a project directly contributes to critical future national needs. The Project Performance section consists of nine questions designed to determine PNNL performance in meeting customer expectations. A statistical model for customer survey data is developed, and this report discusses how to analyze the data with this model. The properties of the statistical model can be used to establish a gold standard or performance expectation for the laboratory, and then to assess progress. The gold standard is defined using laboratory management input, namely answers to four questions posed in terms of the information obtained from the customer survey: (1) What should the average Strategic Value be for the laboratory project portfolio? (2) What Strategic Value interval should include most of the projects in the laboratory portfolio? (3) What should average Project Performance be for projects with a Strategic Value of about 2? (4) What should average Project Performance be for projects with a Strategic Value of about 4? To be able to provide meaningful answers to these questions, the PNNL customer survey will need to be fully implemented for several years, thus providing a link between management perceptions of laboratory performance and customer survey data.
Multi-scale image segmentation and numerical modeling in carbonate rocks
NASA Astrophysics Data System (ADS)
Alves, G. C.; Vanorio, T.
2016-12-01
Numerical methods based on computational simulations can be an important tool for estimating the physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield results that conflict with those of the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach that performs segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. The samples were then imaged by Scanning Electron Microscope (SEM) as well as by optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave equation. Our results show that a multi-scale approach can efficiently account for micro-porosity in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by a larger grain/micrite ratio, results show that SEM-scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be better suited for numerical simulations.
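A minimal sketch of one possible segmentation route of the kind mentioned above: global Otsu thresholding of a grayscale SEM-like image into pore and solid phases, followed by a porosity estimate. The synthetic image and the choice of Otsu's method are illustrative assumptions; the study's three segmentation techniques are not reproduced here.

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(9)

# Synthetic grayscale "SEM" image: dark pores (low values) in a brighter matrix.
img = rng.normal(loc=0.7, scale=0.05, size=(256, 256))
pores = rng.random((256, 256)) < 0.15                # ~15% pore pixels
img[pores] = rng.normal(loc=0.2, scale=0.05, size=pores.sum())

# A global Otsu threshold separates the two gray-level populations.
t = threshold_otsu(img)
pore_mask = img < t
porosity = pore_mask.mean()
print(f"Otsu threshold = {t:.2f}, estimated porosity = {porosity:.3f}")
```

The resulting binary pore/solid field is the kind of input that the numerical wave-propagation simulations described above would consume.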
Scaled laboratory experiments explain the kink behaviour of the Crab Nebula jet.
Li, C K; Tzeferacos, P; Lamb, D; Gregori, G; Norreys, P A; Rosenberg, M J; Follett, R K; Froula, D H; Koenig, M; Seguin, F H; Frenje, J A; Rinderknecht, H G; Sio, H; Zylstra, A B; Petrasso, R D; Amendt, P A; Park, H S; Remington, B A; Ryutov, D D; Wilks, S C; Betti, R; Frank, A; Hu, S X; Sangster, T C; Hartigan, P; Drake, R P; Kuranz, C C; Lebedev, S V; Woolsey, N C
2016-10-07
The remarkable discovery by the Chandra X-ray observatory that the Crab nebula's jet periodically changes direction provides a challenge to our understanding of astrophysical jet dynamics. It has been suggested that this phenomenon may be the consequence of magnetic fields and magnetohydrodynamic instabilities, but experimental demonstration in a controlled laboratory environment has remained elusive. Here we report experiments that use high-power lasers to create a plasma jet that can be directly compared with the Crab jet through well-defined physical scaling laws. The jet generates its own embedded toroidal magnetic fields; as it moves, plasma instabilities result in multiple deflections of the propagation direction, mimicking the kink behaviour of the Crab jet. The experiment is modelled with three-dimensional numerical simulations that show exactly how the instability develops and results in changes of direction of the jet.
Karst medium characterization and simulation of groundwater flow in Lijiang Rivershed, China
NASA Astrophysics Data System (ADS)
Hu, B. X.
2015-12-01
It is important to study water and carbon cycle processes for water resource management, pollution prevention, and assessment of global warming impacts on the southwest karst region of China. The Lijiang river basin is selected as our study region. Interdisciplinary field and laboratory experiments with various technologies are conducted to characterize the karst aquifers in detail, and key processes in the karst water cycle and carbon cycle are determined. Based on the MODFLOW-CFP model, new watershed flow and carbon cycle models are developed that couple subsurface and surface water flow models with chemical/biological models. Our study is focused on the karst springshed in Mao village, where the mechanisms coupling the carbon cycle and water cycle are explored. Parallel computing technology is used to construct the numerical models for the carbon cycle and water cycle in the small-scale watershed, which are calibrated and verified against field observations. The coupled model for the small-scale watershed is then extended to a large-scale watershed, taking into account the scale dependence of model parameters and appropriate simplification of the model structure. The large-scale watershed model is used to study the water cycle and carbon cycle in the Lijiang rivershed and to calculate the carbon flux and carbon sinks in the Lijiang river basin. The results provide scientific methods for water resources management and environmental protection in the southwest karst region in the context of global climate change, and could provide basic theory and simulation methods for geological carbon sequestration in the karst regions of China.
Simulation of groundwater flow and evaluation of carbon sink in Lijiang Rivershed, China
NASA Astrophysics Data System (ADS)
Hu, Bill X.; Cao, Jianhua; Tong, Juxiu; Gao, Bing
2016-04-01
It is important to study water and carbon cycle processes for water resource management, pollution prevention, and assessment of global warming impacts on the southwest karst region of China. The Lijiang river basin is selected as our study region. Interdisciplinary field and laboratory experiments with various technologies are conducted to characterize the karst aquifers in detail, and key processes in the karst water cycle and carbon cycle are determined. Based on the MODFLOW-CFP model, new watershed flow and carbon cycle models are developed that couple subsurface and surface water flow models with chemical/biological models. Our study is focused on the karst springshed in Mao village, where the mechanisms coupling the carbon cycle and water cycle are explored. Parallel computing technology is used to construct the numerical models for the carbon cycle and water cycle in the small-scale watershed, which are calibrated and verified against field observations. The coupled model for the small-scale watershed is then extended to a large-scale watershed, taking into account the scale dependence of model parameters and appropriate simplification of the model structure. The large-scale watershed model is used to study the water cycle and carbon cycle in the Lijiang rivershed and to calculate the carbon flux and carbon sinks in the Lijiang river basin. The results provide scientific methods for water resources management and environmental protection in the southwest karst region in the context of global climate change, and could provide basic theory and simulation methods for geological carbon sequestration in the karst regions of China.
Howanitz, Peter J; Lehman, Christopher M; Jones, Bruce A; Meier, Frederick A; Horowitz, Gary L
2015-08-01
Hemolysis is an important clinical laboratory quality attribute that influences result reliability. To determine hemolysis identification and rejection practices occurring in clinical laboratories. We used the College of American Pathologists Survey program to distribute a Q-Probes-type questionnaire about hemolysis practices to Chemistry Survey participants. Of 3495 participants sent the questionnaire, 846 (24%) responded. In 71% of 772 laboratories, the hemolysis rate was less than 3.0%, whereas in 5%, it was 6.0% or greater. A visual scale, an instrument scale, and combination of visual and instrument scales were used to identify hemolysis in 48%, 11%, and 41% of laboratories, respectively. A picture of the hemolysis level was used as an aid to technologists' visual interpretation of hemolysis levels in 40% of laboratories. In 7.0% of laboratories, all hemolyzed specimens were rejected; in 4% of laboratories, no hemolyzed specimens were rejected; and in 88% of laboratories, some specimens were rejected depending on hemolysis levels. Participants used 69 different terms to describe hemolysis scales, with 21 terms used in more than 10 laboratories. Slight and moderate were the terms used most commonly. Of 16 different cutoffs used to reject hemolyzed specimens, moderate was the most common, occurring in 30% of laboratories. For whole blood electrolyte measurements performed in 86 laboratories, 57% did not evaluate the presence of hemolysis, but for those that did, the most common practice in 21 laboratories (24%) was centrifuging and visually determining the presence of hemolysis in all specimens. Hemolysis practices vary widely. Standard assessment and consistent reporting are the first steps in reducing interlaboratory variability among results.
A comparison of relative toxicity rankings by some small-scale laboratory tests
NASA Technical Reports Server (NTRS)
Hilado, C. J.; Cumming, H. J.
1977-01-01
Small-scale laboratory tests for fire toxicity, suitable for use in the average laboratory hood, are needed for screening and ranking materials on the basis of relative toxicity. The performance of wool, cotton, and aromatic polyamide under several test procedures is presented.
Scaling of the critical slip distance for seismic faulting with shear strain in fault zones
Marone, Chris; Kilgore, Brian D.
1993-01-01
Theoretical and experimentally based laws for seismic faulting contain a critical slip distance [1-5], Dc, which is the slip over which strength breaks down during earthquake nucleation. On an earthquake-generating fault, this distance plays a key role in determining the rupture nucleation dimension [6], the amount of premonitory and post-seismic slip [7-10], and the maximum seismic ground acceleration [1,11]. In laboratory friction experiments, Dc has been related to the size of surface contact junctions [2,5,12]; thus, the discrepancy between laboratory measurements of Dc (~10^-5 m) and values obtained from modelling earthquakes (~10^-2 m) has been attributed to differences in roughness between laboratory surfaces and natural faults [5]. This interpretation predicts a dependence of Dc on the particle size of fault gouge [2] (breccia and wear material) but not on shear strain. Here we present experimental results showing that Dc scales with shear strain in simulated fault gouge. Our data suggest a new physical interpretation for the critical slip distance, in which Dc is controlled by the thickness of the zone of localized shear strain. As gouge zones of mature faults are commonly 10^2-10^3 m thick [13-17], whereas laboratory gouge layers are 1-10 mm thick, our data offer an alternative interpretation of the discrepancy between laboratory and field-based estimates of Dc.
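The interpretation above implies Dc ≈ γ_c × T, where γ_c is a critical shear strain and T the thickness of the actively shearing zone. The short calculation below, with a critical strain inferred from assumed laboratory values, shows how Dc scales up with shear-zone thickness; it is only a scaling illustration, not the paper's calibration.

```python
# Strain-based interpretation of the critical slip distance: Dc ~ gamma_c * T.
dc_lab = 1e-5          # assumed laboratory critical slip distance [m]
t_lab = 3e-3           # assumed laboratory gouge layer thickness [m] (1-10 mm range)
gamma_c = dc_lab / t_lab
print(f"implied critical shear strain: {gamma_c:.1e}")

for t in (1e-3, 1e-2, 1.0, 1e2, 1e3):     # shear-zone thickness [m]
    print(f"T = {t:8.3f} m -> Dc ~ {gamma_c * t:.2e} m")
```

Whether the relevant T for a natural fault is the full gouge-zone width or a narrower zone of localized strain is exactly the question the abstract raises.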
Quantifying Diapycnal Mixing in an Energetic Ocean
NASA Astrophysics Data System (ADS)
Ivey, Gregory N.; Bluteau, Cynthia E.; Jones, Nicole L.
2018-01-01
Turbulent diapycnal mixing controls global circulation and the distribution of tracers in the ocean. For turbulence in stratified shear flows, we introduce a new turbulent length scale L_ρ dependent on χ. We show the flux Richardson number Ri_f is determined by the dimensionless ratio of three length scales: the Ozmidov scale L_O, the Corrsin shear scale L_S, and L_ρ. This new model predicts that Ri_f varies from 0 to 0.5, which we test primarily against energetic field observations collected in 100 m of water on the Australian North West Shelf (NWS), in addition to laboratory observations. The field observations consisted of turbulence microstructure vertical profiles taken near moored temperature and velocity turbulence time series. Irrespective of the value of the gradient Richardson number Ri, both instruments yielded a median Ri_f = 0.17, while the observed Ri_f ranged from 0.01 to 0.50, in agreement with the predicted range of Ri_f. Using a Prandtl mixing length model, we show that the diapycnal mixing K_ρ can be predicted from L_ρ and the background vertical shear S. Using field and laboratory observations, we show that L_ρ = 0.3 L_E, where L_E is the Ellison length scale. The diapycnal diffusivity can thus be calculated from K_ρ = 0.09 L_E^2 S. This prediction agrees very well with the diapycnal mixing estimates obtained from our moored turbulence instruments for observed diffusivities as large as 10^-1 m^2 s^-1. Moorings with relatively low sampling rates can thus provide long time series estimates of diapycnal mixing rates, significantly increasing the number of diapycnal mixing estimates in the ocean.
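The closing relation above can be applied directly to mooring data. The sketch below computes the Ellison scale from density fluctuation statistics in the standard way (rms fluctuation divided by the background gradient) and then evaluates K_ρ = 0.09 L_E^2 S; the profile values are synthetic stand-ins for the moored observations.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-ins for moored time series at one depth (illustrative values):
# density fluctuations about the background profile, the background vertical
# density gradient, and the background vertical shear.
rho_prime = rng.normal(scale=5e-3, size=10_000)     # density fluctuation [kg/m^3]
drho_dz = -5e-3                                     # background gradient [kg/m^4]
S = 5e-3                                            # background vertical shear [1/s]

# Ellison length scale: rms density fluctuation over the background gradient.
L_E = np.sqrt(np.mean(rho_prime**2)) / abs(drho_dz)

# Diapycnal diffusivity from the relation quoted above.
K_rho = 0.09 * L_E**2 * S
print(f"L_E   = {L_E:.2f} m")
print(f"K_rho = {K_rho:.2e} m^2/s")
```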
Achieving across-laboratory replicability in psychophysical scaling
Ward, Lawrence M.; Baumann, Michael; Moffat, Graeme; Roberts, Larry E.; Mori, Shuji; Rutledge-Taylor, Matthew; West, Robert L.
2015-01-01
It is well known that, although psychophysical scaling produces good qualitative agreement between experiments, precise quantitative agreement between experimental results, such as that routinely achieved in physics or biology, is rarely or never attained. A particularly galling example of this is the fact that power function exponents for the same psychological continuum, measured in different laboratories but ostensibly using the same scaling method, magnitude estimation, can vary by a factor of three. Constrained scaling (CS), in which observers first learn a standardized meaning for a set of numerical responses relative to a standard sensory continuum and then make magnitude judgments of other sensations using the learned response scale, has produced excellent quantitative agreement between individual observers’ psychophysical functions. Theoretically it could do the same for across-laboratory comparisons, although this needs to be tested directly. We compared nine different experiments from four different laboratories as an example of the level of across experiment and across-laboratory agreement achievable using CS. In general, we found across experiment and across-laboratory agreement using CS to be significantly superior to that typically obtained with conventional magnitude estimation techniques, although some of its potential remains to be realized. PMID:26191019
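Because the across-laboratory problem above is framed in terms of power-function exponents, a small worked example may help: a Stevens-type power function ψ = a φ^b is fit to synthetic magnitude estimates by log-log regression, recovering the exponent b. The stimulus range, true exponent, and response noise are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic magnitude-estimation data following a power law psi = a * phi**b
# with multiplicative response noise; a and b below are illustrative.
phi = np.geomspace(1.0, 100.0, 30)                 # stimulus intensities
a_true, b_true = 2.0, 0.6
psi = a_true * phi**b_true * rng.lognormal(sigma=0.2, size=phi.size)

# Fit the exponent by linear regression in log-log coordinates.
b_hat, log_a_hat = np.polyfit(np.log(phi), np.log(psi), 1)
print(f"true exponent b = {b_true}, estimated b = {b_hat:.2f}, "
      f"estimated a = {np.exp(log_a_hat):.2f}")
```

Run-to-run and observer-to-observer spread in the estimated exponent is the quantity that constrained scaling aims to reduce.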
NASA Astrophysics Data System (ADS)
Moon, C.; Mitchell, S. A.; Callor, N.; Dewers, T. A.; Heath, J. E.; Yoon, H.; Conner, G. R.
2017-12-01
Traditional subsurface continuum multiphysics models include useful yet limiting geometrical assumptions: penny- or disc-shaped cracks, spherical or elliptical pores, bundles of capillary tubes, cubic law fracture permeability, etc. Each physics (flow, transport, mechanics) uses constitutive models with an increasing number of fit parameters that pertain to the microporous structure of the rock, but bear no inter-physics relationships or self-consistency. Recent advances in digital rock physics and pore-scale modeling link complex physics to detailed pore-level geometries, but measures for upscaling are somewhat unsatisfactory and come at a high computational cost. Continuum mechanics relies on a separation between small-scale pore fluctuations and larger-scale heterogeneity (and perhaps anisotropy), but this can break down (particularly for shales). Algebraic topology offers powerful mathematical tools for describing the local-to-global structure of shapes. Persistent homology, in particular, analyzes the dynamics of topological features and summarizes them into numeric values. It offers a roadmap both to "fingerprint" the topology of pore structure and multiscale connectedness and to link pore structure to physical behavior, thus potentially providing a means to relate constitutive behaviors to pore structure in a self-consistent way. We present a persistent homology (PH) analysis framework for 3D image sets, including a focused ion beam-scanning electron microscopy data set of the Selma Chalk. We extract structural characteristics of sampling volumes via persistent homology and fit a statistical model using the summarized values to estimate porosity, permeability, and connectivity; Lattice Boltzmann methods for single-phase flow modeling are used to obtain the relationships. These PH methods allow for prediction of geophysical properties based on the geometry and connectivity in a computationally efficient way. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
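As a loose illustration of the filtration idea behind persistent homology (not the full PH pipeline used in the study), the sketch below thresholds a 3D grayscale volume at successive levels and counts connected components, i.e. a crude H0-style summary; the synthetic volume and threshold values are placeholders, and a real analysis would use a dedicated TDA library to obtain full birth/death pairs.

```python
import numpy as np
from scipy import ndimage

def h0_component_counts(volume, thresholds):
    """Crude illustration of a sublevel-set filtration: threshold a 3D image
    (standing in for a segmented FIB-SEM porosity map) at successive levels
    and count connected components at each level.  Full persistent homology
    would track births and deaths of features in H0, H1 and H2; this sketch
    only reports the H0 component count per threshold."""
    counts = []
    for t in thresholds:
        labeled, n = ndimage.label(volume <= t)
        counts.append(n)
    return counts

# Synthetic 3D volume standing in for a FIB-SEM image (values in [0, 1)).
rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))
print(h0_component_counts(vol, thresholds=[0.1, 0.3, 0.5]))
```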
NASA Technical Reports Server (NTRS)
Barbee, Brent W.; Greenaugh, Kevin C.; Seery, Bernard D.; Bambacus, Myra; Leung, Ronald Y.; Finewood, Lee; Dearborn, David S. P.; Miller, Paul L.; Weaver, Robert P.; Plesko, Catherine;
2017-01-01
NASA's Goddard Space Flight Center (GSFC) and the National Nuclear Security Administration (NNSA), Department of Energy (DOE) National Laboratories, Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), and Sandia National Laboratory (SNL) are collaborating on Planetary Defense Research. The research program is organized around three case studies: 1. Deflection of the Potentially Hazardous Asteroid (PHA) 101955 Bennu (1999 RQ36) [OSIRIS-REx mission target], 2. Deflection of the secondary member of the PHA 65803 Didymos (1996 GT) [DART mission target], 3. Deflection of a scaled-down version of the comet 67P/Churyumov-Gerasimenko [Rosetta mission target]. NASA GSFC is providing astrodynamics and spacecraft mission design expertise, while NNSA, DOE, LLNL, LANL and SNL are providing expertise in modeling the effects of kinetic impactor spacecraft and nuclear explosive devices on the target objects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zachara, John M.; Bjornstad, Bruce N.; Christensen, John N.
2010-02-01
The Integrated Field-Scale Subsurface Research Challenge (IFRC) at the Hanford Site 300 Area uranium (U) plume addresses multi-scale mass transfer processes in a complex hydrogeologic setting where groundwater and riverwater interact. A series of forefront science questions on mass transfer are posed for research which relate to the effect of spatial heterogeneities; the importance of scale; coupled interactions between biogeochemical, hydrologic, and mass transfer processes; and measurements and approaches needed to characterize and model a mass-transfer dominated system. The project was initiated in February 2007, with CY 2007 and CY 2008 progress summarized in preceding reports. The site has 35 instrumented wells and an extensive monitoring system. It includes a deep borehole for microbiologic and biogeochemical research that sampled the entire thickness of the unconfined 300 Area aquifer. Significant, impactful progress has been made in CY 2009 with completion of extensive laboratory measurements on field sediments, field hydrologic and geophysical characterization, four field experiments, and modeling. The laboratory characterization results are being subjected to geostatistical analyses to develop spatial heterogeneity models of U concentration and chemical, physical, and hydrologic properties needed for reactive transport modeling. The field experiments focused on: (1) physical characterization of the groundwater flow field during a period of stable hydrologic conditions in early spring, (2) comprehensive groundwater monitoring during spring to characterize the release of U(VI) from the lower vadose zone to the aquifer during water table rise and fall, (3) dynamic geophysical monitoring of salt-plume migration during summer, and (4) a U reactive tracer experiment (desorption) during the fall. Geophysical characterization of the well field was completed using the down-well Electrical Resistance Tomography (ERT) array, with results subjected to robust, geostatistically constrained inversion analyses. These measurements along with hydrologic characterization have yielded 3D distributions of hydraulic properties that have been incorporated into an updated and increasingly robust hydrologic model. Based on significant findings from the microbiologic characterization of deep borehole sediments in CY 2008, down-hole biogeochemistry studies were initiated in which colonization substrates and spatially discrete water and gas samplers were deployed to select wells. The increasingly comprehensive field experimental results, along with the field and laboratory characterization, are leading to a new conceptual model of U(VI) flow and transport in the IFRC footprint and the 300 Area in general, and to insights on the microbiological community and associated biogeochemical processes. A significant issue related to vertical flow in the IFRC wells was identified and evaluated during the spring and fall field experimental campaigns. Both upward and downward flows were observed in response to dynamic Columbia River stage. The vertical flows are caused by the interaction of pressure gradients with the heterogeneous hydraulic conductivity field. These impacts are being evaluated with additional modeling and field activities to facilitate interpretation and mitigation. The project moves into CY 2010 with ambitious plans for drilling additional wells for the IFRC well field, additional experiments, and modeling. This research is part of the ERSP Hanford IFRC at Pacific Northwest National Laboratory.
HPC simulations of grain-scale spallation to improve thermal spallation drilling
NASA Astrophysics Data System (ADS)
Walsh, S. D.; Lomov, I.; Wideman, T. W.; Potter, J.
2012-12-01
Thermal spallation drilling and related hard-rock hole opening techniques are transformative technologies with the potential to dramatically reduce the costs associated with EGS well drilling and improve the productivity of new and existing wells. In contrast to conventional drilling methods that employ mechanical means to penetrate rock, thermal spallation methods fragment rock into small pieces ("spalls") without contact via the rapid transmission of heat to the rock surface. State-of-the-art constitutive models of thermal spallation employ Weibull statistical failure theory to represent the relationship between rock heterogeneity and its propensity to produce spalls when heat is applied to the rock surface. These models have been successfully used to predict such factors as penetration rate, spall-size distribution and borehole radius from drilling jet velocity and applied heat flux. A properly calibrated Weibull model would permit design optimization of thermal spallation drilling under geothermal field conditions. However, although useful for predicting system response in a given context, Weibull models are by their nature empirically derived. In the past, the parameters used in these models were carefully determined from laboratory tests, and thus model applicability was limited by experimental scope. This becomes problematic, for example, if simulating spall production at depths relevant for geothermal energy production, or modeling thermal spallation drilling in new rock types. Nevertheless, with sufficient computational resources, Weibull models could be validated in the absence of experimental data by explicit small-scale simulations that fully resolve rock grains. This presentation will discuss how high-fidelity simulations can be used to inform Weibull models of thermal spallation, and what these simulations reveal about the processes driving spallation at the grain-scale - in particular, the role that inter-grain boundaries and micro-pores play in the onset and extent of spallation. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
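To make the Weibull statistical failure idea above concrete, here is a minimal sketch of the weakest-link failure probability that such constitutive models build on; the characteristic strength, Weibull modulus and stress values are illustrative placeholders, not parameters calibrated for any particular rock.

```python
import numpy as np

def weibull_failure_probability(stress, sigma_0, m, volume_ratio=1.0):
    """Weibull weakest-link failure probability,
    P_f = 1 - exp(-(V/V0) * (sigma/sigma_0)**m),
    the kind of statistical strength law used to represent rock heterogeneity
    in spallation models.  sigma_0 is a characteristic strength, m the Weibull
    modulus, and volume_ratio the stressed volume relative to a reference."""
    return 1.0 - np.exp(-volume_ratio * (stress / sigma_0) ** m)

stresses = np.array([50.0, 100.0, 150.0])  # MPa, hypothetical surface stresses
print(weibull_failure_probability(stresses, sigma_0=120.0, m=10.0))
```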
Atmospheric Research 2016 Technical Highlights
NASA Technical Reports Server (NTRS)
Platnick, Steven
2017-01-01
Atmospheric research in the Earth Sciences Division (610) consists of research and technology development programs dedicated to advancing knowledge and understanding of the atmosphere and its interaction with the climate of Earth. The Division's goals are to improve understanding of the dynamics and physical properties of precipitation, clouds, and aerosols; atmospheric chemistry, including the role of natural and anthropogenic trace species on the ozone balance in the stratosphere and the troposphere; and radiative properties of Earth's atmosphere and the influence of solar variability on the Earth's climate. Major research activities are carried out in the Mesoscale Atmospheric Processes Laboratory, the Climate and Radiation Laboratory, the Atmospheric Chemistry and Dynamics Laboratory, and the Wallops Field Support Office. The overall scope of the research covers an end-to-end process, starting with the identification of scientific problems, leading to observation requirements for remote-sensing platforms, technology and retrieval algorithm development; followed by flight projects and satellite missions; and eventually, resulting in data processing, analyses of measurements, and dissemination from flight projects and missions. Instrument scientists conceive, design, develop, and implement ultraviolet, infrared, optical, radar, laser, and lidar technology to remotely sense the atmosphere. Members of the various laboratories conduct field measurements for satellite sensor calibration and data validation, and carry out numerous modeling activities. These modeling activities include climate model simulations, modeling the chemistry and transport of trace species on regional-to-global scales, cloud resolving models, and developing the next-generation Earth system models. Satellite missions, field campaigns, peer-reviewed publications, and successful proposals are essential at every stage of the research process for meeting our goals and maintaining the leadership of the Earth Sciences Division in atmospheric science research. Figure 1.1 shows the 22-year record of peer-reviewed publications and proposals among the various laboratories.
Predictive model for disinfection by-product in Alexandria drinking water, northern west of Egypt.
Abdullah, Ali M; Hussona, Salah El-dien
2013-10-01
Chlorine has been utilized as a disinfectant in the early stages of water treatment processes. Disinfection of drinking water reduces the risk of pathogenic infection but may pose a chemical threat to human health due to disinfection residues and their by-products (DBP) when organic and inorganic precursors are present in the water. In the last two decades, many modeling attempts have been made to predict the occurrence of DBP in drinking water. Models have been developed based on data generated in laboratory-scale and field-scale investigations. The objective of this paper is to develop a predictive model for DBP formation in the Alexandria governorate, located in the northwest of Egypt, based on field-scale investigations as well as laboratory-controlled experimentation. The present study showed that the correlation coefficient between predicted and measured trihalomethanes (THM) was R² = 0.88; the minimum deviation percentage between predicted and measured THM was 0.8 %, the maximum was 89.3 %, and the average was 17.8 %. The correlation coefficient between predicted and measured dichloroacetic acid (DCAA) was R² = 0.98; the minimum deviation percentage was 1.3 %, the maximum was 47.2 %, and the average was 16.6 %. In addition, the correlation coefficient between predicted and measured trichloroacetic acid (TCAA) was R² = 0.98; the minimum deviation percentage was 4.9 %, the maximum was 43.0 %, and the average was 16.0 %.
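As a minimal sketch of the fit metrics reported above (the coefficient of determination R² and the percent deviation between predicted and measured concentrations), the following helper computes them for any pair of series; the example concentrations are hypothetical, not data from the study, and the regression model itself is not reproduced here.

```python
import numpy as np

def dbp_fit_metrics(measured, predicted):
    """Return R^2 plus the minimum, maximum and mean percent deviation
    between predicted and measured concentrations, the kind of summary
    statistics quoted for the THM, DCAA and TCAA models."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    deviation_pct = 100.0 * np.abs(predicted - measured) / measured
    return r2, deviation_pct.min(), deviation_pct.max(), deviation_pct.mean()

# Hypothetical THM concentrations (ug/L), for illustration only.
print(dbp_fit_metrics([40, 55, 70, 90], [42, 52, 75, 88]))
```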
Barnett, J Matthew; Yu, Xiao-Ying; Recknagle, Kurtis P; Glissmeyer, John A
2016-11-01
A planned laboratory space and exhaust system modification to the Pacific Northwest National Laboratory Material Science and Technology Building indicated that a new evaluation of the mixing at the air sampling system location would be required for compliance with ANSI/HPS N13.1-2011. The modified exhaust system would add a third fan, thereby increasing the overall exhaust rate out of the stack and thus voiding the previous mixing study. Prior to modifying the radioactive air emissions exhaust system, a three-dimensional computational fluid dynamics computer model was used to evaluate the mixing at the sampling system location. Modeling of the original three-fan system indicated that not all mixing criteria could be met. A second modeling effort was conducted with the addition of an air blender downstream of the confluence of the three fans, which then showed satisfactory mixing results. The final installation included an air blender, and the exhaust system underwent full-scale tests to verify velocity, cyclonic flow, gas, and particulate uniformity. The modeling results and those of the full-scale tests show agreement between each of the evaluated criteria. The use of a computational fluid dynamics code was an effective aid in the design process and allowed the sampling system to remain in its original location while still meeting the requirements for sampling at a well-mixed location.
Enhanced Vapor-Phase Diffusion in Porous Media - LDRD Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, C.K.; Webb, S.W.
1999-01-01
As part of the Laboratory-Directed Research and Development (LDRD) Program at Sandia National Laboratories, an investigation into the existence of enhanced vapor-phase diffusion (EVD) in porous media has been conducted. A thorough literature review was initially performed across multiple disciplines (soil science and engineering), and based on this review, the existence of EVD was found to be questionable. As a result, modeling and experiments were initiated to investigate the existence of EVD. In this LDRD, the first mechanistic model of EVD was developed, which demonstrated the mechanisms responsible for EVD. The first direct measurements of EVD have also been conducted at multiple scales. Measurements have been made at the pore scale, in a two-dimensional network as represented by a fracture aperture, and in a porous medium. Significant enhancement of vapor-phase transport relative to Fickian diffusion was measured in all cases. The modeling and experimental results provide additional mechanisms for EVD beyond those presented by the generally accepted model of Philip and deVries (1957), which required a thermal gradient for EVD to exist. Modeling and experimental results show significant enhancement under isothermal conditions. Application of EVD to vapor transport in the near-surface vadose zone shows a significant variation between no enhancement, the model of Philip and deVries, and the present results. Based on this information, the model of Philip and deVries may need to be modified, and additional studies are recommended.
Laboratory Observations of Dune Erosion
NASA Astrophysics Data System (ADS)
Maddux, T. B.; Ruggiero, P.; Palmsten, M.; Holman, R.; Cox, D. T.
2006-12-01
Coastal dunes are an important feature along many coastlines, owing to their contribution to the sediment supply, use as habitat, and ability to protect onshore resources from wave attack. Correct predictions of the erosion and overtopping rates of these features are needed to develop improved responses to coastal dune damage events, and to determine the likelihood and magnitude of future erosion and overtopping on different beaches. We have conducted a large-scale laboratory study at Oregon State University's O.H. Hinsdale Wave Research Laboratory (HWRL) with the goal of producing a comprehensive, near prototype-scale, physical model data set of hydrodynamics, sediment transport, and morphological evolution during extreme dune erosion events. The two goals of this work are (1) to develop a better understanding of swash/dune dynamics and (2) to evaluate and guide further development of dune erosion models. We present initial results from the first phase of the experimental program. An initial beach and dune profile was selected based on field LIDAR-based observations of various U.S. east coast and Gulf coast dune systems. The laboratory beach was brought to equilibrium with pre-storm random wave conditions. It was subsequently subjected to attack from steadily increasing water levels and offshore wave heights. Observations made include inner surf zone and swash free surface and velocities as well as wave-by-wave estimates of topographical change at high spatial resolution through the use of stereo video imagery. Future work will include studies of fluid overtopping of the dune and sediment overwash and assessment of the resilience of man-made "push-up" dunes to wave attack in comparison with their more-compacted "natural" cousins.
USDA-ARS?s Scientific Manuscript database
Pasta is a simple food made from water and durum wheat (Triticum turgidum subsp. durum) semolina. As pasta increases in popularity, studies have endeavored to analyze the attributes that contribute to high quality pasta. Despite being a simple food, the laboratory scale analysis of pasta quality is ...
Combustion experiments in a laboratory-scale fixed bed reactor were performed to determine the role of temperature and time in PCDD/F formation allowing a global kinetic expression to be written for PCDD/F formation due to soot oxidation in fly ash deposits. Rate constants were c...
Laforce, Brecht; Vermeulen, Bram; Garrevoet, Jan; Vekemans, Bart; Van Hoorebeke, Luc; Janssen, Colin; Vincze, Laszlo
2016-03-15
A new laboratory-scale X-ray fluorescence (XRF) imaging instrument, based on an X-ray microfocus tube equipped with a monocapillary optic, has been developed to perform XRF computed tomography experiments with both higher spatial resolution (20 μm) and better energy resolution (130 eV at Mn-Kα) than has been achieved up to now. This instrument opens a new range of possible applications for XRF-CT. Next to the analytical characterization of the setup using well-defined model/reference samples, demonstrating its capabilities for tomographic imaging, the XRF-CT microprobe has been used to image the interior of an ecotoxicological model organism, Americamysis bahia, which had been exposed to elevated metal (Cu and Ni) concentrations. The technique allowed the visualization of the accumulation sites of copper, clearly indicating the affected organs, i.e. either the gastric system or the hepatopancreas. As another illustrative application, the scanner has been employed to investigate goethite spherules from the Cretaceous-Paleogene boundary, revealing the internal elemental distribution of these valuable distal ejecta layer particles.
Probing free-space quantum channels with laboratory-based experiments
NASA Astrophysics Data System (ADS)
Bohmann, M.; Kruse, R.; Sperling, J.; Silberhorn, C.; Vogel, W.
2017-06-01
Atmospheric channels are a promising candidate to establish secure quantum communication on a global scale. However, due to their turbulent nature, it is crucial to understand the impact of the atmosphere on the quantum properties of light and examine it experimentally. In this paper, we introduce a method to probe atmospheric free-space links with quantum light on a laboratory scale. In contrast to previous works, our method models arbitrary intensity losses caused by turbulence to emulate general atmospheric conditions. This allows us to characterize turbulent quantum channels in a well-controlled manner. To implement this technique, we perform a series of measurements with different constant attenuations and simulate the fluctuating losses by combining the obtained data. We directly test the proposed method with an on-chip source of nonclassical light and a time-bin-multiplexed detection system. With the obtained data, we characterize the nonclassicality of the generated states for different atmospheric noise models and analyze a postselection protocol. This general technique in atmospheric quantum optics allows for studying turbulent quantum channels and predicting their properties for future applications.
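Here is a minimal sketch of the combination step described above: measurements taken at several constant attenuations are averaged with weights drawn from an assumed probability distribution of the transmittance (PDT), emulating a fluctuating-loss channel; the measured quantity and the weights below are hypothetical, and the choice of PDT stands in for an atmospheric noise model rather than reproducing the one used in the paper.

```python
import numpy as np

def fluctuating_loss_average(quantity_at_T, pdt_weights):
    """Emulate a fluctuating-loss (turbulent) channel from fixed-attenuation
    data: average a quantity (e.g. a click statistic or moment) recorded at
    discrete transmittance settings, weighted by an assumed probability
    distribution of the transmittance.  The weights are normalized so they
    behave like a discrete PDT."""
    w = np.asarray(pdt_weights, dtype=float)
    w = w / w.sum()
    return float(np.sum(w * np.asarray(quantity_at_T, dtype=float)))

# Hypothetical: a nonclassicality-related quantity measured at transmittances
# T = 0.2, 0.5, 0.8 is combined with weights approximating a fluctuating channel.
quantity_at_T = [0.30, 0.18, 0.12]
print(fluctuating_loss_average(quantity_at_T, pdt_weights=[0.25, 0.5, 0.25]))
```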
NASA Astrophysics Data System (ADS)
López-Sánchez, M.; Mansilla-Plaza, L.; Sánchez-de-laOrden, M.
2017-10-01
Prior to field-scale research, soil samples are analysed at laboratory scale for electrical resistivity calibrations. Currently, a variety of field instruments estimate the water content of soils using different physical phenomena. These instruments can be used to develop moisture-resistivity relationships on the same soil samples, which ensures that measurements are performed on the same material and under the same conditions (e.g., humidity and temperature). A geometric factor, determined by the electrode locations, is applied in order to calculate the apparent electrical resistivity of the laboratory test cells. This geometric factor can be determined in three different ways: by means of an analytical approximation, by laboratory trials (experimental approximation), or by the analysis of a numerical model. The analytical approximation is not appropriate for complex cells or arrays, and both the experimental and the numerical approximations can lead to inaccurate results. We therefore propose a novel approach to obtain a compromise solution between both techniques, providing a more precise determination of the geometric factor.
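As a point of reference for the analytical approximation mentioned above, the sketch below gives the standard half-space geometric factor for a four-electrode array and its use in ρa = k ΔV/I; the electrode spacing, voltage and current values are illustrative, and, as the abstract notes, this half-space formula breaks down for finite laboratory cells, which is why a numerical or hybrid determination is preferred.

```python
import numpy as np

def geometric_factor_half_space(am, bm, an, bn):
    """Analytical geometric factor for a general four-electrode array on a
    semi-infinite half-space:
        k = 2*pi / (1/AM - 1/BM - 1/AN + 1/BN)
    where AM, BM, AN, BN are current-to-potential electrode distances (m)."""
    return 2.0 * np.pi / (1.0 / am - 1.0 / bm - 1.0 / an + 1.0 / bn)

def apparent_resistivity(k, delta_v, current):
    """Apparent resistivity rho_a = k * dV / I (ohm*m)."""
    return k * delta_v / current

# Wenner array with spacing a = 0.05 m: the factor reduces to 2*pi*a.
a = 0.05
k = geometric_factor_half_space(a, 2 * a, 2 * a, a)
print(k, 2 * np.pi * a)                                   # both ~0.314 m
print(apparent_resistivity(k, delta_v=0.02, current=0.001))  # ohm*m
```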
NASA Astrophysics Data System (ADS)
Trevisan, L.; Illangasekare, T. H.; Rodriguez, D.; Sakaki, T.; Cihan, A.; Birkholzer, J. T.; Zhou, Q.
2011-12-01
Geological storage of carbon dioxide in deep geologic formations is being considered as a technical option to reduce greenhouse gas loading to the atmosphere. The processes associated with the movement and stable trapping of CO2 are complex in deep, naturally heterogeneous formations. Three primary mechanisms contribute to trapping: capillary entrapment due to immobilization of supercritical CO2 within soil pores, dissolution of CO2 in the formation water, and mineralization. Natural heterogeneity in the formation is expected to affect all three mechanisms. A research project is in progress with the primary goal of improving our understanding of capillary and dissolution trapping during the injection and post-injection processes, focusing on formation heterogeneity. It is expected that this improved knowledge will help to develop site characterization methods targeted at obtaining the most critical parameters that capture the heterogeneity, in order to design strategies and schemes that maximize trapping. This research combines experiments at the laboratory scale with multiphase modeling to upscale relevant trapping processes to the field scale. This paper presents the results from a set of experiments that were conducted in intermediate-scale test tanks. Intermediate-scale testing provides an attractive alternative for investigating these processes under controlled conditions in the laboratory. Conducting these types of experiments is highly challenging, as methods have to be developed to extrapolate the data from experiments conducted under ambient laboratory conditions to the high-temperature, high-pressure settings of deep geologic formations. We explored the use of a combination of surrogate fluids that have similar density and viscosity contrasts and analogous solubility and interfacial tension to the supercritical CO2-brine system in deep formations. The extrapolation approach involves the use of dimensionless numbers such as the Capillary number (Ca) and the Bond number (Bo). A set of experiments that captures some of the complexities of the geologic heterogeneity and injection scenarios is planned in a 4.8 m long tank. To test the experimental methods and instrumentation, a set of preliminary experiments was conducted in a smaller tank with dimensions 90 cm x 60 cm. The tank was packed to represent both homogeneous and heterogeneous conditions. Using the surrogate fluids, different injection scenarios were tested. Images of the migrating plume showed the critical role that heterogeneity plays in stable entrapment. Destructive sampling done at the end of the experiments provided data on the final saturation distributions. Preliminary analysis suggests the entrapment configuration is controlled by the large-scale heterogeneities as well as the pore-scale entrapment mechanisms. The data were used in a modeling analysis that is presented in a companion abstract.
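For orientation on the dimensionless extrapolation mentioned above, here are the standard definitions of the Capillary and Bond numbers in a short sketch; the fluid properties and characteristic length are hypothetical surrogate-fluid values, not the ones used in the experiments.

```python
def capillary_number(mu, velocity, sigma):
    """Ca = mu * v / sigma: ratio of viscous to capillary forces."""
    return mu * velocity / sigma

def bond_number(delta_rho, g, length, sigma):
    """Bo = delta_rho * g * L**2 / sigma: ratio of gravitational (buoyancy)
    to capillary forces, with L a characteristic length such as grain or
    pore size."""
    return delta_rho * g * length**2 / sigma

# Hypothetical surrogate-fluid pair, for illustration only:
mu = 5e-3        # Pa*s, non-wetting-phase viscosity
v = 1e-5         # m/s, displacement front velocity
sigma = 0.03     # N/m, interfacial tension
delta_rho = 200  # kg/m^3, density contrast between the fluids
L = 5e-4         # m, characteristic grain size
print(capillary_number(mu, v, sigma), bond_number(delta_rho, 9.81, L, sigma))
```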
Zhang, Liang; Zhao, Hai; Gan, Mingzhe; Jin, Yanlin; Gao, Xiaofeng; Chen, Qian; Guan, Jiafa; Wang, Zhongyan
2011-03-01
The aim of this work was to research a bioprocess for bioethanol production from raw sweet potato by Saccharomyces cerevisiae at laboratory, pilot and industrial scales. The fermentation mode, inoculum size and pressure from different gases were determined in the laboratory. The maximum ethanol concentration, average ethanol productivity rate and ethanol yield after fermentation at laboratory scale (128.51 g/L, 4.76 g/L/h and 91.4%) were satisfactory, with small decreases at pilot scale (109.06 g/L, 4.89 g/L/h and 91.24%) and industrial scale (97.94 g/L, 4.19 g/L/h and 91.27%). When scaled up, the viscosity of the mash hindered fermentation; 1.56 AUG/g (sweet potato mash) of xylanase decreased the viscosity from approximately 30,000 to 500 cP. Overall, sweet potato is an attractive feedstock for bioethanol production from both economic and environmental standpoints.
Modeling Gas Slug Break-up in the Lava Lake at Mt. Erebus, Antarctica
NASA Astrophysics Data System (ADS)
Velazquez, L. C.; Qin, Z.; Suckale, J.; Soldati, A.; Rust, A.; Cashman, K. V.
2017-12-01
Lava lakes are perhaps the most direct look scientists can take inside a volcano. They have thus become a fundamental component in our understanding of the dynamics of magmatic systems. Mount Erebus, Ross Island, Antarctica contains one of the most persistent and long-lived lava lakes on Earth, creating a unique and complex area of study. Its persistent magma degassing, convective overturns, and Strombolian eruptions have been studied through extensive field campaigns and analog as well as computational models. These provide diverse insights into the plumbing system not only at Mt. Erebus, but at other volcanoes as well. Eruptions at Erebus are episodic. One of the leading hypotheses to explain this episodicity is the rise and burst of large conduit-filling bubbles, known as gas slugs, at the lava lake surface. These slugs are thought to form deep in the plumbing system, rise through the conduit, and exit through the lava lake. The goal of this study is to investigate the stability of the hypothesized slugs as they transition from the conduit into the lava lake. Analogue laboratory results suggest that the flaring geometry at the transition point may trigger slug breakup and formation of separate daughter bubbles that then burst through the surface separately. We test this hypothesis through numerical simulations. Our model solves the two-fluid Navier-Stokes equations by calculating the conservation of mass and momentum in the gas and liquid. The laboratory experiments use a Hele-Shaw cell, in which the flaring geometry of the lava lake walls can be adjusted. A gas slug of variable volume is then injected into a liquid at different viscosities. We first validate our numerical simulations against these laboratory experiments and then proceed to investigate the same dynamics at the volcanic scale. At the natural scale, we investigate the same system parameters as at the lab scale. First results indicate that simulations reproduce experiments well. The results obtained at the volcano scale will help to assess how slug break-up alters the episodicity of degassing at the lava lake surface. A thorough understanding of this model will help constrain the main processes controlling the episodic eruptions at Mt. Erebus and other, similar volcanoes.
Generation of Collapsed Cross Sections for Hatch 1 Cycles 1-3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ade, Brian J
2012-11-01
Under NRC JCN V6361, Oak Ridge National Laboratory (ORNL) was tasked to develop and run SCALE/TRITON models for generation of collapsed few-group cross sections and to convert the cross sections to PMAXS format using the GENPMAXS conversion utility for use in PARCS/PATHS simulations of Hatch Unit 1, cycles 1-3. This letter report documents the final models used to produce the Hatch collapsed cross sections.
Laboratory Astrophysics: Enabling Scientific Discovery and Understanding
NASA Technical Reports Server (NTRS)
Kirby, K.
2006-01-01
NASA's Science Strategic Roadmap for Universe Exploration lays out a series of science objectives on a grand scale and discusses the various missions, over a wide range of wavelengths, which will enable discovery. Astronomical spectroscopy is arguably the most powerful tool we have for exploring the Universe. Experimental and theoretical studies in Laboratory Astrophysics convert "hard-won data into scientific understanding". However, the development of instruments with increasingly high spectroscopic resolution demands atomic and molecular data of unprecedented accuracy and completeness. How to meet these needs, in a time of severe budgetary constraints, poses a significant challenge both to NASA, the astronomical observers and model-builders, and the laboratory astrophysics community. I will discuss these issues, together with some recent examples of productive astronomy/lab astro collaborations.
NASA Astrophysics Data System (ADS)
Sanchez, M. J.; Santamarina, C.; Gai, X., Sr.; Teymouri, M., Sr.
2017-12-01
Stability and behavior of Hydrate Bearing Sediments (HBS) are characterized by the metastable character of the gas hydrate structure, which strongly depends on thermo-hydro-chemo-mechanical (THCM) actions. Hydrate formation, dissociation and methane production from hydrate bearing sediments are coupled THCM processes that involve, amongst others, exothermic formation and endothermic dissociation of hydrate and ice phases, mixed fluid flow and large changes in fluid pressure. The analysis of available data from past field and laboratory experiments, and the optimization of future field production studies, require a formal and robust numerical framework able to capture the very complex behavior of this type of soil. A comprehensive, fully coupled THCM formulation has been developed and implemented into a finite element code to tackle problems involving gas hydrate sediments. Special attention is paid to the geomechanical behavior of HBS, and particularly to their response upon hydrate dissociation under loading. The numerical framework has been validated against recent experiments conducted under controlled conditions in the laboratory that challenge the proposed approach and highlight the complex interaction among THCM processes in HBS. The performance of the models in these case studies is highly satisfactory. Finally, the numerical code is applied to analyze the behavior of gas hydrate soils under field-scale conditions, exploring different features of material behavior under possible reservoir conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias
2016-08-11
This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model 100s to 1000s of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.
A refuge for inorganic chemistry: Bunsen's Heidelberg laboratory.
Nawa, Christine
2014-05-01
Immediately after its opening in 1855, Bunsen's Heidelberg laboratory became iconic as the most modern and best equipped laboratory in Europe. Although comparatively modest in size, the laboratory's progressive equipment made it a role model for new construction projects in Germany and beyond. In retrospect, it represents an intermediate stage of development between early teaching facilities, such as Liebig's laboratory in Giessen, and the new 'chemistry palaces' that came into existence with Wöhler's Göttingen laboratory of 1860. As a 'transition laboratory,' Bunsen's Heidelberg edifice is of particular historical interest. This paper explores the allocation of spaces to specific procedures and audiences within the laboratory, and the hierarchies and professional rites of passage embedded within it. On this basis, it argues that the laboratory in Heidelberg was tailored to Bunsen's needs in inorganic and physical chemistry and never aimed at a broad-scale representation of chemistry as a whole. On the contrary, it is an example of early specialisation within a chemical laboratory preceding the process of differentiation into chemical sub-disciplines. Finally, it is shown that the relatively small size of this laboratory, and the fact that after ca. 1860 no significant changes were made within the building, are inseparably connected to Bunsen's views on chemistry teaching.
Electrohydraulic Forming of Near-Net Shape Automotive Panels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golovaschenko, Sergey F.
2013-09-26
The objective of this project was to develop the electrohydraulic forming (EHF) process as a near-net shape automotive panel manufacturing technology that simultaneously reduces the energy embedded in vehicles and the energy consumed while producing automotive structures. Pulsed pressure is created via a shockwave generated by the discharge of high voltage capacitors through a pair of electrodes in a liquid-filled chamber. The shockwave in the liquid, initiated by the expansion of the plasma channel formed between two electrodes, propagates towards the blank and causes the blank to be deformed into a one-sided die cavity. The numerical model of the EHF process was validated experimentally and was successfully applied to the design of the electrode system and to a multi-electrode EHF chamber for full-scale validation of the process. The numerical model was able to predict stresses in the dies during pulsed forming and was validated by the experimental study of the die insert failure mode for corner filling operations. The electrohydraulic forming process and its major subsystems, including durable electrodes, an EHF chamber, a water/air management system, a pulse generator and integrated process controls, were validated as capable of operating in a fully automated, computer-controlled mode for forming of a portion of a full-scale sheet metal component in laboratory conditions. Additionally, the novel processes of electrohydraulic trimming and electrohydraulic calibration were demonstrated at a reduced-scale component level. Furthermore, a hybrid process combining conventional stamping with EHF was demonstrated as a laboratory process for a full-scale automotive panel formed out of AHSS material. The economic feasibility of the developed EHF processes was defined by developing a cost model of the EHF process in comparison to the conventional stamping process.
NASA Astrophysics Data System (ADS)
Willmott, Jon R.; Lowe, David; Broughton, Mick; White, Ben S.; Machin, Graham
2016-09-01
A primary temperature scale requires realising a unit in terms of its definition. For high temperature radiation thermometry in terms of the International Temperature Scale of 1990 this means extrapolating from the signal measured at the freezing temperature of gold, silver or copper using Planck’s radiation law. The difficulty in doing this means that primary scales above 1000 °C require specialist equipment and careful characterisation in order to achieve the extrapolation with sufficient accuracy. As such, maintenance of the scale at high temperatures is usually only practicable for National Metrology Institutes, and calibration laboratories have to rely on a scale calibrated against transfer standards. At lower temperatures it is practicable for an industrial calibration laboratory to have its own primary temperature scale, which reduces the number of steps between the primary scale and end user. Proposed changes to the SI that will introduce internationally accepted high temperature reference standards might make it practicable to have a primary high temperature scale in a calibration laboratory. In this study such a scale was established by calibrating radiation thermometers directly to high temperature reference standards. The possible reduction in uncertainty to an end user as a result of the reduced calibration chain was evaluated.
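The sketch below illustrates the extrapolation step described above for an ideal monochromatic radiation thermometer calibrated at the ITS-90 copper freezing point; the 650 nm operating wavelength and the example signal ratio are illustrative assumptions, and a real instrument would use its measured spectral responsivity rather than a single wavelength.

```python
import numpy as np
from scipy.optimize import brentq

C2 = 1.438776877e-2  # m*K, second radiation constant

def planck_radiance_ratio(t, t_ref, wavelength):
    """Ratio of monochromatic Planck radiances L(T)/L(T_ref), the quantity a
    radiation thermometer extrapolates from a fixed-point calibration."""
    return np.expm1(C2 / (wavelength * t_ref)) / np.expm1(C2 / (wavelength * t))

def temperature_from_signal_ratio(ratio, t_ref, wavelength):
    """Numerically invert the radiance ratio to recover the unknown
    temperature, assuming an ideal monochromatic thermometer."""
    def residual(t):
        return planck_radiance_ratio(t, t_ref, wavelength) - ratio
    return brentq(residual, 500.0, 4000.0)

t_cu = 1357.77   # K, ITS-90 copper freezing point
lam = 650e-9     # m, a typical radiation-thermometry wavelength (assumed)
ratio = planck_radiance_ratio(1800.0, t_cu, lam)
print(temperature_from_signal_ratio(ratio, t_cu, lam))  # recovers ~1800 K
```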
NASA Downscaling Project: Final Report
NASA Technical Reports Server (NTRS)
Ferraro, Robert; Waliser, Duane; Peters-Lidard, Christa
2017-01-01
A team of researchers from NASA Ames Research Center, Goddard Space Flight Center, the Jet Propulsion Laboratory, and Marshall Space Flight Center, along with university partners at UCLA, conducted an investigation to explore whether downscaling coarse resolution global climate model (GCM) predictions might provide valid insights into the regional impacts sought by decision makers. Since the computational cost of running global models at high spatial resolution for any useful climate scale period is prohibitive, the hope for downscaling is that a coarse resolution GCM provides sufficiently accurate synoptic scale information for a regional climate model (RCM) to accurately develop fine scale features that represent the regional impacts of a changing climate. As a proxy for a prognostic climate forecast model, and so that ground truth in the form of satellite and in-situ observations could be used for evaluation, the MERRA and MERRA-2 reanalyses were used to drive the NU-WRF regional climate model and a GEOS-5 replay. This was performed at various resolutions that were at factors of 2 to 10 higher than the reanalysis forcing. A number of experiments were conducted that varied resolution, model parameterizations, and intermediate scale nudging, for simulations over the continental US during the period from 2000-2010. The results of these experiments were compared to observational datasets to evaluate the output.
Development of a Scale-up Tool for Pervaporation Processes
Thiess, Holger; Strube, Jochen
2018-01-01
In this study, an engineering tool for the design and optimization of pervaporation processes is developed based on physico-chemical modelling coupled with laboratory/mini-plant experiments. The model incorporates the solution-diffusion-mechanism, polarization effects (concentration and temperature), axial dispersion, pressure drop and the temperature drop in the feed channel due to vaporization of the permeating components. The permeance, being the key model parameter, was determined via dehydration experiments on a mini-plant scale for the binary mixtures ethanol/water and ethyl acetate/water. A second set of experimental data was utilized for the validation of the model for two chemical systems. The industrially relevant ternary mixture, ethanol/ethyl acetate/water, was investigated close to its azeotropic point and compared to a simulation conducted with the determined binary permeance data. Experimental and simulation data proved to agree very well for the investigated process conditions. In order to test the scalability of the developed engineering tool, large-scale data from an industrial pervaporation plant used for the dehydration of ethanol was compared to a process simulation conducted with the validated physico-chemical model. Since the membranes employed in both mini-plant and industrial scale were of the same type, the permeance data could be transferred. The comparison of the measured and simulated data proved the scalability of the derived model. PMID:29342956
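To illustrate the permeance-based driving force at the core of the solution-diffusion description above, here is a simplified sketch of a partial flux calculation; the permeance, activity coefficient and pressure values are assumed placeholders, and the polarization and pressure-drop effects included in the full engineering model are deliberately omitted.

```python
def pervaporation_flux(permeance, x_feed, gamma_feed, p_sat, y_perm, p_perm):
    """Simplified solution-diffusion partial flux through a pervaporation
    membrane, J_i = Q_i * (x_i * gamma_i * p_i_sat - y_i * p_permeate),
    i.e. permeance times a partial-pressure driving force.  Concentration
    and temperature polarization are ignored in this sketch."""
    return permeance * (x_feed * gamma_feed * p_sat - y_perm * p_perm)

# Hypothetical numbers for water transport during ethanol dehydration:
Q_water = 1.0e-7   # kg/(m^2 s Pa), assumed water permeance
flux = pervaporation_flux(Q_water, x_feed=0.10, gamma_feed=2.3,
                          p_sat=47_400.0, y_perm=0.95, p_perm=2_000.0)
print(flux)  # kg/(m^2 s)
```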
Scaled model guidelines for solar coronagraphs' external occulters with an optimized shape.
Landini, Federico; Baccani, Cristian; Schweitzer, Hagen; Asoubar, Daniel; Romoli, Marco; Taccola, Matteo; Focardi, Mauro; Pancrazzi, Maurizio; Fineschi, Silvano
2017-12-01
One of the major challenges faced by externally occulted solar coronagraphs is the suppression of the light diffracted by the occulter edge. It is a contribution to the stray light that overwhelms the coronal signal on the focal plane and must be reduced by modifying the geometrical shape of the occulter. There is a rich literature, mostly experimental, on the appropriate choice of the most suitable shape. The problem arises when huge coronagraphs, such as those in formation flight, shall be tested in a laboratory. A recent contribution [Opt. Lett. 41, 757 (2016), doi:10.1364/OL.41.000757] provides the guidelines for scaling the geometry and replicating in the laboratory the flight diffraction pattern as produced by the whole solar disk and a flight occulter, but leaves the conclusion on the occulter scaling law somehow unjustified. This paper provides the numerical support for validating that conclusion and presents the first-ever simulation of the diffraction behind an occulter with an optimized shape along the optical axis, with the solar disk as a source. This paper, together with Opt. Lett. 41, 757 (2016), aims at constituting a complete guide for scaling the coronagraphs' geometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kraucunas, Ian P.; Clarke, Leon E.; Dirks, James A.
2015-04-01
The Platform for Regional Integrated Modeling and Analysis (PRIMA) is an innovative modeling system developed at Pacific Northwest National Laboratory (PNNL) to simulate interactions among natural and human systems at scales relevant to regional decision making. PRIMA brings together state-of-the-art models of regional climate, hydrology, agriculture, socioeconomics, and energy systems using a flexible coupling approach. The platform can be customized to inform a variety of complex questions and decisions, such as the integrated evaluation of mitigation and adaptation options across a range of sectors. Research into stakeholder decision support needs underpins the platform's application to regional issues, including uncertainty characterization. Ongoing numerical experiments are yielding new insights into the interactions among human and natural systems on regional scales with an initial focus on the energy-land-water nexus in the upper U.S. Midwest. This paper focuses on PRIMA’s functional capabilities and describes some lessons learned to date about integrated regional modeling.
Simulation of pump-turbine prototype fast mode transition for grid stability support
NASA Astrophysics Data System (ADS)
Nicolet, C.; Braun, O.; Ruchonnet, N.; Hell, J.; Béguin, A.; Avellan, F.
2017-04-01
The paper explores the additional services that a Full Size Frequency Converter (FSFC) solution can provide for the case of an existing pumped storage power plant of 2x210 MW, for which conversion from fixed speed to variable speed is investigated with a focus on fast mode transitions. First, reduced-scale model experiments of fast transitions of a Francis pump-turbine, performed at the ANDRITZ HYDRO Hydraulic Laboratory in Linz, Austria, are presented. The tests consist of linear speed transitions from pump to turbine mode and vice versa, performed at constant guide vane opening. The existing pumped storage power plant, with a pump-turbine quasi-homologous to the reduced-scale model, is then modelled using the simulation software SIMSEN, considering the reservoirs, penstocks, the two Francis pump-turbines, the two downstream surge tanks, and the tailrace tunnel. For the electrical part, an FSFC configuration is considered with a detailed electrical model. The transitions from turbine to pump and vice versa are simulated, and similarities between the prototype simulation results and the reduced-scale model experiments are highlighted.
A laboratory-calibrated model of coho salmon growth with utility for ecological analyses
Manhard, Christopher V.; Som, Nicholas A.; Perry, Russell W.; Plumb, John M.
2018-01-01
We conducted a meta-analysis of laboratory- and hatchery-based growth data to estimate broadly applicable parameters of mass- and temperature-dependent growth of juvenile coho salmon (Oncorhynchus kisutch). Following studies of other salmonid species, we incorporated the Ratkowsky growth model into an allometric model and fit this model to growth observations from eight studies spanning ten different populations. To account for changes in growth patterns with food availability, we reparameterized the Ratkowsky model to scale several of its parameters relative to ration. The resulting model was robust across a wide range of ration allocations and experimental conditions, accounting for 99% of the variation in final body mass. We fit this model to growth data from coho salmon inhabiting tributaries and constructed ponds in the Klamath Basin by estimating habitat-specific indices of food availability. The model produced evidence that constructed ponds provided higher food availability than natural tributaries. Because of their simplicity (only mass and temperature are required as inputs) and robustness, ration-varying Ratkowsky models have utility as an ecological tool for capturing growth in freshwater fish populations.
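For orientation, the sketch below shows the general structure of a Ratkowsky temperature response embedded in an allometric mass term; the specific functional combination and all parameter values are illustrative assumptions, not the ration-scaled parameterization or the coho salmon estimates fitted in the paper.

```python
import numpy as np

def ratkowsky_rate(temp_c, t_min, t_max, b, c):
    """Generic Ratkowsky (square-root-type) temperature response,
    r(T) = [b * (T - Tmin) * (1 - exp(c * (T - Tmax)))]**2,
    set to zero outside (Tmin, Tmax).  Parameter values are placeholders."""
    r = b * (temp_c - t_min) * (1.0 - np.exp(c * (temp_c - t_max)))
    r = np.where((temp_c > t_min) & (temp_c < t_max), r, 0.0)
    return r**2

def allometric_growth(mass_g, temp_c, omega, beta, **ratkowsky_params):
    """Mass- and temperature-dependent growth of the general form
    dM/dt = omega * M**beta * r(T), the structure into which a Ratkowsky
    response can be embedded."""
    return omega * mass_g**beta * ratkowsky_rate(temp_c, **ratkowsky_params)

# Hypothetical inputs: a 5 g fish at 12 C with placeholder parameters.
print(allometric_growth(5.0, 12.0, omega=0.3, beta=0.3,
                        t_min=1.0, t_max=25.0, b=0.2, c=0.3))
```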
Atmospheric-like rotating annulus experiment: gravity wave emission from baroclinic jets
NASA Astrophysics Data System (ADS)
Rodda, Costanza; Borcia, Ion; Harlander, Uwe
2017-04-01
Large-scale balanced flows can spontaneously radiate meso-scale inertia-gravity waves (IGWs) and are thus in fact unbalanced. While flow-dependent parameterizations for the radiation of IGWs from orographic and convective sources do exist, the situation is less developed for spontaneously emitted IGWs. Observations identify increased IGW activity in the vicinity of jet exit regions. A direct interpretation of these observations based on geostrophic adjustment might be tempting. However, directly applying this concept to the parameterization of spontaneous imbalance is difficult since the dynamics itself is continuously re-establishing an unbalanced flow which then sheds imbalances by GW radiation. Examining spontaneous IGW emission in the atmosphere and validating parameterization schemes confronts the scientist with particular challenges. Due to its extreme complexity, GW emission will always be embedded in the interaction of a multitude of interdependent processes, many of which are hardly detectable from analysis or campaign data. The benefits of repeated and more detailed measurements, while representing the only source of information about the real atmosphere, are limited by the non-repeatability of an atmospheric situation. The same event never occurs twice. This argues for complementary laboratory experiments, which can provide a more focused dialogue between experiment and theory. Indeed, life cycles are also examined in rotating-annulus laboratory experiments. Thus, these experiments might form a useful empirical benchmark for theoretical and modelling work that is also independent of any sort of subgrid model. In addition, the more direct correspondence between experimental and model data and the data reproducibility make lab experiments a powerful testbed for parameterizations. Joint laboratory experiments and numerical simulations have been conducted. The comparison between the data obtained from the experiment and the numerical simulations shows a very good agreement for the large-scale baroclinic wave regime. Moreover, in both cases a clear signal of horizontal divergence, embedded in the baroclinic wave front, appears, suggesting IGW emission.
Near-fault peak ground velocity from earthquake and laboratory data
McGarr, A.; Fletcher, Joe B.
2007-01-01
We test the hypothesis that peak ground velocity (PGV) has an upper bound independent of earthquake magnitude and that this bound is controlled primarily by the strength of the seismogenic crust. The highest PGVs, ranging up to several meters per second, have been measured at sites within a few kilometers of the causative faults. Because the database for near-fault PGV is small, we use earthquake slip models, laboratory experiments, and evidence from a mining-induced earthquake to investigate the factors influencing near-fault PGV and the nature of its scaling. For each earthquake slip model we have calculated the peak slip rates for all subfaults and then chosen the maximum of these rates as an estimate of twice the largest near-fault PGV. Nine slip models for eight earthquakes, with magnitudes ranging from 6.5 to 7.6, yielded maximum peak slip rates ranging from 2.3 to 12 m/sec with a median of 5.9 m/sec. By making several adjustments, PGVs for small earthquakes can be simulated from peak slip rates measured during laboratory stick-slip experiments. First, we adjust the PGV for differences in the state of stress (i.e., the difference between the laboratory loading stresses and those appropriate for faults at seismogenic depths). To do this, we multiply both the slip and the peak slip rate by the ratio of the effective normal stresses acting on fault planes measured at 6.8 km depth at the KTB site, Germany (deepest available in situ stress measurements), to those acting on the laboratory faults. We also adjust the seismic moment by replacing the laboratory fault with a buried circular shear crack whose radius is chosen to match the experimental unloading stiffness. An additional, less important adjustment is needed for experiments run in triaxial loading conditions. With these adjustments, peak slip rates for 10 stick-slip events, with scaled moment magnitudes from -2.9 to 1.0, range from 3.3 to 10.3 m/sec, with a median of 5.4 m/sec. Both the earthquake and laboratory results are consistent with typical maximum peak slip rates averaging between 5 and 6 m/sec or corresponding maximum near-fault PGVs between 2.5 and 3 m/sec at seismogenic depths, independent of magnitude. Our ability to replicate maximum slip rates in the fault zones of earthquakes by adjusting the corresponding laboratory rates using the ratio of effective normal stresses acting on the fault planes suggests that the strength of the seismogenic crust is the important factor limiting the near-fault PGV.
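As a compact illustration of the adjustment described above, the sketch below scales a laboratory peak slip rate by the ratio of effective normal stresses and converts the result to a near-fault PGV using the stated factor-of-two relation; the laboratory slip rate and the stress ratio in the example are hypothetical, not values from the cited experiments or the KTB measurements.

```python
def scaled_peak_slip_rate(lab_slip_rate, sigma_n_seismogenic, sigma_n_lab):
    """Scale a laboratory stick-slip peak slip rate (m/s) to seismogenic
    conditions by the ratio of effective normal stresses, the adjustment
    described in the abstract for comparing lab events with earthquakes."""
    return lab_slip_rate * sigma_n_seismogenic / sigma_n_lab

def near_fault_pgv(peak_slip_rate):
    """The maximum peak slip rate on the fault is taken as roughly twice the
    largest near-fault PGV, so PGV ~ peak slip rate / 2."""
    return 0.5 * peak_slip_rate

# Hypothetical numbers: a 0.05 m/s laboratory event and a 100:1 effective
# normal-stress ratio give ~5 m/s on the fault, i.e. PGV of ~2.5 m/s.
v_fault = scaled_peak_slip_rate(0.05, sigma_n_seismogenic=100.0, sigma_n_lab=1.0)
print(v_fault, near_fault_pgv(v_fault))
```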
Heat transfer analysis of a lab scale solar receiver using the discrete ordinates model
NASA Astrophysics Data System (ADS)
Dordevich, Milorad C. W.
This thesis documents the development, implementation and simulation outcomes of the Discrete Ordinates Radiation Model in ANSYS FLUENT simulating the radiative heat transfer occurring in the San Diego State University lab-scale Small Particle Heat Exchange Receiver. In tandem, it also serves to document how well the Discrete Ordinates Radiation Model results compared with those from the in-house developed Monte Carlo Ray Trace Method in a number of simplified geometries. The secondary goal of this study was the inclusion of new physics, specifically buoyancy. Implementation of an additional Monte Carlo Ray Trace Method software package known as VEGAS, which was specifically developed to model lab scale solar simulators and provide directional, flux and beam spread information for the aperture boundary condition, was also a goal of this study. Upon establishment of the model, test cases were run to understand the predictive capabilities of the model. It was shown that agreement within 15% was obtained against laboratory measurements made in the San Diego State University Combustion and Solar Energy Laboratory with the metrics of comparison being the thermal efficiency and outlet, wall and aperture quartz temperatures. Parametric testing additionally showed that the thermal efficiency of the system was very dependent on the mass flow rate and particle loading. It was also shown that the orientation of the small particle heat exchange receiver was important in attaining optimal efficiency due to the fact that buoyancy induced effects could not be neglected. The analyses presented in this work were all performed on the lab-scale small particle heat exchange receiver. The lab-scale small particle heat exchange receiver is 0.38 m in diameter by 0.51 m tall and operated with an input irradiation flux of 3 kWth and a nominal mass flow rate of 2 g/s with a suspended particle mass loading of 2 g/m3. Finally, based on acumen gained during the implementation and development of the model, a new and improved design was simulated to predict how the efficiency within the small particle heat exchange receiver could be improved through a few simple internal geometry design modifications. It was shown that the theoretical calculated efficiency of the small particle heat exchange receiver could be improved from 64% to 87% with adjustments to the internal geometry, mass flow rate, and mass loading.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattison, M.B.; Blackman, H.S.; Novack, S.D.
The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements.
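As an illustration of the kind of uncertainty analysis listed under item (1), the following sketch propagates lognormally distributed basic-event probabilities through a simple two-cut-set sequence; the event names and distributions are hypothetical, not taken from the ASP models.

    import numpy as np

    # PRA-style parameter uncertainty propagation (conceptual sketch).
    rng = np.random.default_rng(5)
    n = 100_000
    p_pump  = rng.lognormal(mean=np.log(1e-3), sigma=0.7, size=n)   # hypothetical events
    p_valve = rng.lognormal(mean=np.log(5e-4), sigma=0.7, size=n)
    p_human = rng.lognormal(mean=np.log(1e-2), sigma=1.0, size=n)
    seq = p_pump * p_human + p_valve * p_human   # rare-event approximation over two cut sets
    print(np.percentile(seq, [5, 50, 95]))       # uncertainty band on the sequence frequency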
Reynolds Number Effects on Helicopter Rotor Hub Flow
NASA Astrophysics Data System (ADS)
Reich, David; Willits, Steve; Schmitz, Sven
2015-11-01
The 12 inch diameter water tunnel at the Pennsylvania State University Applied Research Laboratory was used with the objective of quantifying effects of Reynolds number scaling on drag and shed wake of model helicopter rotor hub flows. Hub diameter-based Reynolds numbers ranged from 1.06 million to 2.62 million. Measurements included steady and unsteady hub drag, as well as Particle Image Velocimetry. Results include time-averaged, phase-averaged, and spectral analysis of the drag and wake flow-field. A strong dependence of steady and unsteady drag on Reynolds number was noted, alluding to the importance of adequate Reynolds scaling for model helicopter rotor hubs that exhibit interaction between various bluff bodies.
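For reference, a hub-diameter-based Reynolds number is simply Re = ρVD/μ; the minimal sketch below uses water properties and hypothetical model dimensions chosen to land in the quoted range.

    # Hub-diameter-based Reynolds number, Re = rho*V*D/mu (illustrative values).
    rho_water = 998.0     # kg/m^3
    mu_water  = 1.0e-3    # Pa*s
    D_hub     = 0.2       # m, hypothetical model hub diameter
    V_tunnel  = 10.0      # m/s, hypothetical tunnel speed
    Re = rho_water * V_tunnel * D_hub / mu_water
    print(f"Re = {Re:.2e}")   # ~2e6, within the range quoted above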
Numerical simulation of small-scale thermal convection in the atmosphere
NASA Technical Reports Server (NTRS)
Somerville, R. C. J.
1973-01-01
A Boussinesq system is integrated numerically in three dimensions and time in a study of nonhydrostatic convection in the atmosphere. Simulation of cloud convection is achieved by the inclusion of parametrized effects of latent heat and small-scale turbulence. The results are compared with the cell structure observed in Rayleigh-Benard laboratory convection experiments in air. At a Rayleigh number of 4000, the numerical model adequately simulates the experimentally observed evolution, including some prominent transients, of a flow from a randomly perturbed initial conductive state into the final state of steady large-amplitude two-dimensional rolls. At Rayleigh number 9000, the model reproduces the experimentally observed unsteady equilibrium of vertically coherent oscillatory waves superimposed on rolls.
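The governing dimensionless group here is the Rayleigh number, Ra = gαΔTd³/(νκ); the sketch below evaluates it for illustrative air properties and a hypothetical layer depth chosen to give Ra near 4000.

    # Rayleigh number for a gas layer of depth d heated from below (illustrative
    # air properties; the abstract's comparison cases are Ra = 4000 and 9000).
    g     = 9.81      # m/s^2
    alpha = 3.4e-3    # 1/K, thermal expansion coefficient of air
    nu    = 1.5e-5    # m^2/s, kinematic viscosity
    kappa = 2.1e-5    # m^2/s, thermal diffusivity
    d     = 0.02      # m, hypothetical layer depth
    dT    = 4.7       # K, hypothetical temperature difference
    Ra = g * alpha * dT * d**3 / (nu * kappa)
    print(f"Ra = {Ra:.0f}")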
Peng, Yi; Xiong, Xiong; Adhikari, Kabindra; Knadel, Maria; Grunwald, Sabine; Greve, Mogens Humlekrog
2015-01-01
There is a great challenge in combining soil proximal spectra and remote sensing spectra to improve the accuracy of soil organic carbon (SOC) models. This is primarily because mixing of spectral data from different sources and technologies to improve soil models is still in its infancy. The first objective of this study was to integrate information of SOC derived from visible near-infrared reflectance (Vis-NIR) spectra in the laboratory with remote sensing (RS) images to improve predictions of topsoil SOC in the Skjern river catchment, Denmark. The second objective was to improve SOC prediction results by separately modeling uplands and wetlands. A total of 328 topsoil samples were collected and analyzed for SOC. Satellite Pour l’Observation de la Terre (SPOT5), Landsat Data Continuity Mission (Landsat 8) images, laboratory Vis-NIR and other ancillary environmental data including terrain parameters and soil maps were compiled to predict topsoil SOC using Cubist regression and Bayesian kriging. The results showed that the model developed from RS data, ancillary environmental data and laboratory spectral data yielded a lower root mean square error (RMSE) (2.8%) and higher R2 (0.59) than the model developed from only RS data and ancillary environmental data (RMSE: 3.6%, R2: 0.46). Plant-available water (PAW) was the most important predictor for all the models because of its close relationship with soil organic matter content. Moreover, vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI), were very important predictors in SOC spatial models. Furthermore, the ‘upland model’ was able to more accurately predict SOC compared with the ‘upland & wetland model’. However, the separately calibrated ‘upland and wetland model’ did not improve the prediction accuracy for wetland sites, since it was not possible to adequately discriminate the vegetation in the RS summer images. We conclude that laboratory Vis-NIR spectroscopy adds critical information that significantly improves the prediction accuracy of SOC compared to using RS data alone. We recommend the incorporation of laboratory spectra with RS data and other environmental data to improve soil spatial modeling and digital soil mapping (DSM). PMID:26555071
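The two reported skill metrics can be reproduced in a few lines; the sketch below computes RMSE and R² for hypothetical observed and predicted SOC values, not the study's data.

    import numpy as np

    def rmse_r2(observed, predicted):
        """RMSE and R^2 as used to compare the SOC models (sketch)."""
        observed, predicted = np.asarray(observed), np.asarray(predicted)
        resid = observed - predicted
        rmse = np.sqrt(np.mean(resid**2))
        r2 = 1.0 - np.sum(resid**2) / np.sum((observed - observed.mean())**2)
        return rmse, r2

    # Hypothetical SOC values (%) for illustration:
    print(rmse_r2([2.1, 3.4, 5.0, 1.2], [2.4, 3.1, 4.6, 1.5]))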
NASA Astrophysics Data System (ADS)
van den Ende, M. P. A.; Chen, J.; Ampuero, J.-P.; Niemeijer, A. R.
2018-05-01
Rate-and-state friction (RSF) is commonly used for the characterisation of laboratory friction experiments, such as velocity-step tests. However, the RSF framework provides little physical basis for the extrapolation of these results to the scales and conditions of natural fault systems, and so open questions remain regarding the applicability of the experimentally obtained RSF parameters for predicting seismic cycle transients. As an alternative to classical RSF, microphysics-based models offer means for interpreting laboratory and field observations, but are generally over-simplified with respect to heterogeneous natural systems. In order to bridge the temporal and spatial gap between the laboratory and nature, we have implemented existing microphysical model formulations into an earthquake cycle simulator. Through this numerical framework, we make a direct comparison between simulations exhibiting RSF-controlled fault rheology, and simulations in which the fault rheology is dictated by the microphysical model. Even though the input parameters for the RSF simulation are directly derived from the microphysical model, the microphysics-based simulations produce significantly smaller seismic event sizes than the RSF-based simulation, and suggest a more stable fault slip behaviour. Our results reveal fundamental limitations in using classical rate-and-state friction for the extrapolation of laboratory results. The microphysics-based approach offers a more complete framework in this respect, and may be used for a more detailed study of the seismic cycle in relation to material properties and fault zone pressure-temperature conditions.
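For readers unfamiliar with classical RSF, the sketch below integrates the standard aging-law formulation through a velocity step; the parameter values are illustrative and are not those derived from the microphysical model.

    import numpy as np

    # Minimal rate-and-state friction sketch (aging law), integrating the state
    # variable through an imposed velocity step; parameter values are illustrative.
    mu0, a, b, Dc, V0 = 0.6, 0.010, 0.015, 1e-5, 1e-6   # Dc [m], V0 [m/s]

    def velocity_step(V1=1e-6, V2=1e-5, t_end=20.0, dt=1e-3):
        theta = Dc / V1                      # steady state at the initial velocity
        t, mu, time = [], [], 0.0
        while time < t_end:
            V = V1 if time < 0.5 * t_end else V2
            theta += dt * (1.0 - V * theta / Dc)           # aging (Dieterich) law
            mu.append(mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc))
            t.append(time)
            time += dt
        return np.array(t), np.array(mu)

    t, mu = velocity_step()
    print(mu[0], mu[-1])   # steady-state friction before and after the step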
NASA Technical Reports Server (NTRS)
Kingsland, R. B.; Vaughn, J. E.; Singellton, R.
1973-01-01
Experimental aerodynamic investigations were conducted in a low speed wind tunnel on a scale model space shuttle vehicle (SSV) orbiter. The purpose of the test was to investigate the longitudinal and lateral-directional aerodynamic characteristics of the space shuttle orbiter. Emphasis was placed on model component, wing-glove, and wing-body fairing effects, as well as elevon, aileron, and rudder control effectiveness. Angles of attack from - 5 deg to + 30 deg and angles of sideslip of - 5 deg, 0 deg, and + 5 deg were tested. Static pressures were recorded on base, fuselage, and wing surfaces. Tufts and talc-kerosene flow visualization techniques were also utilized. The aerodynamic force balance results are presented in plotted and tabular form.
Phase Transitions and Scaling in Systems Far from Equilibrium
NASA Astrophysics Data System (ADS)
Täuber, Uwe C.
2017-03-01
Scaling ideas and renormalization group approaches proved crucial for a deep understanding and classification of critical phenomena in thermal equilibrium. Over the past decades, these powerful conceptual and mathematical tools were extended to continuous phase transitions separating distinct nonequilibrium stationary states in driven classical and quantum systems. In concordance with detailed numerical simulations and laboratory experiments, several prominent dynamical universality classes have emerged that govern large-scale, long-time scaling properties both near and far from thermal equilibrium. These pertain to genuine specific critical points as well as entire parameter space regions for steady states that display generic scale invariance. The exploration of nonstationary relaxation properties and associated physical aging scaling constitutes a complementary potent means to characterize cooperative dynamics in complex out-of-equilibrium systems. This review describes dynamic scaling features through paradigmatic examples that include near-equilibrium critical dynamics, driven lattice gases and growing interfaces, correlation-dominated reaction-diffusion systems, and basic epidemic models.
Small scale monitoring of a bioremediation barrier using miniature electrical resistivity tomography
NASA Astrophysics Data System (ADS)
Sentenac, Philippe; Hogson, Tom; Keenan, Helen; Kulessa, Bernd
2015-04-01
The aim of this study was to assess, in the laboratory, the efficiency of a barrier of oxygen release compound (ORC) in blocking and diverting diesel plume migration in a scaled aquifer model, using miniature electrical resistivity tomography (ERT) as the monitoring system. Two contaminant (diesel) plumes were injected into a soil model made of local sand and clay. The migration of the diesel plumes was imaged and monitored using a miniature resistivity array system that has proved accurate in resolving soil resistivity variations in small-scale soil models. ERT results reflected the lateral spreading and diversion of the diesel plumes in the unsaturated zone. One of the contaminant plumes was partially blocked by the ORC barrier, and a diversion and reorganisation of the diesel in the soil matrix was observed. Time-lapse ERT imaging showed that a dense non-aqueous phase liquid (DNAPL) contaminant like diesel can be monitored through a bioremediation barrier, and the technique is well suited to monitoring the efficiency of the barrier. Therefore, miniature ERT as a small-scale modelling tool could complement conventional techniques, which require more expensive and intrusive site investigation prior to remediation.
Transregional Collaborative Research Centre 32: Patterns in Soil-Vegetation-Atmosphere-Systems
NASA Astrophysics Data System (ADS)
Masbou, M.; Simmer, C.; Kollet, S.; Boessenkool, K.; Crewell, S.; Diekkrüger, B.; Huber, K.; Klitzsch, N.; Koyama, C.; Vereecken, H.
2012-04-01
The soil-vegetation-atmosphere (SVA) system is characterized by non-linear exchanges of mass, momentum and energy with complex patterns, structures and processes that act at different temporal and spatial scales. Under the TR32 framework, the characterisation of these structures and patterns will lead to a deeper qualitative and quantitative understanding of the SVA system, and ultimately to better predictions of the SVA state. Research in TR32 is based on three methodological pillars: Monitoring, Modelling and Data Assimilation. Focusing on the Rur catchment (Germany), patterns have been monitored continuously since 2006 using existing and novel geophysical and remote sensing techniques from the local to the catchment scale, based on ground-penetrating radar, induced polarization, radiomagnetotellurics, electrical resistivity tomography, boundary-layer scintillometry, lidar techniques, cosmic-ray sensing, microwave radiometry, and precipitation radars with polarization diversity. Modelling approaches involve the development of a scale-consistent coupled model platform: high-resolution numerical weather prediction (NWP; 400 m) and hydrological models (a few metres). In the second phase (2011-2014), the focus is on the integration of models from the groundwater to the atmosphere for both the m- and km-scale and on the extension of the experimental monitoring with respect to vegetation. The coupled modelling platform is based on the atmospheric model COSMO, the land surface model CLM and the hydrological model ParFlow. A scale-consistent two-way coupling is performed using the external OASIS coupler. Example work includes the transfer of laboratory methods to the field; measurements of patterns of soil carbon, evapotranspiration and respiration in the field; catchment-scale modeling of exchange processes; and the setup of an atmospheric boundary layer monitoring network. These modern and predominantly non-invasive measurement techniques are exploited in combination with advanced modelling systems through data assimilation to yield improved numerical models for the prediction of water, energy and CO2 transfer, accounting for the patterns occurring at various scales.
PAM: Particle automata model in simulation of Fusarium graminearum pathogen expansion.
Wcisło, Rafał; Miller, S Shea; Dzwinel, Witold
2016-01-21
The multi-scale nature and inherent complexity of biological systems are a great challenge for computer modeling and classical modeling paradigms. We present a novel particle automata modeling metaphor in the context of developing a 3D model of Fusarium graminearum infection in wheat. The system consisting of the host plant and Fusarium pathogen cells can be represented by an ensemble of discrete particles defined by a set of attributes. The cells-particles can interact with each other mimicking mechanical resistance of the cell walls and cell coalescence. The particles can move, while some of their attributes can be changed according to prescribed rules. The rules can represent cellular scales of a complex system, while the integrated particle automata model (PAM) simulates its overall multi-scale behavior. We show that due to the ability of mimicking mechanical interactions of Fusarium tip cells with the host tissue, the model is able to simulate realistic penetration properties of the colonization process reproducing both vertical and lateral Fusarium invasion scenarios. The comparison of simulation results with micrographs from laboratory experiments shows encouraging qualitative agreement between the two. Copyright © 2015 Elsevier Ltd. All rights reserved.
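A generic flavour of the particle-automaton idea (discrete particles carrying attributes that are updated by neighbour rules, plus simple motion) can be sketched as below; the rules and parameters are placeholders and do not reproduce the published PAM.

    import numpy as np

    # Generic particle-automaton sketch (not the published PAM rules): each particle
    # carries a position and a discrete state; a rule updates states based on
    # neighbours within an interaction radius, and positions move slightly each step.
    rng = np.random.default_rng(0)
    pos   = rng.uniform(0, 1, size=(200, 3))        # particle positions
    state = np.zeros(200, dtype=int)                # 0 = host cell, 1 = infected
    state[0] = 1                                    # hypothetical initial infection
    r_int, dt = 0.08, 0.1

    for step in range(50):
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        infected_neighbour = ((d < r_int) & (state[None, :] == 1)).any(axis=1)
        state = np.where(infected_neighbour, 1, state)      # attribute-update rule
        pos += dt * 0.01 * rng.standard_normal(pos.shape)   # simple particle motion
    print(state.sum(), "infected particles after 50 steps")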
Quick clay and landslides of clayey soils.
Khaldoun, Asmae; Moller, Peder; Fall, Abdoulaye; Wegdam, Gerard; De Leeuw, Bert; Méheust, Yves; Otto Fossum, Jon; Bonn, Daniel
2009-10-30
We study the rheology of quick clay, an unstable soil responsible for many landslides. We show that above a critical stress the material starts flowing abruptly with a very large viscosity decrease caused by the flow. This leads to avalanche behavior that accounts for the instability of quick clay soils. Reproducing landslides on a small scale in the laboratory shows that an additional factor that determines the violence of the slides is the inhomogeneity of the flow. We propose a simple yield stress model capable of reproducing the laboratory landslide data, allowing us to relate landslides to the measured rheology.
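A minimal yield-stress sketch of the kind of behaviour described, with no flow below a critical stress and a sharp drop in apparent viscosity above it, is given below using a Herschel-Bulkley-type law; the parameters are illustrative, not the fitted quick-clay rheology.

    import numpy as np

    # Simple yield-stress (Herschel-Bulkley-type) sketch: solid-like below tau_y,
    # sharply decreasing apparent viscosity once tau_y is exceeded.
    tau_y, K, n = 50.0, 5.0, 0.5      # Pa, Pa*s^n, flow index (illustrative)

    def apparent_viscosity(tau):
        if tau <= tau_y:
            return np.inf                              # no flow below the yield stress
        gamma_dot = ((tau - tau_y) / K) ** (1.0 / n)   # shear rate above yield
        return tau / gamma_dot

    for tau in (40.0, 51.0, 60.0, 100.0):
        print(tau, apparent_viscosity(tau))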
Approximate Seismic Diffusive Models of Near-Receiver Geology: Applications from Lab Scale to Field
NASA Astrophysics Data System (ADS)
King, Thomas; Benson, Philip; De Siena, Luca; Vinciguerra, Sergio
2017-04-01
This paper presents a novel and simple method of seismic envelope analysis that can be applied at multiple scales, e.g., the field scale (m to km) and the laboratory scale (mm to cm), and utilises the diffusive approximation of the seismic wavefield (Wegler, 2003). Coefficient values for diffusion and attenuation are obtained from seismic coda energies and are used to describe the rate at which seismic energy is scattered and attenuated in the local medium around a receiver. Values are acquired by performing a linear least-squares inversion of coda energies calculated in successive time windows along a seismic trace. Acoustic emission data were taken from piezoelectric transducers (PZT), with typical resonance frequencies of 1-5 MHz, glued around rock samples during deformation laboratory experiments carried out using a servo-controlled triaxial testing machine, in which a shear/damage zone is generated under compression after the nucleation, growth and coalescence of microcracks. Passive field data were collected from conventional geophones during the 2004-2008 eruption of Mount St. Helens volcano (MSH), USA, where a sudden reawakening of volcanic activity and new dome growth occurred. The laboratory study shows a strong correlation between variations of the coefficients over time and the increase of differential stress as the experiment progresses. The field study links structural variations present in the near-surface geology, including those seen in previous geophysical studies of the area, to these same coefficients. Both studies show a correlation between frequency and structural feature size, i.e. landslide slip-planes and microcracks, with higher frequencies being much more sensitive to smaller-scale features and vice versa.
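A sketch of the inversion step, assuming the diffusion approximation E(t,r) proportional to t^(-3/2) exp(-r²/(4dt)) exp(-bt) so that ln(E·t^(3/2)) is linear in 1/t and t, is given below with synthetic data; the paper's exact processing may differ.

    import numpy as np

    # Diffusive coda inversion sketch: fit ln(E * t^{3/2}) = c0 + c1*(1/t) + c2*t,
    # then d = -r^2/(4*c1) and b = -c2. All values are synthetic.
    r = 0.05                         # m, hypothetical source-receiver distance
    d_true, b_true = 0.8, 40.0       # m^2/s, 1/s (synthetic "truth")
    t = np.linspace(1e-5, 2e-4, 40)  # s, coda time windows
    E = t**-1.5 * np.exp(-r**2 / (4 * d_true * t)) * np.exp(-b_true * t)

    G = np.column_stack([np.ones_like(t), 1.0 / t, t])      # design matrix
    coeff, *_ = np.linalg.lstsq(G, np.log(E * t**1.5), rcond=None)
    d_est = -r**2 / (4 * coeff[1])
    b_est = -coeff[2]
    print(d_est, b_est)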
Drive Scaling of hohlraums heated with 2ω light
NASA Astrophysics Data System (ADS)
Oades, Kevin; Foster, John; Slark, Gary; Stevenson, Mark; Kauffman, Robert; Suter, Larry; Hinkel, Denise; Miller, Mike; Schneider, Marilyn; Springer, Paul
2002-11-01
We report on experiments using a single beam from the AWE's HELEN laser to study scaling of hohlraum drive with hohlraum scale size. The hohlraums were heated with 400 J in a 1 ns square pulse, with and without a phase plate. The drive was measured using a PCD and an FRD. Scattered light was measured using a full-aperture backscatter system. Drive is consistent with hohlraum scaling and LASNEX modeling using the absorbed laser energy. Bremsstrahlung from fast electrons and M-shell x-ray production were also measured. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
Biological Conversion of Sugars to Hydrocarbons Technology Pathway
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Ryan; Biddy, Mary J.; Tan, Eric
2013-03-31
In support of the Bioenergy Technologies Office, the National Renewable Energy Laboratory (NREL) and the Pacific Northwest National Laboratory (PNNL) are undertaking studies of biomass conversion technologies to identify barriers and target research toward reducing conversion costs. Process designs and preliminary economic estimates for each of these pathway cases were developed using rigorous modeling tools (Aspen Plus and Chemcad). These analyses incorporated the best information available at the time of development, including data from recent pilot and bench-scale demonstrations, collaborative industrial and academic partners, and published literature and patents. This technology pathway case investigates the biological conversion of biomass-derived sugars to hydrocarbon biofuels, utilizing data from recent literature references and information consistent with recent pilot scale demonstrations at NREL. Technical barriers and key research needs have been identified that should be pursued for the pathway to become competitive with petroleum-derived gasoline, diesel and jet range hydrocarbon blendstocks.
Scaled laboratory experiments explain the kink behaviour of the Crab Nebula jet
Li, C. K.; Tzeferacos, P.; Lamb, D.; Gregori, G.; Norreys, P. A.; Rosenberg, M. J.; Follett, R. K.; Froula, D. H.; Koenig, M.; Seguin, F. H.; Frenje, J. A.; Rinderknecht, H. G.; Sio, H.; Zylstra, A. B.; Petrasso, R. D.; Amendt, P. A.; Park, H. S.; Remington, B. A.; Ryutov, D. D.; Wilks, S. C.; Betti, R.; Frank, A.; Hu, S. X.; Sangster, T. C.; Hartigan, P.; Drake, R. P.; Kuranz, C. C.; Lebedev, S. V.; Woolsey, N. C.
2016-01-01
The remarkable discovery by the Chandra X-ray observatory that the Crab nebula's jet periodically changes direction provides a challenge to our understanding of astrophysical jet dynamics. It has been suggested that this phenomenon may be the consequence of magnetic fields and magnetohydrodynamic instabilities, but experimental demonstration in a controlled laboratory environment has remained elusive. Here we report experiments that use high-power lasers to create a plasma jet that can be directly compared with the Crab jet through well-defined physical scaling laws. The jet generates its own embedded toroidal magnetic fields; as it moves, plasma instabilities result in multiple deflections of the propagation direction, mimicking the kink behaviour of the Crab jet. The experiment is modelled with three-dimensional numerical simulations that show exactly how the instability develops and results in changes of direction of the jet. PMID:27713403
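One commonly used similarity criterion in laboratory astrophysics is matching the Euler number, Eu = v√(ρ/p), between the laboratory and astrophysical flows; the sketch below evaluates it for purely hypothetical values and does not reproduce the paper's full set of scaling laws, which may involve additional dimensionless groups.

    import numpy as np

    # Hedged sketch of one similarity criterion used in scaled laboratory astrophysics:
    # the Euler number Eu = v*sqrt(rho/p). All values below are illustrative placeholders.
    def euler_number(v, rho, p):
        return v * np.sqrt(rho / p)

    lab   = euler_number(v=2.0e5, rho=1.0, p=4.0e10)       # m/s, kg/m^3, Pa (hypothetical)
    astro = euler_number(v=2.1e8, rho=1.0e-18, p=4.4e-2)   # hypothetical jet-like values
    print(lab, astro)   # comparable Eu suggests dynamically similar flows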
NASA Astrophysics Data System (ADS)
Tatomir, Alexandru Bogdan A. C.; Sauter, Martin
2017-04-01
A number of theoretical approaches estimating the interfacial area between two fluid phases are available (Schaffer et al.,2013). Kinetic interface sensitive (KIS) tracers are used to describe the evolution of fluid-fluid interfaces advancing in two phase porous media systems (Tatomir et al., 2015). Initially developed to offer answers about the supercritical (sc)CO2 plume movement and the efficiency of trapping in geological carbon storage reservoirs, KIS tracers are tested in dynamic controlled laboratory conditions. N-octane and water, analogue to a scCO2 - brine system, are used. The KIS tracer is dissolved in n-octane, which is injected as the non-wetting phase in a fully water saturated porous media column. The porous system is made up of spherical glass beads with sizes of 100-250 μm. Subsequently, the KIS tracer follows a hydrolysis reaction over the n-octane - water interface resulting in an acid and phenol which are both water soluble. The fluid-fluid interfacial area is described numerically with the help of constitutive-relationships derived from the Brooks-Corey model. The specific interfacial area is determined numerically from pore scale calculations, or from different literature sources making use of pore network model calculations (Joekar-Niasar et al., 2008). This research describes the design of the laboratory setup and compares the break-through curves obtained with the forward model and in the laboratory experiment. Furthermore, first results are shown in the attempt to validate the immiscible two phase flow reactive transport numerical model with dynamic laboratory column experiments. Keywords: Fluid-fluid interfacial area, KIS tracers, model validation, CCS, geological storage of CO2
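The Brooks-Corey backbone mentioned above can be sketched as a capillary pressure-saturation relation; the specific interfacial-area relationship built on it in the study is not reproduced here, and the parameters are illustrative.

    import numpy as np

    # Brooks-Corey capillary pressure sketch, Pc = Pe * Se^(-1/lambda).
    Pe, lam, Swr = 2.0e3, 2.0, 0.1        # entry pressure [Pa], pore-size index, residual Sw

    def pc_brooks_corey(Sw):
        Se = np.clip((Sw - Swr) / (1.0 - Swr), 1e-6, 1.0)   # effective saturation
        return Pe * Se ** (-1.0 / lam)

    for Sw in (0.2, 0.5, 0.8, 1.0):
        print(Sw, pc_brooks_corey(Sw))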
Reference Model 6 (RM6): Oscillating Wave Energy Converter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bull, Diana L; Smith, Chris; Jenne, Dale Scott
This report is an addendum to SAND2013-9040: Methodology for Design and Economic Analysis of Marine Energy Conversion (MEC) Technologies. This report describes an Oscillating Water Column Wave Energy Converter reference model design in a complementary manner to Reference Models 1-4 contained in the above report. In this report, a conceptual design for an Oscillating Water Column Wave Energy Converter (WEC) device appropriate for the modeled reference resource site was identified, and a detailed backward bent duct buoy (BBDB) device design was developed using a combination of numerical modeling tools and scaled physical models. Our team used the methodology in SAND2013-9040 for the economic analysis that included costs for designing, manufacturing, deploying, and operating commercial-scale MEC arrays, up to 100 devices. The methodology was applied to identify key cost drivers and to estimate levelized cost of energy (LCOE) for this RM6 Oscillating Water Column device in dollars per kilowatt-hour ($/kWh). Although many costs were difficult to estimate at this time due to the lack of operational experience, the main contribution of this work was to disseminate a detailed set of methodologies and models that allow for an initial cost analysis of this emerging technology. This project is sponsored by the U.S. Department of Energy's (DOE) Wind and Water Power Technologies Program Office (WWPTO), within the Office of Energy Efficiency & Renewable Energy (EERE). Sandia National Laboratories, the lead in this effort, collaborated with partners from National Laboratories, industry, and universities to design and test this reference model.
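A stripped-down LCOE calculation in the spirit of the reference-model methodology is shown below; the fixed charge rate, capital cost, operating cost, and energy production are illustrative placeholders rather than RM6 results.

    # Simplified LCOE sketch: LCOE = (FCR * CapEx + annual OpEx) / annual energy.
    def lcoe(capex, opex_per_year, aep_kwh, fixed_charge_rate=0.108):
        """Levelized cost of energy in $/kWh (illustrative fixed charge rate)."""
        return (fixed_charge_rate * capex + opex_per_year) / aep_kwh

    print(lcoe(capex=50e6, opex_per_year=2.5e6, aep_kwh=20e6))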
Warp-X: A new exascale computing platform for beam–plasma simulations
Vay, J. -L.; Almgren, A.; Bell, J.; ...
2018-01-31
Turning the current experimental plasma accelerator state-of-the-art from a promising technology into mainstream scientific tools depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales. As part of the U.S. Department of Energy's Exascale Computing Project, a team from Lawrence Berkeley National Laboratory, in collaboration with teams from SLAC National Accelerator Laboratory and Lawrence Livermore National Laboratory, is developing a new plasma accelerator simulation tool that will harness the power of future exascale supercomputers for high-performance modeling of plasma accelerators. We present the various components of the codes such as the new Particle-In-Cell Scalable Application Resource (PICSAR) and the redesigned adaptive mesh refinement library AMReX, which are combined with redesigned elements of the Warp code, in the new WarpX software. Lastly, the code structure, status, early examples of applications and plans are discussed.
Puget Sound Dissolved Oxygen Modeling Study: Development of an Intermediate-Scale Hydrodynamic Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Khangaonkar, Tarang; Labiosa, Rochelle G.
2010-11-30
The Washington State Department of Ecology contracted with Pacific Northwest National Laboratory to develop an intermediate-scale hydrodynamic and water quality model to study dissolved oxygen and nutrient dynamics in Puget Sound and to help define potential Puget Sound-wide nutrient management strategies and decisions. Specifically, the project is expected to help determine 1) if current and potential future nitrogen loadings from point and non-point sources are significantly impairing water quality at a large scale and 2) what level of nutrient reductions are necessary to reduce or eliminate human impacts to dissolved oxygen levels in the sensitive areas. In this study, an intermediate-scale hydrodynamic model of Puget Sound was developed to simulate the hydrodynamics of Puget Sound and the Northwest Straits for the year 2006. The model was constructed using the unstructured Finite Volume Coastal Ocean Model. The overall model grid resolution within Puget Sound in its present configuration is about 880 m. The model was driven by tides, river inflows, and meteorological forcing (wind and net heat flux) and simulated tidal circulations, temperature, and salinity distributions in Puget Sound. The model was validated against observed data of water surface elevation, velocity, temperature, and salinity at various stations within the study domain. Model validation indicated that the model simulates tidal elevations and currents in Puget Sound well and reproduces the general patterns of the temperature and salinity distributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edlund, E. M.; Ji, H.
2015-10-06
Here, we present fluid velocity measurements in a modified Taylor-Couette device operated in the quasi-Keplerian regime, where it is observed that nearly ideal flows exhibit self-similarity under scaling of the Reynolds number. In contrast, nonideal flows show progressive departure from ideal Couette as the Reynolds number is increased. We present a model that describes the observed departures from ideal Couette rotation as a function of the fluxes of angular momentum across the boundaries, capturing the dependence on Reynolds number and boundary conditions.
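The ideal Couette baseline against which the departures are measured is Ω(r) = a + b/r²; the sketch below evaluates it for hypothetical radii and rotation rates chosen in the quasi-Keplerian sense (inner cylinder faster, angular momentum increasing outward).

    import numpy as np

    # Ideal (laminar) Couette rotation profile Omega(r) = a + b / r^2.
    # Cylinder radii and rotation rates below are illustrative placeholders.
    r1, r2 = 0.07, 0.20            # m
    Om1, Om2 = 40.0, 5.3           # rad/s (quasi-Keplerian: Om1 > Om2, Om1*r1^2 < Om2*r2^2)

    a = (Om2 * r2**2 - Om1 * r1**2) / (r2**2 - r1**2)
    b = (Om1 - Om2) * r1**2 * r2**2 / (r2**2 - r1**2)

    r = np.linspace(r1, r2, 5)
    print(a + b / r**2)            # ideal Couette angular velocity at sample radii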
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos-Villalobos, Hector J; Polsky, Yarom; Kisner, Roger A
2015-09-01
For the past quarter, we have placed our effort in implementing the first version of the Model-Based Iterative Reconstruction (MBIR) algorithm, assembling and testing the electronics, designing transducer mounts, and defining our laboratory test samples. We have successfully developed the first implementation of MBIR for ultrasound imaging. The current algorithm was tested with synthetic data, and we are currently making new modifications for the reconstruction of real ultrasound data. Besides assembling and testing the electronics, we developed a LabView graphical user interface (GUI) to fully control the ultrasonic phased array, adjust the time delays of the transducers, and store the measured reflections. As part of preparing for a laboratory-scale demonstration, the design and fabrication of the laboratory samples has begun. Three cement blocks with embedded objects will be fabricated, characterized, and used to demonstrate the capabilities of the system. During the next quarter, we will continue to improve the current MBIR forward model and integrate the reconstruction code with the LabView GUI. In addition, we will define focal laws for the ultrasonic phased array and perform the laboratory demonstration. We expect to perform the laboratory demonstration by the end of October 2015.
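The model-based iterative structure can be illustrated schematically as a regularized least-squares problem solved by gradient descent; the actual MBIR code uses an ultrasound-specific forward model and prior, so the sketch below is only conceptual.

    import numpy as np

    # Schematic model-based iterative reconstruction: minimize
    # 0.5*||y - A x||^2 + 0.5*lam*||x||^2 by gradient descent (synthetic data).
    rng = np.random.default_rng(1)
    A = rng.standard_normal((80, 40))        # stand-in forward model
    x_true = rng.standard_normal(40)
    y = A @ x_true + 0.01 * rng.standard_normal(80)

    lam, step = 0.1, 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(40)
    for _ in range(500):
        grad = A.T @ (A @ x - y) + lam * x   # gradient of the cost function
        x -= step * grad
    print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # relative error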
NASA Technical Reports Server (NTRS)
Starnes, James H., Jr.; Newman, James C., Jr.; Harris, Charles E.; Piascik, Robert S.; Young, Richard D.; Rose, Cheryl A.
2003-01-01
Analysis methodologies for predicting fatigue-crack growth from rivet holes in panels subjected to cyclic loads and for predicting the residual strength of aluminum fuselage structures with cracks and subjected to combined internal pressure and mechanical loads are described. The fatigue-crack growth analysis methodology is based on small-crack theory and a plasticity induced crack-closure model, and the effect of a corrosive environment on crack-growth rate is included. The residual strength analysis methodology is based on the critical crack-tip-opening-angle fracture criterion that characterizes the fracture behavior of a material of interest, and a geometric and material nonlinear finite element shell analysis code that performs the structural analysis of the fuselage structure of interest. The methodologies have been verified experimentally for structures ranging from laboratory coupons to full-scale structural components. Analytical and experimental results based on these methodologies are described and compared for laboratory coupons and flat panels, small-scale pressurized shells, and full-scale curved stiffened panels. The residual strength analysis methodology is sufficiently general to include the effects of multiple-site damage on structural behavior.
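For orientation, the simplest cycle-by-cycle crack-growth bookkeeping is a Paris-law integration; the methodology above uses small-crack theory and a plasticity-induced closure model, which are more elaborate than this sketch, and the constants below are illustrative.

    import numpy as np

    # Basic Paris-law crack-growth sketch, da/dN = C*(dK)^m with dK = Y*dS*sqrt(pi*a).
    C, m = 1e-10, 3.0            # Paris constants (illustrative, SI/MPa*sqrt(m) units)
    Y, dS = 1.12, 100.0          # geometry factor, stress range [MPa]
    a, a_crit, N = 0.5e-3, 10e-3, 0   # initial crack, final crack [m], cycle counter

    while a < a_crit:
        dK = Y * dS * np.sqrt(np.pi * a)
        a += C * dK**m
        N += 1
    print(f"{N} cycles to grow from 0.5 mm to 10 mm")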
Analysis of a Digital Technique for Frequency Transposition of Speech.
1985-09-01
scaled excitation function drives the vocal tract model. In a phone interview with James Kaiser of Bell Laboratories, he mentioned that current thinking...is processed using the Fast Fourier Transform (FFT) and then low-pass filtered if desired. (Figure residue: flow chart for the FFT and low-pass filtering stage.)
Particle size distributions from laboratory-scale biomass fires using fast response instruments
S Hosseini; L. Qi; D. Cocker; D. Weise; A. Miller; M. Shrivastava; J.W. Miller; S. Mahalingam; M. Princevac; H. Jung
2010-01-01
Particle size distribution from biomass combustion is an important parameter as it affects air quality, climate modelling and health effects. To date, particle size distributions reported from prior studies vary not only due to difference in fuels but also difference in experimental conditions. This study aims to report characteristics of particle size distributions in...
Using the Moon as a Tool for Discovery-Oriented Learning.
ERIC Educational Resources Information Center
Cummins, Robert Hays; Ritger, Scott David; Myers, Christopher Adam
1992-01-01
Students test the hypothesis that the moon revolves east to west around the earth, determine by observation approximately how many degrees the moon revolves per night, and develop a scale model of the earth-sun-moon system in this laboratory exercise. Students are actively involved in the scientific process and are introduced to the importance of…
Scientific rigor through videogames.
Treuille, Adrien; Das, Rhiju
2014-11-01
Hypothesis-driven experimentation - the scientific method - can be subverted by fraud, irreproducibility, and lack of rigorous predictive tests. A robust solution to these problems may be the 'massive open laboratory' model, recently embodied in the internet-scale videogame EteRNA. Deploying similar platforms throughout biology could enforce the scientific method more broadly. Copyright © 2014 Elsevier Ltd. All rights reserved.
Asymptotic Expansion Homogenization for Multiscale Nuclear Fuel Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hales, J. D.; Tonks, M. R.; Chockalingam, K.
2015-03-01
Engineering scale nuclear fuel performance simulations can benefit by utilizing high-fidelity models running at a lower length scale. Lower length-scale models provide a detailed view of the material behavior that is used to determine the average material response at the macroscale. These lower length-scale calculations may provide insight into material behavior where experimental data is sparse or nonexistent. This multiscale approach is especially useful in the nuclear field, since irradiation experiments are difficult and expensive to conduct. The lower length-scale models complement the experiments by influencing the types of experiments required and by reducing the total number of experiments needed. This multiscale modeling approach is a central motivation in the development of the BISON-MARMOT fuel performance codes at Idaho National Laboratory. These codes seek to provide more accurate and predictive solutions for nuclear fuel behavior. One critical aspect of multiscale modeling is the ability to extract the relevant information from the lower length-scale simulations. One approach, the asymptotic expansion homogenization (AEH) technique, has proven to be an effective method for determining homogenized material parameters. The AEH technique prescribes a system of equations to solve at the microscale that are used to compute homogenized material constants for use at the engineering scale. In this work, we employ AEH to explore the effect of evolving microstructural thermal conductivity and elastic constants on nuclear fuel performance. We show that the AEH approach fits cleanly into the BISON and MARMOT codes and provides a natural, multidimensional homogenization capability.
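The idea of passing microscale information upward can be illustrated with a one-dimensional laminate, for which the cell problem yields the volume-weighted harmonic mean across layers; the AEH implementation in BISON/MARMOT solves full multidimensional cell problems, so this is only a conceptual sketch with hypothetical phase properties.

    import numpy as np

    # 1-D homogenization illustration: effective conductivity of a periodic laminate.
    k_phase  = np.array([5.0, 1.0])      # W/(m*K), hypothetical phase conductivities
    vol_frac = np.array([0.7, 0.3])

    k_series   = 1.0 / np.sum(vol_frac / k_phase)   # loading across the layers (harmonic mean)
    k_parallel = np.sum(vol_frac * k_phase)         # loading along the layers (arithmetic mean)
    print(k_series, k_parallel)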
NASA Astrophysics Data System (ADS)
Vogler, D.; Settgast, R. R.; Annavarapu, C.; Madonna, C.; Bayer, P.; Amann, F.
2018-02-01
In this work, we present the application of a fully coupled hydro-mechanical method to investigate the effect of fracture heterogeneity on fluid flow through fractures at the laboratory scale. Experimental and numerical studies of fracture closure behavior in the presence of heterogeneous mechanical and hydraulic properties are presented. We compare the results of two sets of laboratory experiments on granodiorite specimens against numerical simulations in order to investigate the mechanical fracture closure and the hydro-mechanical effects, respectively. The model captures fracture closure behavior and predicts a nonlinear increase in fluid injection pressure with loading. Results from this study indicate that the heterogeneous aperture distributions measured for experiment specimens can be used as model input for a local cubic law model in a heterogeneous fracture to capture fracture closure behavior and corresponding fluid pressure response.
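A local cubic law sketch for a single flow path of heterogeneous apertures in series is given below; the aperture field is synthetic, not the measured distribution from the granodiorite specimens.

    import numpy as np

    # Local cubic law sketch: each cell transmits flow in proportion to its aperture
    # cubed; for cells in series the effective conductance is the harmonic combination.
    rng = np.random.default_rng(2)
    a = np.abs(rng.normal(100e-6, 30e-6, size=50))   # m, synthetic heterogeneous apertures
    mu, w, L, dp = 1e-3, 0.05, 0.1, 1e5              # Pa*s, fracture width [m], length [m], Pa

    dx = L / a.size
    cond = w * a**3 / (12.0 * mu * dx)               # conductance of each cell [m^3/(Pa*s)]
    cond_eff = 1.0 / np.sum(1.0 / cond)              # cells in series
    print("Q =", cond_eff * dp, "m^3/s")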
A compendium of chameleon constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burrage, Clare; Sakstein, Jeremy, E-mail: clare.burrage@nottingham.ac.uk, E-mail: jeremy.sakstein@port.ac.uk
2016-11-01
The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth forces. The chameleon is a popular dark energy candidate and also arises in f(R) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth forces, it is not unobservable, and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language, a particular problem when comparing astrophysical and laboratory searches, making it difficult to understand what regions of parameter space remain. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.
Simulation of two-dimensional turbulent flows in a rotating annulus
NASA Astrophysics Data System (ADS)
Storey, Brian D.
2004-05-01
Rotating water tank experiments have been used to study fundamental processes of atmospheric and geophysical turbulence in a controlled laboratory setting. When these tanks are undergoing strong rotation the forced turbulent flow becomes highly two dimensional along the axis of rotation. An efficient numerical method has been developed for simulating the forced quasi-geostrophic equations in an annular geometry to model current laboratory experiments. The algorithm employs a spectral method with Fourier series and Chebyshev polynomials as basis functions. The algorithm has been implemented on a parallel architecture to allow modelling of a wide range of spatial scales over long integration times. This paper describes the derivation of the model equations, numerical method, testing and performance of the algorithm. Results provide reasonable agreement with the experimental data, indicating that such computations can be used as a predictive tool to design future experiments.
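The core building block of such a spectral method is differentiation in wavenumber space; the minimal Fourier example below illustrates it (the actual solver also uses Chebyshev polynomials radially and a quasi-geostrophic formulation).

    import numpy as np

    # Fourier spectral derivative on a periodic domain: differentiate by multiplying
    # the FFT coefficients by i*k and transforming back.
    N = 64
    x = 2 * np.pi * np.arange(N) / N
    u = np.sin(3 * x)
    k = np.fft.fftfreq(N, d=1.0 / N) * 1j            # wavenumbers times i
    dudx = np.real(np.fft.ifft(k * np.fft.fft(u)))
    print(np.max(np.abs(dudx - 3 * np.cos(3 * x))))  # error at machine precision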
Pretreatment Engineering Platform Phase 1 Final Test Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurath, Dean E.; Hanson, Brady D.; Minette, Michael J.
2009-12-23
Pacific Northwest National Laboratory (PNNL) was tasked by Bechtel National Inc. (BNI) on the River Protection Project, Hanford Tank Waste Treatment and Immobilization Plant (RPP-WTP) project to conduct testing to demonstrate the performance of the WTP Pretreatment Facility (PTF) leaching and ultrafiltration processes at an engineering scale. In addition to the demonstration, the testing was to address specific technical issues identified in Issue Response Plan for Implementation of External Flowsheet Review Team (EFRT) Recommendations - M12, Undemonstrated Leaching Processes. Testing was conducted in a 1/4.5-scale mock-up of the PTF ultrafiltration system, the Pretreatment Engineering Platform (PEP). Parallel laboratory testing was conducted in various PNNL laboratories to allow direct comparison of process performance at an engineering scale and a laboratory scale. This report presents and discusses the results of those tests.
Radiation breakage of DNA: a model based on random-walk chromatin structure
NASA Technical Reports Server (NTRS)
Ponomarev, A. L.; Sachs, R. K.
2001-01-01
Monte Carlo computer software, called DNAbreak, has recently been developed to analyze observed non-random clustering of DNA double strand breaks in chromatin after exposure to densely ionizing radiation. The software models coarse-grained configurations of chromatin and radiation tracks, small-scale details being suppressed in order to obtain statistical results for larger scales, up to the size of a whole chromosome. We here give an analytic counterpart of the numerical model, useful for benchmarks, for elucidating the numerical results, for analyzing the assumptions of a more general but less mechanistic "randomly-located-clusters" formalism, and, potentially, for speeding up the calculations. The equations characterize multi-track DNA fragment-size distributions in terms of one-track action; an important step in extrapolating high-dose laboratory results to the much lower doses of main interest in environmental or occupational risk estimation. The approach can utilize the experimental information on DNA fragment-size distributions to draw inferences about large-scale chromatin geometry during cell-cycle interphase.
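The fragment-size bookkeeping can be illustrated with a much-simplified Monte Carlo in which break points are placed uniformly at random along a chromosome; the DNAbreak model itself uses coarse-grained random-walk chromatin geometry and track-correlated break clusters.

    import numpy as np

    # Simplified fragment-size Monte Carlo (uniform, uncorrelated breaks).
    rng = np.random.default_rng(3)
    L_chrom, n_breaks, n_cells = 100.0, 8, 2000      # Mbp, breaks per chromosome, trials
    fragments = []
    for _ in range(n_cells):
        breaks = np.sort(rng.uniform(0, L_chrom, n_breaks))
        edges = np.concatenate(([0.0], breaks, [L_chrom]))
        fragments.extend(np.diff(edges))             # fragment lengths between breaks
    hist, bins = np.histogram(fragments, bins=20, range=(0, L_chrom))
    print(hist)                                      # fragment-size distribution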
Cold Flow Testing of a Modified Subscale Model Exhaust System for a Space Based Laser
2004-06-01
The aim of this research was a continued study of gas-dynamic phenomena that occurred in a set of stacked nozzles as reported by Captains...join the vacuum and test sections. The goals of this research were twofold: first, modify the original scale-model of the stacked cylindrical...Defense Advanced Research Projects Agency (DARPA), in conjunction with the Airborne Laser Laboratory, have studied the use of an Airborne Laser (ABL
Spatial structure and scaling of macropores in hydrological process at small catchment scale
NASA Astrophysics Data System (ADS)
Silasari, Rasmiaditya; Broer, Martine; Blöschl, Günter
2013-04-01
During rainfall events, the formation of overland flow can occur under the circumstances of saturation excess and/or infiltration excess. These conditions are affected by the soil moisture state, which represents the soil water content in micropores and macropores. Macropores act as pathways for preferential flow and have been widely studied locally. However, very little is known about the spatial structure and conductivity of macropores and other flow characteristics at the catchment scale. This study will analyze these characteristics to better understand their importance in hydrological processes. The research will be conducted in the Petzenkirchen Hydrological Open Air Laboratory (HOAL), a 64 ha catchment located 100 km west of Vienna. The land use is divided between arable land (87%), pasture (5%), forest (6%) and paved surfaces (2%). Video cameras will be installed on an agricultural field to monitor the overland flow pattern during rainfall events. A wireless soil moisture network is also installed within the monitored area. These field data will be combined to analyze the soil moisture state and the corresponding surface runoff occurrence. The variability of the macropore spatial structure in the observed area (field scale) will then be assessed based on the topography and soil data. Soil characteristics will be supported with laboratory experiments on soil matrix flow to obtain proper definitions of the spatial structure of macropores and its variability. A coupled, physically based, distributed model of surface and subsurface flow will be used to simulate the variability of the macropore spatial structure and its effect on the flow behaviour. This model will be validated by simulating the observed rainfall events. Upscaling from the field scale to the catchment scale will be done to understand the effect of macropore variability at larger scales by applying spatial stochastic methods. The first phase of this study is the installation and configuration of the video cameras and soil moisture monitoring equipment to obtain initial data on the relationship between overland flow occurrence and soil moisture state.
NASA Astrophysics Data System (ADS)
Pritchard, M. S.; Kooperman, G. J.; Zhao, Z.; Wang, M.; Russell, L. M.; Somerville, R. C.; Ghan, S. J.
2011-12-01
Evaluating the fidelity of new aerosol physics in climate models is confounded by uncertainties in source emissions, systematic error in cloud parameterizations, and inadequate sampling of long-range plume concentrations. To explore the degree to which cloud parameterizations distort aerosol processing and scavenging, the Pacific Northwest National Laboratory (PNNL) Aerosol-Enabled Multi-Scale Modeling Framework (AE-MMF), a superparameterized branch of the Community Atmosphere Model Version 5 (CAM5), is applied to represent the unusually active and well sampled North American wildfire season in 2004. In the AE-MMF approach, the evolution of double moment aerosols in the exterior global resolved scale is linked explicitly to convective statistics harvested from an interior cloud resolving scale. The model is configured in retroactive nudged mode to observationally constrain synoptic meteorology, and Arctic wildfire activity is prescribed at high space/time resolution using data from the Global Fire Emissions Database. Comparisons against standard CAM5 bracket the effect of superparameterization to isolate the role of capturing rainfall intermittency on the bulk characteristics of 2004 Arctic plume transport. Ground based lidar and in situ aircraft wildfire plume constraints from the International Consortium for Atmospheric Research on Transport and Transformation field campaign are used as a baseline for model evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Micah Johnson, Andrew Slaughter
PIKA is a MOOSE-based application for modeling micro-structure evolution of seasonal snow. The model will be useful for environmental, atmospheric, and climate scientists. Possible applications include energy balance models, ice sheet modeling, and avalanche forecasting. The model implements physics from published, peer-reviewed articles. The main purpose is to foster university and laboratory collaboration to build a larger multi-scale snow model using MOOSE. The main feature of the code is that it is implemented using the MOOSE framework, thus making features such as multiphysics coupling, adaptive mesh refinement, and parallel scalability native to the application. PIKA implements three equations: the phase-field equation for tracking the evolution of the ice-air interface within seasonal snow at the grain scale; the heat equation for computing the temperature of both the ice and air within the snow; and the mass transport equation for monitoring the diffusion of water vapor in the pore space of the snow.
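As a conceptual illustration of the heat-equation component, the sketch below integrates one-dimensional heat conduction with explicit finite differences; PIKA itself solves the coupled equations with finite elements through MOOSE, and the parameter values here are illustrative.

    import numpy as np

    # Conceptual 1-D heat-conduction sketch (explicit finite differences).
    nx, dx, dt, alpha = 101, 1e-3, 0.1, 1.2e-6       # cells, m, s, m^2/s (assumed diffusivity)
    T = np.full(nx, 263.15)                          # K, initial snow temperature
    T[0], T[-1] = 258.15, 268.15                     # imposed boundary temperatures
    for _ in range(5000):                            # stable: alpha*dt/dx^2 = 0.12 < 0.5
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    print(T[::20])                                   # temperature profile after 500 s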
Characterization of seismic properties across scales: from the laboratory- to the field scale
NASA Astrophysics Data System (ADS)
Grab, Melchior; Quintal, Beatriz; Caspari, Eva; Maurer, Hansruedi; Greenhalgh, Stewart
2016-04-01
When exploring geothermal systems, the main interest is in the factors controlling the efficiency of the heat exchanger. This includes the energy state of the pore fluids and the presence of permeable structures building part of the fluid transport system. Seismic methods are amongst the most common exploration techniques to image the deep subsurface in order to evaluate such a geothermal heat exchanger. They make use of the fact that a seismic wave carries information on the properties of the rocks in the subsurface through which it passes. This enables the derivation of the stiffness and the density of the host rock from the seismic velocities. Moreover, it is well known that the seismic waveforms are modulated while propagating through the subsurface by visco-elastic effects due to wave-induced fluid flow, hence delivering information about the fluids in the rock's pore space. To constrain the interpretation of seismic data, that is, to link seismic properties with the fluid state and host rock permeability, it is common practice to measure the rock properties of small rock specimens in the laboratory under in-situ conditions. However, in magmatic geothermal systems or in systems situated in the crystalline basement, the host rock is often highly impermeable and fluid transport predominantly takes place in fracture networks, consisting of fractures larger than the rock samples investigated in the laboratory. Therefore, laboratory experiments only provide the properties of relatively intact rock, and an up-scaling procedure is required to characterize the seismic properties of large rock volumes containing fractures and fracture networks and to study the effects of fluids in such fractured rock. We present a technique to parameterize fractured rock volumes as typically encountered in Icelandic magmatic geothermal systems, by combining laboratory experiments with effective medium calculations. The resulting models can be used to calculate the frequency-dependent bulk modulus K(ω) and shear modulus G(ω), from which the P- and S-wave velocities VP(ω) and VS(ω) and the quality factors QP(ω) and QS(ω) of fluid-saturated fractured rock volumes can be estimated. These volumes are much larger and contain more complex structures than the rock samples investigated in the laboratory. Thus, the derived quantities describe the elastic and anelastic (energy loss due to wave-induced fluid flow) short-term deformation induced by seismic waves at scales that are relevant for field-scale seismic exploration projects.
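Going from the complex moduli to the quoted velocities and quality factors is straightforward in the low-attenuation limit; the sketch below assumes illustrative complex K and G values at a single frequency.

    import numpy as np

    # From complex moduli K(w) and G(w) to phase velocities and quality factors
    # (low-attenuation approximation). Values are illustrative.
    rho = 2700.0                              # kg/m^3
    K = 40e9 + 1.2e9j                         # Pa, complex bulk modulus at some frequency
    G = 25e9 + 0.6e9j                         # Pa, complex shear modulus at the same frequency

    M_p = K + 4.0 / 3.0 * G                   # complex P-wave modulus
    Vp = np.sqrt(np.real(M_p) / rho)          # approximate phase velocities (Q >> 1)
    Vs = np.sqrt(np.real(G) / rho)
    Qp = np.real(M_p) / np.imag(M_p)          # Q = Re(M) / Im(M)
    Qs = np.real(G) / np.imag(G)
    print(Vp, Vs, Qp, Qs)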
R2 dark energy in the laboratory
NASA Astrophysics Data System (ADS)
Brax, Philippe; Valageas, Patrick; Vanhove, Pierre
2018-05-01
We analyze the role, on large cosmological scales and in laboratory experiments, of the leading curvature-squared contributions to the low-energy effective action of gravity. We argue for a natural relationship c0 λ² ≃ 1 at low energy between the R² coefficient c0 of the Ricci-scalar-squared term in this expansion and the dark energy scale Λ = (λ MPl)⁴ in four-dimensional Planck mass units. We show how the compatibility between the acceleration of the expansion rate of the Universe, local tests of gravity and the quantum stability of the model all converge to select such a relationship up to a coefficient which should be determined experimentally. When embedding this low-energy theory of gravity into candidates for its ultraviolet completion, we find that the proposed relationship is guaranteed in string-inspired supergravity models with modulus stabilization and supersymmetry breaking leading to de Sitter compactifications. In this case, the scalar degree of freedom of R² gravity is associated to a volume modulus. Once written in terms of a scalar-tensor theory, the effective theory corresponds to a massive scalar field coupled with the universal strength β = 1/√6 to the matter stress-energy tensor. When the relationship c0 λ² ≃ 1 is realized, we find that on astrophysical scales and in cosmology the scalar field is ultralocal and therefore no effect arises on such large scales. On the other hand, the scalar field mass is tightly constrained by the non-observation of fifth forces in torsion pendulum experiments such as Eöt-Wash. It turns out that the observation of the dark energy scale in cosmology implies that the scalar field could be detectable by fifth-force experiments in the near future.
Mechanical Stability of Fractured Rift Basin Mudstones: from lab to basin scale
NASA Astrophysics Data System (ADS)
Zakharova, N. V.; Goldberg, D.; Collins, D.; Swager, L.; Payne, W. G.
2016-12-01
Understanding the petrophysical and mechanical properties of caprock mudstones is essential for ensuring good containment and mechanical formation stability at potential CO2 storage sites. Natural heterogeneity and the presence of fractures, however, create challenges for accurate prediction of mudstone behavior under injection conditions and at reservoir scale. In this study, we present a multi-scale geomechanical analysis of Mesozoic mudstones from the Newark Rift basin, integrating petrophysical core and borehole data, in situ stress measurements, and caprock stability modeling. The project, funded by the U.S. Department of Energy's National Energy Technology Laboratory (NETL), focuses on the Newark basin as a representative locality for a series of the Mesozoic rift basins in eastern North America considered as potential CO2 storage sites. An extensive core characterization program, which included laboratory CT scans, XRD, SEM, MICP, porosity, permeability, acoustic velocity measurements, and geomechanical testing under a range of confining pressures, revealed large variability and heterogeneity in both petrophysical and mechanical properties. Estimates of unconfined compressive strength for these predominantly lacustrine mudstones range from 5,000 to 50,000 psi, with only a weak correlation to clay content. Thinly bedded intervals exhibit up to 30% strength anisotropy. Mineralized fractures, abundant in most formations, are characterized by compressive strength as low as 10% of the matrix strength. Upscaling these observations from core to reservoir scale is challenging. No simple one-to-one correlation between mechanical and petrophysical properties exists, and therefore we develop multivariate empirical relationships among these properties. A large suite of geophysical logs, including new measurements of the in situ stress field, is used to extrapolate these relationships to a basin-scale geomechanical model and predict mudstone behavior under injection conditions.
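A multivariate empirical relationship of the kind described can be sketched as a least-squares regression of log-strength on log-derived predictors; the predictors and coefficients below are synthetic placeholders, not the Newark-basin calibration.

    import numpy as np

    # Sketch: regress ln(UCS) on a few log-derived predictors by least squares.
    rng = np.random.default_rng(4)
    n = 60
    X = np.column_stack([
        np.ones(n),
        rng.uniform(2.5, 5.5, n),          # e.g. P-wave velocity, km/s (synthetic)
        rng.uniform(0.05, 0.4, n),         # e.g. clay fraction (synthetic)
        rng.uniform(0.02, 0.15, n),        # e.g. porosity (synthetic)
    ])
    beta_true = np.array([7.5, 0.6, -1.5, -4.0])
    y = X @ beta_true + 0.1 * rng.standard_normal(n)     # synthetic ln(UCS) observations

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta)                                          # recovered coefficients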
A Virtual Laboratory for the 4 Bed Molecular Sieve of the Carbon Dioxide Removal Assembly
NASA Technical Reports Server (NTRS)
Coker, Robert; Knox, James; O'Connor, Brian
2016-01-01
Ongoing work to improve water and carbon dioxide separation systems to be used on crewed space vehicles combines sub-scale systems testing and multi-physics simulations. Thus, as part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive COMSOL Multiphysics models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) have been developed. This Virtual Laboratory is being used to help reduce mass, power, and volume requirements for exploration missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future missions as well as the resolution of anomalies observed in the ISS CDRA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T.; Jessee, Matthew Anderson
The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.
Scaled-model guidelines for formation-flying solar coronagraph missions.
Landini, Federico; Romoli, Marco; Baccani, Cristian; Focardi, Mauro; Pancrazzi, Maurizio; Galano, Damien; Kirschner, Volker
2016-02-15
Stray light suppression is the main concern in designing a solar coronagraph. The main contribution to the stray light for an externally occulted space-borne solar coronagraph is the light diffracted by the occulter and scattered by the optics. It is mandatory to carefully evaluate the diffraction generated by an external occulter and the impact that it has on the stray light signal at the focal plane. The scientific need for observations covering a large portion of the heliosphere, with an inner field of view as close as possible to the photospheric limb, supports the ambition of launching formation-flying giant solar coronagraphs. Their dimensions prevent replication of the flight geometry in a clean laboratory environment, and a strong need for a scaled model is thus envisaged. The problem of scaling a coronagraph has already been addressed for exoplanets, for a single on-axis point source at infinity. We face the problem here by adopting an original approach and introducing the scaling of the solar disk as an extended source.
SHynergie: Development of a virtual project laboratory for monitoring hydraulic stimulations
NASA Astrophysics Data System (ADS)
Renner, Jörg; Friederich, Wolfgang; Meschke, Günther; Müller, Thomas; Steeb, Holger
2016-04-01
Hydraulic stimulations are the primary means of developing subsurface reservoirs with regard to the extent of fluid transport in them. The associated creation or conditioning of a system of hydraulic conduits involves a range of hydraulic and mechanical processes, but chemical reactions, such as dissolution and precipitation, may also affect the stimulation result on time scales as short as hours. In light of the extent and complexity of these processes, the steering potential for the operator of a stimulation critically depends on the ability to integrate the maximum amount of site-specific information with profound process understanding and a large spectrum of experience. We report on the development of a virtual project laboratory for monitoring hydraulic stimulations within the project SHynergie (http://www.ruhr-uni-bochum.de/shynergie/). The laboratory is conceived as a preparatory and accompanying instrument rather than a post-processing one, ultimately accessible to the persons responsible for a project via a web repository. The virtual laboratory consists of a database, a toolbox, and a model-building environment. Entries in the database are of two categories. On the one hand, selected mineral and rock properties are provided from the literature. On the other hand, project-specific entries of any format can be made, which are assigned attributes regarding their use in the stimulation problem at hand. The toolbox is interactive and allows the user to perform calculations of effective properties and simulations of different types (e.g., wave propagation in a reservoir, hydraulic test). The model component is also hybrid: the laboratory provides a library of models reflecting a range of scenarios but also allows the user to develop a site-specific model constituting the basis for simulations. The laboratory offers the option to use its components following the typical workflow of a stimulation project. The toolbox incorporates simulation instruments developed in the course of the SHynergie project that account for the experimental and modeling results of the various sub-projects.
Rouwane, Asmaa; Rabiet, Marion; Grybos, Malgorzata; Bernard, Guillaume; Guibaud, Gilles
2016-03-01
The dynamics of arsenic (As) and antimony (Sb) in a wetland soil periodically subjected to agricultural pressure, as well as the impact of soil enrichment with NO3⁻ (50 mg L⁻¹) and PO4³⁻ (20 mg L⁻¹) on As and Sb release, were evaluated at both field and laboratory scales. The results showed that As and Sb exhibited different temporal behaviors, depending on the study scale. At field scale, As release (up to 93 μg L⁻¹) occurred under Fe-reducing conditions, whereas Sb release was favored under oxidizing conditions (up to 5 μg L⁻¹), particularly when dissolved organic carbon (DOC) increased in soil pore water (up to 92.8 mg L⁻¹). At laboratory scale, As and Sb release was much higher under reducing conditions (up to 138 and 1 μg L⁻¹, respectively) than under oxic conditions (up to 6 and 0.5 μg L⁻¹, respectively) and was enhanced by NO3⁻ and PO4³⁻ addition (increased by a factor of 2.3 for As and 1.6 for Sb). The higher release of As and Sb in the enriched reduced soil compared to the non-enriched soil was probably induced by the combined effect of PO4³⁻ and HCO3⁻, which compete for the same binding sites on soil surfaces. Modeling results using Visual Minteq were in accordance with experimental results regarding As but failed to simulate the effects of PO4³⁻ and HCO3⁻ on Sb release.
NASA Astrophysics Data System (ADS)
Trippetta, Fabio; Ruggieri, Roberta; Lipparini, Lorenzo
2016-04-01
Crustal processes such as deformation or faulting are strictly related to the petrophysical properties of the rocks involved. These properties depend on mineral composition, fabric, pores, and any secondary features such as cracks or infilling material that may have been introduced during the diagenetic and tectonic history of the rock. In this work we investigate the role of hydrocarbons (HC) in changing the petrophysical properties of rock by merging laboratory experiments, well data, and static models, focusing on the carbonate-bearing Majella reservoir. This reservoir represents an interesting analogue for the several oil fields discovered in the subsurface of the region, allowing a comparison of a wide range of geological and geophysical data at different scales. The investigated lithology consists of high-porosity ramp calcarenites, structurally slightly affected by a superimposed fracture system and displaced by a few major normal faults, with some minor strike-slip movements. Sets of rock specimens were selected in the field, and two groups in particular were investigated: 1. clean rocks (without oil) and 2. HC-bearing rocks (with different saturations). For both groups, density, porosity, P- and S-wave velocity, permeability, and elastic moduli measurements at increasing confining pressure were conducted on cylindrical specimens at the HP-HT Laboratory of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) in Rome, Italy. For clean samples at ambient pressure, laboratory porosity varies from 10% up to 26%, P-wave velocity (Vp) spans from 4.1 km/s to 4.9 km/s, and a very good correlation between Vp, Vs, and porosity is observed. The P-wave velocity at 100 MPa of confining pressure ranges between 4.5 km/s and 5.2 km/s, with a pressure-independent Vp/Vs ratio of about 1.9. The presence of HC within the samples affects both Vp and Vs. In particular, velocities increase with the presence of hydrocarbons in proportion to the amount of filled porosity. Preliminary data also suggest a different behaviour at increasing confining pressure for clean and oil-bearing samples: almost perfectly elastic behaviour for oil-bearing samples and more inelastic behaviour for cleaner samples. Thus the HC presence appears to counteract the increase in confining pressure, acting as a semi-fluid, reducing the rock's inelastic compaction and enhancing its elastic behaviour. To upscale our rock-physics results, we started from well and laboratory data on stratigraphy, porosity, and Vp in order to simulate the effect of the HC presence at larger scale, using Petrel® software. The resulting synthetic model highlights that Vp, which is primarily controlled by porosity, changes significantly within oil-bearing portions, with a notable impact on the velocity model that should be adopted. Moreover, we are currently performing laboratory tests to evaluate the changes in the elastic parameters, with the aim of modelling the effects of the HC on the mechanical behaviour of the involved rocks at larger scale.
2006-02-15
New testing is underway in the Aero-Acoustic Propulsion Laboratory (AAPL) at NASA's Glenn Research Center. The research focuses on a model called the Highly Variable Cycle Exhaust System -- a 0.17 scale model of an exhaust system that will operate at subsonic, transonic and supersonic exhaust speeds in a future supersonic business jet. The model features ejector doors used at different angles. Researchers are investigating the impact of these ejectors on the resulting acoustic radiation. Here, Steven Sedensky, a mechanical engineer with Jacobs Sverdrup, takes measurements of the ejector door positions.
NASA Astrophysics Data System (ADS)
Jamroz, Ben; Julien, Keith; Knobloch, Edgar
2008-12-01
Taking advantage of disparate spatio-temporal scales relevant to astrophysics and laboratory experiments, we derive asymptotically exact reduced partial differential equation models for the magnetorotational instability. These models extend recent single-mode formulations leading to saturation in the presence of weak dissipation, and are characterized by a back-reaction on the imposed shear. Numerical simulations performed for a broad class of initial conditions indicate an initial phase of growth dominated by the optimal (fastest growing) magnetorotational instability fingering mode, followed by a vertical coarsening to a box-filling mode.
A 3D unstructured grid nearshore hydrodynamic model based on the vortex force formalism
NASA Astrophysics Data System (ADS)
Zheng, Peng; Li, Ming; van der A, Dominic A.; van der Zanden, Joep; Wolf, Judith; Chen, Xueen; Wang, Caixia
2017-08-01
A new three-dimensional nearshore hydrodynamic model system is developed based on the unstructured-grid version of the third-generation spectral wave model SWAN (Un-SWAN) coupled with the three-dimensional ocean circulation model FVCOM to enable the full representation of the wave-current interaction in the nearshore region. A new wave-current coupling scheme is developed by adopting the vortex-force (VF) scheme to represent the wave-current interaction. The GLS turbulence model is also modified to better reproduce wave-breaking-enhanced turbulence, together with a roller transport model to account for the effect of the surface wave roller. This new model system is validated first against a theoretical case of obliquely incident waves on a planar beach, and then applied to three test cases: a laboratory-scale experiment of normally incident waves on a beach with a fixed breaker bar, a field experiment of obliquely incident waves on a natural sandy barred beach (Duck'94 experiment), and a laboratory study of normally incident waves propagating around a shore-parallel breakwater. Overall, the model predictions agree well with the available measurements in these tests, illustrating the robustness and efficiency of the present model for very different spatial scales and hydrodynamic conditions. Sensitivity tests indicate the importance of roller effects and wave energy dissipation on the mean flow (undertow) profile over the depth. These tests further suggest adopting a spatially varying value for roller effects across the beach. In addition, the parameter values in the GLS turbulence model should be spatially inhomogeneous, which leads to better prediction of the turbulent kinetic energy and an improved prediction of the undertow velocity profile.
Sadiq, Rehan; Rodriguez, Manuel J
2004-04-05
Disinfection of drinking water reduces the risk of pathogenic infection but may pose a chemical threat to human health due to disinfection residues and their by-products (DBPs) when organic and inorganic precursors are present in the water. More than 250 DBPs have been identified, but the behavioural profile of only approximately 20 DBPs is adequately known. In the last two decades, many modelling attempts have been made to predict the occurrence of DBPs in drinking water. Models have been developed based on data generated in laboratory-scale and field-scale investigations. The objective of this paper is to review DBP predictive models, identify their advantages and limitations, and examine their potential applications as decision-making tools for water treatment analysis, epidemiological studies, and regulatory concerns. The paper concludes with a discussion of future research needs in this area.
A description of the new 3D electron gun and collector modeling tool: MICHELLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petillo, J.; Mondelli, A.; Krueger, W.
1999-07-01
A new 3D finite element gun and collector modeling code is under development at SAIC in collaboration with industrial partners and national laboratories. This development program has been designed specifically to address the shortcomings of current simulation and modeling tools. In particular, although 3D gun codes exist today, their ability to address fine-scale features is somewhat limited in 3D due to the disparate length scales of certain classes of devices. Additionally, features like advanced emission rules, including the thermionic Child's law and comprehensive secondary emission models, also need attention. The program specifically targets problem classes including gridded guns, sheet-beam guns, multi-beam devices, and anisotropic collectors. The presentation will provide an overview of the program objectives, the approach to be taken by the development team, and the status of the project.
Multiscale pore structure and constitutive models of fine-grained rocks
NASA Astrophysics Data System (ADS)
Heath, J. E.; Dewers, T. A.; Shields, E. A.; Yoon, H.; Milliken, K. L.
2017-12-01
A foundational concept of continuum poromechanics is the representative elementary volume, or REV: an amount of material large enough that pore- or grain-scale fluctuations in relevant properties are dissipated to a definable mean, but smaller than the length scales of heterogeneity. We determine 2D-equivalent representative elementary areas (REAs) of pore areal fraction for three major types of mudrocks by applying multi-beam scanning electron microscopy (mSEM) to obtain terapixel image mosaics. Image analysis obtains pore areal fraction and pore size and shape as a function of progressively larger measurement areas. Using backscatter imaging and mSEM data, pores are identified by the components within which they occur, such as in organics or the clastic matrix. We correlate pore areal fraction with nano-indentation, micropillar compression, and axisymmetric testing at multiple length scales on a terrigenous-argillaceous mudrock sample. The combined data set is used to: investigate representative elementary volumes (and areas for the 2D images); determine if scale separation occurs; and determine if transport and mechanical properties at a given length scale can be statistically defined. Clear scale separation occurs between REAs and observable heterogeneity in two of the samples. A highly laminated sample exhibits fine-scale heterogeneity and an overlap in scales, in which case typical continuum assumptions on statistical variability may break down. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
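The REA determination described above can be sketched as follows, using a synthetic binary pore map in place of the mSEM mosaics: the pore areal fraction is computed over progressively larger centred windows until it stabilizes toward a definable mean.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic binary image: 1 = pore, 0 = solid (a stand-in for an mSEM mosaic)
img = (rng.random((2000, 2000)) < 0.12).astype(np.uint8)

cy, cx = img.shape[0] // 2, img.shape[1] // 2
for half in (10, 50, 100, 250, 500, 1000):
    window = img[cy - half:cy + half, cx - half:cx + half]
    phi = window.mean()  # pore areal fraction within this window
    print(f"window {2*half:5d} px  ->  areal porosity {phi:.4f}")
# The REA is reached where the areal fraction stops fluctuating with window size.
```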
Physical modeling of the effects of climate change on freshwater lenses
NASA Astrophysics Data System (ADS)
Stoeckl, L.; Houben, G.
2012-04-01
The investigation of the fragile equilibrium between fresh and saline water on oceanic islands is of major importance for the sustainable management and protection of freshwater lenses. Overexploitation will lead to salt water intrusion (up-coning), in turn causing damage to or even destruction of a lens in the long term. We have performed a series of experiments at the laboratory scale to investigate and visualize processes in freshwater lenses under different boundary conditions. In addition, these scenarios were numerically simulated using the finite-element model FEFLOW. Results were also compared to analytical solutions for problems regarding, e.g., mean travel times of flow paths within a freshwater lens. At the laboratory scale, a cross section of an island was simulated by setting up a sand-box model (200 cm x 50 cm x 5 cm). Lens dynamics are driven by the density contrast between saline and fresh water, the recharge rate, and the Kf-values of the medium. We used a time-dependent, sequential application of the tracers uranine, eosine, and indigotine to represent different recharge events. With a stepwise increase of freshwater recharge, we could show that the maximum thickness of the lens increased non-linearly. Moreover, we measured that the degradation of a freshwater lens after turning off the precipitation does not follow the same function as its development. This means that a steady-state freshwater lens does not degrade as fast as it develops under constant recharge. On the other hand, we could show that this is not true for a partial degradation of the lens due to transient forcings, such as anthropogenic pumping or climate change. This is because the recovery to equilibrium is always a quasi-asymptotic process. Thus, re-equilibration to steady state will take longer after, e.g., a drought than the degradation during the drought itself. This behavior could also be verified with the numerical finite-element model FEFLOW. In addition, numerical simulations will be used to close the gap between laboratory results and future field investigations. For example, impacts due to sea level rise induced by climate change can be up-scaled and compared to the results obtained from physical experiments. Analytical models (e.g., Fetter 1972, Vacher et al. 1990, Chesnaux & Allen 2007) were used as benchmarks in our investigations. Models in general are simplifications of a real situation that try to capture the relevant processes. For further investigations it is planned to compare different models and generate new benchmark experiments to improve the accuracy of existing models.
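For context, the classic Dupuit-Ghyben-Herzberg solution for a strip island (in the spirit of the Fetter 1972 benchmark cited above) gives a quick steady-state estimate of lens geometry; the parameter values below are purely illustrative and are not the sand-box settings.

```python
import math

# Illustrative parameters (not the experimental values)
R = 1.0e-8      # recharge rate, m/s
K = 1.0e-4      # hydraulic conductivity, m/s
a = 100.0       # half-width of the strip island, m
rho_f, rho_s = 1000.0, 1025.0
G = rho_f / (rho_s - rho_f)          # Ghyben-Herzberg ratio (~40)

# Dupuit-Ghyben-Herzberg: water-table head h(x)^2 = R*(a^2 - x^2) / (K*(1+G))
h0 = math.sqrt(R * a**2 / (K * (1.0 + G)))   # head above sea level at the divide
z0 = G * h0                                   # interface depth below sea level
print(f"max water-table elevation: {h0:.3f} m")
print(f"max lens thickness: {h0 + z0:.2f} m")
```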
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon, Luis; Stanier, Adam John
Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.
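A toy illustration of the Jacobian-free Newton-Krylov approach, not the authors' solver or preconditioner: SciPy's newton_krylov builds Jacobian-vector products from finite differences of the residual, and an approximate inverse could be supplied as the Krylov preconditioner via the inner_M argument.

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 64
h = 1.0 / (n + 1)

def residual(u):
    # Nonlinear 1D reaction-diffusion: u'' - u^3 + 1 = 0 with u(0) = u(1) = 0
    d2u = np.zeros_like(u)
    d2u[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    d2u[0] = (u[1] - 2.0 * u[0]) / h**2
    d2u[-1] = (u[-2] - 2.0 * u[-1]) / h**2
    return d2u - u**3 + 1.0

u0 = np.zeros(n)
# method='lgmres' selects the Krylov solver; a physics-based preconditioner
# could be passed via inner_M (omitted here for brevity).
sol = newton_krylov(residual, u0, method='lgmres', verbose=False)
print("max |residual|:", np.abs(residual(sol)).max())
```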
Modelling utility-scale wind power plants. Part 1: Economics
NASA Astrophysics Data System (ADS)
Milligan, Michael R.
1999-10-01
As the worldwide use of wind turbine generators continues to increase in utility-scale applications, it will become increasingly important to assess the economic and reliability impact of these intermittent resources. Although the utility industry in the United States appears to be moving towards a restructured environment, basic economic and reliability issues will continue to be relevant to companies involved with electricity generation. This article is the first of two which address modelling approaches and results obtained in several case studies and research projects at the National Renewable Energy Laboratory (NREL). This first article addresses the basic economic issues associated with electricity production from several generators that include large-scale wind power plants. An important part of this discussion is the role of unit commitment and economic dispatch in production cost models. This paper includes overviews and comparisons of the prevalent production cost modelling methods, including several case studies applied to a variety of electric utilities. The second article discusses various methods of assessing capacity credit and results from several reliability-based studies performed at NREL.
Fractal Viscous Fingering in Fracture Networks
NASA Astrophysics Data System (ADS)
Boyle, E.; Sams, W.; Ferer, M.; Smith, D. H.
2007-12-01
We have used two very different physical models and computer codes to study miscible injection of a low-viscosity fluid into a simple fracture network, where it displaces a much more viscous "defending" fluid through "rock" that is otherwise impermeable. One code (NETfLow) is a standard pore-level model, originally intended to treat laboratory-scale experiments; it assumes negligible mixing of the two fluids. The other code (NFFLOW) was written to treat reservoir-scale engineering problems; it explicitly treats the flow through the fractures and allows for significant mixing of the fluids at the interface. Both codes treat the fractures as parallel plates of different effective apertures. Results are presented for the composition profiles from both codes. Independent of the degree of fluid mixing, the profiles from both models have a functional form identical to that for fractal viscous fingering (i.e., diffusion-limited aggregation, DLA). The two codes, which solve the equations for different models, gave similar results; together they suggest that the injection of a low-viscosity fluid into large-scale fracture networks may be much more significantly affected by fractal fingering than previously illustrated.
The innovative osmotic membrane bioreactor (OMBR) for reuse of wastewater.
Cornelissen, E R; Harmsen, D; Beerendonk, E F; Qin, J J; Oo, H; de Korte, K F; Kappelhof, J W M N
2011-01-01
An innovative osmotic membrane bioreactor (OMBR) is currently under development for the reclamation of wastewater, combining activated sludge treatment and forward osmosis (FO) membrane separation with RO post-treatment. The research focuses on FO membrane fouling and performance using different activated sludges, investigated both at laboratory scale (membrane area of 112 cm²) and at on-site bench scale (flat-sheet membrane area of 0.1 m²). FO performance at laboratory scale (i) increased with temperature due to a decrease in viscosity and (ii) was independent of the type of activated sludge. Draw solution leakage increased with temperature and varied for different activated sludges. FO performance at bench scale (i) increased with osmotic driving force, (ii) depended on the membrane orientation due to internal concentration polarization, and (iii) was invariant to feed flow decrease and air injection at the feed and draw sides. Draw solution leakage could not be evaluated at bench scale due to experimental limitations. Membrane fouling was not found at laboratory or bench scale; however, partially reversible fouling was found at laboratory scale for FO membranes facing the draw solution. Economic assessment indicated a minimum flux of 15 L·m⁻²·h⁻¹ at 0.5 M NaCl for OMBR-RO to be cost effective, depending on the FO membrane price.
NASA Astrophysics Data System (ADS)
Tohidi, Ali; Gollner, Michael J.; Xiao, Huahua
2018-01-01
Fire whirls present a powerful intensification of combustion, long studied in the fire research community because of the dangers they present during large urban and wildland fires. However, their destructive power has hidden many features of their formation, growth, and propagation. Therefore, most of what is known about fire whirls comes from scale modeling experiments in the laboratory. Both the methods of formation, which are dominated by wind and geometry, and the inner structure of the whirl, including velocity and temperature fields, have been studied at this scale. Quasi-steady fire whirls directly over a fuel source form the bulk of current experimental knowledge, although many other cases exist in nature. The structure of fire whirls has yet to be reliably measured at large scales; however, scaling laws have been relatively successful in modeling the conditions for formation from small to large scales. This review surveys the state of knowledge concerning the fluid dynamics of fire whirls, including the conditions for their formation, their structure, and the mechanisms that control their unique state. We highlight recent discoveries and survey potential avenues for future research, including using the properties of fire whirls for efficient remediation and energy generation.
NASA Astrophysics Data System (ADS)
Grell, G. A.; Freitas, S. R.; Olson, J.; Bela, M.
2017-12-01
We will start by providing a summary of the latest cumulus parameterization modeling efforts at NOAA's Earth System Research Laboratory (ESRL) on both regional and global scales. The physics package includes a scale-aware parameterization of subgrid cloudiness feedback to radiation (coupled PBL, microphysics, radiation, shallow and congestus type convection), the stochastic Grell-Freitas (GF) scale- and aerosol-aware convective parameterization, and an aerosol-aware microphysics package. GF is based on a stochastic approach originally implemented by Grell and Devenyi (2002) and described in more detail in Grell and Freitas (2014, ACP). It was expanded to include PDFs for vertical mass flux, as well as modifications to improve the diurnal cycle. This physics package will be used on different scales, spanning global to cloud resolving, to look at the impact on scalar transport and numerical weather prediction.
Cheng, Weiwei; Liu, Guoqin; Liu, Xinqi
2016-07-27
In the present study, the formation mechanisms of glycidyl fatty acid esters (GEs) were investigated both in real edible oils (soybean oil, camellia oil, and palm oil) during laboratory-scale preparation and refining and in a chemical model (1,2-dipalmitin (DPG) and 1-monopalmitin (MPG)) during high-temperature exposure (160-260 °C under nitrogen). The formation process of GEs in the chemical model was monitored using attenuated total reflection-Fourier transform infrared (ATR-FTIR) spectroscopy. The results showed that the roasting and pressing processes could produce certain amounts of GEs, though much lower than those produced in the deodorization process. GE contents in edible oils increased continuously and significantly with increasing deodorization time below 200 °C. However, when the temperature exceeded 200 °C, GE contents sharply increased within 1-2 h, followed by a gradual decrease, which indicates simultaneous formation and degradation of GEs at high temperature. In addition, it was also found that the presence of acylglycerols (DAGs and MAGs) could significantly increase the formation yield of GEs both in real edible oils and in the chemical model. Compared with DAGs, moreover, MAGs displayed a higher formation capacity but a substantially lower contribution to GE formation due to their low contents in edible oils. In situ ATR-FTIR spectroscopic evidence showed that a cyclic acyloxonium ion intermediate was formed during GE formation from DPG and MPG in the chemical model heated at 200 °C.
Computational fluid dynamics modeling of laboratory flames and an industrial flare.
Singh, Kanwar Devesh; Gangadharan, Preeti; Chen, Daniel H; Lou, Helen H; Li, Xianchang; Richmond, Peyton
2014-11-01
A computational fluid dynamics (CFD) methodology for simulating the combustion process has been validated against experimental results. Three different types of experimental setups were used to validate the CFD model: an industrial-scale flare setup and two lab-scale flames. The CFD study also involved three different fuels: C3H6/CH/Air/N2, C2H4/O2/Ar, and CH4/Air. In the first setup, flare efficiency data from the Texas Commission on Environmental Quality (TCEQ) 2010 field tests were used to validate the CFD model. In the second setup, a McKenna burner with flat flames was simulated; temperature and mass fractions of important species were compared with the experimental data. Finally, results of an experimental study done at Sandia National Laboratories to generate a lifted jet flame were used for the purpose of validation. The reduced 50-species mechanism LU 1.1, the realizable k-epsilon turbulence model, and the EDC turbulence-chemistry interaction model were used for this work. Flare efficiency, axial profiles of temperature, and mass fractions of various intermediate species obtained in the simulation were compared with experimental data, and good agreement between the profiles was clearly observed. In particular, the simulation match with the TCEQ 2010 flare tests has been significantly improved (within 5% of the data) compared to the results reported by Singh et al. in 2012. Validation against the speciated flat flame data supports the view that flares can be a primary source of formaldehyde emission.
Stegen, James C.
2018-04-10
To improve predictions of ecosystem function in future environments, we need to integrate the ecological and environmental histories experienced by microbial communities with hydrobiogeochemistry across scales. A key issue is whether we can derive generalizable scaling relationships that describe this multiscale integration. There is a strong foundation for addressing these challenges. We have the ability to infer ecological history with null models and reveal impacts of environmental history through laboratory and field experimentation. Recent developments also provide opportunities to inform ecosystem models with targeted omics data. A major next step is coupling knowledge derived from such studies with multiscale modeling frameworks that are predictive under non-steady-state conditions. This is particularly true for systems spanning dynamic interfaces, which are often hot spots of hydrobiogeochemical function. Here, we can advance predictive capabilities through a holistic perspective focused on the nexus of history, ecology, and hydrobiogeochemistry.
A Historical Perspective on Dynamics Testing at the Langley Research Center
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Kvaternik, Raymond G.
2000-01-01
The history of structural dynamics testing research over the past four decades at the Langley Research Center of the National Aeronautics and Space Administration is reviewed. Beginning in the early sixties, Langley investigated several scale model and full-scale spacecraft including the NIMBUS and various concepts for Apollo and Viking landers. Langley engineers pioneered the use of scaled models to study the dynamics of launch vehicles including Saturn I, Saturn V, and Titan III. In the seventies, work emphasized the Space Shuttle and advanced test and data analysis methods. In the eighties, the possibility of delivering large structures to orbit by the Space Shuttle shifted focus towards understanding the interaction of flexible space structures with attitude control systems. Although Langley has maintained a tradition of laboratory-based research, some flight experiments were supported. This review emphasizes work that, in some way, advanced the state of knowledge at the time.
Role of large-scale velocity fluctuations in a two-vortex kinematic dynamo.
Kaplan, E J; Brown, B P; Rahbarnia, K; Forest, C B
2012-06-01
This paper presents an analysis of the Dudley-James two-vortex flow, which inspired several laboratory-scale liquid-metal experiments, in order to better demonstrate its relation to astrophysical dynamos. A coordinate transformation splits the flow into components that are axisymmetric and nonaxisymmetric relative to the induced magnetic dipole moment. The reformulation gives the flow the same dynamo ingredients as are present in more complicated convection-driven dynamo simulations. These ingredients are currents driven by the mean flow and currents driven by correlations between fluctuations in the flow and fluctuations in the magnetic field. The simple model allows us to isolate the dynamics of the growing eigenvector and trace them back to individual three-wave couplings between the magnetic field and the flow. This simple model demonstrates the necessity of poloidal advection in sustaining the dynamo and points to the effect of large-scale flow fluctuations in exciting a dynamo magnetic field.
Dispersal scaling from the world's rivers
Warrick, J.A.; Fong, D.A.
2004-01-01
Although rivers provide important biogeochemical inputs to oceans, there are currently no descriptive or predictive relationships of the spatial scales of these river influences. Our combined satellite, laboratory, field and modeling results show that the coastal dispersal areas of small, mountainous rivers exhibit remarkable self-similar scaling relationships over many orders of magnitude. River plume areas scale with source drainage area to a power significantly less than one (average = 0.65), and this power relationship decreases significantly with distance offshore of the river mouth. Observations of plumes from large rivers reveal that this scaling continues over six orders of magnitude of river drainage basin areas. This suggests that the cumulative area of coastal influence for many of the smallest rivers of the world is greater than that of single rivers of equal watershed size. Copyright 2004 by the American Geophysical Union.
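The reported scaling can be illustrated with a minimal log-log power-law fit; the data points below are synthetic stand-ins with an exponent of 0.65 built in, not the satellite observations.

```python
import numpy as np

# Synthetic (illustrative) drainage areas (km^2) and plume dispersal areas (km^2)
drainage = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6])
plume = 2.0 * drainage**0.65 * np.exp(np.random.default_rng(1).normal(0.0, 0.1, 6))

# Fit log10(A_plume) = log10(c) + b * log10(A_drainage)
b, logc = np.polyfit(np.log10(drainage), np.log10(plume), 1)
print(f"fitted exponent b ~ {b:.2f} (reported average ~0.65), prefactor ~ {10**logc:.2f}")
```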
Jain, Vivek; Chang, Wei; Byonanebye, Dathan M.; Owaraganise, Asiphas; Twinomuhwezi, Ellon; Amanyire, Gideon; Black, Douglas; Marseille, Elliot; Kamya, Moses R.; Havlir, Diane V.; Kahn, James G.
2015-01-01
Background: Evidence favoring earlier HIV ART initiation at high CD4+ T-cell counts (CD4>350/uL) has grown, and guidelines now recommend earlier HIV treatment. However, the cost of providing ART to individuals with CD4>350 in Sub-Saharan Africa has not been well estimated. This remains a major barrier to optimal global cost projections for accelerating the scale-up of ART. Our objective was to compute costs of ART delivery to high CD4+ count individuals in a typical rural Ugandan health center-based HIV clinic, and use these data to construct scenarios of efficient ART scale-up. Methods: Within a clinical study evaluating streamlined ART delivery to 197 individuals with CD4+ cell counts >350 cells/uL (EARLI Study: NCT01479634) in Mbarara, Uganda, we performed a micro-costing analysis of administrative records, ART prices, and time-and-motion analysis of staff work patterns. We computed observed per-person-per-year (ppy) costs, and constructed models estimating costs under several increasingly efficient ART scale-up scenarios using local salaries, lowest drug prices, optimized patient loads, and inclusion of viral load (VL) testing. Findings: Among 197 individuals enrolled in the EARLI Study, median pre-ART CD4+ cell count was 569/uL (IQR 451–716). Observed ART delivery cost was $628 ppy at steady state. Models using local salaries and only core laboratory tests estimated costs of $529/$445 ppy (+/-VL testing, respectively). Models with lower salaries, lowest ART prices, and optimized healthcare worker schedules reduced costs by $100–200 ppy. Costs in a maximally efficient scale-up model were $320/$236 ppy (+/- VL testing). This included $39 for personnel, $106 for ART, $130/$46 for laboratory tests, and $46 for administrative/other costs. A key limitation of this study is its derivation and extrapolation of costs from one large rural treatment program of high CD4+ count individuals. Conclusions: In a Ugandan HIV clinic, ART delivery costs, including VL testing, for individuals with CD4>350 were similar to estimates from high-efficiency programs. In higher efficiency scale-up models, costs were substantially lower. These favorable costs may be achieved because high CD4+ count patients are often asymptomatic, facilitating more efficient streamlined ART delivery. Our work provides a framework for calculating costs of efficient ART scale-up models using accessible data from specific programs and regions. PMID:26632823
Jain, Vivek; Chang, Wei; Byonanebye, Dathan M; Owaraganise, Asiphas; Twinomuhwezi, Ellon; Amanyire, Gideon; Black, Douglas; Marseille, Elliot; Kamya, Moses R; Havlir, Diane V; Kahn, James G
2015-01-01
Evidence favoring earlier HIV ART initiation at high CD4+ T-cell counts (CD4>350/uL) has grown, and guidelines now recommend earlier HIV treatment. However, the cost of providing ART to individuals with CD4>350 in Sub-Saharan Africa has not been well estimated. This remains a major barrier to optimal global cost projections for accelerating the scale-up of ART. Our objective was to compute costs of ART delivery to high CD4+ count individuals in a typical rural Ugandan health center-based HIV clinic, and use these data to construct scenarios of efficient ART scale-up. Within a clinical study evaluating streamlined ART delivery to 197 individuals with CD4+ cell counts >350 cells/uL (EARLI Study: NCT01479634) in Mbarara, Uganda, we performed a micro-costing analysis of administrative records, ART prices, and time-and-motion analysis of staff work patterns. We computed observed per-person-per-year (ppy) costs, and constructed models estimating costs under several increasingly efficient ART scale-up scenarios using local salaries, lowest drug prices, optimized patient loads, and inclusion of viral load (VL) testing. Among 197 individuals enrolled in the EARLI Study, median pre-ART CD4+ cell count was 569/uL (IQR 451-716). Observed ART delivery cost was $628 ppy at steady state. Models using local salaries and only core laboratory tests estimated costs of $529/$445 ppy (+/-VL testing, respectively). Models with lower salaries, lowest ART prices, and optimized healthcare worker schedules reduced costs by $100-200 ppy. Costs in a maximally efficient scale-up model were $320/$236 ppy (+/- VL testing). This included $39 for personnel, $106 for ART, $130/$46 for laboratory tests, and $46 for administrative/other costs. A key limitation of this study is its derivation and extrapolation of costs from one large rural treatment program of high CD4+ count individuals. In a Ugandan HIV clinic, ART delivery costs, including VL testing, for individuals with CD4>350 were similar to estimates from high-efficiency programs. In higher efficiency scale-up models, costs were substantially lower. These favorable costs may be achieved because high CD4+ count patients are often asymptomatic, facilitating more efficient streamlined ART delivery. Our work provides a framework for calculating costs of efficient ART scale-up models using accessible data from specific programs and regions.
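The maximally efficient per-person-per-year figures quoted in both records decompose into the four line items listed there; a trivial bookkeeping check, using only numbers stated in the abstracts, is:

```python
# Cost components (USD per person-year) quoted in the abstract's efficient model
personnel = 39
art_drugs = 106
laboratory_with_vl = 130
laboratory_core_only = 46
admin_other = 46

with_vl = personnel + art_drugs + laboratory_with_vl + admin_other
without_vl = personnel + art_drugs + laboratory_core_only + admin_other
print(f"with VL testing: ${with_vl} ppy (reported ~$320)")
print(f"without VL testing: ${without_vl} ppy (reported ~$236)")
```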
NASA Astrophysics Data System (ADS)
Maniatis, Georgios
2017-04-01
Fluvial sediment transport is controlled by hydraulics, sediment properties and arrangement, and flow history across a range of time scales. Single-reference-frame descriptions (Eulerian or Lagrangian) yield useful results but restrict the theoretical understanding of the process, as differences between the two phases (liquid and solid) are not explicitly accounted for. Recently, affordable Inertial Measurement Units (IMUs) that can be embedded in coarse (100 mm diameter scale) natural or artificial particles have become available. These sensors are subject to technical limitations when deployed for natural sediment transport. However, they give us the ability to measure, for the first time, the inertial dynamics (acceleration and angular velocity) of moving sediment grains under fluvial transport. Theoretically, the assumption of an ideal IMU, rigidly attached at the centre of mass of a sediment particle, can greatly simplify the derivation of a general Eulerian-Lagrangian (E-L) model. This approach accounts for the inertial characteristics of particles in a Lagrangian (particle-fixed) frame, and for the hydrodynamics in an independent Eulerian frame. Simplified versions of the E-L model have been evaluated in laboratory experiments using real IMUs [Maniatis et al. 2015]. Here, experimental results are presented relevant to the evaluation of the complete E-L model. Artificial particles were deployed in a series of laboratory and field experiments. The particles are equipped with an IMU capable of recording acceleration over a ±400 g range and angular velocities over a ±1200 rad/s range. The sampling frequency ranges from 50 to 200 Hz for the total IMU measurement. Two sets of laboratory experiments were conducted in a 0.9 m wide laboratory flume. The first is a set of entrainment-threshold experiments using two artificial particles: a sphere of D=90 mm (A) and an ellipsoid with axes of 100, 70 and 30 mm (B). For the second set of experiments, a spherical artificial enclosure of D=75 mm (C) was released to roll freely in a flow above the entrainment threshold and over surfaces of different roughness. Finally, the coarser spherical and ellipsoidal sensor assemblies (A and B) were deployed in a steep mountain stream during active sediment transport conditions. The results include the calculation of the inertial acceleration, the instantaneous particle velocity, and the total kinetic energy of the mobile particle (including the rotational component, using gyroscope measurements). The comparison of the field deployments with the laboratory experiments suggests that the E-L model can be generalised from laboratory to natural conditions. Overall, the inertia of individual coarse particles is a statistically significant effect for all modes of sediment transport (entrainment, translation, deposition) in both natural and laboratory regimes. Maniatis et al. 2015: "Calculating the Explicit Probability of Entrainment Based on Inertial Acceleration Measurements", J. Hydraulic Engineering, 04016097
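A minimal sketch of the kind of post-processing described above: integrating body-frame accelerations to an instantaneous velocity and summing translational and rotational kinetic energy. The mass, geometry, and synthetic signals are illustrative, and calibration, frame rotation, and drift correction are deliberately omitted.

```python
import numpy as np

fs = 100.0                         # sampling frequency, Hz (within the 50-200 Hz range)
dt = 1.0 / fs
mass = 1.2                         # particle mass, kg (illustrative)
radius = 0.045                     # sphere radius, m (D = 90 mm case)
inertia = 0.4 * mass * radius**2   # moment of inertia of a solid sphere, (2/5) m r^2

rng = np.random.default_rng(2)
accel = rng.normal(0.0, 0.5, (200, 3))   # body-frame acceleration, m/s^2 (synthetic)
omega = rng.normal(0.0, 2.0, (200, 3))   # angular velocity, rad/s (synthetic)

velocity = np.cumsum(accel, axis=0) * dt          # naive integration (no drift correction)
ke_trans = 0.5 * mass * np.sum(velocity**2, axis=1)
ke_rot = 0.5 * inertia * np.sum(omega**2, axis=1)
ke_total = ke_trans + ke_rot

print(f"peak total kinetic energy: {ke_total.max():.3f} J")
```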
A generic model for the shallow velocity structure of volcanoes
NASA Astrophysics Data System (ADS)
Lesage, Philippe; Heap, Michael J.; Kushnir, Alexandra
2018-05-01
The knowledge of the structure of volcanoes and of the physical properties of volcanic rocks is of paramount importance to the understanding of volcanic processes and the interpretation of monitoring observations. However, the determination of these structures by geophysical methods suffers from limitations, including a lack of resolution and poor precision. Laboratory experiments provide complementary information on the physical properties of volcanic materials and their behavior as a function of several parameters, including pressure and temperature. Nevertheless, combined studies and comparisons of field-based geophysical and laboratory-based physical approaches remain scant in the literature. Here, we present a meta-analysis which compares 44 seismic velocity models of the shallow structure of eleven volcanoes, laboratory velocity measurements on about one hundred rock samples from five volcanoes, and seismic well logs from deep boreholes at two volcanoes. The comparison of these measurements confirms the strong variability of P- and S-wave velocities, which reflects the diversity of volcanic materials. The values obtained from laboratory experiments are systematically larger than those provided by seismic models. This discrepancy mainly results from scaling problems due to the difference between the sampled volumes. The averages of the seismic models are characterized by very low velocities at the surface and a strong velocity increase at shallow depth. By adjusting analytical functions to these averages, we define a generic model that describes the variations in P- and S-wave velocities in the first 500 m of andesitic and basaltic volcanoes. This model can be used for volcanoes where no structural information is available. The model can also account for site time corrections in hypocenter determination, as well as for the site and path effects that are commonly observed in volcanic structures.
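The final fitting step can be sketched as below; the functional form and the averaged velocity values are placeholders chosen only to illustrate adjusting a smooth analytical law to averaged shallow velocity profiles, and they do not reproduce the paper's generic model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical averaged P-wave velocities (km/s) at shallow depths (m)
depth = np.array([0.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0])
vp_avg = np.array([0.6, 1.1, 1.4, 1.8, 2.1, 2.3, 2.5])

def vp_model(z, v0, a, b):
    # Placeholder law: low surface velocity plus a power-law increase with depth
    return v0 + a * (z + 1.0)**b

params, _ = curve_fit(vp_model, depth, vp_avg, p0=[0.5, 0.1, 0.5])
v0, a, b = params
print(f"v0 = {v0:.2f} km/s, a = {a:.3f}, b = {b:.2f}")
print("Vp at 250 m:", round(vp_model(250.0, *params), 2), "km/s")
```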
LABORATORY SCALE STEAM INJECTION TREATABILITY STUDIES
Laboratory scale steam injection treatability studies were first developed at The University of California-Berkeley. A comparable testing facility has been developed at USEPA's Robert S. Kerr Environmental Research Center. Experience has already shown that many volatile organic...
Seasonal-Scale Optimization of Conventional Hydropower Operations in the Upper Colorado System
NASA Astrophysics Data System (ADS)
Bier, A.; Villa, D.; Sun, A.; Lowry, T. S.; Barco, J.
2011-12-01
Sandia National Laboratories is developing the Hydropower Seasonal Concurrent Optimization for Power and the Environment (Hydro-SCOPE) tool to examine basin-wide conventional hydropower operations at seasonal time scales. This tool is part of an integrated, multi-laboratory project designed to explore different aspects of optimizing conventional hydropower operations. The Hydro-SCOPE tool couples a one-dimensional reservoir model with a river routing model to simulate hydrology and water quality. An optimization engine wraps around this model framework to solve for long-term operational strategies that best meet the specific objectives of the hydrologic system while honoring operational and environmental constraints. The optimization routines are provided by Sandia's open source DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) software. Hydro-SCOPE allows for multi-objective optimization, which can be used to gain insight into the trade-offs that must be made between objectives. The Hydro-SCOPE tool is being applied to the Upper Colorado Basin hydrologic system. This system contains six reservoirs, each with its own set of objectives (such as maximizing revenue, optimizing environmental indicators, meeting water use needs, or other objectives) and constraints. This leads to a large optimization problem with strong connectedness between objectives. The systems-level approach used by the Hydro-SCOPE tool allows simultaneous analysis of these objectives, as well as understanding of potential trade-offs related to different objectives and operating strategies. The seasonal-scale tool will be tightly integrated with the other components of this project, which examine day-ahead and real-time planning, environmental performance, hydrologic forecasting, and plant efficiency.
Characteristics of HIV Care and Treatment in PEPFAR-Supported Sites
Filler, Scott; Berruti, Andres A.; Menzies, Nick; Berzon, Rick; Ellerbrock, Tedd V.; Ferris, Robert; Blandford, John M.
2011-01-01
Background: The U.S. President’s Emergency Plan for AIDS Relief (PEPFAR) has supported the extension of HIV care and treatment to 2.4 million individuals by September 2009. With increasing resources targeted toward scale-up, it is important to understand the characteristics of current PEPFAR-supported HIV care and treatment sites. Methods: Forty-five sites in Botswana, Ethiopia, Nigeria, Uganda, and Vietnam were sampled. Data were collected retrospectively from successive 6-month periods of site operations, through reviews of facility records and interviews with site personnel between April 2006 and March 2007. Facility size and scale-up rate, patient characteristics, staffing models, clinical and laboratory monitoring, and intervention mix were compared. Results: Sites added a median of 293 patients per quarter. By the evaluation’s end, sites supported a median of 1,649 HIV patients, 922 of them receiving antiretroviral therapy (ART). Patients were predominantly adult (97.4%) and the majority (96.5%) were receiving regimens based on nonnucleoside reverse transcriptase inhibitors (NNRTIs). The ratios of physicians to patients dropped substantially as sites matured. ART patients were commonly seen monthly or quarterly for clinical and laboratory monitoring, with CD4 counts being taken at 6-month intervals. One-third of sites provided viral load testing. Cotrimoxazole prophylaxis was the most prevalent supportive service. Conclusions: HIV treatment sites scaled up rapidly with the influx of resources and technical support through PEPFAR, providing complex health services to progressively expanding patient cohorts. Human resources are stretched thin, and delivery models and intervention mix differ widely between sites. Ongoing research is needed to identify best-practice service delivery models. PMID:21346585
Validation of laboratory-scale recycling test method of paper PSA label products
Carl Houtman; Karen Scallon; Richard Oldack
2008-01-01
Starting with test methods and a specification developed by the U.S. Postal Service (USPS) Environmentally Benign Pressure Sensitive Adhesive Postage Stamp Program, a laboratory-scale test method and a specification were developed and validated for pressure-sensitive adhesive labels. By comparing results from this new test method and pilot-scale tests, which have been...
NASA Astrophysics Data System (ADS)
Haslinger, Edith; Goldbrunner, Johann; Dietzel, Martin; Leis, Albrecht; Boch, Ronny; Knauss, Ralf; Hippler, Dorothee; Shirbaz, Andrea; Fröschl, Heinz; Wyhlidal, Stefan; Plank, Otmar; Gold, Marlies; Elster, Daniel
2017-04-01
During the exploitation of thermal water for use in a geothermal plant, a series of hydrochemical reactions such as dissolution and precipitation processes (scaling) or corrosion processes can be caused by pressure and temperature changes and by degassing of the thermal water. Operators of hydrogeothermal plants are often confronted with precipitates in water-bearing parts of their plant, such as heat exchangers and pipes, which result in considerable costs for cleaning or remediation or for the use of inhibitors. In the worst case, scaling and corrosion can lead to the abandonment of the system. The effects of the fluids on the technical facilities of hydrogeothermal plants are usually difficult to predict. This applies in particular to the long-term effects of exploitation and use, as well as to the reinjection of the fluids. In publications and guides on thermal water use in Austria, it is emphasized that the hydrochemical conditions have to be checked during the operation of geothermal plants, but precise directives, and thus guidance for operators as well as scientific investigations on this topic, are almost completely missing today. The aim of the research project NoScale was the assessment of deep thermal water bodies in different geological reservoirs in Austria and Bavaria, and therefore with different hydrochemical compositions, with regard to their scaling and corrosion potential in geothermal use. In the course of parallel chemical and mineralogical laboratory investigations, conclusions were drawn about the effects of thermal water on different technical components of hydrogeothermal plants, and a data basis for the model simulation of the relevant hydrochemical processes was developed. Subsequently, on the basis of detailed hydrochemical model calculations, possible effects of the use of the thermal waters on the technical components of the geothermal plants were shown. This approach of combining complex process modeling, detailed laboratory studies, and experimental approaches has not been followed in Austria so far. The research results contribute significantly to increased awareness of the potential risks of the exploitation and use of thermal water. Thus, the project NoScale supports operators of hydrogeothermal plants in assessing risks of scaling and corrosion already in the pre-drilling phase, which leads to a much more energy- and cost-efficient operation.
A boundary condition for layer to level ocean model interaction
NASA Astrophysics Data System (ADS)
Mask, A.; O'Brien, J.; Preller, R.
2003-04-01
A radiation boundary condition based on vertical normal modes is introduced to allow a physical transition between nested/coupled ocean models that have differing vertical structures and/or differing physics. In this particular study, a fine-resolution regional/coastal sigma-coordinate Naval Coastal Ocean Model (NCOM) has been successfully nested within a coarse-resolution (in the horizontal and vertical) basin-scale NCOM and a coarse-resolution basin-scale Navy Layered Ocean Model (NLOM). Both of these models were developed at the Naval Research Laboratory (NRL) at Stennis Space Center, Mississippi, USA. This new method, which decomposes the vertical structure of the models into barotropic and baroclinic modes, gives improved results in the coastal domain over Orlanski radiation boundary conditions for the test cases. The principal reason for the improvement is that the radiation boundary condition is applied to each mode individually; therefore, the packet of information passing through the boundary is allowed to have multiple phase speeds instead of a single phase speed. Allowing multiple phase speeds reduces boundary reflections, thus improving results.
Wang, Yongjiang; Pang, Li; Liu, Xinyu; Wang, Yuansheng; Zhou, Kexun; Luo, Fei
2016-04-01
A comprehensive model of thermal balance and degradation kinetics was developed to determine the optimal reactor volume and insulation material. Biological heat production and five channels of heat loss were considered in the thermal balance model for a representative reactor. Degradation kinetics was developed to make the model applicable to different types of substrates. Simulation of the model showed that the internal energy accumulation of the compost was the most significant heat loss channel, followed by heat loss through the reactor wall and the latent heat of water evaporation. A lower proportion of heat loss occurred through the reactor wall when the reactor volume was larger. Insulating materials with low densities and low conductive coefficients were more desirable for building small reactor systems. The model developed could be used to determine the optimal reactor volume and insulation material needed before the fabrication of a lab-scale composting system. Copyright © 2016 Elsevier Ltd. All rights reserved.
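A hedged sketch of a lumped thermal balance of the type described, with biological heat production set against a few assumed loss channels (wall conduction, evaporation, exhaust air); the coefficients are placeholders rather than the paper's calibrated values, and the degradation-kinetics coupling is omitted.

```python
def heat_balance_rate(T, T_amb, params):
    """Illustrative lumped energy balance for a composting reactor: biological heat
    production minus wall conduction, latent heat of evaporation, and sensible heat
    carried out by the exhaust air. Returns the net accumulation rate in W."""
    q_bio = params["bio_rate"]                                     # W, microbial heat release
    q_wall = params["U_wall"] * params["A_wall"] * (T - T_amb)     # W, conduction through wall
    q_latent = params["m_evap"] * 2.45e6                           # W, evaporation (J/kg latent heat)
    q_exhaust = params["m_air"] * 1006.0 * (T - T_amb)             # W, sensible loss with exhaust air
    return q_bio - q_wall - q_latent - q_exhaust

p = {"bio_rate": 150.0, "U_wall": 0.4, "A_wall": 1.2,   # assumed insulation U-value and wall area
     "m_evap": 2.0e-5, "m_air": 1.0e-3}                 # assumed evaporation and air flow (kg/s)
print(f"net accumulation: {heat_balance_rate(55.0, 20.0, p):.1f} W")
```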
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shull, H.E.
The objective of the project was to investigate the economic feasibility of converting potato waste to fuel alcohol. The source of potato starch was Troyer Farms Potato Chips. Experimental work was carried out both at the laboratory scale and in a larger pilot-scale batch operation at a decommissioned waste water treatment building on campus. The laboratory scale work was considerably more extensive than originally planned, resulting in much-improved scientific work. The pilot scale facility has been completed and operated successfully. In contrast, the analysis of the economic feasibility of commercial production has not yet been completed. The project was brought to a close with the successful demonstration of the fermentation and distillation using the large scale facilities described previously. Two batches of mash were cooked using the procedures established in support of the laboratory scale work. One of the batches was fermented using the optimum values of the seven controlled factors as predicted by the laboratory scale application of the Box-Wilson design. The other batch was fermented under conditions derived from Mr. Rouse's interpretation of his long sequence of laboratory results. He was gratified to find that his commitment to the Box-Wilson experiments was justified: the productivity of the Box-Wilson design was greater. The difference between the performance of the two fermentors (one stirred, one not) has not yet been established. Both batches were then distilled together, demonstrating the satisfactory performance of the column still. 4 references.
Posttest destructive examination of the steel liner in a 1:6-scale reactor containment model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, L.D.
A 1:6-scale model of a nuclear reactor containment was built and tested at Sandia National Laboratories as part of a research program sponsored by the Nuclear Regulatory Commission to investigate containment behavior under overpressure. The test was terminated due to leakage from a large tear in the steel liner. A limited destructive examination of the liner and anchorage system, conducted to gain information about the failure mechanism, is described. Sections of liner were removed in areas where liner distress was evident or where large strains were indicated by instrumentation during the test. The condition of the liner, anchorage system, and concrete for each of the investigated regions is described. The probable cause of the observed posttest condition of the liner is discussed.
Large Eddy Simulation of a Turbulent Jet
NASA Technical Reports Server (NTRS)
Webb, A. T.; Mansour, Nagi N.
2001-01-01
Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.
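For readers unfamiliar with the subgrid closure, the sketch below shows the static Smagorinsky eddy viscosity on which the dynamic model builds; the dynamic procedure used in the study computes the coefficient locally from a test filter instead of fixing it, so this is only an illustrative simplification evaluated on a 2D slice.

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.17):
    """Static Smagorinsky subgrid viscosity nu_t = (Cs*Delta)^2 * |S| on a 2D slice.
    The dynamic model referenced above computes Cs via a test filter; a fixed Cs
    is used here purely for illustration."""
    du = np.gradient(u, dx)                  # [du/dx0, du/dx1]
    dv = np.gradient(v, dx)
    s11, s22 = du[0], dv[1]
    s12 = 0.5 * (du[1] + dv[0])
    strain_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))   # |S| = sqrt(2 Sij Sij)
    return (cs * dx) ** 2 * strain_mag

rng = np.random.default_rng(0)
u, v = rng.standard_normal((64, 64)), rng.standard_normal((64, 64))
print(f"mean nu_t = {smagorinsky_nu_t(u, v, dx=0.01).mean():.3e}")
```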
Modelling deformation and fracture of Gilsocarbon graphite subject to service environments
NASA Astrophysics Data System (ADS)
Šavija, Branko; Smith, Gillian E.; Heard, Peter J.; Sarakinou, Eleni; Darnbrough, James E.; Hallam, Keith R.; Schlangen, Erik; Flewitt, Peter E. J.
2018-02-01
Commercial graphites are used for a wide range of applications. For example, Gilsocarbon graphite is used within the reactor core of advanced gas-cooled reactors (AGRs, UK) as a moderator. In service, the mechanical properties of the graphite are changed as a result of neutron irradiation induced defects and porosity arising from radiolytic oxidation. In this paper, we discuss measurements undertaken of mechanical properties at the micro-length-scale for virgin and irradiated graphite. These data provide the necessary inputs to an experimentally-informed model that predicts the deformation and fracture properties of Gilsocarbon graphite at the centimetre length-scale, which is commensurate with laboratory test specimen data. The model predictions provide an improved understanding of how the mechanical properties and fracture characteristics of this type of graphite change as a result of exposure to the reactor service environment.
The dark side of cosmology: dark matter and dark energy.
Spergel, David N
2015-03-06
A simple model with only six parameters (the age of the universe, the density of atoms, the density of matter, the amplitude of the initial fluctuations, the scale dependence of this amplitude, and the epoch of first star formation) fits all of our cosmological data. Although simple, this standard model is strange. The model implies that most of the matter in our Galaxy is in the form of "dark matter," a new type of particle not yet detected in the laboratory, and most of the energy in the universe is in the form of "dark energy," energy associated with empty space. Both dark matter and dark energy require extensions to our current understanding of particle physics or point toward a breakdown of general relativity on cosmological scales. Copyright © 2015, American Association for the Advancement of Science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clauss, D.B.
A 1:6-scale model of a reinforced concrete containment building was pressurized incrementally to failure at a remote site at Sandia National Laboratories. The response of the model was recorded with more than 1000 channels of data (primarily strain and displacement measurements) at 37 discrete pressure levels. The primary objective of this test was to generate data that could be used to validate methods for predicting the performance of containment buildings subject to loads beyond their design basis. Extensive analyses were conducted before the test to predict the behavior of the model. Ten organizations in Europe and the US conducted independent analyses of the model and contributed to a report on the pretest predictions. Predictions included structural response at certain predetermined locations in the model as well as capacity and failure mode. This report discusses comparisons between the pretest predictions and the experimental results. Posttest evaluations that were conducted to provide additional insight into the model behavior are also described. The significance of the analysis and testing of the 1:6-scale model to performance evaluations of actual containments subject to beyond design basis loads is also discussed. 70 refs., 428 figs., 24 tabs.
Perkins, Kimberlie; Johnson, Brittany D.; Mirus, Benjamin B.
2014-01-01
During 2013–14, the USGS, in cooperation with the U.S. Department of Energy, focused on further characterization of the sedimentary interbeds below the future site of the proposed Remote Handled Low-Level Waste (RHLLW) facility, which is intended for the long-term storage of low-level radioactive waste. Twelve core samples from the sedimentary interbeds from a borehole near the proposed facility were collected for laboratory analysis of hydraulic properties, which also allowed further testing of the property-transfer modeling approach. For each core sample, the steady-state centrifuge method was used to measure relations between matric potential, saturation, and conductivity. These laboratory measurements were compared to water-retention and unsaturated hydraulic conductivity parameters estimated using the established property-transfer models. For each core sample obtained, the agreement between measured and estimated hydraulic parameters was evaluated quantitatively using the Pearson correlation coefficient (r). The highest correlation is for saturated hydraulic conductivity (Ksat) with an r value of 0.922. The saturated water content (θsat) also exhibits a strong linear correlation with an r value of 0.892. The curve shape parameter (λ) has an r value of 0.731, whereas the curve scaling parameter (ψ0) has the lowest r value of 0.528. The r values demonstrate that model predictions correspond well to the laboratory-measured properties for most parameters, which supports the value of extending this approach for quantifying unsaturated hydraulic properties at various sites throughout INL.
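The r values quoted are ordinary Pearson correlation coefficients between measured and model-estimated parameters; a minimal sketch with hypothetical numbers (not the INL core data) is below.

```python
import numpy as np

# Hypothetical measured vs. property-transfer-model estimates of log10(Ksat) for 12 cores.
measured  = np.array([-4.1, -5.0, -3.8, -4.6, -5.3, -4.9, -4.2, -3.9, -5.1, -4.4, -4.7, -5.2])
estimated = np.array([-4.0, -4.8, -3.9, -4.5, -5.5, -4.7, -4.3, -4.0, -4.9, -4.6, -4.6, -5.0])

r = np.corrcoef(measured, estimated)[0, 1]   # Pearson correlation coefficient
print(f"r = {r:.3f}")
```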
Heavy-lifting of gauge theories by cosmic inflation
NASA Astrophysics Data System (ADS)
Kumar, Soubhik; Sundrum, Raman
2018-05-01
Future measurements of primordial non-Gaussianity can reveal cosmologically produced particles with masses of order the inflationary Hubble scale and their interactions with the inflaton, giving us crucial insights into the structure of fundamental physics at extremely high energies. We study gauge-Higgs theories that may be accessible in this regime, carefully imposing the constraints of gauge symmetry and its (partial) Higgsing. We distinguish two types of Higgs mechanisms: (i) a standard one in which the Higgs scale is constant before and after inflation, where the particles observable in non-Gaussianities are far heavier than can be accessed by laboratory experiments, perhaps associated with gauge unification, and (ii) a "heavy-lifting" mechanism in which couplings to curvature can result in Higgs scales of order the Hubble scale during inflation while reducing to far lower scales in the current era, where they may now be accessible to collider and other laboratory experiments. In the heavy-lifting option, renormalization-group running of terrestrial measurements yields predictions for cosmological non-Gaussianities. If the heavy-lifted gauge theory suffers a hierarchy problem, as does the Standard Model, confirming such predictions would demonstrate a striking violation of the Naturalness Principle. While observing gauge-Higgs sectors in non-Gaussianities will be challenging given the constraints of cosmic variance, we show that it may be possible with reasonable precision given favorable couplings to the inflationary dynamics.
NASA Astrophysics Data System (ADS)
Pini, Ronny; Benson, Sally M.
2017-10-01
We report results from an experimental investigation on the hysteretic behaviour of the capillary pressure curve for the supercritical CO2-water system in a Berea Sandstone core. Previous observations have highlighted the importance of subcore-scale capillary heterogeneity in developing local saturations during drainage; we show in this study that the same is true for the imbibition process. Spatially distributed drainage and imbibition scanning curves were obtained for mm-scale subsets of the rock sample non-invasively using X-ray CT imagery. Core- and subcore-scale measurements are well described using the Brooks-Corey formalism, which uses a linear trapping model to compute mobile saturations during imbibition. Capillary scaling yields two separate universal drainage and imbibition curves that are representative of the full subcore-scale data set. This enables accurate parameterisation of rock properties at the subcore-scale in terms of capillary scaling factors and permeability, which in turn serve as effective indicators of heterogeneity at the same scale even when hysteresis is a factor. As such, the proposed core-analysis workflow is quite general and provides the required information to populate numerical models that can be used to extend core-flooding experiments to conditions prevalent in the subsurface, which would otherwise not be attainable in the laboratory.
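A hedged sketch of the Brooks-Corey drainage parameterisation and a linear trapping rule of the kind mentioned above; the entry pressure, pore-size index, residual saturation, and trapping slope are assumed placeholders rather than the fitted Berea values.

```python
def brooks_corey_pc(sw, pe, lam, swr=0.1):
    """Drainage capillary pressure Pc = Pe * Se^(-1/lambda), Se = (Sw - Swr)/(1 - Swr)."""
    se = (sw - swr) / (1.0 - swr)
    return pe * se ** (-1.0 / lam)

def trapped_gas_linear(s_gas_initial, trapping_slope=0.5):
    """Linear trapping model: residual (trapped) CO2 saturation proportional to the gas
    saturation at flow reversal. The slope is an assumed, rock-specific value."""
    return trapping_slope * s_gas_initial

sw = 0.55
print(f"Pc(Sw={sw}) = {brooks_corey_pc(sw, pe=5.0e3, lam=2.0):.0f} Pa")   # assumed Pe, lambda
print(f"trapped gas after imbibition: {trapped_gas_linear(1.0 - sw):.2f}")
```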
3D-PTV around Operational Wind Turbines
NASA Astrophysics Data System (ADS)
Brownstein, Ian; Dabiri, John
2016-11-01
Laboratory studies and numerical simulations of wind turbines are typically constrained in how they can inform operational turbine behavior. Laboratory experiments are usually unable to match both pertinent parameters of full-scale wind turbines, the Reynolds number (Re) and tip speed ratio, using scaled-down models. Additionally, numerical simulations of the flow around wind turbines are constrained by the large domain size and high Re that need to be simulated. When these simulations are performed, turbine geometry is typically simplified, resulting in flow structures near the rotor not being well resolved. In order to bypass these limitations, a quantitative flow visualization method was developed to take in situ measurements of the flow around wind turbines at the Field Laboratory for Optimized Wind Energy (FLOWE) in Lancaster, CA. The apparatus constructed was able to seed an approximately 9m x 9m x 5m volume in the wake of the turbine using artificial snow. Quantitative measurements were obtained by tracking the evolution of the artificial snow using a four camera setup. The methodology for calibrating and collecting data, as well as preliminary results detailing the flow around a 2kW vertical-axis wind turbine (VAWT), will be presented.
Quantum optomechanical piston engines powered by heat
NASA Astrophysics Data System (ADS)
Mari, A.; Farace, A.; Giovannetti, V.
2015-09-01
We study two different models of optomechanical systems where a temperature gradient between two radiation baths is exploited for inducing self-sustained coherent oscillations of a mechanical resonator. From a thermodynamic perspective, such systems represent quantum instances of self-contained thermal machines converting heat into a periodic mechanical motion and thus they can be interpreted as nano-scale analogues of macroscopic piston engines. Our models are potentially suitable for testing fundamental aspects of quantum thermodynamics in the laboratory and for applications in energy efficient nanotechnology.
Observer-based monitoring of heat exchangers.
Astorga-Zaragoza, Carlos-Manuel; Alvarado-Martínez, Víctor-Manuel; Zavala-Río, Arturo; Méndez-Ocaña, Rafael-Maxim; Guerrero-Ramírez, Gerardo-Vicente
2008-01-01
The goal of this work is to provide a method for monitoring performance degradation in counter-flow double-pipe heat exchangers. The overall heat transfer coefficient is estimated by an adaptive observer and monitored in order to infer when the heat exchanger needs preventive or corrective maintenance. A simplified mathematical model is used to synthesize the adaptive observer and a more complex model is used for simulation. The reliability of the proposed method was demonstrated via numerical simulations and laboratory experiments with a bench-scale pilot plant.
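A minimal sketch of the estimation idea, assuming a heavily simplified lumped exchanger and a gradient-type adaptation law; it is not the authors' observer design, only an illustration of how an overall heat transfer coefficient estimate can be driven toward the value implied by measurements.

```python
import numpy as np

def estimate_u(t_end=2000.0, dt=1.0):
    """Drive an estimate of the overall heat transfer coefficient U toward the value
    implied by a 'measured' heat duty, using a simple gradient adaptation law on a
    heavily simplified lumped exchanger model (illustrative only)."""
    U_true, U_hat, gamma = 900.0, 500.0, 1.0e-5     # W/m2K truth, initial guess, adaptation gain
    A = 2.0                                          # m2, assumed exchange area
    T_hot_in, T_cold_in = 80.0, 20.0                 # degC, assumed inlet temperatures
    dT = T_hot_in - T_cold_in                        # crude stand-in for the mean temperature difference
    for _ in np.arange(0.0, t_end, dt):
        q_meas = U_true * A * dT                     # stands in for the measured duty
        q_hat = U_hat * A * dT                       # observer prediction
        U_hat += gamma * (q_meas - q_hat) * A * dT * dt   # gradient update on the prediction error
    return U_hat

print(f"estimated U ~= {estimate_u():.1f} W/m2K")
```

A sustained drift of the estimate away from its clean-exchanger value is what would flag fouling and the need for maintenance.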
2016-09-01
Change in Weather Research and Forecasting (WRF) Model Accuracy with Age of Input Data from the Global Forecast System (GFS), by JL Cogan (laboratory report; only snippet fragments of the analysis are preserved in this record). As expected, accuracy generally tended to decline as the large-scale data aged, but appeared to improve slightly as the age of the large ... Table 7: minimum and maximum mean RMDs for each WRF time (or GFS data age) category.
NASA Astrophysics Data System (ADS)
Rapaka, Narsimha R.; Sarkar, Sutanu
2016-10-01
A sharp-interface Immersed Boundary Method (IBM) is developed to simulate density-stratified turbulent flows in complex geometry using a Cartesian grid. The basic numerical scheme corresponds to a central second-order finite difference method, third-order Runge-Kutta integration in time for the advective terms and an alternating direction implicit (ADI) scheme for the viscous and diffusive terms. The solver developed here allows for both direct numerical simulation (DNS) and large eddy simulation (LES) approaches. Methods to enhance the mass conservation and numerical stability of the solver to simulate high Reynolds number flows are discussed. Convergence with second-order accuracy is demonstrated in flow past a cylinder. The solver is validated against past laboratory and numerical results in flow past a sphere, and in channel flow with and without stratification. Since topographically generated internal waves are believed to result in a substantial fraction of turbulent mixing in the ocean, we are motivated to examine oscillating tidal flow over a triangular obstacle to assess the ability of this computational model to represent nonlinear internal waves and turbulence. Results in laboratory-scale (order of a few meters) simulations show that the wave energy flux, mean flow properties and turbulent kinetic energy agree well with our previous results obtained using a body-fitted grid (BFG). The deviation of IBM results from BFG results is found to increase with increasing nonlinearity in the wave field that is associated with either increasing steepness of the topography relative to the internal wave propagation angle or with the amplitude of the oscillatory forcing. LES is performed on a large-scale ridge, of the order of a few kilometers in length, that has the same geometrical shape and same non-dimensional values for the governing flow and environmental parameters as the laboratory-scale topography, but significantly larger Reynolds number. A non-linear drag law is utilized in the large-scale application to parameterize turbulent losses due to bottom friction at high Reynolds number. The large-scale problem exhibits qualitatively similar behavior to the laboratory-scale problem with some differences: slightly larger intensification of the boundary flow and somewhat higher non-dimensional values for the energy fluxed away by the internal wave field. The phasing of wave breaking and turbulence exhibits little difference between small-scale and large-scale obstacles as long as the important non-dimensional parameters are kept the same. We conclude that IBM is a viable approach to the simulation of internal waves and turbulence in high Reynolds number stratified flows over topography.
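A hedged, one-dimensional sketch of the time discretization described above: an explicit third-order Runge-Kutta treatment of advection combined with an implicit solve for diffusion (a dense periodic solve stands in for the ADI sweeps used in 3D); grid, velocity, and viscosity values are illustrative.

```python
import numpy as np

def step(phi, u, nu, dx, dt):
    """One time step of 1D advection-diffusion: SSP-RK3 for the advective term,
    backward-Euler solve for diffusion (stand-in for the ADI sweeps used in 3D)."""
    def adv(f):                         # upwind advective tendency -u * dphi/dx (u > 0)
        return -u * (f - np.roll(f, 1)) / dx
    # Explicit SSP-RK3 stages for advection.
    f1 = phi + dt * adv(phi)
    f2 = 0.75 * phi + 0.25 * (f1 + dt * adv(f1))
    f_adv = phi / 3.0 + 2.0 / 3.0 * (f2 + dt * adv(f2))
    # Implicit diffusion: (I - dt*nu*L) phi_new = f_adv, with periodic Laplacian L.
    n = phi.size
    L = (np.roll(np.eye(n), 1, 1) - 2 * np.eye(n) + np.roll(np.eye(n), -1, 1)) / dx**2
    return np.linalg.solve(np.eye(n) - dt * nu * L, f_adv)

x = np.linspace(0.0, 1.0, 128, endpoint=False)
phi = np.exp(-200.0 * (x - 0.3) ** 2)            # initial Gaussian pulse
for _ in range(100):
    phi = step(phi, u=1.0, nu=1.0e-3, dx=x[1] - x[0], dt=2.0e-3)
print(f"peak after 100 steps: {phi.max():.3f}")
```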
NASA Astrophysics Data System (ADS)
Huang, M.
2016-12-01
Earth System models (ESMs) are effective tools for investigating the water-energy-food system interactions under climate change. In this presentation, I will introduce research efforts at the Pacific Northwest National Laboratory towards quantifying impacts of LULCC on the water-energy-food nexus in a changing climate using an integrated regional Earth system modeling framework: the Platform for Regional Integrated Modeling and Analysis (PRIMA). Two studies will be discussed to showcase the capability of PRIMA: (1) quantifying changes in terrestrial hydrology over the Conterminous US (CONUS) from 2005 to 2095 using the Community Land Model (CLM) driven by high-resolution downscaled climate and land cover products from PRIMA, which was designed for assessing the impacts of and potential responses to climate and anthropogenic changes at regional scales; (2) applying CLM over the CONUS to provide the first county-scale model validation in simulating crop yields and assessing associated impacts on the water and energy budgets using CLM. The studies demonstrate the benefits of incorporating and coupling human activities into complex ESMs, and critical needs to account for the biogeophysical and biogeochemical effects of LULCC in climate impacts studies, and in designing mitigation and adaptation strategies at a scale meaningful for decision-making. Future directions in quantifying LULCC impacts on the water-energy-food nexus under a changing climate, as well as feedbacks among climate, energy production and consumption, and natural/managed ecosystems using an Integrated Multi-scale, Multi-sector Modeling framework will also be discussed.
Phosphorus transfer in surface runoff from intensive pasture systems at various scales: a review.
Dougherty, Warwick J; Fleming, Nigel K; Cox, Jim W; Chittleborough, David J
2004-01-01
Phosphorus transfer in runoff from intensive pasture systems has been extensively researched at a range of scales. However, integration of data from the range of scales has been limited. This paper presents a conceptual model of P transfer that incorporates landscape effects and reviews the research relating to P transfer at a range of scales in light of this model. The contribution of inorganic P sources to P transfer is relatively well understood, but the contribution of organic P to P transfer is still relatively poorly defined. Phosphorus transfer has been studied at laboratory, profile, plot, field, and watershed scales. The majority of research investigating the processes of P transfer (as distinct from merely quantifying P transfer) has been undertaken at the plot scale. However, there is a growing need to integrate data gathered at a range of scales so that more effective strategies to reduce P transfer can be identified. This has been hindered by the lack of a clear conceptual framework to describe differences in the processes of P transfer at the various scales. The interaction of hydrological (transport) factors with P source factors, and their relationship to scale, require further examination. Runoff-generating areas are highly variable, both temporally and spatially. Improvement in the understanding and identification of these areas will contribute to increased effectiveness of strategies aimed at reducing P transfers in runoff. A thorough consideration of scale effects using the conceptual model of P transfer outlined in this paper will facilitate the development of improved strategies for reducing P losses in runoff.
Ramesh Murthy; Greg Barron-Gafford; Philip M. Dougherty; Victor c. Engels; Katie Grieve; Linda Handley; Christie Klimas; Mark J. Postosnaks; Stanley J. Zarnoch; Jianwei Zhang
2005-01-01
We examined the effects of atmospheric vapor pressure deficit (VPD) and soil moisture stress (SMS) on leaf- and stand-level CO2 exchange in model 3-year-old coppiced cottonwood (Populus deltoides Bartr.) plantations using the large-scale, controlled environments of the Biosphere 2 Laboratory. A short-term experiment was imposed...
From drug to protein: using yeast genetics for high-throughput target discovery.
Armour, Christopher D; Lum, Pek Yee
2005-02-01
The budding yeast Saccharomyces cerevisiae has long been an effective eukaryotic model system for understanding basic cellular processes. The genetic tractability and ease of manipulation in the laboratory make yeast well suited for large-scale chemical and genetic screens. Several recent studies describing the use of yeast genetics for high-throughput drug target identification are discussed in this review.
A Multi-Scale Modeling Framework for Shear Initiated Reactions in Energetic Materials
2013-07-01
John R. Butnor; Kurt H. Johnsen; Chris A. Maier
2005-01-01
Soil CO2 efflux is a major component of net ecosystem productivity (NEP) of forest systems. Combining data from multiple researchers for larger-scale modeling and assessment will only be valid if their methodologies provide directly comparable results. We conducted a series of laboratory and field tests to assess the presence and magnitude of...
A. A. May; G. R. McMeeking; T. Lee; J. W. Taylor; J. S. Craven; I. Burling; A. P. Sullivan; S. Akagi; J. L. Collett; M. Flynn; H. Coe; S. P. Urbanski; J. H. Seinfeld; R. J. Yokelson; S. M. Kreidenweis
2014-01-01
Aerosol emissions from prescribed fires can affect air quality on regional scales. Accurate representation of these emissions in models requires information regarding the amount and composition of the emitted species. We measured a suite of submicron particulate matter species in young plumes emitted from prescribed fires (chaparral and montane ecosystems in California...
Computer Laboratory for Multi-scale Simulations of Novel Nanomaterials
2014-09-15
schemes for multiscale modeling of polymers. Permselective ion-exchange membranes for protective clothing, fuel cells, and batteries are of special ... polyelectrolyte membranes (PEM) with chemical warfare agents (CWA) and their simulants and (2) development of new simulation methods and computational ... chemical potential using gauge cell method and calculation of density profiles. However, the code does not run in parallel environments. For mesoscale
Evaluation of Laboratory Scale Testing of Tunnels and Tunnel Intersections. Volume 1
1991-11-01
Maxwell Prize Talk: Scaling Laws for the Dynamical Plasma Phenomena
NASA Astrophysics Data System (ADS)
Ryutov, D. D. (Livermore, CA 94550, USA)
2017-10-01
The scaling and similarity technique is a powerful tool for developing and testing reduced models of complex phenomena, including plasma phenomena. The technique has been successfully used in identifying appropriate simplified models of transport in quasistationary plasmas. In this talk, the similarity and scaling arguments will be applied to highly dynamical systems, in which temporal evolution of the plasma leads to a significant change of plasma dimensions, shapes, densities, and other parameters with respect to the initial state. The scaling and similarity techniques for dynamical plasma systems will be presented as a set of case studies of problems from various domains of plasma physics, beginning with collisionless plasmas, through intermediate collisionalities, to highly collisional plasmas describable by single-fluid MHD. Basic concepts of the similarity theory will be introduced along the way. Among the results discussed are: self-similarity of Langmuir turbulence driven by a hot electron cloud expanding into a cold background plasma; generation of particle beams in disrupting pinches; interference between collisionless and collisional phenomena in shock physics; similarity for liner-imploded plasmas; MHD similarities with an emphasis on the effect of small-scale (turbulent) structures on global dynamics. Relations between astrophysical phenomena and scaled laboratory experiments will be discussed.
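As a hedged illustration of the similarity arguments referred to above, the sketch below applies the familiar ideal-MHD (Euler) similarity transformation: with lengths, densities, and pressures rescaled by chosen factors, time, velocity, and magnetic field must be rescaled consistently for the equations to remain invariant (dissipative effects neglected); the numerical factors are purely illustrative.

```python
import math

def euler_similarity(scale_len, scale_rho, scale_p):
    """Ideal-MHD (Euler) similarity: with lengths scaled by a, densities by b and
    pressures by c, the equations stay invariant if time, velocity and magnetic
    field are rescaled as below (dissipation neglected)."""
    a, b, c = scale_len, scale_rho, scale_p
    return {
        "time": a * math.sqrt(b / c),
        "velocity": math.sqrt(c / b),
        "B_field": math.sqrt(c),
    }

# Example: shrink lengths by 1e-15, raise density by 1e6 and pressure by 1e10
# (purely illustrative numbers for mapping an astrophysical flow to the laboratory).
print(euler_similarity(1e-15, 1e6, 1e10))
```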
A White Paper on keV sterile neutrino Dark Matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adhikari, R.; Agostini, M.; Ky, N. Anh
We present a comprehensive review of keV-scale sterile neutrino Dark Matter, collecting views and insights from all disciplines involved—cosmology, astrophysics, nuclear, and particle physics—in each case viewed from both theoretical and experimental/observational perspectives. After reviewing the role of active neutrinos in particle physics, astrophysics, and cosmology, we focus on sterile neutrinos in the context of the Dark Matter puzzle. Here, we first review the physics motivation for sterile neutrino Dark Matter, based on challenges and tensions in purely cold Dark Matter scenarios. We then round out the discussion by critically summarizing all known constraints on sterile neutrino Dark Matter arising from astrophysical observations, laboratory experiments, and theoretical considerations. In this context, we provide a balanced discourse on the possibly positive signal from X-ray observations. Another focus of the paper concerns the construction of particle physics models, aiming to explain how sterile neutrinos of keV-scale masses could arise in concrete settings beyond the Standard Model of elementary particle physics. The paper ends with an extensive review of current and future astrophysical and laboratory searches, highlighting new ideas and their experimental challenges, as well as future perspectives for the discovery of sterile neutrinos.
A White Paper on keV sterile neutrino Dark Matter
Adhikari, R.
2017-01-13
Here, we present a comprehensive review of keV-scale sterile neutrino Dark Matter, collecting views and insights from all disciplines involved - cosmology, astrophysics, nuclear, and particle physics - in each case viewed from both theoretical and experimental/observational perspectives. After reviewing the role of active neutrinos in particle physics, astrophysics, and cosmology, we focus on sterile neutrinos in the context of the Dark Matter puzzle. First, we review the physics motivation for sterile neutrino Dark Matter, based on challenges and tensions in purely cold Dark Matter scenarios. We then round out the discussion by critically summarizing all known constraints on sterile neutrino Dark Matter arising from astrophysical observations, laboratory experiments, and theoretical considerations. In this context, we provide a balanced discourse on the possibly positive signal from X-ray observations. Another focus of the paper concerns the construction of particle physics models, aiming to explain how sterile neutrinos of keV-scale masses could arise in concrete settings beyond the Standard Model of elementary particle physics. Our paper ends with an extensive review of current and future astrophysical and laboratory searches, highlighting new ideas and their experimental challenges, as well as future perspectives for the discovery of sterile neutrinos.
A White Paper on keV sterile neutrino Dark Matter
NASA Astrophysics Data System (ADS)
Adhikari, R.; Agostini, M.; Ky, N. Anh; Araki, T.; Archidiacono, M.; Bahr, M.; Baur, J.; Behrens, J.; Bezrukov, F.; Bhupal Dev, P. S.; Borah, D.; Boyarsky, A.; de Gouvea, A.; Pires, C. A. de S.; de Vega, H. J.; Dias, A. G.; Di Bari, P.; Djurcic, Z.; Dolde, K.; Dorrer, H.; Durero, M.; Dragoun, O.; Drewes, M.; Drexlin, G.; Düllmann, Ch. E.; Eberhardt, K.; Eliseev, S.; Enss, C.; Evans, N. W.; Faessler, A.; Filianin, P.; Fischer, V.; Fleischmann, A.; Formaggio, J. A.; Franse, J.; Fraenkle, F. M.; Frenk, C. S.; Fuller, G.; Gastaldo, L.; Garzilli, A.; Giunti, C.; Glück, F.; Goodman, M. C.; Gonzalez-Garcia, M. C.; Gorbunov, D.; Hamann, J.; Hannen, V.; Hannestad, S.; Hansen, S. H.; Hassel, C.; Heeck, J.; Hofmann, F.; Houdy, T.; Huber, A.; Iakubovskyi, D.; Ianni, A.; Ibarra, A.; Jacobsson, R.; Jeltema, T.; Jochum, J.; Kempf, S.; Kieck, T.; Korzeczek, M.; Kornoukhov, V.; Lachenmaier, T.; Laine, M.; Langacker, P.; Lasserre, T.; Lesgourgues, J.; Lhuillier, D.; Li, Y. F.; Liao, W.; Long, A. W.; Maltoni, M.; Mangano, G.; Mavromatos, N. E.; Menci, N.; Merle, A.; Mertens, S.; Mirizzi, A.; Monreal, B.; Nozik, A.; Neronov, A.; Niro, V.; Novikov, Y.; Oberauer, L.; Otten, E.; Palanque-Delabrouille, N.; Pallavicini, M.; Pantuev, V. S.; Papastergis, E.; Parke, S.; Pascoli, S.; Pastor, S.; Patwardhan, A.; Pilaftsis, A.; Radford, D. C.; Ranitzsch, P. C.-O.; Rest, O.; Robinson, D. J.; Rodrigues da Silva, P. S.; Ruchayskiy, O.; Sanchez, N. G.; Sasaki, M.; Saviano, N.; Schneider, A.; Schneider, F.; Schwetz, T.; Schönert, S.; Scholl, S.; Shankar, F.; Shrock, R.; Steinbrink, N.; Strigari, L.; Suekane, F.; Suerfu, B.; Takahashi, R.; Van, N. Thi Hong; Tkachev, I.; Totzauer, M.; Tsai, Y.; Tully, C. G.; Valerius, K.; Valle, J. W. F.; Venos, D.; Viel, M.; Vivier, M.; Wang, M. Y.; Weinheimer, C.; Wendt, K.; Winslow, L.; Wolf, J.; Wurm, M.; Xing, Z.; Zhou, S.; Zuber, K.
2017-01-01
We present a comprehensive review of keV-scale sterile neutrino Dark Matter, collecting views and insights from all disciplines involved—cosmology, astrophysics, nuclear, and particle physics—in each case viewed from both theoretical and experimental/observational perspectives. After reviewing the role of active neutrinos in particle physics, astrophysics, and cosmology, we focus on sterile neutrinos in the context of the Dark Matter puzzle. Here, we first review the physics motivation for sterile neutrino Dark Matter, based on challenges and tensions in purely cold Dark Matter scenarios. We then round out the discussion by critically summarizing all known constraints on sterile neutrino Dark Matter arising from astrophysical observations, laboratory experiments, and theoretical considerations. In this context, we provide a balanced discourse on the possibly positive signal from X-ray observations. Another focus of the paper concerns the construction of particle physics models, aiming to explain how sterile neutrinos of keV-scale masses could arise in concrete settings beyond the Standard Model of elementary particle physics. The paper ends with an extensive review of current and future astrophysical and laboratory searches, highlighting new ideas and their experimental challenges, as well as future perspectives for the discovery of sterile neutrinos.
Enabling UAS Research at the NASA EAV Laboratory
NASA Technical Reports Server (NTRS)
Ippolito, Corey A.
2015-01-01
The Exploration Aerial Vehicles (EAV) Laboratory at NASA Ames Research Center leads research into intelligent autonomy and advanced control systems, bridging the gap between simulation and full-scale technology through flight test experimentation on unmanned sub-scale test vehicles.
TREATMENT OF INORGANIC CONTAMINANTS USING PERMEABLE REACTIVE BARRIERS
Permeable reactive barriers are an emerging alternative to traditional pump and treat systems for groundwater remediation. This technique has progressed rapidly over the past decade from laboratory bench-scale studies to full-scale implementation. Laboratory studies indicate the ...
Ocean Renewable Energy Research at U. New Hampshire
NASA Astrophysics Data System (ADS)
Wosnik, M.; Baldwin, K.; White, C.; Carter, M.; Gress, D.; Swift, R.; Tsukrov, I.; Kraft, G.; Celikkol, B.
2008-11-01
The University of New Hampshire (UNH) is strategically positioned to develop and evaluate wave and tidal energy extraction technologies, with much of the required test site infrastructure in place already. Laboratory facilities (wave/tow tanks, flumes, water tunnels) are used to test concept validation models (scale 1:25-100) and design models (scale 1:10-30). The UNH Open Ocean Aquaculture (OOA) site located 1.6 km south of the Isles of Shoals (10 km offshore) and the General Sullivan Bridge testing facility in the Great Bay Estuary are used to test process models (scale 1:3-15) and prototype/demonstration models (scale 1:1-4) of wave energy and tidal energy extraction devices, respectively. Both test sites are easily accessible and in close proximity to UNH. The Great Bay Estuary system is one of the most energetic tidally driven estuaries on the East Coast of the U.S. The current at the General Sullivan Bridge test facility reliably exceeds four knots over part of the tidal cycle. The OOA site is a ten-year-old, well-established offshore test facility, and is continually serviced by a dedicated research vessel and operations/diving crew. In addition to an overview of the physical resources, results of recent field testing of half- and full-scale hydrokinetic turbines, and an analysis of recent acoustic Doppler surveys of the tidal estuary will be presented.
Nonlinear plasma wave models in 3D fluid simulations of laser-plasma interaction
NASA Astrophysics Data System (ADS)
Chapman, Thomas; Berger, Richard; Arrighi, Bill; Langer, Steve; Banks, Jeffrey; Brunner, Stephan
2017-10-01
Simulations of laser-plasma interaction (LPI) in inertial confinement fusion (ICF) conditions require multi-mm spatial scales due to the typical laser beam size and durations of order 100 ps in order for numerical laser reflectivities to converge. To be computationally achievable, these scales necessitate a fluid-like treatment of light and plasma waves with a spatial grid size on the order of the light wavelength. Plasma waves experience many nonlinear phenomena not naturally described by a fluid treatment, such as frequency shifts induced by trapping, a nonlinear (typically suppressed) Landau damping, and mode couplings leading to instabilities that can cause the plasma wave to decay rapidly. These processes affect the onset and saturation of stimulated Raman and Brillouin scattering, and are of direct interest to the modeling and prediction of deleterious LPI in ICF. It is not currently computationally feasible to simulate these Debye length-scale phenomena in 3D across experimental scales. Analytically-derived and/or numerically benchmarked models of processes occurring at scales finer than the fluid simulation grid offer a path forward. We demonstrate the impact of a range of kinetic processes on plasma reflectivity via models included in the LPI simulation code pF3D. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Aghaei, A.
2017-12-01
Digital imaging and modeling of rocks and subsequent simulation of physical phenomena in digitally-constructed rock models are becoming an integral part of core analysis workflows. One of the inherent limitations of image-based analysis, at any given scale, is image resolution. This limitation becomes more evident when the rock has multiple scales of porosity, such as in carbonates and tight sandstones. Multi-scale imaging and construction of hybrid models that encompass images acquired at multiple scales and resolutions are proposed as a solution to this problem. In this study, we investigate the effect of image resolution and unresolved porosity on petrophysical and two-phase flow properties calculated based on images. A helical X-ray micro-CT scanner with a high cone-angle is used to acquire digital rock images that are free of geometric distortion. To remove subjectivity from the analyses, a semi-automated image processing technique is used to process and segment the acquired data into multiple phases. Direct and pore-network-based models are used to simulate physical phenomena and obtain absolute permeability, formation factor and two-phase flow properties such as relative permeability and capillary pressure. The effect of image resolution on each property is investigated. Finally, a hybrid network model incorporating images at multiple resolutions is built and used for simulations. The results from the hybrid model are compared against results from the model built at the highest resolution and those from laboratory tests.
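A hedged toy illustration of the resolution effect discussed above: coarsening a segmented pore image by block averaging and re-thresholding removes pores smaller than the new voxel size, so the apparent porosity drops; the synthetic image and threshold are assumptions, not the study's micro-CT data.

```python
import numpy as np

def porosity(binary_pore_image):
    """Fraction of voxels segmented as pore (1 = pore, 0 = grain)."""
    return binary_pore_image.mean()

def coarsen(image, factor):
    """Block-average then re-segment at a 50% pore fraction: pores smaller than the
    new voxel size are lost, mimicking the resolution effect discussed above."""
    n = image.shape[0] // factor * factor
    blocks = image[:n, :n].reshape(n // factor, factor, n // factor, factor)
    return (blocks.mean(axis=(1, 3)) >= 0.5).astype(int)

rng = np.random.default_rng(0)
img = (rng.random((256, 256)) < 0.18).astype(int)      # synthetic slice with 18% pore voxels
for f in (1, 2, 4, 8):
    coarse = img if f == 1 else coarsen(img, f)
    print(f"voxel size x{f}: apparent porosity = {porosity(coarse):.3f}")
```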
Laboratory research of fracture geometry in multistage HFF in triaxial state
NASA Astrophysics Data System (ADS)
Bondarenko, T. M.; Hou, B.; Chen, M.; Yan, L.
2017-05-01
Multistage hydraulic fracturing of formation (HFF) in wells with horizontal completion is an efficient method for intensifying oil extraction which, as a rule, is used to develop unconventional reservoirs. It is assumed that the complicated character of HFF fractures significantly influences the fracture geometry in the rock matrix. Numerous theoretical models proposed to predict the fracture geometry and the character of interaction of mechanical stresses in the multistage HFF have not been proved experimentally. In this paper, we present the results of laboratory modeling of the multistage HFF performed on a contemporary laboratory-scale plant in the triaxial stress state by using a gel solution as the HFF agent. As a result of the experiment, a fracturing pattern was formed in the cubic specimen of the model material. The laboratory results showed that a nearly plane fracture is formed at the first HFF stage, while a concave fracture is formed at the second HFF stage. The interaction of the stress fields created by the two principal HFF fractures results in the growth of secondary fractures whose directions turned out to be parallel to the modeled wellbore. But this stress interference leads to a decrease in the width of the second principal fracture. It was discovered that the penny-shaped fracture model is more appropriate for predicting the geometry of HFF fractures in horizontal wells than the two-dimensional models of fracture propagation (PKN model, KGD model). A computational experiment based on the boundary element method was carried out to obtain the qualitative description of the multistage HFF processes. As a result, a mechanical model of fracture propagation was constructed, which was used to obtain the mechanical stress field (the stress contrast) and the fracture opening angle distribution over fracture length and fracture orientation direction. The conclusions made in the laboratory modeling of the multistage HFF technology agree well with the conclusions made in the computational experiment. Special attention must be paid to the design of the HFF stage spacing density in the implementation of the multistage HFF in wells with horizontal completion.
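As a hedged companion to the penny-shaped model mentioned above, the sketch below evaluates the classical Sneddon opening profile for a uniformly pressurized penny-shaped crack; the elastic constants, net pressure, and radius are assumed laboratory-scale placeholders, not the experiment's measured values.

```python
import math

def penny_crack_opening(r, R, net_pressure, E, nu):
    """Opening w(r) of a penny-shaped crack of radius R under uniform net pressure
    (Sneddon): w(r) = 8*(1-nu^2)*p*R/(pi*E) * sqrt(1 - (r/R)^2)."""
    w_max = 8.0 * (1.0 - nu**2) * net_pressure * R / (math.pi * E)
    return w_max * math.sqrt(max(0.0, 1.0 - (r / R) ** 2))

# Assumed laboratory-scale values: 5 cm fracture radius, 2 MPa net pressure,
# E = 20 GPa, nu = 0.25 for the model material.
R, p, E, nu = 0.05, 2.0e6, 20.0e9, 0.25
for r in (0.0, 0.025, 0.045):
    print(f"w(r = {r:.3f} m) = {penny_crack_opening(r, R, p, E, nu) * 1e6:.1f} micron")
```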
Earthquakes in the Laboratory: Continuum-Granular Interactions
NASA Astrophysics Data System (ADS)
Ecke, Robert; Geller, Drew; Ward, Carl; Backhaus, Scott
2013-03-01
Earthquakes in nature feature tectonic plate motion at large scales of 10-100 km and local properties of the earth on the scale of the rupture width, of the order of meters. Fault gouge often fills the gap between the large slipping plates and may play an important role in the nature and dynamics of earthquake events. We have constructed a laboratory scale experiment that represents a similitude scale model of this general earthquake description. Two photo-elastic plates (50 cm x 25 cm x 1 cm) confine approximately 3000 bi-disperse nylon rods (diameters 0.12 and 0.16 cm, height 1 cm) in a gap of approximately 1 cm. The plates are held rigidly along their outer edges with one held fixed while the other edge is driven at constant speed over a range of about 5 cm. The local stresses exerted on the plates are measured using their photo-elastic response, the local relative motions of the plates, i.e., the local strains, are determined by the relative motion of small ball bearings attached to the top surface, and the configurations of the nylon rods are investigated using particle tracking tools. We find that this system has properties similar to real earthquakes and are exploring these "lab-quake" events with the quantitative tools we have developed.
A scalable, fully implicit algorithm for the reduced two-field low-β extended MHD model
Chacon, Luis; Stanier, Adam John
2016-12-01
Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.
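A minimal, hedged illustration of the Jacobian-free Newton-Krylov idea on a toy two-equation residual: the Jacobian is never formed, only finite-difference Jacobian-vector products are supplied to a Krylov solver (GMRES here); the physics-based preconditioner that makes the real algorithm scale is omitted.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    """Toy nonlinear residual F(u) = 0 standing in for the discretized two-field model."""
    return np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])

def jfnk_solve(u0, tol=1e-10, max_newton=20):
    """Newton iteration with matrix-free Jacobian-vector products fed to GMRES
    (no preconditioner here; the real solver adds a physics-based one)."""
    u = u0.astype(float).copy()
    for _ in range(max_newton):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u))
        # Matrix-free Jacobian-vector product: J v ~ (F(u + eps*v) - F(u)) / eps.
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v: (residual(u + eps * v) - F) / eps)
        du, info = gmres(J, -F)          # Krylov solve of J du = -F
        u = u + du
    return u

print(jfnk_solve(np.array([1.0, 1.0])))   # converges to (1, 2) for this toy system
```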
Predictive model for local scour downstream of hydrokinetic turbines in erodible channels
NASA Astrophysics Data System (ADS)
Musa, Mirko; Heisel, Michael; Guala, Michele
2018-02-01
A modeling framework is derived to predict the scour induced by marine hydrokinetic turbines installed on fluvial or tidal erodible bed surfaces. Following recent advances in bridge scour formulation, the phenomenological theory of turbulence is applied to describe the flow structures that dictate the equilibrium scour depth condition at the turbine base. Using scaling arguments, we link the turbine operating conditions to the flow structures and scour depth through the drag force exerted by the device on the flow. The resulting theoretical model predicts scour depth using dimensionless parameters and considers two potential scenarios depending on the proximity of the turbine rotor to the erodible bed. The model is validated at the laboratory scale with experimental data comprising the two sediment mobility regimes (clear water and live bed), different turbine configurations, hydraulic settings, bed material compositions, and migrating bedform types. The present work provides future developers of flow energy conversion technologies with a physics-based predictive formula for local scour depth beneficial to feasibility studies and anchoring system design. A potential prototype-scale deployment in a large sandy river is also considered with our model to quantify how the expected scour depth varies as a function of the flow discharge and rotor diameter.
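Purely as an illustration of how such a dimensionless predictive relation might be calibrated against laboratory data (the paper's actual formula follows from the phenomenological theory of turbulence and is not reproduced here), the sketch below fits an assumed power law between a drag-based flow parameter and the normalised scour depth; all symbols and numbers are hypothetical.

```python
import numpy as np

# Hypothetical laboratory observations: dimensionless drag-based flow parameter X vs.
# equilibrium scour depth ys normalised by rotor diameter D. Purely illustrative numbers.
drag_param = np.array([0.8, 1.1, 1.5, 2.0, 2.6, 3.1])
ys_over_d  = np.array([0.21, 0.27, 0.35, 0.44, 0.53, 0.61])

# Fit an assumed power law ys/D = a * X^b in log space (not the authors' derivation).
b, log_a = np.polyfit(np.log(drag_param), np.log(ys_over_d), 1)
a = np.exp(log_a)
print(f"calibrated: ys/D ~= {a:.3f} * X^{b:.3f}")
print(f"predicted ys/D at X = 2.3: {a * 2.3**b:.3f}")
```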
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Liange; Rutqvist, Jonny; Xu, Hao
The focus of research within the Spent Fuel and Waste Science and Technology (SFWST) Campaign (formerly called Used Fuel Disposal) is on repository-induced interactions that may affect the key safety characteristics of EBS bentonite and an argillaceous rock. These include thermal-hydrological-mechanical-chemical (THMC) process interactions that occur as a result of repository construction and waste emplacement. Some of the key questions addressed in this report include the development of fracturing in the excavation damaged zone (EDZ) and THMC effects on the near-field argillaceous rock and buffer materials and their petrophysical characteristics, particularly the impacts of temperature rise caused by waste heat. This report documents the following research activities. Section 2 presents THM model developments and validation, including modeling of underground heater experiments at the Mont Terri and Bure underground research laboratories (URLs). The heater experiments modeled are the Mont Terri FE (Full-scale Emplacement) Experiment, conducted as part of the Mont Terri Project, and the TED in situ heater test conducted in Callovo-Oxfordian claystone (COx) at the Meuse/Haute-Marne (MHM) underground research laboratory in France. The modeling of the TED heater test is one of the tasks of the DEvelopment of COupled Models and their VAlidation against EXperiments (DECOVALEX)-2019 project. Section 3 presents the development and application of thermal-hydrological-mechanical-chemical (THMC) modeling to evaluate EBS bentonite and argillite rock responses under different temperatures (100 °C and 200 °C). Model results are presented to help understand the impact of high temperatures on the properties and behavior of bentonite and argillite rock. Eventually the process model will support a robust GDSA model for repository performance assessments. Section 4 presents coupled THMC modeling for an in situ test conducted at the Grimsel underground laboratory in Switzerland in the Full-Scale Engineered Barrier Experiment Dismantling Project (FEBEX-DP). The data collected in the test after almost two decades of heating and two dismantling events provide a unique opportunity for validating coupled THMC models and enhancing our understanding of coupled THMC processes in EBS bentonite. Section 5 presents a planned large in-situ test, "HotBENT," at the Grimsel Test Site, Switzerland. In this test, a bentonite-backfilled EBS in granite will be heated up to 200 °C, so that the most relevant features of future emplacement conditions can be adequately reproduced. Lawrence Berkeley National Laboratory (LBNL) has participated very actively in the project since the beginning and has conducted scoping calculations in FY17 to facilitate the final design of the experiment. Section 6 presents LBNL's activities for modeling gas migration in clay related to Task A of the international DECOVALEX-2019 project. This is an international collaborative activity in which DOE and LBNL gain access to unique laboratory and field data on gas migration that are studied with numerical modeling to better understand the processes and to improve numerical models that could eventually be applied in the performance assessment for nuclear waste disposal in clay host rocks and bentonite backfill. Section 7 summarizes the main research accomplishments for FY17 and proposes future work activities.
The NASA Inductrack Model Rocket Launcher at the Lawrence Livermore National Laboratory
NASA Technical Reports Server (NTRS)
Tung, L. S.; Post, R. F.; Cook, E.; Martinez-Frias, J.
2000-01-01
The Inductrack magnetic levitation system, developed at the Lawrence Livermore National Laboratory, is being studied for its possible use for launching rockets. Under NASA sponsorship, a small model system is being constructed at the Laboratory to pursue key technical aspects of this proposed application. The Inductrack is a passive magnetic levitation system employing special arrays of high-field permanent magnets (Halbach arrays) on the levitating carrier, moving above a "track" consisting of a close-packed array of shorted coils interleaved with special drive coils. Halbach arrays produce a strong spatially periodic magnetic field on the front surface of the arrays, while canceling the field on their back surface. Relative motion between the Halbach arrays and the track coils induces currents in those coils. These currents levitate the carrier cart by interacting with the horizontal component of the magnetic field. Pulsed currents in the drive coils, synchronized with the motion of the carrier, interact with the vertical component of the magnetic field to provide acceleration forces. Motional stability, including resistance to both vertical and lateral aerodynamic forces, is provided by having Halbach arrays that interact with both the upper and the lower sides of the track coils. In its completed form the model system that is under construction will have a track approximately 100 meters in length along which the carrier cart will be propelled up to peak speeds of Mach 0.4 to 0.5 before being decelerated. Preliminary studies of the parameters of a full-scale system have also been made. These studies address the problems of scale-up, including means to simplify the track construction and to reduce the cost of the pulsed-power systems needed for propulsion.
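A hedged sketch of the simplified circuit estimate often quoted for Inductrack-type systems, in which the lift-to-drag ratio grows with speed roughly as omega*L/R; the coil inductance, resistance, and Halbach wavelength below are assumed illustrative values, not parameters of the NASA model system.

```python
import math

def lift_to_drag(v, wavelength, coil_inductance, coil_resistance):
    """Inductrack-type lift-to-drag ratio ~ omega*L/R with omega = 2*pi*v/lambda
    (simplified circuit picture; ignores end effects and drive-coil interaction)."""
    omega = 2.0 * math.pi * v / wavelength
    return omega * coil_inductance / coil_resistance

# Assumed track-coil parameters for illustration only.
lam, L_coil, R_coil = 0.1, 1.0e-6, 1.0e-4      # m, H, ohm
for v in (1.0, 10.0, 100.0):
    print(f"v = {v:5.1f} m/s  ->  L/D ~= {lift_to_drag(v, lam, L_coil, R_coil):.1f}")
```

The sketch only conveys the qualitative point that levitation becomes efficient above a low transition speed, consistent with the passive, speed-dependent levitation described above.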
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daily, Michael D.; Baer, Marcel D.; Mundy, Christopher J.
2016-03-10
The description of peptides and the use of molecular dynamics simulations to refine structures and investigate the dynamics on an atomistic scale are well developed. Through a consensus in this community over multiple decades, parameters were developed for molecular interactions that only require the sequence of amino-acids and an initial guess for the three-dimensional structure. The recent discovery of peptoids will require a retooling of the currently available interaction potentials in order to have the same level of confidence in the predicted structures and pathways as there is presently in the peptide counterparts. Here we present modeling of peptoids using a combination of ab initio molecular dynamics (AIMD) and atomistic resolution classical forcefield (FF) to span the relevant time and length scales. To properly account for the dominant forces that stabilize ordered structures of peptoids, namely steric, electrostatic, and hydrophobic interactions mediated through sidechain-sidechain interactions in the FF model, those have to be first mapped out using high fidelity atomistic representations. A key feature here is not only to use gas phase quantum chemistry tools, but also account for solvation effects in the condensed phase through AIMD. One major challenge is to elucidate ion binding to charged or polar regions of the peptoid and its concomitant role in the creation of local order. Here, similar to proteins, a specific ion effect is observed suggesting that both the net charge and the precise chemical nature of the ion will need to be described. MDD was supported by MS3 (Materials Synthesis and Simulation Across Scales) Initiative at Pacific Northwest National Laboratory. Research was funded by the Laboratory Directed Research and Development program at Pacific Northwest National Laboratory. MDB acknowledges support from US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Material & Engineering. CJM acknowledges support from US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. PNNL is a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy.
Growth laws for sub-delta crevasses in the Mississippi River Delta
NASA Astrophysics Data System (ADS)
Yocum, T. A.; Georgiou, I. Y.; Straub, K. M.
2017-12-01
River deltas are threatened by environmental change, including subsidence, global sea level rise, reduced sediment inputs and other local factors. In the Mississippi River Delta (MRD) these impacts are exemplified, and have led to proposed solutions to build land that include sediment diversions to reinitiate the delta cycle. Deltas have been studied extensively using numerical models, theoretical and conceptual frameworks, empirical scaling relationships, laboratory models and field observations. But predicting the future of deltas relies on field observations, which for most deltas are still lacking. Moreover, empirical and theoretical scaling laws may be influenced by the data used to develop them, while laboratory deltas may be influenced by scaling issues. Anthropogenic crevasses in the MRD are large enough to overcome limitations of laboratory deltas, and small enough to allow for rapid channel and wetland development, providing an ideal setting to investigate delta development mechanics. Here we assessed growth laws of sub-delta crevasses (SDC) in the MRD, in two experimental laboratory deltas (LD - weakly and strongly cohesive) and compared them to river-dominated deltas worldwide. Channel and delta geometry metrics for each system were obtained using geospatial tools, bathymetric datasets, sediment size, and hydrodynamic observations. Results show that SDC follow growth laws similar to large river-dominated deltas, with the exception of some that exhibit anomalous behavior with respect to the frequency and distance to a bifurcation and the fraction of wetted delta shoreline (allometry metrics). Most SDC exhibit a systematic decrease of non-dimensional channel geometries with increased bifurcation order, indicating that channels are adjusting to decreased flow after bifurcations occur, and exhibit linear trends for land allometry and width-depth ratio, although geometries decrease more rapidly per bifurcation order. Measured distance to bifurcations in SDC and LD appears longer compared to those predicted by power law metrics. With less channel splitting in some crevasses, channel extension creates wetted perimeter faster than or at the same rate as wetted area, which explains why some SDC displayed fractal growth of the wetted allometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abboud, Alexander; Guillen, Donna Post; Pokorny, Richard
At the Hanford site in the state of Washington, more than 56 million gallons of radioactive waste is stored in underground tanks. The cleanup plan for this waste is vitrification at the Waste Treatment Plant (WTP), currently under construction. At the WTP, the waste will be blended with glass-forming materials and heated to 1423 K, then poured into stainless steel canisters to cool and solidify. A fundamental understanding of the glass batch melting process is needed to optimize the process to reduce cost and decrease the life cycle of the cleanup effort. The cold cap layer that floats on the surface of the glass melt is the primary reaction zone for the feed-to-glass conversion. The conversion reactions include water release, melting of salts, evolution of batch gases, dissolution of quartz and the formation of molten glass. Obtaining efficient heat transfer to this region is crucial to achieving high rates of glass conversion. Computational fluid dynamics (CFD) modeling is being used to understand the heat transfer dynamics of the system and provide insight to optimize the process. A CFD model was developed to simulate the DM1200, a pilot-scale melter that has been extensively tested by the Vitreous State Laboratory (VSL). Electrodes are built into the melter to provide Joule heating to the molten glass. To promote heat transfer from the molten glass into the reactive cold cap layer, bubbling of the molten glass is used to stimulate forced convection within the melt pool. A three-phase volume-of-fluid approach is utilized to model the system, wherein the molten glass and cold cap regions are modeled as separate liquid phases, and the bubbling gas and plenum regions are modeled as one lumped gas phase. Modeling the entire system with a volume-of-fluid model allows the prescription of physical properties on a per-phase basis. The molten glass phase and the gas phase physical properties are obtained from previous experimental work. Finding representative properties for the cold cap region is more difficult, as this region is not a true liquid, but rather a multilayer region consisting of a porous and a foamy layer. Physical properties affecting heat transfer, namely the thermal conductivity and heat capacity, have been fit to closely match data and observations from laboratory experiments. Data from X-ray tomography and quenching of laboratory-scale cold caps provide insight into the topology of the bubble distribution within the cold cap at various temperatures. Heat transfer within the melter was validated by comparison with VSL data for the pilot-scale melter.
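A minimal sketch of how a volume-of-fluid formulation can carry per-phase properties, assuming volume-fraction weighting for conductivity and mass weighting for heat capacity; the property values and mixing rules are illustrative assumptions, not the DM1200 model's actual settings.

# Minimal sketch (assumption, not the VSL/DM1200 model itself): each cell
# stores phase volume fractions, and mixture properties are assembled per
# cell from per-phase values, which is what lets the cold cap carry its own
# fitted conductivity and heat capacity. Property values are placeholders.
import numpy as np

# Per-phase properties: [molten glass, cold cap, lumped gas] (hypothetical)
k_phase = np.array([2.0, 0.6, 0.05])            # thermal conductivity, W/(m*K)
cp_phase = np.array([1200.0, 1400.0, 1100.0])   # heat capacity, J/(kg*K)
rho_phase = np.array([2600.0, 1800.0, 0.5])     # density, kg/m^3

# Volume fractions for a few example cells (each row sums to 1).
alpha = np.array([
    [1.00, 0.00, 0.00],   # pure melt
    [0.20, 0.70, 0.10],   # cold cap region with some gas
    [0.00, 0.30, 0.70],   # foamy layer / plenum boundary
])

k_mix = alpha @ k_phase                                   # volume-weighted conductivity
rho_mix = alpha @ rho_phase
cp_mix = (alpha * rho_phase * cp_phase).sum(axis=1) / rho_mix  # mass-weighted heat capacity
print(k_mix, cp_mix)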
Modeling quiescent phase transport of air bubbles induced by breaking waves
NASA Astrophysics Data System (ADS)
Shi, Fengyan; Kirby, James T.; Ma, Gangfeng
Simultaneous modeling of both the acoustic phase and the quiescent phase of breaking-wave-induced air bubbles involves a large range of length scales, from microns to meters, and time scales, from milliseconds to seconds, and thus is computationally unaffordable in a surfzone-scale computational domain. In this study, we use an air bubble entrainment formula in a two-fluid model to predict air bubble evolution in the quiescent phase of a breaking wave event. The breaking-wave-induced air bubble entrainment is formulated by connecting the shear production at the air-water interface to the bubble number intensity with a bubble size spectrum observed in laboratory experiments. A two-fluid model is developed based on the partial differential equations of the gas-liquid mixture phase and the continuum bubble phase, which comprises multiple bubble size groups representing a polydisperse bubble population. An enhanced 2-DV VOF (Volume of Fluid) model with a k-ε turbulence closure is used to model the mixture phase. The bubble phase is governed by the advection-diffusion equations of the gas molar concentration and bubble intensity for groups of bubbles with different sizes. The model is used to simulate air bubble plumes measured in laboratory experiments. Numerical results indicate that, with an appropriate parameter in the air entrainment formula, the model is able to predict the main features of bubbly flows, as evidenced by reasonable agreement with measured void fraction. Bubbles larger than an intermediate radius of O(1 mm) make a major contribution to void fraction in the near-crest region. Smaller bubbles tend to penetrate deeper and stay longer in the water column, resulting in a significant contribution to the cross-sectional area of the bubble cloud. An underprediction of void fraction is found at the beginning of wave breaking, when large air pockets form. The core region of high void fraction predicted by the model is mislocated owing to the use of shear production in the algorithm for initial bubble entrainment. The study demonstrates a potential use of an entrainment formula in simulations of the air bubble population in a surfzone-scale domain. It also reveals some difficulties in using the two-fluid model to predict large air pockets induced by wave breaking, and suggests that it may be necessary to use a gas-liquid two-phase model as the basic framework for the mixture phase and to develop an algorithm that allows transfer of discrete air pockets to the continuum bubble phase. A more theoretically justifiable air entrainment formulation should also be developed.
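To make the polydisperse bookkeeping concrete, the sketch below sums the void-fraction contributions of several bubble size groups from their number densities and radii; the spectrum is a hypothetical placeholder rather than the measured laboratory distribution.

# Minimal sketch (illustrative assumption): each bubble size group carries
# its own number density, and group contributions to void fraction follow
# from bubble volume. The spectrum below is hypothetical.
import numpy as np

radii = np.array([0.1e-3, 0.5e-3, 1e-3, 2e-3, 5e-3])   # bubble group radii (m)
number_density = np.array([5e7, 4e6, 6e5, 8e4, 2e3])   # bubbles per m^3 (hypothetical)

group_void = number_density * (4.0 / 3.0) * np.pi * radii ** 3
total_void = group_void.sum()

for r, a in zip(radii, group_void):
    print("r = %.1f mm: void fraction contribution %.2e" % (r * 1e3, a))
print("total void fraction %.2e" % total_void)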
Heinz, Hendrik; Ramezani-Dakhel, Hadi
2016-01-21
Natural and man-made materials often rely on functional interfaces between inorganic and organic compounds. Examples include skeletal tissues and biominerals, drug delivery systems, catalysts, sensors, separation media, energy conversion devices, and polymer nanocomposites. Current laboratory techniques for monitoring and manipulating assembly on the 1 to 100 nm scale are limited, time-consuming, and costly. Computational methods have become increasingly reliable for understanding materials assembly and performance. This review explores the merit of simulations in comparison to experiment at the 1 to 100 nm scale, including connections to smaller length scales of quantum mechanics and larger length scales of coarse-grain models. First, current simulation methods and advances in the understanding of chemical bonding, in the development of force fields, and in the development of chemically realistic models are described. Then, the recognition mechanisms of biomolecules on nanostructured metals, semimetals, oxides, phosphates, carbonates, sulfides, and other inorganic materials are explained, including extensive comparisons between modeling and laboratory measurements. Depending on the substrate, the role of soft epitaxial binding mechanisms, ion pairing, hydrogen bonds, hydrophobic interactions, and conformation effects is described. Applications of the knowledge gained from simulation to predict the binding of ligands and drug molecules to inorganic surfaces, crystal growth and shape development, catalyst performance, as well as electrical properties at interfaces are examined. The quality of estimates from molecular dynamics and Monte Carlo simulations is validated in comparison to measurements, and design rules are described where available. The review further describes applications of simulation methods to polymer composite materials, surface modification of nanofillers, and interfacial interactions in building materials. The complexity of functional multiphase materials creates opportunities to further develop accurate force fields, including reactive force fields, and chemically realistic surface models, to enable materials discovery at a million times lower computational cost compared to quantum mechanical methods. The impact of modeling and simulation could be increased further by the advancement of a uniform simulation platform for organic and inorganic compounds across the periodic table and by new simulation methods to evaluate system performance in silico.
Multi-scale modeling of multi-component reactive transport in geothermal aquifers
NASA Astrophysics Data System (ADS)
Nick, Hamidreza M.; Raoof, Amir; Wolf, Karl-Heinz; Bruhn, David
2014-05-01
In deep geothermal systems, heat and chemical stresses can cause physical alterations that may have a significant effect on flow and reaction rates and, as a consequence, lead to changes in the permeability and porosity of the formations through mineral precipitation and dissolution. Large-scale modeling of reactive transport in such systems is still challenging. A large area of uncertainty is how the pore-scale information controlling flow and reaction manifests itself at larger scales. A possible choice is to use constitutive relationships relating, for example, the evolution of permeability and porosity to changes in the pore geometry. While determining such relationships through laboratory experiments may be limited, pore-network modeling provides an alternative solution. In this work, we introduce a new workflow in which a hybrid Finite-Element Finite-Volume method [1,2] and a pore-network modeling approach [3] are employed. Using the pore-scale model, relevant constitutive relations are developed. These relations are then embedded in the continuum-scale model. This approach enables us to study non-isothermal reactive transport in porous media while accounting for micro-scale features under realistic conditions. The performance and applicability of the proposed model are explored for different flow and reaction regimes. References: 1. Matthäi, S.K., et al.: Simulation of solute transport through fractured rock: a higher-order accurate finite-element finite-volume method permitting large time steps. Transport in Porous Media 83.2 (2010): 289-318. 2. Nick, H.M., et al.: Reactive dispersive contaminant transport in coastal aquifers: Numerical simulation of a reactive Henry problem. Journal of Contaminant Hydrology 145 (2012), 90-104. 3. Raoof, A., et al.: PoreFlow: A complex pore-network model for simulation of reactive transport in variably saturated porous media. Computers & Geosciences 61 (2013), 160-174.
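A minimal sketch of the workflow's central idea, assuming the pore-network output is condensed into a power-law permeability-porosity relation that the continuum model then evaluates; the exponent form and all numbers are illustrative placeholders.

# Minimal sketch (assumption): a constitutive relation k/k0 = (phi/phi0)^n is
# fitted to pore-network model output and then used by the continuum model to
# update permeability as minerals precipitate or dissolve. Numbers are
# hypothetical placeholders.
import numpy as np

# Hypothetical pore-network results: porosity and permeability pairs
phi_pnm = np.array([0.10, 0.12, 0.15, 0.18, 0.20])
k_pnm = np.array([1.0e-14, 1.9e-14, 3.9e-14, 7.1e-14, 1.0e-13])  # m^2

phi0, k0 = phi_pnm[0], k_pnm[0]
n, _ = np.polyfit(np.log(phi_pnm / phi0), np.log(k_pnm / k0), 1)

def permeability(phi):
    """Continuum-scale permeability update from the fitted power law."""
    return k0 * (phi / phi0) ** n

print("fitted exponent n = %.2f" % n)
print("k at phi=0.16: %.2e m^2" % permeability(0.16))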
Scaling depth-induced wave-breaking in two-dimensional spectral wave models
NASA Astrophysics Data System (ADS)
Salmon, J. E.; Holthuijsen, L. H.; Zijlema, M.; van Vledder, G. Ph.; Pietrzak, J. D.
2015-03-01
Wave breaking in shallow water is still poorly understood and needs to be better parameterized in 2D spectral wave models. Significant wave heights over horizontal bathymetries are typically under-predicted in locally generated wave conditions and over-predicted in non-locally generated conditions. A joint scaling dependent on both local bottom slope and normalized wave number is presented and is shown to resolve these issues. Compared to the 12 wave breaking parameterizations considered in this study, this joint scaling demonstrates significant improvements, up to ∼50% error reduction, over 1D horizontal bathymetries for both locally and non-locally generated waves. In order to account for the inherent differences between uni-directional (1D) and directionally spread (2D) wave conditions, an extension of the wave breaking dissipation models is presented. By including the effects of wave directionality, rms-errors for the significant wave height are reduced for the best performing parameterizations in conditions with strong directional spreading. With this extension, our joint scaling improves modeling skill for significant wave heights over a verification data set of 11 different 1D laboratory bathymetries, 3 shallow lakes and 4 coastal sites. The corresponding averaged normalized rms-error for significant wave height in the 2D cases varied between 8% and 27%. In comparison, using the default setting with a constant scaling, as used in most presently operating 2D spectral wave models, gave equivalent errors between 15% and 38%.
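For reference, a normalized rms error for significant wave height of the kind used to rank parameterizations can be computed as below; the exact normalization used in the study may differ, and the values are hypothetical.

# Minimal sketch (assumption about the metric's form): a normalized rms error
# for significant wave height over a set of observation points. Values are
# hypothetical placeholders, not the verification data set.
import numpy as np

hs_observed = np.array([0.62, 0.55, 0.48, 0.41, 0.35])   # m (hypothetical)
hs_modelled = np.array([0.65, 0.52, 0.50, 0.37, 0.36])   # m (hypothetical)

rmse = np.sqrt(np.mean((hs_modelled - hs_observed) ** 2))
normalized_rmse = rmse / np.sqrt(np.mean(hs_observed ** 2))
print("normalized rms error: %.1f%%" % (100.0 * normalized_rmse))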
Karanjekar, Richa V; Bhatt, Arpita; Altouqui, Said; Jangikhatoonabad, Neda; Durai, Vennila; Sattler, Melanie L; Hossain, M D Sahadat; Chen, Victoria
2015-12-01
Accurately estimating landfill methane emissions is important for quantifying a landfill's greenhouse gas emissions and power generation potential. Current models, including LandGEM and IPCC, often greatly simplify the treatment of factors like rainfall and ambient temperature, which can substantially impact gas production. The newly developed Capturing Landfill Emissions for Energy Needs (CLEEN) model aims to improve landfill methane generation estimates while still requiring inputs that are fairly easy to obtain: waste composition, annual rainfall, and ambient temperature. To develop the model, methane generation was measured from 27 laboratory-scale landfill reactors with varying waste compositions (ranging from 0% to 100%), average rainfall rates of 2, 6, and 12 mm/day, and temperatures of 20, 30, and 37°C, according to a statistical experimental design. Refuse components considered were the major biodegradable wastes (food, paper, yard/wood, and textile) as well as inert inorganic waste. Based on the data collected, a multiple linear regression equation (R² = 0.75) was developed to predict first-order methane generation rate constant values k as functions of waste composition, annual rainfall, and temperature. Because laboratory methane generation rates exceed field rates, a second scale-up regression equation for k was developed using actual gas-recovery data from 11 landfills in high-income countries with conventional operation. The CLEEN model was developed by incorporating both regression equations into a first-order decay model for estimating methane generation rates from landfills. CLEEN model values were compared to actual field data from 6 US landfills and to estimates from LandGEM and IPCC. For 4 of the 6 cases, the CLEEN model estimates were the closest to the actual values.
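A minimal sketch of the model structure described above, i.e., a regression-based rate constant k feeding a first-order decay generation curve; the regression coefficients, scale-up factor, and waste parameters are hypothetical placeholders, not the published CLEEN coefficients.

# Minimal sketch (hypothetical coefficients, not the CLEEN regressions): a
# linear regression gives a lab-scale k, a scale-up factor converts it to a
# field k, and a first-order decay model turns k into annual CH4 generation.
import numpy as np

def k_lab(food, paper, yard, textile, rainfall_mm_day, temp_c,
          coeffs=(0.002, 0.004, 0.0015, 0.001, 0.0008, 0.003, 0.001)):
    """Illustrative linear regression for the lab-scale rate constant (1/yr)."""
    b0, bf, bp, by, bt, br, bT = coeffs
    return (b0 + bf * food + bp * paper + by * yard + bt * textile
            + br * rainfall_mm_day + bT * temp_c)

def methane_generation(mass_Mg, l0_m3_per_Mg, k, years):
    """First-order decay: annual CH4 generation (m^3/yr) from waste placed at t=0."""
    t = np.arange(1, years + 1)
    return l0_m3_per_Mg * mass_Mg * k * np.exp(-k * t)

k_field = 0.3 * k_lab(food=40, paper=30, yard=20, textile=5,
                      rainfall_mm_day=6, temp_c=30)  # hypothetical scale-up factor
q = methane_generation(mass_Mg=100.0, l0_m3_per_Mg=100.0, k=k_field, years=5)
print("k_field = %.3f 1/yr, first 5 yr of CH4 (m^3/yr):" % k_field, np.round(q, 1))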
NASA Astrophysics Data System (ADS)
Burrage, Clare; Sakstein, Jeremy
2018-03-01
Theories of modified gravity, where light scalars with non-trivial self-interactions and non-minimal couplings to matter—chameleon and symmetron theories—dynamically suppress deviations from general relativity in the solar system. On other scales, the environmental nature of the screening means that such scalars may be relevant. The highly nonlinear nature of screening mechanisms means that they evade classical fifth-force searches, and there has been an intense effort towards designing new and novel tests to probe them, both in the laboratory and using astrophysical objects, and by reinterpreting existing datasets. The results of these searches are often presented using different parametrizations, which can make it difficult to compare constraints coming from different probes. The purpose of this review is to summarize the present state-of-the-art searches for screened scalars coupled to matter, and to translate the current bounds into a single parametrization to survey the state of the models. Presently, commonly studied chameleon models are well constrained, but less commonly studied models have large regions of parameter space that are still viable. Symmetron models are constrained well by astrophysical and laboratory tests, but there is a desert separating the two scales where the model is unconstrained. The coupling of chameleons to photons is tightly constrained, but the symmetron coupling has yet to be explored. We also summarize the current bounds on f(R) models that exhibit the chameleon mechanism (Hu and Sawicki models). The simplest of these are well constrained by astrophysical probes, but there are currently few reported bounds for theories with higher powers of R. The review ends by discussing the future prospects for constraining screened modified gravity models further using upcoming and planned experiments.
FY15 Report on Thermomechanical Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Francis D.; Buchholz, Stuart
2015-08-01
Sandia is participating in the third phase of a United States (US)-German Joint Project that compares constitutive models and simulation procedures on the basis of model calculations of the thermomechanical behavior and healing of rock salt (Salzer et al. 2015). The first goal of the project is to evaluate the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Among the numerical modeling tools required to address this are constitutive models that are used in computer simulations to describe the thermal, mechanical, and hydraulic behavior of the host rock under various influences and to predict this behavior over the long term. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure disposal of radioactive wastes in rock salt. Results of the Joint Project may ultimately be used to make various assertions regarding stability analysis of an underground repository in salt during the operating phase as well as the long-term integrity of the geological barrier in the post-operating phase. A primary evaluation of constitutive model capabilities comes by way of predicting large-scale field tests. The Joint Project partners decided to model Waste Isolation Pilot Plant (WIPP) Rooms B and D, which are full-scale rooms with the same dimensions. Room D deformed under natural, ambient conditions, while Room B was thermally driven by an array of waste-simulating heaters (Munson et al. 1988; 1990). Existing laboratory test data for WIPP salt were carefully scrutinized, and the partners decided that additional testing would be needed to help evaluate advanced features of the constitutive models. The German partners performed over 140 laboratory tests on WIPP salt at no charge to the US Department of Energy (DOE).
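For orientation, constitutive models of this kind typically build on a thermally activated power-law creep term; the sketch below evaluates such a generic law with hypothetical parameters, not any specific Joint Project model.

# Minimal sketch (generic, not a specific Joint Project constitutive model):
# a steady-state power-law creep rate for rock salt of the form
# strain_rate = A * exp(-Q / (R * T)) * sigma^n. Parameters are hypothetical.
import math

def creep_rate(sigma_mpa, temp_k, A=8.1e-5, n=5.0, Q=54000.0, R=8.314):
    """Steady-state creep strain rate (1/s) for an equivalent stress in MPa."""
    return A * math.exp(-Q / (R * temp_k)) * sigma_mpa ** n

for T in (300.0, 330.0):  # ambient (Room D-like) vs. heated (Room B-like)
    print("T = %.0f K, 10 MPa: %.2e 1/s" % (T, creep_rate(10.0, T)))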
NASA Astrophysics Data System (ADS)
Huang, Shiquan; Yi, Youping; Li, Pengchuan
2011-05-01
In recent years, multi-scale simulation techniques for metal forming have gained significant attention for predicting the whole deformation process and the microstructure evolution of the product. Advances in numerical simulation of metal forming at the macro-scale level are remarkable, and commercial FEM software such as Deform2D/3D has found wide application in metal forming. However, multi-scale simulation methods have seen little application because of the non-linearity of microstructure evolution during forming and the difficulty of modeling at the micro-scale level. This work deals with the modeling of microstructure evolution and a new method of multi-scale simulation of the forging process. The aviation material 7050 aluminum alloy was used as an example for modeling microstructure evolution. The corresponding thermal simulation experiments were performed on a Gleeble 1500 machine. The tested specimens were analyzed to model dislocation density and the nucleation and growth of dynamic recrystallization (DRX). A source program using the cellular automaton (CA) method was developed to simulate grain nucleation and growth, in which the change of grain topology caused by the metal deformation was considered. The physical fields at the macro-scale level, such as the temperature, stress, and strain fields obtained with the commercial software Deform 3D, are coupled with the deformation storage energy at the micro-scale level through a dislocation model to realize the multi-scale simulation. The method is illustrated by the forging process simulation of an aircraft wheel hub forging. By coupling the Deform 3D results with the CA results, the forging deformation progress and the microstructure evolution at any point of the forging could be simulated. To verify the efficiency of the simulation, experiments on aircraft wheel hub forging were performed in the laboratory, and the comparison between simulation and experimental results is discussed in detail.
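A minimal sketch of the micro-scale ingredient described above, assuming a Kocks-Mecking type dislocation density evolution whose stored energy could seed nucleation in a CA model; the constants are hypothetical, not fitted 7050 aluminum values.

# Minimal sketch (assumption, not the authors' code): a Kocks-Mecking type
# dislocation density evolution, whose stored energy could drive nucleation
# and growth in a cellular automaton DRX model coupled to macro-scale fields.
import numpy as np

def dislocation_density(strain, rho0=1e12, k1=2e8, k2=10.0):
    """Integrate d(rho)/d(eps) = k1*sqrt(rho) - k2*rho with forward Euler."""
    rho = np.empty_like(strain)
    rho[0] = rho0
    for i in range(1, strain.size):
        d_eps = strain[i] - strain[i - 1]
        rho[i] = rho[i - 1] + (k1 * np.sqrt(rho[i - 1]) - k2 * rho[i - 1]) * d_eps
    return rho

strain = np.linspace(0.0, 1.0, 501)
rho = dislocation_density(strain)

mu, b = 2.6e10, 2.86e-10                  # approx. shear modulus (Pa) and Burgers vector (m) for Al
stored_energy = 0.5 * mu * b ** 2 * rho   # J/m^3, the quantity that seeds CA nucleation
print("rho(eps=1) = %.2e 1/m^2, stored energy = %.1f kJ/m^3"
      % (rho[-1], stored_energy[-1] / 1e3))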
NASA Astrophysics Data System (ADS)
Trippetta, Fabio; Ruggieri, Roberta; Geremia, Davide; Brandano, Marco
2017-04-01
Understanding the hydraulic and mechanical processes that have acted in reservoir rocks, and their effect on rock properties, is of great interest to both science and industry. In this work we investigate the role of hydrocarbons in changing the petrophysical properties of rocks by merging laboratory, outcrop, and subsurface data, focusing on the carbonate-bearing Majella reservoir (Bolognano Formation). This reservoir is an interesting analogue for subsurface carbonate reservoirs and consists of high-porosity (8 to 28%) ramp calcarenites saturated by hydrocarbon in the state of bitumen at the surface. Within this lithology, clean and bitumen-bearing samples were investigated. For both groups, density, porosity, and P- and S-wave velocity at increasing confining pressure were measured, and deformation tests were conducted on cylindrical specimens with the BRAVA apparatus at the HP-HT Laboratory of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) in Rome, Italy. The petrophysical characterization shows a very good correlation between Vp, Vs, and porosity and a pressure-independent Vp/Vs ratio, while the presence of bitumen within samples increases both Vp and Vs. P-wave velocity hysteresis, measured at ambient pressure after applying 100 MPa of confining pressure, suggests an almost purely elastic behaviour for bitumen-bearing samples and a more inelastic behaviour for cleaner samples. The calculated dynamic Young's modulus is larger for bitumen-bearing samples, and these data are confirmed by cyclic deformation tests in which the same samples generally record larger strength, larger Young's modulus, and smaller permanent strain with respect to clean samples. Starting from the laboratory data, we also derived a synthetic acoustic model highlighting an increase in acoustic impedance for bitumen-bearing samples. Models were also performed simulating saturation with hydrocarbons of decreasing API gravity, showing effects on the seismic properties of the reservoir opposite to those of bitumen. In order to compare our laboratory results at a larger scale, we selected 11 outcrops of the same lithofacies as the laboratory samples, both clean and bitumen-saturated. Fracture orientations from the scan-line method are similar for the two types of outcrop and follow the same trends as literature data collected on older rocks. On the other hand, spacing data show much lower fracture density for bitumen-saturated outcrops, confirming the laboratory observations. In conclusion, the laboratory experiments highlight a more elastic behaviour for bitumen-bearing samples, and saturated outcrops are less prone to fracturing than clean outcrops. The presence of bitumen thus has a positive influence on the mechanical properties of the reservoir, while the acoustic model suggests that lighter oils should have the opposite effect. Geologically, this suggests that hydrocarbon migration in the study area predates the last stage of deformation, also giving clues about a relatively high density of the oil when deformation began.
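The dynamic moduli and acoustic impedance referred to above follow from standard elasticity relations; the sketch below applies them to hypothetical clean and bitumen-bearing velocity/density values rather than the measured Majella data.

# Minimal sketch (standard elasticity relations, placeholder values rather
# than the measured data): dynamic Young's modulus, Poisson's ratio, and
# acoustic impedance from density and P/S-wave velocities.
def dynamic_moduli(rho, vp, vs):
    """rho in kg/m^3, vp and vs in m/s; returns (E in GPa, nu, impedance)."""
    nu = (vp ** 2 - 2.0 * vs ** 2) / (2.0 * (vp ** 2 - vs ** 2))
    e_dyn = rho * vs ** 2 * (3.0 * vp ** 2 - 4.0 * vs ** 2) / (vp ** 2 - vs ** 2)
    impedance = rho * vp
    return e_dyn / 1e9, nu, impedance

# Hypothetical clean vs. bitumen-bearing calcarenite samples
for label, rho, vp, vs in [("clean", 2300.0, 3800.0, 2100.0),
                           ("bitumen-bearing", 2350.0, 4100.0, 2300.0)]:
    e, nu, z = dynamic_moduli(rho, vp, vs)
    print("%s: E_dyn = %.1f GPa, nu = %.2f, Z = %.2e kg/(m^2 s)" % (label, e, nu, z))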
An increase in aerosol burden due to the land-sea warming contrast
NASA Astrophysics Data System (ADS)
Hassan, T.; Allen, R.; Randles, C. A.
2017-12-01
Climate models simulate an increase in most aerosol species in response to warming, particularly over the tropics and Northern Hemisphere midlatitudes. This increase in aerosol burden is related to a decrease in wet removal, primarily due to reduced large-scale precipitation. Here, we show that the increase in aerosol burden, and the decrease in large-scale precipitation, is related to a robust climate change phenomenon—the land-sea warming contrast. Idealized simulations with two state-of-the-art climate models, the National Center for Atmospheric Research Community Atmosphere Model version 5 (NCAR CAM5) and the Geophysical Fluid Dynamics Laboratory Atmospheric Model 3 (GFDL AM3), show that muting the land-sea warming contrast negates the increase in aerosol burden under warming. This is related to smaller decreases in near-surface relative humidity over land and, in turn, smaller decreases in large-scale precipitation over land—especially in the NH midlatitudes. Furthermore, additional idealized simulations with an enhanced land-sea warming contrast lead to the opposite result—larger decreases in relative humidity over land, larger decreases in large-scale precipitation, and larger increases in aerosol burden. Our results, which relate the increase in aerosol burden to the robust climate projection of enhanced land warming, add confidence that a warmer world will be associated with a larger aerosol burden.
Nanocoatings for High-Efficiency Industrial and Tooling Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blau, P; Qu, J.; Higdon, C.
This industry-driven project was the result of a successful response by Eaton Corporation to a DOE/ITP Program industry call. It consisted of three phases in which ORNL participated. In addition to Eaton Corporation and ORNL (CRADA), the project team included Ames Laboratory, who developed the underlying concept for aluminum-magnesium-boron based nanocomposite coatings [1], and Greenleaf, a small tooling manufacturer in western Pennsylvania. This report focuses on the portion of this work that was conducted by ORNL in a CRADA with Eaton Corporation. A comprehensive final report for the entire effort, which ended in September 2010, has been prepared by Eaton Corporation. Phase I, “Proof of Concept,” ran for one year (September 1, 2006 to September 30, 2007), during which the applicability of AlMgB14 single-phase and nanocomposite coatings on hydraulic material coupons and components, as well as on tool inserts, was demonstrated. The coating processes used either pulsed laser deposition (PLD) or physical vapor deposition (PVD). During Phase I, ORNL conducted laboratory-scale pin-on-disk and reciprocating pin-on-flat tests of coatings produced by PLD and PVD. Non-coated M2 tool steel was used as a baseline for comparison, and the material for the sliding counterface was Type 52100 bearing steel, since it simulated the pump materials. Initial tests were run mainly in a commercial hydraulic fluid named Mobil DTE-24, but some tests were later run in a water-glycol mixture as well. A tribosystem analysis was conducted to define the operating conditions of pump components and to help develop simulative tests in Phase II. Phase II, “Coating Process Scale-up,” was intended to use scaled-up processes to generate prototype parts. This involved both PLD practices at Ames Lab and a PVD scale-up study at Eaton using its production-capable equipment. There was also a limited scale-up study at Greenleaf for the tooling application. ORNL continued to conduct friction and wear tests on process variants and developed tests to better simulate the applications of interest. ORNL also employed existing lubrication models to better understand hydraulic pump frictional behavior and test results. Phase III, “Functional Testing,” focused on finalizing the strategy for commercialization of AlMgB14 coatings for both hydraulic and tooling systems. ORNL continued to provide tribology testing and analysis support for hydraulic pump applications. This included both laboratory-scale coupon testing and the analysis of friction and wear data from full component-level tests performed at Eaton Corp. Laboratory-scale tribology test methods were used to characterize the behavior of nanocomposite coatings prior to running them in full-sized hydraulic pumps. This task also included developing tribosystem analyses, both to provide a better understanding of the performance of coated surfaces in alternate hydraulic fluids and to help design useful laboratory protocols. The analysis also included modeling the lubrication conditions and identifying the physical processes by which wear and friction of the contact interface change over time. This final report summarizes ORNL’s portion of the nanocomposite coatings development effort and presents both the data generated and the analyses that were used in the course of this effort.
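For context, pin-on-disk results are commonly reduced to a specific wear rate k = V/(F·s); the sketch below shows that reduction with hypothetical mass losses and test conditions, not the actual ORNL data or procedure.

# Minimal sketch (a generic reduction, not the ORNL test procedure): the
# specific wear rate commonly reported for pin-on-disk tests, k = V/(F*s),
# with wear volume from mass loss and density. Inputs are hypothetical.
def specific_wear_rate(mass_loss_mg, density_g_cm3, load_n, sliding_dist_m):
    """Returns wear rate in mm^3 / (N * m)."""
    wear_volume_mm3 = (mass_loss_mg / 1000.0) / density_g_cm3 * 1000.0  # mg -> g -> cm^3 -> mm^3
    return wear_volume_mm3 / (load_n * sliding_dist_m)

# Hypothetical comparison: uncoated M2 baseline vs. an AlMgB14 nanocomposite coating
for label, dm in [("uncoated M2", 2.4), ("AlMgB14 coating", 0.3)]:
    k = specific_wear_rate(mass_loss_mg=dm, density_g_cm3=7.8, load_n=10.0,
                           sliding_dist_m=1000.0)
    print("%s: k = %.2e mm^3/(N*m)" % (label, k))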
Annual Report: Carbon Capture Simulation Initiative (CCSI) (30 September 2012)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, David C.; Syamlal, Madhava; Cottrell, Roger
2012-09-30
The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that is developing and deploying state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models, with uncertainty quantification (UQ), optimization, risk analysis and decision making capabilities. The CCSI Toolset incorporates commercial and open-source software currently in use by industry and is also developing new software tools as necessary to fill technology gaps identified during execution of the project. Ultimately, the CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. CCSI is organized into 8 technical elements that fall under two focus areas. The first focus area (Physicochemical Models and Data) addresses the steps necessary to model and simulate the various technologies and processes needed to bring a new Carbon Capture and Storage (CCS) technology into production. The second focus area (Analysis & Software) is developing the software infrastructure to integrate the various components and implement the tools that are needed to make quantifiable decisions regarding the viability of new CCS technologies. CCSI also has an Industry Advisory Board (IAB). By working closely with industry from the inception of the project to identify industrial challenge problems, CCSI ensures that the simulation tools are developed for the carbon capture technologies of most relevance to industry. CCSI is led by the National Energy Technology Laboratory (NETL) and leverages the Department of Energy (DOE) national laboratories' core strengths in modeling and simulation, bringing together the best capabilities at NETL, Los Alamos National Laboratory (LANL), Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL), and Pacific Northwest National Laboratory (PNNL). The CCSI's industrial partners provide representation from the power generation industry, equipment manufacturers, technology providers and engineering and construction firms. The CCSI's academic participants (Carnegie Mellon University, Princeton University, West Virginia University, and Boston University) bring unparalleled expertise in multiphase flow reactors, combustion, process synthesis and optimization, planning and scheduling, and process control techniques for energy processes. During Fiscal Year (FY) 12, CCSI released its first set of computational tools and models. This pre-release, a year ahead of the originally planned first release, is the result of intense industry interest in getting early access to the tools and the phenomenal progress of the CCSI technical team.
These initial components of the CCSI Toolset provide new models and computational capabilities that will accelerate the commercial development of carbon capture technologies as well as related technologies, such as those found in the power, refining, chemicals, and gas production industries. The release consists of new tools for process synthesis and optimization to help identify promising concepts more quickly, new physics-based models of potential capture equipment and processes that will reduce the time to design and troubleshoot new systems, a framework to quantify the uncertainty of model predictions, and various enabling tools that provide new capabilities such as creating reduced order models (ROMs) from reacting multiphase flow simulations and running thousands of process simulations concurrently for optimization and UQ.
Computer modelling of cyclic deformation of high-temperature materials. Progress report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duesbery, M.S.; Louat, N.P.
1992-11-16
Current methods of lifetime assessment leave much to be desired. Typically, the expected life of a full-scale component exposed to a complex environment is based upon empirical interpretations of measurements performed on microscopic samples in controlled laboratory conditions. Extrapolation to the service component is accomplished by scaling laws which, if used at all, are empirical; little or no attention is paid to synergistic interactions between the different components of the real environment. With the increasingly hostile conditions which must be faced in modern aerospace applications, improvement in lifetime estimation is mandated by both cost and safety considerations. This program aims at improving current methods of lifetime assessment by building in the characteristics of the micro-mechanisms known to be responsible for damage and failure. The broad approach entails the integration and, where necessary, augmentation of the micro-scale research results currently available in the literature into a macro-scale model with predictive capability. In more detail, the program will develop a set of hierarchically structured models at different length scales, from atomic to macroscopic, at each level taking as parametric input the results of the model at the next smaller scale. In this way the known microscopic properties can be transported by systematic procedures to the unknown macro-scale region. It may not be possible to eliminate empiricism completely, because some of the quantities involved cannot yet be estimated to the required degree of precision. In this case the aim will be at least to eliminate functional empiricism.
NASA Astrophysics Data System (ADS)
Swanson, Ryan David
The advection-dispersion equation (ADE) fails to describe non-Fickian solute transport breakthrough curves (BTCs) in saturated porous media in both laboratory and field experiments, necessitating the use of other models. The dual-domain mass transfer (DDMT) model partitions the total porosity into mobile and less-mobile domains with an exchange of mass between the two domains, and this model can reproduce better fits to BTCs in many systems than ADE-based models. However, direct experimental estimation of DDMT model parameters remains elusive, and model parameters are often calculated a posteriori by an optimization procedure. Here, we investigate the use of geophysical tools (direct-current resistivity, nuclear magnetic resonance, and complex conductivity) to estimate these model parameters directly. We use two different samples of the zeolite clinoptilolite, a material shown to exhibit solute mass transfer due to a significant internal porosity, and provide the first evidence that direct-current electrical methods can track solute movement into and out of a less-mobile pore space in controlled laboratory experiments. We quantify the effects of assuming single-rate DDMT for multirate mass transfer systems. We analyze pore structures using material characterization methods (mercury porosimetry, scanning electron microscopy, and X-ray computed tomography) and compare these observations to geophysical measurements. Nuclear magnetic resonance in conjunction with direct-current resistivity measurements can constrain mobile and less-mobile porosities, but complex conductivity may have little value in relation to mass transfer, despite the hypothesis that mass transfer and complex conductivity length scales are related. Finally, we conduct a geoelectrically monitored tracer test at the Macrodispersion Experiment (MADE) site in Columbus, MS. We relate hydraulic and electrical conductivity measurements to generate a 3D hydraulic conductivity field and compare it to hydraulic conductivity fields estimated through ordinary kriging and sequential Gaussian simulation. Time-lapse electrical measurements are used to verify or dismiss aspects of breakthrough curves for different hydraulic conductivity fields. Our results quantify the potential for geophysical measurements to infer single-rate DDMT parameters, show site-specific relations between hydraulic and electrical conductivity, and track solute exchange into and out of less-mobile domains.
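A minimal sketch of single-rate DDMT in batch form, where the less-mobile concentration relaxes toward the mobile concentration at a first-order rate; porosities and the rate coefficient are hypothetical placeholders.

# Minimal sketch (assumption): batch form of single-rate dual-domain mass
# transfer, with the immobile-domain concentration relaxing toward the mobile
# concentration at rate alpha. Parameter values are hypothetical.
import numpy as np

theta_m, theta_im = 0.25, 0.15   # mobile / less-mobile porosities (hypothetical)
alpha = 0.005                    # mass-transfer rate coefficient, 1/hr (hypothetical)

dt, t_end = 0.1, 120.0           # hours
n = int(t_end / dt)
c_m = np.ones(n)                 # mobile concentration held at 1 (tracer flush-in)
c_im = np.zeros(n)               # less-mobile concentration starts solute-free

for i in range(1, n):
    # theta_im * dC_im/dt = alpha * (C_m - C_im), forward Euler
    c_im[i] = c_im[i - 1] + dt * alpha / theta_im * (c_m[i - 1] - c_im[i - 1])

# Bulk (total) concentration of the kind sensed by bulk electrical conductivity
c_bulk = (theta_m * c_m + theta_im * c_im) / (theta_m + theta_im)
print("C_im after 24 h: %.2f, after 120 h: %.2f" % (c_im[int(24 / dt)], c_im[-1]))
print("bulk concentration at end: %.2f" % c_bulk[-1])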
Let us keep observing and play in sand boxes (Henry Darcy Medal Lecture)
NASA Astrophysics Data System (ADS)
Illangasekare, T. H.
2012-04-01
Henry Darcy was a civil engineer recognized for a number of technical achievements and scientific discoveries. The sand column experiments for which he is known revealed the linear relationship that exists between fluid motion and driving forces at low velocities. Freeze and Back (1983) stated, "The experiments carried out by Darcy with the help of his assistant, Ritter, in Dijon, France in 1855 and 1856 represent the beginning of groundwater hydrology as a quantitative science." Because of the prominence given to this experiment, two important facts behind Darcy's contributions to subsurface hydrology have not received much attention. First, Darcy was not only a good engineer, but he was also a highly respected scientist whose knowledge of both the fundamentals of fluid mechanics and the natural world of geology led to better conceptualizing and quantifying of groundwater processes at scales relevant to solving practical problems. The experiments for which he is known may have already been conceived, based on his theoretical understanding, and the results were anticipated (Brown 2002). Second, Darcy, through his contributions with Dupuit, showed that they understood hydrogeology at a regional scale and developed methods for quantification at the scale of geologic strata (Ritz and Bobek, 2008). The primary thesis of this talk is that scientific contributions such as the one Darcy made require appreciation and a thorough understanding of fundamental theory coupled with observation and recording of phenomena both in nature and in the laboratory. Along with all of the significant theoretical, mathematical modeling, and computational advances we have made in the last several decades, laboratory experiments designed to observe phenomena and processes for better insight, accurate data generation, and hypothesis development are critically important to make scientific and engineering advances that address some of the emerging and societally important problems in hydrology and water resources engineering. Kleinhans et al. (2010) convincingly argued the same point, noting, "Many major issues of hydrology are open to experimental investigation." Current and emerging problems with water supply and their hydrologic implications are associated with the sustainability of water as a resource for global food production, clean water for potable use, protection of human health, and the impacts and implications of global warming and climate change on water resources. This talk will address the subsurface hydrologic science issues that are central to these problems and the role laboratory experimentation can play in helping to advance the basic knowledge. Improved understanding of the fundamental flow, transport, reactive, and biological processes that occur at the pore scale and their manifestation at different modeling and observational scales will continue to advance subsurface science. Challenges also come from the need to integrate porous media systems with bio-geochemical and atmospheric systems, requiring observing and quantifying complex phenomena across interfaces (e.g., fluid/fluid in pores to land/atmospheric in the field). This talk will discuss how carefully designed and theory-driven experiments at various test scales can play a central role in providing answers to critical scientific questions and how they will help to fill knowledge gaps. It will also be shown that careful observations will lead to the refinement of existing theories or the development of new ones.
Focusing on the subsurface, the need to keep observing through controlled laboratory experimentation at various test scales, from small cells to large sand boxes, will be emphasized. How the insights obtained from such experiments complement modeling and field investigations is highlighted through examples.
NASA Astrophysics Data System (ADS)
Ji, H.; Bhattacharjee, A.; Goodman, A.; Prager, S.; Daughton, W.; Cutler, R.; Fox, W.; Hoffmann, F.; Kalish, M.; Kozub, T.; Jara-Almonte, J.; Myers, C.; Ren, Y.; Sloboda, P.; Yamada, M.; Yoo, J.; Bale, S. D.; Carter, T.; Dorfman, S.; Drake, J.; Egedal, J.; Sarff, J.; Wallace, J.
2017-10-01
The FLARE device (Facility for Laboratory Reconnection Experiments; flare.pppl.gov) is a new laboratory experiment under construction at Princeton, with first plasmas expected in the fall of 2017, based on the design of the Magnetic Reconnection Experiment (MRX; mrx.pppl.gov) with much extended parameter ranges. Its main objective is to provide an experimental platform for studies of magnetic reconnection and related phenomena in the multiple X-line regimes directly relevant to space, solar, astrophysical and fusion plasmas. The main diagnostic is an extensive set of magnetic probe arrays, simultaneously covering multiple scales from local electron scales (∼2 mm), to intermediate ion scales (∼10 cm), and global MHD scales (∼1 m). Specific example space physics topics which can be studied on FLARE will be discussed.
High-Temperature Strain Sensing for Aerospace Applications
NASA Technical Reports Server (NTRS)
Piazza, Anthony; Richards, Lance W.; Hudson, Larry D.
2008-01-01
Thermal protection systems (TPS) and hot structures are utilizing advanced materials that operate at temperatures exceeding our current ability to measure structural performance. Robust strain sensors that operate accurately and reliably beyond 1800 F are needed but do not exist. These shortcomings hinder the ability to validate analysis and modeling techniques and to optimize structural designs. This presentation examines high-temperature strain sensing for aerospace applications and, more specifically, seeks to provide strain data for validating finite element models and thermal-structural analyses. Efforts have been made to develop sensor attachment techniques for relevant structural materials at the small-test-specimen level and to perform laboratory tests to characterize the sensors and generate corrections to apply to the indicated strains. Areas highlighted in this presentation include sensors, sensor attachment techniques, laboratory evaluation and characterization of strain measurements, and sensor use in large-scale structures.
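A minimal sketch of the kind of correction applied to indicated strains, assuming a polynomial apparent-strain (thermal output) term and a temperature-dependent gauge factor; the coefficients and readings are made-up placeholders, not the laboratory-generated corrections.

# Minimal sketch (generic gauge correction with made-up coefficients, not the
# laboratory-generated ones): subtracting thermally induced apparent strain
# and correcting gauge-factor drift to recover mechanical strain.
def corrected_strain(indicated_ue, temp_f, gf_room=2.0):
    """Indicated strain in microstrain; temperature in deg F."""
    dT = temp_f - 70.0
    apparent_ue = 0.8 * dT + 4.0e-4 * dT ** 2       # hypothetical apparent-strain polynomial
    gf_at_temp = gf_room * (1.0 - 2.0e-5 * dT)      # hypothetical gauge-factor variation
    return (indicated_ue - apparent_ue) * gf_room / gf_at_temp

print("corrected strain at 1800 F: %.0f microstrain"
      % corrected_strain(indicated_ue=3600.0, temp_f=1800.0))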
The Effect of CO2 Pressure on Chromia Scale Microstructure at 750°C
NASA Astrophysics Data System (ADS)
Pint, B. A.; Unocic, K. A.
2018-06-01
To understand and model performance in supercritical CO2 (sCO2) for high-efficiency, concentrating solar power (CSP) and fossil energy power cycles, reaction rates are compared at 750°C in 0.1 MPa CO2 and 30 MPa sCO2 as well as laboratory air as a baseline on structural materials such as Ni-based alloy 625. Due to the thin reaction products formed even after 5000 h, scanning transmission electron microscopy was used to study the Cr-rich surface oxide scale. The scales formed in CO2 and sCO2 had a much finer grain size with more voids observed in CO2. However, the observations on alloy 625 were complicated by Mo and Nb-rich precipitates in the adjacent substrate and Al internal oxidation. To simplify the system, a binary Ni-22Cr alloy was exposed for 1000 h in similar environments. After exposure in sCO2, there was an indication of carbon segregation detected on the Cr2O3 grain boundaries. After exposure in air, metallic Ni precipitates were observed in the scale that were not observed in the scale formed on alloy 625. The scale formed in air on a second Ni-22Cr model alloy with Mn and Si additions did not contain Ni precipitates, suggesting caution when drawing conclusions from model alloys.
NASA Astrophysics Data System (ADS)
Fraser, R.; Coulaud, M.; Aeschlimann, V.; Lemay, J.; Deschenes, C.
2016-11-01
With the growing proportion of intermittent energy sources such as wind and solar, hydroelectricity is becoming a first-class source of peaking energy for regulating the grid. The resulting increase in start-stop cycles may cause premature ageing of runners, both through a higher number of stress-fluctuation cycles and through higher absolute stress levels. Aiming to sustain high-quality development on fully homologous scale-model turbines, the Hydraulic Machines Laboratory (LAMH) of Laval University has developed a methodology to operate model-size turbines in transient regimes such as start-up, stop, or load rejection on its test stand. This methodology maintains a constant head while the wicket gates open or close at a speed that is representative, at model scale, of what is done on the prototype. This paper first presents the model opening speed based on dimensionless numbers, then the methodology itself and its application. Finally, both its limitations and the first results using a bulb turbine are detailed.
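A minimal sketch of one possible way to express the opening speed with a dimensionless number, assuming t* = t·sqrt(2gH)/D is matched between prototype and model; this is an assumed form, not necessarily the LAMH definition, and all dimensions are hypothetical.

# Minimal sketch (an assumed form of the dimensionless number, not necessarily
# the LAMH definition): matching t* = t * sqrt(2*g*H) / D between prototype and
# model at their respective heads gives the model-scale opening time.
import math

g = 9.81
d_proto, h_proto, t_open_proto = 6.0, 80.0, 12.0   # m, m, s (hypothetical prototype)
d_model, h_model = 0.3, 6.0                        # m, m (hypothetical model)

t_star = t_open_proto * math.sqrt(2.0 * g * h_proto) / d_proto
t_open_model = t_star * d_model / math.sqrt(2.0 * g * h_model)
print("dimensionless opening time t* = %.1f" % t_star)
print("equivalent model opening time = %.2f s" % t_open_model)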