Sample records for laboratory scale model

  1. A Simple Laboratory Scale Model of Iceberg Dynamics and its Role in Undergraduate Education

    NASA Astrophysics Data System (ADS)

    Burton, J. C.; MacAyeal, D. R.; Nakamura, N.

    2011-12-01

    Lab-scale models of geophysical phenomena have a long history in research and education. For example, at the University of Chicago, Dave Fultz developed laboratory-scale models of atmospheric flows. The results from his laboratory were so stimulating that similar laboratories were subsequently established at a number of other institutions. Today, the Dave Fultz Memorial Laboratory for Hydrodynamics (http://geosci.uchicago.edu/~nnn/LAB/) teaches general circulation of the atmosphere and oceans to hundreds of students each year. Following this tradition, we have constructed a lab model of iceberg-capsize dynamics for use in the Fultz Laboratory, which focuses on the interface between glaciology and physical oceanography. The experiment consists of a 2.5-meter-long wave tank containing water and plastic "icebergs". The motion of the icebergs is tracked using digital video. Movies can be found at: http://geosci.uchicago.edu/research/glaciology_files/tsunamigenesis_research.shtml. We have had three successful undergraduate interns with backgrounds in mathematics, engineering, and geosciences perform experiments, analyze data, and interpret results. In addition to iceberg dynamics, the wave tank has served as a teaching tool in undergraduate classes studying dam-breaking and tsunami run-up. Motivated by the relatively low cost of our apparatus (~1K-2K dollars) and the positive experiences of undergraduate students, we hope to serve as a model for undergraduate research and education that other universities may follow.
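    The density contrast mentioned above is what makes the plastic blocks float like real icebergs; a minimal sketch of the Archimedes relation involved (the density values are standard textbook figures, not measurements from this experiment):

```python
# Sketch: why the plastic "icebergs" must match ice's density ratio.
# Densities below are generic textbook values, not the experiment's.
RHO_ICE = 917.0          # kg/m^3, glacial ice
RHO_FRESHWATER = 1000.0  # kg/m^3

def submerged_fraction(rho_body, rho_fluid):
    """Archimedes: a freely floating block rides with this fraction of
    its height below the waterline."""
    return rho_body / rho_fluid

# A plastic with ice's density ratio floats ~92% submerged, like a berg.
print(submerged_fraction(RHO_ICE, RHO_FRESHWATER))  # 0.917
```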

  2. Validation of mathematical model for CZ process using small-scale laboratory crystal growth furnace

    NASA Astrophysics Data System (ADS)

    Bergfelds, Kristaps; Sabanskis, Andrejs; Virbulis, Janis

    2018-05-01

    This work focuses on the modelling of a small-scale laboratory NaCl-RbCl crystal growth furnace. First steps towards fully transient simulations are taken in the form of stationary simulations that optimize material properties to match the model to experimental conditions. For this purpose, simulation software primarily used for modelling the industrial-scale silicon crystal growth process was successfully applied. Finally, transient simulations of the crystal growth are presented, showing sufficient agreement with experimental results.

  3. Laboratory and theoretical models of planetary-scale instabilities and waves

    NASA Technical Reports Server (NTRS)

    Hart, John E.; Toomre, Juri

    1990-01-01

    Meteorologists and planetary astronomers interested in large-scale planetary and solar circulations recognize the importance of rotation and stratification in determining the character of these flows. In the past it has been impossible to accurately model the effects of sphericity on these motions in the laboratory because of the invariant relationship between the uni-directional terrestrial gravity and the rotation axis of an experiment. Researchers studied motions of rotating convecting liquids in spherical shells using electrohydrodynamic polarization forces to generate radial gravity, and hence centrally directed buoyancy forces, in the laboratory. The Geophysical Fluid Flow Cell (GFFC) experiments performed on Spacelab 3 in 1985 were analyzed. Recent efforts at interpretation led to numerical models of rotating convection with an aim to understand the possible generation of zonal banding on Jupiter and the fate of banana cells in rapidly rotating convection as the heating is made strongly supercritical. In addition, efforts to pose baroclinic wave experiments for future space missions using a modified version of the 1985 instrument led to theoretical and numerical models of baroclinic instability. Rather surprising properties were discovered, which may be useful in generating rational (rather than artificially truncated) models for nonlinear baroclinic instability and baroclinic chaos.

  4. EPOS-WP16: A Platform for European Multi-scale Laboratories

    NASA Astrophysics Data System (ADS)

    Spiers, Chris; Drury, Martyn; Kan-Parker, Mirjam; Lange, Otto; Willingshofer, Ernst; Funiciello, Francesca; Rosenau, Matthias; Scarlato, Piergiorgio; Sagnotti, Leonardo; WP16 Participants

    2016-04-01

    The participant countries in EPOS embody a wide range of world-class laboratory infrastructures, ranging from high-temperature and high-pressure experimental facilities to electron microscopy, micro-beam analysis, analogue modeling and paleomagnetic laboratories. Most data produced by the various laboratory centres and networks are presently available only in limited "final form" in publications. As such, many data remain inaccessible and/or poorly preserved. However, the data produced at the participating laboratories are crucial to serving society's need for geo-resources exploration and for protection against geo-hazards. Indeed, to model resource formation and system behaviour during exploitation, we need an understanding from the molecular to the continental scale, based on experimental data. This contribution describes the plans that the laboratory community in Europe is making in the context of EPOS. The main objectives are: - To collect and harmonize available and emerging laboratory data on the properties and processes controlling rock system behaviour at multiple scales, in order to generate products accessible and interoperable through services that support research activities. - To co-ordinate the development, integration and trans-national usage of the major solid Earth Science laboratory centres and specialist networks. The length scales encompassed by the infrastructures included range from the nano- and micrometer levels (electron microscopy and micro-beam analysis) to the scale of experiments on centimetre-sized samples, and to analogue model experiments simulating the reservoir scale, the basin scale and the plate scale. - To provide products and services supporting research into Geo-resources and Geo-storage, Geo-hazards and Earth System Evolution.

  5. The design of dapog rice seeder model for laboratory scale

    NASA Astrophysics Data System (ADS)

    Purba, UI; Rizaldi, T.; Sumono; Sigalingging, R.

    2018-02-01

    The dapog system seeds rice using a special nursery tray. Rice seeding with the dapog system can produce seedlings in the form of higher-quality, uniform seed rolls. This study aims to reduce the cost of making a large-scale apparatus by designing a small-scale model that can be used for learning in the laboratory. Parameters observed were the uniformity of soil, seeds and fertilizer; the losses of soil, seeds and fertilizer; the effective capacity of the apparatus; and the power requirement. The results showed high uniformity of soil, seed and fertilizer: 92.8%, 1-3 seeds/cm2 and 82%, respectively. The scattered material (losses) for soil, seed and fertilizer were 6.23%, 2.7% and 2.23%, respectively. The effective capacity of the apparatus was 360 boxes/hour with a power requirement of 237.5 kWh.

  6. Cross-flow turbines: progress report on physical and numerical model studies at large laboratory scale

    NASA Astrophysics Data System (ADS)

    Wosnik, Martin; Bachant, Peter

    2016-11-01

    Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference or taking advantage of constructive wake interaction. Experiments were carried out with large laboratory-scale cross-flow turbines (diameter D of order 1 m) using a turbine test bed in a large-cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds-number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797 and Sandia National Laboratories.
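    The Reynolds-independence argument above hinges on the diameter-based Reynolds number; a rough sketch of the estimate (the tow speed used is an assumed illustrative value, not a reported operating point):

```python
def reynolds_number(tow_speed, diameter, nu=1.0e-6):
    """Diameter-based Reynolds number Re_D = U * D / nu.
    nu defaults to water's kinematic viscosity, ~1e-6 m^2/s."""
    return tow_speed * diameter / nu

# A D = 1 m turbine towed at an assumed 1 m/s reaches Re_D ~ 1e6, the
# regime in which performance coefficients tend to become Re-independent.
print(f"{reynolds_number(1.0, 1.0):.1e}")
```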

  7. Computational simulation of laboratory-scale volcanic jets

    NASA Astrophysics Data System (ADS)

    Solovitz, S.; Van Eaton, A. R.; Mastin, L. G.; Herzog, M.

    2017-12-01

    Volcanic eruptions produce ash clouds that may travel great distances, significantly impacting aviation and communities downwind. Atmospheric hazard forecasting relies partly on numerical models of the flow physics, which incorporate data from eruption observations and analogue laboratory tests. As numerical tools continue to increase in complexity, they must be validated to fine-tune their effectiveness. Since eruptions are relatively infrequent and challenging to observe in great detail, analogue experiments can provide important insights into expected behavior over a wide range of input conditions. Unfortunately, laboratory-scale jets cannot easily attain the high Reynolds numbers (~10^9) of natural volcanic eruption columns. Comparisons between the computational models and analogue experiments can help bridge this gap. In this study, we investigate a 3-D volcanic plume model, the Active Tracer High-resolution Atmospheric Model (ATHAM), which has been used to simulate a variety of eruptions. However, it has not been previously validated using laboratory-scale data. We conducted numerical simulations of three flows that we have studied in the laboratory: a vertical jet in a quiescent environment, a vertical jet in horizontal cross flow, and a particle-laden jet. We considered Reynolds numbers from 10,000 to 50,000, jet-to-cross-flow velocity ratios of 2 to 10, and particle mass loadings of up to 25% of the exit mass flow rate. Vertical jet simulations produce Gaussian velocity profiles in the near-exit region by 3 diameters downstream, matching the mean experimental profiles. Simulations of air entrainment are of the correct order of magnitude, but they show decreasing entrainment with vertical distance from the vent. Cross-flow simulations reproduce experimental trajectories for the jet centerline initially, although confinement appears to impact the response later.
Particle-laden simulations display minimal variation in concentration profiles between cases with
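    The Gaussian velocity profiles mentioned above follow the usual self-similar jet form; a minimal sketch (symbols and values are generic jet theory, not ATHAM settings or outputs):

```python
import math

def jet_velocity(r, u_centerline, b):
    """Self-similar Gaussian jet profile: u(r) = u_c * exp(-(r/b)^2),
    where b is the radial scale at which u falls to u_c / e."""
    return u_centerline * math.exp(-(r / b) ** 2)

u_c, b = 10.0, 0.5  # illustrative centerline speed (m/s) and jet width (m)
print(jet_velocity(0.0, u_c, b))  # centerline value, 10.0
print(jet_velocity(b, u_c, b))    # u_c / e at r = b
```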

  8. Laboratory and theoretical models of planetary-scale instabilities and waves

    NASA Technical Reports Server (NTRS)

    Hart, John E.; Toomre, Juri

    1991-01-01

    Meteorologists and planetary astronomers interested in large-scale planetary and solar circulations recognize the importance of rotation and stratification in determining the character of these flows. The two outstanding problems of interest are: (1) the origins and nature of chaos in baroclinically unstable flows; and (2) the physical mechanisms responsible for high speed zonal winds and banding on the giant planets. The methods used to study these problems, and the insights gained, are useful in more general atmospheric and climate dynamic settings. Because the planetary curvature or beta-effect is crucial in the large scale nonlinear dynamics, the motions of rotating convecting liquids in spherical shells were studied using electrohydrodynamic polarization forces to generate radial gravity and centrally directed buoyancy forces in the laboratory. The Geophysical Fluid Flow Cell (GFFC) experiments performed on Spacelab 3 in 1985 were analyzed. The interpretation and extension of these results have led to the construction of efficient numerical models of rotating convection with an aim to understand the possible generation of zonal banding on Jupiter and the fate of banana cells in rapidly rotating convection as the heating is made strongly supercritical. Efforts to pose baroclinic wave experiments for future space missions using a modified version of the 1985 instrument have led us to develop theoretical and numerical models of baroclinic instability. Some surprising properties of both these models were discovered.

  9. Hydrodynamic Scalings: from Astrophysics to Laboratory

    NASA Astrophysics Data System (ADS)

    Ryutov, D. D.; Remington, B. A.

    2000-05-01

    A surprisingly general hydrodynamic similarity has been recently described in Refs. [1,2]. One can call it the Euler similarity because it works for the Euler equations (with MHD effects included). Although the dissipation processes are assumed to be negligible, the presence of shocks is allowed. For a polytropic medium (i.e., a medium where the energy density is proportional to the pressure), the evolution of an arbitrarily chosen 3D initial state can be scaled to another system if a single dimensionless parameter (the Euler number) is the same for both initial states. The Euler similarity allows one to properly design laboratory experiments modeling astrophysical phenomena. We discuss several examples of such experiments related to the physics of supernovae [3]. For problems with a single spatial scale, the condition of the smallness of dissipative processes can be adequately described in terms of the Reynolds, Peclet, and magnetic Reynolds numbers related to this scale (all three numbers must be large). However, if the system develops small-scale turbulence, dissipation may become important at these smaller scales, thereby affecting the gross behavior of the system. We analyze the corresponding constraints. We also discuss constraints imposed by the presence of interfaces between substances with different polytropic indices. Another set of similarities governs the evolution of photoevaporation fronts in astrophysics. Convenient scaling laws exist in situations where the density of the ablated material is very low compared to the bulk density. We conclude that a number of hydrodynamical problems related to such objects as the Eagle Nebula can be adequately simulated in the laboratory. We also discuss possible scalings for radiative astrophysical jets (see Ref. [3] and references therein). This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory, under contract W-7405-Eng-48.
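    The Euler number referred to above is commonly written Eu = v * sqrt(rho / p); a sketch of the similarity check, with purely illustrative numbers (none are from an actual experiment):

```python
import math

def euler_number(v, rho, p):
    """Euler number Eu = v * sqrt(rho / p): two ideal-fluid
    (Euler-equation) systems with geometrically similar initial states
    evolve identically when their Eu values match."""
    return v * math.sqrt(rho / p)

# Illustrative check: rescaling velocity by a and pressure by a**2
# leaves Eu unchanged, so the two evolutions map onto each other.
a = 1.0e6
eu_lab = euler_number(v=1.0e4, rho=1.0, p=1.0e5)
eu_astro = euler_number(v=1.0e4 * a, rho=1.0, p=1.0e5 * a**2)
print(abs(eu_lab - eu_astro) < 1e-9 * eu_lab)  # True
```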

  10. Scaling of Sediment Dynamics in a Reach-Scale Laboratory Model of a Sand-Bed Stream with Riparian Vegetation

    NASA Astrophysics Data System (ADS)

    Gorrick, S.; Rodriguez, J. F.

    2011-12-01

    A movable bed physical model was designed in a laboratory flume to simulate both bed and suspended load transport in a mildly sinuous sand-bed stream. Model simulations investigated the impact of different vegetation arrangements along the outer bank to evaluate rehabilitation options. Preserving similitude in the 1:16 laboratory model was very important. In this presentation the scaling approach, as well as the successes and challenges of the strategy, are outlined. Firstly, a near-bankfull flow event was chosen for laboratory simulation. In nature, bankfull events at the field site deposit new in-channel features but cause only small amounts of bank erosion; thus the fixed banks in the model were not a drastic simplification. Next, and as in other studies, flow velocity and turbulence measurements were collected in separate fixed-bed experiments. The scaling of flow in these experiments was maintained simply by matching the Froude number and roughness levels. The subsequent movable-bed experiments were then conducted under similar hydrodynamic conditions. In nature, the sand-bed stream is fairly typical: in high flows most sediment transport occurs in suspension, and migrating dunes cover the bed. To achieve similar dynamics in the model, equivalent values of the dimensionless bed shear stress and the particle Reynolds number were important. Close values of the two dimensionless numbers were achieved with lightweight sediments (R=0.3), including coal and apricot pips, with a particle size distribution similar to that of the field site. Overall the movable-bed experiments were able to replicate the dominant sediment dynamics present in the stream during a bankfull flow and yielded relevant information for the analysis of the effects of riparian vegetation. There was a potential conflict in the strategy, in that grain roughness was exaggerated with respect to nature. The advantage of this strategy is that although grain roughness is exaggerated, the similarity of
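    Matching the Froude number in an undistorted 1:16 model fixes the other scale ratios; a sketch of the standard hydraulic-model relations (the prototype velocity below is hypothetical, not a value from this study):

```python
def froude_scaled(prototype_value, length_ratio, quantity):
    """Froude-similarity scale factors for an undistorted model built at
    1:length_ratio. Velocity and time scale with sqrt(L), discharge with
    L**2.5 (standard hydraulic-model relations)."""
    exponents = {"length": 1.0, "velocity": 0.5, "time": 0.5, "discharge": 2.5}
    return prototype_value / length_ratio ** exponents[quantity]

# Hypothetical prototype bankfull velocity of 2.0 m/s in a 1:16 model:
print(froude_scaled(2.0, 16, "velocity"))  # 0.5 m/s in the flume
```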

  11. Laboratory Modelling of Volcano Plumbing Systems: a review

    NASA Astrophysics Data System (ADS)

    Galland, Olivier; Holohan, Eoghan P.; van Wyk de Vries, Benjamin; Burchardt, Steffi

    2015-04-01

    Earth scientists have, since the 19th century, tried to replicate or model geological processes in controlled laboratory experiments. In particular, laboratory modelling has been used to study the development of volcanic plumbing systems, which sets the stage for volcanic eruptions. Volcanic plumbing systems involve complex processes that act at length scales of microns to thousands of kilometres and at time scales from milliseconds to billions of years, and laboratory models appear very suitable for addressing them. This contribution reviews laboratory models dedicated to studying the dynamics of volcano plumbing systems (Galland et al., Accepted). The foundation of laboratory models is the choice of relevant model materials, both for rock and magma. We outline a broad range of suitable model materials used in the literature. These materials exhibit very diverse rheological behaviours, so their careful choice is a crucial first step in proper experiment design. The second step is model scaling, which successively calls upon: (1) the principle of dimensional analysis, and (2) the principle of similarity. The dimensional analysis aims to identify the dimensionless physical parameters that govern the underlying processes. The principle of similarity states that "a laboratory model is equivalent to its geological analogue if the dimensionless parameters identified in the dimensional analysis are identical, even if the values of the governing dimensional parameters differ greatly" (Barenblatt, 2003). The application of these two steps ensures a solid understanding and the geological relevance of the laboratory models. In addition, this procedure shows that laboratory models are not designed to exactly mimic a given geological system, but to understand underlying generic processes, either individually or in combination, and to identify or demonstrate physical laws that govern these processes. From this perspective, we review the numerous applications of laboratory models to
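    In practice, the similarity principle quoted above reduces to checking that each governing dimensionless (Pi) group agrees between model and analogue; a minimal sketch (the parameter values are generic placeholders, not from any particular model):

```python
def dynamically_similar(pi_model, pi_nature, rel_tol=0.1):
    """Principle of similarity: a laboratory model is equivalent to its
    geological analogue when every governing dimensionless (Pi) group
    matches, here within a relative tolerance."""
    return all(
        abs(m - n) <= rel_tol * abs(n)
        for m, n in zip(pi_model, pi_nature)
    )

# Generic example: two hypothetical Pi groups agree within 10%,
# so the model is deemed dynamically similar to its analogue.
print(dynamically_similar([1.05, 3.0e-4], [1.00, 3.2e-4]))  # True
```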

  12. Experimental and operational modal analysis of a laboratory scale model of a tripod support structure.

    NASA Astrophysics Data System (ADS)

    Luczak, M. M.; Mucchi, E.; Telega, J.

    2016-09-01

    The goal of the research is to develop a vibration-based procedure for the identification of structural failures in a laboratory scale model of a tripod support structure of an offshore wind turbine. In particular, this paper presents an experimental campaign on the scale model tested in two stages: in the first stage the model tripod structure was tested in air; in the second stage it was tested in water. The tripod model structure allows investigation of the propagation of a representative circumferential crack in a cylindrical upper brace. The in-water test configuration included the tower with a three-bladed rotor. The response of the structure to different wave loads was measured with accelerometers. Experimental and operational modal analysis was applied to identify the dynamic properties of the investigated scale model in the intact and damaged states under different excitations and wave patterns. A comprehensive test matrix allows assessment of the differences in estimated modal parameters due to damage or potentially introduced by nonlinear structural response. The presented technique proves effective for detecting and assessing the presence of representative cracks.
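    A first step in modal identification of this kind is picking resonance peaks out of measured accelerations; a deliberately crude single-channel sketch (a naive O(N^2) DFT on a synthetic 5 Hz mode, illustrating the idea rather than the authors' procedure):

```python
import cmath
import math

def dominant_frequency(signal, fs):
    """Estimate the dominant modal frequency as the magnitude peak of a
    discrete Fourier transform (naive O(N^2) version, DC bin excluded)."""
    n = len(signal)
    best_mag, best_k = -1.0, 0
    for k in range(1, n // 2):
        coeff = sum(
            signal[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n)
        )
        if abs(coeff) > best_mag:
            best_mag, best_k = abs(coeff), k
    return best_k * fs / n

# Synthetic accelerometer record: a single 5 Hz mode sampled at 100 Hz.
fs = 100.0
record = [math.sin(2 * math.pi * 5.0 * j / fs) for j in range(256)]
print(round(dominant_frequency(record, fs), 2))  # ~5.08 (bin-limited)
```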

  13. 10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  14. Numerical modeling of seismic anomalies at impact craters on a laboratory scale

    NASA Astrophysics Data System (ADS)

    Wuennemann, K.; Grosse, C. U.; Hiermaier, S.; Gueldemeister, N.; Moser, D.; Durr, N.

    2011-12-01

    Almost all terrestrial impact craters exhibit a typical geophysical signature. The usually observed circular negative gravity anomaly and reduced seismic velocities in the vicinity of crater structures are presumably related to an approximately hemispherical zone underneath craters where rocks have experienced intense brittle-plastic deformation and fracturing during crater formation (see Fig. 1). In the framework of the "MEMIN" (multidisciplinary experimental and modeling impact crater research network) project, we carried out hypervelocity cratering experiments at the Fraunhofer Institute for High-Speed Dynamics on a decimeter scale to study the spatiotemporal evolution of the damage zone using ultrasound, acoustic emission techniques, and numerical modeling of crater formation. Iron projectiles of 2.5-10 mm were shot at 2-5.5 km/s onto dry and water-saturated sandstone targets. The target material was characterized before, during and after the impact with high-spatial-resolution acoustic techniques to detect the extent of the damage zone and the state of rocks therein, and to record the growth of cracks. The ultrasound measurements are analogous to seismic surveys at natural craters but are applied on a much smaller scale. We compare the measured data with dynamic models of crater formation; shock, plastic and elastic wave propagation; and tensile/shear failure of rocks in the impacted sandstone blocks. The presence of porosity and pore water significantly affects the propagation of waves. In particular, the crushing of pores due to shock compression has to be taken into account. We present preliminary results showing good agreement between the experiments and the numerical models. In a next step we plan to use the numerical models to upscale the results from laboratory dimensions to the scale of natural impact craters.

  15. Achieving across-laboratory replicability in psychophysical scaling

    PubMed Central

    Ward, Lawrence M.; Baumann, Michael; Moffat, Graeme; Roberts, Larry E.; Mori, Shuji; Rutledge-Taylor, Matthew; West, Robert L.

    2015-01-01

    It is well known that, although psychophysical scaling produces good qualitative agreement between experiments, precise quantitative agreement between experimental results, such as that routinely achieved in physics or biology, is rarely or never attained. A particularly galling example of this is the fact that power function exponents for the same psychological continuum, measured in different laboratories but ostensibly using the same scaling method, magnitude estimation, can vary by a factor of three. Constrained scaling (CS), in which observers first learn a standardized meaning for a set of numerical responses relative to a standard sensory continuum and then make magnitude judgments of other sensations using the learned response scale, has produced excellent quantitative agreement between individual observers’ psychophysical functions. Theoretically it could do the same for across-laboratory comparisons, although this needs to be tested directly. We compared nine different experiments from four different laboratories as an example of the level of across experiment and across-laboratory agreement achievable using CS. In general, we found across experiment and across-laboratory agreement using CS to be significantly superior to that typically obtained with conventional magnitude estimation techniques, although some of its potential remains to be realized. PMID:26191019
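    The factor-of-three spread in exponents refers to fits of Stevens' power law, psi = k * phi**n; the exponent is conventionally estimated by least squares in log-log coordinates, as in this sketch (the data are synthetic, with a chosen exponent of 0.6):

```python
import math

def power_law_exponent(stimuli, responses):
    """Fit Stevens' law psi = k * phi**n by ordinary least squares on
    log(psi) vs log(phi); return the exponent n (the log-log slope)."""
    xs = [math.log(s) for s in stimuli]
    ys = [math.log(r) for r in responses]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic observer with exponent 0.6 (a loudness-like continuum).
phi = [1.0, 2.0, 4.0, 8.0, 16.0]
psi = [3.0 * p ** 0.6 for p in phi]
print(round(power_law_exponent(phi, psi), 3))  # 0.6
```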

  16. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William

    2017-05-01

    A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
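    The logic of propagating calibrated-parameter uncertainty through an ensemble can be sketched with a deliberately simple Monte Carlo toy; the linear surrogate, the parameter posteriors, and the direction of the flow-rate dependence below are all illustrative assumptions, not the paper's multiphase reactive-flow model:

```python
import random

def capture_efficiency(q, intercept, slope):
    """Toy surrogate: efficiency falls linearly with gas flow rate q.
    NOT the paper's model; purely for illustrating the workflow."""
    return intercept - slope * q

def prob_meets_target(q, n=20000, target=0.90, seed=1):
    """Monte Carlo: sample (hypothetical) calibrated-parameter posteriors
    and estimate P(efficiency >= target) at flow rate q."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        a = rng.gauss(0.97, 0.01)    # hypothetical posterior, intercept
        b = rng.gauss(0.05, 0.005)   # hypothetical posterior, slope
        if capture_efficiency(q, a, b) >= target:
            hits += 1
    return hits / n

# The design question then becomes: which flow rates q satisfy
# P(efficiency >= 0.90) >= 0.95 under the parameter uncertainty?
print(prob_meets_target(0.5), prob_meets_target(1.5))
```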

  17. EPOS-WP16: A coherent and collaborative network of Solid Earth Multi-scale laboratories

    NASA Astrophysics Data System (ADS)

    Calignano, Elisa; Rosenau, Matthias; Lange, Otto; Spiers, Chris; Willingshofer, Ernst; Drury, Martyn; van Kan-Parker, Mirjam; Elger, Kirsten; Ulbricht, Damian; Funiciello, Francesca; Trippanera, Daniele; Sagnotti, Leonardo; Scarlato, Piergiorgio; Tesei, Telemaco; Winkler, Aldo

    2017-04-01

    Laboratory facilities are an integral part of Earth Science research. The diversity of methods employed in such infrastructures reflects the multi-scale nature of the Earth system and is essential for the understanding of its evolution, for the assessment of geo-hazards and for the sustainable exploitation of geo-resources. In the frame of EPOS (European Plate Observing System), Work Package 16 represents a developing community of European Geoscience multi-scale laboratories. The participant and collaborating institutions (Utrecht University, GFZ, RomaTre University, INGV, NERC, CSIC-ICTJA, CNRS, LMU, C4G-UBI, ETH, CNR*) embody several types of laboratory infrastructures, engaged in different fields of interest of Earth Science: from high-temperature and high-pressure experimental facilities, to electron microscopy, micro-beam analysis, analogue tectonic and geodynamic modelling, and paleomagnetic laboratories. The length scales encompassed by these infrastructures range from the nano- and micrometre levels (electron microscopy and micro-beam analysis) to the scale of experiments on centimetre-sized samples, and to analogue model experiments simulating the reservoir scale, the basin scale and the plate scale. The aim of WP16 is to provide two services by the year 2019: first, virtual access to data from laboratories (data service) and, second, physical access to laboratories (transnational access, TNA). Regarding the development of a data service, the current status is that most data produced by the various laboratory centres and networks are available only in limited "final form" in publications; many data remain inaccessible and/or poorly preserved.
Within EPOS, the TCS Multi-scale laboratories is collecting and harmonizing available and emerging laboratory data on the properties and processes controlling rock system behaviour at all relevant scales, in order to generate products accessible and interoperable through services for supporting

  18. Modelling high Reynolds number wall–turbulence interactions in laboratory experiments using large-scale free-stream turbulence

    PubMed Central

    Dogan, Eda; Hearst, R. Jason

    2017-01-01

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to ‘simulate’ high Reynolds number wall–turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167584

  19. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence.

    PubMed

    Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram

    2017-03-13

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
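    The amplitude-modulation diagnostic discussed above is commonly quantified by correlating the large-scale signal with the envelope of the small-scale signal; a crude stdlib-only sketch on synthetic data (the frequencies, modulation depth, and rectify-and-smooth envelope are illustrative simplifications of the usual Hilbert-transform approach):

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def envelope(sig, half_win):
    """Crude amplitude envelope: rectify, then moving-average over a
    window long against the small scales but short against the large."""
    return [
        sum(abs(v) for v in sig[i - half_win:i + half_win + 1])
        / (2 * half_win + 1)
        for i in range(half_win, len(sig) - half_win)
    ]

# Synthetic signal: 50 Hz "small scales" amplitude-modulated by a 2 Hz
# "large scale" (frequencies chosen purely for illustration).
fs, n = 1000, 2000
t = [i / fs for i in range(n)]
large = [math.sin(2 * math.pi * 2.0 * ti) for ti in t]
small = [(1.0 + 0.5 * l) * math.sin(2 * math.pi * 50.0 * ti)
         for ti, l in zip(t, large)]
half_win = 25
r = pearson(large[half_win:n - half_win], envelope(small, half_win))
print(r > 0.8)  # strong large-scale/envelope correlation -> modulation
```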

  20. 30 CFR 14.21 - Laboratory-scale flame test apparatus.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Laboratory-scale flame test apparatus. 14.21 Section 14.21 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING... Technical Requirements § 14.21 Laboratory-scale flame test apparatus. The principal parts of the apparatus...

  1. 30 CFR 14.21 - Laboratory-scale flame test apparatus.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Laboratory-scale flame test apparatus. 14.21 Section 14.21 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING... Technical Requirements § 14.21 Laboratory-scale flame test apparatus. The principal parts of the apparatus...

  2. A laboratory scale model of abrupt ice-shelf disintegration

    NASA Astrophysics Data System (ADS)

    Macayeal, D. R.; Boghosian, A.; Styron, D. D.; Burton, J. C.; Amundson, J. M.; Cathles, L. M.; Abbot, D. S.

    2010-12-01

An important mode of Earth's disappearing cryosphere is the abrupt disintegration of ice shelves along the Antarctic Peninsula. This disintegration process may be triggered by climate change; however, the work needed to produce the spectacular, explosive results witnessed in the Larsen B and Wilkins ice-shelf events of the last decade comes from the large potential energy release associated with iceberg capsize and fragmentation. To gain further insight into the underlying exchanges of energy involved in massed iceberg movements, we have constructed a laboratory-scale model designed to explore the physical and hydrodynamic interactions between icebergs in a confined channel of water. The experimental apparatus consists of a 2-meter water tank that is 30 cm wide. Within the tank, we introduce fresh water and approximately 20-100 rectangular plastic 'icebergs' having the appropriate density contrast with water to mimic ice. The blocks are initially deployed in a tight pack, with all blocks arranged in a manner to represent the initial state of an integrated ice shelf or ice tongue. The system is allowed to evolve through time under the driving forces associated with iceberg hydrodynamics. Digitized videography is used to quantify how the system of plastic icebergs evolves between states of quiescence and states of mobilization. Initial experiments show that, after a single 'agitator' iceberg begins to capsize, an 'avalanche' of capsizing icebergs ensues which drives horizontal expansion of the massed icebergs across the water surface, and which stimulates other icebergs to capsize. A surprise initially evident in the experiments is the fact that the kinetic energy of the expanding mass of icebergs is only a small fraction of the net potential energy released by the rearrangement of mass via capsize. Approximately 85-90% of the energy released by the system goes into water motion modes, including a pervasive, easily observed seiche mode of the tank

  3. Biodegradation modelling of a dissolved gasoline plume applying independent laboratory and field parameters

    NASA Astrophysics Data System (ADS)

    Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.

    2000-12-01

Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of hundreds of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale, because limited electron-acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field-scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at Canadian Forces Base (CFB) Borden. All input parameters were derived from independent laboratory and field measurements, or taken from the literature, prior to the simulations. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis on the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field-scale degradation, provided all controlling factors are incorporated in the field-scale model. These factors include advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
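The laboratory-derived Monod kinetics with electron-acceptor (oxygen) limitation described in this abstract can be sketched as a coupled substrate-oxygen-biomass system. The following explicit-Euler integration is a minimal illustration; all rate constants are placeholder values, not parameters from BIO3D or the Borden experiment:

```python
def monod_step(S, O, X, dt, mu_max=2.0, Ks=1.0, Ko=0.1, Y=0.5, gamma=3.0):
    """One explicit-Euler step of dual-Monod degradation.
    S: substrate (mg/L), O: dissolved oxygen (mg/L), X: biomass (mg/L).
    Growth is limited by both substrate and electron-acceptor availability."""
    rate = mu_max * X * (S / (Ks + S)) * (O / (Ko + O))  # dual-Monod growth rate
    dS = -rate / Y * dt           # substrate consumed per unit biomass grown
    dO = -gamma * rate / Y * dt   # oxygen demand proportional to substrate use
    dX = rate * dt                # biomass growth
    return max(S + dS, 0.0), max(O + dO, 0.0), X + dX

# Illustrative initial conditions: substrate-rich, oxygen-limited plume water
S, O, X = 10.0, 8.0, 0.1
for _ in range(1000):             # integrate 10 time units with dt = 0.01
    S, O, X = monod_step(S, O, X, dt=0.01)
```

Because oxygen demand is tied to substrate consumption (gamma), the run stalls once oxygen is exhausted, leaving residual substrate — the electron-acceptor limitation that simple zero- or first-order rates miss.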

  4. Infrared thermography applied to the study of heated and solar pavement: from numerical modeling to small scale laboratory experiments

    NASA Astrophysics Data System (ADS)

    Le Touz, N.; Toullier, T.; Dumoulin, J.

    2017-05-01

The present study addresses the thermal behaviour of a modified pavement structure designed to prevent icing at its surface in adverse winter conditions and overheating in hot summer conditions. First, a multi-physics model based on the finite element method was built to predict the evolution of the surface temperature. Subsequently, laboratory experiments on small specimens were carried out, and the surface temperature was monitored by infrared thermography. The results obtained are analyzed, and the performance of the numerical model for real-scale outdoor applications is discussed. Finally, conclusions and perspectives are proposed.
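A one-dimensional transient-conduction sketch illustrates the kind of surface-temperature prediction such a model performs. The material properties, boundary conditions, and heating temperature below are assumed for illustration; the actual study also couples radiation and convection at the surface:

```python
# 1-D transient conduction through a pavement slab, explicit FTCS scheme.
alpha = 1.0e-6        # thermal diffusivity of pavement (m^2/s), illustrative
dx, dt = 0.01, 20.0   # grid spacing (m) and time step (s)
r = alpha * dt / dx ** 2
assert r <= 0.5, "explicit scheme stability limit"

T = [5.0] * 21        # 20 cm deep slab at 5 degC initially; T[0] is the surface
T_heat = 35.0         # heated-fluid temperature imposed at the bottom node
for _ in range(3600): # 3600 steps of 20 s = 20 h of heating
    Tn = T[:]
    for i in range(1, len(T) - 1):
        Tn[i] = T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
    Tn[-1] = T_heat   # Dirichlet condition at the embedded heating layer
    Tn[0] = Tn[1]     # crude adiabatic surface (no convective/radiative loss)
    T = Tn

surface_temp = T[0]   # predicted surface temperature after 20 h
```

With the adiabatic top surface, the slab warms toward the imposed heating temperature; adding the surface heat-loss terms the real model includes would lower the predicted surface temperature.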

  5. LABORATORY SCALE STEAM INJECTION TREATABILITY STUDIES

    EPA Science Inventory

    Laboratory scale steam injection treatability studies were first developed at The University of California-Berkeley. A comparable testing facility has been developed at USEPA's Robert S. Kerr Environmental Research Center. Experience has already shown that many volatile organic...

  6. Trajectory Reconstruction and Uncertainty Analysis Using Mars Science Laboratory Pre-Flight Scale Model Aeroballistic Testing

    NASA Technical Reports Server (NTRS)

    Lugo, Rafael A.; Tolson, Robert H.; Schoenenberger, Mark

    2013-01-01

As part of the Mars Science Laboratory (MSL) trajectory reconstruction effort at NASA Langley Research Center, free-flight aeroballistic experiments with instrumented MSL scale models were conducted at Aberdeen Proving Ground in Maryland. The models carried an inertial measurement unit (IMU) and a flush air data system (FADS) similar to the MSL Entry Atmospheric Data System (MEADS), providing data types similar to those from the MSL entry. Multiple sources of redundant data were available, including tracking radar and on-board magnetometers. These experimental data enabled the testing and validation of the various tools and methodologies that will be used for MSL trajectory reconstruction. The aerodynamic parameters Mach number, angle of attack, and sideslip angle were estimated using minimum variance with a priori to combine the pressure data and pre-flight computational fluid dynamics (CFD) data. Both linear and non-linear pressure model terms were also estimated for each pressure transducer as a measure of the errors introduced by CFD and transducer calibration. Parameter uncertainties were estimated using a "consider parameters" approach.
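The "minimum variance with a priori" estimator mentioned in this abstract blends a prior estimate with a measurement, weighting each by its inverse variance. For a scalar state it reduces to the familiar gain form; this is a hedged single-variable illustration, not the MSL implementation:

```python
def min_variance_update(x_prior, var_prior, z, var_z):
    """Scalar minimum-variance fusion of a prior estimate with a measurement.
    Returns the posterior mean and variance."""
    K = var_prior / (var_prior + var_z)   # optimal (Kalman-like) gain
    x_post = x_prior + K * (z - x_prior)  # blend prior and measurement
    var_post = (1.0 - K) * var_prior      # posterior uncertainty shrinks
    return x_post, var_post

# Hypothetical example: a prior Mach-number estimate (e.g. from pre-flight CFD)
# updated with a pressure-derived measurement of differing accuracy.
x_post, var_post = min_variance_update(x_prior=2.0, var_prior=0.04,
                                       z=2.2, var_z=0.01)
```

Because the measurement here is assumed four times less uncertain than the prior, the posterior lands much closer to the measurement, and the posterior variance is smaller than either input variance.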

  7. Characterization of seismic properties across scales: from the laboratory- to the field scale

    NASA Astrophysics Data System (ADS)

    Grab, Melchior; Quintal, Beatriz; Caspari, Eva; Maurer, Hansruedi; Greenhalgh, Stewart

    2016-04-01

When exploring geothermal systems, the main interest is in the factors controlling the efficiency of the heat exchanger. These include the energy state of the pore fluids and the presence of permeable structures forming part of the fluid transport system. Seismic methods are amongst the most common exploration techniques for imaging the deep subsurface in order to evaluate such a geothermal heat exchanger. They make use of the fact that a seismic wave carries information on the properties of the rocks in the subsurface through which it passes. This enables the derivation of the stiffness and the density of the host rock from the seismic velocities. Moreover, it is well known that seismic waveforms are modulated while propagating through the subsurface by visco-elastic effects due to wave-induced fluid flow, hence delivering information about the fluids in the rock's pore space. To constrain the interpretation of seismic data, that is, to link seismic properties with the fluid state and host-rock permeability, it is common practice to measure the rock properties of small rock specimens in the laboratory under in-situ conditions. However, in magmatic geothermal systems, or in systems situated in the crystalline basement, the host rock is often highly impermeable and fluid transport predominantly takes place in fracture networks, consisting of fractures larger than the rock samples investigated in the laboratory. Therefore, laboratory experiments only provide the properties of relatively intact rock, and an up-scaling procedure is required to characterize the seismic properties of large rock volumes containing fractures and fracture networks and to study the effects of fluids in such fractured rock. We present a technique to parameterize fractured rock volumes as typically encountered in Icelandic magmatic geothermal systems, by combining laboratory experiments with effective-medium calculations. The resulting models can be used to calculate the frequency-dependent bulk

  8. Bioreactor Scalability: Laboratory-Scale Bioreactor Design Influences Performance, Ecology, and Community Physiology in Expanded Granular Sludge Bed Bioreactors

    PubMed Central

    Connelly, Stephanie; Shin, Seung G.; Dillon, Robert J.; Ijaz, Umer Z.; Quince, Christopher; Sloan, William T.; Collins, Gavin

    2017-01-01

Studies investigating the feasibility of new, or improved, biotechnologies, such as wastewater treatment digesters, inevitably start with laboratory-scale trials. However, it is rarely determined whether laboratory-scale results reflect full-scale performance or microbial ecology. The Expanded Granular Sludge Bed (EGSB) bioreactor, which is a high-rate anaerobic digester configuration, was used as a model to address that knowledge gap in this study. Two laboratory-scale idealizations of the EGSB—a one-dimensional and a three-dimensional scale-down of a full-scale design—were built and operated in triplicate under near-identical conditions to a full-scale EGSB. The laboratory-scale bioreactors were seeded using biomass obtained from the full-scale bioreactor, and spent water from the distillation of whisky from maize was applied as substrate at both scales. Over 70 days, bioreactor performance, microbial ecology, and microbial community physiology were monitored at various depths in the sludge-beds using 16S rRNA gene sequencing (V4 region), specific methanogenic activity (SMA) assays, and a range of physical and chemical monitoring methods. SMA assays indicated dominance of the hydrogenotrophic pathway at full-scale, whilst a more balanced activity profile developed during the laboratory-scale trials. At each scale, Methanobacterium was the dominant methanogenic genus present. Bioreactor performance overall was better at laboratory-scale than full-scale. We observed that bioreactor design at laboratory-scale significantly influenced spatial distribution of microbial community physiology and taxonomy in the bioreactor sludge-bed, with 1-D bioreactor types promoting stratification of each. In the 1-D laboratory bioreactors, increased abundance of Firmicutes was associated with both granule position in the sludge bed and increased activity against acetate and ethanol as substrates. We further observed that stratification in the sludge-bed in 1-D laboratory-scale

  9. Building a Laboratory-Scale Biogas Plant and Verifying its Functionality

    NASA Astrophysics Data System (ADS)

    Boleman, Tomáš; Fiala, Jozef; Blinová, Lenka; Gerulová, Kristína

    2011-01-01

The paper deals with the process of building a laboratory-scale biogas plant and verifying its functionality. The laboratory-scale prototype was constructed in the Department of Safety and Environmental Engineering at the Faculty of Materials Science and Technology in Trnava, of the Slovak University of Technology. The Department has already built a solar laboratory to promote and utilise solar energy, and designed the SETUR hydro engine. The biogas plant is the next step in the Department's activities in the field of renewable energy sources and biomass. The Department is also involved in a European Union project whose goal is to upgrade all existing renewable energy sources used in the Department.

  10. Coupled numerical modeling of gas hydrates bearing sediments from laboratory to field-scale conditions

    NASA Astrophysics Data System (ADS)

    Sanchez, M. J.; Santamarina, C.; Gai, X., Sr.; Teymouri, M., Sr.

    2017-12-01

Stability and behavior of Hydrate-Bearing Sediments (HBS) are characterized by the metastable character of the gas hydrate structure, which strongly depends on thermo-hydro-chemo-mechanical (THCM) actions. Hydrate formation, dissociation and methane production from hydrate-bearing sediments are coupled THCM processes that involve, amongst others, exothermic formation and endothermic dissociation of hydrate and ice phases, mixed fluid flow, and large changes in fluid pressure. The analysis of available data from past field and laboratory experiments, and the optimization of future field production studies, require a formal and robust numerical framework able to capture the very complex behavior of this type of soil. A comprehensive fully coupled THCM formulation has been developed and implemented into a finite element code to tackle problems involving gas hydrate-bearing sediments. Special attention is paid to the geomechanical behavior of HBS, and particularly to their response upon hydrate dissociation under loading. The numerical framework has been validated against recent experiments conducted under controlled conditions in the laboratory that challenge the proposed approach and highlight the complex interaction among THCM processes in HBS. The performance of the model in these case studies is highly satisfactory. Finally, the numerical code is applied to analyze the behavior of gas hydrate soils under field-scale conditions, exploring different features of material behavior under possible reservoir conditions.

  11. MHD scaling: from astrophysics to the laboratory

    NASA Astrophysics Data System (ADS)

    Ryutov, Dmitri

    2000-10-01

During the last few years, considerable progress has been made in simulating astrophysical phenomena in laboratory experiments with high-power lasers [1]. Astrophysical phenomena that have drawn particular interest include supernova explosions; young supernova remnants; galactic jets; the formation of fine structures in late supernova remnants by instabilities; and the ablation-driven evolution of molecular clouds illuminated by nearby bright stars, which may affect star formation. A question may arise as to what extent the laser experiments, which deal with targets of a spatial scale of 0.01 cm and occur on a time scale of a few nanoseconds, can reproduce phenomena occurring at spatial scales of a million or more kilometers and time scales from hours to many years. Quite remarkably, if dissipative processes (e.g., viscosity, Joule dissipation) are subdominant in both systems, and the matter behaves as a polytropic gas, there exists a broad hydrodynamic similarity (the "Euler similarity" of Ref. [2]) that allows a direct scaling of laboratory results to astrophysical phenomena. Following a review of relevant earlier work (in particular, [3]-[5]), the details of the Euler similarity are discussed in relation to the presence of shocks and to the special case of a strong drive. After that, constraints stemming from the possible development of small-scale turbulence are analyzed. A generalization of the Euler similarity to the case of a gas with a spatially varying polytropic index is presented. The possibility of scaled simulations of ablation-front dynamics is one more topic covered in this paper. It is shown that, with some additional constraints, a simple similarity exists. This, in particular, opens up the possibility of scaled laboratory simulation of the aforementioned ablation (photoevaporation) fronts. A nonlinear transformation [6] that establishes a duality between implosion and explosion processes is also discussed in the paper. [1] B.A. Remington et
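The Euler similarity referenced above states that two ideal (dissipation-free, polytropic) hydrodynamic systems evolve identically when lengths, densities, and pressures are rescaled so that the invariant v·sqrt(ρ/p) is preserved; times then scale as t2 = t1·(L2/L1)·sqrt((ρ2/ρ1)(p1/p2)). A sketch with illustrative, not paper-specific, numbers shows how nanosecond laboratory times map to astrophysical times:

```python
import math

def euler_time_scale(t_lab, L_lab, rho_lab, p_lab, L_ast, rho_ast, p_ast):
    """Map a laboratory time to the equivalent astrophysical time under the
    Euler similarity: t2 = t1 * (L2 / L1) * sqrt((rho2 / rho1) * (p1 / p2))."""
    return t_lab * (L_ast / L_lab) * math.sqrt((rho_ast / rho_lab) * (p_lab / p_ast))

# Illustrative numbers only: a ~0.01 cm, ~10 ns laser target mapped to a
# parsec-scale remnant in a tenuous medium (SI units throughout).
t_ast = euler_time_scale(
    t_lab=10e-9,                # 10 ns
    L_lab=1e-4,                 # 0.01 cm
    rho_lab=1e3, p_lab=1e11,    # dense, high-pressure laser plasma
    L_ast=3e16,                 # ~1 parsec
    rho_ast=1e-21, p_ast=1e-9,  # tenuous interstellar medium
)
years = t_ast / 3.15e7          # seconds per year ~ 3.15e7
```

With these illustrative inputs, 10 ns in the laboratory corresponds to roughly a thousand years at the astrophysical scale, consistent with the hours-to-years range quoted in the abstract.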

  12. A comparison of refuse attenuation in laboratory and field scale lysimeters.

    PubMed

    Youcai, Zhao; Luochun, Wang; Renhua, Hua; Dimin, Xu; Guowei, Gu

    2002-01-01

For this study, small and middle scale laboratory lysimeters, and a large scale field lysimeter in situ in Shanghai Refuse Landfill, with refuse weights of 187, 600 and 10,800,000 kg, respectively, were created. These lysimeters are compared in terms of leachate quality (pH and concentrations of COD, BOD and NH3-N), refuse composition (biodegradable matter and volatile solids) and surface settlement over a monitoring period of 0-300 days. The objectives of this study were to explore both the similarities and disparities between laboratory and field scale lysimeters, and to compare degradation behaviors of refuse in the intensive reaction phase in the different scale lysimeters. Quantitative relationships of leachate quality and refuse composition with placement time show that degradation behaviors of refuse seem to depend heavily on the scales of the lysimeters and the parameters of concern, especially in the starting period of 0-6 months. However, some similarities exist between laboratory and field lysimeters after 4-6 months of placement, because COD and BOD concentrations in leachate in the field lysimeter decrease regularly in a parallel pattern with those in the laboratory lysimeters. NH3-N, volatile solids (VS) and biodegradable matter (BDM) also gradually decrease in parallel in this intensive reaction phase for all scale lysimeters as refuse ages. Though the specific values differ among the different-scale lysimeters, laboratory lysimeters of sufficient scale may be considered basically applicable for a rough simulation of a real landfill, especially for illustrating the degradation pattern and mechanism. Settlement of the refuse surface is roughly proportional to the initial refuse height.

  13. Numerical modeling of laboratory-scale surface-to-crown fire transition

    NASA Astrophysics Data System (ADS)

    Castle, Drew Clayton

Understanding the conditions leading to the transition of fire spread from a surface fuel to an elevated (crown) fuel is critical to effective fire risk assessment and management. Surface fires that successfully transition to crown fires can be very difficult to suppress, potentially leading to damage in the natural and built environments. This is relevant to chaparral shrublands, which are common throughout parts of the Southwest U.S. and represent a significant part of the wildland-urban interface. The ability of the Wildland-Urban Interface Fire Dynamics Simulator (WFDS) to model surface-to-crown fire transition was evaluated through comparison to laboratory experiments. The WFDS model is being developed by the U.S. Forest Service (USFS) and the National Institute of Standards and Technology. The experiments were conducted at the USFS Forest Fire Laboratory in Riverside, California, and measured the ignition of chamise (Adenostoma fasciculatum) crown fuel held above a surface fire spreading through excelsior fuel. Cases with different crown fuel bulk densities, crown fuel base heights, and imposed wind speeds were considered. Cold-flow simulations yielded wind speed profiles that closely matched the experimental measurements. Next, fire simulations with only the surface fuel were conducted to verify the rate of spread while factors such as substrate properties were varied. Finally, simulations with both a surface fuel and a crown fuel were completed. Examination of specific surface fire characteristics (rate of spread, flame angle, etc.) and the corresponding experimental surface fire behavior provided a basis for comparing the factors most responsible for transition from a surface fire to raised-fuel ignition. The rate of spread was determined by tracking the flame in the Smokeview animations using a tool developed for tracking an actual flame in a video. WFDS simulations produced results in both surface fire spread and raised fuel bed
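The rate of spread obtained by tracking the flame front in the animations is, in essence, the slope of front position against time. A least-squares sketch with synthetic tracking data follows; the frame rate and spread rate are made up for illustration:

```python
def rate_of_spread(times, positions):
    """Least-squares slope of flame-front position vs. time (m/s)."""
    n = len(times)
    mt = sum(times) / n
    mx = sum(positions) / n
    num = sum((t - mt) * (x - mx) for t, x in zip(times, positions))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# Synthetic front positions from 3 s of video frames at 30 fps,
# with an assumed spread rate of 0.05 m/s.
times = [i / 30.0 for i in range(90)]
front = [0.10 + 0.05 * t for t in times]
ros = rate_of_spread(times, front)  # -> 0.05 m/s
```

A least-squares fit over many frames is more robust to per-frame tracking noise than differencing two individual frame positions.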

  14. Design of a laboratory scale fluidized bed reactor

    NASA Astrophysics Data System (ADS)

    Wikström, E.; Andersson, P.; Marklund, S.

    1998-04-01

    The aim of this project was to construct a laboratory scale fluidized bed reactor that simulates the behavior of full scale municipal solid waste combustors. The design of this reactor is thoroughly described. The size of the laboratory scale fluidized bed reactor is 5 kW, which corresponds to a fuel-feeding rate of approximately 1 kg/h. The reactor system consists of four parts: a bed section, a freeboard section, a convector (postcombustion zone), and an air pollution control (APC) device system. The inside diameter of the reactor is 100 mm at the bed section and it widens to 200 mm in diameter in the freeboard section; the total height of the reactor is 1760 mm. The convector part consists of five identical sections; each section is 2700 mm long and has an inside diameter of 44.3 mm. The reactor is flexible regarding the placement and number of sampling ports. At the beginning of the first convector unit and at the end of each unit there are sampling ports for organic micropollutants (OMP). This makes it possible to study the composition of the flue gases at various residence times. Sampling ports for inorganic compounds and particulate matter are also placed in the convector section. All operating parameters, reactor temperatures, concentrations of CO, CO2, O2, SO2, NO, and NO2 are continuously measured and stored at selected intervals for further evaluation. These unique features enable full control over the fuel feed, air flows, and air distribution as well as over the temperature profile. Elaborate details are provided regarding the configuration of the fuel-feeding systems, the fluidized bed, the convector section, and the APC device. This laboratory reactor enables detailed studies of the formation mechanisms of OMP, such as polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs), poly-chlorinated biphenyls (PCBs), and polychlorinated benzenes (PCBzs). With this system formation mechanisms of OMP occurring in both the combustion
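Given the convector geometry quoted above (five sections, each 2700 mm long, with a 44.3 mm inner diameter), a plug-flow gas residence time can be estimated directly. The flue-gas flow rate below is an assumed illustrative value, not one reported for the reactor:

```python
import math

# Convector geometry from the reactor description
n_sections = 5
length = 2.7      # m per section
inner_d = 0.0443  # m inner diameter

area = math.pi * (inner_d / 2) ** 2  # flow cross-section (m^2)
volume = n_sections * length * area  # total convector volume (m^3)

q = 20.0 / 3600.0                    # assumed flue-gas flow: 20 m^3/h -> m^3/s
residence_time = volume / q          # plug-flow residence time (s)
```

Sampling ports at the end of each section then correspond to roughly equal fractions of this total residence time, which is what makes the residence-time-resolved OMP sampling described above possible.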

  15. Cold-Cap Temperature Profile Comparison between the Laboratory and Mathematical Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, Derek R.; Schweiger, Michael J.; Riley, Brian J.

    2015-06-01

The rate of waste vitrification in an electric melter is connected to the feed-to-glass conversion process, which occurs in the cold cap, a layer of reacting feed on top of molten glass. The cold cap consists of two layers: a low-temperature (~100°C – ~800°C) region of unconnected feed and a high-temperature (~800°C – ~1100°C) region of foam with gas bubbles and cavities mixed into the connected glass melt. A recently developed mathematical model describes the effect of the cold cap on glass production. For verification of the mathematical model, a laboratory-scale melter was used to produce a cold cap that could be cross-sectioned and polished in order to determine the temperature profile as a function of position in the cold cap. The cold cap from the laboratory-scale melter exhibited an accumulation of feed at ~400°C due to radiant heat from the molten glass creating dry feed conditions in the melter, which was not the case in the mathematical model, where wet feed conditions were calculated. Through the temperature range from ~500°C to ~1100°C, there was good agreement between the model and the laboratory cold cap. Differences were observed between the two temperature profiles due to the temperature of the glass melts and the lack of secondary foam, large cavities, and shrinkage of the primary foam bubbles upon cooling of the laboratory-scale cold cap.

  16. Modeling Soil Organic Carbon at Regional Scale by Combining Multi-Spectral Images with Laboratory Spectra.

    PubMed

    Peng, Yi; Xiong, Xiong; Adhikari, Kabindra; Knadel, Maria; Grunwald, Sabine; Greve, Mogens Humlekrog

    2015-01-01

    There is a great challenge in combining soil proximal spectra and remote sensing spectra to improve the accuracy of soil organic carbon (SOC) models. This is primarily because mixing of spectral data from different sources and technologies to improve soil models is still in its infancy. The first objective of this study was to integrate information of SOC derived from visible near-infrared reflectance (Vis-NIR) spectra in the laboratory with remote sensing (RS) images to improve predictions of topsoil SOC in the Skjern river catchment, Denmark. The second objective was to improve SOC prediction results by separately modeling uplands and wetlands. A total of 328 topsoil samples were collected and analyzed for SOC. Satellite Pour l'Observation de la Terre (SPOT5), Landsat Data Continuity Mission (Landsat 8) images, laboratory Vis-NIR and other ancillary environmental data including terrain parameters and soil maps were compiled to predict topsoil SOC using Cubist regression and Bayesian kriging. The results showed that the model developed from RS data, ancillary environmental data and laboratory spectral data yielded a lower root mean square error (RMSE) (2.8%) and higher R2 (0.59) than the model developed from only RS data and ancillary environmental data (RMSE: 3.6%, R2: 0.46). Plant-available water (PAW) was the most important predictor for all the models because of its close relationship with soil organic matter content. Moreover, vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI), were very important predictors in SOC spatial models. Furthermore, the 'upland model' was able to more accurately predict SOC compared with the 'upland & wetland model'. However, the separately calibrated 'upland and wetland model' did not improve the prediction accuracy for wetland sites, since it was not possible to adequately discriminate the vegetation in the RS summer images. We conclude that laboratory Vis
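The RMSE and R² metrics used in this abstract to compare the models follow the standard definitions. A self-contained sketch with synthetic SOC values (the numbers are illustrative, not the study's data):

```python
import math

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r_squared(obs, pred):
    """Coefficient of determination, R^2 = 1 - SS_res / SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Synthetic topsoil SOC observations (%) against two hypothetical models
obs     = [2.0, 3.5, 5.0, 6.5, 8.0]
model_a = [2.2, 3.4, 5.1, 6.3, 8.2]  # e.g. RS + ancillary + laboratory spectra
model_b = [3.0, 3.0, 5.5, 5.5, 7.0]  # e.g. RS + ancillary data only

rmse_a, rmse_b = rmse(obs, model_a), rmse(obs, model_b)
r2_a, r2_b = r_squared(obs, model_a), r_squared(obs, model_b)
```

As in the study, the better model is the one with the lower RMSE and the higher R² against the same observations.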

  17. Fracture induced electromagnetic emissions: extending laboratory findings by observations at the geophysical scale

    NASA Astrophysics Data System (ADS)

    Potirakis, Stelios M.; Contoyiannis, Yiannis; Kopanas, John; Kalimeris, Anastasios; Antonopoulos, George; Peratzakis, Athanasios; Eftaxias, Konstantinos; Nomicos, Constantinos

    2014-05-01

Under natural conditions, it is practically impossible to install an experimental network on the geophysical scale using the same instrumentation as in laboratory experiments for understanding, through the states of stress and strain and their time variation, the laws that govern friction during the last stages of EQ generation, or to monitor (much less to control) the principal characteristics of a fracture process. Fracture-induced electromagnetic emissions (EME) in a wide range of frequency bands are sensitive to micro-structural changes. Thus, their study constitutes a nondestructive method for monitoring the evolution of the damage process at the laboratory scale. It has been suggested that fracture-induced MHz-kHz electromagnetic (EM) emissions, which emerge from a few days up to a few hours before the main seismic shock occurrence, permit a real-time monitoring of the damage process during the last stages of earthquake preparation, as happens at the laboratory scale. Since EME are produced both in laboratory-scale fracture and in the EQ preparation process (geophysical-scale fracture), they should present similar characteristics at these two scales. Therefore, both the laboratory experimenting scientists and the experimental scientists studying pre-earthquake EME could benefit from each other's results. Importantly, it is noted that when studying the fracture process by means of laboratory experiments, the fault growth process normally occurs violently in a fraction of a second. However, a major difference between the laboratory and natural processes is the order-of-magnitude difference in scale (in space and time), allowing the possibility of experimental observation at the geophysical scale of a range of physical processes which are not observable at the laboratory scale. Therefore, the study of fracture-induced EME is expected to reveal more information, especially for the last stages of the fracture process, when it

  18. Data Services and Transnational Access for European Geosciences Multi-Scale Laboratories

    NASA Astrophysics Data System (ADS)

    Funiciello, Francesca; Rosenau, Matthias; Sagnotti, Leonardo; Scarlato, Piergiorgio; Tesei, Telemaco; Trippanera, Daniele; Spires, Chris; Drury, Martyn; Kan-Parker, Mirjam; Lange, Otto; Willingshofer, Ernst

    2016-04-01

The EC policy for research in the new millennium supports the development of European-scale research infrastructures. In this perspective, existing research infrastructures are being integrated with the objective of increasing their accessibility and enhancing the usability of their multidisciplinary data. Building up integrated Earth Sciences infrastructures in Europe is the mission of the Implementation Phase (IP) of the European Plate Observing System (EPOS) project (2015-2019). The integration of European multi-scale laboratories - analytical, experimental petrology and volcanology, magnetic and analogue laboratories - plays a key role in this context and represents a specific task of EPOS IP. In the frame of EPOS IP work package 16 (WP16), European geosciences multi-scale laboratories are to be linked, merging local infrastructures into a coherent and collaborative network. In particular, EPOS IP WP16-task 4 "Data services" aims to standardize data and data products, both already existing and newly produced by the participating laboratories, and to make them available through a new digital platform.
The following data and repositories have been selected for the purpose: 1) analytical and properties data a) on volcanic ash from explosive eruptions, of interest to the aviation industry, meteorological and government institutes, b) on magmas in the context of eruption and lava flow hazard evaluation, and c) on rock systems of key importance in mineral exploration and mining operations; 2) experimental data describing: a) rock and fault properties of importance for modelling and forecasting natural and induced subsidence, seismicity and associated hazards, b) rock and fault properties relevant for modelling the containment capacity of rock systems for CO2, energy sources and wastes, c) crustal and upper mantle rheology as needed for modelling sedimentary basin formation and crustal stress distributions, d) the composition, porosity, permeability, and

  19. Modeling Soil Organic Carbon at Regional Scale by Combining Multi-Spectral Images with Laboratory Spectra

    PubMed Central

    Peng, Yi; Xiong, Xiong; Adhikari, Kabindra; Knadel, Maria; Grunwald, Sabine; Greve, Mogens Humlekrog

    2015-01-01

    There is a great challenge in combining soil proximal spectra and remote sensing spectra to improve the accuracy of soil organic carbon (SOC) models. This is primarily because mixing of spectral data from different sources and technologies to improve soil models is still in its infancy. The first objective of this study was to integrate information of SOC derived from visible near-infrared reflectance (Vis-NIR) spectra in the laboratory with remote sensing (RS) images to improve predictions of topsoil SOC in the Skjern river catchment, Denmark. The second objective was to improve SOC prediction results by separately modeling uplands and wetlands. A total of 328 topsoil samples were collected and analyzed for SOC. Satellite Pour l’Observation de la Terre (SPOT5), Landsat Data Continuity Mission (Landsat 8) images, laboratory Vis-NIR and other ancillary environmental data including terrain parameters and soil maps were compiled to predict topsoil SOC using Cubist regression and Bayesian kriging. The results showed that the model developed from RS data, ancillary environmental data and laboratory spectral data yielded a lower root mean square error (RMSE) (2.8%) and higher R2 (0.59) than the model developed from only RS data and ancillary environmental data (RMSE: 3.6%, R2: 0.46). Plant-available water (PAW) was the most important predictor for all the models because of its close relationship with soil organic matter content. Moreover, vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI), were very important predictors in SOC spatial models. Furthermore, the ‘upland model’ was able to more accurately predict SOC compared with the ‘upland & wetland model’. However, the separately calibrated ‘upland and wetland model’ did not improve the prediction accuracy for wetland sites, since it was not possible to adequately discriminate the vegetation in the RS summer images. We conclude that laboratory

  20. Beyond-laboratory-scale prediction for channeling flows through subsurface rock fractures with heterogeneous aperture distributions revealed by laboratory evaluation

    NASA Astrophysics Data System (ADS)

    Ishibashi, Takuya; Watanabe, Noriaki; Hirano, Nobuo; Okamoto, Atsushi; Tsuchiya, Noriyoshi

    2015-01-01

    The present study evaluates aperture distributions and fluid flow characteristics for variously sized laboratory-scale granite fractures under confining stress. As a significant result of the laboratory investigation, the contact area in the fracture plane was found to be virtually independent of scale. By combining this characteristic with the self-affine fractal nature of fracture surfaces, a novel method for predicting fracture aperture distributions beyond laboratory scale is developed. Validity of this method is revealed through reproduction of the results of the laboratory investigation and the maximum aperture-fracture length relations, reported in the literature, for natural fractures. The present study finally predicts conceivable scale dependencies of fluid flows through joints (fractures without shear displacement) and faults (fractures with shear displacement). Both joint and fault aperture distributions are characterized by a scale-independent contact area, a scale-dependent geometric mean, and a scale-independent geometric standard deviation of aperture. The contact areas for joints and faults are approximately 60% and 40%. Changes in the geometric means of joint and fault apertures (µm), e_m,joint and e_m,fault, with fracture length (m), l, are approximated by e_m,joint = 1 × 10^2 l^0.1 and e_m,fault = 1 × 10^3 l^0.7, whereas the geometric standard deviations of both joint and fault apertures are approximately 3. Fluid flows through both joints and faults are characterized by formation of preferential flow paths (i.e., channeling flows) with scale-independent flow areas of approximately 10%, whereas the joint and fault permeabilities (m^2), k_joint and k_fault, are scale dependent and are approximated as k_joint = 1 × 10^-12 l^0.2 and k_fault = 1 × 10^-8 l^1.1.
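    The empirical scaling relations quoted in this record can be evaluated directly. The sketch below simply encodes the reported power-law fits; the function names and the example fracture length are our own choices, not part of the original study.

    ```python
    # Encode the reported empirical fits: geometric-mean aperture e_m (um) and
    # permeability k (m^2) as power laws of fracture length l (m).

    def joint_scaling(length_m: float) -> dict:
        """Scaling for joints (no shear displacement), per the reported fits."""
        return {
            "e_m_um": 1e2 * length_m ** 0.1,   # e_m,joint = 1 x 10^2 * l^0.1
            "k_m2": 1e-12 * length_m ** 0.2,   # k_joint   = 1 x 10^-12 * l^0.2
        }

    def fault_scaling(length_m: float) -> dict:
        """Scaling for faults (with shear displacement), per the reported fits."""
        return {
            "e_m_um": 1e3 * length_m ** 0.7,   # e_m,fault = 1 x 10^3 * l^0.7
            "k_m2": 1e-8 * length_m ** 1.1,    # k_fault   = 1 x 10^-8 * l^1.1
        }

    # Example: compare a 100 m joint and fault. The much stronger length
    # exponents for faults reflect the predicted scale dependence.
    print(joint_scaling(100.0))
    print(fault_scaling(100.0))
    ```

    At l = 1 m the fits reduce to the laboratory-scale reference values (e_m,joint = 100 µm, k_joint = 10⁻¹² m²), which makes the formulas easy to sanity-check.
    
    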

  1. EFFECTS OF LARVAL STOCKING DENSITY ON LABORATORY-SCALE AND COMMERCIAL-SCALE PRODUCTION OF SUMMER FLOUNDER, PARALICHTHYS DENTATUS

    EPA Science Inventory

    Three experiments investigating larval stocking densities of summer flounder from hatch to metamorphosis, Paralichthys dentatus, were conducted at laboratory-scale (75-L aquaria) and at commercial scale (1,000-L tanks). Experiments 1 and 2 at commercial scale tested the densities...

  2. Vacuum packing: a model system for laboratory-scale silage fermentations.

    PubMed

    Johnson, H E; Merry, R J; Davies, D R; Kell, D B; Theodorou, M K; Griffith, G W

    2005-01-01

    To determine the utility of vacuum-packed polythene bags as a convenient, flexible and cost-effective alternative to fixed-volume glass vessels for lab-scale silage studies. Using perennial ryegrass or red clover forage, similar fermentations (as assessed by pH measurement) occurred in glass-tube and vacuum-packed silos over a 35-day period. As vacuum-packing devices allow modification of initial packing density, the effect of four different settings (initial packing densities of 0.397, 0.435, 0.492 and 0.534 g cm⁻³) on the silage fermentation over 16 days was examined. Significant differences in pH decline and lactate accumulation were observed at different vacuum settings. Gas accumulation was apparent within all bags, and changes in bag volume with time were observed to vary according to initial packing density. Vacuum-packed silos do provide a realistic model system for lab-scale silage fermentations. Use of vacuum-packed silos holds potential for lab-scale evaluations of silage fermentations, allowing higher throughput of samples and more consistent packing, as well as the possibility of investigating the effects of different initial packing densities and of different wrapping materials.

  3. Effect of nacelle on wake meandering in a laboratory scale wind turbine using LES

    NASA Astrophysics Data System (ADS)

    Foti, Daniel; Yang, Xiaolei; Guala, Michele; Sotiropoulos, Fotis

    2015-11-01

    Wake meandering, the large-scale motion in wind turbine wakes, has considerable effects on the velocity deficit and turbulence intensity in the turbine wake, from laboratory-scale to utility-scale wind turbines. In the dynamic wake meandering model, wake meandering is assumed to be caused by large-scale atmospheric turbulence. On the other hand, Kang et al. (J. Fluid Mech., 2014) demonstrated that the nacelle geometry has a significant effect on the wake meandering of a hydrokinetic turbine, through the interaction of the inner wake of the nacelle vortex with the outer wake of the tip vortices. In this work, the significance of the nacelle on the wake meandering of a miniature wind turbine previously used in experiments (Howard et al., Phys. Fluids, 2015) is demonstrated with large eddy simulations (LES) using an immersed boundary method with grids fine enough to resolve the turbine's geometric characteristics. The three-dimensionality of the wake meandering is analyzed in detail through turbulent spectra and meander reconstruction. The computed flow fields exhibit wake dynamics similar to those observed in the wind tunnel experiments and are analyzed to shed new light on the role of the energetic nacelle vortex in wake meandering. This work was supported by Department of Energy DOE (DE-EE0002980, DE-EE0005482 and DE-AC04-94AL85000), and Sandia National Laboratories. Computational resources were provided by Sandia National Laboratories and the University of Minnesota Supercomputing.

  4. A laboratory-scale comparison of rate of spread model predictions using chaparral fuel beds – preliminary results

    Treesearch

    D.R. Weise; E. Koo; X. Zhou; S. Mahalingam

    2011-01-01

    Observed fire spread rates from 240 laboratory fires in horizontally-oriented single-species live fuel beds were compared to predictions from various implementations and modifications of the Rothermel rate of spread model and a physical fire spread model developed by Pagni and Koo. Packing ratio of the laboratory fuel beds was generally greater than that observed in...

  5. Validity of thermally-driven small-scale ventilated filling box models

    NASA Astrophysics Data System (ADS)

    Partridge, Jamie L.; Linden, P. F.

    2013-11-01

    The majority of previous work studying building ventilation flows at laboratory scale has used saline plumes in water. The production of buoyancy forces using salinity variations in water allows dynamic similarity between the small-scale models and the full-scale flows. However, in some situations, such as including the effects of non-adiabatic boundaries, the use of a thermal plume is desirable. The efficacy of using temperature differences to produce buoyancy-driven flows representing natural ventilation of a building in a small-scale model is examined here, with comparison between previous theoretical work and new, heat-based experiments.

  6. 12. PHOTOGRAPH OF A PHOTOGRAPH OF A SCALE MODEL OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. PHOTOGRAPH OF A PHOTOGRAPH OF A SCALE MODEL OF THE WASTE CALCINER FACILITY, SHOWING WEST ELEVATION. (THE ORIGINAL MODEL HAS BEEN LOST.) INEEL PHOTO NUMBER 95-903-1-3. - Idaho National Engineering Laboratory, Old Waste Calcining Facility, Scoville, Butte County, ID

  7. Fate of estrone in laboratory-scale constructed wetlands

    USDA-ARS?s Scientific Manuscript database

    A horizontal, subsurface, laboratory-scale constructed wetland (CW) consisting of four cells in series was used to determine the attenuation of the steroid hormone estrone (E1) present in animal wastewater. Liquid swine manure diluted 1:80 with farm pond water and dosed with [14C]E1 flowed through ...

  8. Scaled laboratory experiments explain the kink behaviour of the Crab Nebula jet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C. K.; Tzeferacos, P.; Lamb, D.

    X-ray images from the Chandra X-ray Observatory show that the South-East jet in the Crab nebula changes direction every few years. This remarkable phenomenon is also observed in jets associated with pulsar wind nebulae and other astrophysical objects, and therefore is a fundamental feature of astrophysical jet evolution that needs to be understood. Theoretical modeling and numerical simulations have suggested that this phenomenon may be a consequence of magnetic fields (B) and current-driven magnetohydrodynamic (MHD) instabilities taking place in the jet, but until now there has been no verification of this process in a controlled laboratory environment. Here we report the first such experiments, using scaled laboratory plasma jets generated by high-power lasers to model the Crab jet and monoenergetic-proton radiography to provide direct visualization and measurement of magnetic fields and their behavior. The toroidal magnetic field embedded in the supersonic jet triggered plasma instabilities and resulted in considerable deflections throughout the jet propagation, mimicking the kinks in the Crab jet. We also demonstrated that these kinks are stabilized by high jet velocity, consistent with the observation that instabilities alter the jet orientation but do not disrupt the overall jet structure. We successfully modeled these laboratory experiments with a validated three-dimensional (3D) numerical simulation, which in conjunction with the experiments provide compelling evidence that we have an accurate model of the most important physics of magnetic fields and MHD instabilities in the observed, kinked jet in the Crab nebula. The experiments initiate a novel approach in the laboratory for visualizing fields and instabilities associated with jets observed in various astrophysical objects, ranging from stellar to extragalactic systems. We expect that future work along this line will have important impact on the study and understanding of such fundamental

  9. Scaled laboratory experiments explain the kink behaviour of the Crab Nebula jet

    DOE PAGES

    Li, C. K.; Tzeferacos, P.; Lamb, D.; ...

    2016-10-07

    X-ray images from the Chandra X-ray Observatory show that the South-East jet in the Crab nebula changes direction every few years. This remarkable phenomenon is also observed in jets associated with pulsar wind nebulae and other astrophysical objects, and therefore is a fundamental feature of astrophysical jet evolution that needs to be understood. Theoretical modeling and numerical simulations have suggested that this phenomenon may be a consequence of magnetic fields (B) and current-driven magnetohydrodynamic (MHD) instabilities taking place in the jet, but until now there has been no verification of this process in a controlled laboratory environment. Here we report the first such experiments, using scaled laboratory plasma jets generated by high-power lasers to model the Crab jet and monoenergetic-proton radiography to provide direct visualization and measurement of magnetic fields and their behavior. The toroidal magnetic field embedded in the supersonic jet triggered plasma instabilities and resulted in considerable deflections throughout the jet propagation, mimicking the kinks in the Crab jet. We also demonstrated that these kinks are stabilized by high jet velocity, consistent with the observation that instabilities alter the jet orientation but do not disrupt the overall jet structure. We successfully modeled these laboratory experiments with a validated three-dimensional (3D) numerical simulation, which in conjunction with the experiments provide compelling evidence that we have an accurate model of the most important physics of magnetic fields and MHD instabilities in the observed, kinked jet in the Crab nebula. The experiments initiate a novel approach in the laboratory for visualizing fields and instabilities associated with jets observed in various astrophysical objects, ranging from stellar to extragalactic systems. We expect that future work along this line will have important impact on the study and understanding of such fundamental

  10. Hypersonic Glider Model in Full Scale Tunnel 1957

    NASA Image and Video Library

    1957-09-07

    L57-1439 A model based on Langley's concept of a hypersonic glider was test flown on an umbilical cord inside the Full Scale Tunnel in 1957. Photograph published in Engineer in Charge: A History of the Langley Aeronautical Laboratory, 1917-1958 by James R. Hansen. Page 374.

  11. Numerical Investigation of Earthquake Nucleation on a Laboratory-Scale Heterogeneous Fault with Rate-and-State Friction

    NASA Astrophysics Data System (ADS)

    Higgins, N.; Lapusta, N.

    2014-12-01

    Many large earthquakes on natural faults are preceded by smaller events, often termed foreshocks, that occur close in time and space to the larger event that follows. Understanding the origin of such events is important for understanding earthquake physics. Unique laboratory experiments of earthquake nucleation in a meter-scale slab of granite (McLaskey and Kilgore, 2013; McLaskey et al., 2014) demonstrate that sample-scale nucleation processes are also accompanied by much smaller seismic events. One potential explanation for these foreshocks is that they occur on small asperities - or bumps - on the fault interface, which may also be the locations of smaller critical nucleation size. We explore this possibility through 3D numerical simulations of a heterogeneous 2D fault embedded in a homogeneous elastic half-space, in an attempt to qualitatively reproduce the laboratory observations of foreshocks. In our model, the simulated fault interface is governed by rate-and-state friction with laboratory-relevant frictional properties, fault loading, and fault size. To create favorable locations for foreshocks, the fault surface heterogeneity is represented as patches of increased normal stress, decreased characteristic slip distance L, or both. Our simulation results indicate that one can create a rate-and-state model of the experimental observations. Models with a combination of higher normal stress and lower L at the patches are closest to matching the laboratory observations of foreshocks in moment magnitude, source size, and stress drop. In particular, we find that, when the local compression is increased, foreshocks can occur on patches that are smaller than theoretical critical nucleation size estimates. The additional inclusion of lower L for these patches helps to keep stress drops within the range observed in experiments, and is compatible with the asperity model of foreshock sources, since one would expect more compressed spots to be smoother (and hence have

  12. Predictive modelling of flow in a two-dimensional intermediate-scale, heterogeneous porous media

    USGS Publications Warehouse

    Barth, Gilbert R.; Hill, M.C.; Illangasekare, T.H.; Rajaram, H.

    2000-01-01

    To better understand the role of sedimentary structures in flow through porous media, and to determine how small-scale laboratory-measured values of hydraulic conductivity relate to in situ values, this work deterministically examines flow through simple, artificial structures constructed for a series of intermediate-scale (10 m long), two-dimensional, heterogeneous, laboratory experiments. Nonlinear regression was used to determine optimal values of in situ hydraulic conductivity, which were compared to laboratory-measured values. Despite explicit numerical representation of the heterogeneity, the optimized values were generally greater than the laboratory-measured values. Discrepancies between measured and optimal values varied depending on the sand sieve size, but their contribution to error in the predicted flow was fairly consistent for all sands. Results indicate that, even under these controlled circumstances, laboratory-measured values of hydraulic conductivity need to be applied to models cautiously.

  13. Modeling Small-Scale Nearshore Processes

    NASA Astrophysics Data System (ADS)

    Slinn, D.; Holland, T.; Puleo, J.; Puleo, J.; Hanes, D.

    2001-12-01

    In recent years advances in high performance computing have made it possible to gain new qualitative and quantitative insights into the behavior and effects of coastal processes using high-resolution physical-mathematical models. The Coastal Dynamics program at the U.S. Office of Naval Research under the guidance of Dr. Thomas Kinder has encouraged collaboration between modelers, theoreticians, and field and laboratory experimentalists and supported innovative modeling efforts to examine a wide range of nearshore processes. An area of emphasis has been small-scale, time-dependent, turbulent flows, such as the wave bottom boundary layer, breaking surface waves, and the swash zone and their effects on shoaling waves, mean currents, and sediment transport that integrate to impact the long-term and large-scale response of the beach system to changing environmental conditions. Examples of small-scale modeling studies supported by CD-321 related to our work include simulation of the wave bottom boundary layer. Under mild wave field conditions the seabed forms sand ripples and simulations demonstrate that the ripples cause increases in the bed friction, the kinetic energy dissipation rates, the boundary layer thickness, and turbulence in the water column. Under energetic wave field conditions the ripples are sheared smooth and sheet flow conditions can predominate, causing the top few layers of sand grains to move as a fluidized bed, making large aggregate contributions to sediment transport. Complementary models of aspects of these processes have been developed simultaneously in various directions (e.g., Jenkins and Hanes, JFM 1998; Drake and Calantoni, 2001; Trowbridge and Madsen, JGR, 1984). Insight into near-bed fluid-sediment interactions has also been advanced using Navier-Stokes based models of swash events. Our recent laboratory experiments at the Waterways Experiment Station demonstrate that volume-of-fluid models can predict salient features of swash uprush

  14. Fast laboratory-based micro-computed tomography for pore-scale research: Illustrative experiments and perspectives on the future

    NASA Astrophysics Data System (ADS)

    Bultreys, Tom; Boone, Marijn A.; Boone, Matthieu N.; De Schryver, Thomas; Masschaele, Bert; Van Hoorebeke, Luc; Cnudde, Veerle

    2016-09-01

    Over the past decade, the widespread implementation of laboratory-based X-ray micro-computed tomography (micro-CT) scanners has revolutionized both the experimental and numerical research on pore-scale transport in geological materials. The availability of these scanners has opened up the possibility to image a rock's pore space in 3D almost routinely to many researchers. While challenges do persist in this field, we treat the next frontier in laboratory-based micro-CT scanning: in-situ, time-resolved imaging of dynamic processes. Extremely fast (even sub-second) micro-CT imaging has become possible at synchrotron facilities over the last few years; however, the restricted accessibility of synchrotrons limits the number of experiments which can be performed. The much smaller X-ray flux in laboratory-based systems bounds the time resolution which can be attained at these facilities. Nevertheless, progress is being made to improve the quality of measurements performed on the sub-minute time scale. We illustrate this by presenting cutting-edge pore-scale experiments visualizing two-phase flow and solute transport in real-time with a lab-based environmental micro-CT set-up. To outline the current state of this young field and its relevance to pore-scale transport research, we critically examine its current bottlenecks and their possible solutions, both on the hardware and the software level. Further developments in laboratory-based, time-resolved imaging could prove greatly beneficial to our understanding of transport behavior in geological materials and to the improvement of pore-scale modeling by providing valuable validation.

  15. Non-Fickian dispersive transport of strontium in laboratory-scale columns: Modelling and evaluation

    NASA Astrophysics Data System (ADS)

    Liu, Dongxu; Jivkov, Andrey P.; Wang, Lichun; Si, Gaohua; Yu, Jing

    2017-06-01

    In the context of environmental remediation of contaminated sites and safety assessment of nuclear waste disposal in the near-surface zone, we investigate the leaching and non-Fickian dispersive migration with sorption of strontium (mocking strontium-90) through columns packed with sand and clay. Analysis is based on breakthrough curves (BTCs) from column experiments, which simulated rainfall infiltration and source term release scenario, rather than applying constant tracer solution at the inlet as commonly used. BTCs are re-evaluated and transport parameters are estimated by inverse modelling using two approaches: (1) equilibrium advection-dispersion equation (ADE); and (2) continuous time random walk (CTRW). Firstly, based on a method for calculating leach concentration, the inlet condition with an exponential decay input is identified. Secondly, the results show that approximately 39%-58% of Br- and 16%-49% of Sr2+ are eluted from the columns at the end of the breakthrough experiments. This suggests that trapping mechanisms, including diffusion into immobile zones and attachment of tracer on mineral surfaces, are more pronounced for Sr2+ than for Br-. Thirdly, we demonstrate robustness of CTRW-based truncated power-law (TPL) model in capturing non-Fickian reactive transport with 0 < β < 2, and Fickian transport with β > 2. The non-Fickian dispersion observed experimentally is explained by variations of local flow field from preferential flow paths due to physical heterogeneities. Particularly, the additional sorption process of strontium on clay minerals contributes to the delay of the peak concentration and the tailing features, which leads to an enhanced non-Fickian transport for strontium. Finally, the ADE and CTRW approaches to environmental modelling are evaluated. It is shown that CTRW with a sorption term can describe non-Fickian dispersive transport of strontium at laboratory scale by identifying appropriate parameters, while the traditional ADE with
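    The CTRW picture invoked in this record attributes non-Fickian tailing to a broad distribution of particle waiting times. As a purely illustrative sketch (not the authors' calibrated inverse model), the toy simulation below draws waiting times from a truncated power law ψ(t) ~ t^-(1+β) and shows how, for 0 < β < 2, a few long-trapped particles lag far behind the plume front; all function names and parameter values are our own assumptions.

    ```python
    # Toy CTRW with truncated power-law (TPL) waiting times, illustrating
    # the late-tailing behaviour associated with 0 < beta < 2.
    import random

    def tpl_waiting_time(beta: float, t1: float, t2: float, rng: random.Random) -> float:
        """Inverse-CDF sample from psi(t) ~ t^-(1+beta) truncated to [t1, t2]."""
        u = rng.random()
        return (t1 ** -beta - u * (t1 ** -beta - t2 ** -beta)) ** (-1.0 / beta)

    def particle_positions(n: int, beta: float, t1: float, t2: float,
                           jump: float, t_obs: float, seed: int = 0) -> list:
        """Positions of n particles at time t_obs; each event advances x by `jump`."""
        rng = random.Random(seed)
        xs = []
        for _ in range(n):
            t, x = 0.0, 0.0
            while True:
                t += tpl_waiting_time(beta, t1, t2, rng)
                if t > t_obs:
                    break
                x += jump
            xs.append(x)
        return xs

    # Heavy-tailed case: the wide spread of arrival positions at a fixed
    # observation time mirrors the early-breakthrough/late-tailing shape of
    # non-Fickian breakthrough curves.
    positions = particle_positions(2000, beta=0.8, t1=1e-3, t2=1e3,
                                   jump=1.0, t_obs=10.0)
    ```

    Increasing β toward values above 2 makes the waiting-time distribution thin-tailed, and the spread of positions collapses toward the Fickian (ADE-like) limit, which is the qualitative distinction the record draws between the two regimes.
    
    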

  16. Laboratory meter-scale seismic monitoring of varying water levels in granular media

    NASA Astrophysics Data System (ADS)

    Pasquet, S.; Bodet, L.; Bergamo, P.; Guérin, R.; Martin, R.; Mourgues, R.; Tournat, V.

    2016-12-01

    Laboratory physical modelling and non-contacting ultrasonic techniques are frequently proposed to tackle theoretical and methodological issues related to geophysical prospecting. Following recent developments illustrating the ability of seismic methods to image spatial and/or temporal variations of water content in the vadose zone, we developed laboratory experiments aimed at testing the sensitivity of seismic measurements (i.e., pressure-wave travel times and surface-wave phase velocities) to water saturation variations. Ultrasonic techniques were used to simulate typical seismic acquisitions on small-scale controlled granular media presenting different water levels. Travel times and phase velocity measurements obtained at the dry state were validated with both theoretical models and numerical simulations and serve as reference datasets. The increasing water level clearly affects the recorded wave field in both its phase and amplitude, but the collected data cannot yet be inverted in the absence of a comprehensive theoretical model for such partially saturated and unconsolidated granular media. The differences in travel time and phase velocity observed between the dry and wet models show patterns that are interestingly coincident with the observed water level and depth of the capillary fringe, thus offering attractive perspectives for studying soil water content variations in the field.

  17. LABORATORY-SCALE ANALYSIS OF AQUIFER REMEDIATION BY IN-WELL VAPOR STRIPPING 2. MODELING RESULTS. (R825689C061)

    EPA Science Inventory

    Abstract

    The removal of volatile organic compounds (VOCs) from groundwater through in-well vapor stripping has been demonstrated by Gonen and Gvirtzman (1997, J. Contam. Hydrol., 00: 000-000) at the laboratory scale. The present study compares experimental breakthrough...

  18. SIMILARITY PROPERTIES AND SCALING LAWS OF RADIATION HYDRODYNAMIC FLOWS IN LABORATORY ASTROPHYSICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falize, E.; Bouquet, S.; Michaut, C., E-mail: emeric.falize@cea.fr

    The spectacular recent development of modern high-energy density laboratory facilities which concentrate more and more energy in millimetric volumes allows the astrophysical community to reproduce and to explore, in millimeter-scale targets and during very short times, astrophysical phenomena where radiation and matter are strongly coupled. The astrophysical relevance of these experiments can be checked from the similarity properties and especially scaling law establishment, which constitutes the keystone of laboratory astrophysics. From the radiating optically thin regime to the so-called optically thick radiative pressure regime, we present in this paper, for the first time, a complete analysis of the main radiating regimes that we encountered in laboratory astrophysics with the same formalism based on Lie group theory. The use of the Lie group method appears to be a systematic method which allows us to construct easily and systematically the scaling laws of a given problem. This powerful tool permits us to unify the recent major advances on scaling laws and to identify new similarity concepts that we discuss in this paper, and suggests important applications for present and future laboratory astrophysics experiments. All these results enable us to demonstrate theoretically that astrophysical phenomena in such radiating regimes can be explored experimentally thanks to powerful facilities. Consequently, the results presented here are a fundamental tool for the high-energy density laboratory astrophysics community in order to quantify the astrophysics relevance and justify laser experiments. Moreover, relying on Lie group theory, this paper constitutes the starting point of any analysis of the self-similar dynamics of radiating fluids.

  19. Simulating flow in karst aquifers at laboratory and sub-regional scales using MODFLOW-CFP

    NASA Astrophysics Data System (ADS)

    Gallegos, Josue Jacob; Hu, Bill X.; Davis, Hal

    2013-12-01

    Groundwater flow in a well-developed karst aquifer dominantly occurs through bedding planes, fractures, conduits, and caves created by and/or enlarged by dissolution. Conventional groundwater modeling methods assume that groundwater flow is described by Darcian principles where primary porosity (i.e. matrix porosity) and laminar flow are dominant. However, in well-developed karst aquifers, the assumption of Darcian flow can be questionable. While Darcian flow generally occurs in the matrix portion of the karst aquifer, flow through conduits can be non-laminar, where the relation between specific discharge and hydraulic gradient is non-linear. MODFLOW-CFP is a relatively new modeling program that accounts for non-laminar and laminar flow in pipes, like karst caves, within an aquifer. In this study, results from MODFLOW-CFP are compared to those from MODFLOW-2000/2005, a numerical code based on Darcy's law, to evaluate the accuracy that CFP can achieve when modeling flows in karst aquifers at laboratory and sub-regional (Woodville Karst Plain, Florida, USA) scales. In comparison with laboratory experiments, simulation results from MODFLOW-CFP are more accurate than those from MODFLOW-2005. At the sub-regional scale, MODFLOW-CFP was more accurate than MODFLOW-2000 for simulating field measurements of peak flow at one spring and total discharges at two springs for an observed storm event.

  20. A comparison of relative toxicity rankings by some small-scale laboratory tests

    NASA Technical Reports Server (NTRS)

    Hilado, C. J.; Cumming, H. J.

    1977-01-01

    Small-scale laboratory tests for fire toxicity, suitable for use in the average laboratory hood, are needed for screening and ranking materials on the basis of relative toxicity. The performance of wool, cotton, and aromatic polyamide under several test procedures is presented.

  1. Scale-model charge-transfer technique for measuring enhancement factors

    NASA Technical Reports Server (NTRS)

    Kositsky, J.; Nanevicz, J. E.

    1991-01-01

    Determination of aircraft electric field enhancement factors is crucial when using airborne field mill (ABFM) systems to accurately measure electric fields aloft. SRI used the scale model charge transfer technique to determine enhancement factors of several canonical shapes and a scale model Learjet 36A. The measured values for the canonical shapes agreed with known analytic solutions within about 6 percent. The laboratory determined enhancement factors for the aircraft were compared with those derived from in-flight data gathered by a Learjet 36A outfitted with eight field mills. The values agreed to within experimental error (approx. 15 percent).

  2. Predictive models of lyophilization process for development, scale-up/tech transfer and manufacturing.

    PubMed

    Zhu, Tong; Moussa, Ehab M; Witting, Madeleine; Zhou, Deliang; Sinha, Kushal; Hirth, Mario; Gastens, Martin; Shang, Sherwin; Nere, Nandkishor; Somashekar, Shubha Chetan; Alexeenko, Alina; Jameel, Feroz

    2018-07-01

Scale-up and technology transfer of lyophilization processes remains a challenge that requires thorough characterization of the laboratory- and larger-scale lyophilizers. In this study, computational fluid dynamics (CFD) was employed to develop computer-based models of both laboratory- and manufacturing-scale lyophilizers in order to understand the differences in equipment performance arising from distinct designs. CFD coupled with steady-state heat and mass transfer modeling of the vial was then utilized to study the effects of independent variables, such as shelf temperature and chamber pressure, on response variables, such as product resistance, product temperature, and primary drying time, for a given formulation. The models were then verified experimentally for the different lyophilizers. Additionally, the models were applied to create and evaluate a design space for a lyophilized product in order to provide justification for the flexibility to operate within a certain range of process parameters without the need for validation. Published by Elsevier B.V.
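The steady-state vial model referred to above is commonly written as a sublimation flux driven by the vapor-pressure difference across the dried product layer. The sketch below is a generic quasi-steady estimate of primary drying time, not the authors' CFD-coupled model; the ice vapor pressure, lumped product resistance Rp, and fill mass are illustrative assumptions.

```python
def drying_time_h(m_ice_g, p_ice_torr, p_ch_torr, Rp):
    """Quasi-steady primary drying time for one vial:
    sublimation rate dm/dt = (P_ice - P_chamber) / Rp  [g/h],
    with Rp a lumped product resistance for the vial [Torr*h/g]."""
    rate = (p_ice_torr - p_ch_torr) / Rp
    return m_ice_g / rate

# Illustrative: 3 g of ice, product near -30 C (P_ice ~ 0.286 Torr),
# chamber at 0.1 Torr, Rp = 2 Torr*h/g  ->  roughly 32 h of primary drying
t = drying_time_h(3.0, 0.286, 0.1, 2.0)
```

Because P_ice rises steeply with product temperature, small shelf-temperature differences between laboratory and manufacturing dryers translate into large drying-time differences, which is what the coupled CFD/vial models are used to reconcile.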

  3. Comparing field investigations with laboratory models to predict landfill leachate emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fellner, Johann; Doeberl, Gernot; Allgaier, Gerhard

    2009-06-15

Laboratory reactor experiments and landfill investigations are used to simulate and predict emissions from municipal solid waste landfills. We examined water flow and solute transport through the same waste body at different volumetric scales (laboratory experiment: 0.08 m³; landfill: 80,000 m³), and assessed the differences in water flow and leachate emissions of chloride, total organic carbon and Kjeldahl nitrogen. The results indicate that, due to preferential pathways, the flow of water in field-scale landfills is less uniform than in laboratory reactors. Based on tracer experiments, it can be discerned that in laboratory-scale experiments around 40% of pore water participates in advective solute transport, whereas this fraction amounts to less than 0.2% in the investigated full-scale landfill. Consequences of the difference in water flow and moisture distribution are: (1) leachate emissions from full-scale landfills decrease faster than predicted by laboratory experiments, and (2) the stock of materials remaining in the landfill body, and thus the long-term emission potential, is likely to be underestimated by laboratory landfill simulations.

  4. EPOS Multi-Scale Laboratory platform: a long-term reference tool for experimental Earth Sciences

    NASA Astrophysics Data System (ADS)

    Trippanera, Daniele; Tesei, Telemaco; Funiciello, Francesca; Sagnotti, Leonardo; Scarlato, Piergiorgio; Rosenau, Matthias; Elger, Kirsten; Ulbricht, Damian; Lange, Otto; Calignano, Elisa; Spiers, Chris; Drury, Martin; Willingshofer, Ernst; Winkler, Aldo

    2017-04-01

With continuous progress in scientific research, large volumes of data have been and will continue to be produced. Accessing and sharing these data, and storing and homogenizing them within a unique, coherent framework, is a new challenge for the whole scientific community. This is particularly true for geo-scientific laboratories, which encompass the most diverse Earth Science disciplines and types of data. To this aim, the "Multiscale Laboratories" Work Package (WP16), operating in the framework of the European Plate Observing System (EPOS), is developing a virtual platform of geo-scientific data and services for the worldwide community of laboratories. This long-term project aims at merging top-class multidisciplinary laboratories in Geoscience into a coherent and collaborative network, facilitating the standardization of virtual access to data, data products and software. This will help our community evolve beyond the stage in which most data produced by the different laboratories are available only within the related scholarly publications (often as print versions only) or remain unpublished and inaccessible on local devices. The EPOS multi-scale laboratory platform will make it possible to share and discover data easily by means of open-access, DOI-referenced, online data publication, including long-term storage, management and curation services, and to build a cohesive community of laboratories. WP16 is starting with three pilot-case laboratory communities: (1) rock physics, (2) palaeomagnetism, and (3) analogue modelling. As a proof of concept, the first analogue modelling datasets have been published via GFZ Data Services (http://doidb.wdc-terra.org/search/public/ui?&sort=updated+desc&q=epos). The datasets include rock-analogue material properties (e.g. friction data, rheology data, SEM imagery), as well as supplementary figures, images and movies from experiments on tectonic processes.
A metadata catalogue tailored to the specific communities

  5. Laboratory and pilot-scale bioremediation of pentaerythritol tetranitrate (PETN) contaminated soil.

    PubMed

    Zhuang, Li; Gui, Lai; Gillham, Robert W; Landis, Richard C

    2014-01-15

PETN (pentaerythritol tetranitrate), a munitions constituent, is commonly encountered in munitions-contaminated soils and poses a serious threat to aquatic organisms. This study investigated anaerobic remediation of PETN-contaminated soil at a site near Denver, Colorado. Both granular iron and organic carbon amendments were used in both laboratory and pilot-scale tests. The laboratory results showed that, with various organic carbon amendments, PETN at initial concentrations of between 4500 and 5000 mg/kg was effectively removed within 84 days. In the field trial, after a test period of 446 days, PETN mass removal of up to 53,071 mg/kg (80%) was achieved with an organic carbon amendment (DARAMEND) of 4% by weight. In previous laboratory studies, granular iron had been shown to be highly effective in degrading PETN. However, in both the laboratory and pilot-scale tests reported here, granular iron proved ineffective, a consequence of passivation of the iron surfaces caused by the very high concentrations of nitrate in the contaminated soil. This study indicated that the low concentration of organic carbon was a key factor limiting bioremediation of PETN in the contaminated soil. Furthermore, the addition of organic carbon amendments such as the DARAMEND materials or brewers' grain proved highly effective in stimulating the biodegradation of PETN and could provide the basis for full-scale remediation of PETN-contaminated sites. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Replicating the microbial community and water quality performance of full-scale slow sand filters in laboratory-scale filters.

    PubMed

    Haig, Sarah-Jane; Quince, Christopher; Davies, Robert L; Dorea, Caetano C; Collins, Gavin

    2014-09-15

Previous laboratory-scale studies to characterise the functional microbial ecology of slow sand filters have suffered from methodological limitations that could compromise their relevance to full-scale systems. Therefore, to ascertain whether laboratory-scale slow sand filters (L-SSFs) can replicate the microbial community and water quality production of industrially operated full-scale slow sand filters (I-SSFs), eight cylindrical L-SSFs were constructed and used to treat water from the same source as the I-SSFs. Half of the L-SSF sand beds were composed of sterilized sand (sterile) from the industrial filters, and the other half of sand taken directly from the same industrial filter (non-sterile). All filters were operated for 10 weeks, with the microbial community and water quality parameters sampled and analysed weekly. To characterize the microbial community, phyla-specific qPCR assays and 454 pyrosequencing of the 16S rRNA gene were used in conjunction with an array of statistical techniques. The results demonstrate that it is possible to mimic both the water quality production and the structure of the microbial community of full-scale filters in the laboratory - at all levels of taxonomic classification except OTU - thus allowing comparison of L-SSF experiments with full-scale units. Further, it was found that the sand type composing the filter bed (non-sterile or sterile), the water quality produced, the age of the filters and the depth of sand samples were all significant factors in explaining observed differences in the structure of the microbial consortia. This study is the first, to the authors' knowledge, to demonstrate that scaled-down slow sand filters can accurately reproduce the water quality and microbial consortia of full-scale slow sand filters. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Modeling hexavalent chromium reduction in groundwater in field-scale transport and laboratory batch experiments

    USGS Publications Warehouse

    Friedly, J.C.; Davis, J.A.; Kent, D.B.

    1995-01-01

    A plausible and consistent model is developed to obtain a quantitative description of the gradual disappearance of hexavalent chromium (Cr(VI)) from groundwater in a small-scale field tracer test and in batch kinetic experiments using aquifer sediments under similar chemical conditions. The data exhibit three distinct timescales. Fast reduction occurs in well-stirred batch reactors in times much less than 1 hour and is followed by slow reduction over a timescale of the order of 2 days. In the field, reduction occurs on a timescale of the order of 8 days. The model is based on the following hypotheses. The chemical reduction reaction occurs very fast, and the longer timescales are caused by diffusion resistance. Diffusion into the secondary porosity of grains causes the apparent slow reduction rate in batch experiments. In the model of the field experiments, the reducing agent, heavy Fe(II)-bearing minerals, is heterogeneously distributed in thin strata located between larger nonreducing sand lenses that comprise the bulk of the aquifer solids. It is found that reducing strata of the order of centimeters thick are sufficient to contribute enough diffusion resistance to cause the observed longest timescale in the field. A one-dimensional advection/dispersion model is formulated that describes the major experimental trends. Diffusion rates are estimated in terms of an elementary physical picture of flow through a stratified medium containing identically sized spherical grains. Both reduction and sorption reactions are included. Batch simulation results are sensitive to the fraction of reductant located at or near the surface of grains, which controls the amount of rapid reduction, and the secondary porosity, which controls the rate of slow reduction observed in batch experiments. Results of Cr(VI) transport simulations are sensitive to the thickness and relative size of the reducing stratum. 
Transport simulation results suggest that nearly all of the reductant must be
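The separation of timescales described above follows from the standard diffusion scaling t ≈ L²/D: with a fixed effective diffusivity, millimetre-sized grains equilibrate in hours to days while centimetre-scale strata take days to weeks. The sketch below uses an assumed effective diffusivity and lengths purely for illustration, not values fitted by the authors.

```python
def diffusion_timescale_s(L_m, D_m2_s):
    """Characteristic diffusion time t ~ L**2 / D, in seconds."""
    return L_m ** 2 / D_m2_s

D_eff = 1e-10  # m^2/s, assumed effective diffusivity in the porous solid

# ~4 mm grain with secondary porosity: on the order of days
t_grain_d = diffusion_timescale_s(4e-3, D_eff) / 86400.0
# ~1 cm reducing stratum: on the order of ten days
t_stratum_d = diffusion_timescale_s(1e-2, D_eff) / 86400.0
```

The quadratic dependence on L is why even thin (centimetre-scale) reducing strata can dominate the longest observed timescale in the field.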

  8. Model Scaling of Hydrokinetic Ocean Renewable Energy Systems

    NASA Astrophysics Data System (ADS)

    von Ellenrieder, Karl; Valentine, William

    2013-11-01

    Numerical simulations are performed to validate a non-dimensional dynamic scaling procedure that can be applied to subsurface and deeply moored systems, such as hydrokinetic ocean renewable energy devices. The prototype systems are moored in water 400 m deep and include: subsurface spherical buoys moored in a shear current and excited by waves; an ocean current turbine excited by waves; and a deeply submerged spherical buoy in a shear current excited by strong current fluctuations. The corresponding model systems, which are scaled based on relative water depths of 10 m and 40 m, are also studied. For each case examined, the response of the model system closely matches the scaled response of the corresponding full-sized prototype system. The results suggest that laboratory-scale testing of complete ocean current renewable energy systems moored in a current is possible. This work was supported by the U.S. Southeast National Marine Renewable Energy Center (SNMREC).
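Non-dimensional dynamic scaling of moored marine systems is typically based on Froude similitude; for a geometric ratio λ (model/prototype) the usual ratios are √λ for time and velocity and λ³ for force when the same fluid is used. This generic sketch is not the authors' specific scaling procedure; only the 10 m vs 400 m depth ratio is taken from the abstract.

```python
import math

def froude_scale_factors(lam):
    """Model/prototype ratios under Froude similitude for geometric scale lam,
    assuming the same fluid (density ratio 1) in model and prototype."""
    return {
        "length": lam,
        "time": math.sqrt(lam),
        "velocity": math.sqrt(lam),
        "force": lam ** 3,
    }

s = froude_scale_factors(10.0 / 400.0)  # 10 m model basin vs 400 m prototype depth
# e.g. a 10 s prototype wave period corresponds to 10 * s["time"] ~ 1.6 s
```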

  9. Development of a parallel FE simulator for modeling the whole trans-scale failure process of rock from meso- to engineering-scale

    NASA Astrophysics Data System (ADS)

    Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao

    2017-01-01

Multi-scale, high-resolution modeling of the rock failure process is a powerful means in modern rock mechanics studies to reveal complex failure mechanisms and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation and damage to failure, places high demands on the design, implementation scheme and computational capacity of the numerical software system. This study is aimed at developing a parallel finite element procedure: a parallel rock failure process analysis (RFPA) simulator capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties and to represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of each representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows - Linux interactive platform. A numerical model is built to test the parallel performance of the FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, a field-scale net fracture spacing example, and an engineering-scale rock slope example, respectively. The simulation results indicate that relatively high speedup and computational efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In the laboratory-scale simulation, well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In the field-scale simulation, the formation process of net fracture spacing, from initiation and propagation to saturation, can be revealed completely. In the engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. It is
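The speedup and efficiency reported for the parallel FEM solver follow the standard definitions sketched below; this is a generic illustration, not tied to RFPA's actual benchmark numbers.

```python
def speedup(t_serial, t_parallel):
    """Parallel speedup S = T_1 / T_p (serial wall time over parallel wall time)."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency E = S / p; 1.0 means ideal linear scaling."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical run: 100 s serial, 10 s on 16 processes -> S = 10, E = 0.625
S = speedup(100.0, 10.0)
E = efficiency(100.0, 10.0, 16)
```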

  10. Zero-valent iron/biotic treatment system for perchlorate-contaminated water: lab-scale performance, modeling, and full-scale implications

    EPA Science Inventory

    The computer program AQUASIM was used to model biological treatment of perchlorate-contaminated water using zero-valent iron corrosion as the hydrogen gas source. The laboratory-scale column was seeded with an autohydrogenotrophic microbial consortium previously shown to degrade ...

  11. A professional development model for medical laboratory scientists working in the immunohematology laboratory.

    PubMed

    Garza, Melinda N; Pulido, Lila A; Amerson, Megan; Ali, Faheem A; Greenhill, Brandy A; Griffin, Gary; Alvarez, Enrique; Whatley, Marsha; Hu, Peter C

    2012-01-01

Transfusion Medicine, a section of the Department of Laboratory Medicine at The University of Texas MD Anderson Cancer Center, is committed to the education and advancement of its health care professionals. It is our belief that giving medical laboratory professionals a path for advancement leads to excellence and increases overall professionalism in the Immunohematology Laboratory. As a result of this strong commitment to excellence and professionalism, the Immunohematology Laboratory has instituted a Professional Development Model (PDM) that aims to create Medical Laboratory Scientists (MLS) who are not only more knowledgeable but are continually striving for excellence. In addition, these MLS are poised for advancement in their careers. The professional development model consists of four levels: Discovery, Application, Maturation, and Expert. The model was formulated to serve as a detailed path to the mastery of all processes and methods in the Immunohematology Laboratory. Each level in the professional development model consists of tasks that optimize the laboratory workflow and allow for concurrent training. Completion of a level in the PDM is rewarded with a financial incentive and further advancement in the field. The PDM for Medical Laboratory Scientists in the Immunohematology Laboratory fosters personal development, rewards growth and competency, and sets high standards for all services and skills provided. This model is a vital component of the Immunohematology Laboratory and aims to ensure the highest quality of care and standards in its testing. It is because of the success of this model and the robustness of its content that we hope other medical laboratories will aim to reach the same level of excellence and professionalism and adapt this model to their own environments.

  12. A professional development model for medical laboratory scientists working in the microbiology laboratory.

    PubMed

    Amerson, Megan H; Pulido, Lila; Garza, Melinda N; Ali, Faheem A; Greenhill, Brandy; Einspahr, Christopher L; Yarsa, Joseph; Sood, Pramilla K; Hu, Peter C

    2012-01-01

The University of Texas M.D. Anderson Cancer Center, Division of Pathology and Laboratory Medicine, is committed to providing the best pathology and medicine through state-of-the-art techniques, progressive ground-breaking research, and education and training for the clinical diagnosis and research of cancer and related diseases. After surveying the laboratory staff and other hospital professionals, the Department administrators and Human Resources generalists developed a professional development model for Microbiology to support laboratory skills, behavior, certification, and continual education of its staff. This model sets high standards that allow the laboratory professionals to work to their fullest potential; it organizes the training of technologists around complete laboratory needs, rather than training technologists only in individual areas and retraining them whenever the laboratory needs them to work elsewhere. This model is a working example for all microbiology-based laboratories that want to set high standards and want their staff to be acknowledged for demonstrated excellence and professional development in the laboratory. The PDM is designed to focus on the needs of the laboratory as well as of the laboratory professionals.

  13. Accuracy of finite-difference modeling of seismic waves : Simulation versus laboratory measurements

    NASA Astrophysics Data System (ADS)

    Arntsen, B.

    2017-12-01

The finite-difference technique for numerical modeling of seismic waves remains important and, in some areas, extensively used. For exploration purposes, finite-difference simulation is at the core of both traditional imaging techniques, such as reverse-time migration, and more elaborate full-waveform inversion techniques. The accuracy and fidelity of finite-difference simulation of seismic waves are hard to quantify, and meaningful error analysis is really only readily available for simplistic media. A possible alternative to theoretical error analysis is provided by comparing finite-difference simulated data with laboratory data created using a scale model. The advantage of this approach is the accurate knowledge of the model, within measurement precision, and of the locations of sources and receivers. We use a model made of PVC immersed in water, containing horizontal and tilted interfaces together with several spherical objects, to generate ultrasonic pressure reflection measurements. The physical dimensions of the model are of the order of a meter, which after scaling represents a model with dimensions of the order of 10 kilometers and frequencies in the range of one to thirty hertz. We find that for plane horizontal interfaces the laboratory data can be reproduced by the finite-difference scheme with relatively small error, but for steeply tilted interfaces the error increases. For spherical interfaces the discrepancy between laboratory data and simulated data is sometimes much more severe, to the extent that it is not possible to simulate reflections from parts of highly curved bodies. The results are important in view of the fact that finite-difference modeling is often at the core of imaging and inversion algorithms tackling complicated geological areas with highly curved interfaces.

  14. CORRELATIONS BETWEEN HOMOLOGUE CONCENTRATIONS OF PCDD/FS AND TOXIC EQUIVALENCY VALUES IN LABORATORY-, PACKAGE BOILER-, AND FIELD-SCALE INCINERATORS

    EPA Science Inventory

    The toxic equivalency (TEQ) values of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) are predicted with a model based on the homologue concentrations measured from a laboratory-scale reactor (124 data points), a package boiler (61 data points), and ...
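For context, a toxic equivalency is conventionally computed as the TEF-weighted sum of congener concentrations, TEQ = Σ Cᵢ·TEFᵢ; the abstract's model instead predicts TEQ from homologue totals, since homologue data do not resolve individual congeners. The sketch below shows only the conventional definition, with an illustrative subset of WHO-style TEF values.

```python
# Illustrative subset of WHO-style toxic equivalency factors (TEFs)
TEF = {
    "2,3,7,8-TCDD": 1.0,     # reference congener
    "1,2,3,7,8-PeCDD": 1.0,
    "2,3,7,8-TCDF": 0.1,
    "OCDD": 0.0003,
}

def teq(concentrations):
    """TEQ = sum over congeners of concentration * TEF (units follow the input)."""
    return sum(c * TEF[name] for name, c in concentrations.items())

sample = {"2,3,7,8-TCDD": 0.5, "2,3,7,8-TCDF": 2.0, "OCDD": 100.0}
# 0.5*1.0 + 2.0*0.1 + 100*0.0003 = 0.73 (same units as the inputs)
total = teq(sample)
```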

  15. Measured acoustic characteristics of ducted supersonic jets at different model scales

    NASA Technical Reports Server (NTRS)

    Jones, R. R., III; Ahuja, K. K.; Tam, Christopher K. W.; Abdelwahab, M.

    1993-01-01

A large-scale (about a 25x enlargement) model of the Georgia Tech Research Institute (GTRI) hardware was installed and tested in the Propulsion Systems Laboratory of the NASA Lewis Research Center. Acoustic measurements made in these two facilities are compared, and the similarity in acoustic behavior over the scale range under consideration is highlighted. The study provides acoustic data over a relatively large scale range, which may be used to demonstrate the validity of scaling methods employed in the investigation of this phenomenon.

  16. Fate of Methane Emitted from Dissociating Marine Hydrates: Modeling, Laboratory, and Field Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juanes, Ruben

The overall goals of this research are: (1) to determine the physical fate of single and multiple methane bubbles emitted to the water column by dissociating gas hydrates at seep sites deep within the hydrate stability zone or at the updip limit of gas hydrate stability, and (2) to quantitatively link theoretical and laboratory findings on methane transport to the analysis of real-world field-scale methane plume data placed within the context of the degrading methane hydrate province on the US Atlantic margin. The project is arranged to advance on three interrelated fronts (numerical modeling, laboratory experiments, and analysis of field-based plume data) simultaneously. The fundamental objectives of each component are the following. Numerical modeling: constraining the conditions under which rising bubbles become armored with hydrate, the impact of hydrate armoring on the eventual fate of a bubble's methane, and the role of multiple bubble interactions in the survival of methane plumes to very shallow depths in the water column. Laboratory experiments: exploring the parameter space (e.g., bubble size, gas saturation in the liquid phase, "proximity" to the stability boundary) for formation of a hydrate shell around a free bubble in water, the rise rate of such bubbles, and the bubble's acoustic characteristics at field-scale frequencies. Field component: extending the results of numerical modeling and laboratory experiments to the field scale using brand-new, existing, public-domain, state-of-the-art real-world data on US Atlantic margin methane seeps, without acquiring new field data in the course of this particular project. This component quantitatively analyzes data on Atlantic margin methane plumes and places those plumes and their corresponding seeps within the context of gas hydrate degradation processes on this margin.

  17. Laboratory-Scale Evidence for Lightning-Mediated Gene Transfer in Soil

    PubMed Central

    Demanèche, Sandrine; Bertolla, Franck; Buret, François; Nalin, Renaud; Sailland, Alain; Auriol, Philippe; Vogel, Timothy M.; Simonet, Pascal

    2001-01-01

    Electrical fields and current can permeabilize bacterial membranes, allowing for the penetration of naked DNA. Given that the environment is subjected to regular thunderstorms and lightning discharges that induce enormous electrical perturbations, the possibility of natural electrotransformation of bacteria was investigated. We demonstrated with soil microcosm experiments that the transformation of added bacteria could be increased locally via lightning-mediated current injection. The incorporation of three genes coding for antibiotic resistance (plasmid pBR328) into the Escherichia coli strain DH10B recipient previously added to soil was observed only after the soil had been subjected to laboratory-scale lightning. Laboratory-scale lightning had an electrical field gradient (700 versus 600 kV m−1) and current density (2.5 versus 12.6 kA m−2) similar to those of full-scale lightning. Controls handled identically except for not being subjected to lightning produced no detectable antibiotic-resistant clones. In addition, simulated storm cloud electrical fields (in the absence of current) did not produce detectable clones (transformation detection limit, 10−9). Natural electrotransformation might be a mechanism involved in bacterial evolution. PMID:11472916

  18. High Resolution ground penetrating radar (GPR) measurements at the laboratory scale to model porosity and permeability in the Miami Limestone in South Florida.

    NASA Astrophysics Data System (ADS)

    Mount, G. J.; Comas, X.

    2015-12-01

Subsurface water flow within the Biscayne aquifer is controlled by the heterogeneous distribution of porosity and permeability in the karst Miami Limestone and by the presence of numerous dissolution and mega-porous features. The dissolution features and other high-porosity areas can create preferential flow paths and direct recharge to the aquifer, which may not be accurately conceptualized in groundwater flow models. As hydrologic conditions undergo restoration in the Everglades, understanding the distribution of these high-porosity areas within the subsurface would create a better understanding of subsurface flow. This research utilizes ground penetrating radar (GPR) to estimate the spatial variability of porosity and dielectric permittivity of the Miami Limestone at centimeter-scale resolution at the laboratory scale. High-frequency GPR antennas were used to measure changes in electromagnetic wave velocity through limestone samples under varying volumetric water contents. The Complex Refractive Index Model (CRIM) was then applied in order to estimate the porosity and the dielectric permittivity of the solid phase of the limestone. Porosity estimates from the CRIM ranged from 45.2-66.0% and correspond well with estimates of porosity from analytical and digital image techniques. Dielectric permittivity values of the limestone solid phase ranged between 7.0 and 13.0, similar to values in the literature. This research demonstrates the ability of GPR to identify cm-scale spatial variability of aquifer properties that influence subsurface water flow, which could have implications for groundwater flow models in the Biscayne and potentially other shallow karst aquifers.
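The CRIM referred to above is a volumetric mixing law for the square roots of the phase permittivities; given the bulk permittivity from the measured EM velocity, it can be inverted for porosity. The sketch below assumes κ_w = 80 for water and κ_a = 1 for air, with purely illustrative sample numbers; it is not the authors' fitting procedure.

```python
import math

C = 0.2998  # EM wave speed in free space, m/ns

def bulk_permittivity(v_m_ns):
    """Bulk dielectric permittivity from the measured velocity: v = C / sqrt(kappa)."""
    return (C / v_m_ns) ** 2

def crim_porosity(kappa_b, kappa_s, theta=0.0, kappa_w=80.0, kappa_a=1.0):
    """Invert the CRIM mixing law
    sqrt(kb) = (1 - phi)*sqrt(ks) + theta*sqrt(kw) + (phi - theta)*sqrt(ka)
    for porosity phi, at known volumetric water content theta."""
    num = math.sqrt(kappa_s) - math.sqrt(kappa_b) + theta * (math.sqrt(kappa_w) - math.sqrt(kappa_a))
    den = math.sqrt(kappa_s) - math.sqrt(kappa_a)
    return num / den

# Illustrative dry sample: solid-phase kappa_s = 9, measured v = 0.1499 m/ns
phi = crim_porosity(bulk_permittivity(0.1499), 9.0)  # -> 0.5
```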

  19. Anthropometric measures in cardiovascular disease prediction: comparison of laboratory-based versus non-laboratory-based model.

    PubMed

    Dhana, Klodian; Ikram, M Arfan; Hofman, Albert; Franco, Oscar H; Kavousi, Maryam

    2015-03-01

Body mass index (BMI) has been used to simplify cardiovascular risk prediction models by substituting for total cholesterol and high-density lipoprotein cholesterol. In the elderly, the ability of BMI to predict cardiovascular disease (CVD) declines. We aimed to find the most predictive anthropometric measure for CVD risk, to construct a non-laboratory-based model, and to compare it with a model including laboratory measurements. The study included 2675 women and 1902 men aged 55-79 years from the prospective population-based Rotterdam Study. We used Cox proportional hazards regression analysis to evaluate the association of BMI, waist circumference, waist-to-hip ratio and a body shape index (ABSI) with CVD, including coronary heart disease and stroke. The performance of the laboratory-based and non-laboratory-based models was evaluated by studying discrimination, calibration, correlation and risk agreement. Among men, ABSI was the most informative measure associated with CVD; therefore, ABSI was used to construct the non-laboratory-based model. Discrimination of the non-laboratory-based model was not different from that of the laboratory-based model (c-statistic: 0.680 vs 0.683, p=0.71); both models were well calibrated (15.3% observed CVD risk vs 16.9% and 17.0% predicted CVD risk by the non-laboratory-based and laboratory-based models, respectively), and the Spearman rank correlation and risk agreement between the non-laboratory-based and laboratory-based models were 0.89 and 91.7%, respectively. Among women, none of the anthropometric measures was independently associated with CVD. Among middle-aged and elderly men, where the ability of BMI to predict CVD declines, the non-laboratory-based model, based on ABSI, could predict CVD risk as accurately as the laboratory-based model. Published by the BMJ Publishing Group Limited.
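ABSI, the measure retained for the non-laboratory-based model above, normalizes waist circumference by its allometric expectation from BMI and height (Krakauer & Krakauer, 2012): ABSI = WC / (BMI^(2/3) · height^(1/2)) in SI units. The sketch below uses illustrative values, not Rotterdam Study data.

```python
def absi(waist_m, height_m, weight_kg):
    """A Body Shape Index: ABSI = WC / (BMI**(2/3) * height**(1/2)),
    with waist and height in metres and BMI in kg/m^2."""
    bmi = weight_kg / height_m ** 2
    return waist_m / (bmi ** (2.0 / 3.0) * height_m ** 0.5)

# Illustrative: 0.94 m waist, 1.75 m height, 80 kg -> ABSI of roughly 0.081
value = absi(0.94, 1.75, 80.0)
```

Because waist circumference is divided by the BMI term, ABSI captures central adiposity independently of overall body size, which is why it can remain informative where BMI itself loses predictive power.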

  20. Scale-Up of GRCop: From Laboratory to Rocket Engines

    NASA Technical Reports Server (NTRS)

    Ellis, David L.

    2016-01-01

    GRCop is a high temperature, high thermal conductivity copper-based series of alloys designed primarily for use in regeneratively cooled rocket engine liners. It began with laboratory-level production of a few grams of ribbon produced by chill block melt spinning and has grown to commercial-scale production of large-scale rocket engine liners. Along the way, a variety of methods of consolidating and working the alloy were examined, a database of properties was developed and a variety of commercial and government applications were considered. This talk will briefly address the basic material properties used for selection of compositions to scale up, the methods used to go from simple ribbon to rocket engines, the need to develop a suitable database, and the issues related to getting the alloy into a rocket engine or other application.

  1. The use of laboratory-determined ion exchange parameters in the predictive modelling of field-scale major cation migration in groundwater over a 40-year period.

    PubMed

    Carlyle, Harriet F; Tellam, John H; Parker, Karen E

    2004-01-01

    An attempt has been made to estimate quantitatively cation concentration changes as estuary water invades a Triassic Sandstone aquifer in northwest England. Cation exchange capacities and selectivity coefficients for Na(+), K(+), Ca(2+), and Mg(2+) were measured in the laboratory using standard techniques. Selectivity coefficients were also determined using a method involving optimized back-calculation from flushing experiments, thus permitting better representation of field conditions; in all cases, the Gaines-Thomas/constant cation exchange capacity (CEC) model was found to be a reasonable, though not perfect, first description. The exchange parameters interpreted from the laboratory experiments were used in a one-dimensional reactive transport mixing cell model, and predictions compared with field pumping well data (Cl and hardness spanning a period of around 40 years, and full major ion analyses in approximately 1980). The concentration patterns predicted using Gaines-Thomas exchange with calcite equilibrium were similar to the observed patterns, but the concentrations of the divalent ions were significantly overestimated, as were 1980 sulphate concentrations, and 1980 alkalinity concentrations were underestimated. Including representation of sulphate reduction in the estuarine alluvium failed to replicate 1980 HCO(3) and pH values. However, by including partial CO(2) degassing following sulphate reduction, a process for which there is 34S and 18O evidence from a previous study, a good match for SO(4), HCO(3), and pH was attained. Using this modified estuary water and averaged values from the laboratory ion exchange parameter determinations, good predictions for the field cation data were obtained. 
It is concluded that the Gaines-Thomas/constant exchange capacity model with averaged parameter values can be used successfully in ion exchange predictions in this aquifer at a regional scale and over extended time scales, despite the numerous assumptions inherent in

  2. The use of laboratory-determined ion exchange parameters in the predictive modelling of field-scale major cation migration in groundwater over a 40-year period

    NASA Astrophysics Data System (ADS)

    Carlyle, Harriet F.; Tellam, John H.; Parker, Karen E.

    2004-01-01

    An attempt has been made to estimate quantitatively cation concentration changes as estuary water invades a Triassic Sandstone aquifer in northwest England. Cation exchange capacities and selectivity coefficients for Na+, K+, Ca2+, and Mg2+ were measured in the laboratory using standard techniques. Selectivity coefficients were also determined using a method involving optimized back-calculation from flushing experiments, thus permitting better representation of field conditions; in all cases, the Gaines-Thomas/constant cation exchange capacity (CEC) model was found to be a reasonable, though not perfect, first description. The exchange parameters interpreted from the laboratory experiments were used in a one-dimensional reactive transport mixing cell model, and predictions compared with field pumping well data (Cl and hardness spanning a period of around 40 years, and full major ion analyses in ~1980). The concentration patterns predicted using Gaines-Thomas exchange with calcite equilibrium were similar to the observed patterns, but the concentrations of the divalent ions were significantly overestimated, as were 1980 sulphate concentrations, and 1980 alkalinity concentrations were underestimated. Including representation of sulphate reduction in the estuarine alluvium failed to replicate 1980 HCO3 and pH values. However, by including partial CO2 degassing following sulphate reduction, a process for which there is 34S and 18O evidence from a previous study, a good match for SO4, HCO3, and pH was attained. Using this modified estuary water and averaged values from the laboratory ion exchange parameter determinations, good predictions for the field cation data were obtained. 
It is concluded that the Gaines-Thomas/constant exchange capacity model with averaged parameter values can be used successfully in ion exchange predictions in this aquifer at a regional scale and over extended time scales, despite the numerous assumptions inherent in the approach; this
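The Gaines-Thomas convention used in both versions of this study writes selectivity in terms of equivalent fractions on the exchanger and solution activities. A minimal sketch (the function name and sample values are illustrative, not from the paper) for the binary exchange 2 NaX + Ca2+ <-> CaX2 + 2 Na+:

```python
def gaines_thomas_na_ca(e_na, e_ca, a_na, a_ca):
    """Gaines-Thomas selectivity coefficient for 2 NaX + Ca2+ <-> CaX2 + 2 Na+.

    e_na, e_ca: equivalent fractions of Na and Ca on the exchanger
                (dimensionless, summing to 1 for a binary system).
    a_na, a_ca: solution activities of Na+ and Ca2+ (mol/L basis).
    """
    return (e_ca / e_na**2) * (a_na**2 / a_ca)


# Hypothetical example: half the exchange sites occupied by each cation.
k_gt = gaines_thomas_na_ca(e_na=0.5, e_ca=0.5, a_na=0.01, a_ca=0.005)
```

Under the paper's constant-CEC assumption, a single coefficient of this form (per cation pair) is carried through the one-dimensional mixing-cell transport model.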

  3. Modeling fast and slow earthquakes at various scales

    PubMed Central

    IDE, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138

  4. Modeling fast and slow earthquakes at various scales.

    PubMed

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  5. Validation of laboratory-scale recycling test method of paper PSA label products

    Treesearch

    Carl Houtman; Karen Scallon; Richard Oldack

    2008-01-01

    Starting with test methods and a specification developed by the U.S. Postal Service (USPS) Environmentally Benign Pressure Sensitive Adhesive Postage Stamp Program, a laboratory-scale test method and a specification were developed and validated for pressure-sensitive adhesive labels. By comparing results from this new test method and pilot-scale tests, which have been...

  6. Ensemble urban flood simulation in comparison with laboratory-scale experiments: Impact of interaction models for manhole, sewer pipe, and surface flow

    NASA Astrophysics Data System (ADS)

    Noh, Seong Jin; Lee, Seungsoo; An, Hyunuk; Kawaike, Kenji; Nakagawa, Hajime

    2016-11-01

    An urban flood is an integrated phenomenon that is affected by various uncertainty sources such as input forcing, model parameters, complex geometry, and exchanges of flow among different domains in surfaces and subsurfaces. Despite considerable advances in urban flood modeling techniques, limited knowledge is currently available with regard to the impact of dynamic interaction among different flow domains on urban floods. In this paper, an ensemble method for urban flood modeling is presented to consider the parameter uncertainty of interaction models among a manhole, a sewer pipe, and surface flow. Laboratory-scale experiments on urban flood and inundation are performed under various flow conditions to investigate the parameter uncertainty of interaction models. The results show that ensemble simulation using interaction models based on weir and orifice formulas reproduces experimental data with high accuracy and detects the identifiability of model parameters. Among interaction-related parameters, the parameters of the sewer-manhole interaction show lower uncertainty than those of the sewer-surface interaction. Experimental data obtained under unsteady-state conditions are more informative than those obtained under steady-state conditions to assess the parameter uncertainty of interaction models. Although the optimal parameters vary according to the flow conditions, the difference is marginal. Simulation results also confirm the capability of the interaction models and the potential of the ensemble-based approaches to facilitate urban flood simulation.

  7. Scaling methane oxidation: From laboratory incubation experiments to landfill cover field conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abichou, Tarek, E-mail: abichou@eng.fsu.edu; Mahieu, Koenraad; Chanton, Jeff

    2011-05-15

    Evaluating field-scale methane oxidation in landfill cover soils using numerical models is gaining interest in the solid waste industry as research has made it clear that methane oxidation in the field is a complex function of climatic conditions, soil type, cover design, and incoming flux of landfill gas from the waste mass. Numerical models can account for these parameters as they change with time and space under field conditions. In this study, we developed temperature and water content correction factors for methane oxidation parameters. We also introduced a possible correction to account for the different soil structure under field conditions. These parameters were defined in laboratory incubation experiments performed on homogenized soil specimens and were used to predict the actual methane oxidation rates to be expected under field conditions. Water content and temperature correction factors were obtained for the methane oxidation rate parameter to be used when modeling methane oxidation in the field. To predict in situ measured methane oxidation rates with the model, it was necessary to set the half-saturation constant of methane and oxygen, Km, to 5%, approximately five times larger than laboratory-measured values. We hypothesize that this discrepancy reflects differences in soil structure between homogenized soil conditions in the lab and the actual aggregated soil structure in the field. When all of these correction factors were re-introduced into the oxidation module of our model, it was able to reproduce surface emissions (as measured by static flux chambers) and percent oxidation (as measured by stable isotope techniques) within the range measured in the field.
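The rate law implied by a half-saturation constant for both methane and oxygen is dual-Monod (Michaelis-Menten) kinetics, scaled by the climate correction factors the study develops. A hedged sketch, with all parameter values hypothetical except the ~5% field Km reported above:

```python
def ch4_oxidation_rate(vmax, ch4, o2, km=5.0, ko=5.0, f_temp=1.0, f_water=1.0):
    """Dual-Monod CH4 oxidation rate with correction factors.

    vmax:    maximum oxidation rate (units set by the caller).
    ch4, o2: gas concentrations in %vol.
    km, ko:  half-saturation constants in %vol (the abstract reports
             km ~5% in the field vs ~1% in homogenized lab soil).
    f_temp, f_water: dimensionless temperature and water content
             corrections derived from lab incubations (assumed form).
    """
    return vmax * f_temp * f_water * (ch4 / (km + ch4)) * (o2 / (ko + o2))


# At ch4 == km and abundant O2, the rate is ~vmax/2 (with unit factors).
half_sat_rate = ch4_oxidation_rate(vmax=10.0, ch4=5.0, o2=1e9)
```

The field/lab Km discrepancy then enters the model as a single parameter change rather than a new rate law.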

  8. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    NASA Astrophysics Data System (ADS)

    Nawalany, Marek; Sinicyn, Grzegorz

    2015-09-01

    An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.
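The upscaling of hydraulic conductivity from sample-scale to block-scale discussed above is classically bracketed by the Wiener bounds. A minimal illustrative sketch (function name is not from the paper):

```python
import math


def upscaled_k_bounds(k_vals):
    """Wiener bounds for effective hydraulic conductivity.

    harmonic mean  (flow perpendicular to layers) <= K_eff <=
    arithmetic mean (flow parallel to layers); the geometric mean
    is a common estimate for 2-D isotropic log-normal media.
    k_vals: sample-scale conductivities (e.g. m/day).
    """
    n = len(k_vals)
    harmonic = n / sum(1.0 / k for k in k_vals)
    geometric = math.exp(sum(math.log(k) for k in k_vals) / n)
    arithmetic = sum(k_vals) / n
    return harmonic, geometric, arithmetic


# Two hypothetical sample-scale values spanning two orders of magnitude.
h, g, a = upscaled_k_bounds([1.0, 100.0])
```

Deterministic upscaling procedures of the kind the paper reviews must return a block-scale value lying between these bounds.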

  9. Chlor-Alkali Industry: A Laboratory Scale Approach

    ERIC Educational Resources Information Center

    Sanchez-Sanchez, C. M.; Exposito, E.; Frias-Ferrer, A.; Gonzalez-Garaia, J.; Monthiel, V.; Aldaz, A.

    2004-01-01

    A laboratory experiment for students in the final year of a degree program in chemical engineering, chemistry, or industrial chemistry is presented. It models the chlor-alkali process, one of the most important industrial applications of electrochemical technology and the second-largest industrial consumer of electricity after the aluminium industry.

  10. Potential for improved radiation thermometry measurement uncertainty through implementing a primary scale in an industrial laboratory

    NASA Astrophysics Data System (ADS)

    Willmott, Jon R.; Lowe, David; Broughton, Mick; White, Ben S.; Machin, Graham

    2016-09-01

    A primary temperature scale requires realising a unit in terms of its definition. For high temperature radiation thermometry in terms of the International Temperature Scale of 1990 this means extrapolating from the signal measured at the freezing temperature of gold, silver or copper using Planck’s radiation law. The difficulty in doing this means that primary scales above 1000 °C require specialist equipment and careful characterisation in order to achieve the extrapolation with sufficient accuracy. As such, maintenance of the scale at high temperatures is usually only practicable for National Metrology Institutes, and calibration laboratories have to rely on a scale calibrated against transfer standards. At lower temperatures it is practicable for an industrial calibration laboratory to have its own primary temperature scale, which reduces the number of steps between the primary scale and end user. Proposed changes to the SI that will introduce internationally accepted high temperature reference standards might make it practicable to have a primary high temperature scale in a calibration laboratory. In this study such a scale was established by calibrating radiation thermometers directly to high temperature reference standards. The possible reduction in uncertainty to an end user as a result of the reduced calibration chain was evaluated.
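The extrapolation from a fixed-point signal that makes a primary high-temperature scale difficult can be sketched with the Wien approximation to Planck's law (a simplification: a real scale realisation integrates over the thermometer's spectral band). All names and values below are illustrative:

```python
import math

C2 = 0.014388  # second radiation constant, m*K


def extrapolated_temperature(s_ratio, t_ref, wavelength):
    """Temperature from the ratio of measured signal to fixed-point signal.

    Wien approximation: S(T) ~ exp(-C2 / (wavelength * T)), so
    1/T = 1/t_ref - (wavelength / C2) * ln(s_ratio).

    s_ratio:    S(T) / S(t_ref), dimensionless.
    t_ref:      fixed-point temperature in K (e.g. copper point, 1357.77 K).
    wavelength: working wavelength in m.
    """
    inv_t = 1.0 / t_ref - (wavelength / C2) * math.log(s_ratio)
    return 1.0 / inv_t


# A signal ratio of 1 must reproduce the fixed-point temperature itself.
t_cu = extrapolated_temperature(1.0, 1357.77, 650e-9)
```

The sensitivity of the recovered temperature to small errors in the measured ratio grows with the extrapolation span, which is why scale maintenance above 1000 degC has traditionally been left to National Metrology Institutes.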

  11. Estimation of waste component-specific landfill decay rates using laboratory-scale decomposition data.

    PubMed

    De la Cruz, Florentino B; Barlaz, Morton A

    2010-06-15

    The current methane generation model used by the U.S. EPA (Landfill Gas Emissions Model) treats municipal solid waste (MSW) as a homogeneous waste with one decay rate. However, component-specific decay rates are required to evaluate the effects of changes in waste composition on methane generation. Laboratory-scale rate constants, k_lab, for the major biodegradable MSW components were used to derive field-scale decay rates (k_field) for each waste component using the assumption that the average of the field-scale decay rates for each waste component, weighted by its composition, is equal to the bulk MSW decay rate. For an assumed bulk MSW decay rate of 0.04 yr-1, k_field was estimated to be 0.298, 0.171, 0.015, 0.144, 0.033, 0.02, 0.122, and 0.029 yr-1 for grass, leaves, branches, food waste, newsprint, corrugated containers, coated paper, and office paper, respectively. The effect of landfill waste diversion programs on methane production was explored to illustrate the use of component-specific decay rates. One hundred percent diversion of yard waste and food waste reduced the year-20 methane production rate by 45%. When a landfill gas collection schedule was introduced, collectable methane was most influenced by food waste diversion at years 10 and 20 and by paper diversion at year 40.

  12. The Subsurface Flow and Transport Laboratory: A New Department of Energy User's Facility for Intermediate-Scale Experimentation

    NASA Astrophysics Data System (ADS)

    Wietsma, T. W.; Oostrom, M.; Foster, N. S.

    2003-12-01

    Intermediate-scale experiments (ISEs) for flow and transport are a valuable tool for simulating subsurface features and conditions encountered in the field at government and private sites. ISEs offer the ability to study, under controlled laboratory conditions, complicated processes characteristic of mixed wastes and heterogeneous subsurface environments, in multiple dimensions and at different scales. ISEs may, therefore, result in major cost savings if employed prior to field studies. A distinct advantage of ISEs is that researchers can design physical and/or chemical heterogeneities in the porous media matrix that better approximate natural field conditions and therefore address research questions that contain the additional complexity of processes often encountered in the natural environment. A new Subsurface Flow and Transport Laboratory (SFTL) has been developed for ISE users in the Environmental Spectroscopy & Biogeochemistry Facility in the Environmental Molecular Sciences Laboratory (EMSL) at Pacific Northwest National Laboratory (PNNL). The SFTL offers a variety of columns and flow cells, a new state-of-the-art dual-energy gamma system, a fully automated saturation-pressure apparatus, and analytical equipment for sample processing. The new facility, including qualified staff, is available for scientists interested in collaboration on conducting high-quality flow and transport experiments, including contaminant remediation. Close linkages exist between the SFTL and numerical modelers to aid in experimental design and interpretation. This presentation will discuss the facility and outline the procedures required to submit a proposal to use this unique facility for research purposes. The W. R. Wiley Environmental Molecular Sciences Laboratory, a national scientific user facility, is sponsored by the U.S. Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory.

  13. Countercurrent fixed-bed gasification of biomass at laboratory scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Blasi, C.; Signorelli, G.; Portoricco, G.

    1999-07-01

    A laboratory-scale countercurrent fixed-bed gasification plant has been designed and constructed to produce data for process modeling and to compare the gasification characteristics of several biomasses (beechwood, nutshells, olive husks, and grape residues). The composition of producer gas and spatial temperature profiles have been measured for biomass gasification at different air flow rates. The gas heating value always attains a maximum as a function of this operating variable, associated with a decrease of the air-to-fuel ratio. Optimal gasification conditions of wood and agricultural residues give rise to comparable gas heating values in the range 5-5.5 MJ/Nm3, with 28-30% CO, 5-7% CO2, 6-8% H2, 1-2% CH4, and small amounts of C2 hydrocarbons (apart from nitrogen). However, gasification of agricultural residues is more difficult because of bed transport, partial ash sintering, nonuniform flow distribution, and the presence of a muddy phase in the effluents, so that proper pretreatments are needed for large-scale applications.

  14. Comparative evaluation of laboratory-scale silages using standard glass jar silages or vacuum-packed model silages.

    PubMed

    Hoedtke, Sandra; Zeyner, Annette

    2011-03-30

    The objective of this study was to compare the fermentation variables of laboratory-scale silages made in glass preserving jars (GLASS) and vacuum-packed plastic bags (Rostock model silages, ROMOS). Silages were prepared from perennial ryegrass (fresh and wilted; 151 and 286 g kg-1 dry matter (DM), respectively) and remoistened coarsely ground rye grain (650 g kg-1 DM), either with or without the addition of a lactic acid bacteria inoculant (3x10^5 colony-forming units (cfu) g-1; LAB). Quintuplicate silos were opened on days 2, 4, 8, 49 and 90. Silage pH (P=0.073), acetic acid content (P=0.608) and ethanol content (P=0.223) were not influenced by the ensiling method. The contents of DM (P<0.001) and propionic acid (P=0.008) were affected by the ensiling method, but mean differences were only marginal. In ROMOS the concentration of lactic acid was increased (P=0.007), whereas less butyric acid was produced (P=0.001) compared to GLASS, suggesting slightly better ensiling conditions in ROMOS. ROMOS represents a reasonable alternative to glass jar silages and opens the possibility for further investigations, e.g. studying the impact of packing density as well as the quantitative and qualitative analysis of fermentation gases. Copyright © 2010 Society of Chemical Industry.

  15. Safety in the Chemical Laboratory: Laboratory Air Quality: Part I. A Concentration Model.

    ERIC Educational Resources Information Center

    Butcher, Samuel S.; And Others

    1985-01-01

    Offers a simple model for estimating vapor concentrations in instructional laboratories. Three methods are described for measuring ventilation rates, and the results of measurements in six laboratories are presented. The model should provide a simple screening tool for evaluating worst-case personal exposures. (JN)

  16. Puget Sound Dissolved Oxygen Modeling Study: Development of an Intermediate-Scale Hydrodynamic Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhaoqing; Khangaonkar, Tarang; Labiosa, Rochelle G.

    2010-11-30

    The Washington State Department of Ecology contracted with Pacific Northwest National Laboratory to develop an intermediate-scale hydrodynamic and water quality model to study dissolved oxygen and nutrient dynamics in Puget Sound and to help define potential Puget Sound-wide nutrient management strategies and decisions. Specifically, the project is expected to help determine 1) if current and potential future nitrogen loadings from point and non-point sources are significantly impairing water quality at a large scale and 2) what level of nutrient reductions are necessary to reduce or dominate human impacts to dissolved oxygen levels in the sensitive areas. In this study, an intermediate-scale hydrodynamic model of Puget Sound was developed to simulate the hydrodynamics of Puget Sound and the Northwest Straits for the year 2006. The model was constructed using the unstructured Finite Volume Coastal Ocean Model. The overall model grid resolution within Puget Sound in its present configuration is about 880 m. The model was driven by tides, river inflows, and meteorological forcing (wind and net heat flux) and simulated tidal circulations, temperature, and salinity distributions in Puget Sound. The model was validated against observed data of water surface elevation, velocity, temperature, and salinity at various stations within the study domain. Model validation indicated that the model simulates tidal elevations and currents in Puget Sound well and reproduces the general patterns of the temperature and salinity distributions.

  17. Dynamic modelling of high biomass density cultivation and biohydrogen production in different scales of flat plate photobioreactors.

    PubMed

    Zhang, Dongda; Dechatiwongse, Pongsathorn; Del Rio-Chanona, Ehecatl Antonio; Maitland, Geoffrey C; Hellgardt, Klaus; Vassiliadis, Vassilios S

    2015-12-01

    This paper investigates the scaling-up of cyanobacterial biomass cultivation and biohydrogen production from laboratory to industrial scale. Two main aspects are investigated and presented, which to the best of our knowledge have never been addressed, namely the construction of an accurate dynamic model to simulate cyanobacterial photo-heterotrophic growth and biohydrogen production and the prediction of the maximum biomass and hydrogen production in different scales of photobioreactors. To achieve the current goals, experimental data obtained from a laboratory experimental setup are fitted by a dynamic model. Based on the current model, two key original findings are made in this work. First, it is found that selecting low-chlorophyll mutants is an efficient way to increase both biomass concentration and hydrogen production particularly in a large scale photobioreactor. Second, the current work proposes that the width of industrial scale photobioreactors should not exceed 0.20 m for biomass cultivation and 0.05 m for biohydrogen production, as severe light attenuation can be induced in the reactor beyond this threshold. © 2015 Wiley Periodicals, Inc.
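The width thresholds reported above follow from light attenuation across the flat plate, commonly sketched with a Beer-Lambert law. The function below is an illustrative simplification (the paper's dynamic model is more detailed, and the coefficient values here are made up):

```python
import math


def light_profile(i0, biomass, k_abs, width, n=5):
    """Beer-Lambert light attenuation across a flat-plate photobioreactor.

    I(z) = I0 * exp(-k_abs * X * z), sampled at n depths from the lit
    face (z = 0) to the back wall (z = width).

    i0:      incident irradiance (e.g. umol photons m^-2 s^-1).
    biomass: biomass concentration X (kg m^-3).
    k_abs:   biomass absorption coefficient (m^2 kg^-1, assumed value).
    width:   reactor depth in m.
    """
    return [i0 * math.exp(-k_abs * biomass * z)
            for z in (i * width / (n - 1) for i in range(n))]


# Hypothetical dense culture in a 0.20 m plate: the back of the
# reactor receives essentially no light.
profile = light_profile(i0=100.0, biomass=2.0, k_abs=50.0, width=0.20)
```

At high biomass density the exponential term makes most of a wide reactor dark, which is the "severe light attenuation" argument behind the 0.20 m and 0.05 m design limits.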

  18. Laboratory formation of a scaled protostellar jet by coaligned poloidal magnetic field.

    PubMed

    Albertazzi, B; Ciardi, A; Nakatsutsumi, M; Vinci, T; Béard, J; Bonito, R; Billette, J; Borghesi, M; Burkley, Z; Chen, S N; Cowan, T E; Herrmannsdörfer, T; Higginson, D P; Kroll, F; Pikuz, S A; Naughton, K; Romagnani, L; Riconda, C; Revet, G; Riquier, R; Schlenvoigt, H-P; Skobelev, I Yu; Faenov, A Ya; Soloviev, A; Huarte-Espinosa, M; Frank, A; Portugall, O; Pépin, H; Fuchs, J

    2014-10-17

    Although bipolar jets are seen emerging from a wide variety of astrophysical systems, the issue of their formation and morphology beyond their launching is still under study. Our scaled laboratory experiments, representative of young stellar object outflows, reveal that stable and narrow collimation of the entire flow can result from the presence of a poloidal magnetic field whose strength is consistent with observations. The laboratory plasma becomes focused with an interior cavity. This gives rise to a standing conical shock from which the jet emerges. Following simulations of the process at the full astrophysical scale, we conclude that it can also explain recently discovered x-ray emission features observed in low-density regions at the base of protostellar jets, such as the well-studied jet HH 154. Copyright © 2014, American Association for the Advancement of Science.

  19. Preliminary design, analysis, and costing of a dynamic scale model of the NASA space station

    NASA Technical Reports Server (NTRS)

    Gronet, M. J.; Pinson, E. D.; Voqui, H. L.; Crawley, E. F.; Everman, M. R.

    1987-01-01

    The difficulty of testing the next generation of large flexible space structures on the ground places an emphasis on other means for validating predicted on-orbit dynamic behavior. Scale model technology represents one way of verifying analytical predictions with ground test data. This study investigates the preliminary design, scaling and cost trades for a Space Station dynamic scale model. The scaling of nonlinear joint behavior is studied from theoretical and practical points of view. Suspension system interaction trades are conducted for the ISS Dual Keel Configuration and Build-Up Stages suspended in the proposed NASA/LaRC Large Spacecraft Laboratory. Key issues addressed are scaling laws, replication vs. simulation of components, manufacturing, suspension interactions, joint behavior, damping, articulation capability, and cost. These issues are the subject of parametric trades versus the scale model factor. The results of these detailed analyses are used to recommend scale factors for four different scale model options, each with varying degrees of replication. Potential problems in constructing and testing the scale model are identified, and recommendations for further study are outlined.

  20. CFD analysis of laboratory scale phase equilibrium cell operation

    NASA Astrophysics Data System (ADS)

    Jama, Mohamed Ali; Nikiforow, Kaj; Qureshi, Muhammad Saad; Alopaeus, Ville

    2017-10-01

    For the modeling of multiphase chemical reactors or separation processes, it is essential to predict accurately chemical equilibrium data, such as vapor-liquid or liquid-liquid equilibria [M. Šoóš et al., Chem. Eng. Process.: Process Intensif. 42(4), 273-284 (2003)]. The instruments used in these experiments are typically designed based on previous experience, and their operation is verified against known equilibria of standard components. However, mass transfer limitations with different chemical systems may be very different, potentially falsifying the measured equilibrium compositions. In this work, computational fluid dynamics is utilized to design and analyze a laboratory-scale experimental gas-liquid equilibrium cell for the first time, augmenting the traditional analysis based on the plug flow assumption. A two-phase dilutor cell, used for measuring limiting activity coefficients at infinite dilution, is used as a test case for the analysis. The Lagrangian discrete model is used to track each bubble and to study the residence time distribution of the carrier gas bubbles in the dilutor cell. This analysis is necessary to assess whether the gas leaving the cell is in equilibrium with the liquid, as required in the traditional analysis of such an apparatus. Mass transfer for six different bio-oil compounds is calculated to determine the approach to the equilibrium concentration. Also, residence times assuming plug flow and ideal mixing are used as reference cases to evaluate the influence of mixing on the approach to equilibrium in the dilutor. Results show that the model can be used to predict the dilutor operating conditions for which each of the studied gas-liquid systems reaches equilibrium.

  1. CFD analysis of laboratory scale phase equilibrium cell operation.

    PubMed

    Jama, Mohamed Ali; Nikiforow, Kaj; Qureshi, Muhammad Saad; Alopaeus, Ville

    2017-10-01

    For the modeling of multiphase chemical reactors or separation processes, it is essential to predict accurately chemical equilibrium data, such as vapor-liquid or liquid-liquid equilibria [M. Šoóš et al., Chem. Eng. Process.: Process Intensif. 42(4), 273-284 (2003)]. The instruments used in these experiments are typically designed based on previous experience, and their operation is verified against known equilibria of standard components. However, mass transfer limitations with different chemical systems may be very different, potentially falsifying the measured equilibrium compositions. In this work, computational fluid dynamics is utilized to design and analyze a laboratory-scale experimental gas-liquid equilibrium cell for the first time, augmenting the traditional analysis based on the plug flow assumption. A two-phase dilutor cell, used for measuring limiting activity coefficients at infinite dilution, is used as a test case for the analysis. The Lagrangian discrete model is used to track each bubble and to study the residence time distribution of the carrier gas bubbles in the dilutor cell. This analysis is necessary to assess whether the gas leaving the cell is in equilibrium with the liquid, as required in the traditional analysis of such an apparatus. Mass transfer for six different bio-oil compounds is calculated to determine the approach to the equilibrium concentration. Also, residence times assuming plug flow and ideal mixing are used as reference cases to evaluate the influence of mixing on the approach to equilibrium in the dilutor. Results show that the model can be used to predict the dilutor operating conditions for which each of the studied gas-liquid systems reaches equilibrium.
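The two reference residence-time models named in this record (plug flow vs ideal mixing) give closed-form expressions for the fraction of gas-phase saturation reached in the cell. A hedged sketch, parameterised by an assumed number of transfer units N = kLa * tau:

```python
import math


def approach_to_equilibrium(n_units):
    """Fraction of gas-phase saturation reached in a dilutor cell.

    n_units: number of transfer units N = k_L a * tau (assumed lumped
             mass-transfer parameter times gas residence time).

    Returns (plug, mixed):
      plug  = 1 - exp(-N)   for bubbles in plug flow,
      mixed = N / (1 + N)   for an ideally mixed gas phase.
    """
    plug = 1.0 - math.exp(-n_units)
    mixed = n_units / (1.0 + n_units)
    return plug, mixed


# For the same N, plug flow always approaches equilibrium more closely,
# so a bubble RTD between the two extremes lands between these curves.
plug, mixed = approach_to_equilibrium(3.0)
```

The CFD residence-time distribution described above essentially tells you where between these two limiting curves the real cell operates.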

  2. Multi-scale image segmentation and numerical modeling in carbonate rocks

    NASA Astrophysics Data System (ADS)

    Alves, G. C.; Vanorio, T.

    2016-12-01

    Numerical methods based on computational simulations can be an important tool in estimating physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield conflicting results with respect to the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave-equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by larger grain/micrite ratio, results show that SEM scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular- porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be more suited for numerical simulations.

  3. Note: Measurement system for the radiative forcing of greenhouse gases in a laboratory scale.

    PubMed

    Kawamura, Yoshiyuki

    2016-01-01

    The radiative forcing of greenhouse gases has been studied on the basis of computational simulations or meteorological observations of the real atmosphere. In order to understand the greenhouse effect more deeply and to study it from various viewpoints, laboratory-scale study is important. We have developed a direct measurement system for the infrared back radiation from carbon dioxide (CO2) gas. The system configuration is similar to that of the practical earth-atmosphere-space system. Using this system, the back radiation from the CO2 gas was directly measured at laboratory scale, and it roughly coincides with the meteorologically predicted value.
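For orientation on the magnitudes such a back-radiation measurement deals with, a gray-body estimate via the Stefan-Boltzmann law is useful. The emissivity value below is a hypothetical placeholder, not a figure from the note:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def back_radiation(t_kelvin: float, emissivity: float = 1.0) -> float:
    """Gray-body irradiance M = eps * sigma * T^4, in W/m^2."""
    return emissivity * SIGMA * t_kelvin ** 4

blackbody_300k = back_radiation(300.0)              # ~459 W/m^2 upper bound
co2_layer = back_radiation(300.0, emissivity=0.2)   # hypothetical gas emissivity
```

A CO2 column radiates only in its absorption bands, so its effective emissivity is well below 1; the measured back radiation is accordingly a modest fraction of the blackbody figure.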

  4. Large-scale functional models of visual cortex for remote sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, creating massive opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.
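The "~1 petapixel per year" figure can be sanity-checked with back-of-envelope arithmetic; the fiber count and firing rate below are order-of-magnitude assumptions, not numbers from the report:

```python
# Rough plausibility check of the retina's annual data delivery:
# ~1e6 optic-nerve fibers per eye, each carrying on the order of
# 30 samples per second (both figures are order-of-magnitude guesses).
fibers = 1.0e6
rate_hz = 30.0
seconds_per_year = 365.25 * 24 * 3600  # ~3.156e7 s

samples_per_year = fibers * rate_hz * seconds_per_year
# ~9.5e14 samples, i.e. on the order of 1 petapixel per year
```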

  5. Interpreting DNAPL saturations in a laboratory-scale injection using one- and two-dimensional modeling of GPR Data

    USGS Publications Warehouse

    Johnson, R.H.; Poeter, E.P.

    2005-01-01

    Ground-penetrating radar (GPR) is used to track a dense non-aqueous phase liquid (DNAPL) injection in a laboratory sand tank. Before modeling, the GPR data provide a qualitative image of DNAPL saturation and movement. One-dimensional (1D) GPR modeling provides a quantitative interpretation of DNAPL volume within a given thickness during and after the injection. DNAPL saturation in sublayers of a specified thickness could not be quantified because calibration of the 1D GPR model is nonunique when both permittivity and depth of multiple layers are unknown. One-dimensional GPR modeling of the sand tank indicates geometric interferences in a small portion of the tank. These influences are removed from the interpretation using an alternate matching target. Two-dimensional (2D) GPR modeling provides a qualitative interpretation of the DNAPL distribution through pattern matching and tests for possible 2D influences that are not accounted for in the 1D GPR modeling. Accurate quantitative interpretation of DNAPL volumes using GPR modeling requires (1) identification of a suitable target that produces a strong reflection and is not subject to any geometric interference; (2) knowledge of the exact depth of that target; and (3) use of two-way radar-wave travel times through the medium to the target to determine the permittivity of the intervening material, which eliminates reliance on signal amplitude. With geologic conditions that are suitable for GPR surveys (i.e., shallow depths, low electrical conductivities, and a known reflective target), the procedures in this laboratory study can be adapted to a field site to delineate shallow DNAPL source zones.
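Point (3) of the quantitative procedure, inferring permittivity from two-way travel time to a target of known depth, reduces to a one-line relation. A sketch with invented example numbers:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def relative_permittivity(two_way_time_s: float, depth_m: float) -> float:
    """Infer the relative permittivity of the material above a reflector
    at known depth: radar velocity v = 2d/t, and eps_r = (c/v)^2."""
    velocity = 2.0 * depth_m / two_way_time_s
    return (C / velocity) ** 2

# Hypothetical example: reflective target 0.50 m deep, 10 ns two-way time.
eps_r = relative_permittivity(10e-9, 0.50)  # ~9.0
```

Because this uses travel time rather than signal amplitude, it is insensitive to the attenuation effects that complicate amplitude-based interpretation, which is the point the abstract makes.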

  6. The total laboratory solution: a new laboratory E-business model based on a vertical laboratory meta-network.

    PubMed

    Friedman, B A

    2001-08-01

    Major forces are now reshaping all businesses on a global basis, including the healthcare and clinical laboratory industries. One of the major forces at work is information technology (IT), which now provides the opportunity to create a new economic and business model for the clinical laboratory industry based on the creation of an integrated vertical meta-network, referred to here as the "total laboratory solution" (TLS). Participants at the most basic level of such a network would include a hospital-based laboratory, a reference laboratory, a laboratory information system/application service provider/laboratory portal vendor, an in vitro diagnostic manufacturer, and a pharmaceutical/biotechnology manufacturer. It is suggested that each of these participants would add value to the network primarily in its area of core competency. Subvariants of such a network have evolved over recent years, but a TLS comprising all or most of these participants does not exist at this time. Although the TLS, enabled by IT and closely akin to the various e-businesses that are now taking shape, offers many advantages from a theoretical perspective over the current laboratory business model, its success will depend largely on (a) market forces, (b) how the collaborative networks are organized and managed, and (c) whether the network can offer healthcare organizations higher quality testing services at lower cost. If the concept is successful, new demands will be placed on hospital-based laboratory professionals to shift the range of professional services that they offer toward clinical consulting, integration of laboratory information from multiple sources, and laboratory information management. These information management and integration tasks can only increase in complexity in the future as new genomic and proteomics testing modalities are developed and come on-line in clinical laboratories.

  7. Improving laboratory efficiencies to scale-up HIV viral load testing.

    PubMed

    Alemnji, George; Onyebujoh, Philip; Nkengasong, John N

    2017-03-01

    Viral load measurement is a key indicator that determines patients' response to treatment and risk for disease progression. Efforts are ongoing in different countries to scale-up access to viral load testing to meet the Joint United Nations Programme on HIV and AIDS target of achieving 90% viral suppression among HIV-infected patients receiving antiretroviral therapy. However, the impact of these initiatives may be challenged by increased inefficiencies along the viral load testing spectrum. This will translate to increased costs and ineffectiveness of scale-up approaches. This review describes different parameters that could be addressed across the viral load testing spectrum aimed at improving efficiencies and utilizing test results for patient management. Though progress is being made in some countries to scale-up viral load, many others still face numerous challenges that may affect scale-up efficiencies: weak demand creation; ineffective supply chain management systems; poor specimen referral systems; inadequate data and quality management systems; and a weak laboratory-clinical interface leading to diminished uptake of test results. In scaling up access to viral load testing, there should be a renewed focus to address efficiencies across the entire spectrum, including factors related to access, uptake, and impact of test results.

  8. Formulation and development of tablets based on Ludipress and scale-up from laboratory to production scale.

    PubMed

    Heinz, R; Wolf, H; Schuchmann, H; End, L; Kolter, K

    2000-05-01

    In spite of the wealth of experience available in the pharmaceutical industry, tablet formulations are still largely developed on an empirical basis, and the scale-up from laboratory to production is a time-consuming and costly process. Using Ludipress greatly simplifies formulation development and the manufacturing process because only the active ingredient, Ludipress, and a lubricant need to be mixed briefly before being compressed into tablets. The studies described here were designed to investigate the scale-up of Ludipress-based formulations from laboratory to production scale, and to predict changes in tablet properties due to changes in format, compaction pressure, and the use of different tablet presses. It was found that the tensile strength of tablets made of Ludipress increased linearly with compaction pressure up to 300 MPa. It was also independent of the geometry of the tablets (diameter, thickness, shape). It is therefore possible to give an equation with which the compaction pressure required to achieve a given hardness can be calculated for a given tablet form. The equation has to be modified slightly to convert from a single-punch press to a rotary tableting machine. Tablets produced in the rotary machine at the same pressure have a slightly higher tensile strength. The rate of increase in pressure, and therefore the throughput, has no effect on the tensile strength of Ludipress tablets. It is thought that a certain minimum dwell time is responsible for this difference. The production of tablets based on Ludipress can be scaled up from one rotary press to another without problems if the powder mixtures are prepared with the same mixing energy. The tensile strength curve determined for tablets made with Ludipress alone can also be applied to tablets with a small quantity (< 10%) of an active ingredient.
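The reported linear strength-pressure relation makes the "required compaction pressure for a target hardness" calculation a one-parameter fit and inversion. The calibration data below are invented for illustration; only the linear form is taken from the abstract:

```python
def fit_slope_through_origin(pressures, strengths):
    """Least-squares slope k for the linear model sigma = k * P,
    reflecting the reported linearity up to ~300 MPa."""
    num = sum(p * s for p, s in zip(pressures, strengths))
    den = sum(p * p for p in pressures)
    return num / den

def pressure_for_strength(k, target_strength):
    """Invert sigma = k * P to get the compaction pressure needed."""
    return target_strength / k

# Hypothetical calibration points: compaction pressure (MPa) -> tensile strength (MPa).
P = [50, 100, 150, 200, 250]
S = [0.5, 1.0, 1.5, 2.0, 2.5]
k = fit_slope_through_origin(P, S)     # 0.01 MPa strength per MPa pressure
P_req = pressure_for_strength(k, 2.2)  # 220 MPa for a 2.2 MPa target
```

Per the abstract, converting to a rotary press would require a slight adjustment of the fitted relation (rotary tablets come out somewhat stronger at the same pressure).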

  9. MODELING HEXAVALENT CHROMIUM REDUCTION IN GROUND- WATER IN FIELD-SCALE TRANSPORT AND LABORATORY BATCH EXPERIMENTS

    EPA Science Inventory

    A plausible and consistent model is developed to obtain a quantitative description of the gradual disappearance of hexavalent chromium (Cr(VI)) from groundwater in a small-scale field tracer test and in batch kinetic experiments using aquifer sediments under similar chemical cond...

  10. Full-scale and laboratory-scale anaerobic treatment of citric acid production wastewater.

    PubMed

    Colleran, E; Pender, S; Philpott, U; O'Flaherty, V; Leahy, B

    1998-01-01

    This paper reviews the operation of a full-scale, fixed-bed digester treating a citric acid production wastewater with a COD:sulphate ratio of 3-4:1. Support matrix pieces were removed from the digester at intervals during the first 5 years of operation in order to quantify the vertical distribution of biomass within the digester. Detailed analysis of the digester biomass after 5 years of operation indicated that H2- and propionate-utilising SRB had outcompeted hydrogenophilic methanogens and propionate syntrophs. Acetoclastic methanogens were shown to play the dominant role in acetate conversion. Butyrate- and ethanol-degrading syntrophs also remained active in the digester after 5 years of operation. Laboratory-scale hybrid reactor treatment at 55 °C of a diluted molasses influent, with and without sulphate supplementation, showed that the reactors could be operated with high stability at volumetric loading rates of 24 kg COD/m3/day (12 h HRT). In the presence of sulphate (2 g/l; COD/sulphate ratio of 6:1), acetate conversion was severely inhibited, resulting in effluent acetate concentrations of up to 4000 mg/l.

  11. Fluid dynamics structures in a fire environment observed in laboratory-scale experiments

    Treesearch

    J. Lozano; W. Tachajapong; D.R. Weise; S. Mahalingam; M. Princevac

    2010-01-01

    Particle Image Velocimetry (PIV) measurements were performed in laboratory-scale experimental fires spreading across horizontal fuel beds composed of aspen (Populus tremuloides Michx) excelsior. The continuous flame, intermittent flame, and thermal plume regions of a fire were investigated. Utilizing a PIV system, instantaneous velocity fields for...

  12. On the dominant noise components of tactical aircraft: Laboratory to full scale

    NASA Astrophysics Data System (ADS)

    Tam, Christopher K. W.; Aubert, Allan C.; Spyropoulos, John T.; Powers, Russell W.

    2018-05-01

    This paper investigates the dominant noise components of a full-scale high performance tactical aircraft. The present study uses acoustic measurements of the exhaust jet from a single General Electric F414-400 turbofan engine installed in a Boeing F/A-18E Super Hornet aircraft operating from flight idle to maximum afterburner. The full-scale measurements are to the ANSI S12.75-2012 standard employing about 200 microphones. By comparing measured noise spectra with those from hot supersonic jets observed in the laboratory, the dominant noise components specific to the F/A-18E aircraft at different operating power levels are identified. At intermediate power, it is found that the dominant noise components of an F/A-18E aircraft are essentially the same as those of high temperature supersonic laboratory jets. However, at military and afterburner powers, there are new dominant noise components. Their characteristics are then documented and analyzed. This is followed by an investigation of their origin and noise generation mechanisms.

  13. Next-generation genome-scale models for metabolic engineering.

    PubMed

    King, Zachary A; Lloyd, Colton J; Feist, Adam M; Palsson, Bernhard O

    2015-12-01

    Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict optimal genetic modifications that improve the rate and yield of chemical production. A new generation of COBRA models and methods, encompassing many biological processes and simulation strategies, is now being developed, and next-generation models enable new types of predictions. Here, three key examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering. Copyright © 2014 Elsevier Ltd. All rights reserved.
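At its core, the COBRA approach solves a linear program: maximize an objective flux subject to steady-state mass balance and flux bounds. The toy sketch below does this for an invented three-reaction network using SciPy's LP solver; it is nowhere near a genome-scale model, and the network, bounds, and objective are all illustrative assumptions:

```python
from scipy.optimize import linprog

# Toy constraint-based model: uptake -> A, R1: A -> B, R2: B -> biomass.
# Steady state requires S @ v = 0; we maximize the biomass flux v3
# (linprog minimizes, hence the -1 in the objective).
S = [
    [1, -1,  0],   # metabolite A: produced by uptake, consumed by R1
    [0,  1, -1],   # metabolite B: produced by R1, consumed by R2
]
c = [0, 0, -1]
bounds = [(0, 10), (0, None), (0, None)]  # substrate uptake capped at 10 units

res = linprog(c, A_eq=S, b_eq=[0, 0], bounds=bounds, method="highs")
max_biomass = -res.fun  # 10.0: growth is limited by the uptake bound
```

Predicting the effect of a gene knockout in this framework amounts to forcing the corresponding reaction bound to zero and re-solving, which is the basic operation behind the strain-design predictions the article discusses.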

  14. Scaled laboratory experiments explain the kink behaviour of the Crab Nebula jet

    PubMed Central

    Li, C. K.; Tzeferacos, P.; Lamb, D.; Gregori, G.; Norreys, P. A.; Rosenberg, M. J.; Follett, R. K.; Froula, D. H.; Koenig, M.; Seguin, F. H.; Frenje, J. A.; Rinderknecht, H. G.; Sio, H.; Zylstra, A. B.; Petrasso, R. D.; Amendt, P. A.; Park, H. S.; Remington, B. A.; Ryutov, D. D.; Wilks, S. C.; Betti, R.; Frank, A.; Hu, S. X.; Sangster, T. C.; Hartigan, P.; Drake, R. P.; Kuranz, C. C.; Lebedev, S. V.; Woolsey, N. C.

    2016-01-01

    The remarkable discovery by the Chandra X-ray observatory that the Crab nebula's jet periodically changes direction provides a challenge to our understanding of astrophysical jet dynamics. It has been suggested that this phenomenon may be the consequence of magnetic fields and magnetohydrodynamic instabilities, but experimental demonstration in a controlled laboratory environment has remained elusive. Here we report experiments that use high-power lasers to create a plasma jet that can be directly compared with the Crab jet through well-defined physical scaling laws. The jet generates its own embedded toroidal magnetic fields; as it moves, plasma instabilities result in multiple deflections of the propagation direction, mimicking the kink behaviour of the Crab jet. The experiment is modelled with three-dimensional numerical simulations that show exactly how the instability develops and results in changes of direction of the jet. PMID:27713403

  15. Scaled laboratory experiments explain the kink behaviour of the Crab Nebula jet.

    PubMed

    Li, C K; Tzeferacos, P; Lamb, D; Gregori, G; Norreys, P A; Rosenberg, M J; Follett, R K; Froula, D H; Koenig, M; Seguin, F H; Frenje, J A; Rinderknecht, H G; Sio, H; Zylstra, A B; Petrasso, R D; Amendt, P A; Park, H S; Remington, B A; Ryutov, D D; Wilks, S C; Betti, R; Frank, A; Hu, S X; Sangster, T C; Hartigan, P; Drake, R P; Kuranz, C C; Lebedev, S V; Woolsey, N C

    2016-10-07

    The remarkable discovery by the Chandra X-ray observatory that the Crab nebula's jet periodically changes direction provides a challenge to our understanding of astrophysical jet dynamics. It has been suggested that this phenomenon may be the consequence of magnetic fields and magnetohydrodynamic instabilities, but experimental demonstration in a controlled laboratory environment has remained elusive. Here we report experiments that use high-power lasers to create a plasma jet that can be directly compared with the Crab jet through well-defined physical scaling laws. The jet generates its own embedded toroidal magnetic fields; as it moves, plasma instabilities result in multiple deflections of the propagation direction, mimicking the kink behaviour of the Crab jet. The experiment is modelled with three-dimensional numerical simulations that show exactly how the instability develops and results in changes of direction of the jet.

  16. Intermediate Scale Laboratory Testing to Understand Mechanisms of Capillary and Dissolution Trapping during Injection and Post-Injection of CO 2 in Heterogeneous Geological Formations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Illangasekare, Tissa; Trevisan, Luca; Agartan, Elif

    2015-03-31

    Carbon Capture and Storage (CCS) is a technology aimed at reducing the atmospheric loading of CO2 from power plants and heavy industries by injecting it into deep geological formations, such as saline aquifers. A number of trapping mechanisms contribute to effective and secure storage of the injected CO2, present as a supercritical fluid phase (scCO2), in the formation over the long term. The primary trapping mechanisms are structural, residual, dissolution, and mineralization. Knowledge gaps exist on how formation heterogeneity, manifested at all scales from the pore to the site scale, affects trapping and the parameterization of contributing mechanisms in models. An experimental and modeling study was conducted to fill these knowledge gaps. Experimental investigation of fundamental processes and mechanisms in field settings is not possible, as it is not feasible to fully characterize the geologic heterogeneity at all relevant scales or to gather data on the migration, trapping, and dissolution of scCO2. Laboratory experiments using scCO2 under ambient conditions are also not feasible, as it is technically challenging and cost-prohibitive to develop large two- or three-dimensional test systems with controlled high pressures to keep the scCO2 as a liquid. Hence, an innovative approach was developed that used surrogate fluids in place of scCO2 and formation brine in multi-scale synthetic-aquifer test systems ranging from centimeter to meter scale. New modeling algorithms were developed to capture the processes controlled by the formation heterogeneity, and they were tested using the data from the laboratory test systems. The results and findings are expected to contribute toward better conceptual models, future improvements to DOE numerical codes, more accurate assessment of storage capacities, and optimized placement strategies. This report presents the experimental and modeling methods and research results.

  17. Evaluation of Surface Runoff Generation Processes Using a Rainfall Simulator: A Small Scale Laboratory Experiment

    NASA Astrophysics Data System (ADS)

    Danáčová, Michaela; Valent, Peter; Výleta, Roman

    2017-12-01

    A rainfall intensity of 5 mm/min was used to irrigate a disturbed soil sample. The experiment was undertaken for several different slopes, under the condition of no vegetation cover. The results of the rainfall simulation experiment complied with the expectation of a strong relationship between the slope gradient and the amount of surface runoff generated. The experiments with higher slope gradients were characterised by larger volumes of surface runoff, which also occurred after shorter times. Experiments with rainfall simulators in both laboratory and field conditions play an important role in better understanding runoff generation processes. The results of such small-scale experiments can be used to estimate some of the parameters of complex hydrological models used to model rainfall-runoff and erosion processes at the catchment scale.
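The runoff generation observed in such experiments is often conceptualized as infiltration-excess (Hortonian) overland flow: rainfall beyond the soil's infiltration capacity runs off. A minimal sketch, with the infiltration capacity invented for illustration:

```python
def runoff_rate(rain_mm_min: float, infiltration_capacity_mm_min: float) -> float:
    """Infiltration-excess (Hortonian) overland flow: rainfall in excess
    of the soil's infiltration capacity becomes surface runoff."""
    return max(0.0, rain_mm_min - infiltration_capacity_mm_min)

# With the 5 mm/min simulator intensity and a hypothetical capacity of 3 mm/min:
q = runoff_rate(5.0, 3.0)  # 2.0 mm/min of surface runoff
```

In practice the infiltration capacity decays over the course of an experiment and varies with slope, which is exactly the kind of parameter such simulator runs help constrain.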

  18. Terminology modeling for an enterprise laboratory orders catalog.

    PubMed

    Zhou, Li; Goldberg, Howard; Pabbathi, Deepika; Wright, Adam; Goldman, Debora S; Van Putten, Cheryl; Barley, Amanda; Rocha, Roberto A

    2009-11-14

    Laboratory test orders are used in a variety of clinical information systems at Partners HealthCare. At present, each site at Partners manages its own set of laboratory orders with locally defined codes. Our current plan is to implement an enterprise catalog, where laboratory test orders are mapped to reference terminologies and codes from different sites are mapped to each other. This paper describes the terminology modeling effort that preceded the implementation of the enterprise laboratory orders catalog. In particular, we present our experience in adapting HL7's "Common Terminology Services 2 - Upper Level Class Model" as a terminology metamodel for guiding the development of fully specified laboratory orders and related services.

  19. Multiscale Laboratory Infrastructure and Services to users: Plans within EPOS

    NASA Astrophysics Data System (ADS)

    Spiers, Chris; Willingshofer, Ernst; Drury, Martyn; Funiciello, Francesca; Rosenau, Matthias; Scarlato, Piergiorgio; Sagnotti, Leonardo; EPOS WG6, Corrado Cimarelli

    2015-04-01

    The participant countries in EPOS embody a wide range of world-class laboratory infrastructures, ranging from high temperature and pressure experimental facilities to electron microscopy, micro-beam analysis, analogue modeling and paleomagnetic laboratories. Most data produced by the various laboratory centres and networks are presently available only in limited "final form" in publications. Many data remain inaccessible and/or poorly preserved. However, the data produced at the participating laboratories are crucial to serving society's need for geo-resources exploration and for protection against geo-hazards. Indeed, to model resource formation and system behaviour during exploitation, we need an understanding from the molecular to the continental scale, based on experimental data. This contribution will describe the plans that the laboratories community in Europe is making in the context of EPOS. The main objectives are:
    • To collect and harmonize available and emerging laboratory data on the properties and processes controlling rock system behaviour at multiple scales, in order to generate products accessible and interoperable through services for supporting research activities.
    • To co-ordinate the development, integration and trans-national usage of the major solid Earth Science laboratory centres and specialist networks. The length scales encompassed by the infrastructures range from the nano- and micrometer levels (electron microscopy and micro-beam analysis) to experiments on centimetre-sized samples, and to analogue model experiments simulating the reservoir scale, the basin scale and the plate scale.
    • To provide products and services supporting research into Geo-resources and Geo-storage, Geo-hazards and Earth System Evolution.
    If the EPOS Implementation Phase proposal presently under construction is successful, then a range of services and transnational activities will be put in place to realize these objectives.

  20. The sense and non-sense of plot-scale, catchment-scale, continental-scale and global-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Bronstert, Axel; Heistermann, Maik; Francke, Till

    2017-04-01

    Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that the question of which scale is appropriate depends on the overall question under study. Therefore, it is not advisable to give a generally applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worth examining the advantages and shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since (very) large / global scale approaches and models are increasingly in operation, raising the question of how far and for what purposes such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies the process scale, measurement scale, and modelling scale differ from each other. In some cases, the differences between these scales can be of different orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in the description and modelling of hydrological, ecological and atmospheric processes. Let us now summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models of different scales. Micro scale (e.g. extent of a plot, field or hillslope): (+) enables process research, based on controlled experiments (e.g. infiltration; root water uptake; chemical matter transport); (+) data on state conditions (e.g. soil parameters, vegetation properties) and boundary fluxes (e.g. rainfall or evapotranspiration) are directly measurable and reproducible; (+) equations based on

  1. Modelling utility-scale wind power plants. Part 1: Economics

    NASA Astrophysics Data System (ADS)

    Milligan, Michael R.

    1999-10-01

    As the worldwide use of wind turbine generators continues to increase in utility-scale applications, it will become increasingly important to assess the economic and reliability impact of these intermittent resources. Although the utility industry in the United States appears to be moving towards a restructured environment, basic economic and reliability issues will continue to be relevant to companies involved with electricity generation. This article is the first of two which address modelling approaches and results obtained in several case studies and research projects at the National Renewable Energy Laboratory (NREL). This first article addresses the basic economic issues associated with electricity production from several generators that include large-scale wind power plants. An important part of this discussion is the role of unit commitment and economic dispatch in production cost models. This paper includes overviews and comparisons of the prevalent production cost modelling methods, including several case studies applied to a variety of electric utilities. The second article discusses various methods of assessing capacity credit and results from several reliability-based studies performed at NREL.
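The role of economic dispatch in the production cost models discussed above can be sketched as a merit-order calculation: demand is served from the lowest marginal cost upward, which is how zero-marginal-cost wind displaces thermal generation. The unit data below are invented for illustration:

```python
def economic_dispatch(units, load_mw):
    """Merit-order economic dispatch: fill demand from the cheapest
    marginal cost upward. units = [(name, $/MWh, capacity_MW), ...].
    Returns the per-unit schedule and the total production cost per hour."""
    schedule, cost, remaining = {}, 0.0, load_mw
    for name, price, cap in sorted(units, key=lambda u: u[1]):
        mw = min(cap, remaining)
        schedule[name] = mw
        cost += mw * price
        remaining -= mw
    return schedule, cost

# Hypothetical system: zero-marginal-cost wind plus two thermal units.
units = [("coal", 20.0, 100.0), ("wind", 0.0, 50.0), ("gas", 40.0, 80.0)]
schedule, cost = economic_dispatch(units, load_mw=120.0)
# wind serves 50 MW, coal 70 MW, gas stays off -> $1400/h production cost
```

Real production cost models add unit-commitment constraints (start-up costs, minimum up/down times, ramp limits), which is why intermittent wind complicates the optimization beyond this simple stack.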

  2. Bounds on low scale gravity from RICE data and cosmogenic neutrino flux models

    NASA Astrophysics Data System (ADS)

    Hussain, Shahid; McKay, Douglas W.

    2006-03-01

    We explore limits on low scale gravity models set by results from the Radio Ice Cherenkov Experiment's (RICE) ongoing search for cosmic ray neutrinos in the cosmogenic, or GZK, energy range. The bound on M, the fundamental scale of gravity, depends upon the cosmogenic flux model, black hole formation and decay treatments, inclusion of graviton mediated elastic neutrino processes, and the number of large extra dimensions, d. Assuming proton-based cosmogenic flux models that cover a broad range of flux possibilities, we find bounds on M beginning at 0.9 TeV; other cosmogenic flux models generally lead to smaller fluxes and correspondingly weaker bounds. Values d=5, 6 and 7, for which laboratory and astrophysical bounds on LSG models are less restrictive, lead to essentially the same limits on M.

  3. A Unified Multi-scale Model for Cross-Scale Evaluation and Integration of Hydrological and Biogeochemical Processes

    NASA Astrophysics Data System (ADS)

    Liu, C.; Yang, X.; Bailey, V. L.; Bond-Lamberty, B. P.; Hinkle, C.

    2013-12-01

    Mathematical representations of hydrological and biogeochemical processes in soil, plant, aquatic, and atmospheric systems vary with scale. Process-rich models are typically used to describe hydrological and biogeochemical processes at the pore and small scales, while empirical, correlation approaches are often used at the watershed and regional scales. A major challenge for multi-scale modeling is that water flow, biogeochemical processes, and reactive transport are described using different physical laws and/or expressions at the different scales. For example, flow is governed by the Navier-Stokes equations at the pore scale in soils, by the Darcy law in soil columns and aquifers, and by the Navier-Stokes equations again in open water bodies (ponds, lakes, rivers) and the atmospheric surface layer. This research explores whether the physical laws at the different scales and in different physical domains can be unified to form a unified multi-scale model (UMSM) to systematically investigate the cross-scale, cross-domain behavior of fundamental processes at different scales. This presentation will discuss our research on the concept, mathematical equations, and numerical execution of the UMSM. Three-dimensional, multi-scale hydrological processes at the Disney Wilderness Preservation (DWP) site, Florida, are used as an example to demonstrate the application of the UMSM. In this research, the UMSM was used to simulate hydrological processes in rooting zones at the pore and small scales, including water migration in soils under saturated and unsaturated conditions, root-induced hydrological redistribution, and the role of rooting-zone biogeochemical properties (e.g., root exudates and microbial mucilage) in water storage and wetting/draining. The small-scale simulation results were used to estimate effective water retention properties in soil columns that were superimposed on the bulk soil water retention properties at the DWP site. The UMSM parameterized from smaller
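The scale-dependent flow laws named above differ even in their simplest forms, which is the crux of the unification problem. A sketch contrasting a Darcy-law flux with a Poiseuille-type pore-scale velocity estimate (the derived Navier-Stokes solution for a cylindrical pore); all numerical values are illustrative:

```python
def darcy_flux(hydraulic_conductivity, head_gradient):
    """Darcy's law for porous-media flow: q = -K * dh/dx (m/s)."""
    return -hydraulic_conductivity * head_gradient

def pore_scale_velocity(delta_p, radius, viscosity, length):
    """Poiseuille mean velocity in a cylindrical pore, a Navier-Stokes
    result: v = dP * r^2 / (8 * mu * L)."""
    return delta_p * radius**2 / (8.0 * viscosity * length)

q = darcy_flux(1e-5, -0.01)  # 1e-7 m/s for a sandy K and 1% head gradient
v_pore = pore_scale_velocity(100.0, 1e-5, 1e-3, 0.01)  # 1.25e-4 m/s
```

Darcy's law is itself an upscaled average of many such pore-scale solutions, which is why a UMSM needs consistent parameter transfer between the two descriptions.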

  4. Experimental methods for the simulation of supercritical CO2 injection at laboratory scale aimed to investigate capillary trapping

    NASA Astrophysics Data System (ADS)

    Trevisan, L.; Illangasekare, T. H.; Rodriguez, D.; Sakaki, T.; Cihan, A.; Birkholzer, J. T.; Zhou, Q.

    2011-12-01

    Geological storage of carbon dioxide in deep geologic formations is being considered as a technical option to reduce greenhouse gas loading to the atmosphere. The processes associated with the movement and stable trapping are complex in deep naturally heterogeneous formations. Three primary mechanisms contribute to trapping; capillary entrapment due to immobilization of the supercritical fluid CO2 within soil pores, liquid CO2 dissolving in the formation water and mineralization. Natural heterogeneity in the formation is expected to affect all three mechanisms. A research project is in progress with the primary goal to improve our understanding of capillary and dissolution trapping during injection and post-injection process, focusing on formation heterogeneity. It is expected that this improved knowledge will help to develop site characterization methods targeting on obtaining the most critical parameters that capture the heterogeneity to design strategies and schemes to maximize trapping. This research combines experiments at the laboratory scale with multiphase modeling to upscale relevant trapping processes to the field scale. This paper presents the results from a set of experiments that were conducted in an intermediate scale test tanks. Intermediate scale testing provides an attractive alternative to investigate these processes under controlled conditions in the laboratory. Conducting these types of experiments is highly challenging as methods have to be developed to extrapolate the data from experiments that are conducted under ambient laboratory conditions to high temperatures and pressures settings in deep geologic formations. We explored the use of a combination of surrogate fluids that have similar density, viscosity contrasts and analogous solubility and interfacial tension as supercritical CO2-brine in deep formations. The extrapolation approach involves the use of dimensionless numbers such as Capillary number (Ca) and the Bond number (Bo). 
A set of
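The dimensionless-number extrapolation described above can be sketched in a few lines. This is an illustrative example only: the fluid properties below are hypothetical placeholders, not values from the study, and the point is simply that a surrogate fluid pair is chosen so that the lab-scale Ca and Bo match their field-scale counterparts.

```python
# Illustrative sketch (hypothetical property values, not the study's data):
# matching the capillary number Ca and Bond number Bo between a surrogate-fluid
# laboratory experiment and deep-formation supercritical CO2-brine conditions.

def capillary_number(mu, v, sigma):
    """Ca = viscous / capillary forces = mu * v / sigma."""
    return mu * v / sigma

def bond_number(delta_rho, g, length, sigma):
    """Bo = gravitational / capillary forces = delta_rho * g * L**2 / sigma."""
    return delta_rho * g * length**2 / sigma

# Hypothetical field conditions (supercritical CO2 displacing brine)
Ca_field = capillary_number(mu=5e-5, v=1e-5, sigma=0.03)
Bo_field = bond_number(delta_rho=300.0, g=9.81, length=1e-4, sigma=0.03)

# A surrogate fluid pair with higher viscosity is run at lower velocity
# so that the laboratory Ca equals the field Ca:
Ca_lab = capillary_number(mu=1e-3, v=5e-7, sigma=0.03)
print(Ca_field, Ca_lab, Bo_field)
```

Matching both numbers simultaneously constrains the choice of surrogate fluids and flow rates, which is the essence of the extrapolation approach.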

  5. A review of analogue modelling of geodynamic processes: Approaches, scaling, materials and quantification, with an application to subduction experiments

    NASA Astrophysics Data System (ADS)

    Schellart, Wouter P.; Strak, Vincent

    2016-10-01

    We present a review of the analogue modelling method, which has been used for 200 years, and continues to be used, to investigate geological phenomena and geodynamic processes. We particularly focus on the following four components: (1) the different fundamental modelling approaches that exist in analogue modelling; (2) the scaling theory and scaling of topography; (3) the different materials and rheologies that are used to simulate the complex behaviour of rocks; and (4) a range of recording techniques that are used for qualitative and quantitative analyses and interpretations of analogue models. Furthermore, we apply these four components to laboratory-based subduction models and describe some of the issues at hand with modelling such systems. Over the last 200 years, a wide variety of analogue materials have been used with different rheologies, including viscous materials (e.g. syrups, silicones, water), brittle materials (e.g. granular materials such as sand, microspheres and sugar), plastic materials (e.g. plasticine), visco-plastic materials (e.g. paraffin, waxes, petrolatum) and visco-elasto-plastic materials (e.g. hydrocarbon compounds and gelatins). These materials have been used in many different set-ups to study processes from the microscale, such as porphyroclast rotation, to the mantle scale, such as subduction and mantle convection. Despite the wide variety of modelling materials and great diversity in model set-ups and processes investigated, all laboratory experiments can be classified into one of three different categories based on three fundamental modelling approaches that have been used in analogue modelling: (1) The external approach, (2) the combined (external + internal) approach, and (3) the internal approach. 
In the external approach and combined approach, energy is added to the experimental system through the external application of a velocity, temperature gradient or a material influx (or a combination thereof), and so the system is open

  6. Post Audit of a Field Scale Reactive Transport Model of Uranium at a Former Mill Site

    NASA Astrophysics Data System (ADS)

    Curtis, G. P.

    2015-12-01

    Reactive transport of hexavalent uranium (U(VI)) in a shallow alluvial aquifer at a former uranium mill tailings site near Naturita, CO, has been monitored for nearly 30 years by the US Department of Energy and the US Geological Survey. Groundwater at the site has high concentrations of chloride, alkalinity, and U(VI) owing to ore processing at the site from 1941 to 1974. We previously calibrated a multicomponent reactive transport model to data collected at the site from 1986 to 2001. A two-dimensional nonreactive transport model used a uniform hydraulic conductivity estimated from observed chloride concentrations and tritium-helium age dates. A reactive transport model for the 2 km long site was developed by including an equilibrium U(VI) surface complexation model calibrated to laboratory data, together with calcite equilibrium. The calibrated model reproduced both the nonreactive tracers and the observed U(VI), pH, and alkalinity. Forward simulations for the period 2002-2015 conducted with the calibrated model predict significantly faster natural attenuation of U(VI) concentrations than has been observed, as shown by the persistently high U(VI) concentrations at the site. Alternative modeling approaches are being evaluated using recent data to determine whether the persistence can be explained by multirate mass transfer models developed from experimental observations at the column scale (~0.2 m), the laboratory tank scale (~2 m), the field tracer test scale (~1-4 m), or the geophysical observation scale (~1-5 m). Results of this comparison should provide insight into the persistence of U(VI) plumes and improved management options.

  7. Continuous microalgal cultivation in a laboratory-scale photobioreactor under seasonal day-night irradiation: experiments and simulation.

    PubMed

    Bertucco, Alberto; Beraldi, Mariaelena; Sforza, Eleonora

    2014-08-01

    In this work, the production of Scenedesmus obliquus in a continuous flat-plate laboratory-scale photobioreactor (PBR) under alternated day-night cycles was tested both experimentally and theoretically. Variation of light intensity according to the four seasons of the year were simulated experimentally by a tunable LED lamp, and effects on microalgal growth and productivity were measured to evaluate the conversion efficiency of light energy into biomass during the different seasons. These results were used to validate a mathematical model for algae growth that can be applied to simulate a large-scale production unit, carried out in a flat-plate PBR of similar geometry. The cellular concentration in the PBR was calculated in both steady-state and transient conditions, and the value of the maintenance kinetic term was correlated to experimental profiles. The relevance of this parameter was finally outlined.
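A continuous-culture biomass balance with a maintenance term, of the general kind referenced above, can be sketched as follows. This is an assumed model form with hypothetical parameter values, not the authors' calibrated model: growth is taken as light-saturating, and biomass obeys dX/dt = (mu - m - D)X under a repeating day-night irradiance cycle.

```python
# Minimal sketch (assumed model form, hypothetical parameters): continuous PBR
# biomass balance dX/dt = (mu(I) - m - D) * X, with dilution rate D,
# maintenance term m, and light-dependent specific growth rate mu(I).

def mu_light(I, mu_max=0.06, K_I=100.0):
    # Saturating dependence of specific growth rate (1/h) on irradiance I
    return mu_max * I / (K_I + I)

def simulate(X0=0.5, D=0.01, m=0.005, hours=240, dt=0.1):
    # Forward-Euler integration over a repeating 14 h light / 10 h dark cycle
    X = X0
    t = 0.0
    while t < hours:
        I = 300.0 if (t % 24.0) < 14.0 else 0.0  # day-night irradiance
        X += dt * (mu_light(I) - m - D) * X
        t += dt
    return X

print(simulate())
```

With these placeholder values the net daily growth is positive, so the biomass concentration climbs toward a light-limited quasi-steady state; the maintenance term m sets how much biomass is lost during the dark phase, which is why it can be correlated to measured concentration profiles.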

  8. Multi-scale modeling of multi-component reactive transport in geothermal aquifers

    NASA Astrophysics Data System (ADS)

    Nick, Hamidreza M.; Raoof, Amir; Wolf, Karl-Heinz; Bruhn, David

    2014-05-01

    In deep geothermal systems, heat and chemical stresses can cause physical alterations that may have a significant effect on flow and reaction rates. These alterations lead to changes in the permeability and porosity of the formations due to mineral precipitation and dissolution. Large-scale modeling of reactive transport in such systems is still challenging. A large area of uncertainty is how the pore-scale information controlling flow and reaction behaves at larger scales. A possible choice is to use constitutive relationships relating, for example, the permeability and porosity evolution to changes in the pore geometry. While determining such relationships through laboratory experiments may be limited, pore-network modeling provides an alternative. In this work, we introduce a new workflow in which a hybrid finite-element finite-volume method [1,2] and a pore-network modeling approach [3] are employed. Using the pore-scale model, relevant constitutive relations are developed. These relations are then embedded in the continuum-scale model. This approach enables us to study non-isothermal reactive transport in porous media while accounting for micro-scale features under realistic conditions. The performance and applicability of the proposed model are explored for different flow and reaction regimes. References: 1. Matthäi, S.K., et al.: Simulation of solute transport through fractured rock: a higher-order accurate finite-element finite-volume method permitting large time steps. Transport in Porous Media 83.2 (2010): 289-318. 2. Nick, H.M., et al.: Reactive dispersive contaminant transport in coastal aquifers: numerical simulation of a reactive Henry problem. Journal of Contaminant Hydrology 145 (2012): 90-104. 3. Raoof, A., et al.: PoreFlow: a complex pore-network model for simulation of reactive transport in variably saturated porous media. Computers & Geosciences 61 (2013): 160-174.

  9. Use of a PhET Interactive Simulation in General Chemistry Laboratory: Models of the Hydrogen Atom

    ERIC Educational Resources Information Center

    Clark, Ted M.; Chamberlain, Julia M.

    2014-01-01

    An activity supporting the PhET interactive simulation, Models of the Hydrogen Atom, has been designed and used in the laboratory portion of a general chemistry course. This article describes the framework used to successfully accomplish implementation on a large scale. The activity guides students through a comparison and analysis of the six…

  10. Upscaling of reaction rates in reactive transport using pore-scale reactive transport model

    NASA Astrophysics Data System (ADS)

    Yoon, H.; Dewers, T. A.; Arnold, B. W.; Major, J. R.; Eichhubl, P.; Srinivasan, S.

    2013-12-01

    Dissolved CO2 during geological CO2 storage may react with minerals in fractured rocks, confined aquifers, or faults, resulting in mineral precipitation and dissolution. The overall rate of reaction can be affected by coupled processes among hydrodynamics, transport, and reactions at the (sub-)pore scale. In this research, pore-scale modeling of coupled fluid flow, reactive transport, and heterogeneous reaction at the mineral surface is applied to account for permeability alterations caused by precipitation-induced pore blocking. This work is motivated by the observed CO2 seeps at Crystal Geyser, Utah, a natural analog to geologic CO2 sequestration. A key observation is the lateral migration of CO2 seep sites at a scale of ~100 meters over time. A pore-scale model provides fundamental mechanistic explanations of how calcite precipitation alters flow paths by pore plugging under different geochemical compositions and pore configurations. In addition, response functions of reaction rates will be constructed from pore-scale simulations that account for a range of reaction regimes characterized by the Damkohler and Peclet numbers. The newly developed response functions will be used in a continuum-scale model that may account for large-scale phenomena mimicking the lateral migration of surface CO2 seeps. Comparison of field observations and simulation results will provide mechanistic explanations of the lateral migration and enhance our understanding of subsurface processes associated with CO2 injection. This work is supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. 
Department of Energy's National Nuclear Security

  11. Pesticide fate on catchment scale: conceptual modelling of stream CSIA data

    NASA Astrophysics Data System (ADS)

    Lutz, Stefanie R.; van der Velde, Ype; Elsayed, Omniea F.; Imfeld, Gwenaël; Lefrancq, Marie; Payraudeau, Sylvain; van Breukelen, Boris M.

    2017-10-01

    Compound-specific stable isotope analysis (CSIA) has proven beneficial in the characterization of contaminant degradation in groundwater, but it had not previously been used to assess pesticide transformation at the catchment scale. This study presents concentration and carbon CSIA data for the herbicides S-metolachlor and acetochlor from three locations (plot, drain, and catchment outlets) in a 47 ha agricultural catchment (Bas-Rhin, France). Herbicide concentrations at the catchment outlet were highest (62 µg L-1) in response to an intense rainfall event following herbicide application. Increases in the δ13C values of S-metolachlor and acetochlor of more than 2 ‰ during the study period indicated herbicide degradation. To assist the interpretation of these data, discharge, concentrations, and δ13C values of S-metolachlor were modelled with a conceptual mathematical model using a transport formulation based on travel-time distributions. Testing of different model setups supported the assumption that degradation half-lives (DT50) increase with soil depth, which can be implemented straightforwardly in conceptual models using travel-time distributions. Moreover, model calibration yielded an estimate of a field-integrated isotopic enrichment factor, as opposed to laboratory-based assessments of enrichment factors in closed systems. Finally, the Rayleigh equation commonly applied in groundwater studies was tested with our model for its potential to quantify degradation at the catchment scale. It provided conservative estimates of the extent of degradation in stream samples; however, because these estimates largely exceeded the simulated degradation within the entire catchment, they were not representative of overall degradation at the catchment scale. The conceptual modelling approach thus enabled us to upscale sample-based CSIA information on degradation to the catchment scale. Overall, this study demonstrates the benefit of combining monitoring and conceptual modelling of concentration
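The Rayleigh-equation estimate referenced above can be sketched as follows. The equation itself is the standard form used in groundwater CSIA; the delta values and enrichment factor in the example are illustrative, not the study's calibrated field-integrated value.

```python
# Hedged sketch of the standard Rayleigh equation from groundwater CSIA:
# extent of degradation B = 1 - f, with remaining fraction
# f = ((delta_t + 1000) / (delta_0 + 1000)) ** (1000 / epsilon).
# Input values below are illustrative only.

def extent_of_degradation(delta_0, delta_t, epsilon):
    """Fraction degraded, from initial and sampled d13C (permil) and the
    isotopic enrichment factor epsilon (permil; negative for 13C enrichment)."""
    f_remaining = ((delta_t + 1000.0) / (delta_0 + 1000.0)) ** (1000.0 / epsilon)
    return 1.0 - f_remaining

# A +2 permil shift with an assumed epsilon of -1.5 permil implies that
# roughly three quarters of the compound has been degraded in the sample:
B = extent_of_degradation(delta_0=-32.0, delta_t=-30.0, epsilon=-1.5)
print(round(B, 2))
```

Note that such an estimate describes degradation experienced by the sampled water parcel, which is exactly why, as the abstract points out, it need not represent overall degradation across the whole catchment.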

  12. Blood Flow: Multi-scale Modeling and Visualization (July 2011)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2011-01-01

    Multi-scale modeling of arterial blood flow can shed light on the interaction between events happening at micro- and meso-scales (i.e., adhesion of red blood cells to the arterial wall, clot formation) and at macro-scales (i.e., changes in flow patterns due to the clot). Coupled numerical simulations of such multi-scale flow require state-of-the-art computers and algorithms, along with techniques for multi-scale visualization. This animation presents early results of two studies used in the development of a multi-scale visualization methodology. The first illustrates a flow of healthy (red) and diseased (blue) blood cells using a Dissipative Particle Dynamics (DPD) method. Each blood cell is represented by a mesh, small spheres show a sub-set of particles representing the blood plasma, and instantaneous streamlines and slices represent the ensemble-average velocity. In the second study we investigate the process of thrombus (blood clot) formation, which may be responsible for the rupture of aneurysms, by concentrating on the platelet blood cells and observing them as they aggregate on the wall of an aneurysm. The simulation was performed on Kraken at the National Institute for Computational Sciences. Visualization was produced using resources of the Argonne Leadership Computing Facility at Argonne National Laboratory.

  13. Oxy-acetylene driven laboratory scale shock tubes for studying blast wave effects

    NASA Astrophysics Data System (ADS)

    Courtney, Amy C.; Andrusiv, Lubov P.; Courtney, Michael W.

    2012-04-01

    This paper describes the development and characterization of modular, oxy-acetylene driven laboratory scale shock tubes. Such tools are needed to produce realistic blast waves in a laboratory setting. The pressure-time profiles measured at 1 MHz using high-speed piezoelectric pressure sensors have relevant durations and show a true shock front and exponential decay characteristic of free-field blast waves. Descriptions are included for shock tube diameters of 27-79 mm. A range of peak pressures from 204 kPa to 1187 kPa (with 0.5-5.6% standard error of the mean) were produced by selection of the driver section diameter and distance from the shock tube opening. The peak pressures varied predictably with distance from the shock tube opening while maintaining both a true blast wave profile and relevant pulse duration for distances up to about one diameter from the shock tube opening. This shock tube design provides a more realistic blast profile than current compression-driven shock tubes, and it does not have a large jet effect. In addition, operation does not require specialized personnel or facilities like most blast-driven shock tubes, which reduces operating costs and effort and permits greater throughput and accessibility. It is expected to be useful in assessing the response of various sensors to shock wave loading; assessing the reflection, transmission, and absorption properties of candidate armor materials; assessing material properties at high rates of loading; assessing the response of biological materials to shock wave exposure; and providing a means to validate numerical models of the interaction of shock waves with structures. All of these activities have been difficult to pursue in a laboratory setting due in part to lack of appropriate means to produce a realistic blast loading profile.

  14. Laboratory and Pilot Scale Evaluation of Coagulation, Clarification, and Filtration for Upgrading Sewage Lagoon Effluents.

    DTIC Science & Technology

    1980-08-01

    M. John Cullinane, Jr., and Richard A. Shafer, Environmental Laboratory, U.S. Army Engineer Waterways Experiment Station, Vicksburg, MS.

  15. Multi-scale modeling in cell biology

    PubMed Central

    Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick

    2009-01-01

    Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808

  16. Scaling depth-induced wave-breaking in two-dimensional spectral wave models

    NASA Astrophysics Data System (ADS)

    Salmon, J. E.; Holthuijsen, L. H.; Zijlema, M.; van Vledder, G. Ph.; Pietrzak, J. D.

    2015-03-01

    Wave breaking in shallow water is still poorly understood and needs to be better parameterized in 2D spectral wave models. Significant wave heights over horizontal bathymetries are typically under-predicted in locally generated wave conditions and over-predicted in non-locally generated conditions. A joint scaling dependent on both local bottom slope and normalized wave number is presented and is shown to resolve these issues. Compared to the 12 wave breaking parameterizations considered in this study, this joint scaling demonstrates significant improvements, up to ∼50% error reduction, over 1D horizontal bathymetries for both locally and non-locally generated waves. In order to account for the inherent differences between uni-directional (1D) and directionally spread (2D) wave conditions, an extension of the wave breaking dissipation models is presented. By including the effects of wave directionality, rms-errors for the significant wave height are reduced for the best performing parameterizations in conditions with strong directional spreading. With this extension, our joint scaling improves modeling skill for significant wave heights over a verification data set of 11 different 1D laboratory bathymetries, 3 shallow lakes and 4 coastal sites. The corresponding averaged normalized rms-error for significant wave height in the 2D cases varied between 8% and 27%. In comparison, using the default setting with a constant scaling, as used in most presently operating 2D spectral wave models, gave equivalent errors between 15% and 38%.

  17. Multi-Scale Models for the Scale Interaction of Organized Tropical Convection

    NASA Astrophysics Data System (ADS)

    Yang, Qiu

    Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.

  18. Salvus: A flexible open-source package for waveform modelling and inversion from laboratory to global scales

    NASA Astrophysics Data System (ADS)

    Afanasiev, M.; Boehm, C.; van Driel, M.; Krischer, L.; May, D.; Rietmann, M.; Fichtner, A.

    2016-12-01

    Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time, and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Based on a high-order finite (spectral) element discretization, we have built Salvus to work on unstructured quad/hex meshes in both 2 and 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics is supported (e.g. coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a python-based meshing package is included to simplify the generation and manipulation of regional- to global-scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. 
Our goal in this presentation is to introduce the code, show several examples across the scales, and discuss some of the extensible design points.

  19. Smoothing analysis of slug tests data for aquifer characterization at laboratory scale

    NASA Astrophysics Data System (ADS)

    Aristodemo, Francesco; Ianchello, Mario; Fallico, Carmine

    2018-07-01

    The present paper proposes a smoothing analysis of hydraulic head data sets obtained from different slug tests performed in a confined aquifer. Laboratory experiments were carried out in a 3D large-scale physical model built at the University of Calabria. The hydraulic head data were recorded by a pressure transducer placed in the injection well and then processed to smooth out the high-frequency noise present in the recorded signals. The adopted smoothing techniques, working in the time, frequency, and time-frequency domains, are the Savitzky-Golay filter based on a third-order polynomial, the Fourier transform, and two types of wavelet transform (Mexican hat and Morlet). The performance of the filtered hydraulic head time series for different slug volumes and measurement frequencies was statistically analyzed in terms of optimal fitting to the classical Cooper equation. For practical purposes, the hydraulic heads smoothed by these techniques were used to determine the hydraulic conductivity of the aquifer. The energy contents and the frequency oscillations of the hydraulic head variations in the aquifer were examined in the time-frequency domain by means of the wavelet transform, as were the non-linear features of the observed hydraulic head oscillations around the theoretical Cooper curve.
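A third-order Savitzky-Golay-style smoothing of a noisy head record, of the kind described above, can be sketched with a cubic least-squares fit over a sliding window. This is an assumed illustration on synthetic data, not the authors' processing code; the exponential "head" curve merely stands in for a Cooper-type recovery signal.

```python
# Illustrative sketch (assumed implementation, synthetic data): third-order
# Savitzky-Golay-style smoothing of a noisy slug-test head record, realized
# as a cubic least-squares fit over a centered sliding window.

import numpy as np

def savgol3(y, window=11):
    """Third-order polynomial smoothing over a centered sliding window.
    Edge samples (first/last window//2 points) are left unfiltered."""
    half = window // 2
    x = np.arange(-half, half + 1)
    out = np.asarray(y, dtype=float).copy()
    for i in range(half, len(y) - half):
        coeffs = np.polyfit(x, y[i - half:i + half + 1], deg=3)
        out[i] = np.polyval(coeffs, 0.0)  # fitted value at window center
    return out

# Synthetic normalized head record: exponential recovery plus sensor noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
head = np.exp(-0.5 * t)                       # idealized H/H0 decay
noisy = head + rng.normal(0.0, 0.02, t.size)  # high-frequency noise
smooth = savgol3(noisy)
print(float(np.mean((smooth - head) ** 2)) < float(np.mean((noisy - head) ** 2)))
```

Because the cubic fit preserves the low-order shape of the recovery curve while averaging out high-frequency noise, the smoothed series fits the idealized decay more closely than the raw record, which is the property exploited when fitting the Cooper solution.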

  20. Numerical simulation on hydromechanical coupling in porous media adopting three-dimensional pore-scale model.

    PubMed

    Liu, Jianjun; Song, Rui; Cui, Mengmeng

    2014-01-01

    A novel approach to simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore and confining pressures, are measured at the laboratory scale. A micro-CT scanner is employed to scan the samples for three-dimensional images, used as input to construct the model. Accordingly, four physical models possessing the same pore and rock-matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, three-dimensional finite element models of both the rock matrix and the pore space are established on the MIMICS and ICEM software platforms. The Navier-Stokes equation and an elastic constitutive equation are used as the mathematical model for simulation. A hydromechanically coupled analysis in the pore-scale finite element model of the porous media is carried out with the ANSYS and CFX software, and the permeability of the sandstone samples under different pore and confining pressures is thereby predicted. The simulation results agree well with the benchmark data. By reproducing the stress state underground, the accuracy of pore-scale predictions of porous rock permeability is improved. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view.

  1. Numerical Simulation on Hydromechanical Coupling in Porous Media Adopting Three-Dimensional Pore-Scale Model

    PubMed Central

    Liu, Jianjun; Song, Rui; Cui, Mengmeng

    2014-01-01

    A novel approach to simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore and confining pressures, are measured at the laboratory scale. A micro-CT scanner is employed to scan the samples for three-dimensional images, used as input to construct the model. Accordingly, four physical models possessing the same pore and rock-matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, three-dimensional finite element models of both the rock matrix and the pore space are established on the MIMICS and ICEM software platforms. The Navier-Stokes equation and an elastic constitutive equation are used as the mathematical model for simulation. A hydromechanically coupled analysis in the pore-scale finite element model of the porous media is carried out with the ANSYS and CFX software, and the permeability of the sandstone samples under different pore and confining pressures is thereby predicted. The simulation results agree well with the benchmark data. By reproducing the stress state underground, the accuracy of pore-scale predictions of porous rock permeability is improved. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view. PMID:24955384

  2. Scaled-model guidelines for formation-flying solar coronagraph missions.

    PubMed

    Landini, Federico; Romoli, Marco; Baccani, Cristian; Focardi, Mauro; Pancrazzi, Maurizio; Galano, Damien; Kirschner, Volker

    2016-02-15

    Stray light suppression is the main concern in designing a solar coronagraph. The main contribution to the stray light for an externally occulted space-borne solar coronagraph is the light diffracted by the occulter and scattered by the optics. It is mandatory to carefully evaluate the diffraction generated by an external occulter and the impact that it has on the stray light signal on the focal plane. The scientific need for observations to cover a large portion of the heliosphere with an inner field of view as close as possible to the photospheric limb supports the ambition of launching formation-flying giant solar coronagraphs. Their dimension prevents the possibility of replicating the flight geometry in a clean laboratory environment, and the strong need for a scaled model is thus envisaged. The problem of scaling a coronagraph has already been faced for exoplanets, for a single point source on axis at infinity. We face the problem here by adopting an original approach and by introducing the scaling of the solar disk as an extended source.
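One common starting point for scaling occulter diffraction geometry, before the extended-source refinement that the abstract describes, is to preserve the Fresnel number of the occulter between the full-size and lab configurations. The sketch below illustrates only that baseline idea, with hypothetical dimensions; it is not the paper's guideline, which additionally accounts for the scaling of the solar disk as an extended source.

```python
# Hedged sketch (hypothetical geometry, baseline approach only): scaling an
# externally occulted coronagraph by preserving the occulter Fresnel number
# N = a**2 / (wavelength * z), for occulter radius a and separation z.

def fresnel_number(a, wavelength, z):
    return a**2 / (wavelength * z)

def scaled_distance(a_model, a_full, z_full):
    # Separation for the scaled occulter that keeps N unchanged
    return z_full * (a_model / a_full) ** 2

# Hypothetical formation-flying geometry: 0.7 m occulter at 150 m separation
N_full = fresnel_number(a=0.7, wavelength=550e-9, z=150.0)
# A 35 mm lab occulter then sits much closer while producing an equivalent
# diffraction regime:
z_model = scaled_distance(a_model=0.035, a_full=0.7, z_full=150.0)
print(N_full, z_model)
```

The quadratic shrinkage of the separation is what makes a laboratory replica of a formation-flying geometry conceivable at all; the remaining difficulty, addressed in the paper, is that the Sun is not a single on-axis point source.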

  3. Laboratory Scale Coal And Biomass To Drop-In Fuels (CBDF) Production And Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lux, Kenneth; Imam, Tahmina; Chevanan, Nehru

    This Final Technical Report describes the work and accomplishments of the project entitled, “Laboratory Scale Coal and Biomass to Drop-In Fuels (CBDF) Production and Assessment.” The main objective of the project was to fabricate and test a lab-scale liquid-fuel production system using coal containing different percentages of biomass such as corn stover and switchgrass at a rate of 2 liters per day. The system utilizes the patented Altex fuel-production technology, which incorporates advanced catalysts developed by Pennsylvania State University. The system was designed, fabricated, tested, and assessed for economic and environmental feasibility relative to competing technologies.

  4. RANS Simulation (Virtual Blade Model [VBM]) of Single Lab Scaled DOE RM1 MHK Turbine

    DOE Data Explorer

    Javaherchi, Teymour; Stelzenmuller, Nick; Aliseda, Alberto; Seydel, Joseph

    2014-04-15

    Attached are the .cas and .dat files for the Reynolds-Averaged Navier-Stokes (RANS) simulation of a single lab-scaled DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a re-designed geometry, based on the full-scale DOE RM1 design, that produces the same power output as the full-scale model while operating at matched tip speed ratio values at reachable laboratory Reynolds numbers (see attached paper). In this case study, the flow field around and in the wake of the lab-scaled DOE RM1 turbine is simulated using the Blade Element Model (a.k.a. Virtual Blade Model) by solving the RANS equations coupled with the k-ω turbulence closure model. It should be highlighted that in this simulation the actual geometry of the rotor blade is not modeled: the effect of the rotating turbine blades is represented using blade element theory. The simulation provides an accurate estimate of the performance of the device and the structure of its turbulent far wake. Due to the simplifications implemented for modeling the rotating blades, VBM is limited in its ability to capture details of the flow field in the near-wake region of the device. The required User Defined Functions (UDFs) and the look-up table of lift and drag coefficients are included along with the .cas and .dat files.

  5. Formation of Glycidyl Fatty Acid Esters Both in Real Edible Oils during Laboratory-Scale Refining and in Chemical Model during High Temperature Exposure.

    PubMed

    Cheng, Weiwei; Liu, Guoqin; Liu, Xinqi

    2016-07-27

    In the present study, the formation mechanisms of glycidyl fatty acid esters (GEs) were investigated both in real edible oils (soybean oil, camellia oil, and palm oil) during laboratory-scale preparation and refining and in a chemical model (1,2-dipalmitin (DPG) and 1-monopalmitin (MPG)) during high-temperature exposure (160-260 °C under nitrogen). The formation of GEs in the chemical model was monitored using attenuated total reflection-Fourier transform infrared (ATR-FTIR) spectroscopy. The results showed that the roasting and pressing processes produced certain amounts of GEs, though much lower than those produced during deodorization. GE contents in edible oils increased continuously and significantly with increasing deodorization time below 200 °C. However, when the temperature exceeded 200 °C, GE contents increased sharply within 1-2 h and then gradually decreased, indicating simultaneous formation and degradation of GEs at high temperature. In addition, the presence of acylglycerols (DAGs and MAGs) significantly increased the yield of GEs both in real edible oils and in the chemical model. Moreover, compared with DAGs, MAGs displayed a higher formation capacity but a substantially lower contribution to GE formation, owing to their low contents in edible oils. In situ ATR-FTIR spectroscopic evidence showed that a cyclic acyloxonium ion intermediate formed during GE formation from DPG and MPG in the chemical model heated at 200 °C.

  6. 2000-hour cyclic endurance test of a laboratory model multipropellant resistojet

    NASA Technical Reports Server (NTRS)

    Morren, W. Earl; Sovey, James S.

    1987-01-01

    The technological readiness of a long-life multipropellant resistojet for space station auxiliary propulsion is demonstrated. A laboratory model resistojet made from grain-stabilized platinum served as a test bed to evaluate the design characteristics, fabrication methods, and operating strategies for an engineering model multipropellant resistojet developed under contract by the Rocketdyne Division of Rockwell International and Technion Incorporated. The laboratory model thruster was subjected to a 2000-hr, 2400-thermal-cycle endurance test using carbon dioxide propellant. Maximum thruster temperatures were approximately 1400 C. The post-test analyses of the laboratory model thruster included an investigation of component microstructures. Significant observations from the laboratory model thruster are discussed as they relate to the design of the engineering model thruster.

  7. Simulation of large scale motions and small scale structures in planetary atmospheres and oceans: From laboratory to space experiments on ISS

    NASA Astrophysics Data System (ADS)

    Egbers, Christoph; Futterer, Birgit; Zaussinger, Florian; Harlander, Uwe

    2014-05-01

    Baroclinic waves are responsible for the transport of heat and momentum in the oceans and in the Earth's atmosphere, as well as in other planetary atmospheres. The talk will give an overview of possibilities for simulating such large-scale structures, and co-existing small-scale structures, with the help of well-defined laboratory experiments such as the baroclinic wave tank (annulus experiment). The analogy between the Earth's atmosphere and the rotating cylindrical annulus experiment, driven only by rotation and differential heating between polar and equatorial regions, is obvious. Single vortices separate from the Gulf Stream from time to time. The same dynamics, and the co-existence of small- and large-scale structures and their separation, can also be observed in laboratory experiments such as the rotating cylindrical annulus experiment. This experiment represents mid-latitude dynamics quite well and serves as a central reference experiment in the Germany-wide DFG priority research programme ("METSTRÖM", SPP 1276), providing a benchmark for many different numerical methods. On the other hand, such laboratory experiments in cylindrical geometry are limited by the fact that the surface, and the real interaction between polar and equatorial regions and their different dynamics, cannot really be studied. Therefore, I demonstrate how to use the very successful Geoflow I and Geoflow II space experiment hardware on the ISS, with future modifications, for simulations of small- and large-scale planetary atmospheric motion in spherical geometry with differential heating between the inner and outer spheres as well as between the polar and equatorial regions. References: Harlander, U., Wenzel, J., Wang, Y., Alexandrov, K. & Egbers, Ch., 2012, Simultaneous PIV- and thermography measurements of partially blocked flow in a heated rotating annulus, Exp. in Fluids, 52 (4), 1077-1087; Futterer, B., Krebs, A., Plesa, A.-C., Zaussinger, F., Hollerbach, R., Breuer, D. & Egbers, Ch., 2013, Sheet-like and

  8. Modelling and scale-up of chemical flooding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G.A.; Lake, L.W.; Sepehrnoori, K.

    1990-03-01

    The objective of this research is to develop, validate, and apply a comprehensive chemical flooding simulator for chemical recovery processes involving surfactants, polymers, and alkaline chemicals in various combinations. This integrated program includes laboratory experiments, physical property modelling, scale-up theory, and numerical analysis as necessary and integral components of the simulation activity. We have continued to develop, test, and apply our chemical flooding simulator (UTCHEM) to a wide variety of laboratory and reservoir problems involving tracers, polymers, polymer gels, surfactants, and alkaline agents. Part I is an update on the Application of Higher-Order Methods in Chemical Flooding Simulation, focusing on the comparison of grid-orientation effects for four different numerical methods implemented in UTCHEM. Part II, on Simulation Design Studies, is a continuation of Saad's Big Muddy surfactant pilot simulation study reported last year. Part III reports on the Simulation of Gravity Effects under conditions similar to those of some oil reservoirs in the North Sea. Part IV is on Determining Oil Saturation from Interwell Tracers: UTCHEM is used for large-scale interwell tracer tests, a systematic procedure for estimating oil saturation from interwell tracer data is developed, and a specific example based on actual field data provided by Sun E P Co. is given. Part V reports on the Application of Vectorization and Microtasking for Reservoir Simulation. Part VI reports on Alkaline Simulation: the alkaline/surfactant/polymer flood compositional simulator (UTCHEM) reported last year is further extended to include reactions involving chemical species containing magnesium, aluminium, and silicon as constituent elements. Part VII reports on permeability and trapping of microemulsion.
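Interwell tracer tests of the kind mentioned in Part IV typically estimate oil saturation from the chromatographic retardation of a partitioning tracer relative to a conservative one. A minimal sketch of that standard relation, with illustrative numbers rather than the Sun field data:

```python
def oil_saturation_from_tracers(t_partitioning, t_conservative, partition_coeff):
    """Residual oil saturation from the retardation of a partitioning tracer.

    R = t_p / t_c and So = (R - 1) / (R - 1 + K), where K is the tracer's
    oil/water partition coefficient and the t's are mean residence times of
    the partitioning and conservative tracers.
    """
    retardation = t_partitioning / t_conservative
    return (retardation - 1.0) / (retardation - 1.0 + partition_coeff)

# illustrative numbers: 20% retardation with K = 4 implies So of roughly 0.05
so = oil_saturation_from_tracers(120.0, 100.0, 4.0)
```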

  9. Simulations of Tornadoes, Tropical Cyclones, MJOs, and QBOs, using GFDL's multi-scale global climate modeling system

    NASA Astrophysics Data System (ADS)

    Lin, Shian-Jiann; Harris, Lucas; Chen, Jan-Huey; Zhao, Ming

    2014-05-01

    A multi-scale High-Resolution Atmosphere Model (HiRAM) is being developed at the NOAA Geophysical Fluid Dynamics Laboratory. The model's dynamical framework is the non-hydrostatic extension of the vertically Lagrangian finite-volume dynamical core (Lin 2004, Monthly Wea. Rev.) constructed on a stretchable (via Schmidt transformation) cubed-sphere grid. Physical parametrizations originally designed for IPCC-type climate predictions are being modified to be more "scale-aware", in an effort to make the model suitable for multi-scale weather-climate applications, with horizontal resolution ranging from 1 km (near the target high-resolution region) to as coarse as 400 km (near the antipodal point). One of the main goals of this development is to enable simulation of high-impact weather phenomena (such as tornadoes, thunderstorms, and category-5 hurricanes) within an IPCC-class climate modeling system, something previously thought impossible. We will present preliminary results covering a very wide spectrum of temporal-spatial scales, ranging from the simulation of tornado genesis (hours), Madden-Julian Oscillations (intra-seasonal), and tropical cyclones (seasonal) to Quasi-Biennial Oscillations (intra-decadal), using the same global multi-scale modeling system.

  10. Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin

    2006-01-01

    This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…
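For reference, the fixed-threshold RSM that the RE-RSM generalizes assigns category probabilities from a person parameter, an item difficulty, and a set of thresholds shared across items. A minimal sketch of those probabilities (all parameter values are illustrative):

```python
import math

def rsm_category_probs(theta, delta, taus):
    """Category probabilities for one item under Andrich's rating scale model.

    theta: person location; delta: item difficulty; taus: the m shared
    thresholds. P(X = x) is proportional to exp(sum over k <= x of
    (theta - delta - tau_k)), with an empty sum for x = 0. The RE-RSM of the
    abstract additionally treats each threshold as a person-level random effect.
    """
    logits, running = [0.0], 0.0
    for tau in taus:
        running += theta - delta - tau
        logits.append(running)
    z = sum(math.exp(l) for l in logits)
    return [math.exp(l) / z for l in logits]

probs = rsm_category_probs(theta=0.5, delta=0.0, taus=[-1.0, 1.0])
```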

  11. Development of a laboratory demonstration model active cleaning device

    NASA Technical Reports Server (NTRS)

    Shannon, R. L.; Gillette, R. B.

    1975-01-01

    A laboratory demonstration model of a device for removing contaminant films from optical surfaces in space was developed. The development of a plasma tube, which would produce the desired cleaning effects under high vacuum conditions, represented the major problem in the program. This plasma tube development is discussed, and the resulting laboratory demonstration-model device is described.

  12. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G. D.; Tartakovsky, A. M.; Scheibe, T. D.; Fang, Y.; Mahadevan, R.; Lovley, D. R.

    2013-09-01

    Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on the coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented as a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways and describe reaction rates using empirically derived formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted a lower biomass yield, and a different stoichiometry for iron consumption, than prior Monod formulations based on energetics considerations. By modifying the reaction stoichiometry and biomass yield coefficient, we were able to fit an equivalent Monod model that effectively matched the results of the genome-scale simulation of microbial behavior under excess-nutrient conditions, but the predictions of the fitted Monod model deviated from those of the genome-scale model
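The Monod-type formulation that the abstract contrasts with the genome-scale model can be sketched as a dual-limitation rate law; the parameter names and values below are generic placeholders, not the fitted values from the study:

```python
def dual_monod_rate(mu_max, donor, k_donor, acceptor, k_acceptor, biomass):
    """Dual-Monod growth rate limited by both the electron donor (acetate)
    and the acceptor (Fe(III) oxide): the lumped kinetics that the pore-scale
    study compares against the genome-scale metabolic model."""
    return (mu_max * biomass
            * donor / (k_donor + donor)
            * acceptor / (k_acceptor + acceptor))

# at concentrations far above both half-saturation constants, the rate
# approaches mu_max * biomass
rate = dual_monod_rate(0.5, 1e6, 0.1, 1e6, 0.2, 2.0)
```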

  13. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Guzel D.; Tartakovsky, Alexandre M.; Scheibe, Timothy D.

    2013-09-07

    Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on the coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented as a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways and describe reaction rates using empirically derived formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted a lower biomass yield, and a different stoichiometry for iron consumption, than prior Monod formulations based on energetics considerations. By modifying the reaction stoichiometry and biomass yield coefficient, we were able to fit an equivalent Monod model that effectively matched the results of the genome-scale simulation of microbial behavior under excess-nutrient conditions, but the predictions of the fitted Monod model deviated from those of the genome-scale

  14. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    NASA Astrophysics Data System (ADS)

    Scheibe, T. D.; Tartakovsky, G.; Tartakovsky, A. M.; Fang, Y.; Mahadevan, R.; Lovley, D. R.

    2012-12-01

    Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on the coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented as a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways and describe reaction rates using empirically derived formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted a lower biomass yield, and a different stoichiometry for iron consumption, than prior Monod formulations based on energetics considerations. By modifying the reaction stoichiometry and biomass yield coefficient, we were able to fit an equivalent Monod model that effectively matched the results of the genome-scale simulation of microbial behavior under excess-nutrient conditions, but the predictions of the fitted Monod model deviated from those of the genome-scale model

  15. Terminology Modeling for an Enterprise Laboratory Orders Catalog

    PubMed Central

    Zhou, Li; Goldberg, Howard; Pabbathi, Deepika; Wright, Adam; Goldman, Debora S.; Van Putten, Cheryl; Barley, Amanda; Rocha, Roberto A.

    2009-01-01

    Laboratory test orders are used in a variety of clinical information systems at Partners HealthCare. At present, each site at Partners manages its own set of laboratory orders with locally defined codes. Our current plan is to implement an enterprise catalog, where laboratory test orders are mapped to reference terminologies and codes from different sites are mapped to each other. This paper describes the terminology modeling effort that preceded the implementation of the enterprise laboratory orders catalog. In particular, we present our experience in adapting HL7’s “Common Terminology Services 2 – Upper Level Class Model” as a terminology metamodel for guiding the development of fully specified laboratory orders and related services. PMID:20351950

  16. Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach

    PubMed Central

    Bigdely-Shamlo, Nima; Makeig, Scott; Robbins, Kay A.

    2016-01-01

    Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain–computer interface models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty of moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a “containerized” approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining, and (meta-)analysis. The EEG Study Schema (ESS) comprises three data “Levels,” each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. The ESS schema and tools are freely available at www.eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org). PMID:27014048

  17. Tsunami Simulators in Physical Modelling Laboratories - From Concept to Proven Technique

    NASA Astrophysics Data System (ADS)

    Allsop, W.; Chandler, I.; Rossetto, T.; McGovern, D.; Petrone, C.; Robinson, D.

    2016-12-01

    Before 2004, there was little public awareness around Indian Ocean coasts of the potential size and effects of tsunamis. Even in 2011, the scale and extent of devastation by the Japan East Coast Tsunami was unexpected. There were very few engineering tools to assess the onshore impacts of tsunamis, and thus no agreement on robust methods to predict forces on coastal defences, buildings, or related infrastructure. Modelling generally used substantial simplifications: either solitary waves (with far too short durations) or dam breaks (unrealistic and/or uncontrolled wave forms). This presentation will describe research from the EPI-centre, HYDRALAB IV, URBANWAVES, and CRUST projects over the last 10 years that has developed and refined pneumatic Tsunami Simulators for the hydraulic laboratory. These unique devices have been used to model generic elevated and N-wave tsunamis up to and over simple shorelines and at example defences. They have reproduced full-duration tsunamis, including the Mercator trace from 2004, at 1:50 scale. Engineering-scale models subjected to those tsunamis have measured wave run-up on simple slopes, forces on idealised sea defences, and pressures/forces on buildings. This presentation will describe how these pneumatic Tsunami Simulators work, demonstrate how they have generated tsunami waves longer than the facility within which they operate, and highlight research results from the three generations of Tsunami Simulator. Of direct relevance to engineers and modellers will be measurements of wave run-up levels and comparison with theoretical predictions. Recent measurements of forces on individual buildings have been generalized by separate experiments on buildings (up to 4 rows), which show that the greatest forces can act on the landward (not seaward) buildings. Continuing research in the 70 m long, 4 m wide Fast Flow Facility on tsunami defence structures has also measured forces on buildings in the lee of a failed defence wall.

  18. Modeling Reservoir-River Networks in Support of Optimizing Seasonal-Scale Reservoir Operations

    NASA Astrophysics Data System (ADS)

    Villa, D. L.; Lowry, T. S.; Bier, A.; Barco, J.; Sun, A.

    2011-12-01

    HydroSCOPE (Hydropower Seasonal Concurrent Optimization of Power and the Environment) is a seasonal time-scale tool for scenario analysis and optimization of reservoir-river networks. Developed in MATLAB, HydroSCOPE is an object-oriented model that simulates basin-scale dynamics with an objective of optimizing reservoir operations to maximize revenue from power generation, reliability in the water supply, environmental performance, and flood control. HydroSCOPE is part of a larger toolset that is being developed through a Department of Energy multi-laboratory project. This project's goal is to provide conventional hydropower decision makers with better information to execute their day-ahead and seasonal operations and planning activities by integrating water balance and operational dynamics across a wide range of spatial and temporal scales. This presentation details the modeling approach and functionality of HydroSCOPE. HydroSCOPE consists of a river-reservoir network model and an optimization routine. The river-reservoir network model simulates the heat and water balance of river-reservoir networks for time-scales up to one year. The optimization routine software, DAKOTA (Design Analysis Kit for Optimization and Terascale Applications - dakota.sandia.gov), is seamlessly linked to the network model and is used to optimize daily volumetric releases from the reservoirs to best meet a set of user-defined constraints, such as maximizing revenue while minimizing environmental violations. The network model uses 1-D approximations for both the reservoirs and river reaches and is able to account for surface and sediment heat exchange as well as ice dynamics for both models. The reservoir model also accounts for inflow, density, and withdrawal zone mixing, and diffusive heat exchange. Routing for the river reaches is accomplished using a modified Muskingum-Cunge approach that automatically calculates the internal timestep and sub-reach lengths to match the conditions of
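The modified Muskingum-Cunge routing used for HydroSCOPE's river reaches reduces, per timestep, to a three-coefficient linear update. A minimal sketch of that update; in full Muskingum-Cunge the K and x parameters are derived from channel hydraulics each step, but here they are taken as fixed constants for illustration:

```python
def muskingum_step(inflow_prev, inflow_now, outflow_prev, k, x, dt):
    """One linear Muskingum routing step: O2 = C0*I2 + C1*I1 + C2*O1.

    k is the reach travel time, x the weighting factor (0 to 0.5), dt the
    timestep; the three coefficients always sum to one.
    """
    denom = 2.0 * k * (1.0 - x) + dt
    c0 = (dt - 2.0 * k * x) / denom
    c1 = (dt + 2.0 * k * x) / denom
    c2 = (2.0 * k * (1.0 - x) - dt) / denom
    return c0 * inflow_now + c1 * inflow_prev + c2 * outflow_prev

# steady inflow passes through unchanged because C0 + C1 + C2 = 1
out = muskingum_step(10.0, 10.0, 10.0, k=2.0, x=0.2, dt=1.0)
```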

  19. RANS Simulation (Rotating Reference Frame Model [RRF]) of Single Lab-Scaled DOE RM1 MHK Turbine

    DOE Data Explorer

    Javaherchi, Teymour; Stelzenmuller, Nick; Aliseda, Alberto; Seydel, Joseph

    2014-04-15

    Attached are the .cas and .dat files for the Reynolds-Averaged Navier-Stokes (RANS) simulation of a single lab-scaled DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a redesigned geometry, based on the full-scale DOE RM1 design, that produces the same power output as the full-scale model while operating at matched Tip Speed Ratio values at laboratory-achievable Reynolds numbers (see attached paper). In this case study, taking advantage of the symmetry of the lab-scaled DOE RM1 geometry, only half of the geometry is modeled, using the (single) Rotating Reference Frame [RRF] model. In this model the RANS equations, coupled with the k-ω turbulence closure model, are solved in the rotating reference frame. The actual geometry of the turbine blade is included, and the turbulent boundary layer along the blade span is simulated using a wall-function approach. The rotation of the blade is modeled by applying periodic boundary conditions to the planes of symmetry. This case study simulates the performance and flow field in the near and far wake of the device at the desired operating conditions. The results of these simulations were validated against in-house experimental data. Please see the attached paper.

  20. Development of a large-scale isolation chamber system for the safe and humane care of medium-sized laboratory animals harboring infectious diseases*

    PubMed Central

    Pan, Xin; Qi, Jian-cheng; Long, Ming; Liang, Hao; Chen, Xiao; Li, Han; Li, Guang-bo; Zheng, Hao

    2010-01-01

    The close phylogenetic relationship between humans and non-human primates makes non-human primates an irreplaceable model for the study of human infectious diseases. In this study, we describe the development of a large-scale automatic multi-functional isolation chamber for use with medium-sized laboratory animals carrying infectious diseases. The isolation chamber, including the transfer chain, disinfection chain, negative air pressure isolation system, animal welfare system, and the automated system, is designed to meet all biological safety standards. To create an internal chamber environment that is completely isolated from the exterior, variable frequency drive blowers are used in the air-intake and air-exhaust system, precisely controlling the filtered air flow and providing an air-barrier protection. A double door transfer port is used to transfer material between the interior of the isolation chamber and the outside. A peracetic acid sterilizer and its associated pipeline allow for complete disinfection of the isolation chamber. All of the isolation chamber parameters can be automatically controlled by a programmable computerized menu, allowing for work with different animals in different-sized cages depending on the research project. The large-scale multi-functional isolation chamber provides a useful and safe system for working with infectious medium-sized laboratory animals in high-level bio-safety laboratories. PMID:20872984

  1. Multi-scale lung modeling.

    PubMed

    Tawhai, Merryn H; Bates, Jason H T

    2011-05-01

    Multi-scale modeling of biological systems has recently become fashionable due to the growing power of digital computers as well as to the growing realization that integrative systems behavior is as important to life as is the genome. While it is true that the behavior of a living organism must ultimately be traceable to all its components and their myriad interactions, attempting to codify this in its entirety in a model misses the insights gained from understanding how collections of system components at one level of scale conspire to produce qualitatively different behavior at higher levels. The essence of multi-scale modeling thus lies not in the inclusion of every conceivable biological detail, but rather in the judicious selection of emergent phenomena appropriate to the level of scale being modeled. These principles are exemplified in recent computational models of the lung. Airways responsiveness, for example, is an organ-level manifestation of events that begin at the molecular level within airway smooth muscle cells, yet it is not necessary to invoke all these molecular events to accurately describe the contraction dynamics of a cell, nor is it necessary to invoke all phenomena observable at the level of the cell to account for the changes in overall lung function that occur following methacholine challenge. Similarly, the regulation of pulmonary vascular tone has complex origins within the individual smooth muscle cells that line the blood vessels but, again, many of the fine details of cell behavior average out at the level of the organ to produce an effect on pulmonary vascular pressure that can be described in much simpler terms. The art of multi-scale lung modeling thus reduces not to being limitlessly inclusive, but rather to knowing what biological details to leave out.

  2. Laboratory development and testing of spacecraft diagnostics

    NASA Astrophysics Data System (ADS)

    Amatucci, William; Tejero, Erik; Blackwell, Dave; Walker, Dave; Gatling, George; Enloe, Lon; Gillman, Eric

    2017-10-01

    The Naval Research Laboratory's Space Chamber experiment is a large-scale laboratory device dedicated to the creation of large-volume plasmas with parameters scaled to realistic space plasmas. Such devices make valuable contributions to the investigation of space plasma phenomena under controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. However, in addition to investigations such as plasma wave and instability studies, such devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this talk, we will describe how the laboratory simulation of space plasmas made this development path possible. Work sponsored by the US Naval Research Laboratory Base Program.

  3. Laboratory constraints on models of earthquake recurrence

    NASA Astrophysics Data System (ADS)

    Beeler, N. M.; Tullis, Terry; Junger, Jenni; Kilgore, Brian; Goldsby, David

    2014-12-01

    In this study, rock friction "stick-slip" experiments are used to develop constraints on models of earthquake recurrence. Constant rate loading of bare rock surfaces in high-quality experiments produces stick-slip recurrence that is periodic at least to second order. When the loading rate is varied, recurrence is approximately inversely proportional to loading rate. These laboratory events initiate due to a slip-rate-dependent process that also determines the size of the stress drop and, as a consequence, stress drop varies weakly but systematically with loading rate. This is especially evident in experiments where the loading rate is changed by orders of magnitude, as is thought to be the loading condition of naturally occurring, small repeating earthquakes driven by afterslip, or low-frequency earthquakes loaded by episodic slip. The experimentally observed stress drops are well described by a logarithmic dependence on recurrence interval that can be cast as a nonlinear slip predictable model. The fault's rate dependence of strength is the key physical parameter. Additionally, even at constant loading rate the most reproducible laboratory recurrence is not exactly periodic, unlike existing friction recurrence models. We present example laboratory catalogs that document the variance and show that in large catalogs, even at constant loading rate, stress drop and recurrence covary systematically. The origin of this covariance is largely consistent with variability of the dependence of fault strength on slip rate. Laboratory catalogs show aspects of both slip and time predictability, and successive stress drops are strongly correlated indicating a "memory" of prior slip history that extends over at least one recurrence cycle.

  4. Future Shop: A Model Career Placement & Transition Laboratory.

    ERIC Educational Resources Information Center

    Floyd, Deborah L.; And Others

    During 1988-89, the Collin County Community College District (CCCCD) conducted a project to develop, implement, and evaluate a model career laboratory called a "Future Shop." The laboratory was designed to let users explore diverse career options, job placement opportunities, and transfer resources. The Future Shop lab had three major components:…

  5. Numerical Modeling and Experimental Analysis of Scale Horizontal Axis Marine Hydrokinetic (MHK) Turbines

    NASA Astrophysics Data System (ADS)

    Javaherchi, Teymour; Stelzenmuller, Nick; Seydel, Joseph; Aliseda, Alberto

    2013-11-01

    We investigate, through a combination of scale model experiments and numerical simulations, the evolution of the flow field around the rotor and in the wake of Marine Hydrokinetic (MHK) turbines. Understanding the dynamics of this flow field is the key to optimizing the energy conversion of single devices and the arrangement of turbines in commercially viable arrays. This work presents a comparison between numerical and experimental results from two different case studies of scaled horizontal axis MHK turbines (45:1 scale). In the first case study, we investigate the effect of Reynolds number (Re = 40,000 to 100,000) and Tip Speed Ratio (TSR = 5 to 12) variation on the performance and wake structure of a single turbine. In the second case, we study the effect of the turbine downstream spacing (5d to 14d) on the performance and wake development in a coaxial configuration of two turbines. These results provide insights into the dynamics of Horizontal Axis Hydrokinetic Turbines, and by extension to Horizontal Axis Wind Turbines in close proximity to each other, and highlight the capabilities and limitations of the numerical models. Once validated at laboratory scale, the numerical model can be used to address other aspects of MHK turbines at full scale. Supported by DOE through the National Northwest Marine Renewable Energy Center.
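The two operating parameters swept in the first case study, Tip Speed Ratio and Reynolds number, follow from standard definitions. This is a minimal sketch; the rotor radius, flow speed, and kinematic viscosity in the examples are assumptions for illustration, not the authors' experimental settings:

```python
def tip_speed_ratio(omega_rad_s, rotor_radius_m, flow_speed_m_s):
    """TSR = omega * R / U: blade tip speed over free-stream flow speed."""
    return omega_rad_s * rotor_radius_m / flow_speed_m_s

def reynolds_number(flow_speed_m_s, length_m, nu_m2_s=1.0e-6):
    """Re = U * L / nu; nu ~ 1e-6 m^2/s is a typical value for water."""
    return flow_speed_m_s * length_m / nu_m2_s
```

For example, a hypothetical 0.25 m radius rotor spinning at 20 rad/s in a 1 m/s current gives TSR = 5, at the low end of the range studied.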

  6. Wellbore Completion Systems Containment Breach Solution Experiments at a Large Scale Underground Research Laboratory : Sealant placement & scale-up from Lab to Field

    NASA Astrophysics Data System (ADS)

    Goodman, H.

    2017-12-01

This investigation seeks to develop sealant technology that can restore containment to completed wells that suffer CO2 gas leakages currently untreatable using conventional technologies. Experimentation is performed at the Mont Terri Underground Research Laboratory (MT-URL) located in NW Switzerland. The laboratory affords investigators an intermediate-scale test site that bridges the gap between the laboratory bench and full field-scale conditions. The project focus is the development of CO2 leakage remediation capability using sealant technology. The experimental concept includes design and installation of a field-scale completion package designed to mimic the heating-cooling conditions of well systems that may result in the development of micro-annuli detachments between the casing-cement-formation boundaries (Figure 1). Of particular interest is to test novel sealants that can be injected into relatively narrow micro-annuli flow paths of less than 120 microns aperture. Per a special report on CO2 storage submitted to the IPCC[1], active injection wells, along with inactive wells that have been abandoned, are identified as one of the most probable sources of leakage pathways for CO2 escape to the surface. Origins of pressure leakage common to injection well and completions architecture often occur due to tensile cracking from temperature cycles, micro-annulus development from casing contraction (differential casing to cement sheath movement), and cement sheath channel development. This discussion summarizes the experiment capability and sealant testing results. The experiment concludes with overcoring of the entire mock-completion test site to assess sealant performance in 2018. [1] IPCC Special Report on Carbon Dioxide Capture and Storage (September 2005), section 5.7.2 Processes and pathways for release of CO2 from geological storage sites, page 244

  7. Anaerobic treatment of animal byproducts from slaughterhouses at laboratory and pilot scale.

    PubMed

    Edström, Mats; Nordberg, Ake; Thyselius, Lennart

    2003-01-01

    Different mixtures of animal byproducts, other slaughterhouse waste (i.e., rumen, stomach and intestinal content), food waste, and liquid manure were codigested at mesophilic conditions (37 degrees C) at laboratory and pilot scale. Animal byproducts, including blood, represent 70-80% of the total biogas potential from waste generated during slaughter of animals. The total biogas potential from waste generated during slaughter is about 1300 MJ/cattle and about 140 MJ/pig. Fed-batch digestion of pasteurized (70 degrees C, 1 h) animal byproducts resulted in a fourfold increase in biogas yield (1.14 L/g of volatile solids [VS]) compared with nonpasteurized animal byproducts (0.31 L/g of VS). Mixtures with animal byproducts representing 19-38% of the total dry matter were digested in continuous-flow stirred tank reactors at laboratory and pilot scale. Stable processes at organic loading rates (OLRs) exceeding 2.5 g of VS/(L.d) and hydraulic retention times (HRTs) less than 40 d could be obtained with total ammonia nitrogen concentrations (NH4-N + NH3-N) in the range of 4.0-5.0 g/L. After operating one process for more than 1.5 yr at total ammonia nitrogen concentrations >4 g/L, an increase in OLR to 5 g of VS/(L.d) and a decrease in HRT to 22 d was possible without accumulation of volatile fatty acids.

  8. Laboratory and Physical Modelling of Building Ventilation Flows

    NASA Astrophysics Data System (ADS)

    Hunt, Gary

    2001-11-01

Heating and ventilating buildings accounts for a significant fraction of the total energy budget of cities, and an immediate challenge in building physics is the design of sustainable, low-energy buildings. Natural ventilation provides a low-energy solution, as it harnesses the buoyancy force associated with temperature differences between the internal and external environment, together with the wind, to drive a ventilating flow. Modern naturally-ventilated buildings use innovative design solutions, e.g. glazed atria and solar chimneys, to enhance the ventilation, and demand for these and other designs has far outstripped our understanding of the fluid mechanics within these buildings. Developing an understanding of the thermal stratification and movement of air provides a considerable challenge, as the flows involve interactions between stratification and turbulence, often in complex geometries. An approach that has provided significant new insight into these flows, and which has led to the development of design guidelines for architects and ventilation engineers, is laboratory modelling at small scale in water tanks combined with physical modelling. Density differences to drive the flow in simplified plexiglass models of rooms or buildings are provided by fresh and salt water solutions, and wind flow is represented by a mean flow in a flume tank. In tandem with the experiments, theoretical models that capture the essential physics of these flows have been developed in order to generalise the experimental results to a wide range of typical building geometries and operating conditions. This paper describes the application and outcomes of these modelling techniques to the study of a variety of natural ventilation flows in buildings.

  9. Preferential flow across scales: how important are plot scale processes for a catchment scale model?

    NASA Astrophysics Data System (ADS)

    Glaser, Barbara; Jackisch, Conrad; Hopp, Luisa; Klaus, Julian

    2017-04-01

Numerous experimental studies showed the importance of preferential flow for solute transport and runoff generation. As a consequence, various approaches exist to incorporate preferential flow in hydrological models. However, few studies have applied models that incorporate preferential flow at the hillslope scale and even fewer at the catchment scale. Certainly, one main difficulty for progress is the determination of an adequate parameterization for preferential flow at these spatial scales. This study applies a 3D physically based model (HydroGeoSphere) of a headwater region (6 ha) of the Weierbach catchment (Luxembourg). The base model was implemented without preferential flow and was limited in simulating fast catchment responses. Thus we hypothesized that the discharge performance can be improved by utilizing a dual-permeability approach for a representation of preferential flow. We used the information of bromide irrigation experiments performed on three 1 m2 plots to parameterize preferential flow. In a first step we ran 20,000 Monte Carlo simulations of these irrigation experiments in a 1 m2 column of the headwater catchment model, varying the dual-permeability parameters (15 variable parameters). These simulations identified many equifinal, yet very different parameter sets that reproduced the bromide depth profiles well. Therefore, in the next step we chose 52 parameter sets (the 40 best and 12 low-performing sets) for testing the effect of incorporating preferential flow in the headwater catchment scale model. The variability of the flow pattern responses at the headwater catchment scale was small between the different parameterizations and did not coincide with the variability at plot scale. The simulated discharge time series of the different parameterizations clustered in six groups of similar response, ranging from nearly unaffected to completely changed responses compared to the base case model without dual permeability. Yet, in none of the groups the

  10. Non-destructive evaluation of laboratory scale hydraulic fracturing using acoustic emission

    NASA Astrophysics Data System (ADS)

    Hampton, Jesse Clay

The primary objective of this research is to develop techniques to characterize hydraulic fractures and fracturing processes using acoustic emission (AE) monitoring based on laboratory scale hydraulic fracturing experiments. Individual microcrack AE source characterization is performed to understand the failure mechanisms associated with small failures along pre-existing discontinuities and grain boundaries. Individual microcrack analysis methods include moment tensor inversion techniques to elucidate the mode of failure, crack slip and crack normal direction vectors, and relative volumetric deformation of an individual microcrack. Differentiation between individual microcrack analysis and AE cloud based techniques is studied in efforts to refine discrete fracture network (DFN) creation and regional damage quantification of densely fractured media. Regional damage estimations from combinations of individual microcrack analyses and AE cloud density plotting are used to investigate the usefulness of weighting cloud based AE analysis techniques with microcrack source data. Two granite types were used in several sample configurations, including multi-block systems. Laboratory hydraulic fracturing was performed with sample sizes ranging from 15 × 15 × 25 cm to 30 × 30 × 25 cm in both unconfined and true-triaxially confined stress states using different types of materials. Hydraulic fracture testing in rock block systems containing a large natural fracture was investigated in terms of AE response throughout fracture interactions. Investigations of differing scale analyses showed the usefulness of individual microcrack characterization as well as DFN and cloud based techniques. Individual microcrack characterization weighting cloud based techniques correlated well with post-test damage evaluations.

  11. The Site-Scale Saturated Zone Flow Model for Yucca Mountain

    NASA Astrophysics Data System (ADS)

    Al-Aziz, E.; James, S. C.; Arnold, B. W.; Zyvoloski, G. A.

    2006-12-01

This presentation provides a reinterpreted conceptual model of the Yucca Mountain site-scale flow system subject to all quality assurance procedures. The results are based on a numerical model of the site-scale saturated zone beneath Yucca Mountain, which is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. This effort started from the ground up with a revised and updated hydrogeologic framework model, which incorporates the latest lithology data, and increased grid resolution that better resolves the hydrogeologic framework, which was updated throughout the model domain. In addition, faults are much better represented using the 250 × 250 m grid spacing (compared to the previous model's 500 × 500 m spacing). Data collected since the previous model calibration effort have been included; they comprise all Nye County water-level data through Phase IV of their Early Warning Drilling Program. Target boundary fluxes are derived from the newest (2004) Death Valley Regional Flow System model from the US Geological Survey. A consistent weighting scheme assigns importance to each measured water-level datum and boundary flux extracted from the regional model. The numerical model is calibrated by matching these weighted water level measurements and boundary fluxes using parameter estimation techniques, along with more informal comparisons of the model to hydrologic and geochemical information. The model software (hydrologic simulation code FEHM v2.24 and parameter estimation software PEST v5.5) and model setup facilitate efficient calibration of multiple conceptual models. Analyses evaluate the impact of these updates and additional data on the modeled potentiometric surface and the flowpaths emanating from below the repository. After examining the heads and permeabilities obtained from the calibrated models, we present particle pathways from the proposed repository and compare them to those from the

  12. Laboratory simulation of space plasma phenomena*

    NASA Astrophysics Data System (ADS)

    Amatucci, B.; Tejero, E. M.; Ganguli, G.; Blackwell, D.; Enloe, C. L.; Gillman, E.; Walker, D.; Gatling, G.

    2017-12-01

    Laboratory devices, such as the Naval Research Laboratory's Space Physics Simulation Chamber, are large-scale experiments dedicated to the creation of large-volume plasmas with parameters realistically scaled to those found in various regions of the near-Earth space plasma environment. Such devices make valuable contributions to the understanding of space plasmas by investigating phenomena under carefully controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. By working in collaboration with in situ experimentalists to create realistic conditions scaled to those found during the observations of interest, the microphysics responsible for the observed events can be investigated in detail not possible in space. To date, numerous investigations of phenomena such as plasma waves, wave-particle interactions, and particle energization have been successfully performed in the laboratory. In addition to investigations such as plasma wave and instability studies, the laboratory devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this presentation, we will describe several examples of the laboratory investigation of space plasma waves and instabilities and diagnostic development. *This work supported by the NRL Base Program.

  13. Laboratory Scale X-ray Fluorescence Tomography: Instrument Characterization and Application in Earth and Environmental Science.

    PubMed

    Laforce, Brecht; Vermeulen, Bram; Garrevoet, Jan; Vekemans, Bart; Van Hoorebeke, Luc; Janssen, Colin; Vincze, Laszlo

    2016-03-15

A new laboratory scale X-ray fluorescence (XRF) imaging instrument, based on an X-ray microfocus tube equipped with a monocapillary optic, has been developed to perform XRF computed tomography (XRF-CT) experiments with both higher spatial resolution (20 μm) and better energy resolution (130 eV at Mn-Kα) than achieved until now. This instrument opens a new range of possible applications for XRF-CT. Next to the analytical characterization of the setup using well-defined model/reference samples, demonstrating its capabilities for tomographic imaging, the XRF-CT microprobe has been used to image the interior of an ecotoxicological model organism, Americamysis bahia, which had been exposed to elevated metal (Cu and Ni) concentrations. The technique allowed the visualization of the accumulation sites of copper, clearly indicating the affected organs, i.e. either the gastric system or the hepatopancreas. As another illustrative application, the scanner has been employed to investigate goethite spherules from the Cretaceous-Paleogene boundary, revealing the internal elemental distribution of these valuable distal ejecta layer particles.

  14. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

The additivity model assumes that field-scale reaction properties in a sediment, including surface area, reactive site concentration, and reaction rate, can be predicted from the field-scale grain-size distribution by linearly adding reaction properties estimated in the laboratory for individual grain-size fractions. This study evaluated the additivity model in scaling mass transfer-limited, multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the rate constants for individual grain-size fractions, which were then used to predict rate-limited U(VI) desorption in the composite sediment. The results indicated that the additivity model with respect to the rate of U(VI) desorption provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel-size fraction (2 to 8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
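The additivity model's central operation, a linear mass-fraction-weighted sum of properties measured per grain-size fraction, can be sketched as follows. The fractions and rates in the example are illustrative values, not the study's measurements:

```python
def additive_property(mass_fractions, fraction_properties):
    """Additivity-model sketch: predict a field-scale reaction property
    (e.g. a desorption rate) as the linear, mass-fraction-weighted sum of
    the property measured in the lab for each grain-size fraction.

    mass_fractions must sum to 1 (the whole sediment).
    """
    assert abs(sum(mass_fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(f * p for f, p in zip(mass_fractions, fraction_properties))
```

Note the study's caveat: this direct sum worked for desorption *rates* but not for the rate *constants*, which required an approximate form instead.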

  15. A 2000-hour cyclic endurance test of a laboratory model multipropellant resistojet

    NASA Technical Reports Server (NTRS)

    Morren, W. Earl; Sovey, James S.

    1987-01-01

    The technological readiness of a long-life multipropellant resistojet for space station auxiliary propulsion is demonstrated. A laboratory model resistojet made from grain-stabilized platinum served as a test bed to evaluate the design characteristics, fabrication methods, and operating strategies for an engineering model multipropellant resistojet developed under contract by the Rocketdyne Division of Rockwell International and Technion Incorporated. The laboratory model thruster was subjected to a 2000-hr, 2400-thermal-cycle endurance test using carbon dioxide propellant. Maximum thruster temperatures were approximately 1400 C. The post-test analyses of the laboratory model thruster included an investigation of component microstructures. Significant observations from the laboratory model thruster are discussed as they relate to the design of the engineering model thruster.

  16. LASSIE: simulating large-scale models of biochemical systems on GPUs.

    PubMed

    Tangherloni, Andrea; Nobile, Marco S; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo

    2017-05-10

Mathematical modeling and in silico analysis are widely acknowledged as complementary tools to biological laboratory methods for achieving a thorough understanding of the emergent behaviors of cellular processes in both physiological and perturbed conditions. However, the simulation of large-scale models, consisting of hundreds or thousands of reactions and molecular species, can rapidly overtake the capabilities of Central Processing Units (CPUs). The purpose of this work is to exploit alternative high-performance computing solutions, such as Graphics Processing Units (GPUs), to allow the investigation of these models at reduced computational costs. LASSIE is a "black-box" GPU-accelerated deterministic simulator, specifically designed for large-scale models and not requiring any expertise in mathematical modeling, simulation algorithms or GPU programming. Given a reaction-based model of a cellular process, LASSIE automatically generates the corresponding system of Ordinary Differential Equations (ODEs), assuming mass-action kinetics. The numerical solution of the ODEs is obtained by automatically switching between the Runge-Kutta-Fehlberg method in the absence of stiffness and the Backward Differentiation Formulae of first order in the presence of stiffness. The computational performance of LASSIE is assessed using a set of randomly generated synthetic reaction-based models of increasing size, ranging from 64 to 8192 reactions and species, and compared to a CPU implementation of the LSODA numerical integration algorithm. LASSIE adopts a novel fine-grained parallelization strategy to distribute across the GPU cores all the calculations required to solve the system of ODEs. By virtue of this implementation, LASSIE achieves up to 92× speed-up with respect to LSODA, therefore reducing the running time from approximately 1 month down to 8 h to simulate models consisting of, for instance, four thousand reactions and species. Notably, thanks to its smaller memory footprint, LASSIE
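The mass-action ODE generation that LASSIE performs automatically can be illustrated with a plain-Python (CPU, not GPU) sketch. The data layout here is hypothetical and is not LASSIE's actual interface:

```python
def mass_action_rhs(reactions, rates):
    """Build the ODE right-hand side for a reaction-based model under
    mass-action kinetics. Each reaction is (reactant_indices,
    product_indices); a species index repeated in the reactant list
    encodes its stoichiometry (e.g. [0, 0] means 2*A).
    """
    def rhs(y):
        dydt = [0.0] * len(y)
        for (reactants, products), k in zip(reactions, rates):
            flux = k                     # mass-action flux: k * prod(y_i)
            for i in reactants:
                flux *= y[i]
            for i in reactants:          # consume reactants
                dydt[i] -= flux
            for i in products:           # produce products
                dydt[i] += flux
        return dydt
    return rhs
```

For a single reaction A -> B with rate constant 2, the derivative at state [A] = 1, [B] = 0 is (-2, +2); a stiffness-aware integrator (RKF vs. first-order BDF, as in LASSIE) would then advance this system in time.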

  17. Clinical laboratory as an economic model for business performance analysis.

    PubMed

    Buljanović, Vikica; Patajac, Hrvoje; Petrovecki, Mladen

    2011-08-15

To perform a SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve the business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. The impact of possible threats and weaknesses on the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of the SWOT analysis results. The operating profit, as a measure of profitability of the clinical laboratory, was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and an economic sensitivity analysis was made using changes in the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in revenues and expenses on the business operations of the clinical laboratory. Results of the simulation models showed that the operating profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Also, the operating profit could be increased to €535 804 if laboratory strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operating profit would decrease by €384 465. The operating profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operating profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor the business operations of any clinical laboratory and improve its financial situation by implementing changes in the next fiscal
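The profit and loss account underlying the simulations reduces to simple arithmetic. This sketch, with placeholder figures rather than the hospital's actual accounts, shows how scenario shifts in revenue and expenses propagate to operating profit:

```python
def operating_profit(total_revenue, total_expenses):
    """Profit and loss account: operating profit = revenue - expenses."""
    return total_revenue - total_expenses

def scenario(base_revenue, base_expenses, d_revenue=0.0, d_expenses=0.0):
    """Sensitivity sketch: shift the inputs to mimic SWOT scenarios
    (threats cut revenue or raise expenses; opportunities do the
    opposite) and recompute operating profit. All figures are
    illustrative placeholders.
    """
    return operating_profit(base_revenue + d_revenue,
                            base_expenses + d_expenses)
```

A "threats" scenario with lower revenue and higher expenses shrinks the baseline profit, mirroring the abstract's drop from €470 723 toward €21 542.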

  18. A three-dimensional stratigraphic model for aggrading submarine channels based on laboratory experiments, numerical modeling, and sediment cores

    NASA Astrophysics Data System (ADS)

    Limaye, A. B.; Komatsu, Y.; Suzuki, K.; Paola, C.

    2017-12-01

Turbidity currents deliver clastic sediment from continental margins to the deep ocean, and are the main driver of landscape and stratigraphic evolution in many low-relief, submarine environments. The sedimentary architecture of turbidites, including the spatial organization of coarse and fine sediments, is closely related to the aggradation, scour, and lateral shifting of channels. Seismic stratigraphy indicates that submarine meandering channels often aggrade rapidly relative to lateral shifting, and develop channel sand bodies with high vertical connectivity. In comparison, the stratigraphic architecture developed by submarine braided channels is relatively uncertain. We present a new stratigraphic model for submarine braided channels that integrates predictions from laboratory experiments and flow modeling with constraints from sediment cores. In the laboratory experiments, a saline density current developed subaqueous channels in plastic sediment. The channels aggraded to form a deposit with a vertical scale of approximately five channel depths. We collected topography data during aggradation to (1) establish relative stratigraphic age, and (2) estimate the sorting patterns of a hypothetical grain size distribution. We applied a numerical flow model to each topographic surface and used modeled flow depth as a proxy for relative grain size. We then conditioned the resulting stratigraphic model to observed grain size distributions using sediment core data from the Nankai Trough, offshore Japan. Using this stratigraphic model, we establish new, quantitative predictions for the two- and three-dimensional connectivity of coarse sediment as a function of fine-sediment fraction. Using this case study as an example, we will highlight outstanding challenges in relating the evolution of low-relief landscapes to the stratigraphic record.

  19. Training Systems Modelers through the Development of a Multi-scale Chagas Disease Risk Model

    NASA Astrophysics Data System (ADS)

    Hanley, J.; Stevens-Goodnight, S.; Kulkarni, S.; Bustamante, D.; Fytilis, N.; Goff, P.; Monroy, C.; Morrissey, L. A.; Orantes, L.; Stevens, L.; Dorn, P.; Lucero, D.; Rios, J.; Rizzo, D. M.

    2012-12-01

    The goal of our NSF-sponsored Division of Behavioral and Cognitive Sciences grant is to create a multidisciplinary approach to develop spatially explicit models of vector-borne disease risk using Chagas disease as our model. Chagas disease is a parasitic disease endemic to Latin America that afflicts an estimated 10 million people. The causative agent (Trypanosoma cruzi) is most commonly transmitted to humans by blood feeding triatomine insect vectors. Our objectives are: (1) advance knowledge on the multiple interacting factors affecting the transmission of Chagas disease, and (2) provide next generation genomic and spatial analysis tools applicable to the study of other vector-borne diseases worldwide. This funding is a collaborative effort between the RSENR (UVM), the School of Engineering (UVM), the Department of Biology (UVM), the Department of Biological Sciences (Loyola (New Orleans)) and the Laboratory of Applied Entomology and Parasitology (Universidad de San Carlos). Throughout this five-year study, multi-educational groups (i.e., high school, undergraduate, graduate, and postdoctoral) will be trained in systems modeling. This systems approach challenges students to incorporate environmental, social, and economic as well as technical aspects and enables modelers to simulate and visualize topics that would either be too expensive, complex or difficult to study directly (Yasar and Landau 2003). We launch this research by developing a set of multi-scale, epidemiological models of Chagas disease risk using STELLA® software v.9.1.3 (isee systems, inc., Lebanon, NH). We use this particular system dynamics software as a starting point because of its simple graphical user interface (e.g., behavior-over-time graphs, stock/flow diagrams, and causal loops). To date, high school and undergraduate students have created a set of multi-scale (i.e., homestead, village, and regional) disease models. Modeling the system at multiple spatial scales forces recognition that

  20. Application of lab derived kinetic biodegradation parameters at the field scale

    NASA Astrophysics Data System (ADS)

    Schirmer, M.; Barker, J. F.; Butler, B. J.; Frind, E. O.

    2003-04-01

    Estimating the intrinsic remediation potential of an aquifer typically requires the accurate assessment of the biodegradation kinetics, the level of available electron acceptors and the flow field. Zero- and first-order degradation rates derived at the laboratory scale generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are typically not considered. On the other hand, field estimated zero- and first-order rates are often not suitable to forecast plume development because they may be an oversimplification of the processes at the field scale and ignore several key processes, phenomena and characteristics of the aquifer. This study uses the numerical model BIO3D to link the laboratory and field scale by applying laboratory derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at Canadian Forces Base (CFB) Borden. All additional input parameters were derived from laboratory and field measurements or taken from the literature. The simulated results match the experimental results reasonably well without having to calibrate the model. An extensive sensitivity analysis was performed to estimate the influence of the most uncertain input parameters and to define the key controlling factors at the field scale. It is shown that the most uncertain input parameters have only a minor influence on the simulation results. Furthermore it is shown that the flow field, the amount of electron acceptor (oxygen) available and the Monod kinetic parameters have a significant influence on the simulated results. Under the field conditions modelled and the assumptions made for the simulations, it can be concluded that laboratory derived Monod kinetic parameters can adequately describe field scale degradation processes, if all controlling factors are incorporated in the field scale modelling that are not necessarily observed at the lab scale. In this way
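The Monod kinetics at the heart of the laboratory-derived parameters have a compact form worth making explicit, since the model reduces to first-order behavior at low substrate concentration and zero-order behavior at high concentration, which is one reason single zero- or first-order lab rates can mislead at field scale. This sketch uses illustrative rate constants, not the CFB Borden values:

```python
def monod_rate(concentration, mu_max, k_s):
    """Monod degradation-rate sketch: rate = mu_max * C / (K_s + C).

    For C << K_s the rate ~ (mu_max / K_s) * C (first-order);
    for C >> K_s the rate ~ mu_max (zero-order).
    mu_max and K_s here are hypothetical, illustration-only parameters.
    """
    return mu_max * concentration / (k_s + concentration)
```

A full field-scale model like BIO3D couples such kinetics to electron-acceptor (oxygen) availability, microbial growth, and the flow field.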

  1. A laboratory-calibrated model of coho salmon growth with utility for ecological analyses

    USGS Publications Warehouse

    Manhard, Christopher V.; Som, Nicholas A.; Perry, Russell W.; Plumb, John M.

    2018-01-01

    We conducted a meta-analysis of laboratory- and hatchery-based growth data to estimate broadly applicable parameters of mass- and temperature-dependent growth of juvenile coho salmon (Oncorhynchus kisutch). Following studies of other salmonid species, we incorporated the Ratkowsky growth model into an allometric model and fit this model to growth observations from eight studies spanning ten different populations. To account for changes in growth patterns with food availability, we reparameterized the Ratkowsky model to scale several of its parameters relative to ration. The resulting model was robust across a wide range of ration allocations and experimental conditions, accounting for 99% of the variation in final body mass. We fit this model to growth data from coho salmon inhabiting tributaries and constructed ponds in the Klamath Basin by estimating habitat-specific indices of food availability. The model produced evidence that constructed ponds provided higher food availability than natural tributaries. Because of their simplicity (only mass and temperature are required as inputs) and robustness, ration-varying Ratkowsky models have utility as an ecological tool for capturing growth in freshwater fish populations.
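The Ratkowsky temperature response embedded in the allometric growth model has a compact closed form. This sketch uses illustrative coefficients, not the parameters fitted to the coho salmon data:

```python
import math

def ratkowsky_growth(temp_c, t_min, t_max, b, c):
    """Ratkowsky-model sketch for temperature-dependent growth rate:

        sqrt(r) = b * (T - Tmin) * (1 - exp(c * (T - Tmax)))

    so r is the square of the right-hand side. The rate is zero at the
    lower (Tmin) and upper (Tmax) temperature limits and peaks between
    them. b, c, Tmin, Tmax below are hypothetical illustration values.
    """
    root = b * (temp_c - t_min) * (1.0 - math.exp(c * (temp_c - t_max)))
    return root * root
```

In the ration-varying form used in the study, several of these parameters are additionally scaled by food availability, which is how habitat-specific rations were estimated.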

  2. Computational fluid dynamics modeling of laboratory flames and an industrial flare.

    PubMed

    Singh, Kanwar Devesh; Gangadharan, Preeti; Chen, Daniel H; Lou, Helen H; Li, Xianchang; Richmond, Peyton

    2014-11-01

    A computational fluid dynamics (CFD) methodology for simulating the combustion process has been validated with experimental results. Three different experimental setups were used to validate the CFD model: an industrial-scale flare setup and two lab-scale flames. The CFD study also involved three different fuels: C3H6/CH/Air/N2, C2H4/O2/Ar and CH4/Air. In the first setup, flare efficiency data from the Texas Commission on Environmental Quality (TCEQ) 2010 field tests were used to validate the CFD model. In the second setup, a McKenna burner with flat flames was simulated. Temperature and mass fractions of important species were compared with the experimental data. Finally, results of an experimental study done at Sandia National Laboratories to generate a lifted jet flame were used for the purpose of validation. The reduced 50-species mechanism LU 1.1, the realizable k-epsilon turbulence model, and the EDC turbulence-chemistry interaction model were used for this work. Flare efficiency, axial profiles of temperature, and mass fractions of various intermediate species obtained in the simulation were compared with experimental data, and good agreement between the profiles was clearly observed. In particular, the simulation match with the TCEQ 2010 flare tests has been significantly improved (within 5% of the data) compared to the results reported by Singh et al. in 2012. Validation of the speciated flat flame data supports the view that flares can be a primary source of formaldehyde emission.

  3. On temporal stochastic modeling of precipitation, nesting models across scales

    NASA Astrophysics Data System (ADS)

    Paschalis, Athanasios; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2014-01-01

    We analyze the performance of composite stochastic models of temporal precipitation which can satisfactorily reproduce precipitation properties across a wide range of temporal scales. The rationale is that a combination of stochastic precipitation models, each most appropriate for a specific limited temporal scale, leads to better overall performance across a wider range of scales than single models alone. We investigate different model combinations. For the coarse (daily) scale these are models based on alternating renewal processes, Markov chains, and Poisson cluster models, which are then combined with a microcanonical multiplicative random cascade model to disaggregate precipitation to finer (minute) scales. The composite models were tested on data at four sites in different climates. The results show that model combinations improve the performance in key statistics, such as probability distributions of precipitation depth, autocorrelation structure, intermittency, and reproduction of extremes, compared with single models. At the same time they remain reasonably parsimonious. No model combination was found to outperform the others at all sites and for all statistics; however, we provide insight into the capabilities of specific model combinations. The results for the four different climates are similar, which suggests a degree of generality and wider applicability of the approach.
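The nesting idea above can be sketched in a few lines: a two-state Markov chain generates daily wet/dry occurrence and depth, and a microcanonical multiplicative random cascade then disaggregates each wet day into sub-daily intervals while conserving the daily total exactly. Transition probabilities, the exponential depth law, and the beta-distributed cascade weights are illustrative assumptions, not the paper's calibrated choices.

```python
import random

def daily_series(n_days, p_wd=0.3, p_ww=0.6, mean_depth=8.0, rng=random):
    """Two-state Markov chain for daily occurrence; exponential wet-day depths (mm)."""
    depths, wet = [], False
    for _ in range(n_days):
        wet = rng.random() < (p_ww if wet else p_wd)
        depths.append(rng.expovariate(1.0 / mean_depth) if wet else 0.0)
    return depths

def cascade(depth, n_levels, rng=random):
    """Microcanonical cascade: each split into (w, 1-w) conserves mass exactly."""
    parts = [depth]
    for _ in range(n_levels):
        nxt = []
        for p in parts:
            w = rng.betavariate(0.7, 0.7)  # U-shaped weights produce intermittency
            nxt += [p * w, p * (1.0 - w)]
        parts = nxt
    return parts

rng = random.Random(1)
days = daily_series(30, rng=rng)
wettest = max(days)
fine = cascade(wettest, n_levels=5, rng=rng)   # 2**5 = 32 sub-daily intervals
assert len(fine) == 32
assert abs(sum(fine) - wettest) < 1e-9         # daily total conserved by construction
```

Exact mass conservation at every split is what distinguishes the microcanonical cascade from canonical cascades, which conserve mass only on average.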

  4. Geometry Laboratory (GEOLAB) surface modeling and grid generation technology and services

    NASA Technical Reports Server (NTRS)

    Kerr, Patricia A.; Smith, Robert E.; Posenau, Mary-Anne K.

    1995-01-01

    The facilities and services of the GEOmetry LABoratory (GEOLAB) at the NASA Langley Research Center are described. Included in this description are the laboratory functions, the surface modeling and grid generation technologies used in the laboratory, and examples of the tasks performed in the laboratory.

  5. Acoustic Treatment Design Scaling Methods. Volume 3; Test Plans, Hardware, Results, and Evaluation

    NASA Technical Reports Server (NTRS)

    Yu, J.; Kwan, H. W.; Echternach, D. K.; Kraft, R. E.; Syed, A. A.

    1999-01-01

    The ability to design, build, and test miniaturized acoustic treatment panels on scale-model fan rigs representative of the full-scale engine provides not only cost savings but also an opportunity to optimize the treatment by allowing tests of different designs. To be able to use scale model treatment as a full-scale design tool, it is necessary that the designer be able to reliably translate the scale model design and performance to an equivalent full-scale design. The primary objective of the study presented in this volume of the final report was to conduct laboratory tests to evaluate liner acoustic properties and validate advanced treatment impedance models. These laboratory tests include DC flow resistance measurements, normal incidence impedance measurements, DC flow and impedance measurements in the presence of grazing flow, and in-duct liner attenuation as well as modal measurements. Test panels were fabricated at three different scale factors (i.e., full-scale, half-scale, and one-fifth scale) to support laboratory acoustic testing. The panel configurations include single-degree-of-freedom (SDOF) perforated sandwich panels, SDOF linear (wire mesh) liners, and double-degree-of-freedom (DDOF) linear acoustic panels.

  6. A carbon dioxide stripping model for mammalian cell culture in manufacturing scale bioreactors.

    PubMed

    Xing, Zizhuo; Lewis, Amanda M; Borys, Michael C; Li, Zheng Jian

    2017-06-01

    Control of carbon dioxide within the optimum range is important in mammalian bioprocesses at the manufacturing scale in order to ensure robust cell growth, high protein yields, and consistent quality attributes. The majority of bioprocess development work is done in laboratory bioreactors, in which carbon dioxide levels are more easily controlled. Some challenges in carbon dioxide control can present themselves when cell culture processes are scaled up, because carbon dioxide accumulation is a common feature due to the longer gas residence time of mammalian cell culture in large scale bioreactors. A carbon dioxide stripping model can be used to better understand and optimize parameters that are critical to cell culture processes at the manufacturing scale. The prevailing carbon dioxide stripping models in the literature depend on mass transfer coefficients and were applicable to cell culture processes with low cell density or at the stationary/cell death phase. However, it was reported that gas bubbles are saturated with carbon dioxide before leaving the culture, which makes carbon dioxide stripping no longer dependent on a mass transfer coefficient in new-generation cell culture processes characterized by a longer exponential growth phase, higher peak viable cell densities, and higher specific production rates. Here, we present a new carbon dioxide stripping model for manufacturing scale bioreactors, which is independent of the carbon dioxide mass transfer coefficient but takes into account the gas residence time and gas CO2 saturation time. The model was verified by CHO cell culture processes with different peak viable cell densities (7 to 12 × 10⁶ cells mL⁻¹) for two products in 5,000-L and 25,000-L bioreactors. The model was also applied to a next generation cell culture process to optimize cell culture conditions and reduce carbon dioxide levels at manufacturing scale. The model provides a useful tool to understand and better control cell culture carbon dioxide
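The saturated-bubble argument above can be sketched as follows: once exiting bubbles equilibrate with the broth before leaving, the stripping rate is set by the gas flow rate and the gas-phase equilibrium CO2 concentration, and the mass transfer coefficient drops out. This is a generic illustration of that limiting behavior, not the paper's model; the residence/saturation-time handling and all numbers are assumptions.

```python
# CO2 stripping in the bubble-saturation limit. Units: gas flow in L/h,
# dissolved CO2 tension in mmHg, times in seconds. Values are illustrative.

def co2_stripping_mmol_h(gas_flow_L_h, pCO2_mmHg, residence_time_s,
                         saturation_time_s, T_K=310.0):
    """CO2 removal rate; a kLa-style coefficient drops out once bubbles saturate."""
    R = 62.3637  # ideal-gas constant, L*mmHg/(mol*K)
    c_gas_mM = 1000.0 * pCO2_mmHg / (R * T_K)  # mmol CO2 per L of equilibrated exit gas
    # Fraction of saturation reached before the bubble exits the broth:
    frac = min(1.0, residence_time_s / saturation_time_s)
    return gas_flow_L_h * c_gas_mM * frac

# Once bubbles are already saturated, a longer residence time (taller tank)
# no longer increases stripping -- only more gas flow does.
r_tall = co2_stripping_mmol_h(500.0, 60.0, residence_time_s=20.0, saturation_time_s=5.0)
r_short = co2_stripping_mmol_h(500.0, 60.0, residence_time_s=10.0, saturation_time_s=5.0)
assert r_tall == r_short
```

In this regime, sparging rate (and CO2 tension itself) controls removal, which is consistent with the abstract's emphasis on gas residence time and saturation time rather than kLa.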

  7. Modelling landscape evolution at the flume scale

    NASA Astrophysics Data System (ADS)

    Cheraghi, Mohsen; Rinaldo, Andrea; Sander, Graham C.; Barry, D. Andrew

    2017-04-01

    The ability of a large-scale Landscape Evolution Model (LEM) to simulate the soil surface morphological evolution as observed in a laboratory flume (1-m × 2-m surface area) was investigated. The soil surface was initially smooth, and was subjected to heterogeneous rainfall in an experiment designed to avoid rill formation. Low-cohesive fine sand was placed in the flume while the slope and relief height were 5% and 20 cm, respectively. Non-uniform rainfall with an average intensity of 85 mm h⁻¹ and a standard deviation of 26% was applied to the sediment surface for 16 h. We hypothesized that the complex overland water flow can be represented by a drainage discharge network, which was calculated via the micro-morphology and the rainfall distribution. Measurements included high resolution Digital Elevation Models that were captured at intervals during the experiment. The calibrated LEM captured the migration of the main flow path from the low precipitation area into the high precipitation area. Furthermore, both model and experiment showed a steep transition zone in soil elevation that moved upstream during the experiment. We conclude that the LEM is applicable under non-uniform rainfall and in the absence of surface incisions, thereby extending its applicability beyond that shown in previous applications. Keywords: Numerical simulation, Flume experiment, Particle Swarm Optimization, Sediment transport, River network evolution model.

  8. The NASA Inductrack Model Rocket Launcher at the Lawrence Livermore National Laboratory

    NASA Technical Reports Server (NTRS)

    Tung, L. S.; Post, R. F.; Cook, E.; Martinez-Frias, J.

    2000-01-01

    The Inductrack magnetic levitation system, developed at the Lawrence Livermore National Laboratory, is being studied for its possible use for launching rockets. Under NASA sponsorship, a small model system is being constructed at the Laboratory to pursue key technical aspects of this proposed application. The Inductrack is a passive magnetic levitation system employing special arrays of high-field permanent magnets (Halbach arrays) on the levitating carrier, moving above a "track" consisting of a close-packed array of shorted coils interleaved with special drive coils. Halbach arrays produce a strong spatially periodic magnetic field on the front surface of the arrays, while canceling the field on their back surface. Relative motion between the Halbach arrays and the track coils induces currents in those coils. These currents levitate the carrier cart by interacting with the horizontal component of the magnetic field. Pulsed currents in the drive coils, synchronized with the motion of the carrier, interact with the vertical component of the magnetic field to provide acceleration forces. Motional stability, including resistance to both vertical and lateral aerodynamic forces, is provided by having Halbach arrays that interact with both the upper and the lower sides of the track coils. In its completed form the model system that is under construction will have a track approximately 100 meters in length along which the carrier cart will be propelled up to peak speeds of Mach 0.4 to 0.5 before being decelerated. Preliminary studies of the parameters of a full-scale system have also been made. These studies address the problems of scale-up, including means to simplify the track construction and to reduce the cost of the pulsed-power systems needed for propulsion.

  9. Application of simultaneous saccharification and fermentation (SSF) from viscosity reducing of raw sweet potato for bioethanol production at laboratory, pilot and industrial scales.

    PubMed

    Zhang, Liang; Zhao, Hai; Gan, Mingzhe; Jin, Yanlin; Gao, Xiaofeng; Chen, Qian; Guan, Jiafa; Wang, Zhongyan

    2011-03-01

    The aim of this work was to research a bioprocess for bioethanol production from raw sweet potato by Saccharomyces cerevisiae at laboratory, pilot and industrial scales. The fermentation mode, inoculum size and pressure from different gases were determined in the laboratory. The maximum ethanol concentration, average ethanol productivity rate and yield of ethanol after fermentation at laboratory scale (128.51 g/L, 4.76 g/L/h and 91.4%) were satisfactory, with a small decrease at pilot scale (109.06 g/L, 4.89 g/L/h and 91.24%) and industrial scale (97.94 g/L, 4.19 g/L/h and 91.27%). When scaled up, the viscosity caused resistance to fermentation; 1.56 AUG/g (sweet potato mash) of xylanase decreased the viscosity from approximately 30,000 to 500 cP. Overall, sweet potato is an attractive feedstock for bioethanol production from both economic and environmental standpoints. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Derivation of a GIS-based watershed-scale conceptual model for the St. Jones River Delaware from habitat-scale conceptual models.

    PubMed

    Reiter, Michael A; Saintil, Max; Yang, Ziming; Pokrajac, Dragoljub

    2009-08-01

    Conceptual modeling is a useful tool for identifying pathways between drivers, stressors, Valued Ecosystem Components (VECs), and services that are central to understanding how an ecosystem operates. The St. Jones River watershed, DE, is a complex ecosystem, and because management decisions must include ecological, social, political, and economic considerations, a conceptual model is a good tool for accommodating the full range of inputs. In 2002, a Four-Component, Level 1 conceptual model was formed for the key habitats of the St. Jones River watershed, but since the habitat level of resolution is too fine for some important watershed-scale issues, we developed a functional watershed-scale model using the existing narrowed habitat-scale models. The narrowed habitat-scale conceptual models and associated matrices developed by Reiter et al. (2006) were combined with data from the 2002 land use/land cover (LULC) GIS-based maps of Kent County in Delaware to assemble a diagrammatic and numerical watershed-scale conceptual model incorporating the calculated weight of each habitat within the watershed. The numerical component of the assembled watershed model was subsequently subjected to the same Monte Carlo narrowing methodology used for the habitat versions to refine the diagrammatic component of the watershed-scale model. The narrowed numerical representation of the model was used to generate forecasts for changes in the parameters "Agriculture" and "Forest", showing that land use changes in these habitats propagated through the results of the model by the weighting factor. Also, the narrowed watershed-scale conceptual model identified some key parameters upon which to focus research attention and management decisions at the watershed scale. The forecast and simulation results seemed to indicate that the watershed-scale conceptual model does lead to different conclusions than the habitat-scale conceptual models for some issues at the larger watershed scale.

  11. Laboratory constraints on models of earthquake recurrence

    USGS Publications Warehouse

    Beeler, Nicholas M.; Tullis, Terry; Junger, Jenni; Kilgore, Brian D.; Goldsby, David L.

    2014-01-01

    In this study, rock friction ‘stick-slip’ experiments are used to develop constraints on models of earthquake recurrence. Constant-rate loading of bare rock surfaces in high quality experiments produces stick-slip recurrence that is periodic at least to second order. When the loading rate is varied, recurrence is approximately inversely proportional to loading rate. These laboratory events initiate due to a slip rate-dependent process that also determines the size of the stress drop [Dieterich, 1979; Ruina, 1983] and as a consequence, stress drop varies weakly but systematically with loading rate [e.g., Gu and Wong, 1991; Karner and Marone, 2000; McLaskey et al., 2012]. This is especially evident in experiments where the loading rate is changed by orders of magnitude, as is thought to be the loading condition of naturally occurring, small repeating earthquakes driven by afterslip, or low-frequency earthquakes loaded by episodic slip. As follows from the previous studies referred to above, experimentally observed stress drops are well described by a logarithmic dependence on recurrence interval that can be cast as a non-linear slip-predictable model. The fault’s rate dependence of strength is the key physical parameter. Additionally, even at constant loading rate the most reproducible laboratory recurrence is not exactly periodic, unlike existing friction recurrence models. We present example laboratory catalogs that document the variance and show that in large catalogs, even at constant loading rate, stress drop and recurrence co-vary systematically. The origin of this covariance is largely consistent with variability of the dependence of fault strength on slip rate. Laboratory catalogs show aspects of both slip and time predictability and successive stress drops are strongly correlated indicating a ‘memory’ of prior slip history that extends over at least one recurrence cycle.
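The logarithmic stress-drop/recurrence relation described above can be written as a one-line model, Δτ = a + b·ln(t_r/t_ref), where b reflects the fault's rate dependence of strength. The coefficients below are hypothetical placeholders, not values fitted to the laboratory catalogs.

```python
import math

# Non-linear slip-predictable model sketched from the abstract: stress drop
# grows logarithmically with recurrence interval. a, b, t_ref are illustrative.

def stress_drop_MPa(t_recur_s, a=3.0, b=0.25, t_ref_s=1.0):
    """Stress drop (MPa) as a logarithmic function of recurrence interval (s)."""
    return a + b * math.log(t_recur_s / t_ref_s)

# Slower loading -> longer recurrence -> systematically larger stress drop,
# the weak but systematic loading-rate dependence noted in the abstract.
assert stress_drop_MPa(1000.0) > stress_drop_MPa(10.0)
```

Because each order-of-magnitude change in recurrence adds only b·ln(10) to the stress drop, the dependence is weak yet measurable across the orders-of-magnitude rate changes the experiments impose.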

  12. Improving catchment scale water quality modelling with continuous high resolution monitoring of metals in runoff

    NASA Astrophysics Data System (ADS)

    Saari, Markus; Rossi, Pekka; Blomberg von der Geest, Kalle; Mäkinen, Ari; Postila, Heini; Marttila, Hannu

    2017-04-01

    High metal concentrations in natural waters is one of the key environmental and health problems globally. Continuous in-situ analysis of metals from runoff water is technically challenging but essential for the better understanding of processes which lead to pollutant transport. Currently, typical analytical methods for monitoring elements in liquids are off-line laboratory methods such as ICP-OES (Inductively Coupled Plasma Optical Emission Spectroscopy) and ICP-MS (ICP combined with a mass spectrometer). Disadvantage of the both techniques is time consuming sample collection, preparation, and off-line analysis at laboratory conditions. Thus use of these techniques lack possibility for real-time monitoring of element transport. We combined a novel high resolution on-line metal concentration monitoring with catchment scale physical hydrological modelling in Mustijoki river in Southern Finland in order to study dynamics of processes and form a predictive warning system for leaching of metals. A novel on-line measurement technique based on micro plasma emission spectroscopy (MPES) is tested for on-line detection of selected elements (e.g. Na, Mg, Al, K, Ca, Fe, Ni, Cu, Cd and Pb) in runoff waters. The preliminary results indicate that MPES can sufficiently detect and monitor metal concentrations from river water. Water and Soil Assessment Tool (SWAT) catchment scale model was further calibrated with high resolution metal concentration data. We show that by combining high resolution monitoring and catchment scale physical based modelling, further process studies and creation of early warning systems, for example to optimization of drinking water uptake from rivers, can be achieved.

  13. Laboratory evaluation of a walleye (Sander vitreus) bioenergetics model

    USGS Publications Warehouse

    Madenjian, C.P.; Wang, C.; O'Brien, T. P.; Holuszko, M.J.; Ogilvie, L.M.; Stickel, R.G.

    2010-01-01

    Walleye (Sander vitreus) is an important game fish throughout much of North America. We evaluated the performance of the Wisconsin bioenergetics model for walleye in the laboratory. Walleyes were fed rainbow smelt (Osmerus mordax) in four laboratory tanks during a 126-day experiment. Based on a statistical comparison of bioenergetics model predictions of monthly consumption with the observed monthly consumption, we concluded that the bioenergetics model significantly underestimated food consumption by walleye in the laboratory. The degree of underestimation appeared to depend on the feeding rate. For the tank with the lowest feeding rate (1.4% of walleye body weight per day), the agreement between the bioenergetics model prediction of cumulative consumption over the entire 126-day experiment and the observed cumulative consumption was remarkably close, as the prediction was within 0.1% of the observed cumulative consumption. Feeding rates in the other three tanks ranged from 1.6% to 1.7% of walleye body weight per day, and bioenergetics model predictions of cumulative consumption over the 126-day experiment ranged between 11 and 15% less than the observed cumulative consumption. © 2008 Springer Science+Business Media B.V.

  14. Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh

    2014-01-01

    This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis is conducted, and a sub-scale flexible wind tunnel model based on the full-scale wing's undeformed jig-shape is developed. Optimization of the flexible wind tunnel model's undeflected twist along the span, or pre-twist or wash-out, is then conducted for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.

  15. Laboratory Scale Experiments and Numerical Modeling of Cosolvent flushing of NAPL Mixtures in Saturated Porous Media

    NASA Astrophysics Data System (ADS)

    Agaoglu, B.; Scheytt, T. J.; Copty, N. K.

    2011-12-01

    This study examines the mechanistic processes governing multiphase flow of a water-cosolvent-NAPL system in saturated porous media. Laboratory batch and column flushing experiments were conducted to determine the equilibrium properties of pure NAPL and synthetically prepared NAPL mixtures as well as NAPL recovery mechanisms for different water-ethanol contents. The effect of contact time was investigated by considering different steady and intermittent flow velocities. A modified version of the multiphase flow simulator UTCHEM was used to compare the multiphase model simulations with the column experiment results. The effects of employing different grid geometries (1D, 2D, 3D), heterogeneity, and different initial NAPL saturation configurations were also examined in the model. It is shown that the change in velocity affects the mass transfer rate between phases as well as the ultimate NAPL recovery percentage. The experiments with slow flow rate flushing of pure NAPL and the 3D UTCHEM simulations gave similar effluent concentrations and NAPL cumulative recoveries. The results were less consistent for fast non-equilibrium flow conditions. The dissolution process from the NAPL mixture into the water-ethanol flushing solutions was found to be more complex than the dissolution expressions incorporated in the numerical model. The dissolution rate of individual organic compounds (namely toluene and benzene) from a mixture NAPL into the ethanol-water flushing solution is found not to correlate with their equilibrium solubility values. The implications of this controlled experimental and modeling study on field cosolvent remediation applications are discussed.

  16. A numerical cloud model for the support of laboratory experimentation

    NASA Technical Reports Server (NTRS)

    Hagen, D. E.

    1979-01-01

    A numerical cloud model is presented which can describe the evolution of a cloud starting from moist aerosol-laden air through the diffusional growth regime. The model is designed for the direct support of cloud chamber laboratory experimentation, i.e., experiment preparation, real-time control and data analysis. In the model the thermodynamics is uncoupled from the droplet growth processes. Analytic solutions for the cloud droplet growth equations are developed which can be applied in most laboratory situations. The model is applied to a variety of representative experiments.
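With thermodynamics uncoupled from droplet growth, the standard diffusional growth law r·dr/dt = G·S admits the closed-form solution r(t) = sqrt(r0² + 2·G·S·t) at constant supersaturation. This is a generic sketch of such an analytic solution, not the specific formulation in the cloud model; the growth parameter and supersaturation values are illustrative.

```python
import math

# Analytic diffusional droplet growth at constant supersaturation S.
# G lumps vapor diffusivity and latent-heat effects; values are illustrative.

def droplet_radius_um(r0_um, t_s, G_um2_per_s=50.0, S=0.005):
    """Radius (micrometers) after t_s seconds from the closed form r(t)."""
    return math.sqrt(r0_um ** 2 + 2.0 * G_um2_per_s * S * t_s)

# A 1 um droplet after one minute in a 0.5% supersaturated chamber:
r = droplet_radius_um(1.0, 60.0)
assert r > 1.0
```

Closed forms like this are what make the model fast enough for the real-time control and data analysis roles described in the abstract: no time-stepping of the growth equation is needed.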

  17. FINAL REPORT: Mechanistically-Based Field Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Brian D.

    2013-11-04

    Biogeochemical reactive transport processes in the subsurface environment are important to many contemporary environmental issues of significance to DOE. Quantification of risks and impacts associated with environmental management options, and design of remediation systems where needed, require that we have at our disposal reliable predictive tools (usually in the form of numerical simulation models). However, it is well known that even the most sophisticated reactive transport models available today have poor predictive power, particularly when applied at the field scale. Although the lack of predictive ability is associated in part with our inability to characterize the subsurface and limitations in computational power, significant advances have been made in both of these areas in recent decades and can be expected to continue. In this research, we examined the upscaling (pore to Darcy and Darcy to field) of the problem of bioremediation via biofilms in porous media. The principal idea was to start with a conceptual description of the bioremediation process at the pore scale, and apply upscaling methods to formally develop the appropriate upscaled model at the so-called Darcy scale. The purpose was to determine (1) what forms the upscaled models would take, and (2) how one might parameterize such upscaled models for applications to bioremediation in the field. We were able to effectively upscale the bioremediation process to explain how the pore-scale phenomena were linked to the field scale. The end product of this research was a set of upscaled models that could be used to help predict field-scale bioremediation. These models were mechanistic, in the sense that they directly incorporated pore-scale information, but upscaled so that only the essential features of the process were needed to predict the effective parameters that appear in the model. In this way, a direct link between the microscale and the field scale was made, but the upscaling

  18. Formation of stimulated electromagnetic emission of the ionosphere: laboratory modeling

    NASA Astrophysics Data System (ADS)

    Starodubtsev, Mikhail; Kostrov, Alexander; Nazarov, Vladimir

    Laboratory modeling of some physical processes involved in the generation of stimulated electromagnetic emission (SEE) is presented. SEE is a noise component observed in the spectrum of the pump electromagnetic wave reflected from the heated ionosphere during ionospheric heating experiments. In our laboratory experiments, the main attention has been paid to the experimental investigation of the generation of the most pronounced SEE components connected to the small-scale filamentation of the heated area of the ionosphere. It has been shown that the main physical mechanism of thermal magnetoplasma nonlinearity in this frequency range is thermal self-channeling of the Langmuir waves. This mechanism has the minimal threshold and should appear when both laboratory and ionospheric plasmas are heated by high-power radio waves. Thermal self-channeling arises because Langmuir waves are trapped in areas of depleted plasma density. As a result, the wave amplitude significantly increases in these depleted regions, which leads to local plasma heating and, consequently, to a deepening of the plasma density depletion through plasma thermo-diffusion. As a result, narrow, magnetic-field-aligned plasma density irregularities are formed in the magnetoplasma. Self-channeled Langmuir waves exhibit well-pronounced spectral satellites shifted by 1-2 MHz from the fundamental frequency (about 700 MHz under our experimental conditions). It has been found that there are two main mechanisms of satellite formation. The first (dynamic) mechanism has been observed during the formation of a small-scale irregularity, while its longitudinal size increases rapidly. During this process, the spectrum of the trapped wave is characterized by one low-frequency satellite. The physical mechanism leading to the formation of this satellite is the Doppler shift of the frequency of Langmuir waves trapped in the non-stationary plasma irregularity. The second mechanism

  19. Fabrication Method for Laboratory-Scale High-Performance Membrane Electrode Assemblies for Fuel Cells.

    PubMed

    Sassin, Megan B; Garsany, Yannick; Gould, Benjamin D; Swider-Lyons, Karen E

    2017-01-03

    Custom catalyst-coated membranes (CCMs) and membrane electrode assemblies (MEAs) are necessary for the evaluation of advanced electrocatalysts, gas diffusion media (GDM), ionomers, polymer electrolyte membranes (PEMs), and electrode structures designed for use in next-generation fuel cells, electrolyzers, or flow batteries. This Feature provides a reliable and reproducible fabrication protocol for laboratory-scale (10 cm²) fuel cells based on ultrasonic spray deposition of a standard Pt/carbon electrocatalyst directly onto a perfluorosulfonic acid PEM.

  20. Laboratory-Scale Internal Wave Apparatus for Studying Copepod Behavior

    NASA Astrophysics Data System (ADS)

    Jung, S.; Webster, D. R.; Haas, K. A.; Yen, J.

    2016-02-01

    Internal waves are ubiquitous features in coastal marine environments and have been observed to mediate vertical distributions of zooplankton in situ. Internal waves create fine-scale hydrodynamic cues that copepods and other zooplankton are known to sense, such as fluid density gradients and velocity gradients (quantified as shear deformation rate). The role of copepod behavior in response to cues associated with internal waves is largely unknown. The objective is to provide insight into the bio-physical interaction and the role of biological versus physical forcing in mediating organism distributions. We constructed a laboratory-scale internal wave apparatus to facilitate fine-scale observations of copepod behavior in flows that replicate in situ conditions of internal waves in two-layer stratification. Two cases were chosen, with density jumps of 1 and 1.5 sigma-t units. Analytical analysis of the two-layer system provided guidance to the target forcing frequency needed to generate a standing internal wave with a single dominant frequency of oscillation. Flow visualization and signal processing of the interface location were used to quantify the wave characteristics. The results show a close match to the target wave parameters. Marine copepod (mixed population of Acartia tonsa, Temora longicornis, and Eurytemora affinis) behavior assays were conducted for three different physical arrangements: (1) no density stratification, (2) stagnant two-layer density stratification, and (3) two-layer density stratification with internal wave motion. Digitized trajectories of copepod swimming behavior indicate that in the control (case 1) the animals showed no preferential motion in terms of direction. In the stagnant density jump treatment (case 2) copepods preferentially moved horizontally, parallel to the density interface. In the internal wave treatment (case 3) copepods demonstrated orbital trajectories near the density interface.
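The analytical guidance step above can be sketched with the standard two-layer interfacial dispersion relation: for a standing wave of mode n in a tank of length L, the wavenumber is k = nπ/L and ω² = gk(ρ₂−ρ₁)/(ρ₁coth(kh₁)+ρ₂coth(kh₂)). This is the textbook relation, not necessarily the authors' exact analysis; tank dimensions and densities below are illustrative stand-ins.

```python
import math

# Natural frequency of a standing interfacial wave in a two-layer fluid.
# h1, h2: layer depths (m); rho1 < rho2: layer densities (kg/m^3).

def interfacial_freq_hz(n, L, h1, h2, rho1, rho2, g=9.81):
    """Frequency (Hz) of standing mode n in a tank of length L (m)."""
    k = n * math.pi / L
    omega_sq = g * k * (rho2 - rho1) / (
        rho1 / math.tanh(k * h1) + rho2 / math.tanh(k * h2))
    return math.sqrt(omega_sq) / (2.0 * math.pi)

# A 1.5 sigma-t jump (e.g., 1020.0 vs 1021.5 kg/m^3) in a hypothetical 1 m tank:
f1 = interfacial_freq_hz(n=1, L=1.0, h1=0.15, h2=0.15, rho1=1020.0, rho2=1021.5)
assert 0.0 < f1 < 1.0  # internal waves are far slower than surface waves
```

Forcing the tank at this frequency excites the targeted standing mode with a single dominant frequency, as described in the abstract.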

  1. Clinical laboratory as an economic model for business performance analysis

    PubMed Central

    Buljanović, Vikica; Patajac, Hrvoje; Petrovečki, Mladen

    2011-01-01

    Aim To perform a SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve the business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. Methods The impact of possible threats and weaknesses on the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of the SWOT analysis results. The operating profit, as a measure of profitability of the clinical laboratory, was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and an economic sensitivity analysis was made by using changes in the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in the revenues and expenses on the business operations of the clinical laboratory. Results Results of the simulation models showed that the operating profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Also, the operating gain could be increased to €535 804 if laboratory strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operating profit would decrease by €384 465. Conclusion The operating profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operating profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor business operations of any clinical laboratory and improve its financial situation by
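The profit-and-loss framing above reduces to operating profit = total revenue − total expenses, with a one-at-a-time sensitivity analysis perturbing each key input. A minimal sketch of that calculation follows; the figures are illustrative placeholders, not the hospital's data.

```python
# Operating profit and a simple one-at-a-time sensitivity analysis,
# sketching the profit-and-loss-account approach. Figures are illustrative.

def operating_profit(revenue, expenses):
    """Operating profit = total revenue minus total expenses."""
    return revenue - expenses

def sensitivity(revenue, expenses, delta=0.05):
    """Profit change for a +5% swing in each input, holding the other fixed."""
    base = operating_profit(revenue, expenses)
    return {
        "revenue+5%": operating_profit(revenue * (1 + delta), expenses) - base,
        "expenses+5%": operating_profit(revenue, expenses * (1 + delta)) - base,
    }

s = sensitivity(revenue=2_000_000.0, expenses=1_500_000.0)
assert s["revenue+5%"] > 0 > s["expenses+5%"]
```

Running the same perturbation for each line item of the profit and loss account identifies which revenues or expenses dominate the laboratory's operating result.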

  2. Craftsmen in the Wood Model Shop at the Lewis Flight Propulsion Laboratory

    NASA Image and Video Library

    1953-01-21

    Craftsmen work in the wood model shop at the National Advisory Committee for Aeronautics (NACA) Lewis Flight Propulsion Laboratory. The Fabrication Division created almost all of the equipment and models used at the laboratory. The Fabrication Shop building contained a number of specialized shops in the 1940s and 1950s. These included a Machine Shop, Sheet Metal Shop, Wood Model and Pattern Shop, Instrument Shop, Thermocouple Shop, Heat Treating Shop, Metallurgical Laboratory, and Fabrication Office. The Wood Model and Pattern Shop created everything from control panels and cabinets to aircraft models and molds for sheet metal work.

  3. A non-isotropic multiple-scale turbulence model

    NASA Technical Reports Server (NTRS)

    Chen, C. P.

    1990-01-01

    A newly developed non-isotropic multiple-scale turbulence model (MS/ASM) is described for complex flow calculations. This model focuses on the direct modeling of Reynolds stresses and utilizes split-spectrum concepts for modeling multiple-scale effects in turbulence. Validation studies on free shear flows, rotating flows, and recirculating flows show that the current model performs significantly better than the single-scale k-epsilon model. The present model is relatively inexpensive in terms of CPU time, which makes it suitable for broad engineering flow applications.

  4. Device Scale Modeling of Solvent Absorption using MFIX-TFM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carney, Janine E.; Finn, Justin R.

    Recent climate change is largely attributed to greenhouse gases (e.g., carbon dioxide, methane), and fossil fuels account for a large majority of global CO2 emissions. That said, fossil fuels will continue to play a significant role in the generation of power for the foreseeable future. The extent to which CO2 is emitted needs to be reduced; however, carbon capture and sequestration are also necessary actions to tackle climate change. Different approaches exist for CO2 capture, including post-combustion and pre-combustion technologies, oxy-fuel combustion, and chemical looping combustion. The focus of this effort is on post-combustion solvent-absorption technology. To apply CO2 technologies at commercial scale, the availability, maturity, and potential for scalability of the technology need to be considered. Solvent absorption is a proven technology, but not at the scale needed by a typical power plant. The scale-up, scale-down, and design of laboratory and commercial packed-bed reactors depend heavily on specific knowledge of two-phase pressure drop, liquid holdup, wetting efficiency, and mass-transfer efficiency as a function of operating conditions. Simple scaling rules often fail to provide proper designs. Conventional reactor design modeling approaches generally characterize complex non-ideal flow and mixing patterns using simplified and/or mechanistic flow assumptions. While there are varying levels of complexity within these approaches, none of these models resolve the local velocity fields. Consequently, they are unable to account for important design factors such as flow maldistribution and channeling from a fundamental perspective. Ideally, design would be aided by the development of predictive models based on a truer representation of the physical and chemical processes that occur at different scales. Computational fluid dynamic (CFD) models are based on multidimensional flow equations with first-principle foundations. CFD

  5. Transitioning glass-ceramic scintillators for diagnostic x-ray imaging from the laboratory to commercial scale

    NASA Astrophysics Data System (ADS)

    Beckert, M. Brooke; Gallego, Sabrina; Elder, Eric; Nadler, Jason

    2016-10-01

    This study sought to mitigate risk in transitioning newly developed glass-ceramic scintillator technology from a laboratory concept to a commercial product by identifying the most significant hurdles to increased scale. These included the selection of cost-effective raw material sources, investigation of the process parameters with the most significant impact on performance, and synthesis steps that could see the greatest benefit from the participation of an industry partner that specializes in glass or optical component manufacturing. Efforts focused on enhancing the performance of glass-ceramic nanocomposite scintillators developed specifically for medical imaging via composition and process modifications that ensured efficient capture of incident X-ray energy and emission of scintillation light. The use of cost-effective raw materials and existing manufacturing methods demonstrated proof of concept for economically viable alternatives to existing benchmark materials, as well as possible disruptive applications afforded by novel geometries and comparatively lower cost per volume. The authors now seek the expertise of industry to effectively navigate the transition from laboratory demonstrations to pilot-scale production and testing, to convince industry of the viability and usefulness of composite-based scintillators.

  6. Drift-Scale THC Seepage Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C.R. Bryan

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration'' (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, ''Model Validation for the DS THC Seepage Model,'' of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for ''Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms'' (NRC 2003 [DIRS 163274]) as being applicable to this report; however, at variance with the TWP, Acceptance Criterion 5 has also been determined to be applicable and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPs not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report and have been added to the list of excluded FEPs in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, ''Models''. This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts.
The DST THC submodel uses a drift-scale

  7. Streamlining workflow and automation to accelerate laboratory scale protein production.

    PubMed

    Konczal, Jennifer; Gray, Christopher H

    2017-05-01

    Protein production facilities are often required to produce diverse arrays of proteins for demanding methodologies, including crystallography, NMR, ITC, and other reagent-intensive techniques. It is common for these teams to find themselves a bottleneck in the pipeline of ambitious projects. This pressure to deliver has resulted in the evolution of many novel methods to increase capacity and throughput at all stages of the pipeline for the generation of recombinant proteins. This review aims to describe current and emerging options to accelerate the success of protein production in Escherichia coli. We emphasize technologies that have been evaluated and implemented in our laboratory, including innovative molecular biology and expression vectors, small-scale expression screening strategies, and the automation of parallel and multidimensional chromatography. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Useful measures and models for analytical quality management in medical laboratories.

    PubMed

    Westgard, James O

    2016-02-01

    The 2014 Milan Conference "Defining analytical performance goals 15 years after the Stockholm Conference" initiated a new discussion of issues concerning goals for precision, trueness or bias, total analytical error (TAE), and measurement uncertainty (MU). Goal-setting models are critical for analytical quality management, along with error models, quality-assessment models, quality-planning models, as well as comprehensive models for quality management systems. There are also critical underlying issues, such as an emphasis on MU to the possible exclusion of TAE and a corresponding preference for separate precision and bias goals instead of a combined total error goal. This opinion recommends careful consideration of the differences in the concepts of accuracy and traceability and the appropriateness of different measures, particularly TAE as a measure of accuracy and MU as a measure of traceability. TAE is essential to manage quality within a medical laboratory and MU and trueness are essential to achieve comparability of results across laboratories. With this perspective, laboratory scientists can better understand the many measures and models needed for analytical quality management and assess their usefulness for practical applications in medical laboratories.
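    The total analytical error measure discussed above is commonly computed as TAE = |bias| + z·CV. A minimal sketch, assuming the common one-sided 95% limit (z = 1.65); the input values are illustrative, not from the paper:

```python
def total_analytical_error(bias_pct, cv_pct, z=1.65):
    """TAE (%) = |bias| + z * CV; z = 1.65 gives a one-sided 95% limit."""
    return abs(bias_pct) + z * cv_pct

# e.g. a method with 2% bias and 3% imprecision:
tae = total_analytical_error(2.0, 3.0)
```

    Comparing `tae` against an allowable-total-error goal is the quality-planning step the opinion piece argues TAE is needed for within a single laboratory.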

  9. [Mathematical model of technical equipment of a clinical-diagnostic laboratory].

    PubMed

    Bukin, S I; Busygin, D V; Tilevich, M E

    1990-01-01

    The paper is concerned with the problems of the technical equipment of standard clinico-diagnostic laboratories (CDL) in this country. The authors suggest a mathematical model that may minimize expenditures for laboratory studies. The model enables the following problems to be solved: to issue scientifically based recommendations for the technical equipment of CDLs; to validate the medico-technical requirements for newly devised items; to select the optimum types of uniform items; to define optimal technical decisions at the design stage; to determine the laboratory assistant's labour productivity and the cost of individual investigations; and to compute the medical laboratory engineering requirements for treatment and prophylactic institutions of this country.

  10. Modeling of Army Research Laboratory EMP simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miletta, J.R.; Chase, R.J.; Luu, B.B.

    1993-12-01

    Models are required that permit the estimation of emitted field signatures from EMP simulators to design the simulator antenna structure, to establish the usable test volumes, and to estimate human exposure risk. This paper presents the capabilities and limitations of a variety of EMP simulator models useful to the Army's EMP survivability programs. Comparisons among frequency and time-domain models are provided for two powerful US Army Research Laboratory EMP simulators: AESOP (Army EMP Simulator Operations) and VEMPS II (Vertical EMP Simulator II).

  11. Global scale groundwater flow model

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc

    2013-04-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet the current generation of global-scale hydrological models does not include a groundwater flow component, which is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minute resolution. The aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, in press). We force the groundwater model with output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated calculated groundwater heads and depths against available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global-scale groundwater model using existing information, and to estimate water table depths with acceptable accuracy in many parts of the world.
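    As a toy illustration of the steady-state groundwater-flow computation described above (not the actual MODFLOW formulation), one can solve the 1-D steady equation T·d²h/dx² + R = 0 by finite differences with fixed-head boundaries; all parameter values here are illustrative assumptions:

```python
import numpy as np

def steady_heads(n=5, T=10.0, R=0.001, dx=1000.0, h_left=10.0, h_right=8.0):
    """Solve T*d2h/dx2 + R = 0 on n interior nodes with fixed-head boundaries.

    T: transmissivity, R: recharge, dx: node spacing (toy units).
    """
    A = np.zeros((n, n))
    b = np.full(n, -R * dx**2 / T)  # discretized recharge source term
    for i in range(n):
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    b[0] -= h_left       # fold the boundary heads into the right-hand side
    b[-1] -= h_right
    return np.linalg.solve(A, b)
```

    With zero recharge the solution reduces to a straight water table between the two boundary heads, which is a quick sanity check on the discretization; MODFLOW solves the same balance in 3-D with heterogeneous properties.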

  12. A review of laboratory and numerical modelling in volcanology

    NASA Astrophysics Data System (ADS)

    Kavanagh, Janine L.; Engwell, Samantha L.; Martin, Simon A.

    2018-04-01

    Modelling has been used in the study of volcanic systems for more than 100 years, building upon the approach first applied by Sir James Hall in 1815. Informed by observations of volcanological phenomena in nature, including eye-witness accounts of eruptions, geophysical or geodetic monitoring of active volcanoes, and geological analysis of ancient deposits, laboratory and numerical models have been used to describe and quantify volcanic and magmatic processes that span orders of magnitudes of time and space. We review the use of laboratory and numerical modelling in volcanological research, focussing on sub-surface and eruptive processes including the accretion and evolution of magma chambers, the propagation of sheet intrusions, the development of volcanic flows (lava flows, pyroclastic density currents, and lahars), volcanic plume formation, and ash dispersal. When first introduced into volcanology, laboratory experiments and numerical simulations marked a transition in approach from broadly qualitative to increasingly quantitative research. These methods are now widely used in volcanology to describe the physical and chemical behaviours that govern volcanic and magmatic systems. Creating simplified models of highly dynamical systems enables volcanologists to simulate and potentially predict the nature and impact of future eruptions. These tools have provided significant insights into many aspects of the volcanic plumbing system and eruptive processes. The largest scientific advances in volcanology have come from a multidisciplinary approach, applying developments in diverse fields such as engineering and computer science to study magmatic and volcanic phenomena. A global effort in the integration of laboratory and numerical volcano modelling is now required to tackle key problems in volcanology and points towards the importance of benchmarking exercises and the need for protocols to be developed so that models are routinely tested against real world data.

  13. Laboratory study of sonic booms and their scaling laws. [ballistic range simulation

    NASA Technical Reports Server (NTRS)

    Toong, T. Y.

    1974-01-01

    This program undertook to seek a basic understanding of non-linear effects associated with caustics through laboratory simulation experiments of sonic booms in a ballistic range and a coordinated theoretical study of scaling laws. Two cases of superbooms, or enhanced sonic booms at caustics, have been studied. The first case, referred to as acceleration superbooms, is related to the enhanced sonic booms generated during the acceleration maneuvers of supersonic aircraft. The second case, referred to as refraction superbooms, involves the superbooms that are generated as a result of atmospheric refraction. Important theoretical and experimental results are briefly reported.

  14. Scaling Laws of Discrete-Fracture-Network Models

    NASA Astrophysics Data System (ADS)

    Philippe, D.; Olivier, B.; Caroline, D.; Jean-Raynald, D.

    2006-12-01

    The statistical description of fracture networks through scale still remains a concern for geologists, considering the complexity of fracture networks. A challenging task of the last 20 years of study has been to find a solid and verifiable rationale for the trivial observation that fractures exist everywhere and at all sizes. The emergence of fractal models and power-law distributions quantifies this fact, and postulates in some ways that small-scale fractures are genetically linked to their larger-scale relatives. But the validation of these scaling concepts still remains an issue, considering the unreachable amount of information that would be necessary with regard to the complexity of natural fracture networks. Beyond the theoretical interest, a scaling law is a basic and necessary ingredient of Discrete-Fracture-Network (DFN) models that are used for many environmental and industrial applications (groundwater resources, the mining industry, assessment of the safety of deep waste disposal sites, etc.). Indeed, such a function is necessary to assemble scattered data, taken at different scales, into a unified scaling model, and to interpolate fracture densities between observations. In this study, we discuss some important issues related to the scaling laws of DFN models: we first describe a complete theoretical and mathematical framework that accounts for both the fracture-size distribution and fracture clustering through scales (fractal dimension); we then review the scaling laws that have been obtained and discuss the ability of fracture datasets to really constrain the parameters of the DFN model; and finally we discuss the limits of scaling models.
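    One ingredient of such DFN scaling laws, the power-law fracture-length distribution, can be sketched by inverse-transform sampling of a truncated power law. The exponent and cutoffs below are illustrative assumptions, not values from the study, and the fractal clustering dimension is deliberately left out:

```python
import random

def sample_lengths(n, a=2.7, lmin=1.0, lmax=100.0, seed=0):
    """Draw n fracture lengths from p(l) ~ l**(-a) truncated to [lmin, lmax].

    In DFN scaling laws this length density is combined with a fractal
    spatial dimension D for clustering (not modeled here). Uses the
    inverse-CDF of the truncated power law.
    """
    rng = random.Random(seed)
    inv = 1.0 - a
    c1, c2 = lmin**inv, lmax**inv
    return [(c1 + rng.random() * (c2 - c1)) ** (1.0 / inv) for _ in range(n)]
```

    A generator like this is the starting point for building a synthetic DFN realization whose length statistics honor an assumed scaling law.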

  15. FLARE: A New User Facility for Laboratory Studies of Multiple-Scale Physics of Magnetic Reconnection and Related Phenomena in Heliophysics and Astrophysics

    NASA Astrophysics Data System (ADS)

    Ji, H.; Bhattacharjee, A.; Goodman, A.; Prager, S.; Daughton, W.; Cutler, R.; Fox, W.; Hoffmann, F.; Kalish, M.; Kozub, T.; Jara-Almonte, J.; Myers, C.; Ren, Y.; Sloboda, P.; Yamada, M.; Yoo, J.; Bale, S. D.; Carter, T.; Dorfman, S.; Drake, J.; Egedal, J.; Sarff, J.; Wallace, J.

    2017-10-01

    The FLARE device (Facility for Laboratory Reconnection Experiments; flare.pppl.gov) is a new laboratory experiment under construction at Princeton, with first plasmas expected in the fall of 2017, based on the design of the Magnetic Reconnection Experiment (MRX; mrx.pppl.gov) with much extended parameter ranges. Its main objective is to provide an experimental platform for studies of magnetic reconnection and related phenomena in the multiple X-line regimes directly relevant to space, solar, astrophysical, and fusion plasmas. The main diagnostic is an extensive set of magnetic probe arrays, simultaneously covering multiple scales from local electron scales (~2 mm), to intermediate ion scales (~10 cm), and global MHD scales (~1 m). Specific example space physics topics which can be studied on FLARE will be discussed.

  16. Heat transfer analysis of a lab scale solar receiver using the discrete ordinates model

    NASA Astrophysics Data System (ADS)

    Dordevich, Milorad C. W.

    This thesis documents the development, implementation, and simulation outcomes of the Discrete Ordinates Radiation Model in ANSYS FLUENT, used to simulate the radiative heat transfer occurring in the San Diego State University lab-scale Small Particle Heat Exchange Receiver. In tandem, it also documents how well the Discrete Ordinates Radiation Model results compare with those from the in-house developed Monte Carlo Ray Trace Method in a number of simplified geometries. A secondary goal of this study was the inclusion of new physics, specifically buoyancy. Implementation of an additional Monte Carlo Ray Trace Method software package known as VEGAS, which was specifically developed to model lab-scale solar simulators and provide directional, flux, and beam-spread information for the aperture boundary condition, was also a goal of this study. Upon establishment of the model, test cases were run to understand its predictive capabilities. It was shown that agreement within 15% was obtained against laboratory measurements made in the San Diego State University Combustion and Solar Energy Laboratory, with the metrics of comparison being the thermal efficiency and the outlet, wall, and aperture quartz temperatures. Parametric testing additionally showed that the thermal efficiency of the system was very dependent on the mass flow rate and particle loading. It was also shown that the orientation of the small particle heat exchange receiver was important in attaining optimal efficiency, because buoyancy-induced effects could not be neglected. The analyses presented in this work were all performed on the lab-scale small particle heat exchange receiver, which is 0.38 m in diameter by 0.51 m tall and operated with an input irradiation flux of 3 kWth and a nominal mass flow rate of 2 g/s with a suspended particle mass loading of 2 g/m3.
Finally, based on acumen gained during the implementation and development
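    The thermal-efficiency metric used above for model-experiment comparison can be sketched with a standard sensible-heating definition, eta = m_dot·cp·(T_out − T_in)/Q_in. This formula, the cp value, and the temperatures are assumptions for illustration; only Q_in ≈ 3 kWth and m_dot ≈ 2 g/s are taken from the abstract:

```python
def thermal_efficiency(m_dot_kg_s, t_in_K, t_out_K, q_in_W, cp=1005.0):
    """Sensible-heating efficiency: useful enthalpy gain over input power.

    cp ~ 1005 J/(kg K) for air is an assumption; inputs are illustrative.
    """
    return m_dot_kg_s * cp * (t_out_K - t_in_K) / q_in_W

# abstract values: 3 kWth input, 2 g/s flow; temperatures are hypothetical
eta = thermal_efficiency(0.002, 300.0, 1300.0, 3000.0)
```

    The same efficiency, evaluated from simulated and measured outlet temperatures, is the kind of metric the 15% model-experiment agreement refers to.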

  17. Direct geoelectrical evidence of mass transfer at the laboratory scale

    NASA Astrophysics Data System (ADS)

    Swanson, Ryan D.; Singha, Kamini; Day-Lewis, Frederick D.; Binley, Andrew; Keating, Kristina; Haggerty, Roy

    2012-10-01

    Previous field-scale experimental data and numerical modeling suggest that the dual-domain mass transfer (DDMT) of electrolytic tracers has an observable geoelectrical signature. Here we present controlled laboratory experiments confirming the electrical signature of DDMT and demonstrate the use of time-lapse electrical measurements in conjunction with concentration measurements to estimate the parameters controlling DDMT, i.e., the mobile and immobile porosity and rate at which solute exchanges between mobile and immobile domains. We conducted column tracer tests on unconsolidated quartz sand and a material with a high secondary porosity: the zeolite clinoptilolite. During NaCl tracer tests we collected nearly colocated bulk direct-current electrical conductivity (σb) and fluid conductivity (σf) measurements. Our results for the zeolite show (1) extensive tailing and (2) a hysteretic relation between σf and σb, thus providing evidence of mass transfer not observed within the quartz sand. To identify best-fit parameters and evaluate parameter sensitivity, we performed over 2700 simulations of σf, varying the immobile and mobile domain and mass transfer rate. We emphasized the fit to late-time tailing by minimizing the Box-Cox power transformed root-mean square error between the observed and simulated σf. Low-field proton nuclear magnetic resonance (NMR) measurements provide an independent quantification of the volumes of the mobile and immobile domains. The best-fit parameters based on σf match the NMR measurements of the immobile and mobile domain porosities and provide the first direct electrical evidence for DDMT. Our results underscore the potential of using electrical measurements for DDMT parameter inference.
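    The late-time-weighted misfit described above (root-mean-square error between Box-Cox power-transformed observed and simulated σf) can be sketched as follows; the transform parameter λ is an assumed value, not one reported by the authors:

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox power transform; lam = 0 reduces to the natural log."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def boxcox_rmse(observed, simulated, lam=0.3):
    """RMSE between Box-Cox-transformed series; up-weights late-time tails."""
    d = boxcox(observed, lam) - boxcox(simulated, lam)
    return float(np.sqrt(np.mean(d**2)))
```

    Minimizing `boxcox_rmse` over the mobile porosity, immobile porosity, and mass-transfer rate, as in the ~2700-run parameter sweep, yields the best-fit DDMT parameters compared against the NMR porosities.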

  18. Direct geoelectrical evidence of mass transfer at the laboratory scale

    USGS Publications Warehouse

    Swanson, Ryan D.; Singha, Kamini; Day-Lewis, Frederick D.; Binley, Andrew; Keating, Kristina; Haggerty, Roy

    2012-01-01

    Previous field-scale experimental data and numerical modeling suggest that the dual-domain mass transfer (DDMT) of electrolytic tracers has an observable geoelectrical signature. Here we present controlled laboratory experiments confirming the electrical signature of DDMT and demonstrate the use of time-lapse electrical measurements in conjunction with concentration measurements to estimate the parameters controlling DDMT, i.e., the mobile and immobile porosity and rate at which solute exchanges between mobile and immobile domains. We conducted column tracer tests on unconsolidated quartz sand and a material with a high secondary porosity: the zeolite clinoptilolite. During NaCl tracer tests we collected nearly colocated bulk direct-current electrical conductivity (σb) and fluid conductivity (σf) measurements. Our results for the zeolite show (1) extensive tailing and (2) a hysteretic relation between σf and σb, thus providing evidence of mass transfer not observed within the quartz sand. To identify best-fit parameters and evaluate parameter sensitivity, we performed over 2700 simulations of σf, varying the immobile and mobile domain and mass transfer rate. We emphasized the fit to late-time tailing by minimizing the Box-Cox power transformed root-mean square error between the observed and simulated σf. Low-field proton nuclear magnetic resonance (NMR) measurements provide an independent quantification of the volumes of the mobile and immobile domains. The best-fit parameters based on σf match the NMR measurements of the immobile and mobile domain porosities and provide the first direct electrical evidence for DDMT. Our results underscore the potential of using electrical measurements for DDMT parameter inference.

  19. Collisionless coupling of a high-β expansion to an ambient, magnetized plasma. I. Rayleigh model and scaling

    NASA Astrophysics Data System (ADS)

    Bonde, Jeffrey

    2018-04-01

    The dynamics of a magnetized, expanding plasma with a high ratio of kinetic energy density to ambient magnetic field energy density, or β, are examined by adapting a model of gaseous bubbles expanding in liquids as developed by Lord Rayleigh. New features include scale magnitudes and evolution of the electric fields in the system. The collisionless coupling between the expanding and ambient plasma due to these fields is described as well as the relevant scaling relations. Several different responses of the ambient plasma to the expansion are identified in this model, and for most laboratory experiments, ambient ions should be pulled inward, against the expansion due to the dominance of the electrostatic field.

  20. Laboratory-Measured and Property-Transfer Modeled Saturated Hydraulic Conductivity of Snake River Plain Aquifer Sediments at the Idaho National Laboratory, Idaho

    USGS Publications Warehouse

    Perkins, Kim S.

    2008-01-01

    Sediments are believed to comprise as much as 50 percent of the Snake River Plain aquifer thickness in some locations within the Idaho National Laboratory. However, the hydraulic properties of these deep sediments have not been well characterized, and they are not represented explicitly in the current conceptual model of subregional-scale ground-water flow. The purpose of this study is to evaluate the nature of the sedimentary material within the aquifer and to test the applicability of a site-specific property-transfer model developed for the sedimentary interbeds of the unsaturated zone. Saturated hydraulic conductivity (Ksat) was measured for 10 core samples from sedimentary interbeds within the Snake River Plain aquifer and also estimated using the property-transfer model. The property-transfer model for predicting Ksat was previously developed using a multiple linear-regression technique with bulk physical-property measurements (bulk density [ρbulk], the median particle diameter, and the uniformity coefficient) as the explanatory variables. The model systematically underestimates Ksat, typically by about a factor of 10, which likely is due to higher bulk-density values for the aquifer samples compared to the samples from the unsaturated zone upon which the model was developed. Linear relations between the logarithm of Ksat and ρbulk also were explored for comparison.
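    A hedged sketch of the property-transfer idea above: a multiple linear regression predicting log10(Ksat) from bulk density, median particle diameter, and uniformity coefficient. The functional form matches the description in the abstract, but the coefficients are hypothetical placeholders, not the fitted values from the report:

```python
import math

def predict_ksat(bulk_density, d50_mm, uniformity,
                 b0=2.0, b1=-3.0, b2=1.5, b3=-0.1):
    """Predict Ksat from a log-linear property-transfer model.

    b0..b3 are hypothetical regression coefficients; the sign of b1
    encodes that denser packing lowers conductivity.
    """
    log10_ksat = (b0 + b1 * bulk_density
                  + b2 * math.log10(d50_mm) + b3 * uniformity)
    return 10.0 ** log10_ksat
```

    With a negative bulk-density coefficient, denser aquifer samples yield lower predictions, consistent with the abstract's explanation of the model's systematic underestimation.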

  1. Indomethacin nanocrystals prepared by different laboratory scale methods: effect on crystalline form and dissolution behavior

    NASA Astrophysics Data System (ADS)

    Martena, Valentina; Censi, Roberta; Hoti, Ela; Malaj, Ledjan; Di Martino, Piera

    2012-12-01

    The objective of this study was to select very simple and well-known laboratory-scale methods able to reduce the particle size of indomethacin down to the nanometric scale. The effect on the crystalline form and the dissolution behavior of the different samples was deliberately evaluated in the absence of any surfactants as stabilizers. Nanocrystals of indomethacin (IDM; native crystals are in the γ form) were obtained by three laboratory-scale methods: A (Batch A: crystallization by solvent evaporation in a nano-spray dryer), B (Batches B-15 and B-30: wet milling and lyophilization), and C (Batches C-20-N and C-40-N: cryo-milling in the presence of liquid nitrogen). Nanocrystals obtained by method A (Batch A) crystallized into a mixture of the α and γ polymorphic forms. IDM obtained by the two other methods remained in the γ form, and differing degrees of crystallinity decrease were observed, with a more considerable decrease for IDM milled for 40 min in the presence of liquid nitrogen. The intrinsic dissolution rate (IDR) revealed a higher dissolution rate for Batches A and C-40-N, due to the higher IDR of the α form relative to the γ form for Batch A, and the lower crystallinity degree of both Batches A and C-40-N. These factors, as well as the decrease in particle size, influenced the IDM dissolution rate from the particle samples. Modifications in the solid physical state that may occur using different particle-size reduction treatments have to be taken into consideration during the scale-up and industrial development of new solid dosage forms.

  2. Laboratory Scale Electrodeposition. Practice and Applications.

    ERIC Educational Resources Information Center

    Bruno, Thomas J.

    1986-01-01

    Discusses some aspects of electrodeposition and electroplating. Emphasizes the materials, techniques, and safety precautions necessary to make electrodeposition work reliably in the chemistry laboratory. Describes some problem-solving applications of this process. (TW)

  3. Stresses, deformation, and seismic events on scaled experimental faults with heterogeneous fault segments and comparison to numerical modeling

    NASA Astrophysics Data System (ADS)

    Buijze, Loes; Guo, Yanhuang; Niemeijer, André R.; Ma, Shengli; Spiers, Christopher J.

    2017-04-01

    Faults in the upper crust cross-cut many different lithologies, which cause the composition of the fault rocks to vary. Each fault rock segment may have specific mechanical properties, e.g. there may be stronger and weaker segments, and segments prone to unstable slip or to creep. This leads to heterogeneous deformation and stresses along such faults, and to a heterogeneous distribution of seismic events. We address the influence of fault variability on stress, deformation, and seismicity using a combination of scaled laboratory fault experiments and numerical modeling. A vertical fault was created along the diagonal of a 30 x 20 x 5 cm block of PMMA, along which a 2 mm thick gouge layer was deposited. Gouge materials with different characteristics were used to create various segments along the fault: quartz (average strength, stable sliding), kaolinite (weak, stable sliding), and gypsum (average strength, unstable sliding). The sample assembly was placed in a horizontal biaxial deformation apparatus, and shear displacement was enforced along the vertical fault. Multiple observations were made: 1) acoustic emissions were continuously recorded at 3 MHz to observe the occurrence of stick-slips (micro-seismicity); 2) photo-elastic effects (indicative of the differential stress) were recorded in the transparent set of PMMA wall rocks using a high-speed camera; and 3) particle tracking was conducted on a speckle-painted set of PMMA wall rocks to study the deformation in the wall rocks flanking the fault. All three observation methods show how the heterogeneous fault gouge exerts a strong control on fault behavior. For example, around a strong, unstable gypsum segment flanked by two weaker kaolinite segments, strong stress concentrations developed near the edges of the strong segment, and most of the acoustic emissions were located at the edges of this strong segment.
The measurements of differential stress, strain and acoustic emissions provide a strong means

  4. Downscaling modelling system for multi-scale air quality forecasting

    NASA Astrophysics Data System (ADS)

    Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.

    2010-09-01

    Urban modelling for real meteorological situations, in general, considers only a small part of the urban area in a micro-meteorological model, while urban heterogeneities outside the modelling domain affect micro-scale processes. Therefore, it is important to build a chain of models of different scales, with higher-resolution models nested into larger-scale, lower-resolution models. Usually, the up-scaled city- or meso-scale models consider parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider the detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. First, it is the Numerical Weather Prediction model (HIgh Resolution Limited Area Model) combined with an Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered; they are chosen depending on the selected scales and resolutions. For the regional scale, the urban parameterisation is based on the roughness and flux-correction approach; for the urban scale, on a building-effects parameterisation. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. the k-ε linear eddy-viscosity model, the k-ε non-linear eddy-viscosity model and the Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with corresponding mass-conserving interpolation. For the boundaries a
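The k-ε closures named in this record share a standard form for the turbulent eddy viscosity. As a hedged illustration, the sketch below is the textbook linear k-ε relation, not code from the described modelling system:

```python
# Textbook linear k-epsilon closure: the kinematic eddy viscosity is
# computed from the turbulent kinetic energy k and its dissipation rate
# epsilon. C_mu = 0.09 is the standard model constant.

C_MU = 0.09

def eddy_viscosity(k: float, epsilon: float) -> float:
    """nu_t = C_mu * k^2 / epsilon (units: m^2/s for SI inputs)."""
    return C_MU * k ** 2 / epsilon

# Example: k = 0.5 m^2/s^2, epsilon = 0.1 m^2/s^3
nu_t = eddy_viscosity(0.5, 0.1)  # -> 0.225 m^2/s
```

The non-linear eddy-viscosity and Reynolds stress closures mentioned in the abstract add anisotropic terms on top of this baseline.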

  5. Regional-Scale Salt Tectonics Modelling: Bench-Scale Validation and Extension to Field-Scale

    NASA Astrophysics Data System (ADS)

    Crook, A. J. L.; Yu, J. G.; Thornton, D. A.

    2010-05-01

    The role of salt in the evolution of the West African continental margin, and in particular its impact on hydrocarbon migration and trap formation, is an important research topic. It has attracted many researchers who have based their research on bench-scale experiments, numerical models and seismic observations. This research has shown that the evolution is very complex. For example, regional analogue bench-scale models of the Angolan margin (Fort et al., 2004) indicate a complex system with an upslope extensional domain with sealed tilted blocks, growth fault and rollover systems and extensional diapirs, and a downslope contractional domain with squeezed diapirs, polyharmonic folds and thrust faults, and late-stage folding and thrusting. Numerical models have the potential to provide additional insight into the evolution of these salt-driven passive margins. The longer-term aim is to calibrate regional-scale evolution models, and then to evaluate the effect of the depositional history on the current-day geomechanical and hydrogeologic state in potential target hydrocarbon reservoir formations adjacent to individual salt bodies. To achieve this goal, the burial and deformational history of the sediment must be modelled from initial deposition to the current-day state, while also accounting for the reaction and transport processes occurring in the margin. Accurate forward modelling is, however, complex, and necessitates advanced procedures for the prediction of fault formation and evolution, representation of the extreme deformations in the salt, and coupling of the geomechanical, fluid flow and temperature fields. The evolution of the sediment due to a combination of mechanical compaction, chemical compaction and creep relaxation must also be represented. In this paper, ongoing research on a computational approach for forward modelling complex structural evolution, with particular reference to passive margins driven by salt tectonics, is presented. 
The approach is an

  6. Identification of small-scale low and high permeability layers using single well forced-gradient tracer tests: fluorescent dye imaging and modelling at the laboratory-scale.

    PubMed

    Barns, Gareth L; Thornton, Steven F; Wilson, Ryan D

    2015-01-01

    Heterogeneity in aquifer permeability, which creates paths of varying mass flux and spatially complex contaminant plumes, can complicate the interpretation of contaminant fate and transport in groundwater. Identifying the location of high mass flux paths is critical for the reliable estimation of solute transport parameters and the design of groundwater remediation schemes. Dipole flow tracer tests (DFTTs) and push-pull tests (PPTs) are single-well forced-gradient tests which have been used at field-scale to estimate aquifer hydraulic and transport properties. In this study, the potential for PPTs and DFTTs to resolve the location of high- and low-permeability layers in granular porous media was investigated with a pseudo 2-D bench-scale aquifer model. Finite element fate and transport modelling was also undertaken to identify appropriate set-ups for in situ tests to determine the type, magnitude, location and extent of such layered permeability contrasts at the field-scale. The characteristics of flow patterns created during experiments were evaluated using fluorescent dye imaging and compared with the breakthrough behaviour of an inorganic conservative tracer. The experimental results show that tracer breakthrough during PPTs is not sensitive to minor permeability contrasts for conditions where there is no hydraulic gradient. In contrast, DFTTs are sensitive to the type and location of permeability contrasts in the host media and could potentially be used to establish the presence and location of high or low mass flux paths. Numerical modelling shows that the tracer peak breakthrough time and concentration in a DFTT are sensitive to the magnitude of the permeability contrast (defined as the permeability of the layer over the permeability of the bulk media) between values of 0.01-20. DFTTs are shown to be more sensitive to variations in the magnitude, location and size of layered permeability contrasts when a shorter central packer is used
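As background to the layered permeability contrasts discussed in this record, the directional effect of layering can be sketched with the textbook effective-permeability means (an illustration, not the study's finite element model; layer thicknesses and values below are hypothetical):

```python
# Effective permeability of a layered medium depends on flow direction:
# flow parallel to the layers sees the thickness-weighted arithmetic mean,
# while flow perpendicular to the layers sees the harmonic mean, which is
# dominated by the lowest-permeability layer.

def k_parallel(thicknesses, permeabilities):
    """Arithmetic (thickness-weighted) mean: flow along the layers."""
    total = sum(thicknesses)
    return sum(b * k for b, k in zip(thicknesses, permeabilities)) / total

def k_perpendicular(thicknesses, permeabilities):
    """Harmonic (thickness-weighted) mean: flow across the layers."""
    total = sum(thicknesses)
    return total / sum(b / k for b, k in zip(thicknesses, permeabilities))

# Two equally thick layers with a 100:1 permeability contrast:
b = [1.0, 1.0]            # m
k = [1e-12, 1e-10]        # m^2
print(k_parallel(b, k))       # ~5.05e-11, controlled by the high-k layer
print(k_perpendicular(b, k))  # ~1.98e-12, controlled by the low-k layer
```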

  7. An Analysis of Model Scale Data Transformation to Full Scale Flight Using Chevron Nozzles

    NASA Technical Reports Server (NTRS)

    Brown, Clifford; Bridges, James

    2003-01-01

    Ground-based model scale aeroacoustic data is frequently used to predict the results of flight tests while saving time and money. The value of a model scale test is therefore dependent on how well the data can be transformed to the full scale conditions. In the spring of 2000, a model scale test was conducted to prove the value of chevron nozzles as a noise reduction device for turbojet applications. The chevron nozzle reduced noise by 2 EPNdB at an engine pressure ratio of 2.3 compared to that of the standard conic nozzle. This result led to a full scale flyover test in the spring of 2001 to verify these results. The flyover test confirmed the 2 EPNdB reduction predicted by the model scale test one year earlier. However, further analysis of the data revealed that the spectra and directivity, both on an OASPL and PNL basis, do not agree in either shape or absolute level. This paper explores these differences in an effort to improve the data transformation from model scale to full scale.
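One standard step in transforming model-scale jet noise spectra to full scale is holding the Strouhal number fixed, so that frequencies shift by the nozzle diameter ratio. The sketch below illustrates that step with assumed diameters and velocities; it is not necessarily the exact transformation procedure analyzed in this paper:

```python
# Frequency mapping at constant Strouhal number St = f * D / U.
# At matched jet velocity, a model-scale frequency maps to full scale
# by the ratio of nozzle diameters.

def full_scale_frequency(f_model: float, d_model: float,
                         d_full: float, u_model: float, u_full: float) -> float:
    """Map a model-scale frequency to full scale, holding St fixed."""
    strouhal = f_model * d_model / u_model
    return strouhal * u_full / d_full

# Example (hypothetical 1/8-scale nozzle at matched jet velocity):
# an 8 kHz model-scale band corresponds to 1 kHz at full scale.
f_full = full_scale_frequency(8000.0, d_model=0.1, d_full=0.8,
                              u_model=300.0, u_full=300.0)
```

Amplitude corrections (distance, atmospheric absorption, number of engines) are applied separately; mismatches remaining after such steps are the spectral-shape differences the paper explores.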

  8. Model of the NACA's Aircraft Engine Research Laboratory during its Construction

    NASA Image and Video Library

    1942-08-21

    Zella Morewitz poses with a model of the National Advisory Committee for Aeronautics (NACA) Aircraft Engine Research Laboratory, currently the NASA Glenn Research Center. The model was displayed in the Administration Building during the construction of the laboratory in the early 1940s. Detailed models of the individual test facilities were also fabricated and displayed in the facilities. The laboratory was built on a wedge of land between the Cleveland Municipal Airport on the far side and the deep curving valley etched by the Rocky River on the near end. Roughly only a third of the laboratory's semicircle footprint was initially utilized. Additional facilities were added to the remaining areas in the years after World War II. In the late 1950s the site was supplemented by the acquisition of additional adjacent land. Morewitz joined the NACA in 1935 as a secretary in the main office at the Langley Memorial Aeronautical Laboratory. In September 1940 she took on the task of setting up and guiding an office dedicated to the design of the NACA’s new engine research laboratory. Morewitz and the others in the design office transferred to Cleveland in December 1941 to expedite the construction. Morewitz served as Manager Ray Sharp’s secretary for six years and was a popular figure at the new laboratory. In December 1947 Morewitz announced her engagement to Langley researcher Sidney Batterson and moved back to Virginia.

  9. Multi-scale Modeling of Arctic Clouds

    NASA Astrophysics Data System (ADS)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small- and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

  10. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2009-09-30

    Modeling of Burning Emissions (FLAMBE) project, and other related parameters. Our plans to embed NAAPS inside NOGAPS may need to be put on hold... AOD, FLAMBE and FAROP at FNMOC are supported by 6.4 funding from PMW-120 for “Large-scale Atmospheric Models”, “Small-scale Atmospheric Models

  11. Linking the Grain Scale to Experimental Measurements and Other Scales

    NASA Astrophysics Data System (ADS)

    Vogler, Tracy

    2017-06-01

    A number of physical processes occur at the scale of grains that can have a profound influence on the behavior of materials under shock loading. Examples include inelastic deformation, pore collapse, fracture, friction, and internal wave reflections. In some cases, such as the initiation of energetics and brittle fracture, these processes can have first-order effects on the behavior of materials: the emergent behavior from the grain scale is the dominant one. In other cases, many aspects of the bulk behavior can be described by a continuum description, but some details of the behavior are missed by continuum descriptions. The multi-scale model paradigm envisions flow of information from smaller scales (atomic, dislocation, etc.) to the grain scale or mesoscale and then up to the continuum scale. A significant challenge in this approach is the need to validate each step. For the grain scale, diagnosing behavior is challenging because of the small spatial and temporal scales involved. Spatially resolved diagnostics have begun to shed light on these processes, and, more recently, advanced light sources have started to be used to probe behavior at the grain scale. In this talk, I will discuss some interesting phenomena that occur at the grain scale in shock loading, experimental approaches to probe the grain scale, and efforts to link the grain scale to smaller and larger scales. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE.

  12. F/A-18 1/9th scale model tail buffet measurements

    NASA Technical Reports Server (NTRS)

    Martin, C. A.; Glaister, M. K.; Maclaren, L. D.; Meyn, L. A.; Ross, J.

    1991-01-01

    Wind tunnel tests were carried out on a 1/9th scale model of the F/A-18 at high angles of attack to investigate the characteristics of tail buffet due to bursting of the wing leading edge extension (LEX) vortices. The tests were carried out at the Aeronautical Research Laboratory low-speed wind tunnel facility and form part of a collaborative activity with NASA Ames Research Center, organized by The Technical Cooperative Program (TTCP). Information from the program will be used in the planning of similar collaborative tests, to be carried out at NASA Ames, on a full-scale aircraft. The program covered the measurement of unsteady pressures and fin vibration for cases with and without the wing LEX fences fitted. Fourier transform methods were used to analyze the unsteady data, and information on the spatial and temporal content of the vortex burst pressure field was obtained. Flow visualization of the vortex behavior was carried out using smoke and a laser light sheet technique.

  13. Design of scaled down structural models

    NASA Technical Reports Server (NTRS)

    Simitses, George J.

    1994-01-01

    In the aircraft industry, full-scale and large-component testing is a necessary, time-consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled-down models in testing, and the use of the model test results to predict the behavior of the larger system, referred to herein as the prototype. This viewgraph presentation provides justification and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled-down model and its prototype; thus, scaled-down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. The establishment of similarity conditions, based on direct use of the governing equations, is discussed, and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of the vibrational response of the same rectangular plates. Extensions and future tasks are also described.
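For the special case of a complete geometric replica made of the prototype material, the similarity conditions described above reduce to simple scale factors. The sketch below derives two such factors from Euler-Bernoulli beam theory; it is an illustrative special case, not the distorted-model theory of the study:

```python
# Assumptions: complete geometric scaling by lam = L_model / L_prototype
# and the same material in model and prototype. From beam theory:
#   natural frequency  omega ~ (1/L^2) * sqrt(E*I / (rho*A))  =>  scales as 1/lam
#     (I ~ lam^4, A ~ lam^2, so sqrt(E*I/(rho*A)) ~ lam and 1/L^2 ~ 1/lam^2)
#   buckling load      P_cr  ~ E*I / L^2                      =>  scales as lam^2

def frequency_scale(lam: float) -> float:
    """omega_model / omega_prototype for a geometric replica."""
    return 1.0 / lam

def buckling_load_scale(lam: float) -> float:
    """P_cr_model / P_cr_prototype for a geometric replica."""
    return lam ** 2

# A 1/4-scale replica vibrates 4x faster and buckles at 1/16 the load:
print(frequency_scale(0.25))      # 4.0
print(buckling_load_scale(0.25))  # 0.0625
```

Extrapolating model data with such factors is exactly the "similarity conditions" step the presentation formalizes; distorted models require additional, condition-by-condition factors.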

  14. Design of scaled down structural models

    NASA Astrophysics Data System (ADS)

    Simitses, George J.

    1994-07-01

    In the aircraft industry, full-scale and large-component testing is a necessary, time-consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled-down models in testing, and the use of the model test results to predict the behavior of the larger system, referred to herein as the prototype. This viewgraph presentation provides justification and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled-down model and its prototype; thus, scaled-down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. The establishment of similarity conditions, based on direct use of the governing equations, is discussed, and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of the vibrational response of the same rectangular plates. Extensions and future tasks are also described.

  15. Hydrologic control on the root growth of Salix cuttings at the laboratory scale

    NASA Astrophysics Data System (ADS)

    Bau', Valentina; Calliari, Baptiste; Perona, Paolo

    2017-04-01

    Riparian plant roots contribute to ecosystem functioning and, to a certain extent, also directly affect fluvial morphodynamics, e.g. by influencing sediment transport via mechanical stabilization and trapping. There is much scientific and engineering interest in understanding the complex interactions between riparian vegetation and river processes. For example, to investigate plant resilience to uprooting by flow, one should quantify the probability that riparian plants may be uprooted during a specific flooding event. Laboratory flume experiments are of some help in this regard, but are often limited to using grass (e.g., Avena and Medicago sativa) as a vegetation surrogate, with a number of limitations due to fundamental scaling problems. Hence, the use of small-scale real plants grown undisturbed in the actual sediment and within a reasonable time frame would be particularly helpful for obtaining more realistic flume experiments. The aim of this work is to develop and tune an experimental technique to control the growth of the vertical root density distribution of small-scale Salix cuttings of different sizes and lengths. This is achieved by controlling the position of the saturated water table in the sedimentary bed according to the sediment size distribution and the cutting length. Measurements in the rhizosphere are performed by scanning and analysing the whole below-ground biomass by means of the root analysis software WinRhizo, from which root morphology statistics and the empirical vertical density distribution are obtained. The model of Tron et al. (2015) for the vertical density distribution of the below-ground biomass is used to show that experimental conditions that allow the desired root density distribution to develop can be fairly well predicted. This greatly augments the flexibility and applicability of the proposed methodology in view of using such plants for novel flow erosion experiments. 
Tron, S., Perona, P., Gorla, L., Schwarz, M., Laio, F

  16. The global reference atmospheric model, mod 2 (with two scale perturbation model)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Hargraves, W. R.

    1976-01-01

    The Global Reference Atmospheric Model was improved to produce more realistic simulations of vertical profiles of atmospheric parameters. A revised two-scale random perturbation model is described, using perturbation magnitudes which are adjusted to conform to constraints imposed by the perfect gas law and the hydrostatic condition. The two-scale perturbation model produces appropriately correlated (horizontally and vertically) small-scale and large-scale perturbations. These stochastically simulated perturbations are representative of the magnitudes and wavelengths of perturbations produced by tides and planetary-scale waves (large scale) and turbulence and gravity waves (small scale). Other new features of the model are: (1) a second-order geostrophic wind relation for use at low latitudes, which does not "blow up" near the equator as the ordinary geostrophic relation does; and (2) revised quasi-biennial amplitudes and phases and revised stationary perturbations, based on data through 1972.
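The low-latitude limitation behind item (1) follows directly from the ordinary geostrophic relation, sketched below (a textbook illustration; the report's second-order relation is not reproduced here, and the pressure gradient and air density values are assumed):

```python
# Ordinary geostrophic balance: u_g = -(1/(rho*f)) * dp/dy, with the
# Coriolis parameter f = 2*Omega*sin(lat). Since f -> 0 at the equator,
# u_g "blows up" there, which is why a modified relation is needed at
# low latitudes.

import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def coriolis(lat_deg: float) -> float:
    """Coriolis parameter f (1/s) at a given latitude in degrees."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

def geostrophic_u(dp_dy: float, rho: float, lat_deg: float) -> float:
    """Zonal geostrophic wind (m/s) for a meridional pressure gradient."""
    return -dp_dy / (rho * coriolis(lat_deg))

# Same gradient (1 hPa per 100 km): the implied wind grows without bound
# as the latitude approaches the equator.
for lat in (45.0, 10.0, 1.0):
    print(lat, geostrophic_u(-1e-3, rho=1.2, lat_deg=lat))
```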

  17. Basin-scale hydrogeologic modeling

    NASA Astrophysics Data System (ADS)

    Person, Mark; Raffensperger, Jeff P.; Ge, Shemin; Garven, Grant

    1996-02-01

    Mathematical modeling of coupled groundwater flow, heat transfer, and chemical mass transport at the sedimentary basin scale has been increasingly used by Earth scientists studying a wide range of geologic processes including the formation of excess pore pressures, infiltration-driven metamorphism, heat flow anomalies, nuclear waste isolation, hydrothermal ore genesis, sediment diagenesis, basin tectonics, and petroleum generation and migration. These models have provided important insights into the rates and pathways of groundwater migration through basins, the relative importance of different driving mechanisms for fluid flow, and the nature of coupling between the hydraulic, thermal, chemical, and stress regimes. The mathematical descriptions of basin transport processes, the analytical and numerical solution methods employed, and the application of modeling to sedimentary basins around the world are the subject of this review paper. The special considerations made to represent coupled transport processes at the basin scale are emphasized. Future modeling efforts will probably utilize three-dimensional descriptions of transport processes, incorporate greater information regarding natural geological heterogeneity, further explore coupled processes, and involve greater field applications.

  18. Scale-Similar Models for Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Sarghini, F.

    1999-01-01

    Scale-similar models employ multiple filtering operations to identify the smallest resolved scales, which have been shown to be the most active in the interaction with the unresolved subgrid scales. They do not assume that the principal axes of the strain-rate tensor are aligned with those of the subgrid-scale stress (SGS) tensor, and allow the explicit calculation of the SGS energy. They can provide backscatter in a numerically stable and physically realistic manner, and predict SGS stresses in regions that are well correlated with the locations where large Reynolds stress occurs. In this paper, eddy viscosity and mixed models, which include an eddy-viscosity part as well as a scale-similar contribution, are applied to the simulation of two flows, a high Reynolds number plane channel flow, and a three-dimensional, nonequilibrium flow. The results show that simulations without models or with the Smagorinsky model are unable to predict nonequilibrium effects. Dynamic models provide an improvement of the results: the adjustment of the coefficient results in more accurate prediction of the perturbation from equilibrium. The Lagrangian-ensemble approach [Meneveau et al., J. Fluid Mech. 319, 353 (1996)] is found to be very beneficial. Models that included a scale-similar term and a dissipative one, as well as the Lagrangian ensemble averaging, gave results in the best agreement with the direct simulation and experimental data.
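The multiple filtering operations that define scale-similar models can be sketched in one dimension. The snippet below is an illustrative Bardina-type construction with an assumed box test filter, not the exact mixed or dynamic models evaluated in the paper:

```python
# Scale-similar (Bardina-type) SGS stress: re-filter the resolved field
# and model tau ~ filt(u*u) - filt(u)*filt(u). The stress vanishes for
# smooth fields and activates where the smallest resolved scales live.

import numpy as np

def box_filter(u: np.ndarray, width: int = 3) -> np.ndarray:
    """Periodic top-hat (box) test filter of the given stencil width."""
    kernel = np.ones(width) / width
    n = len(u)
    # Tile for periodicity, convolve, then take the central copy.
    return np.convolve(np.tile(u, 3), kernel, mode="same")[n:2 * n]

def scale_similar_stress(u: np.ndarray) -> np.ndarray:
    """Scale-similar SGS stress tau_11 for a 1-D resolved velocity field."""
    return box_filter(u * u) - box_filter(u) * box_filter(u)

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(8.0 * x)  # resolved field with a small scale
tau = scale_similar_stress(u)          # nonzero where small scales are active
```

Because the modeled stress is built from the resolved field itself, its principal axes need not align with the strain rate, and it can transfer energy in both directions (backscatter), as the abstract notes.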

  19. redGEM: Systematic reduction and analysis of genome-scale metabolic reconstructions for development of consistent core metabolic models

    PubMed Central

    Ataman, Meric

    2017-01-01

    Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks, as they encapsulate all known metabolic capabilities of the organisms, from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in various practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across different studies and laboratories due to differing criteria and levels of detail, which can compromise the transferability of findings and the integration of experimental data from different groups. In this study, we have developed a systematic, semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner, focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields, flux and concentration variability, and gene essentiality. The development of these “consistently-reduced” models will help to clarify and facilitate the integration of different experimental data to draw new understanding that can be directly extended to genome-scale models. PMID:28727725
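The graph-based search component of such a reduction can be caricatured as an iterative frontier expansion from a chosen core subsystem over the bipartite metabolite-reaction network. The sketch below is a schematic toy with hypothetical reaction and metabolite names, not the actual redGEM algorithm:

```python
# Schematic subnetwork extraction: starting from core metabolites, keep
# every reaction reachable within a given number of expansion steps.
# Each step adds reactions touching the current metabolite frontier and
# grows the frontier with their other metabolites.

def expand_core(reactions: dict, core_metabolites: set, depth: int) -> set:
    """reactions: reaction name -> set of participating metabolites."""
    kept, frontier = set(), set(core_metabolites)
    for _ in range(depth):
        new_frontier = set()
        for rxn, mets in reactions.items():
            if rxn not in kept and mets & frontier:
                kept.add(rxn)
                new_frontier |= mets - frontier
        frontier |= new_frontier
    return kept

# Toy network: r1 touches the core, r2 is one step away, r3 is disconnected.
net = {"r1": {"glc", "g6p"}, "r2": {"g6p", "f6p"}, "r3": {"trp", "tyr"}}
print(expand_core(net, {"glc"}, depth=1))  # keeps only r1
print(expand_core(net, {"glc"}, depth=2))  # keeps r1 and r2
```

In the actual method, a step like this is followed by optimization (e.g., flux-based checks) to guarantee the reduced model preserves yields, variability and gene essentiality.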

  20. Microphysics in Multi-scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interaction processes are applied across this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the development of the microphysics and its performance in the multi-scale modeling system will be presented.

  1. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM)MODELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Y.S. Wu

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM process models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on water and gas

  2. Assessing sexual conflict in the Drosophila melanogaster laboratory model system

    PubMed Central

    Rice, William R; Stewart, Andrew D; Morrow, Edward H; Linder, Jodell E; Orteiza, Nicole; Byrne, Phillip G

    2006-01-01

    We describe a graphical model of interlocus coevolution used to distinguish between the interlocus sexual conflict that leads to sexually antagonistic coevolution, and the intrinsic conflict over mating rate that is an integral part of traditional models of sexual selection. We next distinguish the ‘laboratory island’ approach from the study of both inbred lines and laboratory populations that are newly derived from nature, discuss why we consider it to be one of the most fitting forms of laboratory analysis to study interlocus sexual conflict, and then describe four experiments using this approach with Drosophila melanogaster. The first experiment evaluates the efficacy of the laboratory model system to study interlocus sexual conflict by comparing remating rates of females when they are, or are not, provided with a spatial refuge from persistent male courtship. The second experiment tests for a lag-load in males that is due to adaptations that have accumulated in females, which diminish male-induced harm while simultaneously interfering with a male's ability to compete in the context of sexual selection. The third and fourth experiments test for a lag-load in females owing to direct costs from their interactions with males, and for the capacity for indirect benefits to compensate for these direct costs. PMID:16612888

  3. Scale factor management in the studies of affine models of shockproof garment elements

    NASA Astrophysics Data System (ADS)

    Denisov, Oleg; Pleshko, Mikhail; Ponomareva, Irina; Merenyashev, Vitaliy

    2018-03-01

    New samples of protective garments for performing construction work at height require numerous tests in conditions close to the real conditions of extreme vital activity. The article presents some results of studies of a shockproof garment element and a description of a patented prototype. The tests were carried out on a model whose geometric dimensions were convenient for manufacturing in a limited batch. In addition, the laboratory equipment used (for example, a unique power pendulum) and blanks made of a titanium-nickel alloy with a shape memory effect also imposed their own limitations. The problem of the adequacy of transferring the obtained experimental results to mass-produced products was solved using the tools of classical similarity theory. Managing the influence of the scale factor in the affine modeling of the shockproof element, studied on the basis of an equiatomic titanium-nickel alloy with the shape memory effect, allowed us to assume, with a sufficient degree of reliability, the technical possibility of extrapolating the results of experimental studies to full-scale objects for the formation of the initial data of a mathematical model of the elastoplastic deformation dynamics of the shockproof garment (while observing the similarity of the features of external loading).

  4. Laboratory-scale experiments and numerical modeling of cosolvent flushing of multi-component NAPLs in saturated porous media

    NASA Astrophysics Data System (ADS)

    Agaoglu, Berken; Scheytt, Traugott; Copty, Nadim K.

    2012-10-01

    This study examines the mechanistic processes governing multiphase flow of a water-cosolvent-NAPL system in saturated porous media. Laboratory batch and column flushing experiments were conducted to determine the equilibrium properties of pure NAPL and synthetically prepared NAPL mixtures as well as NAPL recovery mechanisms for different water-ethanol contents. The effect of contact time was investigated by considering different steady and intermittent flow velocities. A modified version of the multiphase flow simulator UTCHEM was used to compare the multiphase model simulations with the column experiment results. The effect of employing different grid geometries (1D, 2D, 3D), heterogeneity, and different initial NAPL saturation configurations was also examined in the model. It is shown that the change in velocity affects the mass transfer rate between phases as well as the ultimate NAPL recovery percentage. The experiments with low-flow-rate flushing of pure NAPL and the 3D UTCHEM simulations gave similar effluent concentrations and NAPL cumulative recoveries. Model simulations over-estimated NAPL recovery for high specific discharges and rate-limited mass transfer, suggesting that a constant mass transfer coefficient for the entire flushing experiment may not be valid. When multi-component NAPLs are present, the dissolution rate of individual organic compounds (namely, toluene and benzene) into the ethanol-water flushing solution is found not to correlate with their equilibrium solubility values.
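    The velocity dependence of effluent concentration that the abstract describes can be illustrated with a steady one-dimensional advection model with first-order interphase mass transfer; the function name and all parameter values below are hypothetical, not taken from the study:

```python
import math

def dissolved_conc(v, L, k, c_eq):
    """Steady 1-D advection with first-order interphase mass transfer:
    v dC/dx = k (C_eq - C)  ->  C(L) = C_eq * (1 - exp(-k L / v)).
    A single constant rate k cannot match both low and high specific
    discharges, which is the limitation the study points out."""
    return c_eq * (1.0 - math.exp(-k * L / v))

c_eq = 500.0   # equilibrium solubility [mg/L], illustrative
k = 1.0e-3     # mass transfer rate coefficient [1/s], illustrative
L = 0.5        # column length [m]
for v in (1e-5, 1e-4, 1e-3):   # specific discharge [m/s]
    # slow flushing approaches equilibrium; fast flushing stays well below it
    print(v, dissolved_conc(v, L, k, c_eq))
```

    At the lowest velocity the effluent is essentially at equilibrium, while at the highest it is far below it, sketching why a rate coefficient fitted to one flow rate over-predicts recovery at another.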

  5. Evaluating the capabilities of watershed-scale models in estimating sediment yield at field-scale.

    PubMed

    Sommerlot, Andrew R; Nejadhashemi, A Pouyan; Woznicki, Sean A; Giri, Subhasis; Prohaska, Michael D

    2013-09-30

    Many watershed model interfaces have been developed in recent years for predicting field-scale sediment loads. They share the goal of providing data for decisions aimed at improving watershed health and the effectiveness of water quality conservation efforts. The objectives of this study were to: 1) compare three watershed-scale models (the Soil and Water Assessment Tool (SWAT), Field_SWAT, and the High Impact Targeting (HIT) model) against a calibrated field-scale model (RUSLE2) in estimating sediment yield from 41 randomly selected agricultural fields within the River Raisin watershed; 2) evaluate the statistical significance of differences among models; 3) assess the watershed models' capabilities in identifying areas of concern at the field level; 4) evaluate the reliability of the watershed-scale models for field-scale analysis. The SWAT model produced the estimates most similar to RUSLE2, providing the closest median and the lowest absolute error in sediment yield predictions, while the HIT model estimates were the worst. Concerning statistically significant differences between models, SWAT was the only model found to be not significantly different from the calibrated RUSLE2 at α = 0.05. Meanwhile, all models were incapable of identifying priority areas consistent with the RUSLE2 model. Overall, SWAT provided the largest share of estimates (51%) within the uncertainty bounds of RUSLE2 and is the most reliable among the studied models, while HIT is the least reliable. The results of this study suggest that caution should be exercised when using watershed-scale models for field-level decision-making, and that field-specific data are of paramount importance. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Chapter 1.1 Process Scale-Up of Cellulose Nanocrystal Production to 25 kg per Batch at the Forest Products Laboratory

    Treesearch

    Richard S. Reiner; Alan W. Rudie

    2013-01-01

    The Fiber and Chemical Sciences Research Work Unit at the Forest Products Laboratory began working out the preparation of cellulose nanocrystals in 2006, using the method of Dong, Revol, and Gray. Initial samples were provided to several scientists within the Forest Service. Continued requests for this material forced scale-up from the initial 20 g scale to kg...

  7. Scaling and modeling of turbulent suspension flows

    NASA Technical Reports Server (NTRS)

    Chen, C. P.

    1989-01-01

    Scaling factors determining various aspects of particle-fluid interactions and the development of physical models to predict gas-solid turbulent suspension flow fields are discussed based on a two-fluid continuum formulation. The modes of particle-fluid interaction are discussed based on the length- and time-scale ratios, which depend on the properties of the particles and the characteristics of the flow turbulence. For particle sizes smaller than or comparable to the Kolmogorov length scale, and concentrations low enough to neglect direct particle-particle interactions, scaling rules can be established in various parameter ranges. The various particle-fluid interactions give rise to additional mechanisms which affect the fluid mechanics of the conveying gas phase. These extra mechanisms are incorporated into a turbulence modeling method based on the scaling rules. A multiple-scale two-phase turbulence model is developed, which gives reasonable predictions for dilute suspension flow. Much work still needs to be done to account for poly-dispersed effects and the extension to dense suspension flows.
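    The particle-to-fluid time-scale ratio the abstract refers to is commonly expressed as a Stokes number built on the particle response time and the Kolmogorov time scale; a minimal sketch with illustrative values (not taken from the paper):

```python
import math

def stokes_number(rho_p, d_p, mu_f, epsilon, nu_f):
    """Particle Stokes number based on the Kolmogorov time scale.

    rho_p   : particle density [kg/m^3]
    d_p     : particle diameter [m]
    mu_f    : fluid dynamic viscosity [Pa s]
    epsilon : turbulence dissipation rate [m^2/s^3]
    nu_f    : fluid kinematic viscosity [m^2/s]
    """
    tau_p = rho_p * d_p**2 / (18.0 * mu_f)   # Stokes-drag response time
    tau_k = math.sqrt(nu_f / epsilon)        # Kolmogorov time scale
    return tau_p / tau_k

# St << 1: particles follow the turbulence; St >> 1: they decouple from it.
print(stokes_number(2500.0, 50e-6, 1.8e-5, 1.0, 1.5e-5))  # ≈ 5
```

    This is the standard dimensionless grouping used to delimit the parameter ranges in which different interaction regimes (and hence different scaling rules) apply.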

  8. Salvus: A flexible high-performance and open-source package for waveform modelling and inversion from laboratory to global scales

    NASA Astrophysics Data System (ADS)

    Afanasiev, Michael; Boehm, Christian; van Driel, Martin; Krischer, Lion; May, Dave; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Recent years have witnessed the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Currently based on an abstract implementation of high-order finite (spectral) elements, we have built Salvus to work on unstructured quad/hex meshes in both 2 and 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics is supported (e.g., viscoelastic, coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a Python-based meshing package is included to simplify the generation and manipulation of regional- to global-scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ template mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. Our goal in this presentation is to introduce the code, show several examples across the scales, and

  9. Web-Based Virtual Laboratory for Food Analysis Course

    NASA Astrophysics Data System (ADS)

    Handayani, M. N.; Khoerunnisa, I.; Sugiarti, Y.

    2018-02-01

    The implementation of learning in the food analysis course in the Program Study of Agro-industrial Technology Education faces several problems, including laboratory space and tools that are not commensurate with the number of students and a lack of interactive learning tools. On the other hand, the information technology literacy of students is quite high, and the internet network is easily accessible on campus. This is both a challenge and an opportunity for the development of learning media that can help optimize learning in the laboratory. This study aims to develop a web-based virtual laboratory as an alternative learning medium for the food analysis course. This research follows the R & D (research and development) approach of the Borg & Gall model. The results showed that expert assessment found the web-based virtual laboratory developed here feasible as a learning medium in terms of software engineering, visual communication, material relevance, usefulness, and the language used. The results of the small-scale and wide-scale tests show that students strongly agree with the development of the web-based virtual laboratory, and their response to it was positive. Suggestions from students provide opportunities for further improvement of the web-based virtual laboratory and should be considered in further research.

  10. ScaleNet: a literature-based model of scale insect biology and systematics

    PubMed Central

    García Morales, Mayrolin; Denno, Barbara D.; Miller, Douglass R.; Miller, Gary L.; Ben-Dov, Yair; Hardy, Nate B.

    2016-01-01

    Scale insects (Hemiptera: Coccoidea) are small herbivorous insects found on all continents except Antarctica. They are extremely invasive, and many species are serious agricultural pests. They are also emerging models for studies of the evolution of genetic systems, endosymbiosis and plant-insect interactions. ScaleNet was launched in 1995 to provide insect identifiers, pest managers, insect systematists, evolutionary biologists and ecologists efficient access to information about scale insect biological diversity. It provides comprehensive information on scale insects taken directly from the primary literature. Currently, it draws from 23 477 articles and describes the systematics and biology of 8194 valid species. For 20 years, ScaleNet ran on the same software platform. That platform is no longer viable. Here, we present a new, open-source implementation of ScaleNet. We have normalized the data model, begun the process of correcting invalid data, upgraded the user interface, and added online administrative tools. These improvements make ScaleNet easier to use and maintain and make the ScaleNet data more accurate and extendable. Database URL: http://scalenet.info PMID:26861659

  11. Phytoplankton Productivity numerical model: calibration via laboratory cultures

    NASA Astrophysics Data System (ADS)

    Zavatarelli, Marco; fiori, Emanuela; Carolina, Amadio

    2017-04-01

    The primary production module of the "Biogeochemical Flux Model" (BFM) has been used to replicate results from laboratory phytoplankton cultures of diatoms, dinoflagellates and picophytoplankton. The model explicitly solves for the phytoplankton chlorophyll, carbon, phosphorus, nitrogen and (diatoms only) silicon content. Simulations of the temporal evolution of the cultured phytoplankton biomass have been carried out in order to provide a correct parameterization of the role of temperature in modulating growth dynamics, and to gain insight into the process of chlorophyll turnover, with particular reference to the decay of phytoplankton biomass under nutrient stress. Results highlighted some limitations of the Q10 approach in defining the temperature constraints on primary production (particularly at relatively high temperatures), which required a modification of that approach. Moreover, the decay of the chlorophyll concentration under nutrient stress appeared, as expected, significantly decoupled from the evolution of the carbon content. The implementation of a specific procedure (based on the laboratory culture results) addressing this decoupling yielded better agreement between model and observations.
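    The Q10 temperature response the abstract critiques has a standard form; a minimal sketch, with a simple cap at an optimum temperature as one illustrative modification of the kind the authors describe (the BFM's actual modification is not specified here, and all parameter values are assumptions):

```python
def q10_factor(T, T_ref=20.0, q10=2.0):
    """Classic Q10 response: the rate scales by a factor q10 per 10 degC."""
    return q10 ** ((T - T_ref) / 10.0)

def capped_q10_factor(T, T_ref=20.0, q10=2.0, T_opt=25.0):
    """Illustrative modification: growth stops increasing above an optimum
    temperature, mimicking the high-temperature limitation of plain Q10."""
    return q10_factor(min(T, T_opt), T_ref, q10)

print(q10_factor(30.0))         # → 2.0 (unbounded growth with temperature)
print(capped_q10_factor(30.0))  # → ~1.414 (held at the T_opt value)
```

    The plain Q10 curve grows without bound, which is the behavior the authors found unrealistic at relatively high temperatures.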

  12. Strain Localization and Weakening Processes in Viscously Deforming Rocks: Numerical Modeling Based on Laboratory Torsion Experiments

    NASA Astrophysics Data System (ADS)

    Doehmann, M.; Brune, S.; Nardini, L.; Rybacki, E.; Dresen, G.

    2017-12-01

    Strain localization is a ubiquitous process in Earth materials, observed over a broad range of scales in space and time. Localized deformation and the formation of shear zones and faults typically involve material softening by various processes, such as shear heating and grain size reduction. Numerical modeling enables us to study these complex physical and chemical weakening processes by separating the effects of individual parameters and boundary conditions. Using simple piece-wise linear functions for the parametrization of weakening processes allows studying a system at a chosen (lower) level of complexity (e.g., Cyprych et al., 2016). In this study, we utilize a finite element model to test two weakening laws that reduce the strength of the material depending on either (I) the amount of accumulated strain or (II) the deformational work. Our 2D Cartesian models are benchmarked to single-inclusion torsion experiments performed at elevated temperatures of 900 °C and pressures of up to 400 MPa (Rybacki et al., 2014). The experiments were performed on Carrara marble samples containing a weak Solnhofen limestone inclusion at a maximum strain rate of 2.0 × 10^-4 s^-1. Our models are designed to reproduce shear deformation of a hollow cylinder equivalent to the laboratory setup, such that material leaving one side of the model in the shear direction enters again on the opposite side via periodic boundary conditions. Similar to the laboratory tests, we applied constant strain rate and constant stress boundary conditions. We use our model to investigate the time-dependent distribution of stress and strain and the effect of different parameters. For instance, inclusion rotation is shown to be strongly dependent on the viscosity ratio between matrix and inclusion, and stronger ductile weakening increases the localization rate while decreasing shear zone width. The most suitable weakening law for representation of ductile rock is determined by combining the results of parameter tests with

  13. Impact decapitation from laboratory to basin scales

    NASA Technical Reports Server (NTRS)

    Schultz, P. H.; Gault, D. E.

    1991-01-01

    Although vertical hypervelocity impacts result in the annihilation (melting/vaporization) of the projectile, oblique impacts (less than 15 deg) fundamentally change the partitioning of energy, with fragments as large as 10 percent of the original projectile surviving. Laboratory experiments reveal that both ductile and brittle projectiles produce very similar results, where limiting disruption depends on stresses proportional to the vertical velocity component. Failure of the projectile at laboratory impact velocities (6 km/s) is largely controlled by stresses established before the projectile has penetrated a significant distance into the target. The planetary surface record exhibits numerous examples of oblique impacts with evidence for projectile failure and downrange sibling collisions.

  14. LOW OZONE-DEPLETING HALOCARBONS AS TOTAL-FLOOD AGENTS: VOLUME 2. LABORATORY-SCALE FIRE SUPPRESSION AND EXPLOSION PREVENTION TESTING

    EPA Science Inventory

    The report gives results from (1) flame suppression testing of potential Halon-1301 (CF3Br) replacement chemicals in a laboratory cup burner using n-heptane fuel and (2) explosion prevention (inertion) testing in a small-scale explosion sphere using propane and methane as fuels. ...

  15. Effects of combustion temperature on PCDD/Fs formation in laboratory-scale fluidized-bed incineration.

    PubMed

    Hatanaka, T; Imagawa, T; Kitajima, A; Takeuchi, M

    2001-12-15

    Combustion experiments in a laboratory-scale fluidized-bed reactor were performed to elucidate the effects of combustion temperature on PCDD/Fs formation during incineration of model wastes with poly(vinyl chloride) or sodium chloride as a chlorine source and copper chloride as a catalyst. The temperatures of the primary and secondary combustion zones in the reactor were each set independently to 700, 800, and 900 degrees C using external electric heaters. The PCDD/Fs concentration is reduced as the temperature of the secondary combustion zone increases; it is therefore effective to keep the secondary combustion zone hot enough to reduce PCDD/Fs release during waste incineration. On the other hand, as the temperature of the primary combustion zone rises, the PCDD/Fs concentration also increases: a lower primary combustion zone temperature results in a lower PCDD/Fs concentration under these experimental conditions. This result is probably related to the devolatilization rate of the solid waste in the primary combustion zone. The temperature decrease slows the devolatilization rate and promotes mixing of oxygen with volatile matter from the solid waste. This helps combustion reactions run to completion, reducing the PCDD/Fs concentration.

  16. Multi-scale computational modeling of developmental biology.

    PubMed

    Setty, Yaki

    2012-08-01

    Normal development of multicellular organisms is regulated by a highly complex process in which a set of precursor cells proliferate, differentiate and move, forming a functioning tissue over time. To handle their complexity, developmental systems can be studied over distinct scales, where the dynamics of each scale is determined by the collective activity of entities at the scale below it. I describe a multi-scale computational approach for modeling developmental systems and detail the methodology through a synthetic example that retains key features of real developmental systems. I discuss the simulation of the system as it emerges from cross-scale and intra-scale interactions, and describe how an in silico study can be carried out by modifying these interactions in a way that mimics in vivo experiments. I highlight biological features of the results through a comparison with findings in Caenorhabditis elegans germline development, and finally discuss the application of the approach to real developmental systems and propose future extensions. The source code of the model of the synthetic developmental system can be found at www.wisdom.weizmann.ac.il/~yaki/MultiScaleModel. yaki.setty@gmail.com Supplementary data are available at Bioinformatics online.

  17. Evaluation of Non-Laboratory and Laboratory Prediction Models for Current and Future Diabetes Mellitus: A Cross-Sectional and Retrospective Cohort Study

    PubMed Central

    Hahn, Seokyung; Moon, Min Kyong; Park, Kyong Soo; Cho, Young Min

    2016-01-01

    Background Various diabetes risk scores composed of non-laboratory parameters have been developed, but only a few studies performed cross-validation of these scores and a comparison with laboratory parameters. We evaluated the performance of diabetes risk scores composed of non-laboratory parameters, including a recently published Korean risk score (KRS), and compared them with laboratory parameters. Methods The data of 26,675 individuals who visited the Seoul National University Hospital Healthcare System Gangnam Center for a health screening program were reviewed for cross-sectional validation. The data of 3,029 individuals with a mean of 6.2 years of follow-up were reviewed for longitudinal validation. The KRS and 16 other risk scores were evaluated and compared with a laboratory prediction model developed by logistic regression analysis. Results For the screening of undiagnosed diabetes, the KRS exhibited a sensitivity of 81%, a specificity of 58%, and an area under the receiver operating characteristic curve (AROC) of 0.754. Other scores showed AROCs that ranged from 0.697 to 0.782. For the prediction of future diabetes, the KRS exhibited a sensitivity of 74%, a specificity of 54%, and an AROC of 0.696. Other scores had AROCs ranging from 0.630 to 0.721. The laboratory prediction model composed of fasting plasma glucose and hemoglobin A1c levels showed a significantly higher AROC (0.838, P < 0.001) than the KRS. The addition of the KRS to the laboratory prediction model increased the AROC (0.849, P = 0.016) without a significant improvement in the risk classification (net reclassification index: 4.6%, P = 0.264). Conclusions The non-laboratory risk scores, including KRS, are useful to estimate the risk of undiagnosed diabetes but are inferior to the laboratory parameters for predicting future diabetes. PMID:27214034
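    The sensitivity, specificity, and AROC figures reported above can be computed from paired risk scores and outcomes; a minimal standard-library sketch with hypothetical data (not the study's), using the rank-sum identity for the area under the ROC curve:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity:
    the probability that a random positive outscores a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold):
    """Sensitivity and specificity at a given score cutoff."""
    tp = sum(y == 1 and s >= threshold for y, s in zip(labels, scores))
    fn = sum(y == 1 and s < threshold for y, s in zip(labels, scores))
    tn = sum(y == 0 and s < threshold for y, s in zip(labels, scores))
    fp = sum(y == 0 and s >= threshold for y, s in zip(labels, scores))
    return tp / (tp + fn), tn / (tn + fp)

labels = [1, 1, 1, 0, 0, 0]                  # hypothetical diabetes outcomes
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]      # hypothetical risk-score outputs
print(auroc(labels, scores))
print(sens_spec(labels, scores, threshold=0.4))
```

    Reporting sensitivity and specificity requires fixing a cutoff, whereas the AROC summarizes performance over all cutoffs, which is why the study compares models primarily by AROC.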

  18. Using thermal balance model to determine optimal reactor volume and insulation material needed in a laboratory-scale composting reactor.

    PubMed

    Wang, Yongjiang; Pang, Li; Liu, Xinyu; Wang, Yuansheng; Zhou, Kexun; Luo, Fei

    2016-04-01

    A comprehensive model of thermal balance and degradation kinetics was developed to determine the optimal reactor volume and insulation material. Biological heat production and five channels of heat loss were considered in the thermal balance model for a representative reactor. Degradation kinetics were included to make the model applicable to different types of substrates. Simulation of the model showed that the internal energy accumulation of the compost was the dominant heat loss channel, followed by heat loss through the reactor wall and the latent heat of water evaporation. A lower proportion of heat was lost through the reactor wall when the reactor volume was larger. Insulating materials with low densities and low conductive coefficients were more desirable for building small reactor systems. The model could be used to determine the optimal reactor volume and insulation material needed before the fabrication of a lab-scale composting system. Copyright © 2016 Elsevier Ltd. All rights reserved.
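    A lumped version of such a thermal balance can be sketched as below; the reduction to two loss channels and every parameter value are illustrative assumptions, not the paper's five-channel model:

```python
def simulate(T0=25.0, t_end=48 * 3600.0, dt=60.0,
             C=5.0e4,      # heat capacity of the compost mass [J/K], assumed
             Q_bio=20.0,   # biological heat production [W], assumed
             UA=1.5,       # wall loss coefficient U*A [W/K], assumed
             Q_evap=5.0,   # latent heat loss to evaporation [W], assumed
             T_amb=20.0):  # ambient temperature [degC]
    """Explicit Euler integration of the lumped heat balance
    C * dT/dt = Q_bio - UA * (T - T_amb) - Q_evap."""
    T, t = T0, 0.0
    while t < t_end:
        dTdt = (Q_bio - UA * (T - T_amb) - Q_evap) / C
        T += dTdt * dt
        t += dt
    return T

print(simulate())  # approaches the ~30 degC steady state over 48 h
```

    Varying UA (insulation) and C (reactor volume) in a sketch like this reproduces the qualitative trade-off the study quantifies: a small reactor with poor insulation sheds biological heat too quickly to self-heat.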

  19. Laboratory-scale experiments and numerical modeling of cosolvent flushing of multi-component NAPLs in saturated porous media.

    PubMed

    Agaoglu, Berken; Scheytt, Traugott; Copty, Nadim K

    2012-10-01

    This study examines the mechanistic processes governing multiphase flow of a water-cosolvent-NAPL system in saturated porous media. Laboratory batch and column flushing experiments were conducted to determine the equilibrium properties of pure NAPL and synthetically prepared NAPL mixtures as well as NAPL recovery mechanisms for different water-ethanol contents. The effect of contact time was investigated by considering different steady and intermittent flow velocities. A modified version of the multiphase flow simulator UTCHEM was used to compare the multiphase model simulations with the column experiment results. The effect of employing different grid geometries (1D, 2D, 3D), heterogeneity, and different initial NAPL saturation configurations was also examined in the model. It is shown that the change in velocity affects the mass transfer rate between phases as well as the ultimate NAPL recovery percentage. The experiments with low-flow-rate flushing of pure NAPL and the 3D UTCHEM simulations gave similar effluent concentrations and NAPL cumulative recoveries. Model simulations over-estimated NAPL recovery for high specific discharges and rate-limited mass transfer, suggesting that a constant mass transfer coefficient for the entire flushing experiment may not be valid. When multi-component NAPLs are present, the dissolution rate of individual organic compounds (namely, toluene and benzene) into the ethanol-water flushing solution is found not to correlate with their equilibrium solubility values. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Pore scale modelling of electrical and hydraulic properties of a semi-consolidated sandstone under unsaturated conditions

    NASA Astrophysics Data System (ADS)

    Cassiani, G.; dalla, E.; Brovelli, A.; Pitea, D.; Binley, A. M.

    2003-04-01

    The development of reliable constitutive laws to translate geophysical properties into hydrological ones is the fundamental step for successful applications of hydrogeophysical techniques. Many such laws have been proposed and applied, particularly for two types of relationships: (a) between moisture content and dielectric properties, and (b) between electrical resistivity, rock structure and water saturation. The classical Archie's law belongs to this latter category. Archie's relationship has been widely used, starting from borehole log applications, to translate geoelectrical measurements into estimates of saturation. However, in spite of its popularity, it remains an empirical relationship, the parameters of which must be calibrated case by case, e.g. on laboratory data. Pore-scale models have recently been recognized and used as powerful tools to investigate the constitutive relations of multiphase soils from a pore-scale point of view, because they bridge the microscopic and macroscopic scales. In this project, we develop and validate a three-dimensional pore-scale method to compute electrical properties of unsaturated and saturated porous media. First we simulate a random packing of spheres [1] that obeys the grain-size distribution and porosity of an experimental porous medium system; then we simulate primary drainage with a morphological approach [2]; finally, for each state of saturation during the drainage process, we solve the electrical conduction equation within the grain structure with a new numerical model and compute the apparent electrical resistivity of the porous medium. We apply the new method to a semi-consolidated Permo-Triassic sandstone from the UK (Sherwood Sandstone) for which both pressure-saturation (Van Genuchten) and Archie's law parameters have been measured on laboratory samples. A comparison between simulated and measured relationships has been performed.
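    Archie's law itself is simple to state; a minimal sketch with illustrative parameter values (which, as the abstract stresses, are empirical and must be calibrated per rock type):

```python
def archie_resistivity(rho_w, phi, S_w, a=1.0, m=2.0, n=2.0):
    """Archie's law: rho = a * rho_w * phi**(-m) * S_w**(-n).

    rho_w : pore-water resistivity [ohm m]
    phi   : porosity [-]
    S_w   : water saturation [-]
    a, m, n : empirical tortuosity, cementation, and saturation exponents
    """
    return a * rho_w * phi ** (-m) * S_w ** (-n)

# Fully vs. partially saturated sandstone (illustrative parameters):
print(archie_resistivity(rho_w=10.0, phi=0.25, S_w=1.0))  # 160 ohm-m
print(archie_resistivity(rho_w=10.0, phi=0.25, S_w=0.5))  # 640 ohm-m
```

    Halving the saturation quadruples the resistivity when n = 2, which is the sensitivity that makes geoelectrical surveys useful for saturation estimation once a, m, and n have been calibrated.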

  1. ScaleNet: a literature-based model of scale insect biology and systematics.

    PubMed

    García Morales, Mayrolin; Denno, Barbara D; Miller, Douglass R; Miller, Gary L; Ben-Dov, Yair; Hardy, Nate B

    2016-01-01

    Scale insects (Hemiptera: Coccoidea) are small herbivorous insects found on all continents except Antarctica. They are extremely invasive, and many species are serious agricultural pests. They are also emerging models for studies of the evolution of genetic systems, endosymbiosis and plant-insect interactions. ScaleNet was launched in 1995 to provide insect identifiers, pest managers, insect systematists, evolutionary biologists and ecologists efficient access to information about scale insect biological diversity. It provides comprehensive information on scale insects taken directly from the primary literature. Currently, it draws from 23,477 articles and describes the systematics and biology of 8194 valid species. For 20 years, ScaleNet ran on the same software platform. That platform is no longer viable. Here, we present a new, open-source implementation of ScaleNet. We have normalized the data model, begun the process of correcting invalid data, upgraded the user interface, and added online administrative tools. These improvements make ScaleNet easier to use and maintain and make the ScaleNet data more accurate and extendable. Database URL: http://scalenet.info. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.

  2. Linking Aerosol Optical Properties Between Laboratory, Field, and Model Studies

    NASA Astrophysics Data System (ADS)

    Murphy, S. M.; Pokhrel, R. P.; Foster, K. A.; Brown, H.; Liu, X.

    2017-12-01

    The optical properties of aerosol emissions from biomass burning have a significant impact on the Earth's radiative balance. Based on measurements made during the Fourth Fire Lab in Missoula Experiment, our group published a series of parameterizations that related optical properties (single scattering albedo and absorption due to brown carbon at multiple wavelengths) to the elemental to total carbon ratio of aerosols emitted from biomass burning. In this presentation, the ability of these parameterizations to simulate the optical properties of ambient aerosol is assessed using observations collected in 2017 from our mobile laboratory chasing wildfires in the Western United States. The ambient data includes measurements of multi-wavelength absorption, scattering, and extinction, size distribution, chemical composition, and volatility. In addition to testing the laboratory parameterizations, this combination of measurements allows us to assess the ability of core-shell Mie Theory to replicate observations and to assess the impact of brown carbon and mixing state on optical properties. Finally, both laboratory and ambient data are compared to the optical properties generated by a prominent climate model (Community Earth System Model (CESM) coupled with the Community Atmosphere Model (CAM 5)). The discrepancies between lab observations, ambient observations and model output will be discussed.

  3. Customer Satisfaction Assessment at the Pacific Northwest National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale N.; Sours, Mardell L.

    2000-03-20

    The Pacific Northwest National Laboratory (PNNL) is developing and implementing a customer satisfaction assessment program (CSAP) to assess the quality of research and development provided by the laboratory. We present the customer survey component of the PNNL CSAP. The customer survey questionnaire is composed of two major sections, Strategic Value and Project Performance. The Strategic Value section of the questionnaire consists of 5 questions answered on a 5-point Likert scale; these questions are designed to determine whether a project is directly contributing to critical future national needs. The Project Performance section of the questionnaire consists of 9 questions, also answered on a 5-point Likert scale, which determine PNNL performance in meeting customer expectations. Many approaches could be used to analyze customer survey data. We present a statistical model that can accurately capture the random behavior of customer survey data. The properties of this statistical model can be used to establish a "gold standard", or performance expectation, for the laboratory, and then assess progress. The gold standard is defined from input from laboratory management: answers to 4 simple questions, in terms of the information obtained from the CSAP customer survey, define the standard. What should the average Strategic Value be for the laboratory project portfolio? What Strategic Value interval should include most of the projects in the laboratory portfolio? What should average Project Performance be for projects with a Strategic Value of about 2? What should average Project Performance be for projects with a Strategic Value of about 4? We discuss how to analyze CSAP customer survey data with this model. Our discussion includes "lessons learned" and issues that can invalidate this type of assessment.

  4. Airframe noise of a small model transport aircraft and scaling effects

    NASA Astrophysics Data System (ADS)

    Shearin, J. G.

    1981-05-01

    Airframe noise of a 0.01-scale model Boeing 747 wide-body transport was measured in the Langley Anechoic Noise Facility. The model geometry simulated the landing and cruise configurations. The model noise was found to be similar in character to that of a 0.03-scale model 747. The 0.01-scale model noise data scaled to within 3 dB of full-scale data using the same scaling relationships as those used to scale the 0.03-scale model noise data. The model noise data are compared with full-scale noise data, where the full-scale data are calculated using the NASA aircraft noise prediction program.

  5. Acoustic Emission Patterns and the Transition to Ductility in Sub-Micron Scale Laboratory Earthquakes

    NASA Astrophysics Data System (ADS)

    Ghaffari, H.; Xia, K.; Young, R.

    2013-12-01

    We report observation of a transition from the brittle to ductile regime in precursor events from different rock materials (Granite, Sandstone, Basalt, and Gypsum) and Polymers (PMMA, PTFE and CR-39). Acoustic emission patterns associated with sub-micron scale laboratory earthquakes are mapped into network parameter spaces (functional damage networks). The sub-classes hold nearly constant timescales, indicating dependency of the sub-phases on the mechanism governing the previous evolutionary phase, i.e., deformation and failure of asperities. Based on our findings, we propose that the signature of the non-linear elastic zone around a crack tip is mapped into the details of the evolutionary phases, supporting the formation of a strongly weak zone in the vicinity of crack tips. Moreover, we recognize sub-micron to micron ruptures with signatures of 'stiffening' in the deformation phase of acoustic-waveforms. We propose that the latter rupture fronts carry critical rupture extensions, including possible dislocations faster than the shear wave speed. Using 'template super-shear waveforms' and their network characteristics, we show that the acoustic emission signals are possible super-shear or intersonic events. Ref. [1] Ghaffari, H. O., and R. P. Young. "Acoustic-Friction Networks and the Evolution of Precursor Rupture Fronts in Laboratory Earthquakes." Nature Scientific reports 3 (2013). [2] Xia, Kaiwen, Ares J. Rosakis, and Hiroo Kanamori. "Laboratory earthquakes: The sub-Rayleigh-to-supershear rupture transition." Science 303.5665 (2004): 1859-1861. [3] Mello, M., et al. "Identifying the unique ground motion signatures of supershear earthquakes: Theory and experiments." Tectonophysics 493.3 (2010): 297-326. [4] Gumbsch, Peter, and Huajian Gao. "Dislocations faster than the speed of sound." Science 283.5404 (1999): 965-968. [5] Livne, Ariel, et al. "The near-tip fields of fast cracks." Science 327.5971 (2010): 1359-1363. [6] Rycroft, Chris H., and Eran Bouchbinder

  6. A New Method of Building Scale-Model Houses

    Treesearch

    Richard N. Malcolm

    1978-01-01

    Scale-model houses are used to display new architectural and construction designs. Some scale-model houses will not withstand the abuse of shipping and handling. This report describes how to build a solid-core model house which is rigid, lightweight, and sturdy.

  7. Establishing an academic laboratory: mentoring as a business model

    PubMed Central

    Greco, Valentina

    2014-01-01

    It is a tremendous honor for my group and me to receive the recognition of the 2014 Women in Cell Biology Junior Award. I would like to take the opportunity of this essay to describe my scientific journey, discuss my philosophy about running a group, and propose what I think is a generalizable model to efficiently establish an academic laboratory. This essay is about my view on the critical components that go into establishing a highly functional academic laboratory during the current tough, competitive times. PMID:25360043

  8. Laboratory studies of 2H evaporator scale dissolution in dilute nitric acid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oji, L.

    The rate of 2H evaporator scale solids dissolution in dilute nitric acid has been experimentally evaluated under laboratory conditions in the SRNL shielded cells. The 2H scale sample used for the dissolution study came from the bottom of the evaporator cone section and the wall section of the evaporator cone. The accumulation rates of aluminum and silicon, assumed to be the two principal elemental constituents of the 2H evaporator scale aluminosilicate mineral, were monitored in solution. Aluminum and silicon concentration changes with heating time, at a constant oven temperature of 90 deg C, were used to ascertain the extent of dissolution of the 2H evaporator scale mineral. The 2H evaporator scale solids, assumed to be composed of mostly aluminosilicate mineral, readily dissolve in 1.5 and 1.25 M dilute nitric acid solutions, yielding the principal elemental components aluminum and silicon in solution. The 2H scale dissolution rate constants, based on aluminum accumulation in 1.5 and 1.25 M dilute nitric acid solution, are, respectively, 9.21E-04 ± 6.39E-04 min⁻¹ and 1.07E-03 ± 7.51E-05 min⁻¹. The silicon accumulation rate in solution tracks the aluminum accumulation profile during the first few minutes of scale dissolution but diverges towards the end of the dissolution. This divergence means that the aluminum-to-silicon ratio in the first phase of the scale dissolution (non-steady-state conditions) differs from the ratio towards the end of the dissolution. Possible causes of this change in silicon accumulation as the dissolution progresses include silicon precipitation from solution, or the 2H evaporator scale being a heterogeneous mixture of aluminosilicate minerals with several impurities. The average half-life for the decomposition of the 2H evaporator scale mineral in 1.5 M nitric acid is 12.5 hours, while the half-life for the decomposition of the 2H evaporator scale in 1.25 M nitric acid is
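    For a first-order dissolution process such as the one assumed above, the half-life follows directly from the rate constant as t1/2 = ln(2)/k. The quick check below reproduces the reported 12.5 h figure for 1.5 M acid from the reported rate constant; the 1.25 M value is computed the same way (the record itself is truncated before stating it):

```python
import math

def half_life_hours(k_per_min: float) -> float:
    """Half-life (hours) for a first-order rate constant k given in min^-1."""
    return math.log(2.0) / k_per_min / 60.0

print(half_life_hours(9.21e-4))  # ~12.5 h for 1.5 M HNO3, matching the record
print(half_life_hours(1.07e-3))  # ~10.8 h for 1.25 M HNO3, computed here
```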

  9. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    NASA Astrophysics Data System (ADS)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
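    The peaks-over-threshold step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the study's disaggregation model: it uses synthetic precipitation, an assumed 95% quantile threshold of the non-zero values, and a maximum-likelihood GPD fit in place of the Bayesian estimation used in the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic "hourly precipitation": ~90% dry hours, heavy-tailed wet values
precip = rng.gamma(shape=0.4, scale=6.0, size=20_000) * (rng.random(20_000) < 0.1)

# Threshold taken as a quantile of non-zero precipitation, as in the study
wet = precip[precip > 0]
u = np.quantile(wet, 0.95)
excesses = wet[wet > u] - u

# Maximum-likelihood GPD fit to the threshold excesses (location fixed at 0)
xi, _, sigma = stats.genpareto.fit(excesses, floc=0)
rate = excesses.size / precip.size  # exceedance rate parameter
print(f"threshold={u:.2f}, shape={xi:.3f}, scale={sigma:.3f}, rate={rate:.4f}")
```

    In the paper's setting this fit would be repeated for each duration, with the scaling relationship linking thresholds and parameters across durations.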

  10. A Goddard Multi-Scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.; hide

    2008-01-01

    Numerical cloud-resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud-resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that numerical weather prediction (NWP) and regional-scale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellites and field campaigns can provide initial conditions as well as validation through use of Earth satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF), a state-of-the-art weather research and forecasting model (WRF), and a cloud-resolving model (the Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth Satellite

  11. Impact of Scattering Model on Disdrometer Derived Attenuation Scaling

    NASA Technical Reports Server (NTRS)

    Zemba, Michael; Luini, Lorenzo; Nessel, James; Riva, Carlo (Compiler)

    2016-01-01

    NASA Glenn Research Center (GRC), the Air Force Research Laboratory (AFRL), and the Politecnico di Milano (POLIMI) are currently entering the third year of a joint propagation study in Milan, Italy utilizing the 20 and 40 GHz beacons of the Alphasat TDP5 Aldo Paraboni scientific payload. The Ka- and Q-band beacon receivers were installed at the POLIMI campus in June of 2014 and provide direct measurements of signal attenuation at each frequency. Collocated weather instrumentation provides concurrent measurement of atmospheric conditions at the receiver; included among these weather instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which records droplet size distributions (DSD) and droplet velocity distributions (DVD) during precipitation events. This information can be used to derive the specific attenuation at frequencies of interest and thereby scale measured attenuation data from one frequency to another. Given the ability to both predict the 40 GHz attenuation from the disdrometer and the 20 GHz timeseries as well as to directly measure the 40 GHz attenuation with the beacon receiver, the Milan terminal is uniquely able to assess these scaling techniques and refine the methods used to infer attenuation from disdrometer data. In order to derive specific attenuation from the DSD, the forward scattering coefficient must be computed. In previous work, this has been done using the Mie scattering model; however, this assumes a spherical droplet shape. The primary goal of this analysis is to assess the impact of the scattering model and droplet shape on disdrometer-derived attenuation predictions by comparing the use of the Mie scattering model to the use of the T-matrix method, which does not assume a spherical droplet. In particular, this paper will investigate the impact of these two scattering approaches on the error of the resulting predictions as well as on the relationship between prediction error and rain rate.

  12. Impact of Scattering Model on Disdrometer Derived Attenuation Scaling

    NASA Technical Reports Server (NTRS)

    Zemba, Michael; Luini, Lorenzo; Nessel, James; Riva, Carlo

    2016-01-01

    NASA Glenn Research Center (GRC), the Air Force Research Laboratory (AFRL), and the Politecnico di Milano (POLIMI) are currently entering the third year of a joint propagation study in Milan, Italy utilizing the 20 and 40 GHz beacons of the Alphasat TDP#5 Aldo Paraboni scientific payload. The Ka- and Q-band beacon receivers were installed at the POLIMI campus in June of 2014 and provide direct measurements of signal attenuation at each frequency. Collocated weather instrumentation provides concurrent measurement of atmospheric conditions at the receiver; included among these weather instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which records droplet size distributions (DSD) and droplet velocity distributions (DVD) during precipitation events. This information can be used to derive the specific attenuation at frequencies of interest and thereby scale measured attenuation data from one frequency to another. Given the ability to both predict the 40 gigahertz attenuation from the disdrometer and the 20 gigahertz time-series as well as to directly measure the 40 gigahertz attenuation with the beacon receiver, the Milan terminal is uniquely able to assess these scaling techniques and refine the methods used to infer attenuation from disdrometer data. In order to derive specific attenuation from the DSD, the forward scattering coefficient must be computed. In previous work, this has been done using the Mie scattering model; however, this assumes a spherical droplet shape. The primary goal of this analysis is to assess the impact of the scattering model and droplet shape on disdrometer-derived attenuation predictions by comparing the use of the Mie scattering model to the use of the T-matrix method, which does not assume a spherical droplet.
In particular, this paper will investigate the impact of these two scattering approaches on the error of the resulting predictions as well as on the relationship between prediction error and rain rate.
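    As context for the beacon-to-beacon comparison above, long-term attenuation statistics are often scaled between frequencies with a closed-form rule rather than from DSD measurements. The sketch below implements one such rule, in the style of the ITU-R P.618 frequency-scaling formula; the coefficients are quoted from memory and should be checked against the current recommendation, and this is not the disdrometer-based method of the paper:

```python
def phi(f_ghz: float) -> float:
    """Auxiliary frequency function used by the scaling rule."""
    return f_ghz**2 / (1.0 + 1e-4 * f_ghz**2)

def scale_attenuation(a1_db: float, f1_ghz: float, f2_ghz: float) -> float:
    """Scale a long-term rain attenuation statistic (dB) from f1 to f2 (GHz)."""
    p1, p2 = phi(f1_ghz), phi(f2_ghz)
    h = 1.12e-3 * (p2 / p1) ** 0.5 * (p1 * a1_db) ** 0.55
    return a1_db * (p2 / p1) ** (1.0 - h)

# 3 dB measured at 20 GHz scaled to the 40 GHz beacon frequency
print(scale_attenuation(3.0, 20.0, 40.0))  # roughly 9.4 dB
```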

  13. Biological treatment of whey by Tetrahymena pyriformis and impact study on laboratory-scale wastewater lagoon process.

    PubMed

    Bonnet, J L; Bogaerts, P; Bohatier, J

    1999-06-01

    A procedure based on a biological treatment of whey was tested as part of research on waste treatment at the scale of small cheesemaking units. We studied the potential biodegradation of whey by a protozoan ciliate, Tetrahymena pyriformis, and evaluated the functional, microbiological and physiological disturbances caused by crude whey and the biodegraded whey in laboratory-scale pilots mimicking a natural lagoon treatment. The results show that T. pyriformis can strongly reduce the pollutant load of whey. In the lagoon pilots serving as example of receptor media, crude whey gradually but completely arrested operation, whereas with the biodegraded whey adverse effects were only temporary, and normal operation versus a control was gradually recovered in a few days.

  14. SDG and qualitative trend based model multiple scale validation

    NASA Astrophysics Data System (ADS)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods are weak in completeness, are carried out at a single scale, and depend on human experience. An SDG (Signed Directed Graph) and qualitative trend based multiple-scale validation method is proposed. First the SDG model is built and qualitative trends are added to the model. Then complete testing scenarios are produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by carrying out validation for a reactor model.

  15. Generalization Technique for 2D+SCALE Dhe Data Model

    NASA Astrophysics Data System (ADS)

    Karim, Hairi; Rahman, Alias Abdul; Boguslawski, Pawel

    2016-10-01

    Different users or applications need different scale models, especially in computer applications such as game visualization and GIS modelling. Some issues have been raised about fulfilling the GIS requirement of retaining detail while minimizing the redundancy of the scale datasets. Previous researchers suggested and attempted to add another dimension, such as scale and/or time, into a 3D model, but the implementation of the scale dimension faces some problems due to the limitations and availability of data structures and data models. Nowadays, various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting the scale dimension. Generally, the Dual Half Edge (DHE) data structure was designed to work with any perfect 3D spatial object such as buildings. In this paper, we attempt to expand the capability of the DHE data structure toward integration with the scale dimension. The description of the concept and implementation of generating 3D-scale models (2D spatial + scale dimension) with the DHE data structure forms the major discussion of this paper. We believe advantages such as local modification and topological elements (navigation, query and semantic information) in the scale dimension could be used for future 3D-scale applications.

  16. Large-scale modeling of rain fields from a rain cell deterministic model

    NASA Astrophysics Data System (ADS)

    FéRal, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, JoëL.; Cornet, FréDéRic; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
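    The final step described above, turning a correlated Gaussian field into a binary rain/no-rain map with a prescribed occupation rate, can be sketched as follows. This is a minimal isotropic illustration with an assumed smoothing length and occupation rate, not the HYCELL-based model itself; an anisotropic covariance could be obtained by passing different smoothing lengths per axis:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Correlated Gaussian field: white noise smoothed by a Gaussian kernel
field = gaussian_filter(rng.standard_normal((200, 200)), sigma=8)

# Threshold the field so a prescribed fraction of the area is raining
occupation = 0.15  # assumed large-scale rain occupation rate
threshold = np.quantile(field, 1.0 - occupation)
raining = field > threshold  # binary large-scale rain mask
print(raining.mean())  # close to 0.15 by construction
```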

  17. Land-Atmosphere Coupling in the Multi-Scale Modelling Framework

    NASA Astrophysics Data System (ADS)

    Kraus, P. M.; Denning, S.

    2015-12-01

    The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. The coupling of these cloud-resolving models directly to land surface model instances, rather than passing averaged atmospheric variables to a single instance of a land surface model, is the logical next step in model development and has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, which are associated with its 1-D radiation scheme. The small spatial scale of the CRM, ~4 km, as compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity; this permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered an immediate flow to the ocean. Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced

  18. Transdisciplinary application of the cross-scale resilience model

    USGS Publications Warehouse

    Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.

    2014-01-01

    The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.

  19. JWST Full-Scale Model on Display in Germany

    NASA Image and Video Library

    2010-03-10

    JWST Full-Scale Model on Display. A full-scale model of the James Webb Space Telescope was built by the prime contractor, Northrop Grumman, to provide a better understanding of the size, scale and complexity of this satellite. The model is constructed mainly of aluminum and steel, weighs 12,000 lb., and is approximately 80 feet long, 40 feet wide and 40 feet tall. The model requires 2 trucks to ship it and assembly takes a crew of 12 approximately four days. This model has travelled to a few sites since 2005. The photographs below were taken at some of its destinations. The model is pictured here in Munich, Germany Credit: EADS Astrium

  20. JWST Full-Scale Model on Display in Germany

    NASA Image and Video Library

    2017-12-08

    JWST Full-Scale Model on Display. A full-scale model of the James Webb Space Telescope was built by the prime contractor, Northrop Grumman, to provide a better understanding of the size, scale and complexity of this satellite. The model is constructed mainly of aluminum and steel, weighs 12,000 lb., and is approximately 80 feet long, 40 feet wide and 40 feet tall. The model requires 2 trucks to ship it and assembly takes a crew of 12 approximately four days. This model has travelled to a few sites since 2005. The photographs below were taken at some of its destinations. The model is pictured here in Munich, Germany Credit: EADS Astrium

  1. SCALE Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  2. A first large-scale flood inundation forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie

    2013-11-04

    At present, continental- to global-scale flood forecasting focusses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170k km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast

  3. Modelling utility-scale wind power plants. Part 2: Capacity credit

    NASA Astrophysics Data System (ADS)

    Milligan, Michael R.

    2000-10-01

    As the worldwide use of wind turbine generators in utility-scale applications continues to increase, it will become increasingly important to assess the economic and reliability impact of these intermittent resources. Although the utility industry appears to be moving towards a restructured environment, basic economic and reliability issues will continue to be relevant to companies involved with electricity generation. This article is the second in a two-part series that addresses modelling approaches and results that were obtained in several case studies and research projects at the National Renewable Energy Laboratory (NREL). This second article focuses on wind plant capacity credit as measured with power system reliability indices. Reliability-based methods of measuring capacity credit are compared with wind plant capacity factor. The relationship between capacity credit and accurate wind forecasting is also explored. Published in 2000 by John Wiley & Sons, Ltd.

  4. Establishing an academic laboratory: mentoring as a business model.

    PubMed

    Greco, Valentina

    2014-11-01

    It is a tremendous honor for my group and me to receive the recognition of the 2014 Women in Cell Biology Junior Award. I would like to take the opportunity of this essay to describe my scientific journey, discuss my philosophy about running a group, and propose what I think is a generalizable model to efficiently establish an academic laboratory. This essay is about my view on the critical components that go into establishing a highly functional academic laboratory during the current tough, competitive times. © 2014 Greco.

  5. Slurry spray distribution within a simulated laboratory scale spray dryer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertone, P.C.

    1979-12-20

    It was found that the distribution of liquid striking the sides of a simulated room-temperature spray dryer was not significantly altered by the choice of nozzles, nor by a variation in nozzle operating conditions. Instead, it was found to be a function of the spray dryer's configuration. A cocurrent flow of air down the drying cylinder, not possible with PNL's closed top, favorably altered the spray distribution both by decreasing the amount of liquid striking the interior of the cylinder from 72 to 26% of the feed supplied, and by shifting the zone of maximum impact from 1.0 to 1.7 feet from the nozzle. These findings led to the redesign of the laboratory-scale spray dryer to be tested at the Savannah River Plant. The diameter of the drying chamber was increased from 5 to 8 inches, and a cocurrent flow of air was established with a closed recycle. Finally, this investigation suggested a drying scheme which offers all the advantages of spray drying without many of its limitations.

  6. The Role of Laboratory-Based Studies of the Physical and Biological Properties of Sea Ice in Supporting the Observation and Modeling of Ice Covered Seas

    NASA Astrophysics Data System (ADS)

    Light, B.; Krembs, C.

    2003-12-01

    Laboratory-based studies of the physical and biological properties of sea ice are an essential link between high latitude field observations and existing numerical models. Such studies promote improved understanding of climatic variability and its impact on sea ice and the structure of ice-dependent marine ecosystems. Controlled laboratory experiments can help identify feedback mechanisms between physical and biological processes and their response to climate fluctuations. Climatically sensitive processes occurring between sea ice and the atmosphere and sea ice and the ocean determine surface radiative energy fluxes and the transfer of nutrients and mass across these boundaries. High temporally and spatially resolved analyses of sea ice under controlled environmental conditions lend insight to the physics that drive these transfer processes. Techniques such as optical probing, thin section photography, and microscopy can be used to conduct experiments on natural sea ice core samples and laboratory-grown ice. Such experiments yield insight on small scale processes from the microscopic to the meter scale and can be powerful interdisciplinary tools for education and model parameterization development. Examples of laboratory investigations by the authors include observation of the response of sea ice microstructure to changes in temperature, assessment of the relationships between ice structure and the partitioning of solar radiation by first-year sea ice covers, observation of pore evolution and interfacial structure, and quantification of the production and impact of microbial metabolic products on the mechanical, optical, and textural characteristics of sea ice.

  7. On the Subgrid-Scale Modeling of Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Squires, Kyle; Zeman, Otto

    1990-01-01

    A new sub-grid scale model is presented for the large-eddy simulation of compressible turbulence. In the proposed model, compressibility contributions have been incorporated in the sub-grid scale eddy viscosity which, in the incompressible limit, reduces to a form originally proposed by Smagorinsky (1963). The model has been tested against a simple extension of the traditional Smagorinsky eddy viscosity model using simulations of decaying, compressible homogeneous turbulence. Simulation results show that the proposed model provides greater dissipation of the compressive modes of the resolved-scale velocity field than does the Smagorinsky eddy viscosity model. For an initial r.m.s. turbulence Mach number of 1.0, simulations performed using the Smagorinsky model become physically unrealizable (i.e., negative energies) because of the inability of the model to sufficiently dissipate fluctuations due to resolved scale velocity dilations. The proposed model is able to provide the necessary dissipation of this energy and maintain the realizability of the flow. Following Zeman (1990), turbulent shocklets are considered to dissipate energy independent of the Kolmogorov energy cascade. A possible parameterization of dissipation by turbulent shocklets for Large-Eddy Simulation is also presented.
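The incompressible-limit baseline referred to above is the classical Smagorinsky eddy viscosity; as background, its standard textbook form (not an equation reproduced from this report) is:

```latex
\nu_t = (C_s \bar{\Delta})^2 \lvert \bar{S} \rvert,
\qquad
\lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
\qquad
\bar{S}_{ij} = \frac{1}{2}\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right)
```

where $C_s$ is the Smagorinsky constant, $\bar{\Delta}$ the filter width, and $\bar{S}_{ij}$ the resolved-scale strain rate tensor.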

  8. Evaluating scaling models in biology using hierarchical Bayesian approaches

    PubMed Central

    Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S

    2009-01-01

    Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621
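In practice, a single allometric relationship y = a·x^b is typically estimated as a straight line in log-log space; a minimal sketch on synthetic data (the exponent 8/3, the sample size, and the noise level are illustrative assumptions, not values from the paper's datasets):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic plant data: mass ~ diameter^(8/3), an elastic-similarity-style exponent
diameter = rng.uniform(1.0, 50.0, size=500)
mass = 0.2 * diameter ** (8.0 / 3.0) * np.exp(rng.normal(0.0, 0.1, size=500))

# OLS in log-log space recovers the exponent b of the power law y = a * x^b
b, log_a = np.polyfit(np.log(diameter), np.log(mass), 1)
```

The hierarchical Bayesian approach of the paper generalizes this idea by fitting many such relationships simultaneously, with species-level variation in a and b.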

  9. Validating the Equilibrium Stage Model for an Azeotropic System in a Laboratorial Distillation Column

    ERIC Educational Resources Information Center

    Duarte, B. P. M.; Coelho Pinheiro, M. N.; Silva, D. C. M.; Moura, M. J.

    2006-01-01

    The experiment described is an excellent opportunity to apply theoretical concepts of distillation, thermodynamics of mixtures and process simulation at laboratory scale, and simultaneously enhance the ability of students to operate, control and monitor complex units.

  10. Multi-scaling modelling in financial markets

    NASA Astrophysics Data System (ADS)

    Liu, Ruipeng; Aste, Tomaso; Di Matteo, T.

    2007-12-01

    In recent years, a new wave of interest has brought complexity science into finance, providing a guideline to understand the mechanisms of financial markets, and researchers with different backgrounds have made increasing contributions by introducing new techniques and methodologies. In this paper, Markov-switching multifractal models (MSM) are briefly reviewed and the multi-scaling properties of different financial data are analyzed by computing the scaling exponents by means of the generalized Hurst exponent H(q). In particular, we have considered H(q) for price data, absolute returns and squared returns of different empirical financial time series. We have computed H(q) for the simulated data based on the MSM models with Binomial and Lognormal distributions of the volatility components. The results demonstrate the capacity of the multifractal (MF) models to capture the stylized facts in finance, and the ability of the generalized Hurst exponent approach to detect the scaling features of financial time series.
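A common way to estimate the generalized Hurst exponent H(q) is from the scaling of the q-th order moments of absolute increments, E[|x(t+τ) − x(t)|^q] ∝ τ^(qH(q)); a minimal sketch (the function name and lag range are illustrative choices, not taken from the paper):

```python
import numpy as np

def generalized_hurst(x, q=2.0, taus=range(2, 20)):
    """Estimate H(q) from the scaling E[|x(t+tau) - x(t)|^q] ~ tau^(q*H(q))."""
    taus = np.asarray(list(taus))
    moments = [np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
    # Slope of log-moment versus log-lag estimates q * H(q)
    slope, _ = np.polyfit(np.log(taus), np.log(moments), 1)
    return slope / q

# Sanity check: an uncorrelated random walk should give H(q) near 0.5
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(100_000))
```

For multifractal series such as MSM simulations, H(q) varies with q, which is the signature the paper exploits.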

  11. Scale and modeling issues in water resources planning

    USGS Publications Warehouse

    Lins, H.F.; Wolock, D.M.; McCabe, G.J.

    1997-01-01

    Resource planners and managers interested in utilizing climate model output as part of their operational activities immediately confront the dilemma of scale discordance. Their functional responsibilities cover relatively small geographical areas and necessarily require data of relatively high spatial resolution. Climate models cover a large geographical, i.e. global, domain and produce data at comparatively low spatial resolution. Although the scale differences between model output and planning input are large, several techniques have been developed for disaggregating climate model output to a scale appropriate for use in water resource planning and management applications. With techniques in hand to reduce the limitations imposed by scale discordance, water resource professionals must now confront a more fundamental constraint on the use of climate models: the inability to produce accurate representations and forecasts of regional climate. Given the current capabilities of climate models, and the likelihood that the uncertainty associated with long-term climate model forecasts will remain high for some years to come, the water resources planning community may find it impractical to utilize such forecasts operationally.

  12. RANS Simulation (Virtual Blade Model [VBM]) of Array of Three Coaxial Lab Scaled DOE RM1 MHK Turbine with 5D Spacing

    DOE Data Explorer

    Javaherchi, Teymour

    2016-06-08

    Attached are the .cas and .dat files along with the required User Defined Functions (UDFs) and look-up table of lift and drag coefficients for the Reynolds Averaged Navier-Stokes (RANS) simulation of three coaxially located lab-scaled DOE RM1 turbines implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a redesigned geometry, based on the full-scale DOE RM1 design, that produces the same power output as the full-scale model while operating at matched Tip Speed Ratio values at laboratory-achievable Reynolds numbers (see attached paper). In this case study the flow field around and in the wake of the lab-scaled DOE RM1 turbines in a coaxial array is simulated using the Blade Element Model (a.k.a. Virtual Blade Model) by solving the RANS equations coupled with the k-\omega turbulence closure model. It should be highlighted that in this simulation the actual geometry of the rotor blade is not modeled. The effect of the rotating turbine blades is modeled using Blade Element Theory. This simulation provides an accurate estimate of the performance of each device and the structure of their turbulent far wake. The results of these simulations were validated against in-house experimental data. Simulations for other turbine configurations are available upon request.

  13. Correlations between homologue concentrations of PCDD/Fs and toxic equivalency values in laboratory-, package boiler-, and field-scale incinerators.

    PubMed

    Iino, Fukuya; Takasuga, Takumi; Touati, Abderrahmane; Gullett, Brian K

    2003-01-01

    The toxic equivalency (TEQ) values of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) are predicted with a model based on the homologue concentrations measured from a laboratory-scale reactor (124 data points), a package boiler (61 data points), and operating municipal waste incinerators (114 data points). Regardless of the three scales and types of equipment, the different temperature profiles, sampling emissions and/or solids (fly ash), and the various chemical and physical properties of the fuels, all the PCDF plots showed highly linear correlations (R(2)>0.99). The fitted lines for the reactor and boiler data had slopes near unity, whereas the slope for the municipal waste incinerator data was 0.86, caused by higher predicted values for samples with high measured TEQ. The strong correlation also implies that each of the 10 toxic PCDF congeners has a constant concentration relative to its respective total homologue concentration despite a wide range of facility types and combustion conditions. The PCDD plots showed significant scatter and poor linearity, which implies that the relative concentration of PCDD TEQ congeners is more sensitive to variations in reaction conditions than that of the PCDF congeners.

  14. JWST Full-Scale Model on Display at GSFC

    NASA Image and Video Library

    2010-02-26

    JWST Full-Scale Model on Display. A full-scale model of the James Webb Space Telescope was built by the prime contractor, Northrop Grumman, to provide a better understanding of the size, scale and complexity of this satellite. The model is constructed mainly of aluminum and steel, weighs 12,000 lb., and is approximately 80 feet long, 40 feet wide and 40 feet tall. The model requires 2 trucks to ship it and assembly takes a crew of 12 approximately four days. This model has travelled to a few sites since 2005. The photographs below were taken at some of its destinations. The model is pictured here in Greenbelt, MD at the NASA Goddard Space Flight Center. Credit: NASA/Goddard Space Flight Center/Pat Izzo

  15. Scale Mixture Models with Applications to Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Qin, Zhaohui S.; Damien, Paul; Walker, Stephen

    2003-11-01

    Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixture of uniform distributions.
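As a concrete instance of the idea, a standard normal can itself be written as a scale mixture of uniforms: if Y ~ Gamma(shape 3/2, rate 1/2) and X | Y ~ Uniform(−√Y, +√Y), then X ~ N(0, 1). A Monte Carlo sketch of this known representation (the sample size and seed are arbitrary):

```python
import numpy as np

# Scale mixture of uniforms representation of the standard normal:
# Y ~ Gamma(shape=3/2, rate=1/2), X | Y ~ Uniform(-sqrt(Y), +sqrt(Y))  =>  X ~ N(0, 1)
rng = np.random.default_rng(42)
y = rng.gamma(shape=1.5, scale=2.0, size=200_000)  # numpy parameterizes scale = 1/rate
x = rng.uniform(-np.sqrt(y), np.sqrt(y))           # one uniform draw per mixing value

# The sample mean and standard deviation should match N(0, 1)
```

Conditioning on the latent Y is what makes Gibbs-style Bayesian inference tractable for the non-normal cases treated in the paper.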

  16. Results of tests of advanced flexible insulation vortex and flow environments in the North American Aerodynamics Laboratory lowspeed wind tunnel using 0.0405-scale Space Shuttle Orbiter model 16-0 (test OA-309)

    NASA Technical Reports Server (NTRS)

    Marshall, B. A.; Nichols, M. E.

    1984-01-01

    An experimental investigation (Test OA-309) was conducted using 0.0405-scale Space Shuttle Orbiter Model 16-0 in the North American Aerodynamics Laboratory 7.75 x 11.00-foot Lowspeed Wind Tunnel. The primary purpose was to locate and study any flow conditions or vortices that might have caused damage to the Advanced Flexible Reusable Surface Insulation (AFRSI) during the Space Transportation System STS-6 mission. A secondary objective was to evaluate vortex generators to be used for Wind Tunnel Test OS-314. Flowfield visualization was obtained by means of smoke, tufts, and oil flow. The test was conducted at Mach numbers between 0.07 and 0.23 and at dynamic pressures between 7 and 35 pounds per square foot. The angle-of-attack range of the model was -5 degrees through 35 degrees at 0 or 2 degrees of sideslip, while roll angle was held constant at zero degrees. The vortex generators were studied at angles of 0, 5, 10, and 15 degrees.

  17. Improved Strength and Damage Modeling of Geologic Materials

    NASA Astrophysics Data System (ADS)

    Stewart, Sarah; Senft, Laurel

    2007-06-01

    Collisions and impact cratering events are important processes in the evolution of planetary bodies. The time and length scales of planetary collisions, however, are inaccessible in the laboratory and require the use of shock physics codes. We present the results from a new rheological model for geological materials implemented in the CTH code [1]. The `ROCK' model includes pressure, temperature, and damage effects on strength, as well as acoustic fluidization during impact crater collapse. We demonstrate that the model accurately reproduces final crater shapes, tensile cracking, and damaged zones from laboratory to planetary scales. The strength model requires basic material properties; hence, the input parameters may be benchmarked to laboratory results and extended to planetary collision events. We show the effects of varying material strength parameters, which are dependent on both scale and strain rate, and discuss choosing appropriate parameters for laboratory and planetary situations. The results are a significant improvement in models of continuum rock deformation during large scale impact events. [1] Senft, L. E., Stewart, S. T. Modeling Impact Cratering in Layered Surfaces, J. Geophys. Res., submitted.

  18. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumed that reaction properties in a sediment including surface area, reactive site concentration, reaction rate, and extent can be predicted from field-scale grain size distribution by linearly adding reaction properties for individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The result indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions has to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
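The core of the additivity model is a mass-weighted linear combination of per-fraction reaction properties; a minimal numerical sketch with hypothetical values (the mass fractions and site concentrations below are invented for illustration, not the paper's data):

```python
import numpy as np

# Hypothetical grain-size fractions of a composite sediment: mass fractions
# from a field grain-size distribution, and per-fraction reactive site
# concentrations (mol/g) measured in the laboratory (illustrative values).
mass_fraction = np.array([0.15, 0.35, 0.30, 0.20])       # sums to 1
site_conc = np.array([4.0e-6, 2.5e-6, 1.0e-6, 0.2e-6])

# Additivity model: the composite property is the mass-weighted sum
composite_site_conc = float(np.dot(mass_fraction, site_conc))
```

The study's finding is that this weighting works for extensive properties such as site concentration, but rate constants require simulating each fraction separately (or using the approximate model it proposes).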

  19. Model-Based Reasoning in the Physics Laboratory: Framework and Initial Results

    ERIC Educational Resources Information Center

    Zwickl, Benjamin M.; Hu, Dehui; Finkelstein, Noah; Lewandowski, H. J.

    2015-01-01

    We review and extend existing frameworks on modeling to develop a new framework that describes model-based reasoning in introductory and upper-division physics laboratories. Constructing and using models are core scientific practices that have gained significant attention within K-12 and higher education. Although modeling is a broadly applicable…

  20. The TriLab, a Novel ICT Based Triple Access Mode Laboratory Education Model

    ERIC Educational Resources Information Center

    Abdulwahed, Mahmoud; Nagy, Zoltan K.

    2011-01-01

    This paper introduces a novel model of laboratory education, namely the TriLab. The model is based on recent advances in ICT and implements a three access modes to the laboratory experience (virtual, hands-on and remote) in one software package. A review of the three modes is provided with highlights of advantages and disadvantages of each mode.…

  1. Biology meets physics: Reductionism and multi-scale modeling of morphogenesis.

    PubMed

    Green, Sara; Batterman, Robert

    2017-02-01

    A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors that forces researchers to combine different models relying on different scale-specific mathematical strategies and boundary conditions. Analyzing the ways in which different models are combined in multi-scale modeling also has implications for the relation between physics and biology. Contrary to the assumption that physical science approaches provide reductive explanations in biology, we exemplify how inputs from physics often reveal the importance of macro-scale models and explanations. We illustrate this through an examination of the role of biomechanical modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. High-resolution LES of the rotating stall in a reduced scale model pump-turbine

    NASA Astrophysics Data System (ADS)

    Pacot, Olivier; Kato, Chisachi; Avellan, François

    2014-03-01

    Extending the operating range of modern pump-turbines becomes increasingly important in the course of the integration of renewable energy sources into the existing power grid. However, at partial load conditions in pumping mode, the occurrence of rotating stall is critical to the operational safety of the machine and to grid stability. The understanding of the mechanisms behind this flow phenomenon nevertheless remains vague and incomplete. Past numerical simulations using a RANS approach often led to inconclusive results concerning the physical background. For the first time, the rotating stall is investigated by performing a large-scale LES calculation on the HYDRODYNA pump-turbine scale model featuring approximately 100 million elements. The computations were performed on the PRIMEHPC FX10 of the University of Tokyo using the overset Finite Element open source code FrontFlow/blue with the dynamic Smagorinsky turbulence model and the no-slip wall condition. The internal flow computed is that obtained when operating the pump-turbine at 76% of the best efficiency point in pumping mode, as previous experimental research showed the presence of four rotating cells. The rotating stall phenomenon is accurately reproduced for a reduced Reynolds number using the LES approach with acceptable computing resources. The results show an excellent agreement with available experimental data from the reduced scale model testing at the EPFL Laboratory for Hydraulic Machines. The number of stall cells as well as the propagation speed corroborates the experiment.

  3. Biointerface dynamics--Multi scale modeling considerations.

    PubMed

    Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko

    2015-08-01

    The irreversible nature of the matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e. a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in the generation of resistance stress, which has a feedback impact on: (1) single and collective cell migrations, (2) cell deformation and orientation, (3) the decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect the various multi-scale modeling approaches, spanning a range of time and space scales, that have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon, which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing the resistance stress generated within the interface, and on that basis optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. The Binary System Laboratory Activities Based on Students Mental Model

    NASA Astrophysics Data System (ADS)

    Albaiti, A.; Liliasari, S.; Sumarna, O.; Martoprawiro, M. A.

    2017-09-01

    Generic science skills (GSS) are required to develop student conceptions in learning the binary system. The aim of this research was to determine the improvement of students' GSS through binary system laboratory activities based on their mental models, using the hypothetical-deductive learning cycle. It was a mixed-methods embedded experimental model research design. This research involved 15 students of a university in Papua, Indonesia. An essay test of 7 items was used to analyze the improvement of students' GSS. Each item was designed to interconnect the macroscopic, sub-microscopic and symbolic levels. Student worksheets were used to explore students' mental models during the investigation in the laboratory. The increase in students' GSS could be seen in the N-Gain of each GSS indicator. The results were then analyzed descriptively. Students' mental models and GSS improved through this study. Students interconnected the macroscopic and symbolic levels to explain binary system phenomena. Furthermore, they reconstructed their mental models by interconnecting the three levels of representation in Physical Chemistry. It is necessary to integrate the Physical Chemistry Laboratory into the Physical Chemistry course for effectiveness and efficiency.

  5. Empirical spatial econometric modelling of small scale neighbourhood

    NASA Astrophysics Data System (ADS)

    Gerkman, Linda

    2012-07-01

    The aim of the paper is to model small scale neighbourhood in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables. Especially variables capturing the small scale neighbourhood conditions are hard to find. If important explanatory variables are missing from the model, and the omitted variables are spatially autocorrelated and correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application on new house price data from Helsinki, Finland, we find motivation for a spatial Durbin model, estimate the model and interpret the estimates for the summary measures of impacts. The analysis shows that the model structure makes it possible to capture small scale neighbourhood effects that are known to exist but for which proper measurement variables are lacking.
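For reference, the spatial Durbin model extends a standard regression with spatial lags of both the dependent and the explanatory variables (the standard textbook form, not an equation reproduced from this paper):

```latex
y = \rho W y + X\beta + W X \theta + \varepsilon,
\qquad \varepsilon \sim N(0, \sigma^2 I_n)
```

where $W$ is the spatial weights matrix, so $Wy$ and $WX$ carry weighted averages over neighbouring observations; it is the $WX$ terms that let the model absorb omitted small-scale neighbourhood conditions.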

  6. Crystal Model Kits for Use in the General Chemistry Laboratory.

    ERIC Educational Resources Information Center

    Kildahl, Nicholas J.; And Others

    1986-01-01

    Dynamic crystal model kits are described. Laboratory experiments in which students use these kits to build models have been extremely successful in providing them with an understanding of the three-dimensional structures of the common cubic unit cells as well as hexagonal and cubic closest-packing of spheres. (JN)

  7. Genome-scale biological models for industrial microbial systems.

    PubMed

    Xu, Nan; Ye, Chao; Liu, Liming

    2018-04-01

    The primary aims and challenges associated with microbial fermentation include achieving faster cell growth, higher productivity, and more robust production processes. Genome-scale biological models, representing the interactions among genetic material, enzymes, and metabolites, constitute a systematic and comprehensive platform to analyze and optimize microbial growth and the production of biological products. Genome-scale biological models can help optimize microbial growth-associated traits by simulating biomass formation, predicting growth rates, and identifying the requirements for cell growth. With regard to microbial product biosynthesis, genome-scale biological models can be used to design product biosynthetic pathways, accelerate production efficiency, and reduce metabolic side effects, leading to improved production performance. The present review discusses the development of microbial genome-scale biological models since their emergence and emphasizes their pertinent application in improving industrial microbial fermentation of biological products.
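The workhorse computation on genome-scale metabolic models is flux balance analysis: a linear program that maximizes a biomass flux subject to steady-state mass balance. A toy sketch (the two-metabolite network and all bounds are invented for illustration, not taken from any curated model):

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis: maximize biomass flux subject to S @ v = 0.
# Hypothetical 2-metabolite, 4-reaction network:
#   v0: uptake -> A     v1: A -> B     v2: B -> biomass     v3: B -> byproduct
S = np.array([
    [1.0, -1.0,  0.0,  0.0],   # metabolite A balance
    [0.0,  1.0, -1.0, -1.0],   # metabolite B balance
])
bounds = [(0.0, 10.0), (0.0, None), (0.0, None), (0.0, None)]  # uptake capped at 10
c = np.array([0.0, 0.0, -1.0, 0.0])   # linprog minimizes, so negate the biomass flux

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
biomass_flux = res.x[2]   # the optimum routes all uptake into biomass
```

Real genome-scale models apply exactly this formulation, only with thousands of reactions and curated stoichiometry.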

  8. Evaluation of the laboratory mouse model for screening topical mosquito repellents.

    PubMed

    Rutledge, L C; Gupta, R K; Wirtz, R A; Buescher, M D

    1994-12-01

    Eight commercial repellents were tested against Aedes aegypti 0 and 4 h after application in serial dilution to volunteers and laboratory mice. Results were analyzed by multiple regression of percentage of biting (probit scale) on dose (logarithmic scale) and time. Empirical correction terms for conversion of values obtained in tests on mice to values expected in tests on human volunteers were calculated from data obtained on 4 repellents and evaluated with data obtained on 4 others. Corrected values from tests on mice did not differ significantly from values obtained in tests on volunteers. Test materials used in the study were dimethyl phthalate, butopyronoxyl, butoxy polypropylene glycol, MGK Repellent 11, deet, ethyl hexanediol, Citronyl, and dibutyl phthalate.
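The probit-on-log-dose analysis described here can be sketched as a probit transform of the biting percentages followed by linear regression; the dose values and percentages below are invented for illustration, and time is held fixed, whereas the study also regressed on time:

```python
import numpy as np
from scipy.stats import norm

# Invented dose-response data: percentage of mosquitoes biting at each dose
dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0])          # e.g. mg/cm^2, illustrative
pct_biting = np.array([90.0, 75.0, 50.0, 25.0, 10.0])

# Probit transform of the response, regressed on log10(dose)
probit = norm.ppf(pct_biting / 100.0)
slope, intercept = np.polyfit(np.log10(dose), probit, 1)

# Dose giving 50% biting (probit = 0): log10(ED50) = -intercept / slope
ed50 = 10 ** (-intercept / slope)
```

The empirical correction terms in the study would then be offsets applied to such fits to map mouse-derived values onto human-volunteer values.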

  9. 2017 GTO Project review Laboratory Evaluation of EGS Shear Stimulation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Stephen J.

    The objectives and purpose of this research have been to produce laboratory-based experimental and numerical analyses that provide a physics-based understanding of shear stimulation phenomena (hydroshearing) and their evolution during stimulation. Water was flowed along fractures in hot, stressed fractured rock to promote slip. The controlled laboratory experiments provide a high-resolution, high-quality data resource for evaluating analysis methods developed by DOE to assess EGS behavior during this stimulation process. Some segments of the experimental program provide data sets for model input parameters, i.e., material properties, while other segments represent small-scale physical models of an EGS system that may themselves be modeled. The coupled lab/analysis project has been a study of the response of a fracture in hot, water-saturated fractured rock to shear stress during fluid flow. Under this condition, the fracture experiences a combination of potential pore pressure changes and fracture surface cooling, resulting in slip along the fracture. The laboratory work provides a means to assess the role of "hydroshearing" in permeability enhancement during reservoir stimulation. The laboratory experiments and results were used to define boundary and input/output conditions of pore pressure, thermal stress, fracture shear deformation, and fluid flow, and models were developed and simulations completed by the University of Oklahoma team. The analysis methods are ones used on field-scale problems, and the numerical models developed contain parameters present in the field. The analysis results provide insight into the role of fracture slip ("hydroshear") in permeability enhancement. The work will provide valuable input data to evaluate stimulation models, thus helping design effective EGS.

  10. Emergent dynamics of laboratory insect swarms

    NASA Astrophysics Data System (ADS)

    Kelley, Douglas H.; Ouellette, Nicholas T.

    2013-01-01

    Collective animal behaviour occurs at nearly every biological size scale, from single-celled organisms to the largest animals on earth. It has long been known that models with simple interaction rules can reproduce qualitative features of this complex behaviour. But determining whether these models accurately capture the biology requires data from real animals, which has historically been difficult to obtain. Here, we report three-dimensional, time-resolved measurements of the positions, velocities, and accelerations of individual insects in laboratory swarms of the midge Chironomus riparius. Even though the swarms do not show an overall polarisation, we find statistical evidence for local clusters of correlated motion. We also show that the swarms display an effective large-scale potential that keeps individuals bound together, and we characterize the shape of this potential. Our results provide quantitative data against which the emergent characteristics of animal aggregation models can be benchmarked.

  11. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also reveal additional uncertainty, even more in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional area-wide basis. To gain more knowledge about challenges associated with the up-scaling of multi-variable flood loss models the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale case study based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like BT-FLEMO used in this study, which inherently provide uncertainty information, is the way forward.
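A uni-variable stage-damage function of the kind the multi-variable models are compared against maps inundation depth to a relative loss; a minimal sketch with invented depth/loss pairs (not official FLEMO or SAB values):

```python
import numpy as np

# Illustrative depth / relative-loss pairs for one building class (invented values)
depth_m = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
rel_loss = np.array([0.0, 0.12, 0.25, 0.45, 0.60])

def stage_damage(depth):
    """Piecewise-linear stage-damage function; flat beyond the last support point."""
    return np.interp(depth, depth_m, rel_loss)

# Absolute loss for a building worth 200,000 inundated to 1.5 m
loss = 200_000 * stage_damage(1.5)
```

Multi-variable models replace the single depth predictor with additional variables (building type, precaution, contamination), which is what improves the estimates at the cost of harder regional parameter estimation.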

  12. Laboratory Observations of Dune Erosion

    NASA Astrophysics Data System (ADS)

    Maddux, T. B.; Ruggiero, P.; Palmsten, M.; Holman, R.; Cox, D. T.

    2006-12-01

    Coastal dunes are an important feature along many coastlines, owing to their contribution to the sediment supply, their use as habitat, and their ability to protect onshore resources from wave attack. Accurate predictions of the erosion and overtopping rates of these features are needed to develop improved responses to coastal dune damage events and to determine the likelihood and magnitude of future erosion and overtopping on different beaches. We have conducted a large-scale laboratory study at Oregon State University's O.H. Hinsdale Wave Research Laboratory (HWRL) with the goal of producing a comprehensive, near prototype-scale, physical model data set of hydrodynamics, sediment transport, and morphological evolution during extreme dune erosion events. The two goals of this work are (1) to develop a better understanding of swash/dune dynamics and (2) to evaluate and guide further development of dune erosion models. We present initial results from the first phase of the experimental program. An initial beach and dune profile was selected based on field LIDAR observations of various U.S. east coast and Gulf coast dune systems. The laboratory beach was brought to equilibrium with pre-storm random wave conditions and subsequently subjected to attack from steadily increasing water levels and offshore wave heights. Observations include inner surf zone and swash free-surface elevations and velocities, as well as wave-by-wave estimates of topographical change at high spatial resolution through the use of stereo video imagery. Future work will include studies of fluid overtopping of the dune and sediment overwash, and an assessment of the resilience of man-made "push-up" dunes to wave attack in comparison with their more-compacted "natural" cousins.

  13. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists who are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple runtime environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all of its dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that previously took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and

  14. Cross-polarization microwave radar return at severe wind conditions: laboratory model and geophysical model function.

    NASA Astrophysics Data System (ADS)

    Troitskaya, Yuliya; Abramov, Victor; Ermoshkin, Alexey; Zuikova, Emma; Kazakov, Vassily; Sergeev, Daniil; Kandaurov, Alexandr

    2014-05-01

    Satellite remote sensing is one of the main techniques for monitoring severe weather conditions over the ocean. The principal difficulty of the existing algorithms for retrieving wind, which are based on the dependence of the microwave backscattering cross-section on wind speed (the Geophysical Model Function, GMF), is its saturation at winds exceeding 25-30 m/s. Recently, analysis of dual- and quad-polarization C-band radar returns measured from the satellite Radarsat-2 suggested that the cross-polarized radar return has much higher sensitivity to wind speed than co-polarized backscattering [1] and retains its sensitivity to wind speed at hurricane conditions [2]. Since complete collocation of these data was not possible, and the time difference between flight legs and SAR image acquisition was up to 3 hours, the two data sets were compared in [2] only statistically. The main purpose of this paper is to investigate the functional dependence of the cross-polarized radar cross-section on wind speed in a laboratory experiment. Since the cross-polarized radar return is formed by scattering at small-scale structures of the air-sea interface (short-crested waves, foam, sprays, etc.), which are well reproduced in laboratory conditions, an approach based on a laboratory experiment on radar scattering of microwaves at the water surface under hurricane wind is feasible. The experiments were performed in the wind-wave flume located on top of the Large Thermostratified Tank of the Institute of Applied Physics, where the airflow was produced in a flume with a straight working part of 10 m and an operating cross-section of 0.40 × 0.40 m; the axis velocity can be varied from 5 to 25 m/s. Microwave measurements were carried out by a coherent Doppler X-band (3.2 cm) scatterometer with sequential reception of linear polarizations. The experiments confirmed the higher sensitivity of the cross-polarized radar return to wind speed.
Simultaneously, parameters of the air flow in the turbulent boundary layer

  15. The generation and amplification of intergalactic magnetic fields in analogue laboratory experiments with high power lasers

    NASA Astrophysics Data System (ADS)

    Gregori, G.; Reville, B.; Miniati, F.

    2015-11-01

    The advent of high-power laser facilities has, in the past two decades, opened a new field of research where astrophysical environments can be scaled down to laboratory dimensions, while preserving the essential physics. This is due to the invariance of the equations of magneto-hydrodynamics under a class of similarity transformations. Here we review the relevant scaling relations and their application in laboratory astrophysics experiments, with a focus on the generation and amplification of magnetic fields in cosmic environments. The standard model for the origin of magnetic fields is a multi-stage process whereby a vanishing magnetic seed is first generated by a rotational electric field and is then amplified by turbulent dynamo action to the characteristic values observed in astronomical bodies. We thus discuss the relevant seed-generation mechanisms in cosmic environments, including resistive mechanisms and collisionless and fluid instabilities, as well as novel laboratory experiments using high-power laser systems aimed at investigating the amplification of magnetic energy by magneto-hydrodynamic (MHD) turbulence. Future directions, including efforts to model in the laboratory the process of diffusive shock acceleration, are also discussed, with an emphasis on the potential of laboratory experiments to further our understanding of plasma physics on cosmic scales.

  16. Measurement of unsaturated hydraulic properties and evaluation of property-transfer models for deep sedimentary interbeds, Idaho National Laboratory, Idaho

    USGS Publications Warehouse

    Perkins, Kimberlie; Johnson, Brittany D.; Mirus, Benjamin B.

    2014-01-01

    During 2013–14, the USGS, in cooperation with the U.S. Department of Energy, focused on further characterization of the sedimentary interbeds below the future site of the proposed Remote Handled Low-Level Waste (RHLLW) facility, which is intended for the long-term storage of low-level radioactive waste. Twelve core samples from the sedimentary interbeds were collected from a borehole near the proposed facility for laboratory analysis of hydraulic properties, which also allowed further testing of the property-transfer modeling approach. For each core sample, the steady-state centrifuge method was used to measure relations between matric potential, saturation, and conductivity. These laboratory measurements were compared to water-retention and unsaturated hydraulic conductivity parameters estimated using the established property-transfer models. For each core sample obtained, the agreement between measured and estimated hydraulic parameters was evaluated quantitatively using the Pearson correlation coefficient (r). The highest correlation is for saturated hydraulic conductivity (Ksat), with an r value of 0.922. The saturated water content (θsat) also exhibits a strong linear correlation, with an r value of 0.892. The curve shape parameter (λ) has an r value of 0.731, whereas the curve scaling parameter (ψo) has the lowest r value, 0.528. The r values demonstrate that model predictions correspond well to the laboratory-measured properties for most parameters, which supports the value of extending this approach for quantifying unsaturated hydraulic properties at various sites throughout INL.
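
    The Pearson correlation coefficient used above to compare measured and model-estimated parameters follows directly from its definition; a small sketch with hypothetical core-sample values (not the report's data):

    ```python
    # Pearson r between paired measured and estimated values,
    # computed from the definition cov(x, y) / (sd(x) * sd(y)).
    import math

    def pearson_r(x, y):
        """Pearson correlation coefficient between paired samples."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical measured vs. model-estimated log10(Ksat) for five cores
    measured = [-5.1, -4.3, -6.0, -4.8, -5.5]
    estimated = [-5.0, -4.5, -5.8, -4.9, -5.6]
    r = pearson_r(measured, estimated)
    ```

    An r near 1 (as for Ksat and θsat above) indicates a strong linear relation between measurement and model estimate; values nearer 0.5 (as for ψo) indicate a much weaker one.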

  17. Scaling of metabolic rate on body mass in small laboratory mammals

    NASA Technical Reports Server (NTRS)

    Pace, N.; Rahlmann, D. F.; Smith, A. H.

    1980-01-01

    The scaling of metabolic heat production rate on body mass is investigated for five species of small laboratory mammal in order to guide the selection of animals with metabolic rates and a size range appropriate for measuring changes in the scaling relationship upon exposure to weightlessness in a Shuttle/Spacelab experiment. Metabolic rates were determined from oxygen consumption and carbon dioxide production for individual male and female Swiss-Webster mice, Syrian hamsters, Simonsen albino rats, Hartley guinea pigs, and New Zealand white rabbits, which range from 0.05 to 5 kg in mature body mass, at ages of 1, 2, 3, 5, 8, 12, 18, and 24 months. The metabolic intensity, defined as the heat produced per hour per kg body mass, is found to decrease dramatically with age until the animals are 6 to 8 months old, with little or no sex difference. When plotted on a logarithmic graph, the relation of metabolic rate to total body mass is found to obey a power law with index 0.676, which differs significantly from the classical value of 0.75. When the values for the mice are removed, however, an index of 0.749 is obtained. It is thus proposed that six male animals, 8 months of age, of each of the four remaining species be used to study the effects of gravitational loading on the metabolic energy requirements of terrestrial animals.
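
    The reported indices (0.676, 0.749) are exponents b of an allometric power law M = a·m^b, conventionally obtained as the slope of a least-squares line fitted to log M versus log m. A sketch with noise-free synthetic data (not the paper's measurements):

    ```python
    # Recover an allometric exponent b in M = a * m**b by linear least
    # squares on log-transformed data. The data below are synthetic,
    # generated to follow Kleiber-like 0.75 scaling exactly.
    import math

    def fit_power_law(mass, rate):
        """Return (a, b) from least squares on log(rate) vs log(mass)."""
        xs = [math.log(m) for m in mass]
        ys = [math.log(r) for r in rate]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        log_a = my - b * mx
        return math.exp(log_a), b

    masses = [0.05, 0.1, 0.4, 1.0, 5.0]        # kg, mouse- to rabbit-sized
    rates = [3.4 * m ** 0.75 for m in masses]  # synthetic, noise-free
    a, b = fit_power_law(masses, rates)
    ```

    With real measurements the fitted b carries uncertainty, which is why removing one species (the mice) can shift the estimate from 0.676 to 0.749.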

  18. Structural similitude and design of scaled down laminated models

    NASA Technical Reports Server (NTRS)

    Simitses, G. J.; Rezaeepazhand, J.

    1993-01-01

    The excellent mechanical properties of laminated composite structures make them prime candidates for a wide variety of applications in aerospace, mechanical, and other branches of engineering. The enormous design flexibility of advanced composites is obtained at the cost of a large number of design parameters. Due to the complexity of these systems and the lack of complete design-based information, designers tend to be conservative in their designs. Furthermore, any new design is extensively evaluated experimentally until it achieves the necessary reliability, performance, and safety. However, experimental evaluation of composite structures is costly and time consuming. Consequently, it is extremely useful if a full-scale structure can be replaced by a similar scaled-down model which is much easier to work with. Furthermore, a dramatic reduction in cost and time can be achieved if available experimental data for a specific structure can be used to predict the behavior of a group of similar systems. This study investigates problems associated with the design of scaled models. Such a study is important because it provides the necessary scaling laws and identifies the factors that affect the accuracy of scale models. Similitude theory is employed to develop the necessary similarity conditions (scaling laws). Scaling laws provide the relationship between a full-scale structure and its scale model, and can be used to extrapolate the experimental data of a small, inexpensive, and testable model into design information for a large prototype. Due to the large number of design parameters, identification of the principal scaling laws by the conventional method (dimensional analysis) is tedious. Similitude theory based on the governing equations of the structural system is more direct and simpler in execution. The difficulty of making completely similar scale models often leads to accepting a certain type of distortion from exact duplication of the prototype (partial similarity). Both complete and partial
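
    As a generic illustration of how such scaling laws are used (an isotropic-beam example, not the paper's composite-laminate similarity conditions): for bending vibration of a beam of thickness t and length L, natural frequency scales as f ∝ (t/L²)·√(E/ρ), so a frequency measured on the model extrapolates to the prototype:

    ```python
    # Similitude sketch: scale a model's measured natural frequency up
    # to the prototype using f ~ (t / L**2) * sqrt(E / rho).
    # Numbers below are illustrative, not from the study.
    import math

    def predicted_prototype_freq(f_model, scale_L, scale_t,
                                 scale_E=1.0, scale_rho=1.0):
        """Extrapolate a model frequency to the prototype.

        Each scale_X is prototype_value / model_value for that parameter.
        """
        return f_model * (scale_t / scale_L ** 2) * math.sqrt(scale_E / scale_rho)

    # 1/5-scale model of the same material, thickness also scaled by 1/5:
    # prototype frequency = 100 * (5 / 25) = 20 Hz
    f_proto = predicted_prototype_freq(f_model=100.0, scale_L=5.0, scale_t=5.0)
    ```

    A "partially similar" model is one in which some of these scale factors are deliberately distorted (e.g. thickness not scaled with length), and the scaling laws quantify the resulting prediction error.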

  19. Full-scale laboratory validation of a wireless MEMS-based technology for damage assessment of concrete structures

    NASA Astrophysics Data System (ADS)

    Trapani, Davide; Zonta, Daniele; Molinari, Marco; Amditis, Angelos; Bimpas, Matthaios; Bertsch, Nicolas; Spiering, Vincent; Santana, Juan; Sterken, Tom; Torfs, Tom; Bairaktaris, Dimitris; Bairaktaris, Manos; Camarinopulos, Stefanos; Frondistou-Yannas, Mata; Ulieru, Dumitru

    2012-04-01

    This paper illustrates an experimental campaign conducted under laboratory conditions on a full-scale reinforced concrete three-dimensional frame instrumented with wireless sensors developed within the Memscon project. In particular, it describes the assumptions on which the experimental campaign was based, the design of the structure, the laboratory setup, and the results of the tests. The aim of the campaign was to validate the performance of the Memscon sensing systems, consisting of wireless accelerometers and strain sensors, on a real concrete structure during construction and under an actual earthquake. Another aspect of interest was to assess the effectiveness of the full damage recognition procedure based on the data recorded by the sensors, and the reliability of the Decision Support System (DSS) developed to provide stakeholders with recommendations for building rehabilitation and estimates of its cost. To these ends, a Eurocode 8 spectrum-compatible accelerogram with increasing amplitude was applied at the top of an instrumented concrete frame built in the laboratory. MEMSCON sensors were directly compared with wired instruments, based on devices available on the market and taken as references, during both construction and seismic simulation.

  20. Laboratory modeling of dust impact detection by the Cassini spacecraft

    NASA Astrophysics Data System (ADS)

    Nouzák, L.; Hsu, S.; Malaspina, D.; Thayer, F. M.; Ye, S.-Y.; Pavlů, J.; Němeček, Z.; Šafránková, J.; Sternovsky, Z.

    2018-07-01

    The paper presents laboratory investigations of the response of a scaled-down model of the Cassini spacecraft to impacts of submicron iron grains accelerated to velocities of 5-25 km/s. The aim of the study is to aid in the detailed analysis and interpretation of signals provided by the RPWS (Radio and Plasma Wave Science) instrument that were attributed to dust impacts on the RPWS antennas or the spacecraft body. The paper describes the experimental set-up, discusses its limitations, and presents the first results. Both monopole and dipole antenna configurations are investigated. We demonstrate that the amplitude and polarity of the impulse signals recorded by the antenna amplifiers depend on the voltages applied to the antennas or the spacecraft body, and we briefly introduce the mechanism leading to the signal generation. The experimental results support the recent suggestion by Ye et al. (2016) that antennas operated in dipole mode are largely insensitive to dust impacts on the spacecraft body. The pre-peak phenomenon commonly observed in space is also reproduced in the measurements and explained as the charge induced on the antenna by the impact plasma cloud, which becomes non-neutral due to the escape of the faster electrons.

  1. Scale Model Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Canacci, Victor A.

    1997-01-01

    NASA Lewis Research Center's Icing Research Tunnel (IRT) is the world's largest refrigerated wind tunnel and one of only three icing wind tunnel facilities in the United States. The IRT was constructed in the 1940s and has been operated continually since it was built. In this facility, natural icing conditions are duplicated to test the effects of inflight icing on actual aircraft components as well as on models of airplanes and helicopters. IRT tests have been used successfully to reduce flight test hours for the certification of ice-detection instrumentation and ice protection systems. To ensure that the IRT will remain the world's premier icing facility well into the next century, Lewis is making some renovations and is planning others. These improvements include modernizing the control room, replacing the fan blades with new ones to increase the test section maximum velocity to 430 mph, installing new spray bars to increase the size and uniformity of the artificial icing cloud, and replacing the facility heat exchanger. Most of the improvements will have a first-order effect on the IRT's airflow quality. To help us understand these effects and evaluate potential improvements to the flow characteristics of the IRT, we built a modular 1/10th-scale aerodynamic model of the facility. This closed-loop scale-model pilot tunnel was fabricated onsite in the various shops of Lewis' Fabrication Support Division. The tunnel's rectangular sections are composed of acrylic walls supported by an aluminum angle framework. Its turning vanes are made of tubing machined to the contour of the IRT turning vanes. The fan leg of the tunnel, which transitions from rectangular to circular and back to rectangular cross sections, is fabricated of fiberglass sections. The contraction section of the tunnel is constructed from sheet aluminum. A 12-bladed aluminum fan is coupled to a turbine powered by high-pressure air capable of driving the maximum test section velocity to 550 ft

  2. Acoustic characteristics of 1/20-scale model helicopter rotors

    NASA Technical Reports Server (NTRS)

    Shenoy, Rajarama K.; Kohlhepp, Fred W.; Leighton, Kenneth P.

    1986-01-01

    A wind tunnel test to study the effects of geometric scale on acoustics and to investigate the applicability of very small scale models for the study of the acoustic characteristics of helicopter rotors was conducted in the United Technologies Research Center Acoustic Research Tunnel. The results show that Reynolds number effects significantly alter the blade-vortex interaction (BVI) noise characteristics by enhancing the lower-frequency content and suppressing the higher-frequency content. In the time domain, this is observed as an inverted thickness-noise impulse rather than the typical positive-negative impulse of BVI noise. At higher advance ratio conditions, in the absence of BVI, the acoustic trends of the 1/20-scale model with Mach number follow those of larger scale models. However, the 1/20-scale model acoustic trends appear to indicate stall at higher thrust and advance ratio conditions.

  3. Pore-scale modeling of moving contact line problems in immiscible two-phase flow

    NASA Astrophysics Data System (ADS)

    Kucala, Alec; Noble, David; Martinez, Mario

    2016-11-01

    Accurate modeling of moving contact line (MCL) problems is imperative in predicting capillary pressure vs. saturation curves, permeability, and preferential flow paths for a variety of applications, including geological carbon storage (GCS) and enhanced oil recovery (EOR). Here, we present a model for the moving contact line using pore-scale computational fluid dynamics (CFD) which solves the full, time-dependent Navier-Stokes equations using the Galerkin finite-element method. The MCL is modeled as a surface traction force proportional to the surface tension, dependent on the static properties of the immiscible fluid/solid system. We present a variety of verification test cases for simple two- and three-dimensional geometries to validate the current model, including threshold pressure predictions in flows through pore-throats for a variety of wetting angles. Simulations involving more complex geometries are also presented to be used in future simulations for GCS and EOR problems. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  4. Infrared radiation models for atmospheric methane

    NASA Technical Reports Server (NTRS)

    Cess, R. D.; Kratz, D. P.; Caldwell, J.; Kim, S. J.

    1986-01-01

    Mutually consistent line-by-line, narrow-band and broad-band infrared radiation models are presented for methane, a potentially important anthropogenic trace gas within the atmosphere. Comparisons of the modeled band absorptances with existing laboratory data produce the best agreement when, within the band models, spurious band intensities are used which are consistent with the respective laboratory data sets, but which are not consistent with current knowledge concerning the intensity of the infrared fundamental band of methane. This emphasizes the need for improved laboratory band absorptance measurements. Since, when applied to atmospheric radiation calculations, the line-by-line model does not require the use of scaling approximations, the mutual consistency of the band models provides a means of appraising the accuracy of scaling procedures. It is shown that Curtis-Godson narrow-band and Chan-Tien broad-band scaling provide accurate means of accounting for atmospheric temperature and pressure variations.

  5. Mimicking the oxygen minimum zones: stimulating interaction of aerobic archaeal and anaerobic bacterial ammonia oxidizers in a laboratory-scale model system

    PubMed Central

    Yan, Jia; Haaijer, Suzanne C M; Op den Camp, Huub J M; Niftrik, Laura; Stahl, David A; Könneke, Martin; Rush, Darci; Sinninghe Damsté, Jaap S; Hu, Yong Y; Jetten, Mike S M

    2012-01-01

    In marine oxygen minimum zones (OMZs), ammonia-oxidizing archaea (AOA) rather than marine ammonia-oxidizing bacteria (AOB) may provide nitrite to anaerobic ammonium-oxidizing (anammox) bacteria. Here we demonstrate the cooperation between marine anammox bacteria and nitrifiers in a laboratory-scale model system under oxygen limitation. A bioreactor containing ‘Candidatus Scalindua profunda’ marine anammox bacteria was supplemented with AOA (Nitrosopumilus maritimus strain SCM1) cells and limited amounts of oxygen. In this way a stable mixed culture of AOA and anammox bacteria was established within 200 days, while a substantial number of endogenous AOB were also enriched. ‘Ca. Scalindua profunda’ and putative AOB and AOA morphologies were visualized by transmission electron microscopy, and a C18 anammox [3]-ladderane fatty acid was highly abundant in the oxygen-limited culture. The rapid oxygen consumption by AOA and AOB ensured that anammox activity was not affected. High expression of AOA, AOB and anammox genes encoding ammonium transport proteins was observed, likely caused by the increased competition for ammonium. The competition between AOA and AOB was found to be strongly related to the residual ammonium concentration based on amoA gene copy numbers. The abundance of archaeal amoA copy numbers increased markedly when the ammonium concentration was below 30 μM, finally resulting in almost equal abundance of AOA and AOB amoA copy numbers. Massive parallel sequencing of mRNA and activity analyses further corroborated the equal abundance of AOA and AOB. PTIO addition, which inhibits AOA activity, was employed to determine the relative contributions of AOB and AOA to ammonium oxidation. The present study provides the first direct evidence for cooperation of archaeal ammonia oxidation with anammox bacteria through the provision of nitrite and consumption of oxygen. PMID:23057688

  6. Impact of the time scale of model sensitivity response on coupled model parameter estimation

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu

    2017-11-01

    That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of the associated physics and the characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with the observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slowly varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with observations. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.

  7. Stress drop with constant, scale independent seismic efficiency and overshoot

    USGS Publications Warehouse

    Beeler, N.M.

    2001-01-01

    To model dissipated and radiated energy during earthquake stress drop, I calculate dynamic fault slip using a single degree of freedom spring-slider block and a laboratory-based static/kinetic fault strength relation with a dynamic stress drop proportional to effective normal stress. The model is scaled to earthquake size assuming a circular rupture; stiffness varies inversely with rupture radius, and rupture duration is proportional to radius. Calculated seismic efficiency, the ratio of radiated to total energy expended during stress drop, is in good agreement with laboratory and field observations. Predicted overshoot, a measure of how much the static stress drop exceeds the dynamic stress drop, is higher than previously published laboratory and seismic observations and fully elasto-dynamic calculations. Seismic efficiency and overshoot are constant, independent of normal stress and scale. Calculated variation of apparent stress with seismic moment resembles the observational constraints of McGarr [1999].
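
    A toy version of the spring-slider calculation can be sketched numerically: an instantaneous drop from static to kinetic strength drives slip against the spring, a radiation-damping term stands in for the radiated energy, and efficiency and overshoot follow from the energy balance at slip arrest. All parameter values and the damping coefficient below are arbitrary illustrations, not the paper's scaled quantities:

    ```python
    # Toy single-degree-of-freedom spring-slider with a static-to-kinetic
    # strength drop and a radiation-damping term (eta * v). Units are
    # arbitrary; this is a sketch of the model class, not Beeler's
    # earthquake-scaled calculation.

    def run_slider(tau_s=1.0, tau_k=0.6, k=1.0, m=1.0, eta=0.3,
                   dt=1e-4, max_steps=200_000):
        u, v = 0.0, 0.0          # slip and slip velocity
        e_rad = 0.0              # energy absorbed by the damper ("radiated")
        for _ in range(max_steps):
            # stress driving slip: initial stress unloaded by the spring,
            # resisted by kinetic strength and radiation damping
            force = (tau_s - k * u) - tau_k - eta * v
            v += (force / m) * dt
            if v <= 0.0:         # slip arrest: velocity returns to zero
                break
            u += v * dt
            e_rad += eta * v * v * dt
        e_fric = tau_k * u                      # frictional dissipation
        efficiency = e_rad / (e_rad + e_fric)   # radiated / total dissipated
        static_drop = k * u                     # total (static) stress drop
        dynamic_drop = tau_s - tau_k            # strength drop during slip
        overshoot = (static_drop - dynamic_drop) / dynamic_drop
        return efficiency, overshoot

    eff, over = run_slider()
    ```

    With these illustrative parameters the slider overshoots its static equilibrium (overshoot between 0 and 1) and radiates only a small fraction of the dissipated energy, qualitatively matching the low seismic efficiencies discussed above; both quantities are independent of how tau_s and tau_k are jointly rescaled, mirroring the scale independence in the abstract.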

  8. Fate of Salmonella Typhimurium in laboratory-scale drinking water biofilms.

    PubMed

    Schaefer, L M; Brözel, V S; Venter, S N

    2013-12-01

    Investigations were carried out to evaluate and quantify the colonization of laboratory-scale drinking water biofilms by a chromosomally green fluorescent protein (gfp)-tagged strain of Salmonella Typhimurium. The gfp gene encodes the green fluorescent protein and thus allows in situ detection of undisturbed cells, making it ideally suited for monitoring Salmonella in biofilms. The fate and persistence of non-typhoidal Salmonella in simulated drinking water biofilms were investigated, examining both the ability of Salmonella to form biofilms in monoculture and its fate and persistence in a mixed aquatic biofilm. In monoculture, S. Typhimurium formed loosely structured biofilms. Salmonella colonized established multi-species drinking water biofilms within 24 hours, forming micro-colonies within the biofilm. S. Typhimurium was also released at high levels from the drinking-water-associated biofilm into the water passing through the system. This indicated that Salmonella could enter into, survive and grow within, and be released from a drinking water biofilm. The ability of Salmonella to survive and persist in a drinking water biofilm, and to be released at high levels into the flow for recolonization elsewhere, indicates a potential persistent health risk to consumers once a network becomes contaminated with this bacterium.

  9. Hairy Root as a Model System for Undergraduate Laboratory Curriculum and Research

    ERIC Educational Resources Information Center

    Keyes, Carol A.; Subramanian, Senthil; Yu, Oliver

    2009-01-01

    Hairy root transformation has been widely adapted in plant laboratories to rapidly generate transgenic roots for biochemical and molecular analysis. We present hairy root transformations as a versatile and adaptable model system for a wide variety of undergraduate laboratory courses and research. This technique is easy, efficient, and fast making…

  10. Modeling and Laboratory Investigations of Radiative Shocks

    NASA Astrophysics Data System (ADS)

    Grun, Jacob; Laming, J. Martin; Manka, Charles; Moore, Christopher; Jones, Ted; Tam, Daniel

    2001-10-01

    Supernova remnants are often inhomogeneous, with knots or clumps of material expanding in ambient plasma. This structure may be initiated by hydrodynamic instabilities occurring during the explosion, but it may plausibly be amplified by instabilities of the expanding shocks such as, for example, corrugation instabilities described by D’yakov in 1954, Vishniac in 1983, and observed in the laboratory by Grun et al. in 1991. Shock instability can occur when radiation lowers the effective adiabatic index of the gas. In view of the difficulty of modeling radiation in non-equilibrium plasmas, and the dependence of shock instabilities on such radiation, we are performing a laboratory experiment to study radiative shocks. The shocks are generated in a miniature, laser-driven shock tube. The gas density inside the tube at any instant in time is measured using time and space-resolved interferometry, and the emission spectrum of the gas is measured with time-resolved spectroscopy. We simulate the experiment with a 1D code that models time dependent post-shock ionization and non-equilibrium radiative cooling. S. P. D’yakov, Zhurnal Eksperimentalnoi Teoreticheskoi Fiziki 27, 288 (1954); see also section 90 in L.D. Landau and E.M. Lifshitz, Fluid Mechanics (Butterworth-Heinemann 1987); E.T. Vishniac, Astrophys. J. 236, 880 (1983); J. Grun, et al., Phys. Rev. Lett., 66, 2738 (1991)

  11. Microphysics in the Multi-Scale Modeling Systems with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to those of cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE model), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, and land processes, as well as the explicit cloud-radiation and cloud-surface interactive processes, are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study heavy precipitation processes will be presented.

  12. Using Multi-Scale Modeling Systems to Study the Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2010-01-01

In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols will be presented. How the multi-satellite simulator can be used to improve precipitation processes will also be discussed.

  13. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    PubMed

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
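    The gap between simple linear scaling and landmark-driven scaling can be illustrated with a toy sketch (this is not the paper's SSM-based method; the landmark coordinates, functions, and the 10% stretch below are all hypothetical): fit a least-squares affine transform from template bone landmarks to subject landmarks, then map template muscle via/attachment points through it.

    ```python
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares affine transform mapping template landmarks to
        subject landmarks: find A, t such that dst ~ src @ A.T + t."""
        n = src.shape[0]
        X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
        M, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (4, 3) solution matrix
        return M[:3].T, M[3]                         # A is 3x3, t is length 3

    def scale_muscle_points(points, A, t):
        """Map template muscle path points into the subject's space."""
        return points @ A.T + t

    # Template bone landmarks, and the same landmarks on a 'subject' that is
    # the template stretched 10% along the bone (z) axis.
    template = np.array([[0., 0., 0.], [0., 0., 40.], [3., 0., 20.], [0., 4., 10.]])
    subject = template * np.array([1.0, 1.0, 1.1])

    A, t = fit_affine(template, subject)
    muscle_path = np.array([[1.0, 1.0, 5.0], [1.5, 1.0, 35.0]])
    print(scale_muscle_points(muscle_path, A, t))
    ```

    With four non-coplanar landmarks the affine fit is exactly determined, so the recovered transform is the pure anisotropic stretch; a real pipeline would use many SSM-reconstructed surface points and non-linear (e.g. radial basis function) warps instead.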

  14. Scale-up considerations for surface collecting agent assisted in-situ burn crude oil spill response experiments in the Arctic: Laboratory to field-scale investigations.

    PubMed

    Bullock, Robin J; Aggarwal, Srijan; Perkins, Robert A; Schnabel, William

    2017-04-01

In the event of a marine oil spill in the Arctic, government agencies, industry, and the public have a stake in the successful implementation of oil spill response. Because large spills are rare events, oil spill response techniques are often evaluated with laboratory and meso-scale experiments. The experiments must yield scalable information sufficient to understand the operability and effectiveness of a response technique under actual field conditions. Since in-situ burning augmented with surface collecting agents ("herders") is one of the few viable response options in ice-infested waters, a series of oil spill response experiments was conducted in Fairbanks, Alaska, in 2014 and 2015 to evaluate the use of herders to assist in-situ burning and the role of experimental scale. This study compares burn efficiency and herder application for three experimental designs for in-situ burning of Alaska North Slope crude oil in cold, fresh waters with ∼10% ice cover. The experiments were conducted in three project-specific constructed venues of varying scale (surface areas of approximately 0.09 square meters, 9 square meters, and 8100 square meters). The results from the herder-assisted in-situ burn experiments performed at these three different scales showed good experimental scale correlation and no negative impact on burn efficiency due to the presence of ice cover. Experimental conclusions are predominantly associated with application of the herder material and the usability of a given experiment scale for making response decisions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Toward Simplification of Dynamic Subgrid-Scale Models

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

We examine the relationship between the filter and the subgrid-scale (SGS) model for large-eddy simulations, in general, and for those with dynamic SGS models, in particular. From a review of the literature, it would appear that many practitioners of LES consider the link between the filter and the model more or less as a formality of little practical effect. In contrast, we will show that the filter and the model are intimately linked, that the Smagorinsky SGS model is appropriate only for filters of first or second order, and that the Smagorinsky model is inconsistent with spectral filters. Moreover, the Germano identity is shown to be both problematic and unnecessary for the development of dynamic SGS models. Its use obscures the following fundamental realization: For a suitably chosen filter, the computable resolved turbulent stresses, properly scaled, closely approximate the SGS stresses.
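    For reference, the Smagorinsky closure discussed above models the deviatoric part of the SGS stress with an eddy viscosity built from the filter width Δ and the resolved strain rate:

    ```latex
    \tau_{ij} - \tfrac{1}{3}\delta_{ij}\tau_{kk} = -2\,\nu_t\,\bar{S}_{ij},
    \qquad
    \nu_t = (C_s \Delta)^2\,|\bar{S}|,
    \qquad
    |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
    ```

    and the abstract's point is that the pairing of this closure with a given filter is not a formality: the model is argued to be consistent only with low-order filters, not spectral ones.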

  16. The Rat Grimace Scale: A partially automated method for quantifying pain in the laboratory rat via facial expressions

    PubMed Central

    2011-01-01

    We recently demonstrated the utility of quantifying spontaneous pain in mice via the blinded coding of facial expressions. As the majority of preclinical pain research is in fact performed in the laboratory rat, we attempted to modify the scale for use in this species. We present herein the Rat Grimace Scale, and show its reliability, accuracy, and ability to quantify the time course of spontaneous pain in the intraplantar complete Freund's adjuvant, intraarticular kaolin-carrageenan, and laparotomy (post-operative pain) assays. The scale's ability to demonstrate the dose-dependent analgesic efficacy of morphine is also shown. In addition, we have developed software, Rodent Face Finder®, which successfully automates the most labor-intensive step in the process. Given the known mechanistic dissociations between spontaneous and evoked pain, and the primacy of the former as a clinical problem, we believe that widespread adoption of spontaneous pain measures such as the Rat Grimace Scale might lead to more successful translation of basic science findings into clinical application. PMID:21801409

  17. Incorporating microbes into large-scale biogeochemical models

    NASA Astrophysics Data System (ADS)

    Allison, S. D.; Martiny, J. B.

    2008-12-01

    Micro-organisms, including Bacteria, Archaea, and Fungi, control major processes throughout the Earth system. Recent advances in microbial ecology and microbiology have revealed an astounding level of genetic and metabolic diversity in microbial communities. However, a framework for interpreting the meaning of this diversity has lagged behind the initial discoveries. Microbial communities have yet to be included explicitly in any major biogeochemical models in terrestrial ecosystems, and have only recently broken into ocean models. Although simplification of microbial communities is essential in complex systems, omission of community parameters may seriously compromise model predictions of biogeochemical processes. Two key questions arise from this tradeoff: 1) When and where must microbial community parameters be included in biogeochemical models? 2) If microbial communities are important, how should they be simplified, aggregated, and parameterized in models? To address these questions, we conducted a meta-analysis to determine if microbial communities are sensitive to four environmental disturbances that are associated with global change. In all cases, we found that community composition changed significantly following disturbance. However, the implications for ecosystem function were unclear in most of the published studies. Therefore, we developed a simple model framework to illustrate the situations in which microbial community changes would affect rates of biogeochemical processes. We found that these scenarios could be quite common, but powerful predictive models cannot be developed without much more information on the functions and disturbance responses of microbial taxa. Small-scale models that explicitly incorporate microbial communities also suggest that process rates strongly depend on microbial interactions and disturbance responses. The challenge is to scale up these models to make predictions at the ecosystem and global scales based on measurable

  18. Dynamic subfilter-scale stress model for large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Rouhi, A.; Piomelli, U.; Geurts, B. J.

    2016-08-01

We present a modification of the integral length-scale approximation (ILSA) model originally proposed by Piomelli et al. [Piomelli et al., J. Fluid Mech. 766, 499 (2015), 10.1017/jfm.2015.29] and apply it to plane channel flow and a backward-facing step. In the ILSA models the length scale is expressed in terms of the integral length scale of turbulence and is determined by the flow characteristics, decoupled from the simulation grid. In the original formulation the model coefficient was constant, determined by requiring a desired global contribution of the unresolved subfilter scales (SFSs) to the dissipation rate, known as SFS activity; its value was found by a set of coarse-grid calculations. Here we develop two modifications. We define a measure of SFS activity (based on turbulent stresses), which adds to the robustness of the model, particularly at high Reynolds numbers, and removes the need for the prior coarse-grid calculations: The model coefficient can be computed dynamically and adapt to large-scale unsteadiness. Furthermore, the desired level of SFS activity is now enforced locally (and not integrated over the entire volume, as in the original model), providing better control over model activity and also improving the near-wall behavior of the model. Application of the local ILSA to channel flow and a backward-facing step and comparison with the original ILSA and with the dynamic model of Germano et al. [Germano et al., Phys. Fluids A 3, 1760 (1991), 10.1063/1.857955] show better control over the model contribution in the local ILSA, while the positive properties of the original formulation (including its higher accuracy compared to the dynamic model on coarse grids) are maintained. The backward-facing step also highlights the advantage of the decoupling of the model length scale from the mesh.
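    Schematically (a sketch of the idea only, not the exact formulation of either ILSA variant), the SFS eddy viscosity is built on an integral length scale estimated from the resolved turbulence rather than from the grid spacing:

    ```latex
    \nu_{\mathrm{sfs}} = \left(C_0\,L_{\mathrm{int}}\right)^2 |\bar{S}|,
    \qquad
    L_{\mathrm{int}} \sim \frac{k_{\mathrm{res}}^{3/2}}{\varepsilon},
    ```

    with the coefficient C_0 chosen so that a measure of SFS activity (in the modified model, a local stress-based one) matches a prescribed target, which is what decouples the model length scale from the mesh.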

  19. Multi-scale Modeling of Plasticity in Tantalum.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Hojun; Battaile, Corbett Chandler.; Carroll, Jay

In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model against experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications.
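    Kink-pair-controlled slip in BCC metals is commonly written as a thermally activated flow rule of Kocks type; the report's calibrated law may differ in detail, so the form below is only indicative:

    ```latex
    \dot{\gamma} = \dot{\gamma}_0
    \exp\!\left[-\frac{\Delta H_0}{k_B T}
    \left(1 - \left(\frac{\tau_{\mathrm{eff}}}{\tau_P}\right)^{p}\right)^{q}\right],
    \qquad 0 < p \le 1,\quad 1 \le q \le 2,
    ```

    where τ_P is the Peierls stress, ΔH_0 the kink-pair formation enthalpy, and τ_eff the effective resolved shear stress; the explicit T and τ dependence is what produces the temperature- and strain-rate-dependent yield stresses described above.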

  20. Unraveling the physics of magnetic reconnection: the interaction of laboratory and space observations with models

    NASA Astrophysics Data System (ADS)

    Drake, James

    2017-10-01

    Reconnection leads to impulsive conversion of magnetic energy into high-speed flows, plasma heating and the production of energetic particles. A major challenge has been to account for the enormous range of spatial scales in systems undergoing reconnection. Progress on the topic has been facilitated by the observations in space and the laboratory with models bridging the divide. Understanding the mechanisms for fast reconnection is a historical example. However, in this talk I will focus on reconnection in asymmetric systems - those with large ambient gradients in the pressure or density. The interest in the topic has been driven by efforts to understand when and where reconnection takes place in the laboratory (tokamaks) and in space (planetary magnetospheres and the solar wind). Ideas on reconnection suppression due to diamagnetic drifts have produced a unified picture of the conditions required for reconnection onset over a wide range of environments. Observations from the MMS mission have provided an extraordinary window into reconnection at the Earth's magnetopause, including the mechanisms for magnetic energy dissipation and the role of turbulence. Finally, the prospects for establishing the mechanisms for energetic particle production will be addressed.

  1. The Role of Slope in the Fill and Spill Process of Linked Submarine Minibasins. Model Validation and Numerical Runs at Laboratory Scale.

    NASA Astrophysics Data System (ADS)

    Bastianon, E.; Viparelli, E.; Cantelli, A.; Imran, J.

    2015-12-01

Primarily motivated by applications to hydrocarbon exploration, submarine minibasins have been widely studied during recent decades to understand the physical phenomena that characterize their fill process. Minibasins have been identified in seismic records in the Gulf of Mexico, Angola, Trinidad and Tobago, Ireland, and Nigeria, and also in outcrops (e.g., Tres Pasos Formation, southern Chile). The filling of minibasins is generally described as the 'fill-and-spill' process, i.e. turbidity currents enter, are reflected on the minibasin flanks, pond, and deposit suspended sediment. As the minibasin fills, the turbidity current spills over the lowermost zone of the basin flank - the spill point - and starts filling the next basin downdip. Different versions of this simplified model have been used to interpret field and laboratory data, but it is still unclear how the minibasin size relative to the magnitude of the turbidity currents, the position of each basin in the system, and the slope of the minibasin system affect the characteristics of the deposit (e.g., geometry, grain size). Here, we conduct a numerical study to investigate how the 'fill-and-spill' model changes with increasing slope of the minibasin system. First, we validate our numerical results against a laboratory experiment performed on two linked minibasins located on a horizontal platform by comparing measured and simulated deposit geometries, suspended sediment concentration profiles and grain sizes. We then perform numerical simulations with increasing minibasin system slope: deposit and flow characteristics are compared with the case of the horizontal platform to identify how the depositional processes change. For the numerical study we used a three-dimensional numerical model of turbidity currents that solves the Reynolds-averaged Navier-Stokes equations for dilute suspensions. Turbulence is modeled by a buoyancy-modified k-ɛ closure. The numerical model has a deforming bottom boundary, to model the changes in the bed

  2. Development of collaborative-creative learning model using virtual laboratory media for instrumental analytical chemistry lectures

    NASA Astrophysics Data System (ADS)

    Zurweni, Wibawa, Basuki; Erwin, Tuti Nurian

    2017-08-01

The framework for teaching and learning in the 21st century was prepared with the 4Cs criteria. Learning that provides opportunities for the development of students' creative skills can be achieved by implementing collaborative learning. Learners are challenged to compete, to work independently, to achieve either individual or group excellence, and to master the learning material. A virtual laboratory is used as the medium for Instrumental Analytical Chemistry (Vis, UV-Vis, AAS, etc.) lectures through computer-simulated applications, and serves as a substitute for the laboratory when the equipment and instruments are not available. This research aims to design and develop a collaborative-creative learning model using virtual laboratory media for Instrumental Analytical Chemistry lectures and to determine the effectiveness of this design, adapting the Dick & Carey and Hannafin & Peck models. The development steps of this model are: needs analysis, design of collaborative-creative learning, virtual laboratory media using Macromedia Flash, formative evaluation, and testing of the learning model's effectiveness. The stages of the collaborative-creative learning model itself are: apperception, exploration, collaboration, creation, evaluation, feedback. The collaborative-creative learning model using virtual laboratory media can be used to improve the quality of learning in the classroom and to overcome the limited availability of lab instruments for real instrumental analysis. Formative test results show that the collaborative-creative learning model developed meets the requirements. The effectiveness test on students' pretest and posttest scores is significant at the 95% confidence level, with the t-statistic exceeding the critical t-value. It can be concluded that this learning model is effective for Instrumental Analytical Chemistry lectures.

  3. Power law cosmology model comparison with CMB scale information

    NASA Astrophysics Data System (ADS)

    Tutusaus, Isaac; Lamine, Brahim; Blanchard, Alain; Dupays, Arnaud; Zolnierowski, Yves; Cohen-Tanugi, Johann; Ealet, Anne; Escoffier, Stéphanie; Le Fèvre, Olivier; Ilić, Stéphane; Pisani, Alice; Plaszczynski, Stéphane; Sakr, Ziad; Salvatelli, Valentina; Schücker, Thomas; Tilquin, André; Virey, Jean-Marc

    2016-11-01

Despite the ability of the cosmological concordance model (ΛCDM) to describe the cosmological observations exceedingly well, power law expansion of the Universe scale radius, R(t) ∝ t^n, has been proposed as an alternative framework. We examine here these models, analyzing their ability to fit cosmological data using robust model comparison criteria. Type Ia supernovae (SNIa), baryonic acoustic oscillations (BAO) and acoustic scale information from the cosmic microwave background (CMB) have been used. We find that SNIa data either alone or combined with BAO can be well reproduced by both ΛCDM and power law expansion models with n ≈ 1.5, while the constant expansion rate model (n = 1) is clearly disfavored. Allowing for some redshift evolution in the SNIa luminosity essentially removes any clear preference for a specific model. The CMB data are well known to provide the most stringent constraints on standard cosmological models, in particular, through the position of the first peak of the temperature angular power spectrum, corresponding to the sound horizon at recombination, a scale physically related to the BAO scale. Models with n ≥ 1 lead to a divergence of the sound horizon and do not naturally provide the relevant scales for the BAO and the CMB. We retain an empirical footing to overcome this issue: we let the data choose the preferred values for these scales, while we recompute the ionization history in power law models, to obtain the distance to the CMB. In doing so, we find that the scale coming from the BAO data is not consistent with the observed position of the first peak of the CMB temperature angular power spectrum for any power law cosmology. Therefore, we conclude that when the three standard probes (SNIa, BAO, and CMB) are combined, the ΛCDM model is very strongly favored over any of these alternative models, which are then essentially ruled out.
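    In a power-law cosmology a(t) ∝ t^n, the Hubble rate is H(z) = H0 (1+z)^(1/n), so the line-of-sight comoving distance has a simple closed form. The sketch below (illustrative only, not the paper's analysis pipeline; the constant and function names are ours) checks that closed form against direct quadrature:

    ```python
    import numpy as np

    C_OVER_H0 = 2998.0  # Hubble distance c/H0, roughly, in Mpc/h

    def comoving_distance_numeric(z, n, num=200001):
        """D_C = (c/H0) * integral_0^z (1+z')^(-1/n) dz', trapezoidal rule."""
        zp = np.linspace(0.0, z, num)
        f = (1.0 + zp) ** (-1.0 / n)
        return C_OVER_H0 * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zp)))

    def comoving_distance_exact(z, n):
        """Closed form of the same integral."""
        if n == 1.0:                        # constant expansion rate
            return C_OVER_H0 * np.log1p(z)
        return C_OVER_H0 * (n / (n - 1.0)) * ((1.0 + z) ** ((n - 1.0) / n) - 1.0)

    for n in (1.0, 1.5):
        print(n, comoving_distance_numeric(1.0, n), comoving_distance_exact(1.0, n))
    ```

    Note the qualitative point from the abstract: distances stay finite here, but the conformal time (and hence the sound horizon) integrated from t = 0 diverges for n ≥ 1, which is why those models supply no natural BAO/CMB scale.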

  4. Probabilistic, meso-scale flood loss modelling

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

    Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during the last years, they are still not standard practice for flood risk assessments and even more for flood loss modelling. State of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging decision tree based loss models provide a probability distribution of estimated loss per municipality. Validation is undertaken on the one hand via a comparison with eight deterministic loss models including stage-damage functions as well as multi-variate models. On the other hand the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show, that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto A, Kreibich H, Merz B, Schröter K (submitted) Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
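    The core idea of a predictive loss *distribution* from bagged models can be sketched in a few lines. This is a deliberately minimal stand-in (synthetic data, a linear stage-damage base learner, invented coefficients), not BT-FLEMO itself, which bags multi-variable decision trees:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic training data: water depth (m) vs. observed building loss ratio.
    depth = rng.uniform(0.1, 3.0, 200)
    loss = np.clip(0.2 * depth + rng.normal(0.0, 0.05, depth.size), 0.0, 1.0)

    def bagged_loss_distribution(x, n_boot=500):
        """Bootstrap-aggregated stage-damage fits: returns the ensemble of
        predictions at depth x, i.e. a predictive loss distribution."""
        preds = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, depth.size, depth.size)   # bootstrap resample
            a1, a0 = np.polyfit(depth[idx], loss[idx], 1)   # simple base learner
            preds[b] = np.clip(a1 * x + a0, 0.0, 1.0)
        return preds

    dist = bagged_loss_distribution(1.5)
    print(np.percentile(dist, [5, 50, 95]))  # quantiles of the estimated loss
    ```

    The payoff is exactly the advantage claimed in the abstract: instead of one deterministic loss value per municipality, the ensemble yields quantiles that quantify prediction uncertainty.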

  5. Models for small-scale structure on cosmic strings. II. Scaling and its stability

    NASA Astrophysics Data System (ADS)

    Vieira, J. P. P.; Martins, C. J. A. P.; Shellard, E. P. S.

    2016-11-01

    We make use of the formalism described in a previous paper [Martins et al., Phys. Rev. D 90, 043518 (2014)] to address general features of wiggly cosmic string evolution. In particular, we highlight the important role played by poorly understood energy loss mechanisms and propose a simple Ansatz which tackles this problem in the context of an extended velocity-dependent one-scale model. We find a general procedure to determine all the scaling solutions admitted by a specific string model and study their stability, enabling a detailed comparison with future numerical simulations. A simpler comparison with previous Goto-Nambu simulations supports earlier evidence that scaling is easier to achieve in the matter era than in the radiation era. In addition, we also find that the requirement that a scaling regime be stable seems to notably constrain the allowed range of energy loss parameters.
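    For context, the standard (featureless-string) VOS equations that the wiggly model extends evolve a correlation length L and an rms velocity v:

    ```latex
    \frac{dL}{dt} = H L\left(1 + v^{2}\right) + \frac{\tilde{c}}{2}\,v,
    \qquad
    \frac{dv}{dt} = \left(1 - v^{2}\right)\left(\frac{k(v)}{L} - 2 H v\right),
    ```

    where H is the Hubble parameter, c̃ the loop-chopping (energy loss) efficiency, and k(v) a momentum parameter; scaling solutions are those with L ∝ t and v constant, and the extended model adds an equation for the small-scale structure (wiggliness) together with the energy loss Ansatz discussed above.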

  6. A refuge for inorganic chemistry: Bunsen's Heidelberg laboratory.

    PubMed

    Nawa, Christine

    2014-05-01

    Immediately after its opening in 1855, Bunsen's Heidelberg laboratory became iconic as the most modern and best equipped laboratory in Europe. Although comparatively modest in size, the laboratory's progressive equipment made it a role model for new construction projects in Germany and beyond. In retrospect, it represents an intermediate stage of development between early teaching facilities, such as Liebig's laboratory in Giessen, and the new 'chemistry palaces' that came into existence with Wöhler's Göttingen laboratory of 1860. As a 'transition laboratory,' Bunsen's Heidelberg edifice is of particular historical interest. This paper explores the allocation of spaces to specific procedures and audiences within the laboratory, and the hierarchies and professional rites of passage embedded within it. On this basis, it argues that the laboratory in Heidelberg was tailored to Bunsen's needs in inorganic and physical chemistry and never aimed at a broad-scale representation of chemistry as a whole. On the contrary, it is an example of early specialisation within a chemical laboratory preceding the process of differentiation into chemical sub-disciplines. Finally, it is shown that the relatively small size of this laboratory, and the fact that after ca. 1860 no significant changes were made within the building, are inseparably connected to Bunsen's views on chemistry teaching.

  7. Los Alamos National Laboratory Economic Analysis Capability Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boero, Riccardo; Edwards, Brian Keith; Pasqualini, Donatella

Los Alamos National Laboratory has developed two types of models to compute the economic impact of infrastructure disruptions. FastEcon is a fast running model that estimates first-order economic impacts of large scale events such as hurricanes and floods and can be used to identify the amount of economic activity that occurs in a specific area. LANL’s Computable General Equilibrium (CGE) model estimates more comprehensive static and dynamic economic impacts of a broader array of events and captures the interactions between sectors and industries when estimating economic impacts.

  8. Thresholds of understanding: Exploring assumptions of scale invariance vs. scale dependence in global biogeochemical models

    NASA Astrophysics Data System (ADS)

    Wieder, W. R.; Bradford, M.; Koven, C.; Talbot, J. M.; Wood, S.; Chadwick, O.

    2016-12-01

    High uncertainty and low confidence in terrestrial carbon (C) cycle projections reflect the incomplete understanding of how best to represent biologically-driven C cycle processes at global scales. Ecosystem theories, and consequently biogeochemical models, are based on the assumption that different belowground communities function similarly and interact with the abiotic environment in consistent ways. This assumption of "Scale Invariance" posits that environmental conditions will change the rate of ecosystem processes, but the biotic response will be consistent across sites. Indeed, cross-site comparisons and global-scale analyses suggest that climate strongly controls rates of litter mass loss and soil organic matter turnover. Alternatively, activities of belowground communities are shaped by particular local environmental conditions, such as climate and edaphic conditions. Under this assumption of "Scale Dependence", relationships generated by evolutionary trade-offs in acquiring resources and withstanding environmental stress dictate the activities of belowground communities and their functional response to environmental change. Similarly, local edaphic conditions (e.g. permafrost soils or reactive minerals that physicochemically stabilize soil organic matter on mineral surfaces) may strongly constrain the availability of substrates that biota decompose—altering the trajectory of soil biogeochemical response to perturbations. Identifying when scale invariant assumptions hold vs. where local variation in biotic communities or edaphic conditions must be considered is critical to advancing our understanding and representation of belowground processes in the face of environmental change. Here we introduce data sets that support assumptions of scale invariance and scale dependent processes and discuss their application in global-scale biogeochemical models. We identify particular domains over which assumptions of scale invariance may be appropriate and potential

  9. Phenomenological Modeling and Laboratory Simulation of Long-Term Aging of Asphalt Mixtures

    NASA Astrophysics Data System (ADS)

    Elwardany, Michael Dawoud

The accurate characterization of asphalt mixture properties as a function of pavement service life is becoming more important as more powerful pavement design and performance prediction methods are implemented. Oxidative aging is a major distress mechanism of asphalt pavements. Aging increases the stiffness and brittleness of the material, which leads to a high cracking potential. Thus, an improved understanding of the aging phenomenon and its effect on asphalt binder chemical and rheological properties will allow for the prediction of mixture properties as a function of pavement service life. Many researchers have conducted laboratory binder thin-film aging studies; however, this approach does not allow for studying the physicochemical effects of mineral fillers on age hardening rates in asphalt mixtures. Moreover, the aging phenomenon in the field is governed by the kinetics of binder oxidation, oxygen diffusion through the mastic phase, and oxygen percolation throughout the air void structure. In this study, laboratory aging trials were conducted on mixtures prepared using component materials from several field projects throughout the USA and Canada. Laboratory-aged materials were compared against field cores sampled at different ages. Results suggested that oven aging of loose mixture at 95°C is the most promising laboratory long-term aging method. Additionally, an empirical model was developed to account for the effect of mineral fillers on age hardening rates in asphalt mixtures. Kinetics modeling was used to predict field aging levels throughout the pavement thickness and to determine the laboratory aging duration required to match field aging. Kinetics model outputs are calibrated using measured data from the field to account for the effects of oxygen diffusion and percolation. Finally, the calibrated model was validated using an independent set of field sections. This work is expected to provide a basis for improved asphalt mixture and pavement design procedures.
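    Binder oxidation kinetics of the kind invoked here are typically given an Arrhenius temperature dependence (shown schematically; the dissertation's calibrated model may differ in form and parameters):

    ```latex
    r_{\mathrm{CA}}(T) = A \exp\!\left(-\frac{E_a}{R\,T}\right),
    ```

    where r_CA is the rate of growth of the carbonyl area used as an aging index, E_a an activation energy, and A a pre-exponential factor; field aging at a given depth is then predicted by integrating this rate over the pavement temperature history, with calibration absorbing the oxygen diffusion and percolation effects mentioned above.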

  10. SRNL PARTICIPATION IN THE MULTI-SCALE ENSEMBLE EXERCISES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckley, R

    2007-10-29

    Consequence assessment during emergency response often requires atmospheric transport and dispersion modeling to guide decision making. A statistical analysis of the ensemble of results from several models is a useful way of estimating the uncertainty for a given forecast. ENSEMBLE is a European Union program that utilizes an internet-based system to ingest transport results from numerous modeling agencies. A recent set of exercises required output on three distinct spatial and temporal scales. The Savannah River National Laboratory (SRNL) uses a regional prognostic model nested within a larger-scale synoptic model to generate the meteorological conditions, which are in turn used in a Lagrangian particle dispersion model. A discussion of SRNL participation in these exercises is given, with particular emphasis on requirements for provision of results in a timely manner with regard to the various spatial scales.
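
    The ensemble uncertainty estimate described above amounts to simple summary statistics across the participating models' outputs. A minimal sketch, with hypothetical model names and concentration values:

```python
import statistics

# Hypothetical surface-concentration forecasts (arbitrary units) from
# several dispersion models at one receptor location
ensemble = {
    "model_a": 4.2,
    "model_b": 5.1,
    "model_c": 3.8,
    "model_d": 6.0,
}

values = list(ensemble.values())
mean = statistics.mean(values)
median = statistics.median(values)
spread = statistics.stdev(values)   # ensemble spread as an uncertainty proxy

print(f"ensemble mean={mean:.2f}, median={median:.2f}, spread={spread:.2f}")
```

In practice the ENSEMBLE system works with full gridded time series rather than single values, but the per-grid-cell statistics are of this form.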

  11. Application of modern radiative transfer tools to model laboratory quartz emissivity

    NASA Astrophysics Data System (ADS)

    Pitman, Karly M.; Wolff, Michael J.; Clayton, Geoffrey C.

    2005-08-01

    Planetary remote sensing of regolith surfaces requires use of theoretical models for interpretation of constituent grain physical properties. In this work, we review and critically evaluate past efforts to strengthen numerical radiative transfer (RT) models with comparison to a trusted set of nadir incidence laboratory quartz emissivity spectra. By first establishing a baseline statistical metric to rate successful model-laboratory emissivity spectral fits, we assess the efficacy of hybrid computational solutions (Mie theory + numerically exact RT algorithm) to calculate theoretical emissivity values for micron-sized α-quartz particles in the thermal infrared (2000-200 cm-1) wave number range. We show that Mie theory, a widely used but poor approximation to irregular grain shape, fails to produce the single scattering albedo and asymmetry parameter needed to arrive at the desired laboratory emissivity values. Through simple numerical experiments, we show that corrections to single scattering albedo and asymmetry parameter values generated via Mie theory become more necessary with increasing grain size. We directly compare the performance of diffraction subtraction and static structure factor corrections to the single scattering albedo, asymmetry parameter, and emissivity for dense packing of grains. Through these sensitivity studies, we provide evidence that, assuming RT methods work well given sufficiently well-quantified inputs, assumptions about the scatterer itself constitute the most crucial aspect of modeling emissivity values.

  12. Laboratory for Atmospheres 2008 Technical Highlights

    NASA Technical Reports Server (NTRS)

    Cote, Charles E.

    2009-01-01

    The 2008 Technical Highlights describes the efforts of all members of the Laboratory for Atmospheres. Their dedication to advancing Earth Science through conducting research, developing and running models, designing instruments, managing projects, running field campaigns, and numerous other activities, is highlighted in this report. The Laboratory for Atmospheres (Code 613) is part of the Earth Sciences Division (Code 610), formerly the Earth Sun Exploration Division, under the Sciences and Exploration Directorate (Code 600) based at NASA's Goddard Space Flight Center in Greenbelt, Maryland. In line with NASA's Exploration Initiative, the Laboratory executes a comprehensive research and technology development program dedicated to advancing knowledge and understanding of the atmospheres of Earth and other planets. The research program is aimed at understanding the influence of solar variability on the Earth's climate; predicting the weather and climate of Earth; understanding the structure, dynamics, and radiative properties of precipitation, clouds, and aerosols; understanding atmospheric chemistry, especially the role of natural and anthropogenic trace species on the ozone balance in the stratosphere and the troposphere; and advancing our understanding of the physical properties of Earth's atmosphere. The research program identifies problems and requirements for atmospheric observations via satellite missions. Laboratory scientists conceive, design, develop, and implement ultraviolet, infrared, optical, radar, laser, and lidar technology for remote sensing of the atmosphere. Laboratory members conduct field measurements for satellite data calibration and validation, and carry out numerous modeling activities. These modeling activities include climate model simulations, modeling the chemistry and transport of trace species on regional-to-global scales, cloud-resolving models, and development of next-generation Earth system models. Interdisciplinary research is carried

  13. Potential Electrokinetic Remediation Technologies of Laboratory Scale into Field Application- Methodology Overview

    NASA Astrophysics Data System (ADS)

    Ayuni Suied, Anis; Tajudin, Saiful Azhar Ahmad; Nizam Zakaria, Muhammad; Madun, Aziman

    2018-04-01

    Heavy metals in soil are a major contributor to soil contamination, which unbalances the ecosystem. There are many ways and procedures to make electrokinetic remediation (EKR) an efficient, effective, and low-cost soil treatment. The electrode compartment for the electrolyte is expected to treat the contaminated soil through electromigration and to enhance the movement of metal ions. Electrokinetics is applicable in many approaches, such as electrokinetic remediation (EKR), electrokinetic stabilization (EKS), electrokinetic bioremediation, and others. This paper presents a critical laboratory-scale comparison of EKR, EKS and EK bioremediation treatments for removing heavy metal contaminants, and proposes a framework for contaminated-soil mapping. The Electrical Resistivity Method (ERM) is one of the most widely used indirect geophysical tools for surface mapping and subsurface profiling. Hence, ERM is used to map the migration of heavy metal ions during electrokinetic treatment.

  14. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
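
    The contrast between the two diffusion formalisms can be written compactly (standard notation, with u the population density and mu(x) the habitat-dependent motility; the averaged form of the homogenized coefficient is stated here only schematically):

```latex
% Fickian diffusion: flux is organized along gradients of density
\frac{\partial u}{\partial t} = \nabla \cdot \bigl( \mu(x)\, \nabla u \bigr)

% Ecological diffusion: motility depends on local habitat and sits
% inside both derivatives
\frac{\partial u}{\partial t} = \nabla^2 \bigl( \mu(x)\, u \bigr)

% Homogenized large-scale limit, with \bar{\mu} an average (a
% harmonic-type mean of the small-scale \mu(x) in the cited derivation)
\frac{\partial \bar{u}}{\partial t} = \bar{\mu}\, \nabla^2 \bar{u}
```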

  15. Supplementing the Braden scale for pressure ulcer risk among medical inpatients: the contribution of self-reported symptoms and standard laboratory tests.

    PubMed

    Skogestad, Ingrid Johansen; Martinsen, Liv; Børsting, Tove Elisabet; Granheim, Tove Irene; Ludvigsen, Eirin Sigurdssøn; Gay, Caryl L; Lerdal, Anners

    2017-01-01

    To evaluate medical inpatients' symptom experience and selected laboratory blood results as indicators of their pressure ulcer risk as measured by the Braden scale. Pressure ulcers reduce quality of life and increase treatment costs. The prevalence of pressure ulcers is 6-23% in hospital populations, but the literature suggests that most pressure ulcers are avoidable. Prospective, cross-sectional survey. Three hundred and twenty-eight patients admitted to medical wards in an acute hospital in Oslo, Norway consented to participate. Data were collected on 10 days between 2012 and 2014 by registered nurses and nursing students. Pressure ulcer risk was assessed using the Braden scale, and scores <19 indicated pressure ulcer risk. Skin examinations were categorised as normal or stages I-IV using established definitions. Comorbidities were collected by self-report. Self-reported symptom occurrence and distress were measured with 15 items from the Memorial Symptom Assessment Scale, and pain was assessed using two numeric rating scales. Admission laboratory data were collected from medical records. The prevalence of pressure ulcers was 11.9%, and 20.4% of patients were identified as being at risk of developing pressure ulcers. Multivariable analysis showed that pressure ulcer risk was positively associated with age ≥80 years, vomiting, severe pain at rest, urination problems, shortness of breath and low albumin, and was negatively associated with nervousness. Our study indicates that using patient-reported symptoms and standard laboratory results as supplemental indicators of pressure ulcer risk may improve identification of vulnerable patients, but replication of these findings in other study samples is needed. Nurses play a key role in preventing pressure ulcers during hospitalisation. A better understanding of the underlying mechanisms may improve the quality of care. Knowledge about symptoms associated with pressure ulcer risk may contribute to a faster clinical judgment of

  16. Embedding measurement within existing computerized data systems: scaling clinical laboratory and medical records heart failure data to predict ICU admission.

    PubMed

    Fisher, William P; Burton, Elizabeth C

    2010-01-01

    This study employs existing data sources to develop a new measure of intensive care unit (ICU) admission risk for heart failure patients. Outcome measures were constructed from laboratory, accounting, and medical record data for 973 adult inpatients with primary or secondary heart failure. Several scoring interpretations of the laboratory indicators were evaluated relative to their measurement and predictive properties. Cases were restricted to tests within the first lab draw that included at least 15 indicators. After optimizing the original clinical observations, a satisfactory heart failure severity scale was calibrated on a 0-1000 continuum. Patients with unadjusted CHF severity measures of 550 or less were 2.7 times more likely to be admitted to the ICU than those with higher measures. Patients with low HF severity measures (550 or less) adjusted for demographic and diagnostic risk factors are about six times more likely to be admitted to the ICU than those with higher adjusted measures. A nomogram facilitates routine clinical application. Existing computerized data systems could be programmed to automatically structure clinical laboratory reports using the results of studies like this one to reduce data volume with no loss of information, make laboratory results more meaningful to clinical end users, improve the quality of care, reduce errors and unneeded tests, prevent unnecessary ICU admissions, lower costs, and improve patient satisfaction. Existing data, typically examined piecemeal, form a coherent scale measuring heart failure severity that is sensitive to increased likelihood of ICU admission. Marked improvements in ROC curves were found for the aggregate measures relative to the individual clinical indicators.
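
    The headline "2.7 times more likely" comparison is a simple risk ratio between the two severity groups. A minimal sketch with invented patient counts (the real cohort numbers are in the study and are not reproduced here):

```python
# Hypothetical 2x2 summary: ICU admission by CHF severity measure
# (0-1000 scale; the counts below are invented for illustration)
low_measure = {"icu": 30, "no_icu": 70}    # severity measure <= 550
high_measure = {"icu": 10, "no_icu": 90}   # severity measure > 550

def risk(group):
    """Proportion of the group admitted to the ICU."""
    return group["icu"] / (group["icu"] + group["no_icu"])

risk_ratio = risk(low_measure) / risk(high_measure)
print(round(risk_ratio, 1))
```

With these invented counts the low-measure group is three times as likely to be admitted; the adjusted six-fold figure in the abstract comes from a multivariable model, not a raw ratio like this.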

  17. Phenomenological aspects of no-scale inflation models

    DOE PAGES

    Ellis, John; Garcia, Marcos A. G.; Nanopoulos, Dimitri V.; ...

    2015-10-01

    We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n_s and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m_0 = B_0 = A_0 = 0, of the CMSSM type with universal A_0 and m_0 ≠ 0 at a high scale, and of the mSUGRA type with A_0 = B_0 + m_0 boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m_1/2 ≠ 0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.
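
    For orientation, the Starobinsky-like predictions mentioned above take a simple form in terms of the number of e-folds N (a standard leading-order result, quoted here for context):

```latex
n_s \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{12}{N^2}
% e.g. N \simeq 55 gives n_s \simeq 0.964 and r \simeq 0.004
```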

  18. Effects of process parameters on solid self-microemulsifying particles in a laboratory scale fluid bed.

    PubMed

    Mukherjee, Tusharmouli; Plakogiannis, Fotios M

    2012-01-01

    The purpose of this study was to select the critical process parameters of the fluid bed processes impacting the quality attributes of a solid self-microemulsifying (SME) system of albendazole (ABZ). A fractional factorial design (2^(4-1)) with four parameters (spray rate, inlet air temperature, inlet air flow, and atomization air pressure) was created with MINITAB software. Batches were manufactured in a laboratory top-spray fluid bed at 625-g scale. Loss on drying (LOD) samples were taken throughout each batch to build the entire moisture profiles. All dried granulations were sieved using mesh 20 and analyzed for particle size distribution (PSD), morphology, density, and flow. It was found that as spray rate increased, the Sauter mean diameter (Ds) also increased. The effect of inlet air temperature on peak moisture, which is directly related to the mean particle size, was found to be significant. There were two-way interactions between the studied process parameters. The main effects of inlet air flow rate and atomization air pressure could not be determined, as the data were inconclusive. The partial least squares (PLS) regression model was found significant (P < 0.01) and predictive for optimization. This study established a design space for the parameters of the solid SME manufacturing process.
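
    A 2^(4-1) fractional factorial like the one used here is easy to generate without MINITAB. The generator D = ABC below is the conventional choice for a resolution IV half-fraction; the abstract does not state which generator was used, so it is an assumption:

```python
from itertools import product

# Coded factors: A = spray rate, B = inlet air temperature,
# C = inlet air flow, D = atomization air pressure (-1 = low, +1 = high)
runs = []
for a, b, c in product((-1, 1), repeat=3):
    d = a * b * c                      # generator D = ABC (I = ABCD)
    runs.append((a, b, c, d))

for run in runs:                       # 8 runs instead of the full 16
    print(run)
```

In this half-fraction each main effect is aliased with a three-factor interaction, which is why two-way interactions remain estimable while the design size is halved.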

  19. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.

    2013-12-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance over ~2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios.

  20. Synthetic spider silk production on a laboratory scale.

    PubMed

    Hsia, Yang; Gnesa, Eric; Pacheco, Ryan; Kohler, Kristin; Jeffery, Felicia; Vierra, Craig

    2012-07-18

    As society progresses and resources become scarcer, it is becoming increasingly important to cultivate new technologies that engineer next generation biomaterials with high performance properties. The development of these new structural materials must be rapid, cost-efficient and involve processing methodologies and products that are environmentally friendly and sustainable. Spiders spin a multitude of different fiber types with diverse mechanical properties, offering a rich source of next generation engineering materials for biomimicry that rival the best manmade and natural materials. Since the collection of large quantities of natural spider silk is impractical, synthetic silk production has the ability to provide scientists with access to an unlimited supply of threads. Therefore, if the spinning process can be streamlined and perfected, artificial spider fibers have potential uses in a broad range of applications, from body armor, surgical sutures, ropes and cables, tires, and strings for musical instruments to composites for aviation and aerospace technology. In order to advance the synthetic silk production process and to yield fibers that display low variance in their material properties from spin to spin, we developed a wet-spinning protocol that integrates expression of recombinant spider silk proteins in bacteria, purification and concentration of the proteins, followed by fiber extrusion and a mechanical post-spin treatment. This is the first visual representation that reveals a step-by-step process to spin and analyze artificial silk fibers on a laboratory scale. It also provides details to minimize the introduction of variability among fibers spun from the same spinning dope. Collectively, these methods will propel the process of artificial silk production, leading to higher quality fibers that surpass natural spider silks.

  1. Synthetic Spider Silk Production on a Laboratory Scale

    PubMed Central

    Hsia, Yang; Gnesa, Eric; Pacheco, Ryan; Kohler, Kristin; Jeffery, Felicia; Vierra, Craig

    2012-01-01

    As society progresses and resources become scarcer, it is becoming increasingly important to cultivate new technologies that engineer next generation biomaterials with high performance properties. The development of these new structural materials must be rapid, cost-efficient and involve processing methodologies and products that are environmentally friendly and sustainable. Spiders spin a multitude of different fiber types with diverse mechanical properties, offering a rich source of next generation engineering materials for biomimicry that rival the best manmade and natural materials. Since the collection of large quantities of natural spider silk is impractical, synthetic silk production has the ability to provide scientists with access to an unlimited supply of threads. Therefore, if the spinning process can be streamlined and perfected, artificial spider fibers have potential uses in a broad range of applications, from body armor, surgical sutures, ropes and cables, tires, and strings for musical instruments to composites for aviation and aerospace technology. In order to advance the synthetic silk production process and to yield fibers that display low variance in their material properties from spin to spin, we developed a wet-spinning protocol that integrates expression of recombinant spider silk proteins in bacteria, purification and concentration of the proteins, followed by fiber extrusion and a mechanical post-spin treatment. This is the first visual representation that reveals a step-by-step process to spin and analyze artificial silk fibers on a laboratory scale. It also provides details to minimize the introduction of variability among fibers spun from the same spinning dope. Collectively, these methods will propel the process of artificial silk production, leading to higher quality fibers that surpass natural spider silks. PMID:22847722

  2. The influence of hydrocarbons in changing the mechanical and acoustic properties of a carbonate reservoir: implications of laboratory results on larger scale processes

    NASA Astrophysics Data System (ADS)

    Trippetta, Fabio; Ruggieri, Roberta; Geremia, Davide; Brandano, Marco

    2017-04-01

    bitumen. In order to compare our laboratory results at a larger scale, we selected 11 outcrops of the same lithofacies as the laboratory samples, both clean and bitumen-saturated. Fracture orientations, from the scan-line method, are similar for the two types of outcrops, and they follow the same trends as literature data collected on older rocks. On the other hand, spacing data show a much lower fracture density for bitumen-saturated outcrops, confirming the laboratory observations. In conclusion, laboratory experiments highlight a more elastic behaviour for bitumen-bearing samples, and saturated outcrops are less prone to fracturing than clean outcrops. The presence of bitumen thus has a positive influence on the mechanical properties of the reservoir, while the acoustic model suggests that lighter oils should have the opposite effect. Geologically, this suggests that hydrocarbon migration in the study area predates the last stage of deformation, also giving clues about a relatively high density of the oil when deformation began.

  3. Geophysical monitoring of solute transport in dual-domain environments through laboratory experiments, field-scale solute tracer tests, and numerical simulation

    NASA Astrophysics Data System (ADS)

    Swanson, Ryan David

    The advection-dispersion equation (ADE) fails to describe non-Fickian solute transport breakthrough curves (BTCs) in saturated porous media in both laboratory and field experiments, necessitating the use of other models. The dual-domain mass transfer (DDMT) model partitions the total porosity into mobile and less-mobile domains with an exchange of mass between the two domains, and this model can reproduce better fits to BTCs in many systems than ADE-based models. However, direct experimental estimation of DDMT model parameters remains elusive, and model parameters are often calculated a posteriori by an optimization procedure. Here, we investigate the use of geophysical tools (direct-current resistivity, nuclear magnetic resonance, and complex conductivity) to estimate these model parameters directly. We use two different samples of the zeolite clinoptilolite, a material shown to demonstrate solute mass transfer due to a significant internal porosity, and provide the first evidence that direct-current electrical methods can track solute movement into and out of a less-mobile pore space in controlled laboratory experiments. We quantify the effects of assuming single-rate DDMT for multirate mass transfer systems. We analyze pore structures using material characterization methods (mercury porosimetry, scanning electron microscopy, and X-ray computer tomography), and compare these observations to geophysical measurements. Nuclear magnetic resonance in conjunction with direct-current resistivity measurements can constrain mobile and less-mobile porosities, but complex conductivity may have little value in relation to mass transfer despite the hypothesis that mass transfer and complex conductivity length scales are related. Finally, we conduct a geoelectrically monitored tracer test at the Macrodispersion Experiment (MADE) site in Columbus, MS. We relate hydraulic and electrical conductivity measurements to generate a 3D hydraulic conductivity field, and compare to
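
    The single-rate DDMT model investigated above is conventionally written as two coupled equations (standard form; theta_m and theta_im are the mobile and less-mobile porosities, C_m and C_im the corresponding concentrations, q the Darcy flux, D the dispersion tensor, and alpha the first-order mass transfer rate coefficient):

```latex
\theta_m \frac{\partial C_m}{\partial t}
  + \theta_{im} \frac{\partial C_{im}}{\partial t}
  = \nabla \cdot \left( \theta_m \mathbf{D}\, \nabla C_m \right)
  - \mathbf{q} \cdot \nabla C_m

\theta_{im} \frac{\partial C_{im}}{\partial t}
  = \alpha \left( C_m - C_{im} \right)
```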

  4. Verification of the karst flow model under laboratory controlled conditions

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Andric, Ivo; Malenica, Luka; Srzic, Veljko

    2016-04-01

    Karst aquifers are very important groundwater resources around the world, including the coastal part of Croatia. They consist of an extremely complex structure defined by a slow, laminar porous medium with small fissures and by usually fast, turbulent conduits/karst channels. Apart from simple lumped hydrological models that ignore the high karst heterogeneity, full hydraulic (distributive) models have been developed, mostly with conventional finite element and finite volume methods, considering the complete karst heterogeneity structure, which improves our understanding of complex processes in karst. Groundwater flow modeling in complex karst aquifers faces many difficulties, such as a lack of knowledge of the heterogeneity (especially conduits), resolution of different spatial/temporal scales, connectivity between matrix and conduits, setting of appropriate boundary conditions, and many others. A particular problem of karst flow modeling is the verification of distributive models under real aquifer conditions, due to the lack of the above-mentioned information. Therefore, we show here the possibility of verifying karst flow models under laboratory-controlled conditions. A special 3-D karst flow model (5.6*2.6*2 m) consists of a concrete construction, a rainfall platform, 74 piezometers, 2 reservoirs and other supply equipment. The model is filled with fine sand (3-D porous matrix) and drainage plastic pipes (1-D conduits). This model provides knowledge of the full heterogeneity structure, including the position of the different sand layers as well as conduit locations and geometry. Moreover, we know the geometry of the conduit perforations, which enables analysis of the interaction between matrix and conduits. In addition, the pressure and precipitation distributions and the discharge flow rates from both phases can be measured very accurately. These possibilities are not available at real sites, which makes this model much more useful for karst flow modeling. Many experiments were performed under different controlled conditions such as different

  5. Attempt to model laboratory-scale diffusion and retardation data.

    PubMed

    Hölttä, P; Siitari-Kauppi, M; Hakanen, M; Tukiainen, V

    2001-02-01

    Different approaches for measuring the interaction between radionuclides and the rock matrix are needed to test the compatibility of experimental retardation parameters and transport models used in assessing the safety of underground repositories for spent nuclear fuel. In this work, the retardation of sodium, calcium and strontium was studied on mica gneiss and on unaltered, moderately altered and strongly altered tonalite using the dynamic fracture column method. In-diffusion of calcium into rock cubes was determined to predict retardation in the columns. In-diffusion of calcium into moderately and strongly altered tonalite was interpreted using the numerical code FTRANS. The code was able to interpret the in-diffusion of weakly sorbing calcium into the saturated porous matrix. Elution curves of calcium for the moderately and strongly altered tonalite fracture columns were explained adequately using the FTRANS code and parameters obtained from the in-diffusion calculations. In this paper, mass distribution ratio values of sodium, calcium and strontium for intact rock are compared to values previously obtained for crushed rock from batch and crushed-rock column experiments. Kd values obtained from the fracture column experiments were one order of magnitude lower than Kd values from the batch experiments.

  6. Holographic models with anisotropic scaling

    NASA Astrophysics Data System (ADS)

    Brynjolfsson, E. J.; Danielsson, U. H.; Thorlacius, L.; Zingg, T.

    2013-12-01

    We consider gravity duals to d+1 dimensional quantum critical points with anisotropic scaling. The primary motivation comes from strongly correlated electron systems in condensed matter theory but the main focus of the present paper is on the gravity models in their own right. Physics at finite temperature and fixed charge density is described in terms of charged black branes. Some exact solutions are known and can be used to obtain a maximally extended spacetime geometry, which has a null curvature singularity inside a single non-degenerate horizon, but generic black brane solutions in the model can only be obtained numerically. Charged matter gives rise to black branes with hair that are dual to the superconducting phase of a holographic superconductor. Our numerical results indicate that holographic superconductors with anisotropic scaling have vanishing zero temperature entropy when the back reaction of the hair on the brane geometry is taken into account.
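
    The anisotropic scaling in question acts as t -> lambda^z t, x_i -> lambda x_i (z = 1 recovers relativistic scale invariance) and is realized geometrically by the standard Lifshitz metric, quoted here for orientation:

```latex
ds^2 = L^2 \left( - r^{2z}\, dt^2 + r^2\, dx_i\, dx^i + \frac{dr^2}{r^2} \right)
% Invariant under t \to \lambda^z t, \; x_i \to \lambda x_i, \; r \to r/\lambda
```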

  7. Intrawave sand suspension in the shoaling and surf zone of a field-scale laboratory beach

    NASA Astrophysics Data System (ADS)

    Brinkkemper, J. A.; de Bakker, A. T. M.; Ruessink, B. G.

    2017-01-01

    Short-wave sand transport in morphodynamic models is often based solely on the near-bed wave-orbital motion, thereby neglecting the effect of ripple-induced and surface-induced turbulence on sand transport processes. Here sand stirring was studied using measurements of the wave-orbital motion, turbulence, ripple characteristics, and sand concentration collected on a field-scale laboratory beach under conditions ranging from irregular nonbreaking waves above vortex ripples to plunging waves and bores above subdued bed forms. Turbulence and sand concentration were analyzed as individual events and in a wave phase-averaged sense. The fraction of turbulence events related to suspension events is relatively high (~50%), especially beneath plunging waves. Beneath nonbreaking waves with vortex ripples, the sand concentration close to the bed peaks right after the maximum positive wave-orbital motion and shows a marked phase lag in the vertical, although the peak in concentration at higher elevations does not shift to beyond the positive to negative flow reversal. Under plunging waves, concentration peaks beneath the wavefront without any notable phase lags in the vertical. In the inner-surf zone (bores), the sand concentration remains phase coupled to positive wave-orbital motion, but the concentration decreases with distance toward the shoreline. On the whole, our observations demonstrate that the wave-driven suspended load transport is onshore and largest beneath plunging waves, while it is small and can also be offshore beneath shoaling waves. To accurately predict wave-driven sand transport in morphodynamic models, the effect of surface-induced turbulence beneath plunging waves should thus be included.

  8. Source Code Analysis Laboratory (SCALe)

    DTIC Science & Technology

    2012-04-01

    True positives (TP) versus flagged nonconformities (FNC):

      Software System               TP/FNC   Ratio
      Mozilla Firefox version 2.0   6/12     50%
      Linux kernel version 2.6.15   10/126   8%

    ...is inappropriately tuned for analysis of the Linux kernel, which has anomalous results. Customizing SCALe to work with software for a particular... servers support a collection of virtual machines (VMs) that can be configured to support analysis in various environments, such as Windows XP and Linux. A

  9. LINKING BROAD-SCALE LANDSCAPE APPROACHES WITH FINE-SCALE PROCESS MODELS: THE SEQL PROJECT

    EPA Science Inventory

    Regional landscape models have been shown to be useful in targeting watersheds in need of further attention at a local scale. However, knowing the proximate causes of environmental degradation at a regional scale, such as impervious surface, is not enough to help local decision m...

  10. Allometric Scaling and Resource Limitations Model of Total Aboveground Biomass in Forest Stands: Site-scale Test of Model

    NASA Astrophysics Data System (ADS)

    CHOI, S.; Shi, Y.; Ni, X.; Simard, M.; Myneni, R. B.

    2013-12-01

    Sparseness in in-situ observations has precluded the spatially explicit and accurate mapping of forest biomass. The need for large-scale maps has prompted various approaches linking forest biomass to geospatial predictors such as climate, forest type, soil property, and topography. Despite the improved modeling techniques (e.g., machine learning and spatial statistics), a common limitation is that biophysical mechanisms governing tree growth are neglected in these black-box type models. The absence of a priori knowledge may lead to false interpretation of modeled results or unexplainable shifts in outputs due to inconsistent training samples or study sites. Here, we present a gray-box approach combining known biophysical processes and geospatial predictors through parametric optimizations (inversion of reference measures). Total aboveground biomass in forest stands is estimated by incorporating the Forest Inventory and Analysis (FIA) and Parameter-elevation Regressions on Independent Slopes Model (PRISM). Two main premises of this research are: (a) The Allometric Scaling and Resource Limitations (ASRL) theory can provide a relationship between tree geometry and local resource availability constrained by environmental conditions; and (b) The zeroth order theory (size-frequency distribution) can expand individual tree allometry into total aboveground biomass at the forest stand level. In addition to the FIA estimates, two reference maps from the National Biomass and Carbon Dataset (NBCD) and U.S. Forest Service (USFS) were produced to evaluate the model. This research focuses on a site-scale test of the biomass model to explore the robustness of predictors, and to potentially improve models using additional geospatial predictors such as climatic variables, vegetation indices, soil properties, and lidar-/radar-derived altimetry products (or existing forest canopy height maps). As a result, the optimized ASRL estimates satisfactorily

  11. Study of Electron-scale Dissipation near the X-line During Magnetic Reconnection in a Laboratory Plasma

    NASA Astrophysics Data System (ADS)

    Ji, H.; Yoo, J.; Dorfman, S. E.; Jara-Almonte, J.; Yamada, M.; Swanson, C.; Daughton, W. S.; Roytershteyn, V.; Kuwahata, A.; Ii, T.; Inomoto, M.; Ono, Y.; von Stechow, A.; Grulke, O.; Phan, T.; Mozer, F.; Bale, S. D.

    2013-12-01

    Despite its disruptive influences on the large-scale structures of space and solar plasmas, the crucial topological changes and associated dissipation during magnetic reconnection take place only near an X-line within thin singular layers. In the modern collisionless models where electrons and ions are allowed to move separately, it has been predicted that ions exhaust efficiently through a thicker, ion-scale dissipative layer while mobile electrons can evacuate through a thinner, electron-scale dissipation layer, allowing for efficient release of magnetic energy. While ion dissipation layers have been frequently detected, the existence of electron layers near the X-line and the associated dissipation structures and mechanisms are still an open question, and will be a main subject of the upcoming MMS mission. In this presentation, we will summarize our efforts over the past few years to study electron-scale dissipation in a well-controlled and well-diagnosed reconnecting current sheet in a laboratory plasma, with close comparisons with state-of-the-art 2D and 3D fully kinetic simulations. Key results include: (1) positive identification of electromagnetic waves detected at the current sheet center as long-wavelength, lower-hybrid drift instabilities (EM-LHDI); (2) strong evidence, however, that this EM-LHDI cannot provide the required force to support the reconnection electric field; (3) detection of 3D flux-rope-like magnetic structures during impulsive reconnection events; and (4) electron heating through non-classical mechanisms near the X-line with a small but clear temperature anisotropy. These results, unfortunately, do not resolve the outstanding discrepancies in electron layer thickness between the best available experiments and fully kinetic simulations. To make further progress, we continue to push both the experimental and numerical frontiers.
Experimentally, we started investigations on EM-LHDI and electron heating as a function

  12. Using a Large Scale Computational Model to Study the Effect of Longitudinal and Radial Electrical Coupling in the Cochlea

    NASA Astrophysics Data System (ADS)

    Mistrík, Pavel; Ashmore, Jonathan

    2009-02-01

    We describe a large-scale computational model of electrical current flow in the cochlea, which is constructed by a flexible Modified Nodal Analysis algorithm to incorporate electrical components representing hair cells and the intercellular radial and longitudinal current flow. The model is used as a laboratory to study the effects of changing longitudinal gap junctional coupling, and shows how the cochlear microphonic spreads and how tuning is affected. The process for incorporating mechanical longitudinal coupling and feedback is described. We find a difference in tuning and attenuation depending on whether longitudinal or radial couplings are altered.

  13. Posttest destructive examination of the steel liner in a 1:6-scale reactor containment model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, L.D.

    A 1:6-scale model of a nuclear reactor containment building was built and tested at Sandia National Laboratories as part of a research program sponsored by the Nuclear Regulatory Commission to investigate containment performance under overpressurization. The overpressure test was terminated due to leakage from a large tear in the steel liner. A limited destructive examination of the liner and anchorage system was conducted to gain information about the failure mechanism and is described here. Sections of liner were removed in areas where liner distress was evident or where large strains were indicated by instrumentation during the test. The condition of the liner, anchorage system, and concrete for each of the regions that were investigated is described. The probable cause of the observed posttest condition of the liner is discussed.

  14. Multi-scale Modeling of Chromosomal DNA in Living Cells

    NASA Astrophysics Data System (ADS)

    Spakowitz, Andrew

    The organization and dynamics of chromosomal DNA play a pivotal role in a range of biological processes, including gene regulation, homologous recombination, replication, and segregation. Establishing a quantitative theoretical model of DNA organization and dynamics would be valuable in bridging the gap between the molecular-level packaging of DNA and genome-scale chromosomal processes. Our research group utilizes analytical theory and computational modeling to establish a predictive theoretical model of chromosomal organization and dynamics. In this talk, I will discuss our efforts to develop multi-scale polymer models of chromosomal DNA that are both sufficiently detailed to address specific protein-DNA interactions while capturing experimentally relevant time and length scales. I will demonstrate how these modeling efforts are capable of quantitatively capturing aspects of behavior of chromosomal DNA in both prokaryotic and eukaryotic cells. This talk will illustrate that capturing dynamical behavior of chromosomal DNA at various length scales necessitates a range of theoretical treatments that accommodate the critical physical contributions that are relevant to in vivo behavior at these disparate length and time scales. National Science Foundation, Physics of Living Systems Program (PHY-1305516).

  15. Direct pore-scale reactive transport modelling of dynamic wettability changes induced by surface complexation

    NASA Astrophysics Data System (ADS)

    Maes, Julien; Geiger, Sebastian

    2018-01-01

    Laboratory experiments have shown that oil production from sandstone and carbonate reservoirs by waterflooding could be significantly increased by manipulating the composition of the injected water (e.g. by lowering the ionic strength). Recent studies suggest that a change of wettability induced by a change in surface charge is likely to be one of the driving mechanisms of the so-called low-salinity effect. In this case, the potential increase of oil recovery during waterflooding at low ionic strength would be strongly impacted by the inter-relations between flow, transport and chemical reaction at the pore-scale. Hence, a new numerical model that includes two-phase flow, solute reactive transport and wettability alteration is implemented based on the Direct Numerical Simulation of the Navier-Stokes equations and surface complexation modelling. Our model is first used to match experimental results of oil droplet detachment from clay patches. We then study the effect of wettability change on the pore-scale displacement for simple 2D calcite micro-models and evaluate the impact of several parameters such as water composition and injection velocity. Finally, we repeat the simulation experiments on a larger and more complex pore geometry representing a carbonate rock. Our simulations highlight two different effects of low-salinity on oil production from carbonate rocks: a smaller number of oil clusters left in the pores after invasion, and a greater number of pores invaded.

  16. Airframe noise of a small model transport aircraft and scaling effects. [Boeing 747

    NASA Technical Reports Server (NTRS)

    Shearin, J. G.

    1981-01-01

    Airframe noise of a 0.01 scale model Boeing 747 wide-body transport was measured in the Langley Anechoic Noise Facility. The model geometry simulated the landing and cruise configurations. The model noise was found to be similar in noise characteristics to that possessed by a 0.03 scale model 747. The 0.01 scale model noise data scaled to within 3 dB of full scale data using the same scaling relationships as that used to scale the 0.03 scale model noise data. The model noise data are compared with full scale noise data, where the full scale data are calculated using the NASA aircraft noise prediction program.
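    The scaling relationships themselves are not reproduced in the abstract. A conventional set based on Strouhal similarity and spherical spreading, sketched below, is an illustrative assumption, not necessarily the relations used in the NASA report:

```python
import math

# Hedged sketch of conventional airframe-noise scaling: Strouhal similarity
# maps model frequencies to full scale, and level shifts account for source
# size and observer distance. Textbook relations assumed for illustration.

def full_scale_frequency(f_model, scale):
    """Strouhal similarity at matched velocity: f_fs = f_model * scale,
    where scale = L_model / L_full (e.g. 0.01 for a 0.01-scale model)."""
    return f_model * scale

def spl_shift(scale, r_model, r_full):
    """dB shift from model to full scale: +20*log10(1/scale) for source
    area, -20*log10(r_full / r_model) for spherical spreading."""
    return 20.0 * math.log10(1.0 / scale) - 20.0 * math.log10(r_full / r_model)

# For a 0.01-scale model, a 10 kHz model-scale tone maps to 100 Hz full scale:
f_fs = full_scale_frequency(10_000.0, 0.01)
```

    Under these assumed relations, high model-scale frequencies fold down into the audible full-scale range, which is why model data can be compared against full-scale flyover measurements at all.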

  17. Laboratory-Model Integrated-System FARAD Thruster

    NASA Technical Reports Server (NTRS)

    Polzin, K.A.; Best, S.; Miller, R.; Rose, M.F.; Owens, T.

    2008-01-01

    Pulsed inductive plasma accelerators are spacecraft propulsion devices in which energy is stored in a capacitor and then discharged through an inductive coil. The device is electrodeless, inducing a plasma current sheet in propellant located near the face of the coil. The propellant is accelerated and expelled at a high exhaust velocity (order of 10 km/s) through the interaction of the plasma current with an induced magnetic field. The Faraday Accelerator with RF-Assisted Discharge (FARAD) thruster [1,2] is a type of pulsed inductive plasma accelerator in which the plasma is preionized by a mechanism separate from that used to form the current sheet and accelerate the gas. Employing a separate preionization mechanism in this manner allows for the formation of an inductive current sheet at much lower discharge energies and voltages than those found in previous pulsed inductive accelerators like the Pulsed Inductive Thruster (PIT). In a previous paper [3], the authors presented a basic design for a 100 J/pulse FARAD laboratory-version thruster. The design was based upon guidelines and performance scaling parameters presented in Refs. [4, 5]. In this paper, we expand upon the design presented in Ref. [3] by presenting a fully-assembled and operational FARAD laboratory-model thruster and addressing system and subsystem-integration issues (concerning mass injection, preionization, and acceleration) that arose during assembly. Experimental data quantifying the operation of this thruster, including detailed internal plasma measurements, are presented by the authors in a companion paper [6]. The thruster operates by first injecting neutral gas over the face of a flat, inductive acceleration coil and at some later time preionizing the gas. 
Once the gas is preionized, current is passed through the acceleration coil, inducing a plasma current sheet in the propellant that is accelerated away from the coil through electromagnetic interaction with the time-varying magnetic field.

  18. Development of a scaled-down aerobic fermentation model for scale-up in recombinant protein vaccine manufacturing.

    PubMed

    Farrell, Patrick; Sun, Jacob; Gao, Meg; Sun, Hong; Pattara, Ben; Zeiser, Arno; D'Amore, Tony

    2012-08-17

    A simple approach to the development of an aerobic scaled-down fermentation model is presented to obtain more consistent process performance during the scale-up of recombinant protein manufacture. Using a constant volumetric oxygen mass transfer coefficient (k(L)a) for the criterion of a scale-down process, the scaled-down model can be "tuned" to match the k(L)a of any larger-scale target by varying the impeller rotational speed. This approach is demonstrated for a protein vaccine candidate expressed in recombinant Escherichia coli, where process performance is shown to be consistent among 2-L, 20-L, and 200-L scales. An empirical correlation for k(L)a has also been employed to extrapolate to larger manufacturing scales. Copyright © 2012 Elsevier Ltd. All rights reserved.
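    The constant-kLa criterion lends itself to a short numerical sketch. The correlation form and all coefficients below are illustrative assumptions (a common van't Riet-type power law), not values from the paper:

```python
# Sketch: tune a small-scale impeller speed so that kLa matches a larger-scale
# target, per the constant-kLa scale-down criterion. The correlation form
# kLa = a * (P/V)**alpha * vs**beta and every coefficient here are assumptions.

def kla(p_per_v, vs, a=0.02, alpha=0.5, beta=0.3):
    """Volumetric oxygen mass transfer coefficient [1/s]."""
    return a * p_per_v**alpha * vs**beta

def power_per_volume(n_imp, d_imp, volume, np_power=5.0, rho=1000.0):
    """Ungassed impeller power per volume: P/V = Np * rho * N**3 * D**5 / V."""
    return np_power * rho * n_imp**3 * d_imp**5 / volume

def match_speed(target_kla, d_imp, volume, vs, lo=0.1, hi=50.0):
    """Bisect for the impeller speed N [rev/s] reproducing the target kLa."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if kla(power_per_volume(mid, d_imp, volume), vs) < target_kla:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Tune a hypothetical 2-L vessel (5 cm impeller) to a target kLa of 0.05 1/s:
n_match = match_speed(0.05, d_imp=0.05, volume=0.002, vs=0.005)
```

    Varying only the impeller speed in this way mirrors the paper's approach of "tuning" the scaled-down model to the kLa of a 20-L or 200-L target.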

  19. Development of a regional groundwater flow model for the area of the Idaho National Engineering Laboratory, Eastern Snake River Plain Aquifer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarthy, J.M.; Arnett, R.C.; Neupauer, R.M.

    This report documents a study conducted to develop a regional groundwater flow model for the Eastern Snake River Plain Aquifer in the area of the Idaho National Engineering Laboratory. The model was developed to support Waste Area Group 10, Operable Unit 10-04 groundwater flow and transport studies. The products of this study are this report and a set of computational tools designed to numerically model the regional groundwater flow in the Eastern Snake River Plain aquifer. The objective of developing the current model was to create a tool for defining the regional groundwater flow at the INEL. The model was developed to (a) support future transport modeling for WAG 10-04 by providing the regional groundwater flow information needed for the WAG 10-04 risk assessment, (b) define the regional groundwater flow setting for modeling groundwater contaminant transport at the scale of the individual WAGs, (c) provide a tool for improving the understanding of the groundwater flow system below the INEL, and (d) consolidate the existing regional groundwater modeling information into one usable model. The current model is appropriate for defining the regional flow setting for flow submodels as well as hypothesis testing to better understand the regional groundwater flow in the area of the INEL. The scale of the submodels must be chosen based on the accuracy required for the study.

  20. Modelling solute dispersion in periodic heterogeneous porous media: Model benchmarking against intermediate scale experiments

    NASA Astrophysics Data System (ADS)

    Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham

    2018-06-01

    This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates. This allows the statistical variability of experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, and multiple region advection dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.

  1. Modeling Radial Holoblastic Cleavage: A Laboratory Activity for Developmental Biology.

    ERIC Educational Resources Information Center

    Ellis, Linda K.

    2000-01-01

    Introduces a laboratory activity designed for an undergraduate developmental biology course. Uses Play-Doh (plastic modeling clay) to build a multicellular embryo in order to provide a 3-D demonstration of cleavage. Includes notes for the instructor and student directions. (YDS)

  2. Multi-Scale Computational Models for Electrical Brain Stimulation

    PubMed Central

    Seo, Hyeon; Jun, Sung C.

    2017-01-01

    Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have proposed computational modeling studies over the past decade. Recently, multi-scale models that combine a volume conductor head model and multi-compartmental models of cortical neurons have been developed to predict stimulation effects on the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we review here recent multi-scale modeling studies, focusing on approaches that couple a simplified or high-resolution volume conductor head model with multi-compartmental models of cortical neurons and construct realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476

  3. Length scale effects of friction in particle compaction using atomistic simulations and a friction scaling model

    NASA Astrophysics Data System (ADS)

    Stone, T. W.; Horstemeyer, M. F.

    2012-09-01

    The objective of this study is to illustrate and quantify the length scale effects related to interparticle friction under compaction. Previous studies have shown that as the length scale of a specimen decreases, the strength of a single crystal metal or ceramic increases. The question underlying this research effort continues that thought: if there is a length scale parameter related to the strength of a material, is there a length scale parameter related to friction? To explore the length scale effects of friction, molecular dynamics (MD) simulations using an embedded atom method potential were performed to analyze the compression of two spherical FCC nickel nanoparticles at different contact angles. In the MD model study, we applied a macroscopic plastic contact formulation to determine the normal plastic contact force at the particle interfaces and used the average shear stress from the MD simulations to determine the tangential contact forces. Combining this information with the Coulomb friction law, we quantified the MD interparticle coefficient of friction and showed good agreement with experimental studies and a Discrete Element Method prediction as a function of contact angle. Lastly, we compared our MD simulation friction values to the tribological predictions of Bhushan and Nosonovsky (BN), who developed a friction scaling model based on strain gradient plasticity and dislocation-assisted sliding that included a length scale parameter. The comparison revealed that the BN elastic friction scaling model did a much better job than the BN plastic scaling model of predicting the coefficient of friction values obtained from the MD simulations.

  4. Manufacturing Laboratory | Energy Systems Integration Facility | NREL

    Science.gov Websites

    Researchers in the Energy Systems Integration Facility's Manufacturing Laboratory develop methods and technologies to scale up renewable energy technology manufacturing capabilities.

  5. Laboratory modeling of aspects of large fires

    NASA Astrophysics Data System (ADS)

    Carrier, G. F.; Fendell, F. E.; Fleeter, R. D.; Gat, N.; Cohen, L. M.

    1984-04-01

    The design, construction, and use of a laboratory-scale combustion tunnel for simulating aspects of large-scale free-burning fires are described. The facility consists of an enclosed, rectangular cross-section (1.12 m wide x 1.27 m high) test section of about 5.6 m in length, fitted with large sidewall windows for viewing. A long upwind section permits smoothing (by screens and honeycombs) of a forced-convective flow, generated by a fan and adjustable in wind speed (up to a maximum speed of about 20 m/s prior to smoothing). Special provision is made for unconstrained ascent of a strongly buoyant plume, the duct over the test section being about 7 m in height. Also, a translatable test-section ceiling can be used to prevent jet-type spreading into the duct of the impressed flow; that is, the wind arriving at a site (say) half-way along the test section can be made (by ceiling movement) approximately the same as that at the leading edge of the test section with a fully open duct (fully retracted ceiling). Of particular interest here are the rate and structure of wind-aided flame spread streamwise along a uniform matrix of vertically oriented small fuel elements (such as toothpicks or coffee-stirrers), implanted in a clay stratum on the test-section floor; this experiment is motivated by flame spread across strewn debris, such as may be anticipated in an urban environment after severe blast damage.

  6. Three Collaborative Models for Scaling Up Evidence-Based Practices

    PubMed Central

    Roberts, Rosemarie; Jones, Helen; Marsenich, Lynne; Sosna, Todd; Price, Joseph M.

    2015-01-01

    The current paper describes three models of research-practice collaboration to scale-up evidence-based practices (EBP): (1) the Rolling Cohort model in England, (2) the Cascading Dissemination model in San Diego County, and (3) the Community Development Team model in 53 California and Ohio counties. Multidimensional Treatment Foster Care (MTFC) and KEEP are the focal evidence-based practices that are designed to improve outcomes for children and families in the child welfare, juvenile justice, and mental health systems. The three scale-up models each originated from collaboration between community partners and researchers with the shared goal of wide-spread implementation and sustainability of MTFC/KEEP. The three models were implemented in a variety of contexts; Rolling Cohort was implemented nationally, Cascading Dissemination was implemented within one county, and Community Development Team was targeted at the state level. The current paper presents an overview of the development of each model, the policy frameworks in which they are embedded, system challenges encountered during scale-up, and lessons learned. Common elements of successful scale-up efforts, barriers to success, factors relating to enduring practice relationships, and future research directions are discussed. PMID:21484449

  7. Software Engineering Laboratory (SEL) cleanroom process model

    NASA Technical Reports Server (NTRS)

    Green, Scott; Basili, Victor; Godfrey, Sally; Mcgarry, Frank; Pajerski, Rose; Waligora, Sharon

    1991-01-01

    The Software Engineering Laboratory (SEL) cleanroom process model is described. The term 'cleanroom' originates in the integrated circuit (IC) production process, where ICs are assembled in dust-free 'clean rooms' to prevent the destructive effects of dust. When applying the cleanroom methodology to the development of software systems, the primary focus is on software defect prevention rather than defect removal. The model is based on data and analysis from previous cleanroom efforts within the SEL and is tailored to serve as a guideline in applying the methodology to future production software efforts. The phases that are part of the process model life cycle from the delivery of requirements to the start of acceptance testing are described. For each defined phase, a set of specific activities is discussed, and the appropriate data flow is described. Pertinent managerial issues, key similarities and differences between the SEL's cleanroom process model and the standard development approach used on SEL projects, and significant lessons learned from prior cleanroom projects are presented. It is intended that the process model described here will be further tailored as additional SEL cleanroom projects are analyzed.

  8. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    PubMed Central

    King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456

  9. Modeling Laser-Driven Laboratory Astrophysics Experiments Using the CRASH Code

    NASA Astrophysics Data System (ADS)

    Grosskopf, Michael; Keiter, P.; Kuranz, C. C.; Malamud, G.; Trantham, M.; Drake, R.

    2013-06-01

    Laser-driven, laboratory astrophysics experiments can provide important insight into the physical processes relevant to astrophysical systems. The radiation hydrodynamics code developed by the Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan has been used to model experimental designs for high-energy-density laboratory astrophysics campaigns on OMEGA and other high-energy laser facilities. This code is an Eulerian, block-adaptive AMR hydrodynamics code with implicit multigroup radiation transport and electron heat conduction. The CRASH model has been used on many applications, including radiative shocks, Kelvin-Helmholtz and Rayleigh-Taylor experiments on the OMEGA laser, as well as laser-driven ablative plumes in experiments by the Astrophysical Collisionless Shocks Experiments with Lasers (ACSEL) collaboration. We report a series of results with the CRASH code in support of design work for upcoming high-energy-density physics experiments, as well as comparison between existing experimental data and simulation results. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  10. Munition Burial by Local Scour and Sandwaves: large-scale laboratory experiments

    NASA Astrophysics Data System (ADS)

    Garcia, M. H.

    2017-12-01

    Our effort has been the direct observation and monitoring of the burial process of munitions induced by the combined action of waves, currents, and pure oscillatory flows. The experimental conditions have made it possible to observe the burial process due to both local scour around model munitions and the passage of sandwaves. One experimental facility is the Large Oscillating Water Sediment Tunnel (LOWST), constructed with DURIP support. LOWST can reproduce field-like conditions near the sea bed. The second facility is a multipurpose wave-current flume which is 4 feet (1.20 m) deep, 6 feet (1.8 m) wide, and 161 feet (49.2 m) long. More than two hundred experiments were carried out in the wave-current flume. The main task completed within this effort has been the characterization of the burial process induced by local scour as well as in the presence of dynamic sandwaves with superimposed ripples. It is found that the burial of a finite-length model munition (cylinder) is determined by local scour around the cylinder and by a more global process associated with the formation and evolution of sandwaves having superimposed ripples on them. Depending on the ratio of the amplitude of these features to the body's diameter (D), a model munition can progressively get partially or totally buried as such bedforms migrate. Analysis of the experimental data indicates that existing semi-empirical formulae for prediction of equilibrium burial depth, geometry of the scour hole around a cylinder, and time scales developed for pipelines are not suitable for the case of a cylinder of finite length. Relative burial depth (Bd/D) is found to be mainly a function of two parameters: the Keulegan-Carpenter number, KC, and the Shields parameter, θ. Munition burial under either waves or combined flow is influenced by two different processes. One is related to the local scour around the object, which takes place within the first few hundred minutes of flow action (i.e. short
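    The two governing dimensionless groups named in the abstract are straightforward to compute. In the sketch below, the wave friction factor for the bed shear stress is fixed at a plausible constant (in practice it would come from a rough-bed friction formula such as Swart's), and all numerical values are illustrative rather than the experimental ones:

```python
# Sketch: the Keulegan-Carpenter number KC and the Shields parameter theta
# for a finite-length cylinder (model munition) on a sandy bed.
# f_w, densities, and grain size below are illustrative assumptions.

def keulegan_carpenter(u_m, period, diameter):
    """KC = U_m * T / D for near-bed orbital velocity U_m [m/s],
    wave period T [s], and cylinder diameter D [m]."""
    return u_m * period / diameter

def shields_parameter(u_m, d50, f_w=0.02, rho=1025.0, rho_s=2650.0, g=9.81):
    """theta = tau_b / ((rho_s - rho) * g * d50),
    with bed shear stress tau_b = 0.5 * rho * f_w * U_m**2."""
    tau_b = 0.5 * rho * f_w * u_m**2
    return tau_b / ((rho_s - rho) * g * d50)

# Example: 0.5 m/s orbital velocity, 6 s waves, 0.1 m munition, 0.25 mm sand:
kc = keulegan_carpenter(0.5, 6.0, 0.1)
theta = shields_parameter(0.5, 0.25e-3)
```

    Mapping the measured relative burial depth Bd/D against (KC, theta) pairs of this kind is one natural way to organize the experiments described above.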

  11. Investigation of correlation between full-scale and fifth-scale wind tunnel tests of a Bell helicopter Textron Model 222

    NASA Technical Reports Server (NTRS)

    Squires, P. K.

    1982-01-01

    Reasons for lack of correlation between data from a fifth-scale wind tunnel test of the Bell Helicopter Textron Model 222 and a full-scale test of the model 222 prototype in the NASA Ames 40-by 80-foot tunnel were investigated. This investigation centered around a carefully designed fifth-scale wind tunnel test of an accurately contoured model of the Model 222 prototype mounted on a replica of the full-scale mounting system. The improvement in correlation for drag characteristics in pitch and yaw with the fifth-scale model mounted on the replica system is shown. Interference between the model and mounting system was identified as a significant effect and was concluded to be a primary cause of the lack of correlation in the earlier tests.

  12. Model selection for identifying power-law scaling.

    PubMed

    Ton, Robert; Daffertshofer, Andreas

    2016-08-01

    Long-range temporal and spatial correlations have been reported in a remarkable number of studies. In particular power-law scaling in neural activity raised considerable interest. We here provide a straightforward algorithm not only to quantify power-law scaling but to test it against alternatives using (Bayesian) model comparison. Our algorithm builds on the well-established detrended fluctuation analysis (DFA). After removing trends of a signal, we determine its mean squared fluctuations in consecutive intervals. In contrast to DFA we use the values per interval to approximate the distribution of these mean squared fluctuations. This allows for estimating the corresponding log-likelihood as a function of interval size without presuming the fluctuations to be normally distributed, as is the case in conventional DFA. We demonstrate the validity and robustness of our algorithm using a variety of simulated signals, ranging from scale-free fluctuations with known Hurst exponents, via more conventional dynamical systems resembling exponentially correlated fluctuations, to a toy model of neural mass activity. We also illustrate its use for encephalographic signals. We further discuss confounding factors like the finite signal size. Our model comparison provides a proper means to identify power-law scaling including the range over which it is present. Copyright © 2016 Elsevier Inc. All rights reserved.
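    The DFA step this abstract builds on is straightforward to sketch. Below, mean squared fluctuations around per-window linear trends are computed and the scaling exponent is read off a log-log fit; the paper's per-interval distributions and Bayesian model comparison are beyond this minimal sketch, and the signal and scales are illustrative:

```python
import numpy as np

def dfa_fluctuations(x, scales):
    """For each window size n, return the root mean squared fluctuation of
    the integrated signal around a linear trend (conventional DFA)."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        ms = []
        for i in range(len(y) // n):         # consecutive non-overlapping windows
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            ms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                # white noise: Hurst-type alpha ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa_fluctuations(x, scales)
alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
print(round(alpha, 2))                       # near 0.5 for white noise
```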

  13. Identification of oxidative coupling products of xylenols arising from laboratory-scale phytoremediation.

    PubMed

    Poerschmann, J; Schultze-Nobre, L; Ebert, R U; Górecki, T

    2015-01-01

    Oxidative coupling reactions take place during the passage of xylenols through a laboratory-scale helophyte-based constructed wetland system. Typical coupling product groups including tetramethyl-[1,1'-biphenyl] diols and tetramethyl diphenylether monools as stable organic intermediates could be identified by a combination of pre-chromatographic derivatization and GC/MS analysis. Structural assignment of individual analytes was performed by an increment system developed by Zenkevich to pre-calculate retention sequences. The most abundant analyte turned out to be 3,3',5,5'-tetramethyl-[1,1'-biphenyl]-4,4'-diol, which can be formed by a combination of radicals based on 2,6-xylenol or by an attack of a 2,6-xylenol-based radical on 2,6-xylenol. Organic intermediates originating from oxidative coupling could also be identified in anaerobic constructed wetland systems. This finding suggested the presence of (at least partly) oxic conditions in the rhizosphere. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Round-robin pretest analyses of a 1:6-scale reinforced concrete containment model subject to static internal pressurization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clauss, D.B.

    Analyses of a 1:6-scale reinforced concrete containment model that will be tested to failure at Sandia National Laboratories in the spring of 1987 were conducted by the following organizations in the United States and Europe: Sandia National Laboratories (USA), Argonne National Laboratory (USA), Electric Power Research Institute (USA), Commissariat a L'Energie Atomique (France), HM Nuclear Installations Inspectorate (UK), Comitato Nazionale per la ricerca e per lo sviluppo dell'Energia Nucleare e delle Energie Alternative (Italy), UK Atomic Energy Authority, Safety and Reliability Directorate (UK), Gesellschaft fuer Reaktorsicherheit (FRG), Brookhaven National Laboratory (USA), and Central Electricity Generating Board (UK). Each organization was supplied with a standard information package, which included construction drawings and actual material properties for most of the materials used in the model. Each organization worked independently using their own analytical methods. This report includes descriptions of the various analytical approaches and pretest predictions submitted by each organization. Significant milestones that occur with increasing pressure, such as damage to the concrete (cracking and crushing) and yielding of the steel components, and the failure pressure (capacity) and failure mechanism are described. Analytical predictions for pressure histories of strain in the liner and rebar and displacements are compared at locations where experimental results will be available after the test. Thus, these predictions can be compared to one another and to experimental results after the test.

  15. 15. YAZOO BACKWATER PUMPING STATION MODEL, YAZOO RIVER BASIN (MODEL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. YAZOO BACKWATER PUMPING STATION MODEL, YAZOO RIVER BASIN (MODEL SCALE: 1' = 26'). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  16. DESIGN OF A SURFACTANT REMEDIATION FIELD DEMONSTRATION BASED ON LABORATORY AND MODELING STUDIES

    EPA Science Inventory

    Surfactant-enhanced subsurface remediation is being evaluated as an innovative technology for expediting ground-water remediation. This paper reports on laboratory and modeling studies conducted in preparation for a pilot-scale field test of surfactant-enhanced subsurface remedia...

  17. Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale

    NASA Astrophysics Data System (ADS)

    Sobolev, S. V.; Muldashev, I. A.

    2015-12-01

    Subduction is an essentially multi-scale process, with time scales spanning from the geological scale to the earthquake scale, with the seismic cycle in between. Modelling such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 sec during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following decreasing displacement rates during the postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with a total time of millions of years. This technique allows us to follow the deformation process in detail during the entire seismic cycle and over multiple seismic cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations and demonstrate that, contrary to conventional ideas, the postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model the average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations, such deformation is interpreted as afterslip, while in our model it is caused by the viscoelastic relaxation of the mantle wedge with viscosity strongly varying with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range.
We will also present results of the modeling of deformation of the
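    The adaptive time-stepping described in this record can be caricatured with a simple rule that shrinks the step at coseismic slip rates and relaxes it afterwards. The inverse-velocity control law and all numbers except the 40 s and 5 yr bounds are assumptions for illustration, not the authors' actual scheme:

```python
def step_size(v, dt_min=40.0, dt_max=5 * 365.25 * 86400, v_ref=1e-9):
    """Pick an integration step inversely proportional to slip velocity v
    (m/s), clamped between the paper's 40 s and 5 yr bounds.  The inverse
    proportionality and v_ref are assumed, simpler than the real algorithm."""
    return max(dt_min, min(dt_max, dt_max * v_ref / v))

# Hyperbolic postseismic decay of slip velocity (the law the abstract reports)
v_peak, t_c = 1.0, 3600.0           # 1 m/s coseismic peak, 1 h decay time (assumed)
t, n_steps = 0.0, 0
while t < 365.25 * 86400:           # integrate one postseismic year
    v = v_peak / (1.0 + t / t_c)
    t += step_size(v)
    n_steps += 1
fixed_steps = int(365.25 * 86400 / 40.0)
print(n_steps, fixed_steps)         # adaptive stepping needs far fewer steps
```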

  18. Scaling, soil moisture and evapotranspiration in runoff models

    NASA Technical Reports Server (NTRS)

    Wood, Eric F.

    1993-01-01

    The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many of the climatology research experiments. The acquisition of high-resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere interactions and macroscale models. One essential research question is how to account for the small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, the probability distribution for evaporation is derived, which illustrates the conditions for which scaling should work. A correction algorithm that may be appropriate for the land parameterization of a GCM is derived using a 2nd-order linearization scheme. The performance of the algorithm is evaluated.
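    The reason 'effective' parameters can fail is that averaging does not commute with a nonlinear flux law. A toy demonstration with a threshold-type evaporation function (a common idealization, not the paper's exact formulation; all values are illustrative):

```python
import numpy as np

def evap(s, s_crit=0.5):
    """Normalized evaporation: soil-limited below s_crit, energy-limited above."""
    return np.minimum(s / s_crit, 1.0)

rng = np.random.default_rng(1)
s = rng.uniform(0.1, 0.9, 10_000)    # heterogeneous pixel soil moistures
mean_of_flux = evap(s).mean()        # areal mean of the pixel fluxes
flux_of_mean = evap(s.mean())        # flux at the 'effective' mean moisture
print(round(mean_of_flux, 2), round(flux_of_mean, 2))
# The two differ (~0.8 vs ~1.0); a 2nd-order correction closes part of the gap.
```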

  19. Groundwater development stress: Global-scale indices compared to regional modeling

    USGS Publications Warehouse

    Alley, William; Clark, Brian R.; Ely, Matt; Faunt, Claudia

    2018-01-01

    The increased availability of global datasets and technologies such as global hydrologic models and the Gravity Recovery and Climate Experiment (GRACE) satellites have resulted in a growing number of global-scale assessments of water availability using simple indices of water stress. Developed initially for surface water, such indices are increasingly used to evaluate global groundwater resources. We compare indices of groundwater development stress for three major agricultural areas of the United States to information available from regional water budgets developed from detailed groundwater modeling. These comparisons illustrate the potential value of regional-scale analyses to supplement global hydrological models and GRACE analyses of groundwater depletion. Regional-scale analyses allow assessments of water stress that better account for scale effects, the dynamics of groundwater flow systems, the complexities of irrigated agricultural systems, and the laws, regulations, engineering, and socioeconomic factors that govern groundwater use. Strategic use of regional-scale models with global-scale analyses would greatly enhance knowledge of the global groundwater depletion problem.

  20. EDITORIAL: Interrelationship between plasma phenomena in the laboratory and in space

    NASA Astrophysics Data System (ADS)

    Koepke, Mark

    2008-07-01

    The premise of investigating basic plasma phenomena relevant to space is that an alliance exists between both basic plasma physicists, using theory, computer modelling and laboratory experiments, and space science experimenters, using different instruments, either flown on different spacecraft in various orbits or stationed on the ground. The intent of this special issue on interrelated phenomena in laboratory and space plasmas is to promote the interpretation of scientific results in a broader context by sharing data, methods, knowledge, perspectives, and reasoning within this alliance. The desired outcomes are practical theories, predictive models, and credible interpretations based on the findings and expertise available. Laboratory-experiment papers that explicitly address a specific space mission or a specific manifestation of a space-plasma phenomenon, space-observation papers that explicitly address a specific laboratory experiment or a specific laboratory result, and theory or modelling papers that explicitly address a connection between both laboratory and space investigations were encouraged. Attention was given to the utility of the references for readers who seek further background, examples, and details. With the advent of instrumented spacecraft, the observation of waves (fluctuations), wind (flows), and weather (dynamics) in space plasmas was approached within the framework provided by theory with intuition provided by the laboratory experiments. Ideas on parallel electric field, magnetic topology, inhomogeneity, and anisotropy have been refined substantially by laboratory experiments. Satellite and rocket observations, theory and simulations, and laboratory experiments have contributed to the revelation of a complex set of processes affecting the accelerations of electrons and ions in the geospace plasma. The processes range from meso-scale of several thousands of kilometers to micro-scale of a few meters to kilometers. Papers included in this

  1. Geothermal alteration of Kamchatka rock physical properties: experimental and pore-scale modeling study

    NASA Astrophysics Data System (ADS)

    Shanina, Violetta; Gerke, Kirill; Bichkov, Andrey; Korost, Dmitry

    2013-04-01

    X-ray microtomography prior to any alteration and after the experiments. 3D images were used to quantify structural changes and to determine permeability values using a pore-scale modeling approach, as laboratory measurements with through flow are known to have a potential to modify the pore structure. Chemical composition and local mineral formations were investigated using a «Spectroscan Max GV» spectrometer and scanning electron microscope imaging. Our study revealed significant relationships between structure modifications, physical properties and alteration conditions. Main results and conclusions include: 1) initial porosity and its connectivity have substantial effect on alteration dynamics, rocks with higher porosity values and connected pore space exhibit more pronounced alterations; 2) under similar experimental conditions (pressure, temperature, duration) pH plays an important role, acidic conditions result in significant new mineral formation; 3) almost all physical properties, including porosity, permeability, and elastic properties, were seriously modified in the modeled geothermal processes within short (from geological point of view) time frames; 4) X-ray microtomography was found useful for mineral phases distribution and the pore-scale modeling approach was found to be a promising technique to numerically obtain rock properties based on 3D scans; 5) we conclude that alteration and change of reservoir rocks should be taken into account for re-injecting well and geothermal power-plant design.

  2. VOYAGE!, a Scale Model of the Solar System on the National Mall

    NASA Astrophysics Data System (ADS)

    Bennett, J. O.; Schoemer, J.; Goldstein, J. J.

    1994-12-01

    The Laboratory for Astrophysics (LfA) at the National Air and Space Museum (NASM) is proposing a new exhibit: an outdoor model of the Solar System on the National Mall, dedicated to the Spirit of Human Exploration. At one ten-billionth of the size of the actual Solar System, the model would provide a unique educational tool to illustrate the vast distances that characterize our local corner of the universe. Mounted on pedestals along a gravel walkway between the U.S. Capitol and the Washington Monument for 0.6 kilometers (an easy walk for over 10 million visitors a year), plaques would tactilely depict the scaled sizes and distances of the Sun, the planets, and their larger satellites in polished bronze. Porcelain enamel insets in the bronze would display color photographs, language-independent educational pictograms, and an international pictorial listing of spacecraft that have visited these bodies. Designed for a multi-cultural audience of varied ages and educational backgrounds, and with easy access to persons with disabilities, the model would celebrate humanity's long and ongoing relationship with Earth's nearest neighbors. Ideally, this exhibit will be supported by teacher-activity packets, self-guided tours, exportable models, computer software, and multi-lingual audio programs. This proposal is being partially funded by the NASA Solar Systems division.
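    At the proposed one ten-billionth scale, the exhibit's proportions are easy to verify. A quick check using well-established (approximate) planetary dimensions:

```python
SCALE = 1e10   # one ten-billionth

# (true diameter in m, mean distance from the Sun in m), approximate values
bodies = {
    "Sun":     (1.392e9, 0.0),
    "Earth":   (1.274e7, 1.496e11),
    "Jupiter": (1.398e8, 7.785e11),
    "Pluto":   (2.377e6, 5.906e12),
}
for name, (diameter, orbit) in bodies.items():
    print(f"{name:8s} model diameter {diameter / SCALE * 100:6.2f} cm, "
          f"distance from Sun {orbit / SCALE:6.1f} m")
# The Sun becomes a ~14 cm sphere and Pluto sits ~590 m away, consistent with
# the 0.6 km walkway mentioned in the abstract.
```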

  3. Fundamental Research on Percussion Drilling: Improved rock mechanics analysis, advanced simulation technology, and full-scale laboratory investigations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael S. Bruno

    This report summarizes the research efforts on the DOE supported research project Percussion Drilling (DE-FC26-03NT41999), which is to significantly advance the fundamental understanding of the physical mechanisms involved in combined percussion and rotary drilling, and thereby facilitate more efficient and lower cost drilling and exploration of hard-rock reservoirs. The project has been divided into multiple tasks: literature reviews, analytical and numerical modeling, full-scale laboratory testing and model validation, and final report delivery. Literature reviews document the history, pros and cons, and rock failure physics of percussion drilling in the oil and gas industries. Based on the current understanding, a conceptual drilling model is proposed for modeling efforts. Both analytical and numerical approaches are deployed to investigate drilling processes such as drillbit penetration with compression, rotation and percussion, rock response with stress propagation, damage accumulation and failure, and debris transportation inside the annulus after being disintegrated from the rock. For rock mechanics modeling, a dynamic numerical tool has been developed to describe rock damage and failure, including rock crushing by compressive bit load, rock fracturing by both shearing and tensile forces, and rock weakening by repetitive compression-tension loading. Besides multiple failure criteria, the tool also includes a damping algorithm to dissipate oscillation energy and a fatigue/damage algorithm to update rock properties during each impact. From the model, Rate of Penetration (ROP) and rock failure history can be estimated. For cuttings transport in the annulus, a 3D numerical particle flow model has been developed with the aid of analytical approaches. The tool can simulate cuttings movement at particle scale under laminar or turbulent fluid flow conditions and evaluate the efficiency of cuttings removal. 
To calibrate the modeling efforts, a series of full-scale fluid hammer

  4. Scale effect challenges in urban hydrology highlighted with a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire

    2018-01-01

    Hydrological models are extensively used in urban water management, development and evaluation of future scenarios and research activities. There is a growing interest in the development of fully distributed and grid-based models. However, some complex questions related to scale effects are not yet fully understood and still remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within distributed data input into urban hydrological models. Then an intensive multi-scale modelling work is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 to 5 m. Results clearly exhibit scale effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data. Patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, and model numerical instabilities as well as the computation time requirements. The main findings of this paper enable a replacement of traditional methods of model calibration by innovative methods of model resolution alteration based on the spatial data variability and scaling of flows in urban hydrology.
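    The fractal tools mentioned in this record typically reduce to box counting on a gridded field. A minimal stand-in (not the authors' implementation) that recovers the expected dimension of 2 for a space-filling raster:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary raster (e.g. an
    impervious-cover map) by counting occupied boxes N(k) at box sizes k
    and fitting log N against log(1/k)."""
    counts = []
    for k in sizes:
        h, w = mask.shape[0] // k * k, mask.shape[1] // k * k
        coarse = mask[:h, :w].reshape(h // k, k, w // k, k).any(axis=(1, 3))
        counts.append(coarse.sum())
    d, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return d

full = np.ones((64, 64), dtype=bool)            # fully covered tile: D = 2
print(round(box_counting_dimension(full), 2))   # 2.0
```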

  5. Persistence in soil of Miscanthus biochar in laboratory and field conditions

    PubMed Central

    Budai, Alice; O’Toole, Adam; Ma, Xingzhu; Rumpel, Cornelia; Abiven, Samuel

    2017-01-01

    Evaluating biochars for their persistence in soil under field conditions is an important step towards their implementation for carbon sequestration. Current evaluations might be biased because the vast majority of studies are short-term laboratory incubations of biochars produced in laboratory-scale pyrolyzers. Here our objective was to investigate the stability of a biochar produced with a medium-scale pyrolyzer, first through laboratory characterization and stability tests and then through field experiment. We also aimed at relating properties of this medium-scale biochar to that of a laboratory-made biochar with the same feedstock. Biochars were made of Miscanthus biomass for isotopic C-tracing purposes and produced at temperatures between 600 and 700°C. The aromaticity and degree of condensation of aromatic rings of the medium-scale biochar was high, as was its resistance to chemical oxidation. In a 90-day laboratory incubation, cumulative mineralization was 0.1% for the medium-scale biochar vs. 45% for the Miscanthus feedstock, pointing to the absence of labile C pool in the biochar. These stability results were very close to those obtained for biochar produced at laboratory-scale, suggesting that upscaling from laboratory to medium-scale pyrolyzers had little effect on biochar stability. In the field, the medium-scale biochar applied at up to 25 t C ha-1 decomposed at an estimated 0.8% per year. In conclusion, our biochar scored high on stability indices in the laboratory and displayed a mean residence time > 100 years in the field, which is the threshold for permanent removal in C sequestration projects. PMID:28873471
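    The reported field decomposition rate maps directly onto the >100-year claim above: assuming first-order decay, the mean residence time is simply the inverse of the rate constant.

```python
k = 0.008           # field decomposition rate from the abstract, 0.8 % per year
mrt = 1.0 / k       # first-order mean residence time, in years
print(round(mrt))   # 125 -> above the 100-year permanence threshold
```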

  6. Accounting for small scale heterogeneity in ecohydrologic watershed models

    NASA Astrophysics Data System (ADS)

    Bhaskar, A.; Fleming, B.; Hogan, D. M.

    2016-12-01

    Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogeneous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account both for the role of flow network topology and for fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases. We conclude by describing other use cases that may benefit from this approach.

  7. Accounting for small scale heterogeneity in ecohydrologic watershed models

    NASA Astrophysics Data System (ADS)

    Burke, W.; Tague, C.

    2017-12-01

    Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogeneous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account both for the role of flow network topology and for fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases. We conclude by describing other use cases that may benefit from this approach.

  8. Measuring ignitability for in situ burning of oil spills weathered under Arctic conditions: from laboratory studies to large-scale field experiments.

    PubMed

    Fritt-Rasmussen, Janne; Brandvik, Per Johan

    2011-08-01

    This paper compares the ignitability of Troll B crude oil weathered under simulated Arctic conditions (0%, 50% and 90% ice cover). The experiments were performed in different scales at SINTEF's laboratories in Trondheim, field research station on Svalbard and in broken ice (70-90% ice cover) in the Barents Sea. Samples from the weathering experiments were tested for ignitability using the same laboratory burning cell. The measured ignitability from the experiments in these different scales showed a good agreement for samples with similar weathering. The ice conditions clearly affected the weathering process, and 70% ice or more reduces the weathering and allows a longer time window for in situ burning. The results from the Barents Sea revealed that weathering and ignitability can vary within an oil slick. This field use of the burning cell demonstrated that it can be used as an operational tool to monitor the ignitability of oil spills. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Model-based reasoning in the physics laboratory: Framework and initial results

    NASA Astrophysics Data System (ADS)

    Zwickl, Benjamin M.; Hu, Dehui; Finkelstein, Noah; Lewandowski, H. J.

    2015-12-01

    [This paper is part of the Focused Collection on Upper Division Physics Courses.] We review and extend existing frameworks on modeling to develop a new framework that describes model-based reasoning in introductory and upper-division physics laboratories. Constructing and using models are core scientific practices that have gained significant attention within K-12 and higher education. Although modeling is a broadly applicable process, within physics education, it has been preferentially applied to the iterative development of broadly applicable principles (e.g., Newton's laws of motion in introductory mechanics). A significant feature of the new framework is that measurement tools (in addition to the physical system being studied) are subjected to the process of modeling. Think-aloud interviews were used to refine the framework and demonstrate its utility by documenting examples of model-based reasoning in the laboratory. When applied to the think-aloud interviews, the framework captures and differentiates students' model-based reasoning and helps identify areas of future research. The interviews showed how students productively applied similar facets of modeling to the physical system and measurement tools: construction, prediction, interpretation of data, identification of model limitations, and revision. Finally, we document students' challenges in explicitly articulating assumptions when constructing models of experimental systems and further challenges in model construction due to students' insufficient prior conceptual understanding. A modeling perspective reframes many of the seemingly arbitrary technical details of measurement tools and apparatus as an opportunity for authentic and engaging scientific sense making.

  10. Laboratory and field scale bioremediation of hexachlorocyclohexane (HCH) contaminated soils by means of bioaugmentation and biostimulation.

    PubMed

    Garg, Nidhi; Lata, Pushp; Jit, Simran; Sangwan, Naseer; Singh, Amit Kumar; Dwivedi, Vatsala; Niharika, Neha; Kaur, Jasvinder; Saxena, Anjali; Dua, Ankita; Nayyar, Namita; Kohli, Puneet; Geueke, Birgit; Kunz, Petra; Rentsch, Daniel; Holliger, Christof; Kohler, Hans-Peter E; Lal, Rup

    2016-06-01

    Hexachlorocyclohexane (HCH) contaminated soils were treated for a period of up to 64 days in situ (HCH dumpsite, Lucknow) and ex situ (University of Delhi) in line with three bioremediation approaches. The first approach, biostimulation, involved addition of ammonium phosphate and molasses, while the second approach, bioaugmentation, involved addition of a microbial consortium consisting of a group of HCH-degrading sphingomonads that were isolated from HCH contaminated sites. The third approach involved a combination of biostimulation and bioaugmentation. The efficiency of the consortium was investigated in laboratory scale experiments, in a pot scale study, and in a full-scale field trial. It turned out that the approach of combining biostimulation and bioaugmentation was most effective in achieving reduction in the levels of α- and β-HCH and that the application of a bacterial consortium as compared to the action of a single HCH-degrading bacterial strain was more successful. Although further degradation of β- and δ-tetrachlorocyclohexane-1,4-diol, the terminal metabolites of β- and δ-HCH, respectively, did not occur by the strains comprising the consortium, these metabolites turned out to be less toxic than the parental HCH isomers.

  11. A two-scale model for dynamic damage evolution

    NASA Astrophysics Data System (ADS)

    Keita, Oumar; Dascalu, Cristian; François, Bertrand

    2014-03-01

    This paper presents a new micro-mechanical damage model accounting for inertial effects. The two-scale damage model is fully deduced from small-scale descriptions of dynamic micro-crack propagation under tensile loading (mode I). An appropriate micro-mechanical energy analysis is combined with homogenization based on asymptotic developments in order to obtain the macroscopic evolution law for damage. Numerical simulations are presented in order to illustrate the ability of the model to describe known behaviors like size effects in the structural response, strain-rate sensitivity, brittle-ductile transition and wave dispersion.

  12. JWST Full-Scale Model on Display in Orlando

    NASA Image and Video Library

    2017-12-08

    JWST Full-Scale Model on Display. A full-scale model of the James Webb Space Telescope was built by the prime contractor, Northrop Grumman, to provide a better understanding of the size, scale and complexity of this satellite. The model is constructed mainly of aluminum and steel, weighs 12,000 lb., and is approximately 80 feet long, 40 feet wide and 40 feet tall. The model requires 2 trucks to ship it and assembly takes a crew of 12 approximately four days. This model has traveled to a few sites since 2005. The photographs below were taken at some of its destinations. The model was on display at The International Society for Optical Engineering's (SPIE) week-long Astronomical Telescopes and Instrumentations conference, May 25-30, 2006. Credit: NASA/Goddard Space Flight Center/Dr. Mark Clampin

  13. A methodology for ecosystem-scale modeling of selenium

    USGS Publications Warehouse

    Presser, T.S.; Luoma, S.N.

    2010-01-01

    The main route of exposure for selenium (Se) is dietary, yet regulations lack biologically based protocols for evaluations of risk. We propose here an ecosystem-scale model that conceptualizes and quantifies the variables that determine how Se is processed from water through diet to predators. This approach uses biogeochemical and physiological factors from laboratory and field studies and considers loading, speciation, transformation to particulate material, bioavailability, bioaccumulation in invertebrates, and trophic transfer to predators. Validation of the model is through data sets from 29 historic and recent field case studies of Se-exposed sites. The model links Se concentrations across media (water, particulate, tissue of different food web species). It can be used to forecast toxicity under different management or regulatory proposals or as a methodology for translating a fish-tissue (or other predator tissue) Se concentration guideline to a dissolved Se concentration. The model illustrates some critical aspects of implementing a tissue criterion: 1) the choice of fish species determines the food web through which Se should be modeled, 2) the choice of food web is critical because the particulate material to prey kinetics of bioaccumulation differs widely among invertebrates, 3) the characterization of the type and phase of particulate material is important to quantifying Se exposure to prey through the base of the food web, and 4) the metric describing partitioning between particulate material and dissolved Se concentrations allows determination of a site-specific dissolved Se concentration that would be responsible for that fish body burden in the specific environment. The linked approach illustrates that environmentally safe dissolved Se concentrations will differ among ecosystems depending on the ecological pathways and biogeochemical conditions in that system. Uncertainties and model sensitivities can be directly illustrated by varying exposure

  14. A methodology for ecosystem-scale modeling of selenium.

    PubMed

    Presser, Theresa S; Luoma, Samuel N

    2010-10-01

    The main route of exposure for selenium (Se) is dietary, yet regulations lack biologically based protocols for evaluations of risk. We propose here an ecosystem-scale model that conceptualizes and quantifies the variables that determine how Se is processed from water through diet to predators. This approach uses biogeochemical and physiological factors from laboratory and field studies and considers loading, speciation, transformation to particulate material, bioavailability, bioaccumulation in invertebrates, and trophic transfer to predators. Validation of the model is through data sets from 29 historic and recent field case studies of Se-exposed sites. The model links Se concentrations across media (water, particulate, tissue of different food web species). It can be used to forecast toxicity under different management or regulatory proposals or as a methodology for translating a fish-tissue (or other predator tissue) Se concentration guideline to a dissolved Se concentration. The model illustrates some critical aspects of implementing a tissue criterion: 1) the choice of fish species determines the food web through which Se should be modeled, 2) the choice of food web is critical because the particulate material to prey kinetics of bioaccumulation differs widely among invertebrates, 3) the characterization of the type and phase of particulate material is important to quantifying Se exposure to prey through the base of the food web, and 4) the metric describing partitioning between particulate material and dissolved Se concentrations allows determination of a site-specific dissolved Se concentration that would be responsible for that fish body burden in the specific environment. The linked approach illustrates that environmentally safe dissolved Se concentrations will differ among ecosystems depending on the ecological pathways and biogeochemical conditions in that system. Uncertainties and model sensitivities can be directly illustrated by varying exposure
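    The linked water-to-particulate-to-prey-to-predator chain described above reduces, in its simplest form, to a product of a partitioning coefficient (Kd) and trophic transfer factors (TTFs), which also makes the translation from a fish-tissue criterion back to a dissolved concentration a simple division. The sketch below uses illustrative placeholder values, not numbers from the study:

```python
def fish_tissue_se(c_water_ugL, kd_L_kg, ttf_invert, ttf_fish):
    """Propagate a dissolved Se concentration (ug/L) up a food web.

    Kd converts dissolved Se to particulate Se (ug/kg dry weight);
    TTFs are dimensionless diet-to-tissue transfer factors.
    """
    c_particulate = c_water_ugL * kd_L_kg  # ug/kg in particulate material
    c_invert = c_particulate * ttf_invert  # ug/kg in invertebrate prey
    return c_invert * ttf_fish             # ug/kg in fish tissue

def allowed_dissolved_se(c_fish_criterion, kd_L_kg, ttf_invert, ttf_fish):
    """Invert the chain: translate a fish-tissue criterion to water."""
    return c_fish_criterion / (kd_L_kg * ttf_invert * ttf_fish)

# Illustrative values only: Kd = 1000 L/kg, TTFs of 2.8 (prey) and 1.1 (fish)
print(fish_tissue_se(1.0, 1000.0, 2.8, 1.1))  # ug/kg, approx. 3080
```

    Because the product Kd × TTF × TTF differs between food webs, the same tissue criterion maps to different "safe" dissolved concentrations in different ecosystems, which is the site-specificity argument the abstract makes.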

  15. Multi-scale hydrometeorological observation and modelling for flash flood understanding

    NASA Astrophysics Data System (ADS)

    Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.

    2014-09-01

    This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2), where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2), where the river routing and flooding processes become important. These observations are part of the HyMeX (HYdrological cycle in the Mediterranean EXperiment) enhanced observation period (EOP), which will last 4 years (2012-2015). In terms of hydrological modelling, the objective is to set up regional-scale models, while addressing small and generally ungauged catchments, which represent the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set-up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes on various scales.

  16. Multi-scale hydrometeorological observation and modelling for flash-flood understanding

    NASA Astrophysics Data System (ADS)

    Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.

    2014-02-01

    This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2), where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2), where the river routing and flooding processes become important. These observations are part of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) Enhanced Observation Period (EOP), which lasts four years (2012-2015). In terms of hydrological modelling, the objective is to set up models at the regional scale, while addressing small and generally ungauged catchments, which is the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses, in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set-up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes at various scales.

  17. [Modeling continuous scaling of NDVI based on fractal theory].

    PubMed

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effects are among the most important scientific problems in quantitative remote sensing. They can be used to study the relationships between retrievals from images of different resolutions, and their study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe how retrievals change across an entire series of scales; moreover, because imaging parameters vary between sensors, they face serious parameter-correction issues (geometric correction, spectral correction, etc.). Using imagery from a single sensor, a fractal methodology was applied to address these problems. Taking NDVI (computed from land-surface radiance) as an example, and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) a scale effect exists for NDVI, and it can be described by a fractal model of continuous scaling; (2) the fractal method is suitable for the validation of NDVI. These results demonstrate that fractal analysis is an effective methodology for studying scaling in quantitative remote sensing.
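    The root of the NDVI scale effect is that NDVI is a nonlinear function of the bands, so aggregating the bands and then computing NDVI differs from aggregating NDVI itself. The sketch below demonstrates that discrepancy on synthetic data; it is purely illustrative and is not the authors' fractal model:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI from near-infrared and red bands."""
    return (nir - red) / (nir + red)

def upscale(field, factor):
    """Aggregate a 2-D field by block averaging (factor must divide the shape)."""
    n, m = field.shape
    return field.reshape(n // factor, factor, m // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.15, (64, 64))   # synthetic red reflectance
nir = rng.uniform(0.30, 0.50, (64, 64))   # synthetic NIR reflectance

fine = ndvi(nir, red)
for f in (2, 4, 8):
    # NDVI of aggregated bands vs. aggregated NDVI: the gap is the scale effect
    gap = np.abs(ndvi(upscale(nir, f), upscale(red, f)) - upscale(fine, f)).max()
    print(f, gap)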

  18. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    NASA Astrophysics Data System (ADS)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  19. Three Approaches to Using Lengthy Ordinal Scales in Structural Equation Models: Parceling, Latent Scoring, and Shortening Scales

    ERIC Educational Resources Information Center

    Yang, Chongming; Nay, Sandra; Hoyle, Rick H.

    2010-01-01

    Lengthy scales or testlets pose certain challenges for structural equation modeling (SEM) if all the items are included as indicators of a latent construct. Three general approaches to modeling lengthy scales in SEM (parceling, latent scoring, and shortening) have been reviewed and evaluated. A hypothetical population model is simulated containing…

  20. Reduction of product-related species during the fermentation and purification of a recombinant IL-1 receptor antagonist at the laboratory and pilot scale.

    PubMed

    Schirmer, Emily B; Golden, Kathryn; Xu, Jin; Milling, Jesse; Murillo, Alec; Lowden, Patricia; Mulagapati, Srihariraju; Hou, Jinzhao; Kovalchin, Joseph T; Masci, Allyson; Collins, Kathryn; Zarbis-Papastoitsis, Gregory

    2013-08-01

    Through a parallel approach of tracking product quality through fermentation and purification development, a robust process was designed to reduce the levels of product-related species. Three biochemically similar product-related species were identified as byproducts of host-cell enzymatic activity. To modulate intracellular proteolytic activity, key fermentation parameters (temperature, pH, trace metals, EDTA levels, and carbon source) were evaluated through bioreactor optimization, while balancing negative effects on growth, productivity, and oxygen demand. The purification process was based on three non-affinity steps and resolved product-related species by exploiting small charge differences. Using statistical design of experiments for elution conditions, a high-resolution cation exchange capture column was optimized for resolution and recovery. Further reduction of product-related species was achieved by evaluating a matrix of conditions for a ceramic hydroxyapatite column. The optimized fermentation process was transferred from the 2-L laboratory scale to the 100-L pilot scale and the purification process was scaled accordingly to process the fermentation harvest. The laboratory- and pilot-scale processes resulted in similar process recoveries of 60 and 65%, respectively, and in a product that was of equal quality and purity to that of small-scale development preparations. The parallel approach for up- and downstream development was paramount in achieving a robust and scalable clinical process. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. A Laboratory Model of a Cooled Continental Shelf

    DTIC Science & Technology

    1993-06-01

    A laboratory model of wintertime cooling over a continental shelf has a water surface cooled by air in an annular rotating ... singular point where the Froude number u/(g'h1)^(1/2) equaled a given value and flowed out along the bottom. In this formula, u is the velocity of the water onto ... support cross-shelf geostrophic currents. To accomplish this, an annular geometry was used: a cylindrical tank was fitted with a shallow but wide

  2. Multi-Scale Modeling in Morphogenesis: A Critical Analysis of the Cellular Potts Model

    PubMed Central

    Voss-Böhme, Anja

    2012-01-01

    Cellular Potts models (CPMs) are used as a modeling framework to elucidate mechanisms of biological development. They allow a spatial resolution below the cellular scale and are applied particularly when problems are studied where multiple spatial and temporal scales are involved. Despite the increasing usage of CPMs in theoretical biology, this model class has received little attention from mathematical theory. To narrow this gap, the CPMs are subjected to a theoretical study here. It is asked to what extent the updating rules establish an appropriate dynamical model of intercellular interactions, and what characterizes the principal behavior at different time scales. It is shown that the longtime behavior of a CPM is degenerate in the sense that the cells consecutively die out, independent of the specific interdependence structure that characterizes the model. While CPMs are naturally defined on finite, spatially bounded lattices, possible extensions to spatially unbounded systems are explored to assess to what extent spatio-temporal limit procedures can be applied to describe the emergent behavior at the tissue scale. To elucidate the mechanistic structure of CPMs, the model class is integrated into a general multiscale framework. It is shown that the central role of the surface fluctuations, which subsume several cellular and intercellular factors, entails substantial limitations for a CPM's exploitation both as a mechanistic and as a phenomenological model. PMID:22984409
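    For readers unfamiliar with the updating rules under discussion: a CPM evolves a lattice of cell labels by random spin-copy attempts accepted with a Metropolis rule, under an energy combining adhesion mismatches with a volume constraint. The following is a deliberately stripped-down sketch with arbitrary parameters, not a reimplementation of any model analyzed in the paper:

```python
import math, random

def cpm_step(lattice, J=2.0, lam=1.0, v_target=25, T=1.0):
    """One spin-copy attempt of a minimal Cellular Potts model.

    lattice: dict (x, y) -> cell id (0 = medium). Energy = J per unlike
    neighbor pair (adhesion) + lam*(V - v_target)^2 per cell (volume).
    """
    def energy():
        e, vols = 0.0, {}
        for (x, y), s in lattice.items():
            if s:
                vols[s] = vols.get(s, 0) + 1
            for dx, dy in ((1, 0), (0, 1)):       # count each pair once
                t = lattice.get((x + dx, y + dy))
                if t is not None and t != s:
                    e += J
        return e + sum(lam * (v - v_target) ** 2 for v in vols.values())

    x, y = random.choice(list(lattice))
    nbr = random.choice([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    if nbr not in lattice or lattice[nbr] == lattice[(x, y)]:
        return False
    e0, old = energy(), lattice[(x, y)]
    lattice[(x, y)] = lattice[nbr]                # attempt the copy
    dE = energy() - e0
    if dE <= 0 or random.random() < math.exp(-dE / T):
        return True                               # accept
    lattice[(x, y)] = old                         # reject, revert
    return False

# 12x12 lattice with one 4x4 cell (id 1) surrounded by medium (id 0)
random.seed(0)
grid = {(x, y): 1 if 4 <= x < 8 and 4 <= y < 8 else 0
        for x in range(12) for y in range(12)}
for _ in range(500):
    cpm_step(grid)
```

    The "surface fluctuations" whose central role the paper analyzes are exactly these boundary spin-copy events; recomputing the full energy each step, as above, is simple but slow, and real CPM codes evaluate only the local energy change.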

  3. A Boussinesq-scaled, pressure-Poisson water wave model

    NASA Astrophysics Data System (ADS)

    Donahue, Aaron S.; Zhang, Yao; Kennedy, Andrew B.; Westerink, Joannes J.; Panda, Nishant; Dawson, Clint

    2015-02-01

    Through the use of Boussinesq scaling we develop and test a model for resolving non-hydrostatic pressure profiles in nonlinear wave systems over varying bathymetry. A Green-Naghdi type polynomial expansion is used to resolve the pressure profile along the vertical axis; this is then inserted into the pressure-Poisson equation, retaining terms up to a prescribed order and solved using a weighted residual approach. The model shows rapid convergence properties with increasing order of polynomial expansion, which can be greatly improved through the application of asymptotic rearrangement. Models of Boussinesq scaling of the fully nonlinear O(μ^2) and weakly nonlinear O(μ^N) are presented, and the analytical and numerical properties of the O(μ^2) and O(μ^4) models are discussed. Optimal basis functions in the Green-Naghdi expansion are determined through manipulation of the free parameters which arise due to the Boussinesq scaling. The optimal O(μ^2) model has dispersion accuracy equivalent to a Padé [2,2] approximation with one extra free parameter. The optimal O(μ^4) model obtains dispersion accuracy equivalent to a Padé [4,4] approximation with two free parameters which can be used to optimize shoaling or nonlinear properties. The O(μ^4) model shows excellent agreement with experimental data.
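    The dispersion accuracy quoted above can be checked directly: the exact linearized phase speed satisfies c^2/(gh) = tanh(kh)/(kh), and the Padé [2,2] approximant of tanh(x)/x is (1 + x^2/15)/(1 + 2x^2/5), which matches the Taylor series through x^4. The comparison below is standard textbook material, not the authors' code:

```python
import math

def exact(kh):
    """Exact linear dispersion: c^2/(g h) = tanh(kh)/(kh)."""
    return math.tanh(kh) / kh

def pade22(kh):
    """Pade [2,2] approximant of tanh(x)/x, the accuracy target of the O(mu^2) model."""
    x2 = kh * kh
    return (1.0 + x2 / 15.0) / (1.0 + 2.0 * x2 / 5.0)

# Error grows with dimensionless depth kh, motivating the O(mu^4) / Pade [4,4] model
for kh in (0.5, 1.0, 2.0, 4.0):
    print(kh, exact(kh), pade22(kh), abs(exact(kh) - pade22(kh)))
```

    At kh = 1 the Padé [2,2] error is already only a few parts in 10^4, while by kh = 4 (deeper water) it is large enough that a higher-order model pays off.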

  4. Exoplanet Coronagraph Shaped Pupil Masks and Laboratory Scale Star Shade Masks: Design, Fabrication and Characterization

    NASA Technical Reports Server (NTRS)

    Balasubramanian, Kunjithapatha; White, Victor; Yee, Karl; Echternach, Pierre; Muller, Richard; Dickie, Matthew; Cady, Eric; Mejia Prada, Camilo; Ryan, Daniel; Poberezhskiy, Ilya; hide

    2015-01-01

    Star light suppression technologies to find and characterize faint exoplanets include internal coronagraph instruments as well as external star shade occulters. Currently, the NASA WFIRST-AFTA mission study includes an internal coronagraph instrument to find and characterize exoplanets. Various types of masks could be employed to suppress the host star light to about 10^-9 level contrast over a broad spectrum to enable the coronagraph mission objectives. Such masks for high contrast internal coronagraphic imaging require various fabrication technologies to meet a wide range of specifications, including precise shapes, micron scale island features, ultra-low reflectivity regions, uniformity, wave front quality, achromaticity, etc. We present the approaches employed at JPL to produce pupil plane and image plane coronagraph masks by combining electron beam, deep reactive ion etching, and black silicon technologies with illustrative examples of each, highlighting milestone accomplishments from the High Contrast Imaging Testbed (HCIT) at JPL and from the High Contrast Imaging Lab (HCIL) at Princeton University. We also present briefly the technologies applied to fabricate laboratory scale star shade masks.

  5. Exoplanet coronagraph shaped pupil masks and laboratory scale star shade masks: design, fabrication and characterization

    NASA Astrophysics Data System (ADS)

    Balasubramanian, Kunjithapatham; White, Victor; Yee, Karl; Echternach, Pierre; Muller, Richard; Dickie, Matthew; Cady, Eric; Mejia Prada, Camilo; Ryan, Daniel; Poberezhskiy, Ilya; Zhou, Hanying; Kern, Brian; Riggs, A. J.; Zimmerman, Neil T.; Sirbu, Dan; Shaklan, Stuart; Kasdin, Jeremy

    2015-09-01

    Star light suppression technologies to find and characterize faint exoplanets include internal coronagraph instruments as well as external star shade occulters. Currently, the NASA WFIRST-AFTA mission study includes an internal coronagraph instrument to find and characterize exoplanets. Various types of masks could be employed to suppress the host star light to about 10^-9 level contrast over a broad spectrum to enable the coronagraph mission objectives. Such masks for high contrast internal coronagraphic imaging require various fabrication technologies to meet a wide range of specifications, including precise shapes, micron scale island features, ultra-low reflectivity regions, uniformity, wave front quality, achromaticity, etc. We present the approaches employed at JPL to produce pupil plane and image plane coronagraph masks by combining electron beam, deep reactive ion etching, and black silicon technologies with illustrative examples of each, highlighting milestone accomplishments from the High Contrast Imaging Testbed (HCIT) at JPL and from the High Contrast Imaging Lab (HCIL) at Princeton University. We also present briefly the technologies applied to fabricate laboratory scale star shade masks.

  6. A high-resolution global-scale groundwater model

    NASA Astrophysics Data System (ADS)

    de Graaf, I. E. M.; Sutanudjaja, E. H.; van Beek, L. P. H.; Bierkens, M. F. P.

    2015-02-01

    Groundwater is the world's largest accessible source of fresh water. It plays a vital role in satisfying basic needs for drinking water, agriculture and industrial activities. During times of drought groundwater sustains baseflow to rivers and wetlands, thereby supporting ecosystems. Most global-scale hydrological models (GHMs) do not include a groundwater flow component, mainly due to lack of geohydrological data at the global scale. For the simulation of lateral flow and groundwater head dynamics, a realistic physical representation of the groundwater system is needed, especially for GHMs that run at finer resolutions. In this study we present a global-scale groundwater model (run at 6' resolution) using MODFLOW to construct an equilibrium water table at its natural state as the result of long-term climatic forcing. The aquifer schematization and properties used are based on available global data sets of lithology and transmissivities, combined with the estimated thickness of an upper, unconfined aquifer. This model is forced with outputs from the land-surface PCRaster Global Water Balance (PCR-GLOBWB) model, specifically net recharge and surface water levels. A sensitivity analysis, in which the model was run with various parameter settings, showed that variation in saturated conductivity has the largest impact on the simulated groundwater levels. Validation with observed groundwater heads showed that groundwater heads are reasonably well simulated for many regions of the world, especially for sediment basins (R2 = 0.95). The simulated regional-scale groundwater patterns and flow paths demonstrate the relevance of lateral groundwater flow in GHMs. Inter-basin groundwater flows can be a significant part of a basin's water budget and help to sustain river baseflows, especially during droughts. Also, water availability of larger aquifer systems can be positively affected by additional recharge from inter-basin groundwater flows.
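    The equilibrium water-table computation can be caricatured in one dimension: for uniform transmissivity T and net recharge R, the steady state satisfies T h'' + R = 0, which a MODFLOW-style finite-difference scheme turns into a linear system. This is a toy sketch with made-up parameter values, not the PCR-GLOBWB/MODFLOW configuration:

```python
import numpy as np

def steady_head_1d(L=1000.0, n=101, T=100.0, R=3e-4):
    """Steady 1-D groundwater head with h(0) = h(L) = 0, from T h'' = -R.

    T: transmissivity (m^2/day), R: net recharge (m/day), L: domain (m).
    The analytic solution is the parabola h(x) = R x (L - x) / (2 T).
    """
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    # Tridiagonal system A h = b for interior nodes (Dirichlet ends)
    A = (np.diag(-2.0 * np.ones(n - 2))
         + np.diag(np.ones(n - 3), 1)
         + np.diag(np.ones(n - 3), -1))
    b = -R * dx * dx / T * np.ones(n - 2)
    h = np.zeros(n)
    h[1:-1] = np.linalg.solve(A, b)
    return x, h

x, h = steady_head_1d()
print(h.max())  # peak head at mid-domain, analytically R L^2 / (8 T) = 0.375 m
```

    Because central differences are exact for a quadratic solution, the numerical peak matches the analytic 0.375 m to round-off; the real model's difficulty lies not in the solver but in the spatially varying transmissivities and boundary forcings the abstract describes.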

  7. Avalanches and scaling collapse in the large-N Kuramoto model

    NASA Astrophysics Data System (ADS)

    Coleman, J. Patrick; Dahmen, Karin A.; Weaver, Richard L.

    2018-04-01

    We study avalanches in the Kuramoto model, defined as excursions of the order parameter due to ephemeral episodes of synchronization. We present scaling collapses of the avalanche sizes, durations, heights, and temporal profiles, extracting scaling exponents, exponent relations, and scaling functions that are shown to be consistent with the scaling behavior of the power spectrum, a quantity independent of our particular definition of an avalanche. A comprehensive scaling picture of the noise in the subcritical finite-N Kuramoto model is developed, linking this undriven system to a larger class of driven avalanching systems.
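    A minimal mean-field integration of the Kuramoto model, from which order-parameter excursions of the kind defined above could be extracted, is shown below; N, K, and the frequency distribution are illustrative choices (subcritical for a standard Cauchy distribution, where the critical coupling is K = 2), not the paper's settings:

```python
import numpy as np

def kuramoto_order_parameter(N=500, K=1.0, dt=0.01, steps=2000, seed=0):
    """Euler-integrate dtheta_i/dt = omega_i + K r sin(psi - theta_i).

    Returns the time series of r(t) = |mean(exp(i theta))|, the order
    parameter whose ephemeral synchronization episodes define avalanches.
    """
    rng = np.random.default_rng(seed)
    omega = rng.standard_cauchy(N)            # natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, N)  # random initial phases
    r_series = np.empty(steps)
    for t in range(steps):
        z = np.mean(np.exp(1j * theta))       # complex order parameter r e^{i psi}
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + K * r * np.sin(psi - theta))
        r_series[t] = r
    return r_series

r = kuramoto_order_parameter()
print(r.mean())  # small in the subcritical regime, fluctuating at O(1/sqrt(N))
```

    Avalanche statistics would then be gathered by thresholding this r(t) series and recording the size, duration, and height of each excursion.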

  8. Scaled model guidelines for solar coronagraphs' external occulters with an optimized shape.

    PubMed

    Landini, Federico; Baccani, Cristian; Schweitzer, Hagen; Asoubar, Daniel; Romoli, Marco; Taccola, Matteo; Focardi, Mauro; Pancrazzi, Maurizio; Fineschi, Silvano

    2017-12-01

    One of the major challenges faced by externally occulted solar coronagraphs is the suppression of the light diffracted by the occulter edge. It is a contribution to the stray light that overwhelms the coronal signal on the focal plane and must be reduced by modifying the geometrical shape of the occulter. There is a rich literature, mostly experimental, on the appropriate choice of the most suitable shape. The problem arises when huge coronagraphs, such as those in formation flight, shall be tested in a laboratory. A recent contribution [Opt. Lett. 41, 757 (2016), doi:10.1364/OL.41.000757] provides the guidelines for scaling the geometry and replicating in the laboratory the flight diffraction pattern as produced by the whole solar disk and a flight occulter, but leaves the conclusion on the occulter scale law somehow unjustified. This paper provides the numerical support for validating that conclusion and presents the first-ever simulation of the diffraction behind an occulter with an optimized shape along the optical axis with the solar disk as a source. This paper, together with Opt. Lett. 41, 757 (2016), doi:10.1364/OL.41.000757, aims at constituting a complete guide for scaling the coronagraphs' geometry.

  9. Design and process aspects of laboratory scale SCF particle formation systems.

    PubMed

    Vemavarapu, Chandra; Mollan, Matthew J; Lodaya, Mayur; Needham, Thomas E

    2005-03-23

    Consistent production of solid drug materials of desired particle and crystallographic morphologies under cGMP conditions is a frequent challenge to pharmaceutical researchers. Supercritical fluid (SCF) technology gained significant attention in pharmaceutical research by not only showing promise in this regard but also accommodating the principles of green chemistry. Given that this technology attained commercialization in coffee decaffeination and in the extraction of hops and other essential oils, a majority of the off-the-shelf SCF instrumentation is designed for extraction purposes. Only a selective few vendors appear to be in the early stages of manufacturing equipment designed for particle formation. The scarcity of information on the design and process engineering of laboratory scale equipment is recognized as a significant shortcoming to the technological progress. The purpose of this article is therefore to provide the information and resources necessary for startup research involving particle formation using supercritical fluids. The various stages of particle formation by supercritical fluid processing can be broadly classified into delivery, reaction, pre-expansion, expansion and collection. The importance of each of these processes in tailoring the particle morphology is discussed in this article along with presenting various alternatives to perform these operations.

  10. A program for the investigation of the Multibody Modeling, Verification, and Control Laboratory

    NASA Technical Reports Server (NTRS)

    Tobbe, Patrick A.; Christian, Paul M.; Rakoczy, John M.; Bulter, Marlon L.

    1993-01-01

    The Multibody Modeling, Verification, and Control (MMVC) Laboratory is under development at NASA MSFC in Huntsville, Alabama. The laboratory will provide a facility in which dynamic tests and analyses of multibody flexible structures representative of future space systems can be conducted. The purpose of the tests is to acquire dynamic measurements of the flexible structures undergoing large angle motions and use the data to validate the multibody modeling code, TREETOPS, developed under sponsorship of NASA. Advanced control systems design and system identification methodologies will also be implemented in the MMVC laboratory. This paper describes the ground test facility, the real-time control system, and the experiments. A top-level description of the TREETOPS code is also included along with the validation plan for the MMVC program. Dynamic test results from component testing are also presented and discussed. A detailed discussion of the test articles, which manifest the properties of large flexible space structures, is included along with a discussion of the various candidate control methodologies to be applied in the laboratory.

  11. A Lagrangian dynamic subgrid-scale model of turbulence

    NASA Technical Reports Server (NTRS)

    Meneveau, C.; Lund, T. S.; Cabot, W.

    1994-01-01

    A new formulation of the dynamic subgrid-scale model is tested in which the error associated with the Germano identity is minimized over flow pathlines rather than over directions of statistical homogeneity. This procedure allows the application of the dynamic model with averaging to flows in complex geometries that do not possess homogeneous directions. The characteristic Lagrangian time scale over which the averaging is performed is chosen such that the model is purely dissipative, guaranteeing numerical stability when coupled with the Smagorinsky model. The formulation is tested successfully in forced and decaying isotropic turbulence and in fully developed and transitional channel flow. In homogeneous flows, the results are similar to those of the volume-averaged dynamic model, while in channel flow, the predictions are superior to those of the plane-averaged dynamic model. The relationship between the averaged terms in the model and vortical structures (worms) that appear in the LES is investigated. Computational overhead is kept small (about 10 percent above the CPU requirements of the volume or plane-averaged dynamic model) by using an approximate scheme to advance the Lagrangian tracking through first-order Euler time integration and linear interpolation in space.
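    The Lagrangian averaging described above amounts to an exponential relaxation of the averaged quantity toward its instantaneous value over the chosen timescale T; a first-order update of the form I_new = eps*f + (1 - eps)*I with eps = (dt/T)/(1 + dt/T) is commonly quoted for this scheme. The 1-D demonstration below isolates only the averaging step, applied to a made-up noisy signal standing in for a Germano-identity term sampled along a pathline:

```python
import numpy as np

def lagrangian_average(samples, dt, T):
    """Exponentially weighted average along a pathline.

    I^{n+1} = eps * f^{n+1} + (1 - eps) * I^n,  eps = (dt/T) / (1 + dt/T),
    a first-order discretization of dI/dt = (f - I)/T following the particle.
    In an LES the previous I^n would be interpolated at the upstream position.
    """
    eps = (dt / T) / (1.0 + dt / T)
    I = samples[0]
    out = [I]
    for f in samples[1:]:
        I = eps * f + (1.0 - eps) * I
        out.append(I)
    return np.array(out)

# Averaging a noisy signal: fluctuations are damped over the timescale T
rng = np.random.default_rng(0)
sig = 1.0 + 0.5 * rng.standard_normal(1000)
avg = lagrangian_average(sig, dt=0.01, T=0.5)
print(avg[-1])
```

    Because only the running average and one interpolation are stored per point, the overhead stays small, consistent with the roughly 10 percent figure quoted in the abstract.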

  12. Laboratory Astrophysics: Enabling Scientific Discovery and Understanding

    NASA Technical Reports Server (NTRS)

    Kirby, K.

    2006-01-01

    NASA's Science Strategic Roadmap for Universe Exploration lays out a series of science objectives on a grand scale and discusses the various missions, over a wide range of wavelengths, which will enable discovery. Astronomical spectroscopy is arguably the most powerful tool we have for exploring the Universe. Experimental and theoretical studies in Laboratory Astrophysics convert "hard-won data into scientific understanding". However, the development of instruments with increasingly high spectroscopic resolution demands atomic and molecular data of unprecedented accuracy and completeness. How to meet these needs, in a time of severe budgetary constraints, poses a significant challenge both to NASA, the astronomical observers and model-builders, and the laboratory astrophysics community. I will discuss these issues, together with some recent examples of productive astronomy/lab astro collaborations.

  13. Characterization and Scaling of Heave Plates for Ocean Wave Energy Converters

    NASA Astrophysics Data System (ADS)

    Rosenberg, Brian; Mundon, Timothy

    2016-11-01

    Ocean waves present a tremendous, untapped source of renewable energy, capable of providing half of global electricity demand by 2040. Devices developed to extract this energy are known as wave energy converters (WECs) and encompass a wide range of designs. A somewhat common archetype is a two-body point-absorber, in which a surface float reacts against a submerged "heave" plate to extract energy. Newer WECs are using increasingly complex geometries for the submerged plate, and an emerging challenge in creating low-order models lies in accurately determining the hydrodynamic coefficients (added mass and drag) in the corresponding oscillatory flow regime. Here we present experiments in which a laboratory-scale heave plate is sinusoidally forced in translation (heave) and rotation (pitch) to characterize the hydrodynamic coefficients as functions of the two governing nondimensional parameters, Keulegan-Carpenter number (amplitude) and Reynolds number. Comparisons against CFD simulations are offered. As laboratory-scale physical model tests remain the standard for testing wave energy devices, effects and implications of scaling (with respect to a full-scale device) are also investigated.
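A minimal sketch of how added-mass and drag coefficients can be extracted from such forced-oscillation tests, assuming a Morison-type force decomposition fitted by least squares; the reference volume/area choices and function names are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def morison_coefficients(t, force, A, omega, rho, D):
    """Least-squares fit of a Morison-type force model
        F(t) = -Ca * rho * V * a(t) - 0.5 * Cd * rho * S * |u| u
    to a measured force trace for sinusoidal heave x(t) = A sin(omega t).
    V ~ D**3 and S ~ D**2 are illustrative reference volume/area choices.
    """
    u = A * omega * np.cos(omega * t)        # plate velocity
    a = -A * omega**2 * np.sin(omega * t)    # plate acceleration
    V, S = D**3, D**2
    # Design matrix: columns multiply the unknowns Ca and Cd.
    X = np.column_stack([-rho * V * a, -0.5 * rho * S * np.abs(u) * u])
    coeffs, *_ = np.linalg.lstsq(X, force, rcond=None)
    return coeffs  # [Ca, Cd]

def kc_number(A, D):
    """Keulegan-Carpenter number for oscillation amplitude A, diameter D."""
    return 2.0 * np.pi * A / D

def re_number(A, omega, D, nu=1.0e-6):
    """Amplitude-based oscillatory Reynolds number (nu: water, m^2/s)."""
    return A * omega * D / nu
```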

  14. New time scale based k-epsilon model for near-wall turbulence

    NASA Technical Reports Server (NTRS)

    Yang, Z.; Shih, T. H.

    1993-01-01

    A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale, and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of R(sub y) = (k(sup 1/2)y)/nu instead of y(+). Hence, the model can be used for flows with separation. The model constants used are the same as in the high-Reynolds-number standard k-epsilon model. Thus, the proposed model is also suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.
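The bounded time scale can be illustrated in a few lines; the bounding constant c_t is a placeholder and the damping function f_mu is omitted (set to 1), since the abstract does not give their exact forms:

```python
import numpy as np

def eddy_viscosity(k, eps, nu, c_mu=0.09, c_t=1.0):
    """Eddy viscosity nu_t = C_mu * k * T with the turbulent time scale
    bounded from below by the Kolmogorov time scale sqrt(nu/eps), as the
    abstract describes. Away from the wall T ~ k/eps (standard model);
    near the wall, where k -> 0, the Kolmogorov bound removes the
    singularity. c_t is an illustrative bounding constant."""
    eps_safe = np.maximum(eps, 1e-30)            # guard against division by zero
    T = np.maximum(k / eps_safe, c_t * np.sqrt(nu / eps_safe))
    return c_mu * k * T
```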

  15. Motion sickness in cats - A symptom rating scale used in laboratory and flight tests

    NASA Technical Reports Server (NTRS)

    Suri, K. B.; Daunton, N. G.; Crampton, G. H.

    1979-01-01

    The cat is proposed as a model for the study of motion and space sickness. Development of a scale for rating the motion sickness severity in the cat is described. The scale is used to evaluate an antimotion sickness drug, d-amphetamine plus scopolamine, and to determine whether it is possible to predict sickness susceptibility during parabolic flight, including zero-G maneuvers, from scores obtained during ground based trials.

  16. Laboratory Applications of the Vortex Tube.

    ERIC Educational Resources Information Center

    Bruno, Thomas J.

    1987-01-01

    Discussed are a brief explanation of the function of the vortex tube and some applications for the chemistry laboratory. It is a useful and inexpensive solution to many small-scale laboratory heating and cooling applications. (RH)

  17. Investigating the Nexus of Climate, Energy, Water, and Land at Decision-Relevant Scales: The Platform for Regional Integrated Modeling and Analysis (PRIMA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraucunas, Ian P.; Clarke, Leon E.; Dirks, James A.

    2015-04-01

    The Platform for Regional Integrated Modeling and Analysis (PRIMA) is an innovative modeling system developed at Pacific Northwest National Laboratory (PNNL) to simulate interactions among natural and human systems at scales relevant to regional decision making. PRIMA brings together state-of-the-art models of regional climate, hydrology, agriculture, socioeconomics, and energy systems using a flexible coupling approach. The platform can be customized to inform a variety of complex questions and decisions, such as the integrated evaluation of mitigation and adaptation options across a range of sectors. Research into stakeholder decision support needs underpins the platform's application to regional issues, including uncertainty characterization. Ongoing numerical experiments are yielding new insights into the interactions among human and natural systems on regional scales with an initial focus on the energy-land-water nexus in the upper U.S. Midwest. This paper focuses on PRIMA's functional capabilities and describes some lessons learned to date about integrated regional modeling.

  18. Development and testing of watershed-scale models for poorly drained soils

    Treesearch

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2005-01-01

    Watershed-scale hydrology and water quality models were used to evaluate the cumulative impacts of land use and management practices on downstream hydrology and nitrogen loading of poorly drained watersheds. Field-scale hydrology and nutrient dynamics are predicted by DRAINMOD in both models. In the first model (DRAINMOD-DUFLOW), field-scale predictions are coupled...

  19. Some relevant parameters for assessing fire hazards of combustible mine materials using laboratory scale experiments

    PubMed Central

    Litton, Charles D.; Perera, Inoka E.; Harteis, Samuel P.; Teacoach, Kara A.; DeRosa, Maria I.; Thomas, Richard A.; Smith, Alex C.

    2018-01-01

    When combustible materials ignite and burn, the potential for fire growth and flame spread represents an obvious hazard, but during these processes of ignition and flaming, other life hazards present themselves and should be included to ensure an effective overall analysis of the relevant fire hazards. In particular, the gases and smoke produced both during the smoldering stages of fires leading to ignition and during the advanced flaming stages of a developing fire serve to contaminate the surrounding atmosphere, potentially producing elevated levels of toxicity and high levels of smoke obscuration that render the environment untenable. In underground mines, these hazards may be exacerbated by the existing forced ventilation that can carry the gases and smoke to locations far-removed from the fire location. Clearly, materials that require high temperatures (above 1400 K) and that exhibit low mass loss during thermal decomposition, or that require high heat fluxes or heat transfer rates to ignite represent less of a hazard than materials that decompose at low temperatures or ignite at low levels of heat flux. In order to define and quantify some possible parameters that can be used to assess these hazards, small-scale laboratory experiments were conducted in a number of configurations to measure: 1) the toxic gases and smoke produced both during non-flaming and flaming combustion; 2) mass loss rates as a function of temperature to determine ease of thermal decomposition; and 3) mass loss rates and times to ignition as a function of incident heat flux. This paper describes the experiments that were conducted, their results, and the development of a set of parameters that could possibly be used to assess the overall fire hazard of combustible materials using small scale laboratory experiments. PMID:29599565

  20. Some relevant parameters for assessing fire hazards of combustible mine materials using laboratory scale experiments.

    PubMed

    Litton, Charles D; Perera, Inoka E; Harteis, Samuel P; Teacoach, Kara A; DeRosa, Maria I; Thomas, Richard A; Smith, Alex C

    2018-04-15

    When combustible materials ignite and burn, the potential for fire growth and flame spread represents an obvious hazard, but during these processes of ignition and flaming, other life hazards present themselves and should be included to ensure an effective overall analysis of the relevant fire hazards. In particular, the gases and smoke produced both during the smoldering stages of fires leading to ignition and during the advanced flaming stages of a developing fire serve to contaminate the surrounding atmosphere, potentially producing elevated levels of toxicity and high levels of smoke obscuration that render the environment untenable. In underground mines, these hazards may be exacerbated by the existing forced ventilation that can carry the gases and smoke to locations far-removed from the fire location. Clearly, materials that require high temperatures (above 1400 K) and that exhibit low mass loss during thermal decomposition, or that require high heat fluxes or heat transfer rates to ignite represent less of a hazard than materials that decompose at low temperatures or ignite at low levels of heat flux. In order to define and quantify some possible parameters that can be used to assess these hazards, small-scale laboratory experiments were conducted in a number of configurations to measure: 1) the toxic gases and smoke produced both during non-flaming and flaming combustion; 2) mass loss rates as a function of temperature to determine ease of thermal decomposition; and 3) mass loss rates and times to ignition as a function of incident heat flux. This paper describes the experiments that were conducted, their results, and the development of a set of parameters that could possibly be used to assess the overall fire hazard of combustible materials using small scale laboratory experiments.

  1. A small-scale turbulence model

    NASA Technical Reports Server (NTRS)

    Lundgren, T. S.

    1993-01-01

    A previously derived analytical model for the small-scale structure of turbulence is reformulated in such a way that the energy spectrum may be computed. The model is an ensemble of two-dimensional (2D) vortices with internal spiral structure, each stretched by an axially symmetric strain flow. Stretching and differential rotation produce an energy cascade to smaller scales in which the stretching represents the effect of instabilities and the spiral structure is the source of dissipation at the end of the cascade. The energy spectrum of the resulting flow may be expressed as a time integration involving only the enstrophy spectrum of the time evolving 2D cross section flow, which may be obtained numerically. Examples are given in which a k exp -5/3 spectrum is obtained by this method. The k exp -5/3 inertial range spectrum is shown to be related to the existence of a self-similar enstrophy preserving range in the 2D enstrophy spectrum. The results are found to be insensitive to time dependence of the strain rate, including even intermittent on-or-off strains.

  2. Laboratory-Scale Simulation and Real-Time Tracking of a Microbial Contamination Event and Subsequent Shock-Chlorination in Drinking Water

    PubMed Central

    Besmer, Michael D.; Sigrist, Jürg A.; Props, Ruben; Buysschaert, Benjamin; Mao, Guannan; Boon, Nico; Hammes, Frederik

    2017-01-01

    Rapid contamination of drinking water in distribution and storage systems can occur due to pressure drop, backflow, cross-connections, accidents, and bio-terrorism. Small volumes of a concentrated contaminant (e.g., wastewater) can contaminate large volumes of water in a very short time with potentially severe negative health impacts. The technical limitations of conventional, cultivation-based microbial detection methods neither allow for timely detection of such contaminations, nor for the real-time monitoring of subsequent emergency remediation measures (e.g., shock-chlorination). Here we applied a newly developed continuous, ultra high-frequency flow cytometry approach to track a rapid pollution event and subsequent disinfection of drinking water in an 80-min laboratory scale simulation. We quantified total (TCC) and intact (ICC) cell concentrations as well as flow cytometric fingerprints in parallel in real-time with two different staining methods. The ingress of wastewater was detectable almost immediately (i.e., after 0.6% volume change), significantly changing TCC, ICC, and the flow cytometric fingerprint. Shock chlorination was rapid and detected in real time, causing membrane damage in the vast majority of bacteria (i.e., drop of ICC from more than 380 cells μl-1 to less than 30 cells μl-1 within 4 min). Both of these effects as well as the final wash-in of fresh tap water followed calculated predictions well. Detailed and highly quantitative tracking of microbial dynamics at very short time scales and for different characteristics (e.g., concentration, membrane integrity) is feasible. This opens up multiple possibilities for targeted investigation of a myriad of bacterial short-term dynamics (e.g., disinfection, growth, detachment, operational changes) both in laboratory-scale research and full-scale system investigations in practice. PMID:29085343
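The wash-in and wash-out dynamics reported above are consistent with a well-mixed reservoir (CSTR) dilution model; this sketch and its parameter names are an assumption for illustration, not the authors' stated calculation:

```python
import numpy as np

def washout(t, c0, c_in, Q, V):
    """Well-mixed (CSTR) concentration during wash-in or wash-out:
        dC/dt = (Q/V) * (c_in - C)
        =>  C(t) = c_in + (c0 - c_in) * exp(-Q * t / V)
    Illustrative units: t [min], flow Q [ml/min], reactor volume V [ml],
    concentrations in cells per microliter (as quantified by flow cytometry).
    """
    return c_in + (c0 - c_in) * np.exp(-Q * t / V)
```

For example, an intact cell concentration starting near 380 cells per microliter relaxes toward the inflow concentration at the rate Q/V, which is the kind of prediction the real-time measurements could be checked against.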

  3. Pilot-scale laboratory waste treatment by supercritical water oxidation.

    PubMed

    Oshima, Yoshito; Hayashi, Rumiko; Yamamoto, Kazuo

    2006-01-01

    Supercritical water oxidation (SCWO) is a reaction in which organics in an aqueous solution can be oxidized by O2 to CO2 and H2O at a very high reaction rate. In 2003, the University of Tokyo constructed a facility for the SCWO process, the capacity of which is approximately 20 kl/year, for the purpose of treating organic laboratory waste. Through the operation of this facility, we have demonstrated that most of the organics in laboratory waste, including halogenated organic compounds, can be successfully treated without the formation of dioxins, suggesting that SCWO is useful as an alternative technology to the conventional incineration process.

  4. 26. CURRENT METERS WITH FOLDING SCALE (MEASURED IN INCHES) IN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    26. CURRENT METERS WITH FOLDING SCALE (MEASURED IN INCHES) IN FOREGROUND: GURLEY MODEL NO. 665 AT CENTER, GURLEY MODEL NO. 625 'PYGMY' CURRENT METER AT LEFT, AND WES MINIATURE PRICE-TYPE CURRENT METER AT RIGHT. - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  5. Using LISREL to Evaluate Measurement Models and Scale Reliability.

    ERIC Educational Resources Information Center

    Fleishman, John; Benson, Jeri

    1987-01-01

    LISREL program was used to examine measurement model assumptions and to assess reliability of Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third-sixth graders from over 70 schools in large urban school district were used. LISREL program assessed (1) nature of basic measurement model for scale, (2) scale invariance across…

  6. Modelling and scale-up of chemical flooding: Second annual report for the period October 1986--September 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G.A.; Lake, L.W.; Sepehrnoori, K.

    1988-11-01

    The objective of this research is to develop, validate, and apply a comprehensive chemical flooding simulator for chemical recovery processes involving surfactants, polymers, and alkaline chemicals in various combinations. This integrated program includes laboratory experiments, physical property modelling, scale-up theory, and numerical analysis as necessary and integral components of the simulation activity. Development, testing, and application of the chemical flooding simulator (UTCHEM) to a wide variety of laboratory and reservoir problems involving tracers, polymers, polymer gels, surfactants, and alkaline agents has continued. Improvements in both the physical-chemical and numerical aspects of UTCHEM have been made which enhance its versatility, accuracy, and speed. Supporting experimental studies during the past year include relative permeability and trapping of microemulsion, tracer flow studies, oil recovery in cores using alcohol-free surfactant slugs, and microemulsion viscosity measurements. These have enabled model improvement and simulator testing. Another code, called PROPACK, has also been developed and is used as a preprocessor for UTCHEM. Specifically, it is used to evaluate input to UTCHEM by computing and plotting key physical properties such as phase behavior and interfacial tension.

  7. Optogenetic stimulation of a meso-scale human cortical model

    NASA Astrophysics Data System (ADS)

    Selvaraj, Prashanth; Szeri, Andrew; Sleigh, Jamie; Kirsch, Heidi

    2015-03-01

    Neurological phenomena like sleep and seizures depend not only on the activity of individual neurons, but on the dynamics of neuron populations as well. Meso-scale models of cortical activity provide a means to study neural dynamics at the level of neuron populations. Additionally, they offer a safe and economical way to test the effects and efficacy of stimulation techniques on the dynamics of the cortex. Here, we use a physiologically relevant meso-scale model of the cortex to study the hypersynchronous activity of neuron populations during epileptic seizures. The model consists of a set of stochastic, highly non-linear partial differential equations. Next, we use optogenetic stimulation to control seizures in a hyperexcited cortex, and to induce seizures in a normally functioning cortex. The high spatial and temporal resolution this method offers makes a strong case for the use of optogenetics in treating meso-scale cortical disorders such as epileptic seizures. We use bifurcation analysis to investigate the effect of optogenetic stimulation in the meso-scale model, and its efficacy in suppressing the non-linear dynamics of seizures.

  8. ONE-ATMOSPHERE DYNAMICS DESCRIPTION IN THE MODELS-3 COMMUNITY MULTI-SCALE AIR QUALITY (CMAQ) MODELING SYSTEM

    EPA Science Inventory

    This paper proposes a general procedure to link meteorological data with air quality models, such as U.S. EPA's Models-3 Community Multi-scale Air Quality (CMAQ) modeling system. CMAQ is intended to be used for studying multi-scale (urban and regional) and multi-pollutant (ozon...

  9. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. Dixon

    The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the ''Technical Work Plan for: Performance Assessment Unsaturated Zone'' (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, ''Coupled Effects on Flow and Seepage''. The plan for validation of the models documented in this Model Report is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, ''Models''. This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. 
The DST THC Model is used solely for the validation of the

  10. a Model Study of Small-Scale World Map Generalization

    NASA Astrophysics Data System (ADS)

    Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.

    2018-04-01

    With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics, and there is a surging worldwide demand for small-scale world maps in large formats. Further study of automated mapping technology, especially the realization of small-scale production of large-format global maps, is a key problem the cartographic field needs to solve. In light of this, this paper adopts an improved model (with the map and the data separated) for map generalization, mainly comprising a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, the symbols and the physical symbols in the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1086 subtypes, 21845 basic algorithms, and over 2500 related functional modules. To evaluate the accuracy and visual effect of our model for topographic maps and thematic maps, we take world map generalization at small scale as an example. After the generalization process, combining and simplifying the scattered islands makes the map more explicit at 1:2.1 billion scale, and the map features are more complete and accurate. The model not only enhances map generalization at various scales significantly, but also achieves integration among map-making at various scales, suggesting that it provides a reference for cartographic generalization at various scales.

  11. Approximate Seismic Diffusive Models of Near-Receiver Geology: Applications from Lab Scale to Field

    NASA Astrophysics Data System (ADS)

    King, Thomas; Benson, Philip; De Siena, Luca; Vinciguerra, Sergio

    2017-04-01

    This paper presents a novel and simple method of seismic envelope analysis that can be applied at multiple scales, e.g., field (m to km) and laboratory (mm to cm), and utilises the diffusive approximation of the seismic wavefield (Wegler, 2003). Coefficient values for diffusion and attenuation are obtained from seismic coda energies and describe the rate at which seismic energy is scattered and attenuated in the local medium around a receiver. Values are acquired by performing a linear least-squares inversion of coda energies calculated in successive time windows along a seismic trace. Acoustic emission data were taken from piezoelectric transducers (PZTs) with a typical resonance frequency of 1-5 MHz glued around rock samples during deformation laboratory experiments carried out using a servo-controlled triaxial testing machine, where a shear/damage zone is generated under compression after the nucleation, growth, and coalescence of microcracks. Passive field data were collected from conventional geophones during the 2004-2008 eruption of Mount St. Helens volcano (MSH), USA, where a sudden reawakening of volcanic activity and new dome growth occurred. The laboratory study shows a strong correlation between variations of the coefficients over time and the increase of differential stress as the experiment progresses. The field study links structural variations present in the near-surface geology, including those seen in previous geophysical studies of the area, to these same coefficients. Both studies show a correlation between frequency and structural feature size, i.e. landslide slip-planes and microcracks, with higher frequencies being much more sensitive to smaller-scale features and vice versa.
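A sketch of the linear least-squares inversion, assuming the 3-D diffusion solution of Wegler (2003) for coda energy; window selection and units are omitted, and the function is illustrative rather than the authors' code:

```python
import numpy as np

def invert_coda(t, E, r):
    """Invert coda energies E measured at lapse times t and source-receiver
    distance r for diffusivity d and intrinsic attenuation b, assuming the
    3-D diffusion solution
        E(t) = E0 * (4*pi*d*t)**-1.5 * exp(-r**2 / (4*d*t)) * exp(-b*t).
    Taking logs makes the problem linear:
        ln E + 1.5 ln t = a0 - (r**2/(4d)) * (1/t) - b * t,
    which is solved by linear least squares as described in the abstract."""
    y = np.log(E) + 1.5 * np.log(t)
    X = np.column_stack([np.ones_like(t), -1.0 / t, -t])
    c, *_ = np.linalg.lstsq(X, y, rcond=None)
    d = r**2 / (4.0 * c[1])   # diffusion coefficient
    b = c[2]                  # attenuation coefficient
    return d, b
```

In practice y would be formed from mean squared envelope amplitudes in successive coda time windows rather than a continuous trace.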

  12. Scale Interactions in the Tropics from a Simple Multi-Cloud Model

    NASA Astrophysics Data System (ADS)

    Niu, X.; Biello, J. A.

    2017-12-01

    Our lack of a complete understanding of the interaction between moist convection and equatorial waves remains an impediment in the numerical simulation of large-scale organization, such as the Madden-Julian Oscillation (MJO). The aim of this project is to understand interactions across spatial scales in the tropics within a simplified framework for scale interactions, while using a simplified framework to describe the basic features of moist convection. Using multiple asymptotic scales, Biello and Majda [1] derived a multi-scale model of moist tropical dynamics (IMMD [1]), which separates three regimes: the planetary-scale climatology, the synoptic-scale waves, and the planetary-scale anomalies. The scales and strength of the observed MJO would place it in the regime of planetary-scale anomalies, which are themselves forced by non-linear upscale fluxes from the synoptic-scale waves. In order to close this model and determine whether it provides a self-consistent theory of the MJO, a model for diabatic heating due to moist convection must be implemented along with the IMMD. The multi-cloud parameterization is a model proposed by Khouider and Majda [2] to describe the three basic cloud types (congestus, deep, and stratiform) that are most responsible for tropical diabatic heating. We implement a simplified version of the multi-cloud model that is based on results derived from large eddy simulations of convection [3]. We present this simplified multi-cloud model and show results of numerical experiments beginning with a variety of convective forcing states. Preliminary results on upscale fluxes, from synoptic scales to planetary-scale anomalies, will be presented. [1] Biello J A, Majda A J. Intraseasonal multi-scale moist dynamics of the tropical atmosphere. Communications in Mathematical Sciences, 2010, 8(2): 519-540. [2] Khouider B, Majda A J. A simple multicloud parameterization for convectively coupled tropical waves. Part I: Linear analysis.

  13. A multi-scale model for geared transmission aero-thermodynamics

    NASA Astrophysics Data System (ADS)

    McIntyre, Sean M.

    -steady cyclic-symmetric simulation of the internal flow. This high-frequency conduction solution is coupled directly with a model for the meshing friction, developed by a collaborator, which was adapted for use in a finite-volume CFD code. The local surface heat flux on solid surfaces is calculated by time-averaging the heat flux in the high-frequency analysis. This serves as a fixed-flux boundary condition in the long time scale conduction module. The temperature distribution from this long time scale heat transfer calculation serves as a boundary condition for the internal convection simulation, and as the initial condition for the high-frequency heat transfer module. Using this multi-scale model, simulations were performed for equilibrium and loss-of-lubrication operation of the NASA Glenn Research Center test stand. Results were compared with experimental measurements. In addition to the multi-scale model itself, several other specific contributions were made. Eulerian models for droplets and wall-films were developed and implemented in the CFD code. A novel approach to retaining liquid film on the solid surfaces, and strategies for its mass exchange with droplets, were developed and verified. Models for interfacial transfer between droplets and wall-film were implemented, and include the effects of droplet deposition, splashing, bouncing, as well as film breakup. These models were validated against airfoil data. To mitigate the observed slow convergence of CFD simulations of the enclosed aerodynamic flows within gearboxes, Fourier stability analysis was applied to the SIMPLE-C fractional-step algorithm. From this, recommendations to accelerate the convergence rate through enhanced pressure-velocity coupling were made. These were shown to be effective. 
A fast-running finite-volume reduced-order model of the gearbox aero-thermodynamics was developed, and coupled with the tribology model to investigate the sensitivity of loss-of-lubrication predictions to various model

  14. A catchment scale water balance model for FIFE

    NASA Technical Reports Server (NTRS)

    Famiglietti, J. S.; Wood, E. F.; Sivapalan, M.; Thongs, D. J.

    1992-01-01

    A catchment scale water balance model is presented and used to predict evaporation from the King's Creek catchment at the First ISLSCP Field Experiment site on the Konza Prairie, Kansas. The model incorporates spatial variability in topography, soils, and precipitation to compute the land surface hydrologic fluxes. A network of 20 rain gages was employed to measure rainfall across the catchment in the summer of 1987. These data were spatially interpolated and used to drive the model during storm periods. During interstorm periods the model was driven by the estimated potential evaporation, which was calculated using net radiation data collected at site 2. Model-computed evaporation is compared to that observed, both at site 2 (grid location 1916-BRS) and the catchment scale, for the simulation period from June 1 to October 9, 1987.

  15. Atmospheric numerical modeling resource enhancement and model convective parameterization/scale interaction studies

    NASA Technical Reports Server (NTRS)

    Cushman, Paula P.

    1993-01-01

    Research will be undertaken in this contract in the area of Modeling Resource and Facilities Enhancement to include computer, technical and educational support to NASA investigators to facilitate model implementation, execution and analysis of output; to provide facilities linking USRA and the NASA/EADS Computer System as well as resident work stations in ESAD; and to provide a centralized location for documentation, archival and dissemination of modeling information pertaining to NASA's program. Additional research will be undertaken in the area of Numerical Model Scale Interaction/Convective Parameterization Studies to include implementation of the comparison of cloud and rain systems and convective-scale processes between the model simulations and what was observed; and to incorporate the findings of these and related research findings in at least two refereed journal articles.

  16. Computational Thermochemistry: Scale Factor Databases and Scale Factors for Vibrational Frequencies Obtained from Electronic Model Chemistries.

    PubMed

    Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G

    2010-09-14

    Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
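
    The arithmetic these factors feed into is simple: each raw harmonic frequency is multiplied by the factor optimized for the property of interest, and the scaled frequencies then enter quantities such as the zero-point energy. The Python sketch below is a hypothetical illustration, not code from the paper: the three mode frequencies are invented, and only the B3LYP zero-point-energy factor (0.986) comes from the abstract.

```python
def scale_frequencies(harmonic_cm1, factor):
    """Multiply raw harmonic frequencies (cm^-1) by an optimized scale factor."""
    return [f * factor for f in harmonic_cm1]

def zero_point_energy_cm1(frequencies_cm1):
    """Harmonic-oscillator ZPE = (1/2) * sum of vibrational frequencies."""
    return 0.5 * sum(frequencies_cm1)

# Three invented harmonic modes from a hypothetical electronic-structure run
harmonic = [3050.0, 1650.0, 750.0]   # cm^-1
lam_zpe = 0.986                      # B3LYP ZPE scale factor quoted in the abstract

zpe = zero_point_energy_cm1(scale_frequencies(harmonic, lam_zpe))
print(round(zpe, 1))                 # scaled ZPE in cm^-1
```

    The universal scale-factor ratios described in the abstract would then let one convert this ZPE factor into, say, a fundamental-frequency factor with a single multiplication, so only one factor per model chemistry needs to be optimized.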

  17. The Tanzania experience: clinical laboratory testing harmonization and equipment standardization at different levels of a tiered health laboratory system.

    PubMed

    Massambu, Charles; Mwangi, Christina

    2009-06-01

    The rapid scale-up of the care and treatment programs in Tanzania during the preceding 4 years has greatly increased the demand for quality laboratory services for diagnosis of HIV and monitoring patients during antiretroviral therapy. Laboratory services were not in a position to cope with this demand owing to poor infrastructure, lack of human resources, erratic and/or lack of reagent supply and commodities, and slow manual technologies. With the limited human resources in the laboratory and the need for scaling up the care and treatment program, it became necessary to install automated equipment and train personnel for the increased volume of testing and new tests across all laboratory levels. With the numerous partners procuring equipment, the possibility of a multitude of equipment platforms with attendant challenges for procurement of reagents, maintenance of equipment, and quality assurance arose. Tanzania, therefore, had to harmonize laboratory tests and standardize laboratory equipment at different levels of the laboratory network. The process of harmonization of tests and standardization of equipment included assessment of laboratories, review of guidelines, development of a national laboratory operational plan, and stakeholder advocacy. This document outlines this process.

  18. Outreach/education interface for Cryosphere models using the Virtual Ice Sheet Laboratory

    NASA Astrophysics Data System (ADS)

    Larour, E. Y.; Halkides, D. J.; Romero, V.; Cheng, D. L.; Perez, G.

    2014-12-01

    In the past decade, great strides have been made in the development of models capable of projecting the future evolution of glaciers and the polar ice sheets in a changing climate. These models are now capable of replicating some of the trends apparent in satellite observations. However, because this field is just now maturing, very few efforts have been dedicated to adapting these capabilities to education. Technologies that have been used in outreach efforts in Atmospheric and Oceanic sciences still have not been extended to Cryospheric Science. We present a cutting-edge, technologically driven virtual laboratory, geared towards outreach and K-12 education, dedicated to the polar ice sheets of Antarctica and Greenland and their role as major contributors to sea level rise in coming decades. VISL (Virtual Ice Sheet Laboratory) relies on state-of-the-art WebGL rendering of polar ice sheets, Android/iPhone and web portability using JavaScript, as well as C++ simulations (back-end) based on the Ice Sheet System Model, the NASA model for simulating the evolution of polar ice sheets. Using VISL, educators and students can have an immersive experience of the world of polar ice sheets while at the same time exercising the capabilities of a state-of-the-art climate model, all embedded in an education experience that follows the new STEM standards for education. This work was performed at the California Institute of Technology's Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration's Cryosphere Science Program.

  19. The Relationships between University Students' Chemistry Laboratory Anxiety, Attitudes, and Self-Efficacy Beliefs

    ERIC Educational Resources Information Center

    Kurbanoglu, N. Izzet; Akin, Ahmet

    2010-01-01

    The aim of this study is to examine the relationships between chemistry laboratory anxiety, chemistry attitudes, and self-efficacy. Participants were 395 university students. Participants completed the Chemistry Laboratory Anxiety Scale, the Chemistry Attitudes Scale, and the Self-efficacy Scale. Results showed that chemistry laboratory anxiety…

  20. An interactive display system for large-scale 3D models

    NASA Astrophysics Data System (ADS)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware, reconstructed 3D models are growing in both scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing-power limitations, common 3D display software such as MeshLab has difficulty achieving real-time display of, and interaction with, large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (levels of detail) model of the reconstructed 3D scene in advance and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption is significantly decreased via an internal/external memory exchange mechanism, so that a large-scale reconstructed scene with millions of 3D points or triangular meshes can be displayed on a regular PC with only 4 GB of RAM.
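
    The out-of-core, view-dependent rendering the abstract describes hinges on choosing, per frame, the coarsest level of detail whose projected geometric error stays below a pixel tolerance. The sketch below is a generic LOD-selection heuristic under invented parameters (error halving per level, a 1-pixel tolerance), not the paper's actual scheme.

```python
def choose_lod(distance, base_error, levels, viewport_scale=1000.0, max_px_err=1.0):
    """Return the coarsest LOD index whose screen-space error is acceptable.

    Level 0 is the finest mesh; each coarser level doubles the geometric
    error (an assumption of this sketch). Screen-space error shrinks with
    viewing distance, so far-away geometry can use very coarse levels.
    """
    for level in range(levels - 1, -1, -1):            # try coarsest first
        geometric_error = base_error * (2 ** level)
        screen_error = geometric_error * viewport_scale / max(distance, 1e-9)
        if screen_error <= max_px_err:
            return level
    return 0                                           # fall back to finest

near = choose_lod(distance=10.0, base_error=0.001, levels=6)
far = choose_lod(distance=1000.0, base_error=0.001, levels=6)
print(near, far)   # nearby geometry needs a finer level than distant geometry
```

    Running the selector per node of a scene hierarchy, and streaming only the chosen levels from disk, is the essence of the internal/external memory exchange the abstract mentions.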

  1. Scaling and percolation in the small-world network model

    NASA Astrophysics Data System (ADS)

    Newman, M. E. J.; Watts, D. J.

    1999-12-01

    In this paper we study the small-world network model of Watts and Strogatz, which mimics some aspects of the structure of networks of social interactions. We argue that there is one nontrivial length-scale in the model, analogous to the correlation length in other systems, which is well-defined in the limit of infinite system size and which diverges continuously as the randomness in the network tends to zero, giving a normal critical point in this limit. This length-scale governs the crossover from large- to small-world behavior in the model, as well as the number of vertices in a neighborhood of given radius on the network. We derive the value of the single critical exponent controlling behavior in the critical region and the finite size scaling form for the average vertex-vertex distance on the network, and, using series expansion and Padé approximants, find an approximate analytic form for the scaling function. We calculate the effective dimension of small-world graphs and show that this dimension varies as a function of the length-scale on which it is measured, in a manner reminiscent of multifractals. We also study the problem of site percolation on small-world networks as a simple model of disease propagation, and derive an approximate expression for the percolation probability at which a giant component of connected vertices first forms (in epidemiological terms, the point at which an epidemic occurs). The typical cluster radius satisfies the expected finite size scaling form with a cluster size exponent close to that for a random graph. All our analytic results are confirmed by extensive numerical simulations of the model.
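
    The crossover described above is easy to reproduce numerically. The pure-Python sketch below builds a Watts-Strogatz-style graph (ring lattice plus random rewiring, with invented sizes and probabilities rather than parameters from the paper) and measures the average vertex-vertex distance by breadth-first search:

```python
import random
from collections import deque

def small_world(n, k, p, seed=1):
    """Ring of n vertices, each joined to its k nearest neighbours on either
    side; every edge is then rewired to a random vertex with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            a, b = i, (i + j) % n
            if rng.random() < p:                 # rewire into a shortcut
                b = rng.randrange(n)
                while b == a or b in adj[a]:
                    b = rng.randrange(n)
            adj[a].add(b)
            adj[b].add(a)
    return adj

def mean_distance(adj):
    """Average vertex-vertex distance, via BFS from every vertex."""
    n, total = len(adj), 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

L_lattice = mean_distance(small_world(200, 2, 0.0))   # large-world regime
L_rewired = mean_distance(small_world(200, 2, 0.1))   # small-world regime
print(round(L_lattice, 2), round(L_rewired, 2))       # shortcuts collapse the mean distance
```

    With the rewiring probability set to zero the graph is a "large world" whose mean distance grows linearly with n; a few shortcuts already push it into the small-world regime, which is the crossover governed by the length scale discussed in the abstract.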

  2. Black carbon absorption at the global scale is affected by particle-scale diversity in composition.

    PubMed

    Fierce, Laura; Bond, Tami C; Bauer, Susanne E; Mena, Francisco; Riemer, Nicole

    2016-09-01

    Atmospheric black carbon (BC) exerts a strong, but uncertain, warming effect on the climate. BC that is coated with non-absorbing material absorbs more strongly than the same amount of BC in an uncoated particle, but the magnitude of this absorption enhancement (Eabs) is not well constrained. Modelling studies and laboratory measurements have found stronger absorption enhancement than has been observed in the atmosphere. Here, using a particle-resolved aerosol model to simulate diverse BC populations, we show that absorption is overestimated by as much as a factor of two if diversity is neglected and population-averaged composition is assumed across all BC-containing particles. If, instead, composition diversity is resolved, we find Eabs=1-1.5 at low relative humidity, consistent with ambient observations. This study offers not only an explanation for the discrepancy between modelled and observed absorption enhancement, but also demonstrates how particle-scale simulations can be used to develop relationships for global-scale models.
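
    The population-averaging artifact has a simple mathematical core: absorption enhancement is a saturating (concave) function of how much coating a particle carries, so by Jensen's inequality the enhancement evaluated at the average coating exceeds the average of the per-particle enhancements. The toy Python sketch below illustrates this with an invented saturating curve and invented coating ratios; it is not the particle-resolved model used in the study.

```python
def enhancement(r):
    """Invented saturating enhancement curve: rises from 1 toward 1.5
    as the coating-to-BC mass ratio r grows (for illustration only)."""
    return 1.0 + 0.5 * r / (1.0 + r)

# A diverse BC population: mostly thin coatings plus one thick one
coating_ratios = [0.1, 0.2, 0.3, 0.5, 9.0]

# Particle-resolved: average the per-particle enhancements
particle_resolved = sum(enhancement(r) for r in coating_ratios) / len(coating_ratios)

# Population-averaged: evaluate the curve at the mean coating ratio
bulk_averaged = enhancement(sum(coating_ratios) / len(coating_ratios))

print(round(particle_resolved, 3), round(bulk_averaged, 3))
```

    In this toy population the bulk-averaged value overstates the particle-resolved one, the same direction of bias the abstract reports for global models that neglect composition diversity.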

  3. Black Carbon Absorption at the Global Scale Is Affected by Particle-Scale Diversity in Composition

    NASA Technical Reports Server (NTRS)

    Fierce, Laura; Bond, Tami C.; Bauer, Susanne E.; Mena, Francisco; Riemer, Nicole

    2016-01-01

    Atmospheric black carbon (BC) exerts a strong, but uncertain, warming effect on the climate. BC that is coated with non-absorbing material absorbs more strongly than the same amount of BC in an uncoated particle, but the magnitude of this absorption enhancement (E(sub abs)) is not well constrained. Modelling studies and laboratory measurements have found stronger absorption enhancement than has been observed in the atmosphere. Here, using a particle-resolved aerosol model to simulate diverse BC populations, we show that absorption is overestimated by as much as a factor of two if diversity is neglected and population-averaged composition is assumed across all BC-containing particles. If, instead, composition diversity is resolved, we find E(sub abs) = 1 - 1.5 at low relative humidity, consistent with ambient observations. This study offers not only an explanation for the discrepancy between modelled and observed absorption enhancement, but also demonstrates how particle-scale simulations can be used to develop relationships for global-scale models.

  4. A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation

    NASA Astrophysics Data System (ADS)

    Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.

    2016-12-01

    Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.

  5. Customer satisfaction assessment at the Pacific Northwest National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DN Anderson; ML Sours

    2000-03-23

    The Pacific Northwest National Laboratory (PNNL) is developing and implementing a customer satisfaction assessment program (CSAP) to assess the quality of research and development provided by the laboratory. This report presents the customer survey component of the PNNL CSAP. The customer survey questionnaire is composed of two major sections: Strategic Value and Project Performance. Both sections contain a set of questions that can be answered with a 5-point Likert scale response. The Strategic Value section consists of five questions that are designed to determine if a project directly contributes to critical future national needs. The Project Performance section consists of nine questions designed to determine PNNL performance in meeting customer expectations. A statistical model for customer survey data is developed and this report discusses how to analyze the data with this model. The properties of the statistical model can be used to establish a gold standard or performance expectation for the laboratory, and then to assess progress. The gold standard is defined using laboratory management input--answers to four questions, in terms of the information obtained from the customer survey: (1) What should the average Strategic Value be for the laboratory project portfolio? (2) What Strategic Value interval should include most of the projects in the laboratory portfolio? (3) What should average Project Performance be for projects with a Strategic Value of about 2? (4) What should average Project Performance be for projects with a Strategic Value of about 4? To be able to provide meaningful answers to these questions, the PNNL customer survey will need to be fully implemented for several years, thus providing a link between management perceptions of laboratory performance and customer survey data.

  6. Machine learning to construct reduced-order models and scaling laws for reactive-transport applications

    NASA Astrophysics Data System (ADS)

    Mudunuru, M. K.; Karra, S.; Vesselinov, V. V.

    2017-12-01

    The efficiency of many hydrogeological applications, such as reactive transport and contaminant remediation, depends strongly on the macroscopic mixing occurring in the aquifer. For remediation activities, it is fundamental to enhance and control this mixing through the structure of the flow field, which is shaped by groundwater pumping/extraction and by the heterogeneity and anisotropy of the flow medium. However, the relative importance of these hydrogeological parameters for the mixing process is not well studied, partly because understanding and quantifying mixing requires multiple runs of high-fidelity numerical simulations over various subsurface model inputs. Typically, high-fidelity simulations of existing subsurface models take hours to complete on several thousand processors, so they may not be feasible for studying the importance and impact of model inputs on mixing. Hence, there is a pressing need for computationally efficient models that accurately predict the desired quantities of interest (QoIs) for remediation and reactive-transport applications. An attractive way to construct computationally efficient models is through reduced-order modeling using machine learning; these approaches can substantially improve our capabilities to model and predict the remediation process. Reduced-Order Models (ROMs) are similar to analytical solutions or lookup tables, but the method by which ROMs are constructed is different. Here, we present a physics-informed ML framework to construct ROMs based on high-fidelity numerical simulations. First, random forests, the F-test, and mutual information are used to evaluate the importance of model inputs. Second, SVMs are used to construct ROMs based on these inputs. These ROMs are then used to understand mixing under perturbed vortex flows. Finally, we construct scaling laws for certain important QoIs such as degree of mixing and product yield. Scaling law parameters dependence on model inputs are

  7. Spatial calibration and temporal validation of flow for regional scale hydrologic modeling

    USDA-ARS?s Scientific Manuscript database

    Physically based regional scale hydrologic modeling is gaining importance for planning and management of water resources. Calibration and validation of such regional scale model is necessary before applying it for scenario assessment. However, in most regional scale hydrologic modeling, flow validat...

  8. Modeling laser-plasma acceleration in the laboratory frame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2011-01-01

    A simulation of laser-plasma acceleration in the laboratory frame. Both the laser and the wakefield buckets must be resolved over the entire domain of the plasma, requiring many cells and many time steps. While researchers often use a simulation window that moves with the pulse, this reduces only the multitude of cells, not the multitude of time steps. For an artistic impression of how to solve the simulation by using the boosted-frame method, watch the video "Modeling laser-plasma acceleration in the wakefield frame".

  9. Modeling, simulation, and analysis at Sandia National Laboratories for health care systems

    NASA Astrophysics Data System (ADS)

    Polito, Joseph

    1994-12-01

    Modeling, Simulation, and Analysis are special competencies of the Department of Energy (DOE) National Laboratories which have been developed and refined through years of national defense work. Today, many of these skills are being applied to the problem of understanding the performance of medical devices and treatments. At Sandia National Laboratories we are developing models at all three levels of health care delivery: (1) phenomenology models for Observation and Test, (2) model-based outcomes simulations for Diagnosis and Prescription, and (3) model-based design and control simulations for the Administration of Treatment. A sampling of specific applications includes non-invasive sensors for blood glucose, ultrasonic scanning for development of prosthetics, automated breast cancer diagnosis, laser burn debridement, surgical staple deformation, minimally invasive control for administration of a photodynamic drug, and human-friendly decision support aids for computer-aided diagnosis. These and other projects are being performed at Sandia with support from the DOE and in cooperation with medical research centers and private companies. Our objective is to leverage government engineering, modeling, and simulation skills with the biotechnical expertise of the health care community to create a more knowledge-rich environment for decision making and treatment.

  10. A rheological model for elastohydrodynamic contacts based on primary laboratory data

    NASA Technical Reports Server (NTRS)

    Bair, S.; Winer, W. O.

    1979-01-01

    A shear rheological model based on primary laboratory data is proposed for concentrated contact lubrication. The model is a Maxwell model modified with a limiting shear stress. Three material properties are required: Low shear stress viscosity, limiting elastic shear modulus, and the limiting shear stress the material can withstand. All three are functions of temperature and pressure. In applying the model to EHD contacts the predicted response possesses the characteristics expected from several experiments reported in the literature and, in one specific case where direct comparison could be made, good numerical agreement is shown.

  11. PEP Support: Laboratory Scale Leaching and Permeate Stability Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, Renee L.; Peterson, Reid A.; Rinehart, Donald E.

    2010-05-21

    This report documents results from a variety of activities requested by the Hanford Tank Waste Treatment and Immobilization Plant (WTP). The activities, which relate to caustic leaching, oxidative leaching, permeate precipitation behavior of waste, and chromium (Cr) leaching, are: • Model Input Boehmite Leaching Tests • Pretreatment Engineering Platform (PEP) Support Leaching Tests • PEP Parallel Leaching Tests • Precipitation Study Results • Cr Caustic and Oxidative Leaching Tests. Leaching test activities using the PEP simulant provided input to a boehmite dissolution model and determined the effect of temperature on mass loss during caustic leaching, the reaction rate constant for the boehmite dissolution, and the effect of aeration in enhancing the chromium dissolution during caustic leaching. Other tests were performed in parallel with the PEP tests to support the development of scaling factors for caustic and oxidative leaching. Another study determined whether precipitate formed in the wash solution after the caustic leach in the PEP. Finally, the leaching characteristics of different chromium compounds under different conditions were examined to determine the best one to use in further testing.

  12. Estimating porosity and solid dielectric permittivity in the Miami Limestone using high-frequency ground penetrating radar (GPR) measurements at the laboratory scale

    NASA Astrophysics Data System (ADS)

    Mount, Gregory J.; Comas, Xavier

    2014-10-01

    Subsurface water flow in South Florida is largely controlled by the heterogeneous nature of the karst limestone in the Biscayne aquifer and its upper formation, the Miami Limestone. These heterogeneities are amplified by dissolution structures that induce changes in the aquifer's material and physical properties (i.e., porosity and dielectric permittivity) and create preferential flow paths. Understanding such patterns is critical for the development of realistic groundwater flow models, particularly in the Everglades, where restoration of hydrological conditions is intended. In this work, we used noninvasive ground penetrating radar (GPR) to estimate the spatial variability in porosity and the dielectric permittivity of the solid phase of the limestone at centimeter-scale resolution to evaluate the potential for field-based GPR studies. A laboratory setup that included high-frequency GPR measurements under completely unsaturated and saturated conditions was used to estimate changes in electromagnetic wave velocity through Miami Limestone samples. The Complex Refractive Index Model was used to derive estimates of porosity and dielectric permittivity of the solid phase of the limestone. Porosity estimates of the samples ranged between 45.2 and 66.0% and showed good correspondence with estimates of porosity using analytical and digital image techniques. Solid dielectric permittivity values ranged between 7.0 and 13.0. This study shows the ability of GPR to image the spatial variability of porosity and dielectric permittivity in the Miami Limestone and shows potential for expanding these results to larger scales and other karst aquifers.
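
    A common way to apply the Complex Refractive Index Model (CRIM) with two-state GPR data is to write it once with air-filled and once with water-filled pores and solve the pair for porosity and solid permittivity. The sketch below does that inversion in Python; the two velocities are invented sample values and the water permittivity of 80 is a textbook approximation, so treat the numbers as illustrative rather than results from the paper.

```python
import math

C = 0.3              # speed of light in vacuum, m/ns
KAPPA_WATER = 80.0   # relative permittivity of water (approximate)
KAPPA_AIR = 1.0

def crim_invert(v_dry, v_sat):
    """Solve the two-state CRIM equations for porosity and solid permittivity.

    CRIM: sqrt(k_bulk) = (1 - phi) * sqrt(k_solid) + phi * sqrt(k_fluid),
    written once with air in the pores (dry) and once with water (saturated).
    """
    s_dry = C / v_dry                # sqrt of dry bulk permittivity
    s_sat = C / v_sat                # sqrt of saturated bulk permittivity
    phi = (s_sat - s_dry) / (math.sqrt(KAPPA_WATER) - math.sqrt(KAPPA_AIR))
    k_solid = ((s_dry - phi * math.sqrt(KAPPA_AIR)) / (1.0 - phi)) ** 2
    return phi, k_solid

# Invented GPR velocities (m/ns) for one hypothetical limestone sample
phi, k_solid = crim_invert(v_dry=0.150, v_sat=0.0502)
print(round(phi, 2), round(k_solid, 1))
```

    For these invented velocities the inversion returns a porosity near 0.50 and a solid permittivity near 9, comfortably inside the 45.2-66.0% and 7.0-13.0 ranges the abstract reports.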

  13. A process proof test for model concepts: Modelling the meso-scale

    NASA Astrophysics Data System (ADS)

    Hellebrand, Hugo; Müller, Christoph; Matgen, Patrick; Fenicia, Fabrizio; Savenije, Huub

    In hydrological modelling the use of detailed soil data is sometimes troublesome, since often these data are hard to obtain and, if available at all, difficult to interpret and process in a way that makes them meaningful for the model at hand. Intuitively, the understanding and mapping of dominant runoff processes in the soil show high potential for improving hydrological models. In this study a labour-intensive methodology to assess dominant runoff processes is simplified in such a way that detailed soil maps are no longer needed. Nonetheless, there is an ongoing debate on how to integrate this type of information in hydrological models. In this study, dominant runoff processes (DRP) are mapped for meso-scale basins using the permeability of the substratum, land use information and the slope in a GIS. During a field campaign the processes are validated and for each DRP assumptions are made concerning their water storage capacity. The latter is done by means of combining soil data obtained during the field campaign with soil data obtained from the literature. Second, several parsimoniously parameterized conceptual hydrological models are used that incorporate certain aspects of the DRP. The results of these models are compared with a benchmark model, in which the soil is represented by only one lumped parameter, to test the contribution of the DRP to hydrological models. The proposed methodology is tested for 15 meso-scale river basins located in Luxembourg. The main goal of this study is to investigate whether integrating dominant runoff processes, which carry high information content concerning soil characteristics, into hydrological models improves simulation results with a view to regionalization and predictions in ungauged basins. The regionalization procedure gave no clear results. The calibration procedure and the well-mixed discharge signal of the calibration basins are considered major causes for this and it made the deconvolution of

  14. Posttest analysis of the 1:6-scale reinforced concrete containment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pfeiffer, P.A.; Kennedy, J.M.; Marchertas, A.H.

    A prediction of the response of the Sandia National Laboratories 1:6-scale reinforced concrete containment model test was made by Argonne National Laboratory. ANL, along with nine other organizations, performed a detailed nonlinear response analysis of the 1:6-scale model containment subjected to overpressurization in the fall of 1986. The two-dimensional code TEMP-STRESS and the three-dimensional NEPTUNE code were utilized (1) to predict the global response of the structure, (2) to identify global failure sites and the corresponding failure pressures, and (3) to identify some local failure sites and pressure levels. A series of axisymmetric models was studied with the two-dimensional computer program TEMP-STRESS. The comparison of these pretest computations with test data from the containment model has provided a test of the capability of the respective finite element codes to predict global failure modes, and hence serves as a validation of these codes. Only the two-dimensional analyses will be discussed in this paper. 3 refs., 10 figs.

  15. From catchment scale hydrologic processes to numerical models and robust predictions of climate change impacts at regional scales

    NASA Astrophysics Data System (ADS)

    Wagener, T.

    2017-12-01

    Current societal problems and questions demand that we increasingly build hydrologic models for regional or even continental scale assessment of global change impacts. Such models offer new opportunities for scientific advancement, for example by enabling comparative hydrology or connectivity studies, and for improved support of water management decisions, since we might better understand regional impacts on water resources from large-scale phenomena such as droughts. On the other hand, we are faced with epistemic uncertainties when we move up in scale. The term epistemic uncertainty describes those uncertainties that are not well determined by historical observations. This lack of determination can be because the future is not like the past (e.g. due to climate change), because the historical data is unreliable (e.g. because it is imperfectly recorded from proxies or missing), or because it is scarce (either because measurements are not available at the right scale or there is no observation network available at all). In this talk I will explore: (1) how we might build a bridge between what we have learned about catchment-scale processes and hydrologic model development and evaluation at larger scales; (2) how we can understand the impact of epistemic uncertainty in large-scale hydrologic models; and (3) how we might utilize large-scale hydrologic predictions to understand climate change impacts, e.g. on infectious disease risk.

  16. Modelling fragile X syndrome in the laboratory setting: A behavioral perspective.

    PubMed

    Melancia, Francesca; Trezza, Viviana

    2018-04-25

    Fragile X syndrome is the most common form of inherited mental retardation and the most frequent monogenic cause of syndromic autism spectrum disorders. The syndrome is caused by the loss of the Fragile X Mental Retardation Protein (FMRP), a key RNA-binding protein involved in synaptic plasticity and neuronal morphology. Patients show intellectual disability, social deficits, repetitive behaviors and impairments in social communication. The aim of this review is to outline the importance of behavioral phenotyping of animal models of FXS from a developmental perspective, by showing how the behavioral characteristics of FXS at the clinical level can be translated into effective, developmentally-specific and clinically meaningful behavioral readouts in the laboratory setting. After introducing the behavioral features, diagnostic criteria and off-label pharmacotherapy of FXS, we outline how FXS-relevant behavioral features can be modelled in laboratory animals in the course of development: we review the progress to date, discuss how behavioral phenotyping in animal models of FXS is essential to identify potential treatments, and discuss caveats and future directions in this research field. Copyright © 2018. Published by Elsevier B.V.

  17. Evaluation of Icing Scaling on Swept NACA 0012 Airfoil Models

    NASA Technical Reports Server (NTRS)

    Tsao, Jen-Ching; Lee, Sam

    2012-01-01

    Icing scaling tests in the NASA Glenn Icing Research Tunnel (IRT) were performed on swept-wing models using existing recommended scaling methods that were originally developed for straight wings. Some needed modifications of the stagnation-point local collection efficiency (i.e., beta(sub 0)) calculation and the corresponding convective heat transfer coefficient for swept NACA 0012 airfoil models were studied and reported in 2009, and those correlations are used in the current study. The reference tests used a 91.4-cm chord, 152.4-cm span, adjustable-sweep airfoil model of NACA 0012 profile at velocities of 100 and 150 knot and MVD of 44 and 93 micrometers. The scale-to-reference model size ratio was 1:2.4. All tests were conducted at 0 deg angle of attack (AoA) and 45 deg sweep angle. Ice shape comparison results are presented for stagnation-point freezing fractions in the range of 0.4 to 1.0. Preliminary results showed that good scaling was achieved for the conditions tested by using the modified scaling methods developed for swept-wing icing.

  18. A classroom activity and laboratory on astronomical scale

    NASA Astrophysics Data System (ADS)

    LoPresto, Michael

    2017-10-01

    The four basic "scales" at which astronomy is studied, that of (1) the Earth-Moon system, (2) the solar system, (3) the galaxy, and (4) the universe (Fig. 1), are a common place to start an introductory astronomy course. In fact, courses and textbooks are often divided into approximately four sections based on these scales.

  19. SCALE Code System 6.2.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.

  20. SCALE Code System 6.2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  1. Round-robin analysis of the behavior of a 1:6-scale reinforced concrete containment model pressurized to failure: Posttest evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clauss, D.B.

    A 1:6-scale model of a reinforced concrete containment building was pressurized incrementally to failure at a remote site at Sandia National Laboratories. The response of the model was recorded with more than 1000 channels of data (primarily strain and displacement measurements) at 37 discrete pressure levels. The primary objective of this test was to generate data that could be used to validate methods for predicting the performance of containment buildings subject to loads beyond their design basis. Extensive analyses were conducted before the test to predict the behavior of the model. Ten organizations in Europe and the US conducted independent analyses of the model and contributed to a report on the pretest predictions. Predictions included structural response at certain predetermined locations in the model as well as capacity and failure mode. This report discusses comparisons between the pretest predictions and the experimental results. Posttest evaluations that were conducted to provide additional insight into the model behavior are also described. The significance of the analysis and testing of the 1:6-scale model to performance evaluations of actual containments subject to beyond design basis loads is also discussed. 70 refs., 428 figs., 24 tabs.

  2. Using Hybrid Techniques for Generating Watershed-scale Flood Models in an Integrated Modeling Framework

    NASA Astrophysics Data System (ADS)

    Saksena, S.; Merwade, V.; Singhofen, P.

    2017-12-01

    There is an increasing global trend towards developing large-scale flood models that account for spatial heterogeneity at watershed scales to drive future flood risk planning. Integrated surface water-groundwater modeling procedures can capture all the hydrologic processes at play during a flood event and thereby provide accurate flood outputs. Even though the advantages of integrated modeling are widely acknowledged, the complexity of integrated process representation, the computation time, and the number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed-scale flood models using a hybrid design that breaks the watershed into multiple regions of variable spatial resolution by prioritizing higher-order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing its performance with a fully-integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying the spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model's performance (NSE=0.87) is similar to that of the 2D integrated model (NSE=0.88), but the computational time is cut in half. The results suggest that significant computational efficiency can be obtained while maintaining model accuracy for large-scale flood models by using hybrid approaches for model creation.
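    The Nash-Sutcliffe efficiency (NSE) used above to compare the two models has a standard definition that can be sketched as follows; the function name and toy hydrographs are illustrative, not taken from the study:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.

    1.0 is a perfect fit; 0.0 means the model predicts no better than
    the observed mean at every timestep.
    """
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical hydrographs (m^3/s), for illustration only
observed = [10.0, 40.0, 80.0, 55.0, 30.0]
simulated = [12.0, 38.0, 75.0, 58.0, 28.0]
score = nse(observed, simulated)
```

    An NSE difference of 0.01 (0.88 vs. 0.87) is thus a very small loss of fit relative to the factor-of-two runtime saving reported.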

  3. Large-Scale Laboratory Experiments of Incipient Motion, Transport, and Fate of Underwater Munitions Under Waves, Currents, and Combined Flows

    DTIC Science & Technology

    2015-12-01

    …little or no sediment cover (e.g., such as on coral reefs) versus a sandy or muddy bottom. However, there is a dearth of direct observations made under… INTERIM REPORT: Large-Scale Laboratory Experiments of Incipient Motion, Transport, and Fate of Underwater Munitions under Waves, Currents, and Combined Flows

  4. Application and comparison of the SCS-CN-based rainfall-runoff model in meso-scale watershed and field scale

    NASA Astrophysics Data System (ADS)

    Luo, L.; Wang, Z.

    2010-12-01

    The Soil Conservation Service Curve Number (SCS-CN) based hydrologic model has been widely used for agricultural watersheds in recent years. However, relative errors arise when it is applied across differing geographical and climatological conditions. This paper introduces a more adaptable and transferable model based on the modified SCS-CN method, specialized for two research regions of different scale. Accounting for the typical conditions of the Zhanghe irrigation district in southern China, such as its hydrometeorologic and surface conditions, SCS-CN based models were established. The Xinbu-Qiao River basin (area = 1207 km2) and the Tuanlin runoff test area (area = 2.87 km2) were taken as the basin-scale and field-scale study areas in the Zhanghe irrigation district, extending the application from an ordinary meso-scale watershed to the field scale in Zhanghe's paddy-field-dominated irrigated area. Based on measured data on land use, soil classification, hydrology and meteorology, quantitative evaluations and modifications of two components, i.e., the initial loss and the runoff curve number, were proposed for the corresponding models, together with a table of CN values for different land uses and an AMC (antecedent moisture condition) grading standard fitted to the study cases. Simulation precision was increased by deriving a 12-h unit hydrograph for the field area, which was then simplified. Comparison between the scales shows that the SCS-CN model is used more effectively at the field scale after its parameters are calibrated at the basin scale. These results can help reveal the rainfall-runoff behavior of the district. Differences between the established SCS-CN model parameters of the two study regions are also considered. Varied forms of land use and the impacts of human activities were important factors affecting the rainfall-runoff relations in the Zhanghe irrigation district.
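    For reference, the classical (unmodified) SCS-CN runoff equation that the paper builds on can be sketched as follows; the function and example values are illustrative and do not reproduce the paper's regionally calibrated parameters:

```python
def scs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff depth (mm) from event rainfall via the SCS-CN method.

    p_mm: event rainfall depth (mm); cn: curve number (0 < CN <= 100);
    ia_ratio: initial-abstraction ratio (0.2 is the classical default,
    one of the coefficients such studies modify regionally).
    """
    s = 25400.0 / cn - 254.0      # potential maximum retention S (mm)
    ia = ia_ratio * s             # initial abstraction Ia (mm)
    if p_mm <= ia:
        return 0.0                # rainfall fully absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

    A higher CN (e.g., saturated paddy fields under wet antecedent conditions) yields more runoff for the same storm, which is why the regional CN table and AMC grading standard matter.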

  5. Multi-scale Modeling of the Evolution of a Large-Scale Nourishment

    NASA Astrophysics Data System (ADS)

    Luijendijk, A.; Hoonhout, B.

    2016-12-01

    Morphological predictions are often computed using a single morphological model, commonly forced with schematized boundary conditions representing the time scale of the prediction. Recent model developments now allow us to think and act differently. This study presents recent developments in coastal morphological modeling focusing on flexible meshes, flexible coupling between models operating at different time scales, and a recently developed morphodynamic model for the intertidal and dry beach. This integrated modeling approach is applied to the Sand Engine mega-nourishment in The Netherlands to illustrate the added value of the integrated approach in both accuracy and computational efficiency. The state-of-the-art Delft3D Flexible Mesh (FM) model is applied at the study site under moderate wave conditions. One advantage is that the flexibility of the mesh structure allows a better representation of the water exchange with the lagoon, and of the corresponding morphological behavior, than the curvilinear grid used in the previous version of Delft3D. The XBeach model is applied to compute the morphodynamic response to storm events in detail, incorporating long-wave effects on bed level changes. The recently developed aeolian transport and bed change model AeoLiS is used to compute bed changes in the intertidal and dry beach area. To enable flexible couplings between the three above-mentioned models, a component-based environment has been developed using the BMI method. This allows a serial coupling of Delft3D FM and XBeach steered by a control module that uses a hydrodynamic time series as input (see figure). In addition, a parallel online coupling, with information exchange at each timestep, is made with the AeoLiS model, which predicts bed level changes in the intertidal and dry beach area. This study presents the first years of evolution of the Sand Engine computed with the integrated modelling approach. Detailed comparisons

  6. AstraZeneca and Covance Laboratories Clinical Bioanalysis Alliance: an evolutionary outsourcing model.

    PubMed

    Arfvidsson, Cecilia; Severin, Paul; Holmes, Victoria; Mitchell, Richard; Bailey, Christopher; Cape, Stephanie; Li, Yan; Harter, Tammy

    2017-08-01

    The AstraZeneca and Covance Laboratories Clinical Bioanalysis Alliance (CBioA) was launched in 2011 after a period of global economic recession. In this challenging environment, AstraZeneca elected to move to a full and centralized outsourcing model that could optimize the number of people supporting bioanalytical work and reduce the analytical cost. This paper describes the key aspects of CBioA, the innovative operational model implemented, and our ways of ensuring this was much more than simply a cost reduction exercise. As we have recently passed the first 5-year cycle, this paper also summarizes some of the concluding benefits, wins and lessons learned, and how we now plan to extend and develop the relationship even further moving into a new clinical laboratory partnership.

  7. Multi-scale habitat selection modeling: A review and outlook

    Treesearch

    Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman

    2016-01-01

    Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.

  8. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    NASA Astrophysics Data System (ADS)

    Tao, W. K.

    2017-12-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and increased the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional scale model (the NASA-unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator that uses NASA high-resolution satellite data to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented, along with how the multi-satellite simulator can be used to improve the representation of precipitation processes.

  9. Parameter Sensitivity and Laboratory Benchmarking of a Biogeochemical Process Model for Enhanced Anaerobic Dechlorination

    NASA Astrophysics Data System (ADS)

    Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.

    2008-12-01

    , particularly at the laboratory scale.

  10. A tool for multi-scale modelling of the renal nephron

    PubMed Central

    Nickerson, David P.; Terkildsen, Jonna R.; Hamilton, Kirk L.; Hunter, Peter J.

    2011-01-01

    We present the development of a tool, which provides users with the ability to visualize and interact with a comprehensive description of a multi-scale model of the renal nephron. A one-dimensional anatomical model of the nephron has been created and is used for visualization and modelling of tubule transport in various nephron anatomical segments. Mathematical models of nephron segments are embedded in the one-dimensional model. At the cellular level, these segment models use models encoded in CellML to describe cellular and subcellular transport kinetics. A web-based presentation environment has been developed that allows the user to visualize and navigate through the multi-scale nephron model, including simulation results, at the different spatial scales encompassed by the model description. The Zinc extension to Firefox is used to provide an interactive three-dimensional view of the tubule model and the native Firefox rendering of scalable vector graphics is used to present schematic diagrams for cellular and subcellular scale models. The model viewer is embedded in a web page that dynamically presents content based on user input. For example, when viewing the whole nephron model, the user might be presented with information on the various embedded segment models as they select them in the three-dimensional model view. Alternatively, the user chooses to focus the model viewer on a cellular model located in a particular nephron segment in order to view the various membrane transport proteins. Selecting a specific protein may then present the user with a description of the mathematical model governing the behaviour of that protein—including the mathematical model itself and various simulation experiments used to validate the model against the literature. PMID:22670210

  11. Anaerobic Digestion of Laminaria japonica Waste from Industrial Production Residues in Laboratory- and Pilot-Scale.

    PubMed

    Barbot, Yann Nicolas; Thomsen, Claudia; Thomsen, Laurenz; Benz, Roland

    2015-09-18

    The cultivation of macroalgae to supply the biofuel, pharmaceutical or food industries generates a considerable amount of organic residue, which represents a potential substrate for biomethanation. Its use optimizes the total resource exploitation by the simultaneous disposal of waste biomaterials. In this study, we explored the biochemical methane potential (BMP) and biomethane recovery of industrial Laminaria japonica waste (LJW) in batch, continuous laboratory and pilot-scale trials. Thermo-acidic pretreatment with industry-grade HCl or industrial flue gas condensate (FGC), as well as a co-digestion approach with maize silage (MS) did not improve the biomethane recovery. BMPs between 172 mL and 214 mL g(-1) volatile solids (VS) were recorded. We proved the feasibility of long-term continuous anaerobic digestion with LJW as sole feedstock showing a steady biomethane production rate of 173 mL g(-1) VS. The quality of fermentation residue was sufficient to serve as biofertilizer, with enriched amounts of potassium, sulfur and iron. We further demonstrated the upscaling feasibility of the process in a pilot-scale system where a CH₄ recovery of 189 L kg(-1) VS was achieved and a biogas composition of 55% CH₄ and 38% CO₂ was recorded.

  12. Use of Laboratory Data to Model Interstellar Chemistry

    NASA Technical Reports Server (NTRS)

    Vidali, Gianfranco; Roser, J. E.; Manico, G.; Pirronello, V.

    2006-01-01

    Our laboratory research program concerns the formation of molecules on dust grain analogues under conditions mimicking interstellar medium environments. Using surface science techniques, over the last ten years we have investigated the formation of molecular hydrogen and other molecules on different types of dust grain analogues. We analyzed the results to extract quantitative information on the processes of molecule formation on, and ejection from, dust grain analogues. The usefulness of these data lies in the fact that they have been employed by theoreticians in models of the chemical evolution of ISM environments.

  13. Consistency between hydrological models and field observations: Linking processes at the hillslope scale to hydrological responses at the watershed scale

    USGS Publications Warehouse

    Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld, H.J.; Peters, N.E.; Freer, J.E.

    2009-01-01

    The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
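    The contrast the authors draw between a single linear reservoir and parallel linear reservoirs can be illustrated numerically; a sketch under the standard assumption Q(t) = sum_i Q0_i exp(-t/tau_i), with arbitrary reservoir parameters not taken from the study:

```python
import numpy as np

def recession(t, q0, tau):
    """Discharge from parallel linear reservoirs: Q(t) = sum_i q0_i * exp(-t / tau_i)."""
    t = np.asarray(t, dtype=float)
    q0 = np.asarray(q0, dtype=float)
    tau = np.asarray(tau, dtype=float)
    return (q0[None, :] * np.exp(-t[:, None] / tau[None, :])).sum(axis=1)

t = np.linspace(0.0, 10.0, 201)                    # days
q_single = recession(t, [1.0], [2.0])              # one reservoir (hillslope-like)
q_parallel = recession(t, [0.8, 0.2], [0.5, 5.0])  # fast + slow landscape units

# For the single reservoir, -dQ/dt is strictly proportional to Q (log Q is
# linear in t); for the parallel pair, the dQ/dt-Q relation deviates from
# linearity, mimicking the scale-dependent behaviour seen at the outlet.
```

    Each reservoir term would correspond to one landscape type in the authors' conceptual model.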

  14. CHARACTERISTIC LENGTH SCALE OF INPUT DATA IN DISTRIBUTED MODELS: IMPLICATIONS FOR MODELING GRID SIZE. (R824784)

    EPA Science Inventory

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model resp...

  15. End-effects-regime in full scale and lab scale rocket nozzles

    NASA Astrophysics Data System (ADS)

    Rojo, Raymundo; Tinney, Charles; Baars, Woutijn; Ruf, Joseph

    2014-11-01

    Modern rockets utilize thrust-optimized parabolic-contour nozzles for their high performance and reliability. However, the evolving internal flow structures within these high-area-ratio rocket nozzles during start-up generate powerful vibro-acoustic loads that act on the launch vehicle. Modern rockets must be designed to accommodate these heavy loads or risk catastrophic failure. This study quantifies a particular event referred to as the "end-effects regime," the largest source of vibro-acoustic loading during start-up [Nave & Coffey, AIAA Paper 1973-1284]. Measurements from full-scale ignitions are compared with aerodynamically scaled representations in a fully anechoic chamber. Laboratory-scale data are then matched with both static and dynamic wall pressure measurements to capture the associated shock structures within the nozzle. The event generated during the "end-effects regime" was successfully reproduced in the lab-scale models and was characterized in terms of its mean, variance and skewness, as well as the spectral properties of the signal obtained by way of time-frequency analyses.
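    The moment-based characterization mentioned at the end (mean, variance, skewness of the wall-pressure signal) can be sketched with the standard library; the function name and sample data below are illustrative:

```python
import math
import statistics

def signal_moments(x):
    """Return (mean, variance, skewness) of a sampled signal.

    Variance is the population variance; skewness is the population
    (Fisher-Pearson) third standardized moment, zero for a symmetric signal.
    """
    n = len(x)
    mean = statistics.fmean(x)
    var = statistics.pvariance(x, mu=mean)
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in x) / (n * std ** 3) if std > 0 else 0.0
    return mean, var, skew
```

    A strongly skewed pressure signal is one signature of the intermittent loading events this kind of study characterizes.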

  16. Effects of Combined Hands-on Laboratory and Computer Modeling on Student Learning of Gas Laws: A Quasi-Experimental Study

    ERIC Educational Resources Information Center

    Liu, Xiufeng

    2006-01-01

    Based on current theories of chemistry learning, this study intends to test a hypothesis that computer modeling enhanced hands-on chemistry laboratories are more effective than hands-on laboratories or computer modeling laboratories alone in facilitating high school students' understanding of chemistry concepts. Thirty-three high school chemistry…

  17. Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology

    NASA Astrophysics Data System (ADS)

    Macioł, Piotr; Michalik, Kazimierz

    2016-10-01

    Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demands. Among other solutions, the parallelization of multiscale computations is promising. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models employing the MatCalc thermodynamic simulator. The main issues investigated in this work are: (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in computation quality enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law. The problem of 'delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
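    Amdahl's law, used here to evaluate speed-up, takes the form below; a minimal sketch in which the parallel fraction is a placeholder, since the actual fraction of fine-scale work is problem-specific:

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Ideal Amdahl's-law speedup when a fraction p of the runtime parallelizes.

    S(n) = 1 / ((1 - p) + p / n); the serial part (1 - p) caps the
    achievable speedup at 1 / (1 - p) no matter how many workers are added.
    """
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_workers)
```

    With, say, 90% of the runtime in concurrently computed fine-scale sub-models, 8 workers give a speed-up of about 4.7 rather than 8, which is the kind of bound such an evaluation exposes.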

  18. A new predictive indicator for development of pressure ulcers in bedridden patients based on common laboratory tests results.

    PubMed

    Hatanaka, N; Yamamoto, Y; Ichihara, K; Mastuo, S; Nakamura, Y; Watanabe, M; Iwatani, Y

    2008-04-01

    Various scales have been devised to predict the development of pressure ulcers on the basis of clinical and laboratory data, such as the Braden Scale (Braden score), which is used to monitor the activity and skin condition of bedridden patients. However, none of these scales facilitates clinically reliable prediction. The aim was to develop a clinical-laboratory-data-based predictive equation for the development of pressure ulcers. Subjects were 149 hospitalised patients with respiratory disorders who were monitored for the development of pressure ulcers over a 3-month period. The proportional hazards model (Cox regression) was used to analyse the results of 12 basic laboratory tests on the day of hospitalisation in comparison with the Braden score. Pressure ulcers developed in 38 patients within the study period. A Cox regression model consisting solely of Braden scale items showed that none of these items contributed significantly to predicting pressure ulcers. Rather, a combination of haemoglobin (Hb), C-reactive protein (CRP), albumin (Alb), age, and gender produced the best model for prediction. Using this set of explanatory variables, we created a new indicator based on a multiple logistic regression equation. The new indicator showed high sensitivity (0.73) and specificity (0.70), and its diagnostic power was higher than that of Alb, Hb, CRP, or the Braden score alone. The new indicator may therefore be a more useful clinical tool for predicting pressure ulcers than the Braden score. It warrants verification studies to facilitate its clinical implementation in the future.
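    The indicator described is a multiple logistic regression over five predictors. A sketch of its general form is below; the coefficients are hypothetical placeholders (the abstract does not report fitted values), chosen only so their signs match the reported directions (lower Hb and Alb, higher CRP, greater age, and male sex increase risk):

```python
import math

# Hypothetical coefficients, for illustration only -- not the study's fit
COEF = {"intercept": -2.0, "hb_g_dl": -0.3, "crp_mg_dl": 0.15,
        "alb_g_dl": -0.8, "age_yr": 0.03, "male": 0.4}

def ulcer_risk(hb: float, crp: float, alb: float, age: float, male: int) -> float:
    """Predicted probability of pressure-ulcer development, in [0, 1]."""
    z = (COEF["intercept"] + COEF["hb_g_dl"] * hb + COEF["crp_mg_dl"] * crp
         + COEF["alb_g_dl"] * alb + COEF["age_yr"] * age + COEF["male"] * male)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link
```

    In practice a cut-off on this probability would be chosen to trade off the reported sensitivity (0.73) against specificity (0.70).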

  19. Spatial structure and scaling of macropores in hydrological process at small catchment scale

    NASA Astrophysics Data System (ADS)

    Silasari, Rasmiaditya; Broer, Martine; Blöschl, Günter

    2013-04-01

    During rainfall events, overland flow can form under conditions of saturation excess and/or infiltration excess. These conditions are affected by the soil moisture state, which represents the soil water content in micropores and macropores. Macropores act as pathways for preferential flow and have been widely studied locally. However, very little is known about the spatial structure, conductivity and other flow characteristics of macropores at the catchment scale. This study will analyze these characteristics to better understand their importance in hydrological processes. The research will be conducted in the Petzenkirchen Hydrological Open Air Laboratory (HOAL), a 64 ha catchment located 100 km west of Vienna. The land use is divided between arable land (87%), pasture (5%), forest (6%) and paved surfaces (2%). Video cameras will be installed on an agricultural field to monitor the overland flow pattern during rainfall events. A wireless soil moisture network is also installed within the monitored area. These field data will be combined to analyze the soil moisture state and the corresponding surface runoff occurrence. The variability of the macropore spatial structure in the observed area (field scale) will then be assessed based on topography and soil data. Soil characterization will be supported with laboratory experiments on soil matrix flow to obtain proper definitions of the spatial structure of macropores and its variability. A coupled, physically based, distributed model of surface and subsurface flow will be used to simulate the variability of the macropore spatial structure and its effect on flow behaviour. This model will be validated by simulating the observed rainfall events. Upscaling from the field scale to the catchment scale will be done to understand the effect of macropore variability at larger scales by applying spatial stochastic methods. The first phase of this study is the installation and monitoring configuration of video

  20. A Large Scale, High Resolution Agent-Based Insurgency Model

    DTIC Science & Technology

    2013-09-30

    …Compute Unified Device Architecture (CUDA) is NVIDIA Corporation's software development model for General Purpose Programming on Graphics Processing Units (GPGPU)… Argonne National Laboratory, Argonne, IL, October 2005. NVIDIA Corporation. NVIDIA CUDA Programming Guide 2.0 [Online].