Sample records for scale model experiments

  1. Scale-up of ecological experiments: Density variation in the mobile bivalve Macomona liliana

    USGS Publications Warehouse

    Schneider, David C.; Walters, R.; Thrush, S.; Dayton, P.

    1997-01-01

    At present the problem of scaling up from controlled experiments (necessarily at a small spatial scale) to questions of regional or global importance is perhaps the most pressing issue in ecology. Most of the proposed techniques recommend iterative cycling between theory and experiment. We present a graphical technique that facilitates this cycling by allowing the scope of experiments, surveys, and natural history observations to be compared to the scope of models and theory. We apply the scope analysis to the problem of understanding the population dynamics of a bivalve exposed to environmental stress at the scale of a harbour. Previous lab and field experiments were found not to be 1:1 scale models of harbour-wide processes. Scope analysis allowed small scale experiments to be linked to larger scale surveys and to a spatially explicit model of population dynamics.

  2. Grade 12 Students' Conceptual Understanding and Mental Models of Galvanic Cells before and after Learning by Using Small-Scale Experiments in Conjunction with a Model Kit

    ERIC Educational Resources Information Center

    Supasorn, Saksri

    2015-01-01

    This study aimed to develop the small-scale experiments involving electrochemistry and the galvanic cell model kit featuring the sub-microscopic level. The small-scale experiments in conjunction with the model kit were implemented based on the 5E inquiry learning approach to enhance students' conceptual understanding of electrochemistry. The…

  3. Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ

    Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow-focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others were based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The PN models used the learning experiments to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used these experiments to appropriately discretize the grid representations. The continuum model used published nonlinear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.

  4. Pore-scale and continuum simulations of solute transport micromodel benchmark experiments

    DOE PAGES

    Oostrom, M.; Mehmani, Y.; Romero-Gomez, P.; ...

    2014-06-18

    Four sets of nonreactive solute transport experiments were conducted with micromodels. Each set varied a single parameter: flow velocity, grain diameter, pore-aspect ratio, or flow-focusing heterogeneity. The data sets were offered to pore-scale modeling groups to test their numerical simulators. Each set consisted of two learning experiments, for which our results were made available, and one challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the transverse dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others were based on a lattice Boltzmann (LB) approach, and one used a computational fluid dynamics (CFD) technique. The PN models used the learning experiments to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used the learning experiments to appropriately discretize the spatial grid representations. For the continuum modeling, the required dispersivity input values were estimated from published nonlinear relations between transverse dispersion coefficients and Peclet numbers. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in reduced dispersion. The PN models completed the simulations in a few minutes, whereas the direct models, which account for the micromodel geometry and underlying flow and transport physics, needed up to several days on supercomputers to resolve the more complex problems.
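    The nonlinear Peclet-number dependence reported in this record is often summarized with a power-law correlation of the form D_T/D_m = alpha * Pe^beta. The sketch below illustrates that functional form only; `alpha`, `beta`, and the diffusivity are hypothetical placeholders, not values fitted to these micromodel experiments.

```python
# Illustrative power-law correlation of the kind referred to above,
#   D_T / D_m = alpha * Pe**beta,
# relating the transverse dispersion coefficient D_T to the Peclet
# number Pe. alpha and beta are hypothetical placeholders, NOT
# coefficients fitted to these micromodel experiments.

D_MOLECULAR = 2.0e-9  # molecular diffusivity of the solute [m^2/s]

def transverse_dispersion(pe, alpha=0.1, beta=0.9):
    """Transverse dispersion coefficient [m^2/s] for Peclet number pe."""
    return D_MOLECULAR * alpha * pe ** beta

# With beta < 1 the dependence is nonlinear but sub-linear in Pe:
# doubling the flow velocity (hence Pe) raises D_T by 2**0.9.
ratio = transverse_dispersion(200.0) / transverse_dispersion(100.0)
```

    A correlation of this shape is what the continuum model in the record would use to derive its dispersivity inputs from Pe.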

  5. An experimental method to verify soil conservation by check dams on the Loess Plateau, China.

    PubMed

    Xu, X Z; Zhang, H W; Wang, G Q; Chen, S C; Dang, W Q

    2009-12-01

    A successful experiment with a physical model requires that the necessary similarity conditions be met. This study presents an experimental method based on a semi-scale physical model, used to monitor and verify soil conservation by check dams in a small watershed on the Loess Plateau of China. During the experiments, the model-prototype ratio of geomorphic variables was kept constant under each rainfall event, so the experimental data can verify soil erosion processes in the field and predict soil loss in a model watershed with check dams. The study also establishes four similarity criteria: watershed geometry, grain size and bare land, Froude number (Fr) for rainfall events, and soil erosion in downscaled models. The efficacy of the proposed method was confirmed using these criteria in two different downscaled model experiments. The B-Model, a large-scale model, simulates the watershed prototype; the two small-scale models, D(a) and D(b), are the same size but have different erosion rates, and simulate the hydraulic processes in the B-Model. The experimental results show that when soil loss in the small-scale models was converted by multiplying by the soil-loss scale number, it was very close to that of the B-Model. Thus, with a semi-scale physical model, experiments can verify and predict soil loss in a small watershed with a check-dam system on the Loess Plateau, China.
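    The two conversions described in this record can be sketched as follows; the scale number and length ratio are hypothetical illustration values, not the paper's calibrated ratios.

```python
import math

# Sketch of the two conversions described above, with hypothetical
# numbers: (1) model soil loss is scaled to the prototype by a constant
# soil-loss scale number, and (2) Froude-number similarity sets the
# rainfall-event time ratio to the square root of the length ratio.

def prototype_soil_loss(model_loss_kg, loss_scale_number):
    """Prototype soil loss predicted from a downscaled-model run."""
    return model_loss_kg * loss_scale_number

def froude_time_ratio(length_ratio):
    """Prototype/model time ratio under Froude similarity."""
    return math.sqrt(length_ratio)

loss = prototype_soil_loss(1.2, 500.0)  # 1.2 kg measured in the model
t_ratio = froude_time_ratio(100.0)      # a 1:100 geometric scale
```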

  6. Heterogeneity and scaling land-atmospheric water and energy fluxes in climate systems

    NASA Technical Reports Server (NTRS)

    Wood, Eric F.

    1993-01-01

    The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many climatology research experiments. The acquisition of high-resolution land surface data through remote sensing and intensive land-climatology field experiments (such as HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere processes and macroscale models. One essential research question is how to account for small-scale heterogeneities and whether 'effective' parameters can be used in macroscale models. To address this question of scaling, three modeling experiments were performed and are reviewed in the paper. The first is concerned with the aggregation of parameters and inputs for a terrestrial water and energy balance model. The second analyzed the scaling behavior of hydrologic responses during and between rain events. The third compared the hydrologic responses of distributed models with those of a lumped model that uses spatially constant inputs and parameters. The results show that the patterns of small-scale variation can be represented statistically if the scale is larger than a representative elementary area, which appears to be about 2-3 times the correlation length of the process; for natural catchments this is about 1-2 sq km. The results concerning distributed versus lumped representations are more complicated: when the processes are nonlinear, lumping introduces biases; otherwise a one-dimensional model based on 'equivalent' parameters provides quite good results. Further research is needed to fully understand these conditions.

  7. Pollutant dispersion in a large indoor space: Part 1 -- Scaled experiments using a water-filled model with occupants and furniture.

    PubMed

    Thatcher, T L; Wilson, D J; Wood, E E; Craig, M J; Sextro, R G

    2004-08-01

    Scale modeling is a useful tool for analyzing complex indoor spaces. Scale model experiments can reduce experimental costs, improve control of flow and temperature conditions, and provide a practical method for pretesting full-scale system modifications. However, changes in physical scale and working fluid (air or water) can complicate interpretation of the equivalent effects in the full-scale structure. This paper presents a detailed scaling analysis of a water tank experiment designed to model a large indoor space, along with experimental results obtained with this model to assess the influence of furniture and people on the pollutant concentration field at breathing height. Theoretical calculations are derived for predicting the effects of molecular diffusion, small-scale eddies, turbulent kinetic energy, and turbulent mass diffusivity in a scale model, even without Reynolds number matching. Pollutant dispersion experiments were performed in a water-filled 30:1 scale model of a large room, using uranine dye injected continuously from a small point source. Pollutant concentrations were measured in a plane, using laser-induced fluorescence techniques, for three interior configurations: unobstructed, table-like obstructions, and table-like plus figure-like obstructions. Concentrations within the measurement plane varied by more than an order of magnitude, even after the concentration field was fully developed. Objects in the model interior had a significant effect on both the concentration field and the fluctuation intensity in the measurement plane. PRACTICAL IMPLICATION: This scale model study demonstrates both the utility of scale models for investigating dispersion in indoor environments and the significant impact of turbulence created by furnishings and people on pollutant transport from floor-level sources. In a room with no furniture or occupants, the average concentration can vary by about a factor of 3 across the room; adding furniture and occupants can increase this spatial variation by another factor of 3.

  8. Thermal Destruction Of CB Contaminants Bound On Building ...

    EPA Pesticide Factsheets

    Symposium Paper An experimental and theoretical program has been initiated by the U.S. EPA to investigate issues of chemical/biological agent destruction in incineration systems when the agent in question is bound on common porous building interior materials. This program includes 3-dimensional computational fluid dynamics modeling with matrix-bound agent destruction kinetics, bench-scale experiments to determine agent destruction kinetics while bound on various matrices, and pilot-scale experiments to scale up the bench-scale results to a more practical scale. Finally, model predictions of agent destruction and combustion conditions are made for two full-scale incineration systems typical of modern combustor design.

  9. A microstructural model of motion of macro-twin interfaces in Ni-Mn-Ga 10 M martensite

    NASA Astrophysics Data System (ADS)

    Seiner, Hanuš; Straka, Ladislav; Heczko, Oleg

    2014-03-01

    We present a continuum-based model of microstructures forming at the macro-twin interfaces in thermoelastic martensites and apply this model to highly mobile interfaces in 10 M modulated Ni-Mn-Ga martensite. The model is applied at three distinct spatial scales observed in experiment: meso-scale (modulation twinning), micro-scale (compound a-b lamination), and nano-scale (nanotwinning in the concept of adaptive martensite). We show that the two mobile interfaces (Type I and Type II macro-twins) have different micromorphologies at all considered spatial scales, which can directly explain the different twinning stresses observed in experiments. The results of the model are discussed with respect to various experimental observations at all three spatial scales.

  10. Detonation failure characterization of non-ideal explosives

    NASA Astrophysics Data System (ADS)

    Janesheski, Robert S.; Groven, Lori J.; Son, Steven

    2012-03-01

    Non-ideal explosives are currently poorly characterized, which limits modeling of them. Current characterization requires large-scale testing to obtain steady detonation wave data, owing to their relatively thick reaction zones. A microwave interferometer applied to small-scale confined transient experiments is being implemented to allow time-resolved characterization of a failing detonation. The interferometer measures the position of a failing detonation wave in a tube that is initiated with a booster charge. Experiments have been performed with ammonium nitrate and various fuel compositions (diesel fuel and mineral oil). It was observed that the failure dynamics are influenced by factors such as chemical composition and confiner thickness. Future work is planned to calibrate models to these small-scale experiments and eventually validate the models against available large-scale experiments. The experiment is repeatable, shows dependence on reactive properties, and can be performed with little required material.

  11. 10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS
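    The scales quoted in this record describe distorted models: the horizontal and vertical scales differ, and their ratio gives the vertical exaggeration. This small sketch computes it directly from the denominators stated in the record (1 ft of model = N ft of prototype).

```python
# Distorted hydraulic models use different horizontal and vertical
# scales; the distortion (vertical exaggeration) is the ratio of the
# horizontal scale denominator to the vertical one, using the scales
# quoted in the HAER record above.

def vertical_exaggeration(horiz_denominator, vert_denominator):
    """Distortion factor of a distorted movable-bed model."""
    return horiz_denominator / vert_denominator

dogtooth_bend = vertical_exaggeration(400, 100)  # 1'=400' H, 1'=100' V
greenville = vertical_exaggeration(360, 100)     # 1'=360' H, 1'=100' V
```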

  12. Study on model design and dynamic similitude relations of vibro-acoustic experiment for elastic cavity

    NASA Astrophysics Data System (ADS)

    Shi, Ao; Lu, Bo; Yang, Dangguo; Wang, Xiansheng; Wu, Junqiang; Zhou, Fangqi

    2018-05-01

    Coupling between aero-acoustic noise and structural vibration under high-speed open-cavity flow-induced oscillation can produce severe random vibration of the structure and even cause fatigue failure, threatening flight safety. Vibro-acoustic experiments on scaled-down models are an effective means of clarifying the effects of high-intensity cavity noise on structural vibration. Therefore, for vibro-acoustic experiments on cavities in wind tunnels, and taking a typical elastic cavity as the research object, dimensional analysis and the finite element method were adopted to establish similitude relations for the structural inherent characteristics and dynamics of a distorted model; the proposed similitude relations were then verified by experiments and numerical simulation. The research shows that, from analysis of the scaled-down model, the established similitude relations can accurately reproduce the structural dynamic characteristics of the actual model, providing theoretical guidance for the structural design and vibro-acoustic experiments of scaled-down elastic cavity models.

  13. Microbiological-enhanced mixing across scales during in-situ bioreduction of metals and radionuclides at Department of Energy Sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valocchi, Albert; Werth, Charles; Liu, Wen-Tso

    Bioreduction is being actively investigated as an effective strategy for subsurface remediation and long-term management of DOE sites contaminated by metals and radionuclides (i.e., U(VI)). These strategies require manipulation of the subsurface, usually through injection of chemicals (e.g., electron donor) which mix at varying scales with the contaminant to stimulate metal-reducing bacteria. There is evidence from DOE field experiments suggesting that mixing limitations of substrates at all scales may affect biological growth and activity for U(VI) reduction. Although current conceptual models hold that biomass growth and reduction activity are limited by physical mixing processes, a growing body of literature suggests that reaction could be enhanced by cell-to-cell interaction occurring over length scales extending tens to thousands of microns. Our project investigated two potential mechanisms of enhanced electron transfer. The first is the formation of single- or multiple-species biofilms that transport electrons via direct electrical connections such as conductive pili (i.e., 'nanowires') through biofilms to where the electron acceptor is available. The second is diffusion of electron carriers from syntrophic bacteria to dissimilatory metal-reducing bacteria (DMRB). The specific objectives of this work are (i) to quantify the extent and rate at which electrons are transported between microorganisms in physical mixing zones between an electron donor and an electron acceptor (e.g., U(VI)), (ii) to quantify the extent to which biomass growth and reaction are enhanced by interspecies electron transport, and (iii) to integrate mixing across scales (e.g., the microscopic scale of electron transfer and the macroscopic scale of diffusion) in an integrated numerical model to quantify the effect of these mechanisms on overall U(VI) reduction rates.
    We tested these hypotheses with five tasks that integrate microbiological experiments, unique microfluidics experiments, flow cell experiments, and multi-scale numerical models. Continuous fed-batch reactors were used to derive kinetic parameters for DMRB and to develop an enrichment culture for elucidating syntrophic relationships in a complex microbial community. Pore- and continuum-scale experiments using microfluidic and bench-top flow cells were used to evaluate the impact of cell-to-cell and microbial interactions on reaction enhancement in mixing-limited bioactive zones, and the mechanisms of this interaction. Some of the microfluidic experiments were used to develop and test models that consider direct cell-to-cell interactions during metal reduction. Pore-scale models were incorporated into a multi-scale hybrid modeling framework that combines pore-scale modeling at the reaction interface with continuum-scale modeling. New computational frameworks for combining continuum- and pore-scale models were also developed.

  14. Strategy Plan A Methodology to Predict the Uniformity of Double-Shell Tank Waste Slurries Based on Mixing Pump Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.A. Bamberger; L.M. Liljegren; P.S. Lowery

    This document presents an analysis of the mechanisms influencing mixing within double-shell slurry tanks. A research program to characterize mixing of slurries within tanks has been proposed. The program combines experimental and computational approaches to produce correlations describing the tank slurry concentration profile (and therefore uniformity) as a function of mixer pump operating conditions. The TEMPEST computer code was used to simulate both a full-scale (prototype) and a scaled (model) double-shell waste tank to predict flow patterns resulting from a stationary jet centered in the tank. The simulation results were used to evaluate flow patterns in the tank and to determine whether flow patterns are similar between the full-scale prototype and an existing 1/12-scale model tank. The flow patterns were sufficiently similar to recommend conducting scoping experiments at 1/12 scale. TEMPEST-modeled velocity profiles of the near-floor jet were also compared to experimental measurements, with good agreement. Reported values of the physical properties of double-shell tank slurries were analyzed to evaluate the range of properties appropriate for scaled experiments. One-twelfth-scale scoping experiments are recommended to confirm the prioritization of the dimensionless groups (gravitational settling, Froude, and Reynolds numbers) that affect slurry suspension in the tank. Two of the proposed 1/12-scale test conditions were modeled using the TEMPEST computer code to observe the anticipated flow fields; this information will guide the selection of sampling probe locations. Additional computer modeling is being conducted to model a particulate-laden, rotating jet centered in the tank. The results of this modeling effort will be compared to the scaled experimental data to quantify the agreement between the code and the 1/12-scale experiment.
    The scoping experiment results will guide selection of the parameters to be varied in the follow-on experiments. Data from the follow-on experiments will be used to develop correlations describing the slurry concentration profile as a function of mixing pump operating conditions, and to further evaluate the computer model. If the agreement between the experimental data and the code predictions is good, the computer code will be recommended for predicting slurry uniformity in the tanks under various operating conditions. If the agreement is not good, the experimental data correlations will be used to predict slurry uniformity in the tanks within the range of correlation applicability.

  15. Modelling solute dispersion in periodic heterogeneous porous media: Model benchmarking against intermediate scale experiments

    NASA Astrophysics Data System (ADS)

    Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham

    2018-06-01

    This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high-quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates, allowing the statistical variability of the experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, and multiple-region advection-dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit ballistic behaviour at small times while tending to Fickian behaviour at large time scales. Model performance is assessed using a novel objective function that accounts for the statistical variability of the experimental data set while putting equal emphasis on both small and large time-scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
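    The ballistic-versus-Fickian distinction drawn in this record can be illustrated with the standard plume-variance growth laws: variance grows as t^2 in the ballistic regime and linearly in t in the Fickian regime. The velocity and dispersion coefficient below are arbitrary illustrative values, not parameters from the paper.

```python
# Illustration of the two limiting behaviours noted above: ballistic
# spreading (plume variance ~ (v*t)**2) at small times versus Fickian
# spreading (variance ~ 2*D*t) at large times. V and D are arbitrary
# illustrative values, not fitted ones.

V = 1.0e-3   # characteristic velocity [m/s]
D = 1.0e-6   # dispersion coefficient [m^2/s]

def ballistic_variance(t):
    return (V * t) ** 2

def fickian_variance(t):
    return 2.0 * D * t

# The two regimes cross where (V*t)**2 == 2*D*t, i.e. t = 2*D/V**2.
t_cross = 2.0 * D / V ** 2
```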

  16. Design, construction, and evaluation of a 1:8 scale model binaural manikin.

    PubMed

    Robinson, Philip; Xiang, Ning

    2013-03-01

    Many experiments in architectural acoustics require presenting listeners with simulations of different rooms to compare. Acoustic scale modeling is a feasible means to create accurate simulations of many rooms at reasonable cost. A critical component in a scale model room simulation is a receiver that properly emulates a human receiver. For this purpose, a scale model artificial head has been constructed and tested. This paper presents the design and construction methods used, proper equalization procedures, and measurements of its response. A headphone listening experiment examining sound externalization with various reflection conditions is presented that demonstrates its use for psycho-acoustic testing.
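    For an acoustic scale model tested in the same medium (air), the standard similitude relations are that wavelengths shrink with geometry, so test frequencies scale up by the model factor and measured times compress by its inverse. A minimal sketch for the 1:8 manikin above; the example frequency and time values are illustrative, not from the paper.

```python
# Frequency/time scaling for an acoustic scale model in the same
# propagation medium: with geometry reduced by a factor n (here 8 for
# a 1:8 model), wavelengths shrink by n, so test frequencies rise by n
# and measured impulse responses compress in time by 1/n.

SCALE = 8  # 1:8 scale model

def model_frequency_hz(full_scale_hz, n=SCALE):
    """Frequency at which a full-scale band is tested in the model."""
    return full_scale_hz * n

def full_scale_time_s(model_time_s, n=SCALE):
    """Full-scale time corresponding to a time measured in the model."""
    return model_time_s * n

f_test = model_frequency_hz(1000.0)  # a 1 kHz band is tested at 8 kHz
t_full = full_scale_time_s(0.25)     # 0.25 s in the model -> 2 s full scale
```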

  17. Evaluation of a micro-scale wind model's performance over realistic building clusters using wind tunnel experiments

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi

    2016-08-01

    The simulation performance over complex building clusters of a wind simulation model (Wind Information Field Fast Analysis model, WIFFA) within a micro-scale air pollutant dispersion model system (Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel data sets, including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel experiment data and the NJU-FZ experiment data (Nanjing University-Fang Zhuang neighborhood wind tunnel experiment data). The results show that the wind model reproduces well the vortices triggered by urban buildings, and the flow patterns in urban street canyons and building clusters are also well represented. Owing to the complex shapes of buildings and their distributions, simulation discrepancies from the measurements are usually caused by the simplification of the building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed. The model has a high computational efficiency compared to traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields for a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min on a personal computer.

  18. Scale effects in the response and failure of fiber reinforced composite laminates loaded in tension and in flexure

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Kellas, Sotiris; Morton, John

    1992-01-01

    The feasibility of using scale model testing for predicting the full-scale behavior of flat composite coupons loaded in tension and beam-columns loaded in flexure is examined. Classical laws of similitude are applied to fabricate and test replica model specimens to identify scaling effects in the load response, strength, and mode of failure. Experiments were performed on graphite-epoxy composite specimens having different laminate stacking sequences and a range of scaled sizes. From the experiments it was deduced that the elastic response of scaled composite specimens was independent of size. However, a significant scale effect in strength was observed. In addition, a transition in failure mode was observed among scaled specimens of certain laminate stacking sequences. A Weibull statistical model and a fracture mechanics based model were applied to predict the strength scale effect since standard failure criteria cannot account for the influence of absolute specimen size on strength.
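    The Weibull statistical model invoked in this record predicts a strength ratio between scaled specimens from their volume ratio: sigma2/sigma1 = (V1/V2)^(1/m). The sketch below uses a hypothetical Weibull modulus m = 20, not a value fitted to these laminates.

```python
# Sketch of the Weibull statistical size effect referred to above: for
# specimens of volumes v_1 and v_2 with Weibull modulus m, mean
# strengths scale as sigma_2 / sigma_1 = (v_1 / v_2) ** (1 / m).
# m = 20 is an illustrative value, not one fitted in the study.

def weibull_scaled_strength(sigma_1, v_1, v_2, m=20.0):
    """Predicted strength of a specimen of volume v_2 given one of v_1."""
    return sigma_1 * (v_1 / v_2) ** (1.0 / m)

# Doubling the linear scale multiplies volume by 8; for m = 20 the
# larger specimen is predicted to be roughly 10% weaker.
sigma_full = weibull_scaled_strength(500.0, 1.0, 8.0)
```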

  19. MODELING HEXAVALENT CHROMIUM REDUCTION IN GROUND- WATER IN FIELD-SCALE TRANSPORT AND LABORATORY BATCH EXPERIMENTS

    EPA Science Inventory

    A plausible and consistent model is developed to obtain a quantitative description of the gradual disappearance of hexavalent chromium (Cr(VI)) from groundwater in a small-scale field tracer test and in batch kinetic experiments using aquifer sediments under similar chemical cond...

  20. An Idealized Test of the Response of the Community Atmosphere Model to Near-Grid-Scale Forcing Across Hydrostatic Resolutions

    NASA Astrophysics Data System (ADS)

    Herrington, A. R.; Reed, K. A.

    2018-02-01

    A set of idealized experiments is developed using the Community Atmosphere Model (CAM) to understand the vertical velocity response to the reduction in forcing scale that is known to occur when the horizontal resolution of the model is increased. The test consists of a set of rising-bubble experiments in which the horizontal radius of the bubble and the model grid spacing are simultaneously reduced. The test is performed with moisture, incorporating moist physics routines of varying complexity, although convection schemes are not considered. Results confirm that the vertical velocity in CAM is, to first order, proportional to the inverse of the horizontal forcing scale, which is consistent with a scale analysis of the dry equations of motion. In contrast, experiments in which the coupling time step between the moist physics routines and the dynamical core (i.e., the "physics" time step) is relaxed back to more conventional values exhibit severely damped vertical motion at high resolution, degrading the scaling. A set of aqua-planet simulations using different physics time steps is found to be consistent with the results of the idealized experiments.

  1. Coarse-Grained Models for Protein-Cell Membrane Interactions

    PubMed Central

    Bradley, Ryan; Radhakrishnan, Ravi

    2015-01-01

    The physiological properties of biological soft matter are the product of collective interactions, which span many time and length scales. Recent computational modeling efforts have helped illuminate experiments that characterize the ways in which proteins modulate membrane physics. Linking these models across time and length scales in a multiscale model explains how atomistic information propagates to larger scales. This paper reviews continuum modeling and coarse-grained molecular dynamics methods, which connect atomistic simulations and single-molecule experiments with the observed microscopic or mesoscale properties of soft-matter systems essential to our understanding of cells, particularly those involved in sculpting and remodeling cell membranes. PMID:26613047

  2. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    NASA Astrophysics Data System (ADS)

    Barrios, M. I.

    2013-12-01

    The hydrological science requires the emergence of a consistent theoretical corpus driving the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make difficult the development of multiscale conceptualizations. Therefore, scaling understanding is a key issue to advance this science. This work is focused on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at point scale to a simplified physically meaningful modeling approach at grid-cell scale. Numerical simulations have the advantage of deal with a wide range of boundary and initial conditions against field experimentation. The aim of the work was to show the utility of numerical simulations to discover relationships between the hydrological parameters at both scales, and to use this synthetic experience as a media to teach the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at point scale; and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at point scale. The linkages between point scale parameters and the grid-cell scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results have shown numerical stability issues for particular conditions and have revealed the complex nature of the non-linear relationships between models' parameters at both scales and indicate that the parameterization of point scale processes at the coarser scale is governed by the amplification of non-linear effects. 
The findings of these simulations have been used by the students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved the students' ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
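    The Green-Ampt point-scale model mentioned above can be made concrete with a short sketch. This is an illustrative implementation only (the function names and the loam-like parameter values below are our own, not from the record): cumulative infiltration F(t) under ponded conditions satisfies the implicit relation K·t = F − ψΔθ·ln(1 + F/(ψΔθ)), solved here by fixed-point iteration.

```python
import math

def green_ampt_F(K, psi, dtheta, t, tol=1e-10):
    # Cumulative infiltration F(t) [cm] under ponded conditions, from the
    # implicit Green-Ampt relation K*t = F - psi*dtheta*ln(1 + F/(psi*dtheta)).
    # The fixed-point map below is a contraction, so iteration converges.
    s = psi * dtheta                  # suction-deficit product [cm]
    F = max(K * t, 1e-9)              # initial guess
    for _ in range(500):
        F_new = K * t + s * math.log(1.0 + F / s)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

def infiltration_rate(K, psi, dtheta, F):
    # Infiltration capacity f = K * (1 + psi*dtheta/F); decays toward K
    # as cumulative infiltration F grows.
    return K * (1.0 + psi * dtheta / F)
```

    For loam-like values (K = 0.65 cm/h, ψ = 16.7 cm, Δθ = 0.34) the point-scale rate starts high and decays toward K; aggregating such non-linear curves over a heterogeneous field is exactly where the amplification effects discussed above arise.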

  3. Global Energy and Water Cycle Experiment (GEWEX) and the Continental-scale International Project (GCIP)

    NASA Technical Reports Server (NTRS)

    Vane, Deborah

    1993-01-01

    A discussion of the objectives of the Global Energy and Water Cycle Experiment (GEWEX) and the Continental-scale International Project (GCIP) is presented in vugraph form. The objectives of GEWEX are as follows: determine the hydrological cycle by global measurements; model the global hydrological cycle; improve observations and data assimilation; and predict response to environmental change. The objectives of GCIP are as follows: determine the time/space variability of the hydrological cycle over a continental-scale region; develop macro-scale hydrologic models that are coupled to atmospheric models; develop information retrieval schemes; and support regional climate change impact assessment.

  4. Virtual Patterson Experiment - A Way to Access the Rheology of Aggregates and Melanges

    NASA Astrophysics Data System (ADS)

    Delannoy, Thomas; Burov, Evgueni; Wolf, Sylvie

    2014-05-01

    Understanding the mechanisms of lithospheric deformation requires bridging the gap between human-scale laboratory experiments and the huge geological objects they represent. These experiments are limited in spatial and temporal scale as well as in choice of materials (e.g., mono-phase minerals, exaggerated temperatures and strain rates), which means that the resulting constitutive laws may not fully represent real rocks at geological spatial and temporal scales. We use thermo-mechanical numerical modelling as a tool to link experiments and nature, and hence to better understand the rheology of the lithosphere, by studying the behavior of polymineralic aggregates and their impact on the localization of deformation. We have adapted the large strain visco-elasto-plastic Flamar code to allow it to operate at all spatial and temporal scales, from sub-grain to geodynamic scale, and from seismic time scales to millions of years. Our first goal was to reproduce real rock mechanics experiments on deformation of mono- and polymineralic aggregates in Patterson's load machine in order to deepen our understanding of the rheology of polymineralic rocks. In particular, we studied in detail the deformation of a 15x15 mm mica-quartz sample at 750 °C and 300 MPa. This mixture includes a molten phase and a solid phase in which shear bands develop as a result of interactions between ductile and brittle deformation and stress concentration at the boundaries between weak and strong phases. We used digitized x-ray scans of real samples as the initial configuration for the numerical models so that the model-predicted deformation and stress-strain behavior could be matched to those observed in the laboratory experiment. Analyzing the numerical experiments that best match the press experiments, together with complementary models in which initial-state parameters (strength contrast between the phases, phase proportions, microstructure, etc.) are varied, 
yields new insight into the mechanisms governing the localization of deformation across the aggregates. We next used stress-strain curves derived from the numerical experiments to study in detail the evolution of the rheological behavior of each mineral phase, as well as that of the mixtures, in order to formulate constitutive relations for mélanges and polymineralic aggregates. The next step of our approach would be to link the constitutive laws obtained at small scale (laws that govern the rheology of a polymineralic aggregate, the effect of the presence of a molten phase, etc.) to the large-scale behavior of the Earth by implementing them in lithosphere-scale models.

  5. U.S. perspective on technology demonstration experiments for adaptive structures

    NASA Technical Reports Server (NTRS)

    Aswani, Mohan; Wada, Ben K.; Garba, John A.

    1991-01-01

    Evaluation of design concepts for adaptive structures is being performed in support of several focused research programs. These include programs such as Precision Segmented Reflector (PSR), Control Structure Interaction (CSI), and the Advanced Space Structures Technology Research Experiment (ASTREX). Although not specifically designed for adaptive structure technology validation, relevant experiments can be performed using the Passive and Active Control of Space Structures (PACOSS) testbed, the Space Integrated Controls Experiment (SPICE), the CSI Evolutionary Model (CEM), and the Dynamic Scale Model Test (DSMT) Hybrid Scale. In addition to the ground test experiments, several space flight experiments have been planned, including a reduced gravity experiment aboard the KC-135 aircraft, shuttle middeck experiments, and the Inexpensive Flight Experiment (INFLEX).

  6. Asymptotic Expansion Homogenization for Multiscale Nuclear Fuel Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hales, J. D.; Tonks, M. R.; Chockalingam, K.

    2015-03-01

    Engineering scale nuclear fuel performance simulations can benefit by utilizing high-fidelity models running at a lower length scale. Lower length-scale models provide a detailed view of the material behavior that is used to determine the average material response at the macroscale. These lower length-scale calculations may provide insight into material behavior where experimental data is sparse or nonexistent. This multiscale approach is especially useful in the nuclear field, since irradiation experiments are difficult and expensive to conduct. The lower length-scale models complement the experiments by influencing the types of experiments required and by reducing the total number of experiments needed. This multiscale modeling approach is a central motivation in the development of the BISON-MARMOT fuel performance codes at Idaho National Laboratory. These codes seek to provide more accurate and predictive solutions for nuclear fuel behavior. One critical aspect of multiscale modeling is the ability to extract the relevant information from the lower length-scale simulations. One approach, the asymptotic expansion homogenization (AEH) technique, has proven to be an effective method for determining homogenized material parameters. The AEH technique prescribes a system of equations to solve at the microscale that are used to compute homogenized material constants for use at the engineering scale. In this work, we employ AEH to explore the effect of evolving microstructural thermal conductivity and elastic constants on nuclear fuel performance. We show that the AEH approach fits cleanly into the BISON and MARMOT codes and provides a natural, multidimensional homogenization capability.
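    AEH itself solves cell problems at the microscale, but the basic idea of replacing a heterogeneous microstructure with homogenized constants can be illustrated with the much simpler Voigt and Reuss averages, which rigorously bracket any homogenized conductivity. This sketch is our own illustration, not part of the BISON/MARMOT work, and the two-phase conductivity values are assumed for demonstration only.

```python
def voigt_reuss_bounds(k_phases, vol_fracs):
    # Arithmetic (Voigt) and harmonic (Reuss) averages of phase conductivities:
    # upper and lower bounds that any homogenized value (e.g. from AEH cell
    # problems) must fall between.
    assert abs(sum(vol_fracs) - 1.0) < 1e-12
    upper = sum(f * k for f, k in zip(vol_fracs, k_phases))
    lower = 1.0 / sum(f / k for f, k in zip(vol_fracs, k_phases))
    return lower, upper

# Illustrative two-phase microstructure: a conductive matrix (4.5 W/m-K)
# with 5% low-conductivity porosity (0.05 W/m-K) -- assumed values.
lo, hi = voigt_reuss_bounds([4.5, 0.05], [0.95, 0.05])
```

    The wide gap between the bounds for contrasting phases shows why a microstructure-resolving method such as AEH is needed rather than simple averaging.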

  7. Fully coupled approach to modeling shallow water flow, sediment transport, and bed evolution in rivers

    NASA Astrophysics Data System (ADS)

    Li, Shuangcai; Duffy, Christopher J.

    2011-03-01

    Our ability to predict complex environmental fluid flow and transport hinges on accurate and efficient simulations of multiple physical phenomena operating simultaneously over a wide range of spatial and temporal scales, including overbank floods, coastal storm surge events, drying and wetting bed conditions, and simultaneous bed form evolution. This research implements a fully coupled strategy for solving shallow water hydrodynamics, sediment transport, and morphological bed evolution in rivers and floodplains (PIHM_Hydro) and applies the model to field and laboratory experiments that cover a wide range of spatial and temporal scales. The model uses a standard upwind finite volume method and Roe's approximate Riemann solver for unstructured grids. A multidimensional linear reconstruction and slope limiter are implemented, achieving second-order spatial accuracy. Model efficiency and stability are treated using an explicit-implicit method for temporal discretization with operator splitting. Laboratory- and field-scale experiments were compiled where coupled processes across a range of scales were observed and where higher-order spatial and temporal accuracy might be needed for accurate and efficient solutions. These experiments demonstrate the ability of the fully coupled strategy to capture the dynamics of field-scale flood waves and small-scale drying-wetting processes.
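    The limited linear reconstruction mentioned above can be sketched in one dimension. This is a generic minmod-limiter illustration, not PIHM_Hydro's actual implementation, and the function names are hypothetical.

```python
def minmod(a, b):
    # Minmod limiter: the smaller-magnitude slope when signs agree, zero
    # otherwise, which suppresses spurious oscillations at steep fronts.
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def reconstruct_faces(u, dx):
    # Limited linear reconstruction of face values from 1-D cell averages u:
    # second-order accurate in smooth regions, non-oscillatory at shocks.
    n = len(u)
    left, right = u[:], u[:]
    for i in range(1, n - 1):
        slope = minmod((u[i] - u[i - 1]) / dx, (u[i + 1] - u[i]) / dx)
        left[i] = u[i] - 0.5 * dx * slope    # value at the cell's left face
        right[i] = u[i] + 0.5 * dx * slope   # value at the cell's right face
    return left, right
```

    On a smooth ramp the limiter returns the centered slope; at a kink (where the one-sided slopes disagree in sign) it drops to first order, which is the behavior needed near drying-wetting fronts.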

  8. Scaling water and energy fluxes in climate systems - Three land-atmospheric modeling experiments

    NASA Technical Reports Server (NTRS)

    Wood, Eric F.; Lakshmi, Venkataraman

    1993-01-01

    Three numerical experiments that investigate the scaling of land-surface processes - either of the inputs or parameters - are reported, and the aggregated processes are compared to the spatially variable case. The first is the aggregation of the hydrologic response in a catchment due to rainfall during a storm event and due to evaporative demands during interstorm periods. The second is the spatial and temporal aggregation of latent heat fluxes, as calculated from SiB. The third is the aggregation of remotely sensed land vegetation and latent and sensible heat fluxes using TM data from the FIFE experiment of 1987 in Kansas. In all three experiments it was found that the surface fluxes and land characteristics can be scaled, and that macroscale models based on effective parameters are sufficient to account for the small-scale heterogeneities investigated.

  9. Aerodynamic Simulation of the MARINTEK Braceless Semisubmersible Wave Tank Tests

    NASA Astrophysics Data System (ADS)

    Stewart, Gordon; Muskulus, Michael

    2016-09-01

    Model-scale experiments of floating offshore wind turbines are important both for platform design in industry and for numerical model validation in the research community. An important consideration in wave tank testing of offshore wind turbines is scaling effects, especially the tension between accurate scaling of hydrodynamic and aerodynamic forces. The recent MARINTEK braceless semisubmersible wave tank experiment utilizes a novel aerodynamic force actuator to decouple the scaling of the aerodynamic forces. This actuator consists of an array of motors that pull on cables to provide aerodynamic forces calculated by a blade-element momentum code in real time as the experiment is conducted. This type of system has the advantage of supplying realistically scaled aerodynamic forces that include dynamic forces from platform motion, but it does not provide the insight into the accuracy of the aerodynamic models that an actual model-scale rotor could provide. The modeling of this system presents an interesting challenge, as there are two ways to simulate the aerodynamics: either by using the turbulent wind fields as inputs to the aerodynamic model of the design code, or by bypassing the aerodynamic model and using the forces applied to the experimental turbine as direct inputs to the simulation. This paper investigates best practices for modeling this type of novel aerodynamic actuator using a modified wind turbine simulation tool, and demonstrates that bypassing the dynamic aerodynamics solver of design codes can lead to erroneous results.

  10. Modifying mixing and instability growth through the adjustment of initial conditions in a high-energy-density counter-propagating shear experiment on OMEGA

    DOE PAGES

    Merritt, E. C.; Doss, F. W.; Loomis, E. N.; ...

    2015-06-24

    Counter-propagating shear experiments conducted at the OMEGA Laser Facility have been evaluating the effect of target initial conditions, specifically the characteristics of a tracer foil located at the shear boundary, on Kelvin-Helmholtz instability evolution and experiment transition toward nonlinearity and turbulence in the high-energy-density (HED) regime. Experiments are focused on both identifying and uncoupling the dependence of the model initial turbulent length scale in variable-density turbulence models of k-ϵ type on competing physical instability seed lengths, as well as developing a path toward fully developed turbulent HED experiments. We present results from a series of experiments controllably and independently varying two initial types of scale lengths in the experiment: the thickness and surface roughness (surface perturbation scale spectrum) of a tracer layer at the shear interface. We show that decreasing the layer thickness and increasing the surface roughness both have the ability to increase the relative mixing in the system, and thus theoretically decrease the time required to begin transitioning to turbulence in the system. In addition, we also show that we can connect a change in observed mix width growth due to increased foil surface roughness to an analytically predicted change in model initial turbulent scale lengths.

  11. An Illustrative Guide to the Minerva Framework

    NASA Astrophysics Data System (ADS)

    Flom, Erik; Leonard, Patrick; Hoeffel, Udo; Kwak, Sehyun; Pavone, Andrea; Svensson, Jakob; Krychowiak, Maciej; Wendelstein 7-X Team Collaboration

    2017-10-01

    Modern physics experiments require tracking and modelling data and their associated uncertainties on a large scale, as well as the combined implementation of multiple independent data streams for sophisticated modelling and analysis. The Minerva Framework offers a centralized, user-friendly method of large-scale physics modelling and scientific inference. Currently used by teams at multiple large-scale fusion experiments including the Joint European Torus (JET) and Wendelstein 7-X (W7-X), the Minerva framework provides a forward-model-friendly architecture for developing and implementing models for large-scale experiments. One aspect of the framework involves so-called data sources, which are nodes in the graphical model. These nodes are supplied with engineering and physics parameters. When end-user level code calls a node, it is checked network-wide against its dependent nodes for changes since its last implementation and returns version-specific data. Here, a filterscope data node is used as an illustrative example of the Minerva Framework's data management structure and its further application to Bayesian modelling of complex systems. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under Grant Agreement No. 633053.

  12. Large-Scale Experiments in Microbially Induced Calcite Precipitation (MICP): Reactive Transport Model Development and Prediction

    NASA Astrophysics Data System (ADS)

    Nassar, Mohamed K.; Gurung, Deviyani; Bastani, Mehrdad; Ginn, Timothy R.; Shafei, Babak; Gomez, Michael G.; Graddy, Charles M. R.; Nelson, Doug C.; DeJong, Jason T.

    2018-01-01

    Design of in situ microbially induced calcite precipitation (MICP) strategies relies on a predictive capability. To date much of the mathematical modeling of MICP has focused on small-scale experiments and/or one-dimensional flow in porous media, and successful parameterizations of models in these settings may not pertain to larger scales or to nonuniform, transient flows. Our objective in this article is to report on modeling to test our ability to predict behavior of MICP under controlled conditions in a meter-scale tank experiment with transient nonuniform transport in a natural soil, using independently determined parameters. Flow in the tank was controlled by three wells, via a complex cycle of injection/withdrawals followed by no-flow intervals. Different injection solution recipes were used in sequence for the transport characterization, biostimulation, cementation, and groundwater rinse phases of the 17-day experiment. Reaction kinetics were calibrated using separate column experiments designed with a similar sequence of phases. This allowed for a parsimonious modeling approach with zero fitting parameters for the tank experiment. These experiments and data were simulated using PHT3D, involving transient nonuniform flow, alternating low and high Damköhler reactive transport, and combined equilibrium and kinetically controlled biogeochemical reactions. The assumption that microbes mediating the reaction were exclusively sessile, and with constant activity, in conjunction with the foregoing treatment of the reaction network, provided for efficient and accurate modeling of the entire process leading to nonuniform calcite precipitation. This analysis suggests that under the biostimulation conditions applied here the assumption of a steady-state sessile biocatalyst suffices to describe the microbially mediated calcite precipitation.

  13. The structural invariance of the Temporal Experience of Pleasure Scale across time and culture.

    PubMed

    Li, Zhi; Shi, Hai-Song; Elis, Ori; Yang, Zhuo-Ya; Wang, Ya; Lui, Simon S Y; Cheung, Eric F C; Kring, Ann M; Chan, Raymond C K

    2018-06-01

    The Temporal Experience of Pleasure Scale (TEPS) is a self-report instrument that assesses pleasure experience. Initial scale development and validation in the United States yielded a two-factor solution comprising anticipatory and consummatory pleasure. However, a four-factor model that further parsed anticipatory and consummatory pleasure experience into abstract and contextual components was a better model fit in China. In this study, we tested both models using confirmatory factor analysis in an American and a Chinese sample and examined the configural measurement invariance of both models across culture. We also examined the temporal stability of the four-factor model in the Chinese sample. The results indicated that the four-factor model of the TEPS was a better fit than the two-factor model in the Chinese sample. In contrast, both models fit the American sample, which also included many Asian American participants. The four-factor model fit both the Asian American and Chinese samples equally well. Finally, the four-factor model demonstrated good measurement and structural invariance across culture and time, suggesting that this model may be applicable in both cross-cultural and longitudinal studies. © 2018 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  14. Experimental congruence of interval scale production from paired comparisons and ranking for image evaluation

    NASA Astrophysics Data System (ADS)

    Handley, John C.; Babcock, Jason S.; Pelz, Jeff B.

    2003-12-01

    Image evaluation tasks are often conducted using paired comparisons or ranking. To elicit interval scales, both methods rely on Thurstone's Law of Comparative Judgment, in which objects closer in psychological space are more often confused in preference comparisons by a putative discriminal random process. It is often debated whether paired comparisons and ranking yield the same interval scales. An experiment was conducted to assess scale production using paired comparisons and ranking. For this experiment a Pioneer Plasma Display and an Apple Cinema Display were used for stimulus presentation. Observers performed rank-order and paired comparisons tasks on both displays. For each of five scenes, six images were created by manipulating attributes such as lightness, chroma, and hue using six different settings. The intention was to simulate the variability from a set of digital cameras or scanners. Nineteen subjects (5 females, 14 males), ranging from 19 to 51 years of age, participated in this experiment. Using a paired comparison model and a ranking model, scales were estimated for each display and image combination, yielding ten scale pairs, ostensibly measuring the same psychological scale. The Bradley-Terry model was used for the paired comparisons data and the Bradley-Terry-Mallows model was used for the ranking data. Each model was fit using maximum likelihood estimation and assessed using likelihood ratio tests. Approximate 95% confidence intervals were also constructed using likelihood ratios. Model fits for paired comparisons were satisfactory for all scales except those from two image/display pairs; the ranking model fit uniformly well on all data sets. Arguing from overlapping confidence intervals, we conclude that paired comparisons and ranking produce no conflicting decisions regarding the ultimate ordering of treatment preferences, but paired comparisons yield greater precision at the expense of lack-of-fit.
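    The Bradley-Terry maximum-likelihood fit described above can be sketched with the standard minorization-maximization (MM) iteration. This is a generic illustration assuming a simple win-count matrix; the study's actual estimation code and data are not reproduced here.

```python
def bradley_terry(wins, n_iter=500):
    # Maximum-likelihood Bradley-Terry worths via the MM iteration:
    # wins[i][j] = number of times item i was preferred to item j.
    # Interval-scale values are the log-worths; worths are normalized to sum to 1.
    m = len(wins)
    p = [1.0] * m
    for _ in range(n_iter):
        new_p = []
        for i in range(m):
            W_i = sum(wins[i][j] for j in range(m) if j != i)   # total wins of i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(m) if j != i)
            new_p.append(W_i / denom)
        total = sum(new_p)
        p = [x / total for x in new_p]   # fix the scale's arbitrary origin
    return p
```

    With preference counts from paired comparisons, the resulting worths order the treatments, and likelihood ratios on the fitted model give the confidence intervals used in the study.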

  15. Investigation of Statistical Inference Methodologies Through Scale Model Propagation Experiments

    DTIC Science & Technology

    2015-09-30

    statistical inference methodologies for ocean-acoustic problems by investigating and applying statistical methods to data collected from scale-model... to begin planning experiments for statistical inference applications. APPROACH In the ocean acoustics community over the past two decades... solutions for waveguide parameters. With the introduction of statistical inference to the field of ocean acoustics came the desire to interpret marginal

  16. Multi-scale modeling in cell biology

    PubMed Central

    Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick

    2009-01-01

    Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808

  17. Comparison of batch sorption tests, pilot studies, and modeling for estimating GAC bed life.

    PubMed

    Scharf, Roger G; Johnston, Robert W; Semmens, Michael J; Hozalski, Raymond M

    2010-02-01

    Saint Paul Regional Water Services (SPRWS) in Saint Paul, MN experiences annual taste and odor episodes during the warm summer months. These episodes are attributed primarily to geosmin that is produced by cyanobacteria growing in the chain of lakes used to convey and store the source water pumped from the Mississippi River. Batch experiments, pilot-scale experiments, and model simulations were performed to determine the geosmin removal performance and bed life of a granular activated carbon (GAC) filter-sorber. Using batch adsorption isotherm parameters, the estimated bed life for the GAC filter-sorber ranged from 920 to 1241 days when challenged with a constant concentration of 100 ng/L of geosmin. The estimated bed life obtained using the AdDesignS model and the actual pilot-plant loading history was 594 days. Based on the pilot-scale GAC column data, the actual bed life (>714 days) was much longer than the simulated values because bed life was extended by biological degradation of geosmin. The continuous feeding of high concentrations of geosmin (100-400 ng/L) in the pilot-scale experiments enriched for a robust geosmin-degrading culture that was sustained when the geosmin feed was turned off for 40 days. It is unclear, however, whether a geosmin-degrading culture can be established in a full-scale filter that experiences taste and odor episodes for only 1 or 2 months per year. The results of this research indicate that care must be exercised in the design and interpretation of pilot-scale experiments and model simulations for predicting taste and odor removal in full-scale GAC filter-sorbers. Adsorption and the potential for biological degradation must be considered to estimate GAC bed life for the conditions of intermittent geosmin loading typically experienced by full-scale systems. (c) 2009 Elsevier Ltd. All rights reserved.

  18. Evaluation of a Genome-Scale In Silico Metabolic Model for Geobacter metallireducens Using Proteomic Data from a Field Biostimulation Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Yilin; Wilkins, Michael J.; Yabusaki, Steven B.

    2012-12-12

    Biomass and shotgun global proteomics data that reflected relative protein abundances from samples collected during the 2008 experiment at the U.S. Department of Energy Integrated Field-Scale Subsurface Research Challenge site in Rifle, Colorado, provided an unprecedented opportunity to validate a genome-scale metabolic model of Geobacter metallireducens and assess its performance with respect to prediction of metal reduction, biomass yield, and growth rate under dynamic field conditions. Reconstructed from annotated genomic sequence, biochemical, and physiological data, the constraint-based in silico model of G. metallireducens relates an annotated genome sequence to the physiological functions with 697 reactions controlled by 747 enzyme-coding genes. Proteomic analysis showed that 180 of the 637 G. metallireducens proteins detected during the 2008 experiment were associated with specific metabolic reactions in the in silico model. When the field-calibrated Fe(III) terminal electron acceptor process reaction in a reactive transport model for the field experiments was replaced with the genome-scale model, the model predicted that the largest metabolic fluxes through the in silico model reactions generally correspond to the highest abundances of proteins that catalyze those reactions. Central metabolism predicted by the model agrees well with protein abundance profiles inferred from proteomic analysis. Model discrepancies with the proteomic data, such as the relatively low fluxes through amino acid transport and metabolism, revealed pathways or flux constraints in the in silico model that could be updated to more accurately predict metabolic processes that occur in the subsurface environment.

  19. Scaling, soil moisture and evapotranspiration in runoff models

    NASA Technical Reports Server (NTRS)

    Wood, Eric F.

    1993-01-01

    The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many climatology research experiments. The acquisition of high-resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere interactions and macroscale models. One essential research question is how to account for the small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, the probability distribution for evaporation is derived, which illustrates the conditions under which scaling should work. A correction algorithm that may be appropriate for the land parameterization of a GCM is derived using a second-order linearization scheme, and its performance is evaluated.
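    A second-order correction of the kind described can be sketched generically: for a nonlinear flux f of a heterogeneous variable x with mean μ and variance σ², E[f(x)] ≈ f(μ) + ½ f''(μ) σ². The toy flux function and parameter values below are our own illustration, not the paper's actual scheme; they only demonstrate why the naive "effective parameter" estimate f(μ) is biased and how the second-order term reduces that bias.

```python
import math
import random

def flux(s):
    # Toy nonlinear (concave) evaporation efficiency vs. soil moisture s.
    return 1.0 - math.exp(-5.0 * s)

def flux_2nd_deriv(s):
    return -25.0 * math.exp(-5.0 * s)

random.seed(1)
# Heterogeneous soil moisture field: Gaussian clipped to [0, 1] (assumed).
samples = [min(max(random.gauss(0.3, 0.1), 0.0), 1.0) for _ in range(100_000)]
mu = sum(samples) / len(samples)
var = sum((x - mu) ** 2 for x in samples) / len(samples)

true_mean = sum(flux(x) for x in samples) / len(samples)  # areal-average flux
naive = flux(mu)                                          # flux of the mean
corrected = flux(mu) + 0.5 * flux_2nd_deriv(mu) * var     # 2nd-order correction
```

    Because f is concave, the naive estimate overshoots the areal average (Jensen's inequality), and the variance-weighted curvature term pulls it most of the way back.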

  20. Preferential flow across scales: how important are plot scale processes for a catchment scale model?

    NASA Astrophysics Data System (ADS)

    Glaser, Barbara; Jackisch, Conrad; Hopp, Luisa; Klaus, Julian

    2017-04-01

    Numerous experimental studies have shown the importance of preferential flow for solute transport and runoff generation. As a consequence, various approaches exist to incorporate preferential flow in hydrological models. However, few studies have applied models that incorporate preferential flow at the hillslope scale, and even fewer at the catchment scale. Certainly, one main difficulty for progress is the determination of an adequate parameterization for preferential flow at these spatial scales. This study applies a 3D physically based model (HydroGeoSphere) of a headwater region (6 ha) of the Weierbach catchment (Luxembourg). The base model was implemented without preferential flow and was limited in simulating fast catchment responses. We therefore hypothesized that the discharge performance could be improved by utilizing a dual permeability approach to represent preferential flow. We used the information from bromide irrigation experiments performed on three 1 m2 plots to parameterize preferential flow. In a first step we ran 20,000 Monte Carlo simulations of these irrigation experiments in a 1 m2 column of the headwater catchment model, varying the dual permeability parameters (15 variable parameters). These simulations identified many equifinal, yet very different, parameter sets that reproduced the bromide depth profiles well. Therefore, in the next step we chose 52 parameter sets (the 40 best and 12 low-performing sets) for testing the effect of incorporating preferential flow in the headwater catchment scale model. The variability of the flow pattern responses at the headwater catchment scale was small between the different parameterizations and did not coincide with the variability at plot scale. The simulated discharge time series of the different parameterizations clustered into six groups of similar response, ranging from nearly unaffected to completely changed responses compared to the base case model without dual permeability. 
Yet in none of the groups did the simulated discharge response clearly improve compared to the base case. The same held true for some observed soil moisture time series, although at the plot scale the incorporation of preferential flow was necessary to simulate the irrigation experiments correctly. These results rejected our hypothesis and open a discussion on how important plot-scale processes and heterogeneities are at the catchment scale. Our preliminary conclusion is that vertical preferential flow is important for the irrigation experiments at the plot scale, while discharge generation at the catchment scale is largely controlled by lateral preferential flow. The lateral component, however, was already considered in the base case model through different hydraulic conductivities in different soil layers. This can explain why the internal behavior of the model at single spots seems not to be relevant for the overall hydrometric catchment response. Nonetheless, the inclusion of vertical preferential flow improved the realism of the internal processes of the model (fitting profiles at the plot scale, unchanged response at the catchment scale) and should be considered depending on the intended use of the model. Furthermore, we cannot yet exclude that the quantitative discharge performance at the catchment scale could still be improved by a dual permeability approach; this will be tested in a parameter optimization process.

  1. A mechanical model of bacteriophage DNA ejection

    NASA Astrophysics Data System (ADS)

    Arun, Rahul; Ghosal, Sandip

    2017-08-01

    Single molecule experiments on bacteriophages show an exponential scaling for the dependence of mobility on the length of DNA within the capsid. It has been suggested that this could be due to the "capstan mechanism" - the exponential amplification of friction forces that results when a rope is wound around a cylinder, as in a ship's capstan. Here we describe a desktop experiment that illustrates the effect. Though our model phage is a million times larger, it exhibits the same scaling observed in single molecule experiments.
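    The capstan scaling invoked here is the classical belt-friction relation T_load/T_hold = e^{μθ}, where μ is the friction coefficient and θ the wrap angle in radians. A minimal sketch (function name ours):

```python
import math

def capstan_ratio(mu, wraps):
    # Belt-friction (capstan) amplification: the holding force needed to resist
    # a load T is only T * exp(-mu * theta), where theta is the wrap angle in
    # radians; the returned ratio exp(mu * theta) grows exponentially with wraps.
    theta = 2.0 * math.pi * wraps
    return math.exp(mu * theta)
```

    Even modest friction (μ = 0.3) over three wraps gives an amplification of a few hundred, which is the exponential length dependence the experiments observe.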

  2. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    NASA Astrophysics Data System (ADS)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increasing resolution of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for the Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data from multiple climate model simulations, as well as scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets in the context of a large-scale international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC), and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments.
At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.

  3. SDG and qualitative trend based model multiple scale validation

    NASA Astrophysics Data System (ADS)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, are carried out at a single scale, and depend on human experience. A multiple-scale validation method based on the SDG (Signed Directed Graph) and qualitative trends is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Then, complete testing scenarios are produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by validating a reactor model.

  4. Flume experimentation and simulation of bedrock channel processes

    NASA Astrophysics Data System (ADS)

    Thompson, Douglas; Wohl, Ellen

    Flume experiments can provide cost-effective, physically manageable miniature representations of complex bedrock channels. The inherent change in scale in such experiments requires a corresponding change in the scale of the forces represented in the flume system. Three modeling approaches have been developed that either ignore the scaling effects, utilize the change in scaled forces, or assume similarity of process between scales. An understanding of the nonlinear influence of a change in scale on all the forces involved is important to correctly analyze model results. Similarly, proper design and operation of flume experiments requires knowledge of the fundamental components of flume systems. Entrance and exit regions of the flume are used to provide good experimental conditions in the measurement region of the flume where data are collected. To ensure reproducibility, large-scale turbulence must be removed in the head of the flume and velocity profiles must become fully developed in the entrance region. Water-surface slope and flow acceleration effects from downstream water-depth control must also be isolated in the exit region. Statistical design and development of representative channel substrate also influence model results in these systems. With proper experimental design, flumes may be used to investigate bedrock channel hydraulics, sediment-transport relations, and morphologic evolution. In particular, researchers have successfully used flume experiments to demonstrate the importance of turbulence and substrate characteristics in bedrock channel evolution. Turbulence often operates in a self-perpetuating fashion: it can erode bedrock walls even with clear water and increases the mobility of sediment particles. Bedrock substrate influences channel evolution by offering varying resistance to erosion, controlling the location or type of incision, and modifying the local influence of turbulence.
An increased usage of scaled flume models may help to clarify the remaining uncertainties involving turbulence, channel substrate and bedrock channel evolution.
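The "change in scaled forces" discussed above is commonly handled through Froude similarity, in which the ratio of inertial to gravitational forces is preserved between model and prototype. A minimal sketch of the resulting scale ratios; the 1:25 ratio and prototype numbers are illustrative, not values from the study:

```python
import math

def froude_scaled(prototype: dict, length_ratio: float) -> dict:
    """Convert prototype quantities to a Froude-similar undistorted model.
    length_ratio = L_model / L_prototype; velocities and times scale with
    sqrt(Lr), discharges with Lr**2.5."""
    r = length_ratio
    return {
        "length_m": prototype["length_m"] * r,
        "velocity_m_s": prototype["velocity_m_s"] * math.sqrt(r),
        "discharge_m3_s": prototype["discharge_m3_s"] * r ** 2.5,
        "time_s": prototype["time_s"] * math.sqrt(r),
    }

# A hypothetical 100 m prototype reach reproduced at 1:25 scale:
model = froude_scaled(
    {"length_m": 100.0, "velocity_m_s": 2.0,
     "discharge_m3_s": 50.0, "time_s": 3600.0},
    1 / 25,
)
```

The nonlinearity noted in the abstract is visible here: halving the model size does not halve the discharge, which falls off with the 2.5 power of the length ratio.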

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhinefrank, Kenneth E.; Lenee-Bluhm, Pukha; Prudell, Joseph H.

    The most prudent path to a full-scale design, build and deployment of a wave energy conversion (WEC) system involves establishment of validated numerical models using physical experiments in a methodical scaling program. This project provides essential additional rounds of wave tank testing at a 1:33 scale and ocean/bay testing at a 1:7 scale, necessary to validate the numerical modeling that is essential to a utility-scale WEC design and associated certification.

  6. Scale construction utilising the Rasch unidimensional measurement model: A measurement of adolescent attitudes towards abortion.

    PubMed

    Hendriks, Jacqueline; Fyfe, Sue; Styles, Irene; Skinner, S Rachel; Merriman, Gareth

    2012-01-01

    Measurement scales seeking to quantify latent traits such as attitudes are often developed using traditional psychometric approaches. Application of the Rasch unidimensional measurement model may complement or replace these techniques, as the model can be used to construct scales and check their psychometric properties. If data fit the model, then a scale with invariant measurement properties, including interval-level scores, will have been developed. This paper highlights the unique properties of the Rasch model. Items developed to measure adolescent attitudes towards abortion are used to exemplify the process. Ten attitude and intention items relating to abortion were answered by 406 adolescents aged 12 to 19 years, as part of the "Teen Relationships Study". The sampling framework captured a range of sexual and pregnancy experiences. Items were assessed for fit to the Rasch model, including checks for Differential Item Functioning (DIF) by gender, sexual experience or pregnancy experience. Rasch analysis of the original dataset initially demonstrated that some items did not fit the model. Rescoring of one item (B5) and removal of another (L31) resulted in fit, as shown by a non-significant item-trait interaction total chi-square and a mean log residual fit statistic for items of -0.05 (SD=1.43). No DIF existed for the revised scale. However, items did not distinguish as well amongst persons with the most intense attitudes as they did for other persons. A person separation index of 0.82 indicated good reliability. Application of the Rasch model produced a valid and reliable scale measuring adolescent attitudes towards abortion, with stable measurement properties. The Rasch process provided an extensive range of diagnostic information concerning item and person fit, enabling changes to be made to scale items. This example shows the value of the Rasch model in developing scales for both social science and health disciplines.
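The dichotomous Rasch model described above gives the probability of endorsing an item as a logistic function of the difference between person ability and item difficulty. A minimal sketch; the parameter values are illustrative, not estimates from the study:

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """P(X = 1) = exp(theta - delta) / (1 + exp(theta - delta)) under
    the dichotomous Rasch model, with person ability theta and item
    difficulty delta on a shared logit scale."""
    z = ability - difficulty
    return math.exp(z) / (1.0 + math.exp(z))

# A person whose ability matches the item difficulty has a 50% chance
# of endorsement; endorsement probability rises monotonically with ability.
p_equal = rasch_probability(0.0, 0.0)
p_higher = rasch_probability(1.5, 0.0)
```

Because the probability depends only on the difference theta - delta, item and person parameters can be estimated independently of each other, which is the source of the invariant, interval-level measurement properties the abstract refers to.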

  7. Global Modeling, Field Campaigns, Upscaling and Ray Desjardins

    NASA Technical Reports Server (NTRS)

    Sellers, P. J.; Hall, F. G.

    2012-01-01

    In the early 1980's, it became apparent that land surface radiation and energy budgets were unrealistically represented in Global Circulation Models (GCMs). Shortly thereafter, it became clear that the land carbon budget was also poorly represented in Earth System Models (ESMs). A number of scientific communities, including GCM/ESM modelers, micrometeorologists, satellite data specialists and plant physiologists, came together to design field experiments that could be used to develop and validate the contemporary prototype land surface models. These experiments were designed to measure land surface fluxes of radiation, heat, water vapor and CO2 using a network of flux towers and other plot-scale techniques, coincident with satellite measurements of related state variables. The interdisciplinary teams involved in these experiments quickly became aware of the scale gap between plot-scale measurements (approx 10 - 100 m), satellite measurements (100 m - 10 km), and GCM grid areas (10 - 200 km). At the time, there was no established flux measurement capability to bridge these scale gaps. Then, a Canadian science team led by Ray Desjardins started to actively participate in the design and execution of the experiments, with airborne eddy correlation providing the radically innovative bridge across the scale gaps. In a succession of brilliantly executed field campaigns followed up by convincing scientific analyses, they demonstrated that airborne eddy correlation allied with satellite data was the most powerful upscaling tool available to the community. The rest is history: the realism and credibility of weather and climate models have improved enormously over the last 25 years, with immense benefits to the public and policymakers.

  8. Flooding Fragility Experiments and Prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Curtis L.; Tahhan, Antonio; Muchmore, Cody

    2016-09-01

    This report describes the work that has been performed on flooding fragility, covering both the experimental tests being carried out and the probabilistic fragility predictive models being produced in order to use the test results. Flooding experiments involving full-scale doors have commenced in the Portal Evaluation Tank. The goal of these experiments is to develop a full-scale component flooding experiment protocol and to acquire data that can be used to create Bayesian regression models representing the fragility of these components. This work is in support of the Risk-Informed Safety Margin Characterization (RISMC) Pathway external hazards evaluation research and development.

  9. Local Scale Radiobrightness Modeling During the Intensive Observing Period-4 of the Cold Land Processes Experiment-1

    NASA Astrophysics Data System (ADS)

    Kim, E.; Tedesco, M.; de Roo, R.; England, A. W.; Gu, H.; Pham, H.; Boprie, D.; Graf, T.; Koike, T.; Armstrong, R.; Brodzik, M.; Hardy, J.; Cline, D.

    2004-12-01

    The NASA Cold Land Processes Field Experiment (CLPX-1) was designed to provide microwave remote sensing observations and ground truth for studies of snow and frozen ground remote sensing, particularly issues related to scaling. CLPX-1 was conducted in 2002 and 2003 in Colorado, USA. One of the goals of the experiment was to test the capabilities of microwave emission models at different scales. Initial forward model validation work has concentrated on the Local-Scale Observation Site (LSOS), a 0.8 ha study site consisting of open meadows separated by trees, where the most detailed measurements were made of snow depth and of temperature, density, and grain size profiles. Results obtained for the 3rd Intensive Observing Period (IOP3; February 2003, dry snow) suggest that a model based on Dense Medium Radiative Transfer (DMRT) theory is able to reproduce the recorded brightness temperatures using snow parameters derived from field measurements. This paper focuses on the ability of forward DMRT modelling, combined with snowpack measurements, to reproduce the radiobrightness signatures observed by the University of Michigan's Truck-Mounted Radiometer System (TMRS) at 19 and 37 GHz during the 4th IOP (IOP4) in March 2003. Unlike IOP3, IOP4 included both wet and dry periods, providing a valuable test of DMRT model performance. In addition, a comparison will be made for the one day of coincident observations by the University of Tokyo's Ground-Based Microwave Radiometer-7 (GBMR-7) and the TMRS. The plot-scale study in this paper establishes a baseline of DMRT performance for later studies at successively larger scales. These scaling studies will help guide the choice of future snow retrieval algorithms and the design of future Cold Lands observing systems.

  10. Interactive, graphical processing unit-based evaluation of evacuation scenarios at the state scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B

    2011-01-01

    In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real-time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome the severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of the transportation network, and real-time execution capabilities. A novel, field-based modeling technique and its implementation on graphical processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale networks are described based on an implementation of this approach in a software system called GARFIELD. Current modeling capabilities and implementation limitations are described, along with possible use cases and future research.

  11. Beyond theories of plant invasions: Lessons from natural landscapes

    USGS Publications Warehouse

    Stohlgren, Thomas J.

    2002-01-01

    There are a growing number of contrasting theories about plant invasions, but most are only weakly supported by small-scale field experiments, observational studies, and mathematical models. Among the most contentious theories is that species-rich habitats should be less vulnerable to plant invasion than species-poor sites, stemming from earlier theories that competition is a major force in structuring plant communities. Early ecologists such as Charles Darwin (1859) and Charles Elton (1958) suggested that a lack of intense interspecific competition on islands made these low-diversity habitats vulnerable to invasion. Small-scale field experiments have supported and contradicted this theory, as have various mathematical models. In contrast, many large-scale observational studies and detailed vegetation surveys in continental areas often report that species-rich areas are more heavily invaded than species-poor areas, but there are exceptions here as well. In this article, I show how these seemingly contrasting patterns converge once appropriate spatial and temporal scales are considered in complex natural environments. I suggest ways in which small-scale experiments, mathematical models, and large-scale observational studies can be improved and better integrated to advance a theoretically based understanding of plant invasions.

  12. Validation of model predictions of pore-scale fluid distributions during two-phase flow

    NASA Astrophysics Data System (ADS)

    Bultreys, Tom; Lin, Qingyang; Gao, Ying; Raeini, Ali Q.; AlRatrout, Ahmed; Bijeljic, Branko; Blunt, Martin J.

    2018-05-01

    Pore-scale two-phase flow modeling is an important technology to study a rock's relative permeability behavior. To investigate if these models are predictive, the calculated pore-scale fluid distributions which determine the relative permeability need to be validated. In this work, we introduce a methodology to quantitatively compare models to experimental fluid distributions in flow experiments visualized with microcomputed tomography. First, we analyzed five repeated drainage-imbibition experiments on a single sample. In these experiments, the exact fluid distributions were not fully repeatable on a pore-by-pore basis, while the global properties of the fluid distribution were. Then two fractional flow experiments were used to validate a quasistatic pore network model. The model correctly predicted the fluid present in more than 75% of pores and throats in drainage and imbibition. To quantify what this means for the relevant global properties of the fluid distribution, we compare the main flow paths and the connectivity across the different pore sizes in the modeled and experimental fluid distributions. These essential topology characteristics matched well for drainage simulations, but not for imbibition. This suggests that the pore-filling rules in the network model we used need to be improved to make reliable predictions of imbibition. The presented analysis illustrates the potential of our methodology to systematically and robustly test two-phase flow models to aid in model development and calibration.

  13. Upscaling of U (VI) desorption and transport from decimeter‐scale heterogeneity to plume‐scale modeling

    USGS Publications Warehouse

    Curtis, Gary P.; Kohler, Matthias; Kannappan, Ramakrishnan; Briggs, Martin A.; Day-Lewis, Frederick D.

    2015-01-01

    Scientifically defensible predictions of field-scale U(VI) transport in groundwater require an understanding of key processes at multiple scales. These scales range from smaller than the sediment grain scale (less than 10 μm) to as large as the field scale, which can extend over several kilometers. The key processes that need to be considered include both geochemical reactions in solution and at sediment surfaces, as well as physical transport processes including advection, dispersion, and pore-scale diffusion. The research summarized in this report includes both experimental and modeling results from batch, column and tracer tests. The objectives of this research were to: (1) quantify the rates of U(VI) desorption from sediments acquired from a uranium-contaminated aquifer in batch experiments; (2) quantify rates of U(VI) desorption in column experiments with variable chemical conditions; and (3) quantify nonreactive tracer and U(VI) transport in field tests.

  14. Utility of the Social Responsiveness Scale-Parent Report (SRS-Parent) as a Diagnostic Tool for Autism Identification

    ERIC Educational Resources Information Center

    Begay, Kristin

    2016-01-01

    Rating scales are often used as part of the evaluation process to diagnose autism spectrum disorder (ASD). Rating scales that are modeled after the experiences and understanding of the Caucasian American race may not reflect the unique experiences of individuals from other races or ethnicities. If parent ratings do not uniformly identify the ASD…

  15. New mechanistically based model for predicting reduction of biosolids waste by ozonation of return activated sludge.

    PubMed

    Isazadeh, Siavash; Feng, Min; Urbina Rivas, Luis Enrique; Frigon, Dominic

    2014-04-15

    Two pilot-scale activated sludge reactors were operated for 98 days to provide the necessary data to develop and validate a new mathematical model predicting the reduction of biosolids production by ozonation of the return activated sludge (RAS). Three ozone doses were tested during the study. In addition to the pilot-scale study, laboratory-scale experiments were conducted with mixed liquor suspended solids and with pure cultures to parameterize the biomass inactivation process during exposure to ozone. The experiments revealed that biomass inactivation occurred even at the lowest doses, but that it was not associated with extensive COD solubilization. For validation, the model was used to simulate the temporal dynamics of the pilot-scale operational data. Increasing the description accuracy of the inactivation process improved the precision of the model in predicting the operational data.

  16. Thermal/structural modeling of a large scale in situ overtest experiment for defense high level waste at the Waste Isolation Pilot Plant Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, H.S.; Stone, C.M.; Krieg, R.D.

    Several large scale in situ experiments in bedded salt formations are currently underway at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico, USA. In these experiments, the thermal and creep responses of salt around several different underground room configurations are being measured. Data from the tests are to be compared to thermal and structural responses predicted in pretest reference calculations. The purpose of these comparisons is to evaluate computational models developed from laboratory data prior to fielding of the in situ experiments. In this paper, the computational models used in the pretest reference calculation for one of the large scale tests, the Overtest for Defense High Level Waste, are described, and the pretest computed thermal and structural responses are compared to early data from the experiment. The comparisons indicate that computed and measured temperatures for the test agree to within ten percent, but that measured deformation rates are between two and three times greater than the corresponding computed rates. 10 figs., 3 tabs.

  17. Modeling of copper sorption onto GFH and design of full-scale GFH adsorbers.

    PubMed

    Steiner, Michele; Pronk, Wouter; Boller, Markus A

    2006-03-01

    During rain events, copper wash-off from copper roofs results in environmental hazards. In this study, columns filled with granulated ferric hydroxide (GFH) were used to treat copper-containing roof runoff. It was shown that copper could be removed to a high extent. A model was developed to describe this removal process. The model was based on the Two Region Model (TRM), extended with an additional diffusion zone. The extended model was able to describe the copper removal in long-term experiments (up to 125 days) with variable flow rates reflecting realistic runoff events. The four parameters of the model were estimated from data gained in dedicated column experiments designed for maximum sensitivity to each parameter. After model validation, the parameter set was used for the design of full-scale adsorbers. These full-scale adsorbers show high removal rates during extended periods of time.

  18. Scale model experimentation: using terahertz pulses to study light scattering.

    PubMed

    Pearce, Jeremy; Mittleman, Daniel M

    2002-11-07

    We describe a new class of experiments involving applications of terahertz radiation to problems in biomedical imaging and diagnosis. These involve scale model measurements, in which information can be gained about pulse propagation in scattering media. Because of the scale invariance of Maxwell's equations, these experiments can provide insight for researchers working on similar problems at shorter wavelengths. As a first demonstration, we measure the propagation constants for pulses in a dense collection of spherical scatterers, and compare with the predictions of the quasi-crystalline approximation. Even though the fractional volume in our measurements exceeds the limit of validity of this model, we find that it still predicts certain features of the propagation with reasonable accuracy.
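The scale invariance the authors exploit can be stated compactly: multiplying every length in a scattering problem by the same factor leaves the physics unchanged, provided the wavelength is scaled identically, since the Mie size parameter x = 2*pi*a/lambda is preserved. A minimal sketch; the dimensions and scale factor are illustrative, not taken from the paper:

```python
import math

def size_parameter(radius_m: float, wavelength_m: float) -> float:
    """Mie size parameter x = 2*pi*a / lambda; together with the
    refractive index it fixes the scattering behavior of a sphere."""
    return 2.0 * math.pi * radius_m / wavelength_m

# Scaling both scatterer and wavelength by the same factor preserves x,
# so a THz measurement on large spheres can stand in for a shorter-
# wavelength measurement on correspondingly smaller scatterers:
scale = 300.0  # hypothetical factor from ~1 um to ~300 um wavelengths
x_small = size_parameter(0.5e-6, 1.0e-6)
x_scaled = size_parameter(0.5e-6 * scale, 1.0e-6 * scale)
```

This is why propagation constants measured with THz pulses in a dense collection of millimeter-scale spheres can inform researchers working on the same scattering problem at optical wavelengths.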

  19. Multi-scale Modeling of Plasticity in Tantalum.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Hojun; Battaile, Corbett Chandler.; Carroll, Jay

    In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain rate-dependent yield stresses of single and polycrystalline tantalum, and compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain rate-dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model with experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformations at the grain scale and to engineering-scale applications. Furthermore, direct and quantitative comparisons between experimental measurements and simulations show that the proposed model accurately captures plasticity in the deformation of polycrystalline tantalum.

  20. Multi Length Scale Finite Element Design Framework for Advanced Woven Fabrics

    NASA Astrophysics Data System (ADS)

    Erol, Galip Ozan

    Woven fabrics are integral parts of many engineering applications spanning from personal protective garments to surgical scaffolds. They provide a wide range of opportunities in designing advanced structures because of their high tenacity, flexibility, high strength-to-weight ratios and versatility. These advantages result from their inherent multi scale nature where the filaments are bundled together to create yarns while the yarns are arranged into different weave architectures. Their highly versatile nature opens up potential for a wide range of mechanical properties which can be adjusted based on the application. While woven fabrics are viable options for design of various engineering systems, being able to understand the underlying mechanisms of the deformation and associated highly nonlinear mechanical response is important and necessary. However, the multiscale nature and relationships between these scales make the design process involving woven fabrics a challenging task. The objective of this work is to develop a multiscale numerical design framework using experimentally validated mesoscopic and macroscopic length scale approaches by identifying important deformation mechanisms and recognizing the nonlinear mechanical response of woven fabrics. This framework is exercised by developing mesoscopic length scale constitutive models to investigate plain weave fabric response under a wide range of loading conditions. A hyperelastic transversely isotropic yarn material model with transverse material nonlinearity is developed for woven yarns (commonly used in personal protection garments). The material properties/parameters are determined through an inverse method where unit cell finite element simulations are coupled with experiments. The developed yarn material model is validated by simulating full scale uniaxial tensile, bias extension and indentation experiments, and comparing to experimentally observed mechanical response and deformation mechanisms. 
Moreover, mesoscopic unit cell finite elements are coupled with a design-of-experiments method to systematically identify the important yarn material properties for the macroscale response of various weave architectures. To demonstrate the macroscopic length scale approach, two new material models for woven fabrics were developed. The Planar Material Model (PMM) utilizes two important deformation mechanisms in woven fabrics: (1) yarn elongation, and (2) relative yarn rotation due to shear loads. The yarns' uniaxial tensile response is modeled with a nonlinear spring using constitutive relations while a nonlinear rotational spring is implemented to define fabric's shear stiffness. The second material model, Sawtooth Material Model (SMM) adopts the sawtooth geometry while recognizing the biaxial nature of woven fabrics by implementing the interactions between the yarns. Material properties/parameters required by both PMM and SMM can be directly determined from standard experiments. Both macroscopic material models are implemented within an explicit finite element code and validated by comparing to the experiments. Then, the developed macroscopic material models are compared under various loading conditions to determine their accuracy. Finally, the numerical models developed in the mesoscopic and macroscopic length scales are linked thus demonstrating the new systematic design framework involving linked mesoscopic and macroscopic length scale modeling approaches. The approach is demonstrated with both Planar and Sawtooth Material Models and the simulation results are verified by comparing the results obtained from meso and macro models.

  1. Evaluation of a genome-scale in silico metabolic model for Geobacter metallireducens by using proteomic data from a field biostimulation experiment.

    PubMed

    Fang, Yilin; Wilkins, Michael J; Yabusaki, Steven B; Lipton, Mary S; Long, Philip E

    2012-12-01

    Accurately predicting the interactions between microbial metabolism and the physical subsurface environment is necessary to enhance subsurface energy development, soil and groundwater cleanup, and carbon management. This study was an initial attempt to confirm the metabolic functional roles within an in silico model using environmental proteomic data collected during field experiments. Shotgun global proteomics data collected during a subsurface biostimulation experiment were used to validate a genome-scale metabolic model of Geobacter metallireducens; specifically, the ability of the metabolic model to predict metal reduction, biomass yield, and growth rate under dynamic field conditions. The constraint-based in silico model of G. metallireducens relates an annotated genome sequence to the physiological functions with 697 reactions controlled by 747 enzyme-coding genes. Proteomic analysis showed that 180 of the 637 G. metallireducens proteins detected during the 2008 experiment were associated with specific metabolic reactions in the in silico model. When the field-calibrated Fe(III) terminal electron acceptor process reaction in a reactive transport model for the field experiments was replaced with the genome-scale model, the model predicted that the largest metabolic fluxes through the in silico model reactions generally correspond to the highest abundances of proteins that catalyze those reactions. Central metabolism predicted by the model agrees well with protein abundance profiles inferred from proteomic analysis. Model discrepancies with the proteomic data, such as the relatively low abundances of proteins associated with amino acid transport and metabolism, revealed pathways or flux constraints in the in silico model that could be updated to more accurately predict metabolic processes that occur in the subsurface environment.

  2. Similarity Rules for Scaling Solar Sail Systems

    NASA Technical Reports Server (NTRS)

    Canfield, Stephen L.; Peddieson, John; Garbe, Gregory

    2010-01-01

Future science missions will require solar sails on the order of 200 square meters (or larger). However, ground and flight demonstrations must be conducted at significantly smaller sizes, due to the limitations of ground-based facilities and the cost and availability of flight opportunities. For this reason, the ability to understand the process of scaling, as it applies to solar sail system models and test data, is crucial to the advancement of this technology. This paper approaches the problem of scaling in solar sail models by developing a set of scaling laws, or similarity criteria, that provide constraints in the sail design process. These scaling laws establish functional relationships between the design parameters of prototype and model sails built at different geometric sizes. The approach is applied to a specific solar sail configuration and results in three (four) similarity criteria for static (dynamic) sail models. Further, it is demonstrated that, even given the unique sail material requirements and the gravitational load of Earth-bound experiments, it is possible to develop appropriately scaled sail experiments. In the longer term, these scaling laws can be used in the design of scaled experimental tests for solar sails and in analyzing the results from such tests.
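As an illustration of how a similarity criterion constrains a scaled test (the paper's configuration-specific criteria are not reproduced here), a generic membrane load parameter is Pi = P*L/(E*t), where P is the load per unit area, L a characteristic length, E the film modulus, and t the film thickness. Holding Pi constant fixes the load a scaled model must carry; all names and numbers below are illustrative assumptions:

```python
# Illustrative only: generic membrane similarity parameter Pi = P*L/(E*t).
# The paper's actual criteria are configuration-specific.

def required_model_load(P_full, L_full, t_full, L_model, t_model):
    """Load per unit area the scaled model must see so Pi_model == Pi_full
    (same film modulus E assumed for prototype and model)."""
    return P_full * (L_full / L_model) * (t_model / t_full)

# Full-scale: 200 m sail, 5 um film, solar radiation pressure ~9.1e-6 Pa
# (perfectly reflecting, 1 AU). Model: 10 m sail, same film.
P_m = required_model_load(P_full=9.1e-6, L_full=200.0, t_full=5e-6,
                          L_model=10.0, t_model=5e-6)
print(P_m)  # 20x geometric scale-down -> 20x larger test load
```

This is exactly the kind of relation that lets Earth-bound tests, with their different load environment, be mapped back to flight-scale behavior.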

  3. Integrated Modeling and Experimental Studies at the Meso Scale for Advanced Reactive Materials

    DTIC Science & Technology

    2016-07-01

Technical Report DTRA-TR-16-76, Integrated Modeling and Experimental Studies at the Meso-Scale for Advanced Reactive Materials: ...study the energy release processes that thermitic and/or exothermic intermetallic reactive materials experience when they are subjected to sustained shock loading. Data from highly spatially and

  4. Analyses of 1/15 scale Creare bypass transient experiments. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kmetyk, L.N.; Buxton, L.D.; Cole, R.K. Jr.

    1982-09-01

RELAP4 analyses of several 1/15 scale Creare H-series bypass transient experiments have been done to investigate the effect of using different downcomer nodalizations, physical scales, slip models, and vapor fraction donoring methods. Most of the analyses were thermal equilibrium calculations performed with RELAP4/MOD5, but a few such calculations were done with RELAP4/MOD6 and RELAP4/MOD7, which contain improved slip models. In order to estimate the importance of nonequilibrium effects, additional analyses were performed with TRAC-PD2, RELAP5 and the nonequilibrium option of RELAP4/MOD7. The purpose of these studies was to determine whether results from Westinghouse's calculation of the Creare experiments, which were done with a UHI-modified version of SATAN, were sufficient to guarantee SATAN would be conservative with respect to ECC bypass in full-scale plant analyses.

5. Midwifery students' experiences of an innovative clinical placement model embedded within midwifery continuity of care in Australia.

    PubMed

    Carter, Amanda G; Wilkes, Elizabeth; Gamble, Jenny; Sidebotham, Mary; Creedy, Debra K

    2015-08-01

Midwifery continuity of care experiences can provide high quality clinical learning for students but can be challenging to implement. The Rural and Private Midwifery Education Project (RPMEP) is a strategic government-funded initiative to (1) grow the midwifery workforce within private midwifery practice and rural midwifery, by (2) better preparing new graduates to work in private midwifery and rural continuity of care models. This study evaluated midwifery students' experience of an innovative continuity of care clinical placement model in partnership with private midwifery practice and rural midwifery group practices. A descriptive cohort design was used. All students in the RPMEP were invited to complete an online survey about their experiences of clinical placement within midwifery continuity models of care. Responses were analysed using descriptive statistics. Correlations between total scale scores were examined. Open-ended responses were analysed using content analysis. Internal reliability of the scales was assessed using Cronbach's alpha. Sixteen of 17 completed surveys were received (94% response rate). Scales included in the survey demonstrated good internal reliability. The majority of students felt inspired by caseload approaches to care, expressed overall satisfaction with the mentoring received and reported a positive learning environment at their placement site. Some students reported stress related to course expectations and demands in the clinical environment (e.g. skill acquisition and hours required for continuity of care). There were significant correlations between scales on perceptions of caseload care and learning culture (r=.87, p<.001) and assessment (r=.87, p<.001). Scores on the clinical learning environment scale were significantly correlated with perceptions of the caseload model (rho=.86, p<.001), learning culture (rho=.94, p<.001) and assessment (rho=.65, p<.01) scales. 
Embedding students within midwifery continuity of care models was perceived to be highly beneficial to learning, developed partnerships with women, and provided the clinical skills development required for registration, while promoting students' confidence and competence. The flexible academic programme enabled students to access learning at any time and prioritise continuity of care experiences. Strategies are needed to better support students in achieving a satisfactory work-life balance. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
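Cronbach's alpha, used above to assess scale reliability, can be computed directly from the item responses. The sketch below uses invented Likert-style data, not the study's survey:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses from 6 students on a 4-item scale.
responses = np.array([[4, 5, 4, 5],
                      [3, 3, 4, 3],
                      [5, 5, 5, 4],
                      [2, 3, 2, 3],
                      [4, 4, 5, 4],
                      [3, 2, 3, 3]])
print(round(cronbach_alpha(responses), 2))  # -> 0.92, "good" internal reliability
```

Values above roughly 0.7-0.8 are conventionally read as acceptable-to-good reliability, which is the sense in which the abstract reports its scales.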

  6. Late time neutrino masses, the LSND experiment, and the cosmic microwave background.

    PubMed

    Chacko, Z; Hall, Lawrence J; Oliver, Steven J; Perelstein, Maxim

    2005-03-25

    Models with low-scale breaking of global symmetries in the neutrino sector provide an alternative to the seesaw mechanism for understanding why neutrinos are light. Such models can easily incorporate light sterile neutrinos required by the Liquid Scintillator Neutrino Detector experiment. Furthermore, the constraints on the sterile neutrino properties from nucleosynthesis and large-scale structure can be removed due to the nonconventional cosmological evolution of neutrino masses and densities. We present explicit, fully realistic supersymmetric models, and discuss the characteristic signatures predicted in the angular distributions of the cosmic microwave background.

  7. A novel method of multi-scale simulation of macro-scale deformation and microstructure evolution on metal forming

    NASA Astrophysics Data System (ADS)

    Huang, Shiquan; Yi, Youping; Li, Pengchuan

    2011-05-01

In recent years, multi-scale simulation techniques for metal forming have gained significant attention for predicting the whole deformation process and the microstructure evolution of the product. Advances in macro-scale numerical simulation of metal forming are remarkable, and commercial FEM software such as Deform2D/3D has found wide application in the field. However, multi-scale simulation methods have seen little application, due to the non-linearity of microstructure evolution during forming and the difficulty of modeling at the micro-scale level. This work deals with the modeling of microstructure evolution and a new method of multi-scale simulation of the forging process. The aviation material 7050 aluminum alloy is used as an example for modeling microstructure evolution. The corresponding thermal simulation experiments were performed on a Gleeble 1500 machine. The tested specimens were analyzed to model dislocation density and the nucleation and growth of dynamic recrystallization (DRX). A program using the cellular automaton (CA) method was developed to simulate grain nucleation and growth, in which the change of grain topology caused by the metal deformation is considered. The macro-scale physical fields, such as the temperature, stress and strain fields obtained with the commercial software Deform 3D, are coupled to the micro-scale stored deformation energy through a dislocation model to realize the multi-scale simulation. The method is illustrated with a simulation of an aircraft wheel hub forging. By coupling the Deform 3D results with the CA results, the forging deformation process and the microstructure evolution at any point of the forging could be simulated. To verify the simulation, forging experiments on the aircraft wheel hub were carried out in the laboratory, and the comparison between simulation and experimental results is discussed in detail.
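The nucleation-and-growth step of such a CA simulation can be sketched in a few lines. This toy version seeds nuclei and grows grains isotropically on a periodic grid; it omits the dislocation-energy coupling and deformation-driven topology changes described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def ca_grain_growth(n=64, n_nuclei=12, steps=50):
    """Minimal cellular-automaton sketch of nucleation and grain growth.
    Cell value 0 = untransformed matrix; 1..n_nuclei = grain identities."""
    grid = np.zeros((n, n), dtype=int)
    ys, xs = rng.integers(0, n, n_nuclei), rng.integers(0, n, n_nuclei)
    grid[ys, xs] = np.arange(1, n_nuclei + 1)   # seed recrystallization nuclei
    for _ in range(steps):
        new = grid.copy()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # von Neumann cells
            shifted = np.roll(grid, (dy, dx), axis=(0, 1))
            # untransformed cells capture the grain ID of a transformed neighbour
            mask = (new == 0) & (shifted != 0)
            new[mask] = shifted[mask]
        grid = new
    return grid

grid = ca_grain_growth()
print("transformed fraction:", (grid > 0).mean())
```

In the paper's scheme, the growth and nucleation probabilities at each cell would be driven by the locally stored deformation energy from the dislocation model rather than being deterministic as here.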

  8. Explaining Cold-Pulse Dynamics in Tokamak Plasmas Using Local Turbulent Transport Models

    NASA Astrophysics Data System (ADS)

    Rodriguez-Fernandez, P.; White, A. E.; Howard, N. T.; Grierson, B. A.; Staebler, G. M.; Rice, J. E.; Yuan, X.; Cao, N. M.; Creely, A. J.; Greenwald, M. J.; Hubbard, A. E.; Hughes, J. W.; Irby, J. H.; Sciortino, F.

    2018-02-01

    A long-standing enigma in plasma transport has been resolved by modeling of cold-pulse experiments conducted on the Alcator C-Mod tokamak. Controlled edge cooling of fusion plasmas triggers core electron heating on time scales faster than an energy confinement time, which has long been interpreted as strong evidence of nonlocal transport. This Letter shows that the steady-state profiles, the cold-pulse rise time, and disappearance at higher density as measured in these experiments are successfully captured by a recent local quasilinear turbulent transport model, demonstrating that the existence of nonlocal transport phenomena is not necessary for explaining the behavior and time scales of cold-pulse experiments in tokamak plasmas.

  9. A geometric scaling model for assessing the impact of aneurysm size ratio on hemodynamic characteristics

    PubMed Central

    2014-01-01

Background Intracranial aneurysm (IA) size has been shown to affect hemodynamics and can be used to predict IA rupture risk. Although the relationship between aspect ratio and hemodynamic parameters has been investigated using real patients and virtual models, few studies have focused on longitudinal experiments of IAs based on patient-specific aneurysm models. We attempted to conduct longitudinal simulation experiments of IAs by developing a series of scaled models. Methods In this work, a novel scaling approach was proposed to create IA models with different aneurysm size ratios (ASRs), defined as IA height divided by average neck diameter, from a patient-specific aneurysm model, and the relationship between the ASR and hemodynamics was explored in a simulated longitudinal experiment. Wall shear stress (WSS), flow patterns and vessel wall displacement were computed from these models. Pearson correlation analysis was performed to elucidate the relationship between the ASR and wall shear stress. The correlation between the ASR and flow velocity was also computed and analyzed. Results The results showed a significant increase in the IA area exposed to low WSS once the ASR > 0.7, and flow became slower and blood entered the aneurysm with greater difficulty as the ASR increased. Average blood flow velocity and WSS both had strongly negative correlations with the ASR (r = −0.938 and −0.925, respectively). A narrower impingement region and a more concentrated inflow jet appeared as the ASR increased, and large local deformation at the aneurysm apex was found for ASR > 1.7 or 0.7 < ASR < 1.0. Conclusion Hemodynamic characteristics varied with the ASR. 
Moreover, the proposed IA scaling algorithm makes it possible to build a series of patient-specific scaled aneurysm models and thereby further explore the relationship between morphology and hemodynamics in a longitudinal simulation. PMID:24528952
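The Pearson analysis described above is straightforward to reproduce. The sketch below uses invented (ASR, WSS) pairs that mimic the reported negative trend, not the study's data:

```python
from scipy.stats import pearsonr

# Hypothetical (ASR, average WSS) pairs from a series of scaled IA models;
# the negative trend is chosen to mimic the reported r = -0.925.
asr = [0.5, 0.7, 0.9, 1.1, 1.3, 1.5, 1.7, 1.9]
wss = [4.1, 3.6, 2.9, 2.7, 2.1, 1.8, 1.6, 1.2]   # average WSS, Pa

r, p = pearsonr(asr, wss)
print(f"r = {r:.3f}, p = {p:.4f}")   # strongly negative correlation
```

The sign and magnitude of r (near −1) is what supports the claim that higher size ratios go with slower, lower-shear flow in the sac.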

  10. Anisotropies of the cosmic microwave background in nonstandard cold dark matter models

    NASA Technical Reports Server (NTRS)

    Vittorio, Nicola; Silk, Joseph

    1992-01-01

    Small angular scale cosmic microwave anisotropies in flat, vacuum-dominated, cold dark matter cosmological models which fit large-scale structure observations and are consistent with a high value for the Hubble constant are reexamined. New predictions for CDM models in which the large-scale power is boosted via a high baryon content and low H(0) are presented. Both classes of models are consistent with current limits: an improvement in sensitivity by a factor of about 3 for experiments which probe angular scales between 7 arcmin and 1 deg is required, in the absence of very early reionization, to test boosted CDM models for large-scale structure formation.

  11. Determining erosion relevant soil characteristics with a small-scale rainfall simulator

    NASA Astrophysics Data System (ADS)

    Schindewolf, M.; Schmidt, J.

    2009-04-01

The use of soil erosion models is of great importance in soil and water conservation. Routine application of these models at the regional scale is limited not least by their high parameter demands. Although the EROSION 3D simulation model operates with a comparably low number of parameters, some of the model input variables can only be determined by rainfall simulation experiments. The existing EROSION 3D data base was created in the mid 90s from large-scale rainfall simulation experiments on 22 x 2 m experimental plots. Up to now this data base does not cover all soil and field conditions adequately. A new campaign of experiments is therefore essential to produce additional information, especially with respect to the effects of new soil management practices (e.g. long-term conservation tillage, no tillage). The rainfall simulator used in the current campaign consists of 30 identical modules equipped with oscillating rainfall nozzles. Veejet 80/100 nozzles (Spraying Systems Co., Wheaton, IL) are used in order to ensure the best possible comparability to natural rainfall with respect to raindrop size distribution and momentum transfer. The central objectives for the small-scale rainfall simulator are efficient application and the provision of results comparable to large-scale rainfall simulation experiments. A crucial problem in using the small-scale simulator is the restriction to rather small volume rates of surface runoff. Under these conditions soil detachment is governed by raindrop impact, so the impact of surface runoff on particle detachment cannot be reproduced adequately by a small-scale rainfall simulator. 
With this problem in mind, this paper presents an enhanced small-scale simulator which allows a virtual multiplication of the plot length by feeding additional sediment-loaded water to the plot from upstream. It is thus possible to overcome the plot length limit of 3 m while reproducing nearly the same flow conditions as in rainfall experiments on standard plots. The simulator has been extensively applied to plots of different soil types, crop types and management systems. The comparison with existing data sets obtained by large-scale rainfall simulations shows that those results can adequately be reproduced by the applied combination of small-scale rainfall simulator and sediment-loaded water influx.

  12. A Two-length Scale Turbulence Model for Single-phase Multi-fluid Mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwarzkopf, J. D.; Livescu, D.; Baltzer, J. R.

    2015-09-08

A two-length scale, second moment turbulence model (Reynolds averaged Navier-Stokes, RANS) is proposed to capture a wide variety of single-phase flows, spanning from incompressible flows with single fluids and mixtures of different density fluids (variable density flows) to flows over shock waves. The two-length scale model was developed to address an inconsistency present in single-length scale models, e.g. the inability to match both variable density homogeneous Rayleigh-Taylor turbulence and Rayleigh-Taylor induced turbulence, as well as the inability to match both homogeneous shear and free shear flows. The two-length scale model focuses on separating the decay and transport length scales, as the two physical processes are generally different in inhomogeneous turbulence. This allows reasonable comparisons with statistics and spreading rates over such a wide range of turbulent flows using a common set of model coefficients. The specific canonical flows considered for calibrating the model include homogeneous shear, single-phase incompressible shear-driven turbulence, variable density homogeneous Rayleigh-Taylor turbulence, Rayleigh-Taylor induced turbulence, and shocked isotropic turbulence. The second moment model compares reasonably well with direct numerical simulations (DNS), experiments, and theory in most cases. The model was then applied to variable density shear layer and shock tube data and is in reasonable agreement with DNS and experiments. Additionally, the importance of using DNS to calibrate and assess RANS-type turbulence models is highlighted.

  13. A multi-scale cardiovascular system model can account for the load-dependence of the end-systolic pressure-volume relationship

    PubMed Central

    2013-01-01

Background The end-systolic pressure-volume relationship is often considered a load-independent property of the heart and, for this reason, is widely used as an index of ventricular contractility. However, many criticisms have been expressed against this index and the underlying time-varying elastance theory: first, it does not consider the phenomena underlying contraction, and second, the end-systolic pressure-volume relationship has been experimentally shown to be load-dependent. Methods In place of the time-varying elastance theory, a microscopic model of sarcomere contraction is used to infer the pressure generated by the contraction of the left ventricle, considered as a spherical assembly of sarcomere units. The left ventricle model is inserted into a closed-loop model of the cardiovascular system. Finally, parameters of the modified cardiovascular system model are identified to reproduce the hemodynamics of a normal dog. Results Experiments that have proven the limitations of the time-varying elastance theory are reproduced with our model: (1) preload reductions, (2) afterload increases, (3) the same experiments with increased ventricular contractility, (4) isovolumic contractions and (5) flow-clamps. All experiments simulated with the model generate different end-systolic pressure-volume relationships, showing that this relationship is actually load-dependent. Furthermore, we show that the results of our simulations are in good agreement with experiments. Conclusions We implemented a multi-scale model of the cardiovascular system, in which ventricular contraction is described by a detailed sarcomere model. Using this model, we successfully reproduced a number of experiments that have shown the failing points of the time-varying elastance theory. In particular, the developed multi-scale model of the cardiovascular system can capture the load-dependence of the end-systolic pressure-volume relationship. PMID:23363818
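For context, the time-varying elastance model that the authors test can be written P(t) = E(t)(V(t) − V0), with E(t) rising from a diastolic minimum to Emax. At end systole it predicts that all (V_es, P_es) points fall on a single line of slope Emax regardless of load, which is exactly the load-independence the multi-scale model shows to be only approximate. A sketch with illustrative parameter values:

```python
# Classical time-varying elastance model: P(t) = E(t) * (V(t) - V0).
# Parameter values below are illustrative, not fitted to the study's dog data.
Emax, V0 = 3.0, 10.0   # end-systolic elastance (mmHg/ml), unstressed volume (ml)

def es_pressure(V_es):
    """End-systolic pressure predicted by the elastance model."""
    return Emax * (V_es - V0)

# Preload reduction: end-systolic volumes under three different loads.
# Under the elastance theory all three points lie on one slope-Emax line.
for V_es in (40.0, 32.0, 25.0):
    print(V_es, es_pressure(V_es))
```

The paper's point is that real (and sarcomere-model) end-systolic points drift off this single line as loading changes, so Emax is not a purely load-independent contractility index.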

  14. Modeling the effects of small turbulent scales on the drag force for particles below and above the Kolmogorov scale

    NASA Astrophysics Data System (ADS)

    Gorokhovski, Mikhael; Zamansky, Rémi

    2018-03-01

Consistently with observations from recent experiments and DNS, we focus on the effects of strong velocity increments at small spatial scales for the simulation of the drag force on particles in high Reynolds number flows. In this paper, we decompose the instantaneous particle acceleration into its systematic and residual parts. The first part is given by the steady drag force obtained from the large-scale energy-containing motions, explicitly resolved by the simulation, while the second denotes the random contribution due to small unresolved turbulent scales. This is in contrast with standard drag models, in which the turbulent microstructures advected by the large-scale eddies are deemed to be filtered by the particle inertia. In our approach, the residual term is introduced as the particle acceleration conditionally averaged on the instantaneous dissipation rate along the particle path. The latter is modeled as a log-normal stochastic process with locally defined parameters obtained from the resolved field. The residual term is supplemented by an orientation model, given by a random walk on the unit sphere. We propose specific models for particles with diameters smaller and larger than the Kolmogorov scale. For the small particles, the model is assessed by comparison with direct numerical simulation (DNS). The results show that with this modeling, the particle acceleration statistics from DNS are predicted fairly well, in contrast with the standard LES approach. For particles bigger than the Kolmogorov scale, we propose a fluctuating particle response time, based on an eddy viscosity estimated at the particle scale. This model gives stretched tails of the particle acceleration distribution and a variance whose dependence is consistent with experiments.
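A minimal sketch of the residual-acceleration idea: draw the dissipation rate seen along the particle path from a log-normal law, set the acceleration magnitude by Kolmogorov scaling a ~ eps^(3/4) nu^(-1/4), and pick a random orientation on the unit sphere (standing in for the random walk). The log-normal parameters below are invented; in the actual model they come from the resolved field:

```python
import numpy as np

rng = np.random.default_rng(1)

nu = 1.5e-5                      # kinematic viscosity of air, m^2/s
mean_eps, sigma_log = 0.1, 1.0   # assumed mean dissipation (m^2/s^3), log-spread

# Choose mu of the underlying normal so that E[eps] = mean_eps.
mu_log = np.log(mean_eps) - 0.5 * sigma_log**2
eps = rng.lognormal(mu_log, sigma_log, size=100_000)

# Kolmogorov acceleration scale conditioned on the sampled dissipation.
a_mag = eps**0.75 * nu**-0.25

# Random orientation on the unit sphere (isotropic residual direction).
v = rng.normal(size=(eps.size, 3))
unit = v / np.linalg.norm(v, axis=1, keepdims=True)
a_residual = a_mag[:, None] * unit

print("mean |a|:", a_mag.mean())   # heavy-tailed: mean well above the mode
```

The log-normal intermittency of eps is what produces the stretched tails in the modeled acceleration distribution that the standard filtered-drag approach misses.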

  15. Transverse-velocity scaling of femtoscopy in \\sqrt{s}=7\\,{TeV} proton–proton collisions

    NASA Astrophysics Data System (ADS)

    Humanic, T. J.

    2018-05-01

    Although transverse-mass scaling of femtoscopic radii is found to hold to a good approximation in heavy-ion collision experiments, it is seen to fail for high-energy proton–proton collisions. It is shown that if invariant radius parameters are plotted versus the transverse velocity instead, scaling with the transverse velocity is seen in \\sqrt{s}=7 TeV proton–proton experiments. A simple semi-classical model is shown to qualitatively reproduce this transverse velocity scaling.
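The transverse velocity used for this scaling is beta_T = p_T/m_T with m_T = sqrt(m^2 + p_T^2). A small helper shows how particles of the same p_T but different mass map to different beta_T (masses in GeV/c^2, natural units):

```python
import math

def transverse_velocity(pT, m):
    """beta_T = pT / mT with mT = sqrt(m^2 + pT^2) (natural units, GeV)."""
    mT = math.sqrt(m**2 + pT**2)
    return pT / mT

# Same pT, different species: pion vs kaon at pT = 0.5 GeV/c.
print(transverse_velocity(0.5, 0.1396))  # pion  -> ~0.963
print(transverse_velocity(0.5, 0.4937))  # kaon  -> ~0.712
```

Plotting invariant radii against beta_T rather than m_T is what collapses the different particle species onto one curve in the proton-proton data.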

  16. Validity of thermally-driven small-scale ventilated filling box models

    NASA Astrophysics Data System (ADS)

    Partridge, Jamie L.; Linden, P. F.

    2013-11-01

The majority of previous work studying building ventilation flows at laboratory scale has used saline plumes in water. The production of buoyancy forces using salinity variations in water allows dynamic similarity between the small-scale models and the full-scale flows. However, in some situations, such as when including the effects of non-adiabatic boundaries, the use of a thermal plume is desirable. The efficacy of using temperature differences to produce buoyancy-driven flows representing the natural ventilation of a building in a small-scale model is examined here, with comparison between previous theoretical work and new, heat-based experiments.

  17. Flexible twist for pitch control in a high altitude long endurance aircraft with nonlinear response

    NASA Astrophysics Data System (ADS)

    Bond, Vanessa L.

Information dominance is the key motivator for employing high-altitude long-endurance (HALE) aircraft to provide continuous coverage in theaters of operation. A joined-wing configuration of such a craft gives the advantage of a platform for higher-resolution sensors. Design challenges emerge from the structural flexibility inherent in a long-endurance aircraft design. The goal of this research was to demonstrate that scaling the nonlinear response of a full-scale finite element model was possible if the model was aeroelastically and "nonlinearly" scaled. The research within this dissertation showed that using the first three modes and the first buckling modes was not sufficient for proper scaling. In addition to analytical scaling, several experiments were accomplished to understand and overcome design challenges of HALE aircraft. One such challenge is addressed by eliminating pitch control surfaces and replacing them with an aft-wing twist concept. This design option was physically realized through wind tunnel measurement of forces, moments and pressures on a subscale experimental model, demonstrating that pitch control with aft-wing twist is feasible. Another challenge is predicting the nonlinear response of long-endurance aircraft. This was addressed by experimental validation of modeling nonlinear response on a subscale experimental model. It is important to be able to scale nonlinear behavior in this type of craft due to its highly flexible nature. The validation accomplished during this experiment on a subscale model will reduce technical risk for full-scale development of such pioneering craft. It is also important to experimentally reproduce the air loads following the wing as it deforms. Nonlinearities can be attributed to these follower forces, which might otherwise be overlooked. This was found to be a significant influence in HALE aircraft, as shown in the case study of the FEM and experimental models herein.

  18. Upscaling of U(VI) Desorption and Transport from Decimeter-Scale Heterogeneity to Plume-Scale Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, Gary P.; Kohler, Matthias; Kannappan, Ramakrishnan

    2015-02-24

Scientifically defensible predictions of field-scale U(VI) transport in groundwater require an understanding of key processes at multiple scales. These scales range from smaller than the sediment grain scale (less than 10 μm) to as large as the field scale, which can extend over several kilometers. The key processes that need to be considered include both geochemical reactions in solution and at sediment surfaces, as well as physical transport processes including advection, dispersion, and pore-scale diffusion. The research summarized in this report includes both experimental and modeling results from batch, column and tracer tests. The objectives of this research were to: (1) quantify the rates of U(VI) desorption from sediments acquired from a uranium-contaminated aquifer in batch experiments; (2) quantify rates of U(VI) desorption in column experiments with variable chemical conditions; and (3) quantify nonreactive tracer and U(VI) transport in field tests.

  19. Evaluating the Performance of the Goddard Multi-Scale Modeling Framework against GPM, TRMM and CloudSat/CALIPSO Products

    NASA Astrophysics Data System (ADS)

    Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.

    2014-12-01

Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently, a new Goddard one-moment bulk microphysics scheme with four ice classes (cloud ice, snow, graupel, and frozen drops/hail) was developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme has been successfully implemented in the MMF, and two MMF experiments were carried out with the new scheme and the old three-ice-class (cloud ice, snow, graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show that the MMF with the new scheme outperformed the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments, with and without nudging large-scale forcings to those of the ERA-Interim reanalysis, were carried out to study the impacts of large-scale forcings. The simulated mean and variability of surface precipitation, cloud types, and cloud properties (such as cloud amount, hydrometeor vertical profiles, and cloud water contents) in different geographic locations and climate regimes are evaluated against GPM, TRMM, and CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and/or deficiencies of the MMF simulations and provide guidance on how to improve the MMF and its microphysics.

  20. Applications of computational models to better understand microvascular remodelling: a focus on biomechanical integration across scales

    PubMed Central

    Murfee, Walter L.; Sweat, Richard S.; Tsubota, Ken-ichi; Gabhann, Feilim Mac; Khismatullin, Damir; Peirce, Shayn M.

    2015-01-01

    Microvascular network remodelling is a common denominator for multiple pathologies and involves both angiogenesis, defined as the sprouting of new capillaries, and network patterning associated with the organization and connectivity of existing vessels. Much of what we know about microvascular remodelling at the network, cellular and molecular scales has been derived from reductionist biological experiments, yet what happens when the experiments provide incomplete (or only qualitative) information? This review will emphasize the value of applying computational approaches to advance our understanding of the underlying mechanisms and effects of microvascular remodelling. Examples of individual computational models applied to each of the scales will highlight the potential of answering specific questions that cannot be answered using typical biological experimentation alone. Looking into the future, we will also identify the needs and challenges associated with integrating computational models across scales. PMID:25844149

  1. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William

    2017-05-01

A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
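The final confidence statement can be illustrated with a toy Monte Carlo: propagate a calibrated parameter distribution through a surrogate efficiency model and find the smallest operating point at which 90% capture is achieved in at least 95% of the ensemble. Everything below (the first-order capture model, the parameter values) is invented for illustration, not the paper's multiphase model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "calibrated posterior" for a rate constant k.
k_samples = rng.normal(loc=1.2, scale=0.15, size=10_000)

def efficiency(x, k):
    """Toy first-order capture model: efficiency rises with operating point x."""
    return 1.0 - np.exp(-k * x)

# Smallest x such that P(efficiency >= 0.90) >= 0.95 across the ensemble.
for x in np.arange(1.0, 4.0, 0.05):
    prob = (efficiency(x, k_samples) >= 0.90).mean()
    if prob >= 0.95:
        print(f"x = {x:.2f}, P(eff >= 90%) = {prob:.3f}")
        break
```

The same pattern, with the ensemble of upscaled 1-MW simulations in place of the toy surrogate, yields the "90% capture with 95% confidence" design statement.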

  2. Chemistry Resolved Kinetic Flow Modeling of TATB Based Explosives

    NASA Astrophysics Data System (ADS)

    Vitello, Peter; Fried, Lawrence; Howard, Mike; Levesque, George; Souers, Clark

    2011-06-01

Detonation waves in insensitive, TATB-based explosives are believed to have multiple time-scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, a significant late-time slow release of energy is believed to occur due to diffusion-limited growth of carbon. On intermediate time scales, concentrations of product species likely change from being in equilibrium to being kinetic-rate controlled. We use the thermo-chemical code CHEETAH linked to ALE hydrodynamics codes to model detonations. We term our model chemistry-resolved kinetic flow, as CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on those concentrations. A validation suite of model simulations, compared to recent high-fidelity metal-push experiments at ambient and cold temperatures, has been developed. We present here a study of multi-time-scale kinetic rate effects for these experiments. Prepared by LLNL under Contract DE-AC52-07NA27344.

  3. Modeling Climate Responses to Spectral Solar Forcing on Centennial and Decadal Time Scales

    NASA Technical Reports Server (NTRS)

    Wen, G.; Cahalan, R.; Rind, D.; Jonas, J.; Pilewskie, P.; Harder, J.

    2012-01-01

We report a series of experiments to explore climate responses to two types of solar spectral forcing on decadal and centennial time scales: one based on prior reconstructions, and another implied by recent observations from the SORCE (Solar Radiation and Climate Experiment) SIM (Spectral Irradiance Monitor). We apply these forcings to the Goddard Institute for Space Studies (GISS) Global/Middle Atmosphere Model (GCMAM), which couples the atmosphere with the ocean and has a model top near the mesopause, allowing us to examine the full response to the two solar forcing scenarios. We show different climate responses to the two solar forcing scenarios on decadal time scales, as well as trends on centennial time scales. Differences between solar maximum and solar minimum conditions are highlighted, including impacts of the time-lagged response of the lower atmosphere and ocean. This contrasts with studies that assume separate equilibrium conditions at solar maximum and minimum. We discuss model feedback mechanisms involved in the solar-forced climate variations.

  4. Explaining Cold-Pulse Dynamics in Tokamak Plasmas Using Local Turbulent Transport Models

    DOE PAGES

    Rodriguez-Fernandez, P.; White, A. E.; Howard, N. T.; ...

    2018-02-16

A long-standing enigma in plasma transport has been resolved by modeling of cold-pulse experiments conducted on the Alcator C-Mod tokamak. Controlled edge cooling of fusion plasmas triggers core electron heating on time scales faster than an energy confinement time, which has long been interpreted as strong evidence of nonlocal transport. Here, this Letter shows that the steady-state profiles, the cold-pulse rise time, and disappearance at higher density as measured in these experiments are successfully captured by a recent local quasilinear turbulent transport model, demonstrating that the existence of nonlocal transport phenomena is not necessary for explaining the behavior and time scales of cold-pulse experiments in tokamak plasmas.

  6. Modeling Relevant to Safe Operations of U.S. Navy Vessels in Arctic Conditions: Physical Modeling of Ice Loads

    DTIC Science & Technology

    2016-06-01

zones with ice concentrations up to 40%. To achieve this goal, the Navy must determine safe operational speeds as a function of ice concentration... and full-scale experience with ice-capable hull forms that have shallow entry angles to promote flexural ice failure preferentially over crushing... plan view) of the proposed large-scale ice–hull impact experiment to be conducted in CRREL's refrigerated towing basin. Shown here is a side-panel

  7. Analytical solution for reactive solute transport considering incomplete mixing within a reference elementary volume

    NASA Astrophysics Data System (ADS)

    Chiogna, Gabriele; Bellin, Alberto

    2013-05-01

The laboratory experiments of Gramling et al. (2002) showed that incomplete mixing at the pore scale exerts a significant impact on transport of reactive solutes and that assuming complete mixing leads to overestimation of product concentration in bimolecular reactions. Subsequently, several attempts have been made to model this experiment, either by considering spatial segregation of the reactants, by applying non-Fickian transport through a Continuous Time Random Walk (CTRW), or by using an effective upscaled time-dependent kinetic reaction term. Previous analyses of these experimental results showed that, at the Darcy scale, conservative solute transport is well described by a standard advection-dispersion equation, which assumes complete mixing at the pore scale. However, reactive transport is significantly affected by incomplete mixing at smaller scales, i.e., within a reference elementary volume (REV). We consider here the family of equilibrium reactions for which the concentrations of the reactants and the product can be expressed as functions of the mixing ratio, the concentration of a fictitious nonreactive solute. For this type of reaction we propose, in agreement with previous studies, to model the effect of incomplete mixing at scales smaller than the Darcy scale by assuming that the mixing ratio is distributed within an REV according to a Beta distribution. We compute the parameters of the Beta model by imposing that the mean concentration equals the value that the concentration assumes at the continuum Darcy scale, while the variance decays with time as a power law. We show that our model reproduces the concentration profiles of the reaction product measured in the Gramling et al. (2002) experiments using the transport parameters obtained from conservative experiments and instantaneous reaction kinetics. The results are obtained by applying analytical solutions for both conservative and reactive solute transport, thereby providing a method to handle the effect of incomplete mixing on multispecies reactive solute transport that is simpler than previously developed methods.
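The Beta-distribution closure described above can be sketched numerically: fit Beta parameters to the REV-scale mean and variance of the mixing ratio by the method of moments, then average the local product concentration over that distribution. The method-of-moments relations are standard; the specific mean and variance values below are illustrative, not fitted to the Gramling et al. data.

```python
import numpy as np

rng = np.random.default_rng(1)

def beta_params(mean, var):
    # Method-of-moments Beta(a, b) from the REV-scale mean and variance
    # of the mixing ratio; requires var < mean * (1 - mean).
    nu = mean * (1 - mean) / var - 1
    return mean * nu, (1 - mean) * nu

m, v = 0.5, 0.04          # illustrative mean and variance of the mixing ratio
a, b = beta_params(m, v)
x = rng.beta(a, b, size=100_000)

# For an instantaneous bimolecular reaction A + B -> C, the local product
# concentration scales with min(X, 1 - X) of the mixing ratio X.
c_incomplete = np.mean(np.minimum(x, 1 - x))
c_complete = min(m, 1 - m)   # complete-mixing (Darcy-scale) estimate

print(c_incomplete < c_complete)   # incomplete mixing yields less product
```

Letting the variance v decay with time as a power law, as the paper proposes, makes the Beta distribution collapse toward its mean and the incomplete-mixing estimate approach the complete-mixing one.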

  8. Water related triggering mechanisms of shallow landslides: Numerical modelling of hydraulic flows in slopes verified with field experiments

    NASA Astrophysics Data System (ADS)

    Broennimann, C.; Tacher, L.

    2009-04-01

To assess hill slope stability and landslide triggering mechanisms, it is essential to understand the hydrogeological regime in slopes. In this work, finite element models are elaborated and field experiments are carried out to study shallow landslides with thicknesses of a few metres. The basic hypothesis of the research is that even for shallow landslides the hydrogeological role of the substratum, mostly bedrock, is decisive for the slope's behaviour, whether it drains or feeds the overlying unstable mass. The investigated area of about 1 square kilometre is situated next to the villages of Buchberg and Rüdlingen (canton Schaffhausen, Switzerland) on the banks of the river Rhine. The lithology in this region is characterized mainly by horizontally layered sandstones intersected by marls from the upper marine and the lower freshwater molasse, overlain by soil and weathered bedrock of about 1 to 4 m thickness, both classified as silty sands. With slope inclinations locally up to 40°, the area is rather steep and characterized by continuous regressive erosion processes. During heavy rainfall events, such as the one of May 2002, shallow landslides occurred in the area, affecting afforested as well as woodless areas. Geological field observations, infiltration tests and tracer tests show a fairly complicated hydrogeological character of the region. Along the slope, in the first few metres of depth, no groundwater table was found. However, seasonally controlled springs can be observed between outcrops of bedrock. Within the sandstone, decametre-scale vertical faults oriented parallel to the Rhine, which most likely opened during decompression as the river cut down, locally affect the hydrogeological regime by draining the slope. This implies a high degree of heterogeneity in the water flows at the local scale.
Based on these conceptual hydrological and geological models, a numerical flow model was built using finite element software. Different scenarios of groundwater flow patterns and hydraulic head distributions in the saturated and unsaturated zones were modelled under transient hydraulic conditions. The hydraulic pressure boundary conditions can then be introduced into a geomechanical model in order to evaluate mass movements and to estimate slope stability. In a next step, a 10 x 30 m test site within the study area was chosen to investigate the slope's behaviour during a triggering field experiment carried out in October 2008. With the aim of provoking a shallow landslide, the test site, with a mean inclination of 35°, was intensely irrigated with sprinklers for 5 days (20-30 mm/h). Transient soil parameters such as suction, pore water pressure and saturation at different depths, water infiltration rate, groundwater table and soil movements at the mm scale were measured. During this first field experiment, the slope remained stable. At this stage, the results of the experiment and the models suggest that: (1) at the experiment scale, heavy rainfall is not sufficient to trigger a mass movement unless the hydrogeological conditions inside the substratum (bedrock) are also in a critical state; during the experiment, the bedrock was not saturated and played a draining role; (2) the behaviour of the local area, at the experiment scale, must be modelled within a regional scale (e.g. kilometric) to account for the role of hydraulic pressures inside the bedrock. The results obtained from the experiment will be used to refine the numerical models and to design future field experiments.

  9. Recovery based on plot experiments is a poor predictor of landscape-level population impacts of agricultural pesticides.

    PubMed

    Topping, Christopher John; Kjaer, Lene Jung; Hommen, Udo; Høye, Toke Thomas; Preuss, Thomas G; Sibly, Richard M; van Vliet, Peter

    2014-07-01

    Current European Union regulatory risk assessment allows application of pesticides provided that recovery of nontarget arthropods in-crop occurs within a year. Despite the long-established theory of source-sink dynamics, risk assessment ignores depletion of surrounding populations and typical field trials are restricted to plot-scale experiments. In the present study, the authors used agent-based modeling of 2 contrasting invertebrates, a spider and a beetle, to assess how the area of pesticide application and environmental half-life affect the assessment of recovery at the plot scale and impact the population at the landscape scale. Small-scale plot experiments were simulated for pesticides with different application rates and environmental half-lives. The same pesticides were then evaluated at the landscape scale (10 km × 10 km) assuming continuous year-on-year usage. The authors' results show that recovery time estimated from plot experiments is a poor indicator of long-term population impact at the landscape level and that the spatial scale of pesticide application strongly determines population-level impact. This raises serious doubts as to the utility of plot-recovery experiments in pesticide regulatory risk assessment for population-level protection. Predictions from the model are supported by empirical evidence from a series of studies carried out in the decade starting in 1988. The issues raised then can now be addressed using simulation. Prediction of impacts at landscape scales should be more widely used in assessing the risks posed by environmental stressors. © 2014 SETAC.

  10. Scaling of hydrologic and erosion parameters derived from rainfall simulation

    NASA Astrophysics Data System (ADS)

    Sheridan, Gary; Lane, Patrick; Noske, Philip; Sherwin, Christopher

    2010-05-01

    Rainfall simulation experiments conducted at the temporal scale of minutes and the spatial scale of meters are often used to derive parameters for erosion and water quality models that operate at much larger temporal and spatial scales. While such parameterization is convenient, there has been little effort to validate this approach via nested experiments across these scales. In this paper we first review the literature relevant to some of these long acknowledged issues. We then present rainfall simulation and erosion plot data from a range of sources, including mining, roading, and forestry, to explore the issues associated with the scaling of parameters such as infiltration properties and erodibility coefficients.

  11. Generalized theory of semiflexible polymers.

    PubMed

    Wiggins, Paul A; Nelson, Philip C

    2006-03-01

    DNA bending on length scales shorter than a persistence length plays an integral role in the translation of genetic information from DNA to cellular function. Quantitative experimental studies of these biological systems have led to a renewed interest in the polymer mechanics relevant for describing the conformational free energy of DNA bending induced by protein-DNA complexes. Recent experimental results from DNA cyclization studies have cast doubt on the applicability of the canonical semiflexible polymer theory, the wormlike chain (WLC) model, to DNA bending on biologically relevant length scales. This paper develops a theory of the chain statistics of a class of generalized semiflexible polymer models. Our focus is on the theoretical development of these models and the calculation of experimental observables. To illustrate our methods, we focus on a specific, illustrative model of DNA bending. We show that the WLC model generically describes the long-length-scale chain statistics of semiflexible polymers, as predicted by renormalization group arguments. In particular, we show that either the WLC or our present model adequately describes force-extension, solution scattering, and long-contour-length cyclization experiments, regardless of the details of DNA bend elasticity. In contrast, experiments sensitive to short-length-scale chain behavior can in principle reveal dramatic departures from the linear elastic behavior assumed in the WLC model. We demonstrate this explicitly by showing that our toy model can reproduce the anomalously large short-contour-length cyclization factors recently measured by Cloutier and Widom. Finally, we discuss the applicability of these models to DNA chain statistics in the context of future experiments.
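As background for the discussion above, the canonical WLC model can be summarized by its bending energy and the resulting tangent-tangent correlation (standard textbook results, not this paper's generalized model):

```latex
E_{\mathrm{WLC}} \;=\; \frac{k_B T\,\ell_p}{2}\int_0^L \left|\frac{\partial \hat{t}}{\partial s}\right|^2 \mathrm{d}s,
\qquad
\bigl\langle \hat{t}(s)\cdot\hat{t}(0)\bigr\rangle \;=\; e^{-s/\ell_p},
```

where \(\hat{t}(s)\) is the unit tangent at arclength \(s\) and \(\ell_p\) is the persistence length (about 50 nm for DNA under physiological conditions). The generalized models discussed in the paper replace the quadratic (linear-elastic) bending energy with a nonlinear function of local curvature, which changes short-length-scale statistics while leaving the long-length-scale exponential decay, and hence WLC-like behavior, intact.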

  12. Large-scale experimental technology with remote sensing in land surface hydrology and meteorology

    NASA Technical Reports Server (NTRS)

    Brutsaert, Wilfried; Schmugge, Thomas J.; Sellers, Piers J.; Hall, Forrest G.

    1988-01-01

    Two field experiments to study atmospheric and land surface processes and their interactions are summarized. The Hydrologic-Atmospheric Pilot Experiment, which tested techniques for measuring evaporation, soil moisture storage, and runoff at scales of about 100 km, was conducted over a 100 X 100 km area in France from mid-1985 to early 1987. The first International Satellite Land Surface Climatology Program field experiment was conducted in 1987 to develop and use relationships between current satellite measurements and hydrologic, climatic, and biophysical variables at the earth's surface and to validate these relationships with ground truth. This experiment also validated surface parameterization methods for simulation models that describe surface processes from the scale of vegetation leaves up to scales appropriate to satellite remote sensing.

  13. Scaling an in situ network for high resolution modeling during SMAPVEX15

    USDA-ARS?s Scientific Manuscript database

    Among the greatest challenges within the field of soil moisture estimation is that of scaling sparse point measurements within a network to produce higher resolution map products. Large-scale field experiments present an ideal opportunity to develop methodologies for this scaling, by coupling in si...

  14. A review of analogue modelling of geodynamic processes: Approaches, scaling, materials and quantification, with an application to subduction experiments

    NASA Astrophysics Data System (ADS)

    Schellart, Wouter P.; Strak, Vincent

    2016-10-01

    We present a review of the analogue modelling method, which has been used for 200 years, and continues to be used, to investigate geological phenomena and geodynamic processes. We particularly focus on the following four components: (1) the different fundamental modelling approaches that exist in analogue modelling; (2) the scaling theory and scaling of topography; (3) the different materials and rheologies that are used to simulate the complex behaviour of rocks; and (4) a range of recording techniques that are used for qualitative and quantitative analyses and interpretations of analogue models. Furthermore, we apply these four components to laboratory-based subduction models and describe some of the issues at hand with modelling such systems. Over the last 200 years, a wide variety of analogue materials have been used with different rheologies, including viscous materials (e.g. syrups, silicones, water), brittle materials (e.g. granular materials such as sand, microspheres and sugar), plastic materials (e.g. plasticine), visco-plastic materials (e.g. paraffin, waxes, petrolatum) and visco-elasto-plastic materials (e.g. hydrocarbon compounds and gelatins). These materials have been used in many different set-ups to study processes from the microscale, such as porphyroclast rotation, to the mantle scale, such as subduction and mantle convection. Despite the wide variety of modelling materials and great diversity in model set-ups and processes investigated, all laboratory experiments can be classified into one of three different categories based on three fundamental modelling approaches that have been used in analogue modelling: (1) The external approach, (2) the combined (external + internal) approach, and (3) the internal approach. 
In the external approach and the combined approach, energy is added to the experimental system through the external application of a velocity, a temperature gradient or a material influx (or a combination thereof), and so the system is open. In the external approach, all deformation in the system is driven by the externally imposed condition, while in the combined approach, part of the deformation is driven by buoyancy forces internal to the system. In the internal approach, all deformation is driven by buoyancy forces internal to the system, and so the system is closed and no energy is added during an experimental run. In the combined approach, the externally imposed force or added energy is generally neither quantified nor compared to the internal buoyancy force or potential energy of the system, and so it is not known whether these experiments are properly scaled with respect to nature. The scaling theory requires that analogue models be geometrically, kinematically and dynamically similar to the natural prototype. Direct scaling of topography in laboratory models indicates that it is often significantly exaggerated. This can be ascribed to (1) the lack of isostatic compensation, which causes topography to be too high; (2) the lack of erosion, which likewise causes topography to be too high; (3) the incorrect scaling of topography when density contrasts are scaled rather than densities: in isostatically supported models, scaling of density contrasts requires adjusting the scaled topography by a topographic correction factor; and (4) the incorrect scaling of externally imposed boundary conditions in isostatically supported experiments using the combined approach: when externally imposed forces are too high, they create topography that is too high.
Other processes that also affect surface topography in laboratory models but not in nature (or only in a negligible way) include surface tension (for models using fluids) and shear zone dilatation (for models using granular material), but these will generally only affect the model surface topography on relatively short horizontal length scales of the order of several mm across material boundaries and shear zones, respectively.
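The dynamic-similarity requirement mentioned above implies simple ratio relations, e.g. stresses scale as the product of the density, gravity and length ratios, and (for viscous materials) time scales as the viscosity ratio divided by the stress ratio. The back-of-the-envelope calculation below uses hypothetical but typical numbers for a silicone-based, normal-gravity subduction model; none of the values are taken from the review.

```python
# Hypothetical subduction-experiment numbers, chosen only to illustrate
# the similarity relations (model value / prototype value).
L_star = 0.01 / 100e3      # 1 cm in the lab represents 100 km in nature
rho_star = 1500 / 3300     # silicone putty vs. upper-mantle density
g_star = 1.0               # normal-gravity experiment
eta_star = 1e4 / 1e21      # silicone vs. mantle viscosity

# Dynamic similarity: stresses scale as sigma* = rho* g* L*,
# and for viscous flow time scales as t* = eta* / sigma*.
sigma_star = rho_star * g_star * L_star
t_star = eta_star / sigma_star

myr = 3.156e13             # seconds in one million years
print(f"1 Myr in nature ~ {myr * t_star / 3600:.1f} h in the lab")
```

With these (illustrative) ratios, a million years of subduction compresses into a couple of laboratory hours, which is why such experiments are feasible at all.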

  15. Numerical experiments with a general circulation model concerning the distribution of ozone in the stratosphere

    NASA Technical Reports Server (NTRS)

    Kurzeja, R. J.; Haggard, K. V.; Grose, W. L.

    1984-01-01

    The distribution of ozone below 60 km altitude has been simulated in two experiments employing a nine-layer quasi-geostrophic spectral model and linear parameterization of ozone photochemistry, the first of which included thermal and orographic forcing of the planetary scale waves, while the second omitted it. The first experiment exhibited a high latitude winter ozone buildup which was due to a Brewer-Dodson circulation forced by large amplitude (planetary scale) waves in the winter lower stratosphere. Photochemistry was also found to be important down to lower altitudes (20 km) in the summer stratosphere than had previously been supposed.

  16. ISMIP6 - initMIP: Greenland ice sheet model initialisation experiments

    NASA Astrophysics Data System (ADS)

    Goelzer, Heiko; Nowicki, Sophie; Payne, Tony; Larour, Eric; Abe Ouchi, Ayako; Gregory, Jonathan; Lipscomb, William; Seroussi, Helene; Shepherd, Andrew; Edwards, Tamsin

    2016-04-01

Earlier large-scale Greenland ice sheet sea-level projections, e.g. those run during the ice2sea and SeaRISE initiatives, have shown that ice sheet initialisation can have a large effect on the projections and gives rise to important uncertainties. This intercomparison exercise (initMIP) aims at comparing, evaluating and improving the initialisation techniques used in the ice sheet modelling community and at estimating the associated uncertainties. It is the first in a series of ice sheet model intercomparison activities within ISMIP6 (Ice Sheet Model Intercomparison Project for CMIP6). The experiments are conceived for the large-scale Greenland ice sheet and are designed to allow intercomparison between participating models of (1) the initial present-day state of the ice sheet and (2) the response in two schematic forward experiments. The latter experiments serve to evaluate the initialisation in terms of model drift (a forward run without any forcing) and the response to a large perturbation (a prescribed surface mass balance anomaly). We present and discuss first results of the intercomparison and highlight important uncertainties with respect to projections of the Greenland ice sheet's sea-level contribution.

  17. Impact of spectral nudging on the downscaling of tropical cyclones in regional climate simulations

    NASA Astrophysics Data System (ADS)

    Choi, Suk-Jin; Lee, Dong-Kyou

    2016-06-01

This study investigated simulations of three months of seasonal tropical cyclone (TC) activity over the western North Pacific using the Advanced Research WRF Model. In the control experiment (CTL), the TC frequency was considerably overestimated. Additionally, the tracks of some TCs tended to have larger radii of curvature and were shifted eastward. The large-scale environments of westerly monsoon flows and subtropical Pacific highs were unrealistically simulated. The overestimated frequency of TC formation was attributed to a strengthened westerly wind field in the southern quadrants of the TC center. Comparison with an experiment using the spectral nudging method showed that the strengthened wind speed was mainly modulated by large-scale flow on scales greater than approximately 1000 km in the model domain. The spurious formation and unrealistic tracks of TCs in the CTL were considerably improved by reproducing realistic large-scale atmospheric monsoon circulation, with substantial adjustment between the large-scale flow in the model domain and the large-scale boundary forcing imposed by the spectral nudging method. The realistic monsoon circulation played a vital role in simulating realistic TCs. This reveals that, when downscaling large-scale fields for regional climate simulations, scale interaction between model-generated regional features and the forcing large-scale fields should be considered, and spectral nudging is a desirable technique for this purpose.
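Spectral nudging, as used above, relaxes only the long wavelengths of the regional solution toward the driving large-scale fields. A schematic form of the nudged tendency equation (generic notation, not the exact WRF implementation) is:

```latex
\frac{\partial X}{\partial t} \;=\; M(X)\;-\;\sum_{|k|\le k_c}\;\sum_{|l|\le l_c}\eta_{k,l}\,\bigl(\hat{X}_{k,l}-\hat{X}^{\mathrm{drv}}_{k,l}\bigr)\,e^{i(kx+ly)},
```

where \(M(X)\) is the model tendency, \(\hat{X}_{k,l}\) are spectral coefficients of the regional solution, \(\hat{X}^{\mathrm{drv}}_{k,l}\) those of the driving fields, \(\eta_{k,l}\) a relaxation coefficient, and the cutoffs \(k_c, l_c\) restrict the nudging to wavelengths of roughly 1000 km and longer, leaving smaller scales free to develop.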

  18. Virtual experiments: a new approach for improving process conceptualization in hillslope hydrology

    NASA Astrophysics Data System (ADS)

    Weiler, Markus; McDonnell, Jeff

    2004-01-01

We present an approach for process conceptualization in hillslope hydrology. We develop and implement a series of virtual experiments, whereby the interaction between water flow pathways, sources and mixing at the hillslope scale is examined within a virtual experiment framework. We define these virtual experiments as 'numerical experiments with a model driven by collective field intelligence'. The virtual experiments explore the first-order controls in hillslope hydrology, where the experimentalist and modeler work together to cooperatively develop and analyze the results. Our hillslope model for the virtual experiments (HillVi) in this paper is based on conceptualizing the water balance within the saturated and unsaturated zones in relation to soil physical properties in a spatially explicit manner at the hillslope scale. We argue that a virtual experiment model needs to be able to capture all major controls on subsurface flow processes that the experimentalist might deem important, while at the same time being simple, with few 'tunable parameters'. This combination makes the approach, and the dialog between experimentalist and modeler, a useful hypothesis-testing tool. HillVi simulates mass flux for different initial conditions under the same flow conditions. We analyze our results in terms of an artificial line source and isotopic hydrograph separation of water and subsurface flow. Our results for this first set of virtual experiments showed how drainable porosity and soil depth variability exert a first-order control on flow and transport at the hillslope scale. We found that high drainable porosity soils led to a restricted water table rise, resulting in more pronounced channeling of lateral subsurface flow along the soil-bedrock interface. This in turn resulted in a more anastomosing network of tracer movement across the slope. The virtual isotope hydrograph separation showed higher proportions of event water with increasing drainable porosity.
When combined with previous experimental findings and conceptualizations, virtual experiments can be an effective way to isolate certain controls and examine their influence over a range of rainfall and antecedent wetness conditions.
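The isotope hydrograph separation referenced above follows the standard two-component mixing equation: the event-water fraction is the ratio of the stream's isotopic offset from pre-event water to the event water's offset from pre-event water. The delta values below are illustrative, not HillVi output.

```python
def event_water_fraction(delta_stream, delta_pre, delta_event):
    """Two-component isotope hydrograph separation: fraction of
    streamflow composed of event (new) water."""
    return (delta_stream - delta_pre) / (delta_event - delta_pre)

# Illustrative delta-18O values (per mil), not taken from the paper.
f = event_water_fraction(-8.5, -10.0, -5.0)
print(f)   # 0.3, i.e. 30% event water
```

The separation is only well defined when the event and pre-event end-members are isotopically distinct; as their deltas converge, the denominator vanishes and the estimate becomes unstable.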

  19. Should we trust build-up/wash-off water quality models at the scale of urban catchments?

    PubMed

    Bonhomme, Céline; Petrucci, Guido

    2017-01-01

Models of runoff water quality at the scale of an urban catchment usually rely on build-up/wash-off formulations obtained through small-scale experiments. Often, the physical interpretation of the model parameters, valid at the small scale, is transposed to large-scale applications. Testing different levels of spatial variability, the parameter distributions of a water quality model are obtained in this paper through a Markov chain Monte Carlo algorithm and analyzed. The simulated variable is the total suspended solids concentration at the outlet of a periurban catchment in the Paris region (2.3 km²), for which high-frequency turbidity measurements are available. This application suggests that build-up/wash-off models applied at the catchment scale do not maintain their physical meaning, but should be considered as "black-box" models. Copyright © 2016 Elsevier Ltd. All rights reserved.
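For reference, a minimal sketch of the exponential build-up/wash-off formulation that such models typically use (SWMM-style): pollutant stock builds toward a limit during dry weather and is washed off in proportion to a power of rainfall intensity. All parameter values are illustrative, not the calibrated posteriors from the paper's MCMC analysis.

```python
# Classic exponential build-up / wash-off formulation (SWMM-style);
# the parameter values below are invented for illustration.
B_max, k_b = 50.0, 0.2     # build-up limit (kg/ha) and rate (1/day)
k_w, n_w = 0.15, 1.2       # wash-off coefficient and exponent

dt = 1 / 24                # hourly time step, expressed in days
B = 10.0                   # initial surface stock (kg/ha)
loads = []                 # washed-off load per step (kg/ha)

# 240 h of record with a single 10 h storm at 5 mm/h
rain = [5.0 if 100 <= h < 110 else 0.0 for h in range(240)]

for q in rain:
    if q > 0:
        washed = k_w * q ** n_w * B * dt   # wash-off flux this step
        B -= washed
        loads.append(washed)
    else:
        B += k_b * (B_max - B) * dt        # dry-weather build-up
        loads.append(0.0)

print(round(sum(loads), 2), round(B, 2))
```

The paper's point is that when parameters like k_b and k_w are calibrated against catchment-outlet turbidity, they absorb routing and spatial-variability effects and lose the physical meaning they had in plot-scale experiments.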

  20. An Examination and Validation of an Adapted Youth Experience Scale for University Sport

    ERIC Educational Resources Information Center

    Rathwell, Scott; Young, Bradley W.

    2016-01-01

    Limited tools assess positive development through university sport. Such a tool was validated in this investigation using two independent samples of Canadian university athletes. In Study 1, 605 athletes completed 99 survey items drawn from the Youth Experience Scale (YES 2.0), and separate a priori measurement models were evaluated (i.e., 99…

  1. Micropollutant removal by attached and suspended growth in a hybrid biofilm-activated sludge process.

    PubMed

    Falås, P; Longrée, P; la Cour Jansen, J; Siegrist, H; Hollender, J; Joss, A

    2013-09-01

    Removal of organic micropollutants in a hybrid biofilm-activated sludge process was investigated through batch experiments, modeling, and full-scale measurements. Batch experiments with carriers and activated sludge from the same full-scale reactor were performed to assess the micropollutant removal rates of the carrier biofilm under oxic conditions and the sludge under oxic and anoxic conditions. Clear differences in the micropollutant removal kinetics of the attached and suspended growth were demonstrated, often with considerably higher removal rates for the biofilm compared to the sludge. For several micropollutants, the removal rates were also affected by the redox conditions, i.e. oxic and anoxic. Removal rates obtained from the batch experiments were used to model the micropollutant removal in the full-scale process. The results from the model and plant measurements showed that the removal efficiency of the process can be predicted with acceptable accuracy (± 25%) for most of the modeled micropollutants. Furthermore, the model estimations indicate that the attached growth in hybrid biofilm-activated sludge processes can contribute significantly to the removal of individual compounds, such as diclofenac. Copyright © 2013 Elsevier Ltd. All rights reserved.
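Batch removal kinetics of this kind are commonly modeled as pseudo-first-order in the micropollutant concentration, with the rate normalized by biomass. The sketch below compares residual fractions for a faster biofilm and a slower sludge; the rate constants and biomass values are invented for illustration, not the paper's fitted values.

```python
import math

def residual_fraction(k_biol, x_ss, t):
    # Pseudo-first-order biological removal: C/C0 = exp(-k_biol * X * t),
    # with k_biol in L/(gSS*d), biomass X in gSS/L, and time t in days.
    return math.exp(-k_biol * x_ss * t)

# Illustrative values: a carrier biofilm with a higher apparent
# biomass-normalized rate than the suspended sludge.
frac_biofilm = residual_fraction(k_biol=2.0, x_ss=1.5, t=0.5)
frac_sludge = residual_fraction(k_biol=0.5, x_ss=3.0, t=0.5)

print(frac_biofilm < frac_sludge)   # faster removal leaves less residual
```

In a hybrid process model, the two compartments' rates would be summed against the shared bulk concentration, which is how the attached growth can dominate removal of compounds such as diclofenac even at modest carrier biomass.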

  2. Investigation of models for large-scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1973-01-01

Studies are reported of the long-term responses of the model atmosphere to anomalies in snow cover and sea surface temperature. An abstract of a previously issued report on the computed response to surface anomalies in a global atmospheric model is presented, and experiments on the effects of transient sea surface temperatures on the Mintz-Arakawa atmospheric model are reported.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atamturktur, Sez; Unal, Cetin; Hemez, Francois

The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide decision makers in the allocation of Nuclear Energy's resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and the degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of a reactor core cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the sources of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model.
Within this framework, the project team has focused on optimizing resource allocation for improving numerical models through further code development and experimentation. Related to further code development, we have developed a code prioritization index (CPI) for coupled numerical models. CPI is implemented to effectively improve the predictive capability of the coupled model by increasing the sophistication of constituent codes. In relation to designing new experiments, we investigated the information gained by the addition of each new experiment used for calibration and bias correction of a simulation model. Additionally, the variability of 'information gain' across the design domain has been investigated in order to identify the experiment settings where maximum information gain occurs and thus guide experimenters in the selection of experiment settings. This idea was extended by showing that the information gain from each experiment can be improved by intelligently selecting the experiments, leading to the development of the Batch Sequential Design (BSD) technique. Additionally, we evaluated the importance of sufficiently exploring the domain of applicability in experiment-based validation of high-consequence modeling and simulation by developing a new metric to quantify coverage. This metric has also been incorporated into the design of new experiments. Finally, we have proposed a data-aware calibration approach for the calibration of numerical models. This new method considers the complexity of a numerical model (the number of parameters to be calibrated, parameter uncertainty, and the form of the model) and seeks to identify the number of experiments necessary to calibrate the model based on the level of sophistication of the physics. The final component of the project team's work to improve model calibration and validation methods is the incorporation of robustness to non-probabilistic uncertainty in the input parameters.
This is an improvement to model validation and uncertainty quantification extending beyond the originally proposed scope of the project. We have introduced a new metric for incorporating the concept of robustness into experiment-based validation of numerical models. This project has supported the graduation of two Ph.D. students (Kendra Van Buren and Josh Hegenderfer) and two M.S. students (Matthew Egeberg and Parker Shields). One of the doctoral students is now working in the nuclear engineering field and the other is a post-doctoral fellow at Los Alamos National Laboratory. Additionally, two more Ph.D. students (Garrison Stevens and Tunc Kulaksiz) who are working towards graduation have been supported by this project.
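As a toy illustration of the 'information gain' and Batch Sequential Design ideas described above, the sketch below scores candidate experiment settings by the expected information gain of a single Gaussian observation and greedily picks a batch. It is a minimal sketch under simplifying assumptions (Gaussian prior and noise, a hypothetical `predictive_var` function standing in for the calibrated model's predictive variance), not the project's actual BSD algorithm.

```python
import math

def information_gain(prior_var, noise_var):
    # Expected KL divergence (nats) gained by one Gaussian observation:
    # 0.5 * ln(1 + prior variance / noise variance)
    return 0.5 * math.log(1.0 + prior_var / noise_var)

def select_batch(candidates, predictive_var, k, noise_var=0.01):
    # Greedy batch selection: rank candidate settings by expected gain
    # and keep the top k (a simplification of sequential re-fitting).
    ranked = sorted(candidates,
                    key=lambda x: information_gain(predictive_var(x), noise_var),
                    reverse=True)
    return ranked[:k]
```

Because the gain grows with predictive variance, the batch concentrates on the settings where the model is most uncertain, which is the intuition behind designing experiments for maximum information gain.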

  4. On the limitations of General Circulation Climate Models

    NASA Technical Reports Server (NTRS)

    Stone, Peter H.; Risbey, James S.

    1990-01-01

General Circulation Models (GCMs) by definition calculate large-scale dynamical and thermodynamical processes and their associated feedbacks from first principles. This aspect of GCMs is widely believed to give them an advantage in simulating global scale climate changes as compared to simpler models which do not calculate the large-scale processes from first principles. However, it is pointed out that the meridional transports of heat simulated by GCMs used in climate change experiments differ from observational analyses and from other GCMs by as much as a factor of two. It is also demonstrated that GCM simulations of the large scale transports of heat are sensitive to the (uncertain) subgrid scale parameterizations. This raises the question of whether current GCMs are in fact superior to simpler models for simulating temperature changes associated with global scale climate change.
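The "simpler models" referred to above include zero-dimensional energy balance models, which reduce the planet to a single radiative equilibrium equation. A minimal sketch, using standard textbook values (solar constant ≈ 1361 W/m², planetary albedo ≈ 0.3) that are not taken from this abstract:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(solar_const=1361.0, albedo=0.30):
    """Equilibrium emission temperature (K) of a planet, treating it as a
    single point: absorbed sunlight, averaged over the sphere, balances
    blackbody emission (no greenhouse layer)."""
    absorbed = solar_const * (1.0 - albedo) / 4.0
    return (absorbed / SIGMA) ** 0.25
```

This yields roughly 255 K for Earth, illustrating how a model with no explicit heat transport at all can still constrain global mean temperature; the relative merit of GCMs versus such models therefore hinges on how well the GCMs' transports are simulated.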

  5. Simultaneous estimation of local-scale and flow path-scale dual-domain mass transfer parameters using geoelectrical monitoring

    USGS Publications Warehouse

    Briggs, Martin A.; Day-Lewis, Frederick D.; Ong, John B.; Curtis, Gary P.; Lane, John W.

    2013-01-01

Anomalous solute transport, modeled as rate-limited mass transfer, has an observable geoelectrical signature that can be exploited to infer the controlling parameters. Previous experiments indicate that the combination of time-lapse geoelectrical and fluid conductivity measurements collected during ionic tracer experiments provides valuable insight into the exchange of solute between mobile and immobile porosity. Here, we use geoelectrical measurements to monitor tracer experiments at a former uranium mill tailings site in Naturita, Colorado. We use nonlinear regression to calibrate dual-domain mass transfer solute-transport models to field data. This method differs from previous approaches by calibrating the model simultaneously to observed fluid conductivity and geoelectrical tracer signals using two parameter scales: effective parameters for the flow path upgradient of the monitoring point and the parameters local to the monitoring point. We use regression statistics to rigorously evaluate the information content and sensitivity of fluid conductivity and geophysical data, demonstrating that multiple scales of mass transfer parameters can be estimated simultaneously. Our results show, for the first time, field-scale spatial variability of mass transfer parameters (i.e., exchange-rate coefficient, porosity) between local and upgradient effective parameters; hence our approach provides insight into spatial variability and scaling behavior. Additional synthetic modeling is used to evaluate the scope of applicability of our approach, indicating a greater range than earlier work based on temporal moments and a Lagrangian-based Damköhler number. The Eulerian-based Damköhler number introduced here is useful for estimating the tracer injection duration needed to evaluate mass transfer exchange rates that range over several orders of magnitude.
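A Damköhler number compares the mass-transfer exchange rate against the time scale of the experiment; the abstract does not give the paper's exact Eulerian definition, so the sketch below uses a generic form, Da = α(1 + β)t, where α is the exchange-rate coefficient, β the immobile-to-mobile capacity ratio, and t the injection duration. It is purely illustrative:

```python
def damkohler(alpha, beta, duration):
    # Dimensionless comparison of exchange rate to experiment duration.
    # alpha: exchange-rate coefficient (1/s); beta: capacity ratio;
    # duration: tracer injection duration (s).
    return alpha * (1.0 + beta) * duration

def min_injection_duration(alpha, beta, target_da=1.0):
    # Shortest injection for which mobile-immobile exchange is
    # appreciable (Da >= target) and hence resolvable from the data.
    return target_da / (alpha * (1.0 + beta))
```

With, say, α = 1e-4 s⁻¹ and β = 0.5 (hypothetical values), exchange only becomes appreciable for injections on the order of a couple of hours, which is the kind of design guidance described above.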

  6. Field_ac: a research project on ocean modelling in coastal areas. The experience in the Catalan Sea.

    NASA Astrophysics Data System (ADS)

    Grifoll, Manel; Pallarès, Elena; Tolosana-Delgado, Raimon; Fernandez, Juan; Lopez, Jaime; Mosso, Cesar; Hermosilla, Fernando; Espino, Manuel; Sanchez-Arcilla, Agustín

    2013-04-01

The EU-funded Field_ac project has, over the last three years, investigated methods and strategies for improving operational services in coastal areas. The objective has been to generate added value for shelf and regional scale predictions from GMES Marine Core Services. In this sense, the experience at the Catalan Sea site has made it possible to combine high-resolution numerical modeling tools nested into regional GMES services, data from intensive field campaigns and local observational networks, and remote sensing products. Multi-scale coupled models have been implemented to evaluate the different temporal and spatial scales of the dominant physical processes related to waves, currents, continental/river discharges, and sediment transport. The experience of the Field_ac project in the Catalan Sea has thus made it possible to "connect" GMES marine core service results to coastal (local) anthropogenic forcing (e.g. causes of morphodynamic evolution and ecosystem degradation) and will support a knowledge-based assessment of decisions in the coastal zone. This will contribute to the implementation of EU directives (e.g., the Water Framework Directive for water quality at beaches near harbour entrances, or the Risk or Flood Directives for waves and sea level at beach/river-mouth scales).

  7. Beyond the CMSSM without an accelerator: Proton decay and direct dark matter detection

    DOE PAGES

    Ellis, John; Evans, Jason L.; Luo, Feng; ...

    2016-01-05

Here, we consider two potential non-accelerator signatures of generalizations of the well-studied constrained minimal supersymmetric standard model (CMSSM). In one generalization, the universality constraints on soft supersymmetry-breaking parameters are applied at some input scale M_in below the grand unification (GUT) scale M_GUT, a scenario referred to as 'sub-GUT'. The other generalization we consider is to retain GUT-scale universality for the squark and slepton masses, but to relax universality for the soft supersymmetry-breaking contributions to the masses of the Higgs doublets. As with other CMSSM-like models, the measured Higgs mass requires supersymmetric particle masses near or beyond the TeV scale. Because of these rather heavy sparticle masses, the embedding of these CMSSM-like models in a minimal SU(5) model of grand unification can yield a proton lifetime consistent with current experimental limits, and may be accessible in existing and future proton decay experiments. Another possible signature of these CMSSM-like models is direct detection of supersymmetric dark matter. The direct dark matter scattering rate is typically below the reach of the LUX-ZEPLIN (LZ) experiment if M_in is close to M_GUT, but it may lie within its reach if M_in ≲ 10^11 GeV. Likewise, generalizing the CMSSM to allow non-universal supersymmetry-breaking contributions to the Higgs offers extensive possibilities for models within reach of the LZ experiment that have long proton lifetimes.

  9. Microwave Remote Sensing and the Cold Land Processes Field Experiment

    NASA Technical Reports Server (NTRS)

    Kim, Edward J.; Cline, Don; Davis, Bert; Hildebrand, Peter H. (Technical Monitor)

    2001-01-01

The Cold Land Processes Field Experiment (CLPX) has been designed to advance our understanding of the terrestrial cryosphere. Developing a more complete understanding of fluxes, storage, and transformations of water and energy in cold land areas is a critical focus of the NASA Earth Science Enterprise Research Strategy, the NASA Global Water and Energy Cycle (GWEC) Initiative, the Global Energy and Water Cycle Experiment (GEWEX), and the GEWEX Americas Prediction Project (GAPP). The movement of water and energy through cold regions in turn plays a large role in ecological activity and biogeochemical cycles. Quantitative understanding of cold land processes over large areas will require synergistic advancements in 1) understanding how cold land processes, most comprehensively understood at local or hillslope scales, extend to larger scales, 2) improved representation of cold land processes in coupled and uncoupled land-surface models, and 3) a breakthrough in large-scale observation of hydrologic properties, including snow characteristics, soil moisture, the extent of frozen soils, and the transition between frozen and thawed soil conditions. The CLPX Plan has been developed through the efforts of over 60 interested scientists who have participated in the NASA Cold Land Processes Working Group (CLPWG). This group is charged with the task of assessing, planning and implementing the required background science, technology, and application infrastructure to support successful land surface hydrology remote sensing space missions. A major product of the experiment will be a comprehensive, legacy data set that will energize many aspects of cold land processes research. The CLPX will focus on developing the quantitative understanding, models, and measurements necessary to extend our local-scale understanding of water fluxes, storage, and transformations to regional and global scales.
The experiment will particularly emphasize developing a strong synergism between process-oriented understanding, land surface models, and microwave remote sensing. The experimental design is a multi-sensor, multi-scale (1 ha to 160,000 km²) approach to providing the comprehensive data set necessary to address several experiment objectives. A description focusing on the microwave remote sensing components (ground, airborne, and spaceborne) of the experiment will be presented.

  10. Starobinsky-like inflation and neutrino masses in a no-scale SO(10) model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Ellis, John; Garcia, Marcos A.G.

    2016-11-08

    Using a no-scale supergravity framework, we construct an SO(10) model that makes predictions for cosmic microwave background observables similar to those of the Starobinsky model of inflation, and incorporates a double-seesaw model for neutrino masses consistent with oscillation experiments and late-time cosmology. We pay particular attention to the behaviour of the scalar fields during inflation and the subsequent reheating.

  11. 15. YAZOO BACKWATER PUMPING STATION MODEL, YAZOO RIVER BASIN (MODEL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. YAZOO BACKWATER PUMPING STATION MODEL, YAZOO RIVER BASIN (MODEL SCALE: 1' = 26'). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  12. Upscaling Cement Paste Microstructure to Obtain the Fracture, Shear, and Elastic Concrete Mechanical LDPM Parameters.

    PubMed

    Sherzer, Gili; Gao, Peng; Schlangen, Erik; Ye, Guang; Gal, Erez

    2017-02-28

Modeling the complex behavior of concrete for a specific mixture is a challenging task, as it requires bridging the cement scale and the concrete scale. We describe a multiscale analysis procedure for the modeling of concrete structures, in which material properties at the macro scale are evaluated based on lower scales. Concrete may be viewed over a range of scale sizes, from the atomic scale (10⁻¹⁰ m), which is characterized by the behavior of crystalline particles of hydrated Portland cement, to the macroscopic scale (10 m). The proposed multiscale framework is based on several models, including chemical analysis at the cement paste scale, a mechanical lattice model at the cement and mortar scales, geometrical aggregate distribution models at the mortar scale, and the Lattice Discrete Particle Model (LDPM) at the concrete scale. The analysis procedure starts from a known chemical and mechanical set of parameters of the cement paste, which are then used to evaluate the mechanical properties of the LDPM concrete parameters for the fracture, shear, and elastic responses of the concrete. Although a macroscopic validation study of this procedure is presented, future research should include a comparison to additional experiments in each scale.

  14. Large-scale model quality assessment for improving protein tertiary structure prediction.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2015-06-15

Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models or rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It applied an unprecedented 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains, and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
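The consensus step, averaging the scores assigned by many QA methods before ranking, can be sketched as below. The method names and scores are hypothetical, and the real MULTICOM pipeline adds model clustering and refinement on top of this averaging:

```python
def consensus_rank(scores_by_method):
    """scores_by_method: {qa_method: {model_name: score}}, higher = better.
    Returns model names sorted by their average score across QA methods."""
    models = set()
    for scores in scores_by_method.values():
        models.update(scores)
    n = len(scores_by_method)
    avg = {m: sum(s.get(m, 0.0) for s in scores_by_method.values()) / n
           for m in models}
    return sorted(models, key=lambda m: avg[m], reverse=True)
```

Averaging across many imperfect scorers damps the idiosyncratic failures of any single QA method, which is the robustness property the abstract reports.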

  15. Understanding Hydraulic Fracturing: A Multi-Scale Problem

    DOE PAGES

    Hyman, Jeffrey De'Haven; Gimenez Martinez, Joaquin; Viswanathan, Hari S.; ...

    2016-09-05

Despite the impact that hydraulic fracturing has had on the energy sector, the physical mechanisms that control its efficiency and environmental impacts remain poorly understood, in part because the length scales involved range from nanometers to kilometers. We characterize flow and transport in shale formations across and between these scales using integrated computational, theoretical, and experimental efforts. At the field scale, we use discrete fracture network modeling to simulate production at a well site whose fracture network is based on a site characterization of a shale formation. At the core scale, we use triaxial fracture experiments and a finite-element discrete-element fracture propagation model with a coupled fluid solver to study dynamic crack propagation in low permeability shale. We use lattice Boltzmann pore-scale simulations and microfluidic experiments in both synthetic and real micromodels to study pore-scale flow phenomena such as multiphase flow and mixing. A mechanistic description and integration of these multiple scales is required for accurate predictions of production and the eventual optimization of hydrocarbon extraction from unconventional reservoirs.

  16. Validation Results for Core-Scale Oil Shale Pyrolysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staten, Josh; Tiwari, Pankaj

    2015-03-01

This report summarizes a study of oil shale pyrolysis at various scales and the subsequent development of a model for in situ production of oil from oil shale. Oil shale from the Mahogany zone of the Green River formation was used in all experiments. Pyrolysis experiments were conducted at four scales: powdered samples (100 mesh) and core samples of 0.75”, 1” and 2.5” diameters. The batch, semibatch and continuous flow pyrolysis experiments were designed to study the effect of temperature (300°C to 500°C), heating rate (1°C/min to 10°C/min), pressure (ambient and 500 psig) and sample size on product formation. Comprehensive analyses were performed on reactants and products - liquid, gas and spent shale. These experimental studies were designed to elucidate the relevant coupled phenomena (reaction kinetics, heat transfer, mass transfer, thermodynamics) at multiple scales. A model for oil shale pyrolysis was developed in the COMSOL multiphysics platform. A general kinetic model was integrated with the important physical and chemical phenomena that occur during pyrolysis. The secondary reactions of coking and cracking in the product phase were addressed. The multiscale experimental data generated and the models developed provide an understanding of the simultaneous effects of chemical kinetics and heat and mass transfer on oil quality and yield. The comprehensive data collected in this study will help advance the move to large-scale in situ oil production from the pyrolysis of oil shale.
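The abstract does not spell out the "general kinetic model"; a common starting point for oil shale pyrolysis is first-order Arrhenius decomposition of kerogen under a linear heating ramp, sketched below with purely illustrative parameters (not fitted to the Green River data):

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def pyrolysis_conversion(A, Ea, T0, heating_rate, t_end, dt=1.0):
    """Integrate dX/dt = A*exp(-Ea/(R*T))*(1 - X) with T = T0 + rate*t
    by forward Euler. A in 1/s, Ea in J/mol, T0 in K, heating_rate in K/s."""
    X, t = 0.0, 0.0
    while t < t_end:
        T = T0 + heating_rate * t
        X += dt * A * math.exp(-Ea / (R_GAS * T)) * (1.0 - X)
        t += dt
    return min(X, 1.0)
```

For instance, starting at 300 °C with a 10 °C/min ramp (one of the conditions listed above), conversion stays small until the sample approaches 500 °C, mirroring the strong temperature sensitivity the experiments were designed to probe.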

  17. A robust quantitative near infrared modeling approach for blend monitoring.

    PubMed

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

This study demonstrates a material-sparing Near-Infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development approach (blender-scale method) were compared by simultaneously monitoring a 1 kg batch-size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop Near-Infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.

  18. Peak Communication Experiences: Concept, Structure, and Sex Differences.

    ERIC Educational Resources Information Center

    Gordon, Ron; Dulaney, Earl

    A study was conducted to test a "peak communication experience" (PCE) scale developed from Abraham Maslow's theory of PCE's, a model of one's highest interpersonal communication moments in terms of perceived mutual understanding, happiness, and personal fulfillment. Nineteen items, extrapolated from Maslow's model but rendered more…

  19. Infrared thermography applied to the study of heated and solar pavement: from numerical modeling to small scale laboratory experiments

    NASA Astrophysics Data System (ADS)

    Le Touz, N.; Toullier, T.; Dumoulin, J.

    2017-05-01

The present study addresses the thermal behaviour of a modified pavement structure designed to prevent icing at its surface in adverse wintertime conditions or overheating in hot summer conditions. First, a multi-physics model based on the infinite element method was built to predict the evolution of the surface temperature. Then, laboratory experiments on small specimens were carried out and the surface temperature was monitored by infrared thermography. The results obtained are analyzed and the performance of the numerical model for real-scale outdoor application is discussed. Finally, conclusions and perspectives are proposed.

  20. Assessing social isolation in motor neurone disease: a Rasch analysis of the MND Social Withdrawal Scale.

    PubMed

    Gibbons, Chris J; Thornton, Everard W; Ealing, John; Shaw, Pamela J; Talbot, Kevin; Tennant, Alan; Young, Carolyn A

    2013-11-15

Social withdrawal is described as the condition in which an individual experiences a desire to make social contact but is unable to satisfy that desire. It is an important issue for patients with motor neurone disease, who are likely to experience severe physical impairment. This study aims to reassess the psychometric and scaling properties of the MND Social Withdrawal Scale (MND-SWS) domains and examine the feasibility of a summary scale, by applying scale data to the Rasch model. The MND Social Withdrawal Scale was administered to 298 patients with a diagnosis of MND, alongside the Hospital Anxiety and Depression Scale. The factor structure of the MND Social Withdrawal Scale was assessed using confirmatory factor analysis. Model fit, category threshold analysis, differential item functioning (DIF), dimensionality and local dependency were evaluated. Factor analysis confirmed the suitability of the four-factor solution suggested by the original authors. Mokken scale analysis suggested the removal of item five. Rasch analysis removed a further three items, from the Community (one item) and Emotional (two items) withdrawal subscales. Following item reduction, each scale exhibited excellent fit to the Rasch model. A 14-item Summary scale was shown to fit the Rasch model after subtesting the items into three subtests corresponding to the Community, Family and Emotional subscales, indicating that items from these three subscales can be summed to create a total measure of social withdrawal. Removal of four items from the Social Withdrawal Scale led to a four-factor solution with a 14-item hierarchical Summary scale, all unidimensional, free from DIF and well fitted to the Rasch model. The scale is reliable and allows clinicians and researchers to measure social withdrawal in MND along a unidimensional construct. © 2013. Published by Elsevier B.V. All rights reserved.
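Under the Rasch model used in this analysis, the probability of endorsing an item depends only on the difference between the person's trait level and the item's difficulty, both in logits. A minimal sketch for dichotomous items (polytomous Likert-type items would need a partial-credit or rating-scale extension):

```python
import math

def rasch_probability(theta, b):
    # Probability a person at trait level theta endorses an item of
    # difficulty b (both in logits): the 1-parameter logistic model.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, item_difficulties):
    # Expected summed raw score across a set of dichotomous items,
    # i.e. the curve mapping trait level to total score.
    return sum(rasch_probability(theta, b) for b in item_difficulties)
```

Fit statistics in a Rasch analysis compare observed responses against these model probabilities; items whose observed patterns deviate too far are the ones removed, as happened with four items here.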

  1. Three-dimensional Dendritic Needle Network model with application to Al-Cu directional solidification experiments

    NASA Astrophysics Data System (ADS)

    Tourret, D.; Karma, A.; Clarke, A. J.; Gibbs, P. J.; Imhoff, S. D.

    2015-06-01

    We present a three-dimensional (3D) extension of a previously proposed multi-scale Dendritic Needle Network (DNN) approach for the growth of complex dendritic microstructures. Using a new formulation of the DNN dynamics equations for dendritic paraboloid-branches of a given thickness, one can directly extend the DNN approach to 3D modeling. We validate this new formulation against known scaling laws and analytical solutions that describe the early transient and steady-state growth regimes, respectively. Finally, we compare the predictions of the model to in situ X-ray imaging of Al-Cu alloy solidification experiments. The comparison shows a very good quantitative agreement between 3D simulations and thin sample experiments. It also highlights the importance of full 3D modeling to accurately predict the primary dendrite arm spacing that is significantly over-estimated by 2D simulations.

  3. Polymer Physics of the Large-Scale Structure of Chromatin.

    PubMed

    Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2016-01-01

    We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments.

  4. Cirrus clouds. I - A cirrus cloud model. II - Numerical experiments on the formation and maintenance of cirrus

    NASA Technical Reports Server (NTRS)

Starr, D. O'C.; Cox, S. K.

    1985-01-01

    A simplified cirrus cloud model is presented which may be used to investigate the role of various physical processes in the life cycle of a cirrus cloud. The model is a two-dimensional, time-dependent, Eulerian numerical model where the focus is on cloud-scale processes. Parametrizations are developed to account for phase changes of water, radiative processes, and the effects of microphysical structure on the vertical flux of ice water. The results of a simulation of a thin cirrostratus cloud are given. The results of numerical experiments performed with the model are described in order to demonstrate the important role of cloud-scale processes in determining the cloud properties maintained in response to larger scale forcing. The effects of microphysical composition and radiative processes are considered, as well as their interaction with thermodynamic and dynamic processes within the cloud. It is shown that cirrus clouds operate in an entirely different manner than liquid phase stratiform clouds.

  5. Data for Room Fire Model Comparisons

    PubMed Central

    Peacock, Richard D.; Davis, Sanford; Babrauskas, Vytenis

    1991-01-01

With the development of models to predict fire growth and spread in buildings, there has been a concomitant evolution in the measurement and analysis of experimental data in real-scale fires. This report presents the types of analyses that can be used to examine large-scale room fire test data to prepare the data for comparison with zone-based fire models. Five sets of experimental data which can be used to test the limits of a typical two-zone fire model are detailed. A standard set of nomenclature describing the geometry of the building and the quantities measured in each experiment is presented. Availability of ancillary data (such as smaller-scale test results) is included. These descriptions, along with the data (available in computer-readable form) should allow comparisons between the experiment and model predictions. The base of experimental data ranges in complexity from one-room tests with individual furniture items to a series of tests conducted in a multiple-story hotel equipped with a zoned smoke control system. PMID:28184121
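As a concrete example of the kind of prediction a zone model makes from such data, the widely used McCaffrey-Quintiere-Harkleroad (MQH) correlation, which is not taken from this report, estimates the hot upper-layer temperature rise in a naturally ventilated room fire:

```python
def mqh_delta_t(Q, A_o, H_o, h_k, A_T):
    """Upper-layer temperature rise (K) from the MQH correlation:
    dT = 6.85 * (Q^2 / (A_o * sqrt(H_o) * h_k * A_T))^(1/3)
    Q: heat release rate (kW); A_o: vent area (m^2); H_o: vent height (m);
    h_k: effective enclosure heat-transfer coefficient (kW/m^2/K);
    A_T: enclosure surface area excluding the vent (m^2)."""
    return 6.85 * (Q ** 2 / (A_o * H_o ** 0.5 * h_k * A_T)) ** (1.0 / 3.0)
```

With hypothetical inputs for a single room (a 100 kW fire, a 0.8 m × 2 m doorway, h_k = 0.03 kW/m²/K, 60 m² of bounding surfaces), the correlation predicts a rise on the order of 90 K, the sort of quantity compared against hot-layer measurements in these test series.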

  6. Collaborative Multi-Scale 3d City and Infrastructure Modeling and Simulation

    NASA Astrophysics Data System (ADS)

    Breunig, M.; Borrmann, A.; Rank, E.; Hinz, S.; Kolbe, T.; Schilcher, M.; Mundani, R.-P.; Jubierre, J. R.; Flurl, M.; Thomsen, A.; Donaubauer, A.; Ji, Y.; Urban, S.; Laun, S.; Vilgertshofer, S.; Willenborg, B.; Menninghaus, M.; Steuer, H.; Wursthorn, S.; Leitloff, J.; Al-Doori, M.; Mazroobsemnani, N.

    2017-09-01

    Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group from civil engineering informatics and geo-informatics, combining the skills of both the Building Information Modeling and the 3D GIS worlds. New approaches including the development of a collaborative platform and 3D multi-scale modelling are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructures. Experiences during this research and lessons learned are presented, as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.

  7. Data for Room Fire Model Comparisons.

    PubMed

    Peacock, Richard D; Davis, Sanford; Babrauskas, Vytenis

    1991-01-01

    With the development of models to predict fire growth and spread in buildings, there has been a concomitant evolution in the measurement and analysis of experimental data in real-scale fires. This report presents the types of analyses that can be used to examine large-scale room fire test data to prepare the data for comparison with zone-based fire models. Five sets of experimental data which can be used to test the limits of a typical two-zone fire model are detailed. A standard set of nomenclature describing the geometry of the building and the quantities measured in each experiment is presented. Availability of ancillary data (such as smaller-scale test results) is included. These descriptions, along with the data (available in computer-readable form), should allow comparisons between the experiment and model predictions. The base of experimental data ranges in complexity from one-room tests with individual furniture items to a series of tests conducted in a multiple-story hotel equipped with a zoned smoke control system.

  8. A comparison of large-scale electron beam and bench-scale 60Co irradiations of simulated aqueous waste streams

    NASA Astrophysics Data System (ADS)

    Kurucz, Charles N.; Waite, Thomas D.; Otaño, Suzana E.; Cooper, William J.; Nickelsen, Michael G.

    2002-11-01

    The effectiveness of using high energy electron beam irradiation for the removal of toxic organic chemicals from water and wastewater has been demonstrated by commercial-scale experiments conducted at the Electron Beam Research Facility (EBRF) located in Miami, Florida and elsewhere. The EBRF treats various waste and water streams at up to 450 l/min (120 gal/min) with doses up to 8 kilogray (kGy). Many experiments have been conducted by injecting toxic organic compounds into various plant feed streams and measuring the concentrations of compound(s) before and after exposure to the electron beam at various doses. Extensive experimentation has also been performed by dissolving selected chemicals in 22,700 l (6000 gal) tank trucks of potable water to simulate contaminated groundwater, and pumping the resulting solutions through the electron beam. These large-scale experiments, although necessary to demonstrate the commercial viability of the process, require a great deal of time and effort. This paper compares the results of large-scale electron beam irradiations to those obtained from bench-scale irradiations using gamma rays generated by a 60Co source. Dose constants from exponential contaminant removal models are found to depend on the source of radiation and the initial contaminant concentration. Possible reasons for the observed differences, such as a dose rate effect, are discussed. Models for estimating electron beam dose constants from bench-scale gamma experiments are presented. Data used to compare the removal of organic compounds using gamma irradiation and electron beam irradiation are taken from the literature and from a series of experiments designed to examine the effects of pH, the presence of turbidity, and initial concentration on the removal of various organic compounds (benzene, toluene, phenol, PCE, TCE and chloroform) from simulated groundwater.
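
    The exponential removal model mentioned in the abstract has the form C(D) = C0*exp(-k*D), where D is the absorbed dose and k is the dose constant. A minimal sketch of estimating k by log-linear regression; the numbers below are hypothetical illustrations, not data from the study:

```python
import numpy as np

def fit_dose_constant(doses_kGy, concentrations):
    """Fit k and C0 in C(D) = C0 * exp(-k * D) by linear
    regression of ln(C) against dose D."""
    slope, intercept = np.polyfit(doses_kGy, np.log(concentrations), 1)
    return -slope, np.exp(intercept)

# Hypothetical bench-scale series: C0 = 100 ug/l, k = 0.5 kGy^-1
doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
conc = 100.0 * np.exp(-0.5 * doses)
k, c0 = fit_dose_constant(doses, conc)
print(round(k, 3), round(c0, 1))  # -> 0.5 100.0
```

    In practice the fitted dose constant differs between gamma and electron-beam irradiation, which is the comparison the paper addresses.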

  9. Transforming SWAT for continental-scale high-resolution modeling of floodplain dynamics: opportunities and challenges

    NASA Astrophysics Data System (ADS)

    Rajib, A.; Merwade, V.; Liu, Z.; Lane, C.; Golden, H. E.; Tavakoly, A. A.; Follum, M. L.

    2017-12-01

    There have been many initiatives to develop frameworks for continental-scale modeling and mapping of floodplain dynamics. The choice of a model for such needs should be governed by its suitability for execution on high-performance cyber platforms, its ability to integrate supporting hydraulic/hydrodynamic tools, and its ability to assimilate earth observations. Furthermore, disseminating large volumes of outputs for public use and interoperability with similar frameworks should be considered. Considering these factors, we have conducted a series of modeling experiments and developed a suite of cyber-enabled platforms that have transformed the Soil and Water Assessment Tool (SWAT) into an appropriate model for use in a continental-scale, high-resolution, near real-time flood information framework. Our first experiment uses a medium-sized watershed in Indiana, USA and burns a high-resolution National Hydrography Dataset Plus (NHDPlus) network into the SWAT model. This is crucial to making the outputs comparable with other global/national initiatives. The second experiment builds upon the first, adding a modified landscape representation to the model that differentiates between upland and floodplain processes. Our third experiment involves two separate efforts: coupling SWAT with the hydrodynamic model LISFLOOD-FP and with a new-generation, low-complexity hydraulic model, AutoRoute. We have executed the prototype "loosely-coupled" models for the Upper Mississippi-Ohio River Basin in the USA, encompassing a 1 million square km drainage area and nearly 0.2 million NHDPlus river reaches. The preliminary results suggest reasonable accuracy for both streamflow and flood inundation.
In this presentation, we will also showcase three cyber-enabled platforms, including SWATShare to run and calibrate large scale SWAT models online using high performance computational resources, HydroGlobe to automatically extract and assimilate multiple remotely sensed earth observations in model sub-basins, and SWATFlow to visualize/download streamflow and flood inundation maps through an interactive interface. With all these transformational changes to enhance and support SWAT, it is expected that the model can be a sustainable alternative in the Global Flood Partnership program.

  10. Measuring Educators' Attitudes and Beliefs about Evaluation: Construct Validity and Reliability of the Teacher Evaluation Experience Scale

    ERIC Educational Resources Information Center

    Reddy, Linda A.; Dudek, Christopher M.; Kettler, Ryan J.; Kurz, Alexander; Peters, Stephanie

    2016-01-01

    This study presents the reliability and validity of the Teacher Evaluation Experience Scale--Teacher Form (TEES-T), a multidimensional measure of educators' attitudes and beliefs about teacher evaluation. Confirmatory factor analyses of data from 583 teachers were conducted on the TEES-T hypothesized five-factor model, as well as on alternative…

  11. 26. CURRENT METERS WITH FOLDING SCALE (MEASURED IN INCHES) IN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    26. CURRENT METERS WITH FOLDING SCALE (MEASURED IN INCHES) IN FOREGROUND: GURLEY MODEL NO. 665 AT CENTER, GURLEY MODEL NO. 625 'PYGMY' CURRENT METER AT LEFT, AND WES MINIATURE PRICE-TYPE CURRENT METER AT RIGHT. - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  12. Full-Scale Numerical Modeling of Turbulent Processes in the Earth's Ionosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliasson, B.; Stenflo, L.; Department of Physics, Linkoeping University, SE-581 83 Linkoeping

    2008-10-15

    We present a full-scale simulation study of ionospheric turbulence by means of a generalized Zakharov model based on the separation of variables into high-frequency and slow time scales. The model includes realistic length scales of the ionospheric profile and of the electromagnetic and electrostatic fields, and uses ionospheric plasma parameters relevant for high-latitude radio facilities such as EISCAT and HAARP. A nested-grid numerical method has been developed to resolve the different length scales while avoiding severe restrictions on the time step. The simulation demonstrates the parametric decay of the ordinary mode into Langmuir and ion-acoustic waves, followed by a Langmuir wave collapse and short-scale caviton formation, as observed in ionospheric heating experiments.

  13. Sensing, Measuring and Modelling the Mechanical Properties of Sandstone

    NASA Astrophysics Data System (ADS)

    Antony, S. J.; Olugbenga, A.; Ozerkan, N. G.

    2018-02-01

    We present a hybrid framework for simulating the strength and dilation characteristics of sandstone. Where possible, the grain-scale properties of sandstone are evaluated experimentally in detail. Also, using photo-stress analysis, we sense the deviator stress (/strain) distribution at the micro-scale and its components along the orthogonal directions on the surface of a V-notch sandstone sample under mechanical loading. Based on this measurement and applying a grain-scale model, the optical anisotropy index K0 is inferred at the grain scale. This correlated well with the grain contact stiffness ratio K evaluated independently using ultrasound sensors. Thereafter, in addition to other experimentally characterised structural and grain-scale properties of sandstone, K is fed as an input into the discrete element modelling of fracture strength and dilation of the sandstone samples. Physical bulk-scale experiments are also conducted to evaluate the load-displacement relation, dilation and bulk fracture strength characteristics of sandstone samples under compression and shear. A good level of agreement is obtained between the results of the simulations and experiments. The current generic framework could be applied to understand the internal and bulk mechanical properties of such complex opaque and heterogeneous materials more realistically in the future.

  14. Numerical evaluation of the scale problem on the wind flow of a windbreak

    PubMed Central

    Liu, Benli; Qu, Jianjun; Zhang, Weimin; Tan, Lihai; Gao, Yanhong

    2014-01-01

    The airflow field around wind fences with different porosities, which are important in determining the efficiency of fences as a windbreak, is typically studied via scaled wind tunnel experiments and numerical simulations. However, the scale problem in wind tunnels or numerical models is rarely researched. In this study, we perform a numerical comparison between a scaled wind-fence experimental model and an actual-sized fence via computational fluid dynamics simulations. The results show that although the general field pattern can be captured in a reduced-scale wind tunnel or numerical model, several flow characteristics near obstacles are not proportional to the size of the model and thus cannot be extrapolated directly. For example, the small vortex behind a low-porosity fence with a scale of 1:50 is approximately 4 times larger than that behind a full-scale fence. PMID:25311174
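
    The scale problem described above can be seen as a Reynolds-number mismatch: at a fixed wind speed, Re = U*L/nu scales linearly with model size, so a 1:50 fence operates at 1/50 of the field Reynolds number. A minimal sketch with illustrative values (not taken from the study):

```python
def reynolds(U, L, nu=1.5e-5):
    """Reynolds number Re = U * L / nu; nu ~ 1.5e-5 m^2/s for air."""
    return U * L / nu

# Same 10 m/s wind over a 2 m field fence vs a 0.04 m (1:50) model
ratio = reynolds(10.0, 2.0) / reynolds(10.0, 0.04)
print(round(ratio, 6))  # -> 50.0
```

    Because near-obstacle flow features depend nonlinearly on Re, they need not scale with the geometric ratio, which is consistent with the vortex-size discrepancy reported above.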

  15. Large scale shell model study of the evolution of mixed-symmetry states in chains of nuclei around 132Sn

    NASA Astrophysics Data System (ADS)

    Lo Iudice, N.; Bianco, D.; Andreozzi, F.; Porrino, A.; Knapp, F.

    2012-10-01

    Large scale shell model calculations based on a new diagonalization algorithm are performed in order to investigate the mixed symmetry states in chains of nuclei in the proximity of N=82. The resulting spectra and transitions are in agreement with the experiments and consistent with the scheme provided by the interacting boson model.

  16. Insights about transport mechanisms and fracture flow channeling from multi-scale observations of tracer dispersion in shallow fractured crystalline rock.

    PubMed

    Guihéneuf, N; Bour, O; Boisson, A; Le Borgne, T; Becker, M W; Nigon, B; Wajiduddin, M; Ahmed, S; Maréchal, J-C

    2017-11-01

    In fractured media, solute transport is controlled by advection in open and connected fractures and by matrix diffusion that may be enhanced by chemical weathering of the fracture walls. These phenomena may lead to non-Fickian dispersion characterized by early tracer arrival time, late-time tailing on the breakthrough curves and a potential scale effect on transport processes. Here we investigate the scale dependency of these processes by analyzing a series of convergent and push-pull tracer experiments with distances of investigation ranging from 4 m to 41 m in shallow fractured granite. The small- and intermediate-distance convergent experiments display a non-Fickian tailing, characterized by a -2 power-law slope. However, the largest-distance experiment does not display a clear power-law behavior and indicates possibly two main pathways. The push-pull experiments show that breakthrough-curve tailing decreases as the volume of investigation increases, with a power-law slope ranging from -3 to -2.3 from the smallest to the largest volume. The multipath model developed by Becker and Shapiro (2003) is used here to evaluate the hypothesis of the independence of flow pathways. The multipath model is found to explain the convergent data when increasing local dispersivity and reducing the number of pathways with distance, which suggests a transition from non-Fickian to Fickian transport at the fracture scale. However, this model predicts an increase of tailing with push-pull distance, while the experiments show the opposite trend. This inconsistency may suggest the activation of cross-channel mass transfer at larger volumes of investigation, which leads to non-reversible heterogeneous advection with scale. This transition from independent channels to connected channels as the volume of investigation increases suggests that both convergent and push-pull breakthrough curves can inform on the existence of characteristic length scales. Copyright © 2017 Elsevier B.V. All rights reserved.
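
    The late-time power-law slopes quoted above (around -2 for the convergent tests) are typically read off a log-log plot of the breakthrough curve. A minimal sketch of estimating such a slope by regression on a synthetic tail (illustrative data, not from the experiments):

```python
import numpy as np

def late_time_slope(t, c, t_min):
    """Fit the late-time power-law exponent of a breakthrough curve
    by regressing log10(c) on log10(t) for t > t_min."""
    mask = t > t_min
    slope, _ = np.polyfit(np.log10(t[mask]), np.log10(c[mask]), 1)
    return slope

# Synthetic tail decaying as t^-2, the slope reported for the
# small- and intermediate-distance convergent experiments
t = np.logspace(0.0, 3.0, 50)
c = t ** -2.0
print(round(late_time_slope(t, c, 10.0), 2))  # -> -2.0
```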

  17. Insights about transport mechanisms and fracture flow channeling from multi-scale observations of tracer dispersion in shallow fractured crystalline rock

    NASA Astrophysics Data System (ADS)

    Guihéneuf, N.; Bour, O.; Boisson, A.; Le Borgne, T.; Becker, M. W.; Nigon, B.; Wajiduddin, M.; Ahmed, S.; Maréchal, J.-C.

    2017-11-01

    In fractured media, solute transport is controlled by advection in open and connected fractures and by matrix diffusion that may be enhanced by chemical weathering of the fracture walls. These phenomena may lead to non-Fickian dispersion characterized by early tracer arrival time, late-time tailing on the breakthrough curves and a potential scale effect on transport processes. Here we investigate the scale dependency of these processes by analyzing a series of convergent and push-pull tracer experiments with distances of investigation ranging from 4 m to 41 m in shallow fractured granite. The small- and intermediate-distance convergent experiments display a non-Fickian tailing, characterized by a -2 power-law slope. However, the largest-distance experiment does not display a clear power-law behavior and indicates possibly two main pathways. The push-pull experiments show that breakthrough-curve tailing decreases as the volume of investigation increases, with a power-law slope ranging from -3 to -2.3 from the smallest to the largest volume. The multipath model developed by Becker and Shapiro (2003) is used here to evaluate the hypothesis of the independence of flow pathways. The multipath model is found to explain the convergent data when increasing local dispersivity and reducing the number of pathways with distance, which suggests a transition from non-Fickian to Fickian transport at the fracture scale. However, this model predicts an increase of tailing with push-pull distance, while the experiments show the opposite trend. This inconsistency may suggest the activation of cross-channel mass transfer at larger volumes of investigation, which leads to non-reversible heterogeneous advection with scale. This transition from independent channels to connected channels as the volume of investigation increases suggests that both convergent and push-pull breakthrough curves can inform on the existence of characteristic length scales.

  18. A design-build-test cycle using modeling and experiments reveals interdependencies between upper glycolysis and xylose uptake in recombinant S. cerevisiae and improves predictive capabilities of large-scale kinetic models.

    PubMed

    Miskovic, Ljubisa; Alff-Tuomala, Susanne; Soh, Keng Cher; Barth, Dorothee; Salusjärvi, Laura; Pitkänen, Juha-Pekka; Ruohonen, Laura; Penttilä, Merja; Hatzimanikatis, Vassily

    2017-01-01

    Recent advancements in omics measurement technologies have led to an ever-increasing amount of available experimental data that necessitate systems-oriented methodologies for efficient and systematic integration of data into consistent large-scale kinetic models. These models can help us to uncover new insights into cellular physiology and also to assist in the rational design of bioreactor or fermentation processes. The Optimization and Risk Analysis of Complex Living Entities (ORACLE) framework for the construction of large-scale kinetic models can be used as guidance for formulating alternative metabolic engineering strategies. We used ORACLE in a metabolic engineering problem: improvement of the xylose uptake rate during mixed glucose-xylose consumption in a recombinant Saccharomyces cerevisiae strain. Using the data from bioreactor fermentations, we characterized network flux and concentration profiles representing possible physiological states of the analyzed strain. We then identified enzymes that could lead to improved flux through xylose transporters (XTR). For some of the identified enzymes, including hexokinase (HXK), we could not deduce whether their control over XTR was positive or negative. We thus performed a follow-up experiment, and we found that HXK2 deletion improves the xylose uptake rate. The data from the performed experiments were then used to prune the kinetic models, and the predictions of the pruned population of kinetic models were in agreement with the experimental data collected on the HXK2-deficient S. cerevisiae strain. We present a design-build-test cycle composed of modeling efforts and experiments with a glucose-xylose co-utilizing recombinant S. cerevisiae and its HXK2-deficient mutant that allowed us to uncover interdependencies between upper glycolysis and the xylose uptake pathway. Through this cycle, we also obtained kinetic models with improved prediction capabilities.
The present study demonstrates the potential of integrated "modeling and experiments" systems biology approaches that can be applied for diverse applications ranging from biotechnology to drug discovery.

  19. Capillary filling rules and displacement mechanisms for spontaneous imbibition of CO2 for carbon storage and EOR using micro-model experiments and pore scale simulation

    NASA Astrophysics Data System (ADS)

    Chapman, E.; Yang, J.; Crawshaw, J.; Boek, E. S.

    2012-04-01

    In the 1980s, Lenormand et al. carried out their pioneering work on displacement mechanisms of fluids in etched networks [1]. Here we further examine displacement mechanisms in relation to capillary filling rules for spontaneous imbibition. Understanding the role of spontaneous imbibition in fluid displacement is essential for refining pore network models. Generally, pore network models use simple capillary filling rules, and here we examine the validity of these rules for spontaneous imbibition. Improvement of pore network models is vital for the process of 'up-scaling' to the field scale for both enhanced oil recovery (EOR) and carbon sequestration. In this work, we present our experimental microfluidic research into the displacement of both supercritical CO2/deionised water (DI) systems and analogous n-decane/air - where supercritical CO2 and n-decane are the respective wetting fluids - controlled by imbibition at the pore scale. We conducted our experiments in etched PMMA and silicon/glass micro-fluidic hydrophobic chips. We first investigate displacement in single etched pore junctions, followed by displacement in complex network designs representing actual rock thin sections, i.e. Berea sandstone and Sucrosic dolomite. The n-decane/air experiments were conducted under ambient conditions, whereas the supercritical CO2/DI water experiments were conducted under high temperature and pressure in order to replicate reservoir conditions. Fluid displacement in all experiments was captured via a high-speed video microscope. The direction and type of displacement the imbibing fluid takes when it enters a junction is dependent on the number of possible channels in which the wetting fluid can imbibe, i.e. I1, I2 and I3 [1]. Depending on the experiment conducted, the micro-models were initially filled with either DI water or air before the wetting fluid was injected.
We found that the imbibition of the wetting fluid through a single pore is primarily controlled by the geometry of the pore body rather than by the downstream pore throat sizes, contrary to the established capillary filling rules used in current pore network models. Our experimental observations are confirmed by detailed lattice-Boltzmann pore-scale computer simulations of fluid displacement in the same geometries. This suggests that the capillary filling rules for imbibition used in pore network models may need to be revised. [1] R. Lenormand, C. Zarcone and A. Sarr, J. Fluid Mech. 135, 337-353 (1983).
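
    The "established capillary filling rules" questioned above rank throats by Young-Laplace entry pressure, so the wetting phase is assumed to invade the narrowest downstream throat first. A minimal sketch of that classic rule; the radii are illustrative, and gamma and theta are assumed values for a water/air system:

```python
import math

def entry_pressure(radius_m, gamma=0.072, theta_deg=0.0):
    """Young-Laplace entry pressure (Pa) of a cylindrical throat:
    Pc = 2 * gamma * cos(theta) / r."""
    return 2.0 * gamma * math.cos(math.radians(theta_deg)) / radius_m

def next_throat(throat_radii_m):
    """Classic filling rule: the wetting phase invades the narrowest
    throat (highest capillary suction) first."""
    return min(range(len(throat_radii_m)), key=lambda i: throat_radii_m[i])

# Illustrative junction with 10, 20 and 5 micron throats
radii = [10e-6, 20e-6, 5e-6]
print(next_throat(radii))  # -> 2
```

    The experiments above indicate that pore-body geometry, not downstream throat size, controls imbibition at a junction, which is why this rule may need revision.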

  20. [Caregiver's health: adaptation and validation in a Spanish population of the Experience of Caregiving Inventory (ECI)].

    PubMed

    Crespo-Maraver, Mariacruz; Doval, Eduardo; Fernández-Castro, Jordi; Giménez-Salinas, Jordi; Prat, Gemma; Bonet, Pere

    2018-04-04

    To adapt and validate the Experience of Caregiving Inventory (ECI) in a Spanish population, providing empirical evidence of its internal consistency, internal structure and validity. Psychometric validation of the adapted version of the ECI. One hundred and seventy-two caregivers (69.2% women), mean age 57.51 years (range: 21-89), participated. Demographic and clinical data and standardized measures (ECI, suffering scale of the SCL-90-R, Zarit burden scale) were used. The two ECI scales of negative evaluation most related to serious mental disorders (disruptive behaviours [DB] and negative symptoms [NS]) and the two scales of positive appreciation (positive personal experiences [PPE] and good aspects of the relationship [GAR]) were analyzed. Exploratory structural equation modelling was used to analyze the internal structure. The relationship between the ECI scales and the SCL-90-R and Zarit scores was also studied. The four-factor model presented a good fit. Cronbach's alpha (DB: 0.873; NS: 0.825; PPE: 0.720; GAR: 0.578) showed higher homogeneity in the negative scales. The SCL-90-R scores correlated with the negative ECI scales, and none of the ECI scales correlated with the Zarit scale. The Spanish version of the ECI can be considered a valid, reliable, understandable and feasible self-report measure for administration in health and community contexts. Copyright © 2018 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  1. Modeling Hemispheric Detonation Experiments in 2-Dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howard, W M; Fried, L E; Vitello, P A

    2006-06-22

    Experiments have been performed with LX-17 (92.5% TATB and 7.5% Kel-F 800 binder) to study scaling of detonation waves using a dimensional scaling in a hemispherical divergent geometry. We model these experiments using an arbitrary Lagrange-Eulerian (ALE3D) hydrodynamics code, with reactive flow models based on the thermo-chemical code, Cheetah. The thermo-chemical code Cheetah provides a pressure-dependent kinetic rate law, along with an equation of state based on exponential-6 fluid potentials for individual detonation product species, calibrated to high pressures (approximately a few Mbar) and high temperatures (20,000 K). The parameters for these potentials are fit to a wide variety of experimental data, including shock, compression and sound speed data. For the un-reacted high explosive equation of state we use a modified Murnaghan form. We model the detonator (including the flyer plate) and initiation system in detail. The detonator is composed of LX-16, for which we use a program burn model. Steinberg-Guinan models are used for the metal components of the detonator. The booster and high explosive are LX-10 and LX-17, respectively. For both the LX-10 and LX-17, we use a pressure dependent rate law, coupled with a chemical equilibrium equation of state based on Cheetah. For LX-17, the kinetic model includes carbon clustering on the nanometer size scale.

  2. Validation of TGLF in C-Mod and DIII-D using machine learning and integrated modeling tools

    NASA Astrophysics Data System (ADS)

    Rodriguez-Fernandez, P.; White, A. E.; Cao, N. M.; Creely, A. J.; Greenwald, M. J.; Grierson, B. A.; Howard, N. T.; Meneghini, O.; Petty, C. C.; Rice, J. E.; Sciortino, F.; Yuan, X.

    2017-10-01

    Predictive models for steady-state and perturbative transport are necessary to support burning plasma operations. A combination of machine learning algorithms and integrated modeling tools is used to validate TGLF in C-Mod and DIII-D. First, a new code suite, VITALS, is used to compare SAT1 and SAT0 models in C-Mod. VITALS exploits machine learning and optimization algorithms for the validation of transport codes. Unlike SAT0, the SAT1 saturation rule contains a model to capture cross-scale turbulence coupling. Results show that SAT1 agrees better with experiments, further confirming that multi-scale effects are needed to model heat transport in C-Mod L-modes. VITALS will next be used to analyze past data from DIII-D: L-mode "Shortfall" plasma and ECH swing experiments. A second code suite, PRIMA, allows for integrated modeling of the plasma response to Laser Blow-Off cold pulses. Preliminary results show that SAT1 qualitatively reproduces the propagation of cold pulses after LBO injections and SAT0 does not, indicating that cross-scale coupling effects play a role in the plasma response. PRIMA will be used to "predict-first" cold pulse experiments using the new LBO system at DIII-D, and analyze existing ECH heat pulse data. Work supported by DE-FC02-99ER54512, DE-FC02-04ER54698.

  3. Regional-Scale Salt Tectonics Modelling: Bench-Scale Validation and Extension to Field-Scale

    NASA Astrophysics Data System (ADS)

    Crook, A. J. L.; Yu, J. G.; Thornton, D. A.

    2010-05-01

    The role of salt in the evolution of the West African continental margin, and in particular its impact on hydrocarbon migration and trap formation, is an important research topic. It has attracted many researchers who have based their research on bench-scale experiments, numerical models and seismic observations. This research has shown that the evolution is very complex. For example, regional analogue bench-scale models of the Angolan margin (Fort et al., 2004) indicate a complex system with an upslope extensional domain with sealed tilted blocks, growth fault and rollover systems and extensional diapirs, and a downslope contractional domain with squeezed diapirs, polyharmonic folds and thrust faults, and late-stage folding and thrusting. Numerical models have the potential to provide additional insight into the evolution of these salt-driven passive margins. The longer-term aim is to calibrate regional-scale evolution models, and then to evaluate the effect of the depositional history on the current-day geomechanical and hydrogeologic state in potential target hydrocarbon reservoir formations adjacent to individual salt bodies. To achieve this goal the burial and deformational history of the sediment must be modelled from initial deposition to the current-day state, while also accounting for the reaction and transport processes occurring in the margin. Accurate forward modelling is, however, complex, and necessitates advanced procedures for the prediction of fault formation and evolution, representation of the extreme deformations in the salt, and coupling of the geomechanical, fluid flow and temperature fields. The evolution of the sediment due to a combination of mechanical compaction, chemical compaction and creep relaxation must also be represented. In this paper ongoing research on a computational approach for forward modelling complex structural evolution, with particular reference to passive margins driven by salt tectonics, is presented.
The approach is an extension of a previously published approach (Crook et al., 2006a, 2006b) that focused on predictive modelling of structure evolution in 2-D sandbox experiments, and in particular two extensional sandbox experiments that exhibit complex fault development including a series of superimposed crestal collapse graben systems (McClay, 1990). The formulation adopts a finite strain Lagrangian method, complemented by advanced localization prediction algorithms and robust and efficient automated adaptive meshing techniques. The sediment is represented by an elasto-viscoplastic constitutive model based on extended critical state concepts, which enables representation of the combined effect of mechanical and chemical compaction. This is achieved by directly coupling the evolution of the material state boundary surface with both the mechanically and chemically driven porosity change. Using these procedures the evolution of the geological structures arises naturally from the imposed boundary conditions without the requirement of seeding using initial imperfections. Simulations are presented for regional bench-scale models based on the analogue experiments presented by Fort et al. (2004), together with additional insights provided by the numerical models. It is shown that the behaviour observed in both the extensional and compressional zones of these analogue models arises naturally in the finite element simulations. Extension of these models to the field scale is then discussed and several simulations are presented to highlight important issues related to practical field-scale numerical modelling.

  4. New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations

    NASA Technical Reports Server (NTRS)

    Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.

    2012-01-01

    In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid-scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation-minus-analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large-scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project, which quantified the impact of uncertainty in satellite-constrained CO2 flux estimates on atmospheric mixing ratios, to assess the major factors governing uncertainty in global and regional trace gas distributions.
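    The first set of experiments rests on a simple diagnostic: the spread (standard deviation) across the perturbed-physics ensemble at each location. A minimal sketch, with purely hypothetical mixing-ratio values:

```python
from statistics import mean, pstdev

# Hypothetical CO2 mixing ratios (ppm) at four grid cells from five
# perturbed-physics ensemble members (illustrative numbers only):
members = [
    [395.1, 398.4, 401.0, 396.2],
    [395.6, 398.0, 400.2, 396.9],
    [394.8, 399.1, 401.5, 395.8],
    [395.3, 397.7, 400.7, 396.5],
    [395.0, 398.6, 400.9, 396.1],
]

# Ensemble mean and spread (standard deviation across members) per grid cell;
# large spread flags locations where parameterized transport dominates uncertainty.
ens_mean = [mean(col) for col in zip(*members)]
spread = [pstdev(col) for col in zip(*members)]
```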

  5. Results of the Greenland Ice Sheet Model Initialisation Experiments ISMIP6 - initMIP-Greenland

    NASA Astrophysics Data System (ADS)

    Goelzer, H.; Nowicki, S.; Edwards, T.; Beckley, M.; Abe-Ouchi, A.; Aschwanden, A.; Calov, R.; Gagliardini, O.; Gillet-Chaulet, F.; Golledge, N. R.; Gregory, J. M.; Greve, R.; Humbert, A.; Huybrechts, P.; Larour, E. Y.; Lipscomb, W. H.; Le clec'h, S.; Lee, V.; Kennedy, J. H.; Pattyn, F.; Payne, A. J.; Rodehacke, C. B.; Rückamp, M.; Saito, F.; Schlegel, N.; Seroussi, H. L.; Shepherd, A.; Sun, S.; van de Wal, R.; Ziemen, F. A.

    2016-12-01

    Earlier large-scale Greenland ice sheet sea-level projections, e.g. those run during the ice2sea and SeaRISE initiatives, have shown that ice sheet initialisation can have a large effect on the projections and give rise to important uncertainties. The goal of this intercomparison exercise (initMIP-Greenland) is to compare, evaluate and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties. It is the first in a series of ice sheet model intercomparison activities within ISMIP6 (Ice Sheet Model Intercomparison Project for CMIP6). Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of 1) the initial present-day state of the ice sheet and 2) the response in two schematic forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without any forcing) and response to a large perturbation (prescribed surface mass balance anomaly). We present and discuss final results of the intercomparison and highlight important uncertainties with respect to projections of the Greenland ice sheet sea-level contribution.

  6. Scale-dependent cyclone-anticyclone asymmetry in a forced rotating turbulence experiment

    NASA Astrophysics Data System (ADS)

    Gallet, B.; Campagne, A.; Cortet, P.-P.; Moisy, F.

    2014-03-01

    We characterize the statistical and geometrical properties of the cyclone-anticyclone asymmetry in a statistically steady forced rotating turbulence experiment. Turbulence is generated by a set of vertical flaps which continuously inject velocity fluctuations towards the center of a tank mounted on a rotating platform. We first characterize the cyclone-anticyclone asymmetry from conventional single-point vorticity statistics. We propose a phenomenological model to explain the emergence of the asymmetry in the experiment, from which we predict scaling laws for the root-mean-square velocity in good agreement with the experimental data. We further quantify the cyclone-anticyclone asymmetry using a set of third-order two-point velocity correlations. We focus on the correlations which are nonzero only if the cyclone-anticyclone symmetry is broken. They offer two advantages over single-point vorticity statistics: first, they are defined from velocity measurements only, so an accurate resolution of the Kolmogorov scale is not required; second, they provide information on the scale-dependence of the cyclone-anticyclone asymmetry. We compute these correlation functions analytically for a random distribution of independent identical vortices. These model correlations describe well the experimental ones, indicating that the cyclone-anticyclone asymmetry is dominated by the large-scale long-lived cyclones.
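    Single-point vorticity statistics of the kind used here typically quantify cyclone-anticyclone asymmetry through the skewness of the vertical-vorticity distribution (positive skewness indicating cyclone dominance). A minimal sketch on synthetic data, where a quadratic distortion of Gaussian samples stands in for the asymmetric vorticity field:

```python
import random
from statistics import mean, pstdev

def skewness(samples):
    """Third standardized moment; positive values indicate dominance of
    positive (cyclonic) vorticity in the rotating frame."""
    m, s = mean(samples), pstdev(samples)
    return mean([((w - m) / s) ** 3 for w in samples])

# Synthetic vorticity samples: a quadratic distortion of Gaussian noise
# produces the positive tail that long-lived cyclones would contribute.
random.seed(0)
vort = []
for _ in range(5000):
    z = random.gauss(0.0, 1.0)
    vort.append(z + 0.5 * z * z)
```

    A symmetric sample gives zero skewness, while the distorted field above gives a clearly positive value, mirroring the asymmetry the experiment measures.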

  7. A two-scale scattering model with application to the JONSWAP '75 aircraft microwave scatterometer experiment

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1977-01-01

    The general problem of bistatic scattering from a two-scale surface was evaluated. The treatment was entirely two-dimensional and in a vector formulation independent of any particular coordinate system. The two-scale scattering model was then applied to backscattering from the sea surface. In particular, the model was used in conjunction with the JONSWAP 1975 aircraft scatterometer measurements to determine the sea surface's two-scale roughness distributions, namely the probability density of the large-scale surface slope and the capillary wavenumber spectrum. Best fits yield, on average, a 0.7 dB rms difference between the model computations and the vertical-polarization measurements of the normalized radar cross section. Correlations between the distribution parameters and the wind speed were established from linear least-squares regressions.

  8. Groundwater development stress: Global-scale indices compared to regional modeling

    USGS Publications Warehouse

    Alley, William; Clark, Brian R.; Ely, Matt; Faunt, Claudia

    2018-01-01

    The increased availability of global datasets and technologies such as global hydrologic models and the Gravity Recovery and Climate Experiment (GRACE) satellites have resulted in a growing number of global-scale assessments of water availability using simple indices of water stress. Developed initially for surface water, such indices are increasingly used to evaluate global groundwater resources. We compare indices of groundwater development stress for three major agricultural areas of the United States to information available from regional water budgets developed from detailed groundwater modeling. These comparisons illustrate the potential value of regional-scale analyses to supplement global hydrological models and GRACE analyses of groundwater depletion. Regional-scale analyses allow assessments of water stress that better account for scale effects, the dynamics of groundwater flow systems, the complexities of irrigated agricultural systems, and the laws, regulations, engineering, and socioeconomic factors that govern groundwater use. Strategic use of regional-scale models with global-scale analyses would greatly enhance knowledge of the global groundwater depletion problem.
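    Global-scale assessments of the kind being compared typically reduce water stress to a use-to-availability ratio. A minimal sketch of such an index, with hypothetical regional water budgets (what such a ratio misses is precisely the paper's point):

```python
def stress_index(withdrawals, recharge):
    """Groundwater development stress as annual withdrawals divided by
    annual recharge (same volumetric units); > 1 implies depletion."""
    return withdrawals / recharge

# Hypothetical regional water budgets, (withdrawals, recharge) in km^3/yr:
regions = {
    "aquifer_A": (12.0, 15.0),  # moderate stress
    "aquifer_B": (18.0, 9.0),   # withdrawals exceed recharge
}
indices = {name: stress_index(w, r) for name, (w, r) in regions.items()}
```

    A regional model replaces the single recharge number with a dynamic water budget that accounts for flow-system response and induced recharge, which is the comparison the paper draws.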

  9. Unmanned Vehicle Material Flammability Test

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Fernandez-Pello, A. Carlos; T’ien, James S.; Torero, Jose L.; Cowlard, Adam; Rouvreau, Sebastian; Minster, Olivier; Toth, Balazs; Legros, Guillaume

    2013-01-01

    Microgravity combustion phenomena have been an active area of research for the past three decades; however, there have been very few experiments directly studying spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample and environment sizes typical of those expected in a spacecraft fire. All previous experiments have been limited to samples of the order of 10 cm in length and width or smaller. Terrestrial fire safety standards for all other habitable volumes on Earth, e.g. mines, buildings, airplanes, ships, etc., are based upon testing conducted with full-scale fires. Given the large differences between fire behavior in normal and reduced gravity, this lack of an experimental database at relevant length scales forces spacecraft designers to base their designs on 1-g understanding. To address this gap, a large-scale spacecraft fire experiment has been proposed by an international team of investigators. This poster presents the objectives, status and concept of this collaborative international project to examine spacecraft material flammability at realistic scales. The concept behind this project is to utilize an unmanned spacecraft, such as an Orbital Cygnus vehicle, after it has completed its delivery of cargo to the ISS and has begun its return journey to Earth. This experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. A computer modeling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the examination of fire behavior on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation.
This will be the first opportunity to examine microgravity flame behavior at scales approximating a spacecraft fire.

  10. ORNL Pre-test Analyses of A Large-scale Experiment in STYLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Paul T; Yin, Shengjun; Klasky, Hilda B

    Oak Ridge National Laboratory (ORNL) is conducting a series of numerical analyses to simulate a large scale mock-up experiment planned within the European Network for Structural Integrity for Lifetime Management non-RPV Components (STYLE). STYLE is a European cooperative effort to assess the structural integrity of (non-reactor pressure vessel) reactor coolant pressure boundary components relevant to ageing and life-time management and to integrate the knowledge created in the project into mainstream nuclear industry assessment codes. ORNL contributes work-in-kind support to STYLE Work Package 2 (Numerical Analysis/Advanced Tools) and Work Package 3 (Engineering Assessment Methods/LBB Analyses). This paper summarizes the current status of ORNL analyses of the STYLE Mock-Up3 large-scale experiment to simulate and evaluate crack growth in a cladded ferritic pipe. The analyses are being performed in two parts. In the first part, advanced fracture mechanics models are being developed and performed to evaluate several experiment designs taking into account the capabilities of the test facility while satisfying the test objectives. Then these advanced fracture mechanics models will be utilized to simulate the crack growth in the large scale mock-up test. For the second part, the recently developed ORNL SIAM-PFM open-source, cross-platform, probabilistic computational tool will be used to generate an alternative assessment for comparison with the advanced fracture mechanics model results. The SIAM-PFM probabilistic analysis of the Mock-Up3 experiment will utilize fracture modules that are installed into a general probabilistic framework. The probabilistic results of the Mock-Up3 experiment obtained from SIAM-PFM will be compared to those results generated using the deterministic 3D nonlinear finite-element modeling approach.
The objective of the probabilistic analysis is to provide uncertainty bounds that will assist in assessing the more detailed 3D finite-element solutions and to also assess the level of confidence that can be placed in the best-estimate finite-element solutions.

  11. Fate and Transport of Bacteriophage (MS2 and PRD1) During Field-Scale Infiltration at a Research Site in Los Angeles County, CA

    NASA Astrophysics Data System (ADS)

    Anders, R.; Chrysikopoulos, C. V.

    2003-12-01

    As the use of tertiary-treated municipal wastewater (recycled water) for replenishment purposes continues to increase, provisions are being established to protect ground-water resources by ensuring that adequate soil-retention time and distance requirements are met for pathogen removal. However, many of the factors controlling virus fate and transport (e.g. hydraulic conditions, ground-water chemistry, and sediment mineralogy) are interrelated and poorly understood. Therefore, conducting field-scale experiments using surrogates for human enteric viruses at an actual recharge basin that uses recycled water may represent the best approach for establishing adequate setback requirements. Three field-scale infiltration experiments were conducted at such a basin using bacterial viruses (bacteriophage) MS2 and PRD1 as surrogates for human viruses, bromide as a conservative tracer, and recycled water. The specific research site consists of a test basin constructed adjacent to a large recharge facility (spreading grounds) located in the Montebello Forebay of Los Angeles County, California. The soil beneath the test basin is predominantly medium to coarse, moderately sorted, grayish-brown sand. The first experiment was conducted over a 2-day period to determine the feasibility of conducting field-scale infiltration experiments using recycled water seeded with high concentrations of bacteriophage and bromide as tracers. Based on the results of the first experiment, a second experiment was completed when similar hydraulic conditions existed at the test basin. The third infiltration experiment was conducted to confirm the results obtained from the second experiment. Data were obtained for samples collected during the second and third field-scale infiltration experiments from the test basin itself and from depths of 0.3, 0.6, 1.0, 1.5, 3.0, and 7.6 m below the bottom of the test basin. 
These field-scale tracer experiments indicate bacteriophage are attenuated by removal and (or) inactivation during subsurface transport. To simulate the transport and fate of viruses during infiltration, a nonlinear least-squares regression program was used to fit a one-dimensional virus transport model to the experimental data. The model simulates virus transport in homogeneous, saturated porous media with first-order adsorption (or filtration) and inactivation. Furthermore, the model obtains a semi-analytical solution for the special case of a broad pulse and time-dependent source concentration using the principle of superposition. The fitted parameters include the clogging and declogging rate constants and the inactivation constants of suspended and adsorbed viruses. Preliminary results show a reasonable match of the first arrival of bacteriophage and bromide.
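    The transport model described is one-dimensional advection-dispersion with first-order attachment and inactivation. As a minimal sketch of that model structure, here is the classical closed-form solution for a continuous source with first-order decay (a van Genuchten-type solution, not the authors' exact semi-analytical superposition solution; parameter values are hypothetical):

```python
import math

def concentration(x, t, v, D, k, c0=1.0):
    """C(x, t) for 1-D advection-dispersion with first-order decay k and a
    continuous source of strength c0 at x = 0 (v: velocity, D: dispersion)."""
    u = v * math.sqrt(1.0 + 4.0 * k * D / v**2)
    a = math.exp(x * (v - u) / (2.0 * D)) * math.erfc((x - u * t) / (2.0 * math.sqrt(D * t)))
    b = math.exp(x * (v + u) / (2.0 * D)) * math.erfc((x + u * t) / (2.0 * math.sqrt(D * t)))
    return 0.5 * c0 * (a + b)

# Hypothetical infiltration parameters: v = 1 m/d, D = 0.1 m^2/d, k = 0.5 1/d,
# evaluated at depths similar to the shallower sampling depths in the experiments:
profile = [concentration(x, t=10.0, v=1.0, D=0.1, k=0.5) for x in (0.3, 0.6, 1.0, 1.5)]
```

    The decay constant k lumps attachment and inactivation here; fitting separate clogging, declogging and inactivation constants, as the study does, requires the fuller model.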

  12. Statistical Model to Analyze Quantitative Proteomics Data Obtained by 18O/16O Labeling and Linear Ion Trap Mass Spectrometry

    PubMed Central

    Jorge, Inmaculada; Navarro, Pedro; Martínez-Acedo, Pablo; Núñez, Estefanía; Serrano, Horacio; Alfranca, Arántzazu; Redondo, Juan Miguel; Vázquez, Jesús

    2009-01-01

    Statistical models for the analysis of protein expression changes by stable isotope labeling are still poorly developed, particularly for data obtained by 16O/18O labeling. Moreover, large-scale test experiments to validate the null hypothesis are lacking. Although the study of mechanisms underlying biological actions promoted by vascular endothelial growth factor (VEGF) on endothelial cells is of considerable interest, quantitative proteomics studies on this subject are scarce and have been performed after exposing cells to the factor for long periods of time. In this work we present the largest quantitative proteomics study to date on the short-term effects of VEGF on human umbilical vein endothelial cells by 18O/16O labeling. Current statistical models based on normality and variance homogeneity were found unsuitable to describe the null hypothesis in a large-scale test experiment performed on these cells, producing false expression changes. A random effects model was developed including four different sources of variance at the spectrum-fitting, scan, peptide, and protein levels. With the new model the number of outliers at scan and peptide levels was negligible in three large-scale experiments, and only one false protein expression change was observed in the test experiment among more than 1000 proteins. The new model allowed the detection of significant protein expression changes upon VEGF stimulation for 4 and 8 h. The consistency of the changes observed at 4 h was confirmed by a replica at a smaller scale and further validated by Western blot analysis of some proteins. Most of the observed changes have not been described previously and are consistent with a pattern of protein expression that dynamically changes over time following the evolution of the angiogenic response.
With this statistical model the 18O labeling approach emerges as a very promising and robust alternative to perform quantitative proteomics studies at a depth of several thousand proteins. PMID:19181660

  13. Probing the frontiers of particle physics with tabletop-scale experiments.

    PubMed

    DeMille, David; Doyle, John M; Sushkov, Alexander O

    2017-09-08

    The field of particle physics is in a peculiar state. The standard model of particle theory successfully describes every fundamental particle and force observed in laboratories, yet fails to explain properties of the universe such as the existence of dark matter, the amount of dark energy, and the preponderance of matter over antimatter. Huge experiments, of increasing scale and cost, continue to search for new particles and forces that might explain these phenomena. However, these frontiers also are explored in certain smaller, laboratory-scale "tabletop" experiments. This approach uses precision measurement techniques and devices from atomic, quantum, and condensed-matter physics to detect tiny signals due to new particles or forces. Discoveries in fundamental physics may well come first from small-scale experiments of this type. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  14. Three-dimensional, thermo-mechanical and dynamical analogue experiments of subduction: first results

    NASA Astrophysics Data System (ADS)

    Boutelier, D.; Oncken, O.

    2008-12-01

    We present a new analogue modeling technique developed to investigate the mechanics of the subduction process and the build-up of subduction orogenies. The model consists of a tank filled with water representing the asthenosphere and two lithospheric plates made of temperature-sensitive hydrocarbon compositional systems. These materials possess elasto-plastic properties allowing the scaling of thermal and mechanical processes. A conductive thermal gradient is imposed in the lithosphere prior to deformation. The temperatures of the asthenosphere and model surface are imposed and controlled with an electric heater, two infrared ceramic heat emitters, two thermocouples and a thermo-regulator. This system allows an unobstructed view of the model surface, which is monitored using a stereoscopic particle image technique. This monitoring technique provides a precise quantification of the horizontal deformation and variations of elevation in the three-dimensional model. Convergence is imposed with a piston moving at a constant rate or pushing at a constant stress. The velocity is scaled using the dimensionless ratio of thermal conduction over advection. The experiments are first run at a constant convergence rate, and the stress in the horizontal direction of convergence is recorded. The experiment is then reproduced with a constant-stress boundary condition, where the stress value is set to the average value obtained in the previous experiment. Therefore, an initial velocity allowing proper scaling of heat exchanges is obtained, but deformation in the model and spatial variations of parameters such as density or friction coefficient can produce variations of plate convergence velocity. This in turn impacts the strength of the model lithosphere because it changes the model thermal structure. In the first presented experiments the model lithosphere is one layer and the plate boundary is linear.
The effects of variations of the subducting plate thickness, density and the lubrication of the interface between the plates are investigated.

  15. Temporal and Spatial Variation in Peatland Carbon Cycling and Implications for Interpreting Responses of an Ecosystem-Scale Warming Experiment

    Treesearch

    Natalie A. Griffiths; Paul J. Hanson; Daniel M. Ricciuto; Colleen M. Iversen; Anna M. Jensen; Avni Malhotra; Karis J. McFarlane; Richard J. Norby; Khachik Sargsyan; Stephen D. Sebestyen; Xiaoying Shi; Anthony P. Walker; Eric J. Ward; Jeffrey M. Warren; David J. Weston

    2017-01-01

    We are conducting a large-scale, long-term climate change response experiment in an ombrotrophic peat bog in Minnesota to evaluate the effects of warming and elevated CO2 on ecosystem processes using empirical and modeling approaches. To better frame future assessments of peatland responses to climate change, we characterized and compared spatial...

  16. Overview of the Bushland Evapotranspiration and Agricultural Remote sensing EXperiment 2008 (BEAREX08): A field experiment evaluating methods for quantifying ET at multiple scales

    NASA Astrophysics Data System (ADS)

    Evett, Steven R.; Kustas, William P.; Gowda, Prasanna H.; Anderson, Martha C.; Prueger, John H.; Howell, Terry A.

    2012-12-01

    In 2008, scientists from seven federal and state institutions worked together to investigate temporal and spatial variations of evapotranspiration (ET) and surface energy balance in a semi-arid irrigated and dryland agricultural region of the Southern High Plains in the Texas Panhandle. This Bushland Evapotranspiration and Agricultural Remote sensing EXperiment 2008 (BEAREX08) involved determination of micrometeorological fluxes (surface energy balance) in four weighing lysimeter fields (each 4.7 ha) containing irrigated and dryland cotton and in nearby bare soil, wheat stubble and rangeland fields using nine eddy covariance stations, three large aperture scintillometers, and three Bowen ratio systems. In coordination with satellite overpasses, flux and remote sensing aircraft flew transects over the surrounding fields and region encompassing an area contributing fluxes from 10 to 30 km upwind of the USDA-ARS lysimeter site. Tethered balloon soundings were conducted over the irrigated fields to investigate the effect of advection on local boundary layer development. Local ET was measured using four large weighing lysimeters, while field scale estimates were made by soil water balance with a network of neutron probe profile water sites and from the stationary flux systems. Aircraft and satellite imagery were obtained at different spatial and temporal resolutions. Plot-scale experiments dealt with row orientation and crop height effects on spatial and temporal patterns of soil surface temperature, soil water content, soil heat flux, evaporation from soil in the interrow, plant transpiration and canopy and soil radiation fluxes. 
The BEAREX08 field experiment was unique in its assessment of ET fluxes over a broad range of spatial scales: comparing direct and indirect methods at local scales with remote sensing-based methods and models using aircraft and satellite imagery at local to regional scales, and comparing mass balance-based ET ground truth with eddy covariance and remote sensing-based methods. Here we present an overview of the experiment and a summary of preliminary findings described in this special issue of AWR. Our understanding of the role of advection in the measurement and modeling of ET is advanced by these papers integrating measurements and model estimates.

  17. Combining deterministic and stochastic velocity fields in the analysis of deep crustal seismic data

    NASA Astrophysics Data System (ADS)

    Larkin, Steven Paul

    Standard crustal seismic modeling obtains deterministic velocity models that ignore the effects of wavelength-scale heterogeneity, known to exist within the Earth's crust. Stochastic velocity models are a means to include wavelength-scale heterogeneity in the modeling. These models are defined by statistical parameters obtained from geologic maps of exposed crystalline rock, and are thus tied to actual geologic structures. Combining both deterministic and stochastic velocity models into a single model allows a realistic full wavefield (2-D) to be computed. By comparing these simulations to recorded seismic data, the effects of wavelength-scale heterogeneity can be investigated. Combined deterministic and stochastic velocity models are created for two datasets, the 1992 RISC seismic experiment in southeastern California and the 1986 PASSCAL seismic experiment in northern Nevada. The RISC experiment was located in the transition zone between the Salton Trough and the southern Basin and Range province. A high-velocity body previously identified beneath the Salton Trough is constrained to pinch out beneath the Chocolate Mountains to the northeast. The lateral extent of this body is evidence for the ephemeral nature of rifting loci as a continent is initially rifted. Stochastic modeling of wavelength-scale structures above this body indicates that little more than 5% mafic intrusion into a more felsic continental crust is responsible for the observed reflectivity. Modeling of the wide-angle RISC data indicates that coda waves following PmP are initially dominated by diffusion of energy out of the near-surface basin as the wavefield reverberates within this low-velocity layer. At later times, this coda consists of scattered body waves and P-to-S conversions. Surface waves do not play a significant role in this coda.
Modeling of the PASSCAL dataset indicates that a high-gradient crust-mantle transition zone or a rough Moho interface is necessary to reduce precritical PmP energy. Possibly related, inconsistencies in published velocity models are rectified by hypothesizing the existence of large, elongate, high-velocity bodies at the base of the crust, oriented parallel to, and of similar scale as, the basins and ranges at the surface. This structure would result in an anisotropic lower crust.
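    The stochastic velocity models described are random media defined by statistical parameters such as a correlation length and a velocity standard deviation. A minimal 1-D sketch, assuming a Gaussian autocorrelation; the published models are 2-D and use parameters measured from geologic maps:

```python
import math
import random

def correlated_noise(n, corr_len, sigma, seed=1):
    """1-D stochastic velocity perturbation: white noise convolved with a
    Gaussian kernel of width corr_len (in samples), rescaled to std sigma."""
    rng = random.Random(seed)
    white = [rng.gauss(0.0, 1.0) for _ in range(n)]
    half = int(3 * corr_len)
    kernel = [math.exp(-0.5 * (i / corr_len) ** 2) for i in range(-half, half + 1)]
    norm = math.sqrt(sum(w * w for w in kernel))  # preserves white-noise variance
    field = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            acc += w * white[(i + j - half) % n]  # periodic wrap for simplicity
        field.append(sigma * acc / norm)
    return field

# e.g. +/- a few hundred m/s of correlated perturbation around a background model:
dv = correlated_noise(1024, corr_len=5.0, sigma=200.0)
```

    Superimposing such a perturbation field on a deterministic background model is the essence of the combined models used in the study.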

  18. High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6

    NASA Astrophysics Data System (ADS)

    Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; Senior, Catherine A.; Bellucci, Alessio; Bao, Qing; Chang, Ping; Corti, Susanna; Fučkar, Neven S.; Guemas, Virginie; von Hardenberg, Jost; Hazeleger, Wilco; Kodama, Chihiro; Koenigk, Torben; Leung, L. Ruby; Lu, Jian; Luo, Jing-Jia; Mao, Jiafu; Mizielinski, Matthew S.; Mizuta, Ryo; Nobre, Paulo; Satoh, Masaki; Scoccimarro, Enrico; Semmler, Tido; Small, Justin; von Storch, Jin-Song

    2016-11-01

    Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. 
The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950-2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, "what are the origins and consequences of systematic model biases?", but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.

  19. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, Christopher P.; Olden, Julian D.; Lytle, David A.; Melis, Theodore S.; Schmidt, John C.; Bray, Erin N.; Freeman, Mary C.; Gido, Keith B.; Hemphill, Nina P.; Kennard, Mark J.; McMullen, Laura E.; Mims, Meryl C.; Pyron, Mark; Robinson, Christopher T.; Williams, John G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems.

  20. NASA Downscaling Project: Final Report

    NASA Technical Reports Server (NTRS)

    Ferraro, Robert; Waliser, Duane; Peters-Lidard, Christa

    2017-01-01

    A team of researchers from NASA Ames Research Center, Goddard Space Flight Center, the Jet Propulsion Laboratory, and Marshall Space Flight Center, along with university partners at UCLA, conducted an investigation to explore whether downscaling coarse resolution global climate model (GCM) predictions might provide valid insights into the regional impacts sought by decision makers. Since the computational cost of running global models at high spatial resolution for any useful climate scale period is prohibitive, the hope for downscaling is that a coarse resolution GCM provides sufficiently accurate synoptic scale information for a regional climate model (RCM) to accurately develop fine scale features that represent the regional impacts of a changing climate. As a proxy for a prognostic climate forecast model, and so that ground truth in the form of satellite and in-situ observations could be used for evaluation, the MERRA and MERRA-2 reanalyses were used to drive the NU-WRF regional climate model and a GEOS-5 replay. This was performed at various resolutions that were at factors of 2 to 10 higher than the reanalysis forcing. A number of experiments were conducted that varied resolution, model parameterizations, and intermediate scale nudging, for simulations over the continental US during the period from 2000-2010. The results of these experiments were compared to observational datasets to evaluate the output.

  1. Development of a Scale-up Tool for Pervaporation Processes

    PubMed Central

    Thiess, Holger; Strube, Jochen

    2018-01-01

    In this study, an engineering tool for the design and optimization of pervaporation processes is developed based on physico-chemical modelling coupled with laboratory/mini-plant experiments. The model incorporates the solution-diffusion mechanism, polarization effects (concentration and temperature), axial dispersion, pressure drop and the temperature drop in the feed channel due to vaporization of the permeating components. The permeance, being the key model parameter, was determined via dehydration experiments on a mini-plant scale for the binary mixtures ethanol/water and ethyl acetate/water. A second set of experimental data was utilized for the validation of the model for the two chemical systems. The industrially relevant ternary mixture, ethanol/ethyl acetate/water, was investigated close to its azeotropic point and compared to a simulation conducted with the determined binary permeance data. Experimental and simulation data agreed very well for the investigated process conditions. In order to test the scalability of the developed engineering tool, large-scale data from an industrial pervaporation plant used for the dehydration of ethanol were compared to a process simulation conducted with the validated physico-chemical model. Since the membranes employed at both mini-plant and industrial scale were of the same type, the permeance data could be transferred. The comparison of the measured and simulated data confirmed the scalability of the derived model. PMID:29342956
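    The solution-diffusion description above reduces, per component, to a permeance-times-driving-force flux law. The following is a minimal sketch of that relation; every numerical value is a hypothetical placeholder, not permeance data from the study.

```python
# Minimal solution-diffusion flux sketch for pervaporation (illustrative only).
# Partial flux of component i:  J_i = Q_i * (x_i * gamma_i * Psat_i - y_i * P_perm)
# All parameter values below are assumed placeholders, not data from the study.

def partial_flux(Q, x, gamma, Psat, y, P_perm):
    """Permeance-based partial flux from the feed/permeate fugacity difference.

    Q      -- permeance of component i [mol m-2 s-1 Pa-1]
    x, y   -- mole fractions in feed and permeate
    gamma  -- activity coefficient in the feed
    Psat   -- saturation pressure at feed temperature [Pa]
    P_perm -- total permeate-side pressure [Pa]
    """
    return Q * (x * gamma * Psat - y * P_perm)

# Water flux through a hydrophilic membrane (all values hypothetical)
J_water = partial_flux(Q=2e-6, x=0.10, gamma=2.3, Psat=19_900, y=0.95, P_perm=1_000)
print(J_water)  # positive: water permeates from feed to permeate side
```

    In a full module-scale model this flux law would be evaluated cell by cell along the feed channel, updating composition, temperature, and pressure as the abstract describes.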

  2. Cross-flow turbines: progress report on physical and numerical model studies at large laboratory scale

    NASA Astrophysics Data System (ADS)

    Wosnik, Martin; Bachant, Peter

    2016-11-01

    Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference, or taking advantage of constructive wake interaction. Experiments were carried out with large laboratory-scale cross-flow turbines of diameter D = O(1 m) using a turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797, Sandia National Laboratories.

  3. NASA Downscaling Project

    NASA Technical Reports Server (NTRS)

    Ferraro, Robert; Waliser, Duane; Peters-Lidard, Christa

    2017-01-01

    A team of researchers from NASA Ames Research Center, Goddard Space Flight Center, the Jet Propulsion Laboratory, and Marshall Space Flight Center, along with university partners at UCLA, conducted an investigation to explore whether downscaling coarse resolution global climate model (GCM) predictions might provide valid insights into the regional impacts sought by decision makers. Since the computational cost of running global models at high spatial resolution for any useful climate scale period is prohibitive, the hope for downscaling is that a coarse resolution GCM provides sufficiently accurate synoptic scale information for a regional climate model (RCM) to accurately develop fine scale features that represent the regional impacts of a changing climate. As a proxy for a prognostic climate forecast model, and so that ground truth in the form of satellite and in-situ observations could be used for evaluation, the MERRA and MERRA-2 reanalyses were used to drive the NU-WRF regional climate model and a GEOS-5 replay. This was performed at various resolutions that were at factors of 2 to 10 higher than the reanalysis forcing. A number of experiments were conducted that varied resolution, model parameterizations, and intermediate scale nudging, for simulations over the continental US during the period from 2000-2010. The results of these experiments were compared to observational datasets to evaluate the output.

  4. Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation

    DTIC Science & Technology

    2016-08-02

    Briefing outline (recovered from slide extraction): Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation. Tamer M. Wasfy, Paramsothy Jayakumar, Dave... Topics: NRMM; objectives; soft soils; review of physics-based soil models; MBD/DEM modeling formulation (joint and contact constraints, DEM cohesive soil model); cone penetrometer experiment; vehicle-soil model; vehicle mobility DOE procedure; simulation results; concluding remarks.

  5. ASTP fluid transfer measurement experiment. [using breadboard model

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.

    1974-01-01

    The ASTP fluid transfer measurement experiment flight system design concept was verified by the demonstration and test of a breadboard model. In addition to the breadboard effort, a conceptual design of the corresponding flight system was generated and a full scale mockup fabricated. A preliminary CEI specification for the flight system was also prepared.

  6. Representation of fine scale atmospheric variability in a nudged limited area quasi-geostrophic model: application to regional climate modelling

    NASA Astrophysics Data System (ADS)

    Omrani, H.; Drobinski, P.; Dubos, T.

    2009-09-01

    In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited area model (LAM) simulation. The limited area model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In both sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
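    The trajectory-divergence estimate of the Lyapunov exponent described above can be sketched on a toy system. Here the logistic map stands in for the quasi-geostrophic model (purely an assumption for illustration); two nearby trajectories are evolved and their separation is renormalized after every step.

```python
import math

# Toy estimate of the largest Lyapunov exponent from the divergence of two
# nearby trajectories, renormalizing their separation each step. The logistic
# map at r = 4 stands in for the dynamical system; its exponent is ln 2.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def lyapunov_estimate(x0=0.3, eps=1e-9, steps=2000):
    x, y = x0, x0 + eps
    total = 0.0
    for _ in range(steps):
        x, y = logistic(x), logistic(y)
        d = max(abs(y - x), 1e-300)          # guard against exact coincidence
        total += math.log(d / eps)           # log growth of the separation
        y = x + eps if y >= x else x - eps   # renormalize back to distance eps
    return total / steps

print(round(lyapunov_estimate(), 2))  # converges toward ln 2 ~ 0.693
```

    The renormalization step is what makes the estimate usable for chaotic systems: without it the separation saturates at the attractor size and the growth rate can no longer be read off.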

  7. The propagation of sound in tunnels

    NASA Astrophysics Data System (ADS)

    Li, Kai Ming; Iu, King Kwong

    2002-11-01

    Sound propagation in tunnels is addressed theoretically and experimentally. In many previous studies, the image source method is frequently used. However, these early theoretical models are somewhat inadequate because the effect of multiple reflections in long enclosures is often modeled by the incoherent summation of contributions from all image sources. Ignoring the phase effect, these numerical models are unlikely to be satisfactory for predicting the intricate interference patterns due to contributions from each image source. In the present paper, the interference effect is incorporated by summing the contributions from the image sources coherently. To develop a simple numerical model, tunnels are represented by long rectangular enclosures with either geometrically reflecting or impedance boundaries. Scale model experiments are conducted for the validation of the numerical model. In some of the scale model experiments, the enclosure walls are lined with a carpet for simulating the impedance boundary condition. Large-scale outdoor measurements have also been conducted in two tunnels designed originally for road traffic use. It has been shown that the proposed numerical model agrees reasonably well with experimental data. [Work supported by the Research Grants Council, The Industry Department, NAP Acoustics (Far East) Ltd., and The Hong Kong Polytechnic University.]

  8. A Comparison of Methods for a Priori Bias Correction in Soil Moisture Data Assimilation

    NASA Technical Reports Server (NTRS)

    Kumar, Sujay V.; Reichle, Rolf H.; Harrison, Kenneth W.; Peters-Lidard, Christa D.; Yatheendradas, Soni; Santanello, Joseph A.

    2011-01-01

    Data assimilation is being increasingly used to merge remotely sensed land surface variables such as soil moisture, snow and skin temperature with estimates from land models. Its success, however, depends on unbiased model predictions and unbiased observations. Here, a suite of continental-scale, synthetic soil moisture assimilation experiments is used to compare two approaches that address typical biases in soil moisture prior to data assimilation: (i) parameter estimation to calibrate the land model to the climatology of the soil moisture observations, and (ii) scaling of the observations to the model's soil moisture climatology. To enable this research, an optimization infrastructure was added to the NASA Land Information System (LIS) that includes gradient-based optimization methods and global, heuristic search algorithms. The land model calibration eliminates the bias but does not necessarily result in more realistic model parameters. Nevertheless, the experiments confirm that model calibration yields assimilation estimates of surface and root zone soil moisture that are as skillful as those obtained through scaling of the observations to the model's climatology. Analysis of innovation diagnostics underlines the importance of addressing bias in soil moisture assimilation and confirms that both approaches adequately address the issue.
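    The second approach above, scaling observations to the model's climatology, is commonly implemented as CDF matching: each observation is replaced by the model-climatology value at the same empirical quantile. A minimal sketch follows; it is an illustration of the general technique, not the LIS implementation, and the soil-moisture values are invented.

```python
import numpy as np

# Minimal CDF-matching sketch: rescale observations so their empirical
# distribution matches the model climatology, removing systematic bias
# before assimilation. Illustrative only; not the NASA LIS implementation.

def cdf_match(obs, model_clim):
    """Map each obs value to the model-climatology value at the same quantile."""
    ranks = np.argsort(np.argsort(obs))              # 0..n-1 rank of each obs
    quantiles = (ranks + 0.5) / len(obs)             # empirical CDF positions
    return np.quantile(np.asarray(model_clim), quantiles)

obs = np.array([0.05, 0.10, 0.15, 0.20, 0.25])          # dry-biased retrievals
model_clim = np.array([0.20, 0.25, 0.30, 0.35, 0.40])   # model soil moisture
print(cdf_match(obs, model_clim))  # shifted into the model's climatological range
```

    After matching, the rescaled observations share the model's mean and spread, so the assimilation update corrects random error rather than fighting a systematic offset.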

  9. Using Blur to Affect Perceived Distance and Size

    PubMed Central

    HELD, ROBERT T.; COOPER, EMILY A.; O’BRIEN, JAMES F.; BANKS, MARTIN S.

    2011-01-01

    We present a probabilistic model of how viewers may use defocus blur in conjunction with other pictorial cues to estimate the absolute distances to objects in a scene. Our model explains how the pattern of blur in an image together with relative depth cues indicates the apparent scale of the image’s contents. From the model, we develop a semiautomated algorithm that applies blur to a sharply rendered image and thereby changes the apparent distance and scale of the scene’s contents. To examine the correspondence between the model/algorithm and actual viewer experience, we conducted an experiment with human viewers and compared their estimates of absolute distance to the model’s predictions. We did this for images with geometrically correct blur due to defocus and for images with commonly used approximations to the correct blur. The agreement between the experimental data and model predictions was excellent. The model predicts that some approximations should work well and that others should not. Human viewers responded to the various types of blur in much the way the model predicts. The model and algorithm allow one to manipulate blur precisely and to achieve the desired perceived scale efficiently. PMID:21552429

  10. Scaled Eagle Nebula Experiments on NIF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pound, Marc W.

    We performed scaled laboratory experiments at the National Ignition Facility laser to assess models for the creation of pillar structures in star-forming clouds of molecular hydrogen, in particular the famous Pillars of the Eagle Nebula. Because pillars typically point towards nearby bright ultraviolet stars, sustained directional illumination appears to be critical to pillar formation. The experiments mock up illumination from a cluster of ultraviolet-emitting stars, using a novel long duration (30--60 ns), directional, laser-driven x-ray source consisting of multiple radiation cavities illuminated in series. Our pillar models are assessed using the morphology of the Eagle Pillars observed with the Hubble Space Telescope, and measurements of column density and velocity in Eagle Pillar II obtained at the BIMA and CARMA millimeter wave facilities. In the first experiments we assess a shielding model for pillar formation. The experimental data suggest that a shielding pillar can match the observed morphology of Eagle Pillar II, and the observed Pillar II column density and velocity, if augmented by late time cometary growth.

  11. Multi-Scale Modeling of Liquid Phase Sintering Affected by Gravity: Preliminary Analysis

    NASA Technical Reports Server (NTRS)

    Olevsky, Eugene; German, Randall M.

    2012-01-01

    A multi-scale simulation concept taking into account the impact of gravity on liquid phase sintering is described. The gravity influence can be included at both the micro- and macro-scales. At the micro-scale, the diffusion mass-transport is directionally modified in the framework of kinetic Monte-Carlo simulations to include the impact of gravity. The micro-scale simulations can provide the values of the constitutive parameters for macroscopic sintering simulations. At the macro-scale, we are attempting to embed a continuum model of sintering into a finite-element framework that includes the gravity forces and substrate friction. If successful, the finite element analysis will enable predictions relevant to space-based processing, including size, shape, and property predictions. Model experiments are underway to support the models via extraction of viscosity moduli versus composition, particle size, heating rate, temperature and time.

  12. Reactive flow modeling of small scale detonation failure experiments for a baseline non-ideal explosive

    NASA Astrophysics Data System (ADS)

    Kittell, David E.; Cummock, Nick R.; Son, Steven F.

    2016-08-01

    Small scale characterization experiments using only 1-5 g of a baseline ammonium nitrate plus fuel oil (ANFO) explosive are discussed and simulated using an ignition and growth reactive flow model. There exists a strong need for the small scale characterization of non-ideal explosives in order to adequately survey the wide parameter space in sample composition, density, and microstructure of these materials. However, it is largely unknown in the scientific community whether any useful or meaningful result may be obtained from detonation failure, and whether a minimum sample size or level of confinement exists for the experiments. In this work, it is shown that the parameters of an ignition and growth rate law may be calibrated using the small scale data, which is obtained from a 35 GHz microwave interferometer. Calibration is feasible when the samples are heavily confined and overdriven; this conclusion is supported with detailed simulation output, including pressure and reaction contours inside the ANFO samples. The resulting shock wave velocity is most likely a combined chemical-mechanical response, and simulations of these experiments require an accurate unreacted equation of state (EOS) in addition to the calibrated reaction rate. Other experiments are proposed to gain further insight into the detonation failure data, as well as to help discriminate between the role of the EOS and reaction rate in predicting the measured outcome.

  13. Reactive flow modeling of small scale detonation failure experiments for a baseline non-ideal explosive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kittell, David E.; Cummock, Nick R.; Son, Steven F.

    2016-08-14

    Small scale characterization experiments using only 1–5 g of a baseline ammonium nitrate plus fuel oil (ANFO) explosive are discussed and simulated using an ignition and growth reactive flow model. There exists a strong need for the small scale characterization of non-ideal explosives in order to adequately survey the wide parameter space in sample composition, density, and microstructure of these materials. However, it is largely unknown in the scientific community whether any useful or meaningful result may be obtained from detonation failure, and whether a minimum sample size or level of confinement exists for the experiments. In this work, it is shown that the parameters of an ignition and growth rate law may be calibrated using the small scale data, which is obtained from a 35 GHz microwave interferometer. Calibration is feasible when the samples are heavily confined and overdriven; this conclusion is supported with detailed simulation output, including pressure and reaction contours inside the ANFO samples. The resulting shock wave velocity is most likely a combined chemical-mechanical response, and simulations of these experiments require an accurate unreacted equation of state (EOS) in addition to the calibrated reaction rate. Other experiments are proposed to gain further insight into the detonation failure data, as well as to help discriminate between the role of the EOS and reaction rate in predicting the measured outcome.

  14. Experiments on integral length scale control in atmospheric boundary layer wind tunnel

    NASA Astrophysics Data System (ADS)

    Varshney, Kapil; Poddar, Kamal

    2011-11-01

    Accurate prediction of turbulent characteristics in the atmospheric boundary layer (ABL) depends on understanding the effects of surface roughness on the spatial distribution of velocity, turbulence intensity, and turbulence length scales. Simulation of the ABL characteristics has been performed in a short test section length wind tunnel to determine the appropriate length scale factor for modeling, which ensures correct aeroelastic behavior of structural models for non-aerodynamic applications. The ABL characteristics have been simulated by using various configurations of passive devices such as vortex generators, air barriers, and a slot in the test section floor which was extended into the contraction cone. Mean velocity and velocity fluctuations have been measured using a hot-wire anemometry system. Mean velocity, turbulence intensity, turbulence scale, and power spectral density of velocity fluctuations have been obtained from the experiments for various configurations of the passive devices. It is shown that the integral length scale factor can be controlled using various combinations of the passive devices.

  15. Perceptibility and the "Choice Experience": User Sensory Perceptions and Experiences Inform Vaginal Prevention Product Design.

    PubMed

    Guthrie, Kate Morrow; Dunsiger, Shira; Vargas, Sara E; Fava, Joseph L; Shaw, Julia G; Rosen, Rochelle K; Kiser, Patrick F; Kojic, E Milu; Friend, David R; Katz, David F

    The development of pericoital (on demand) vaginal HIV prevention technologies remains a global health priority. Clinical trials to date have been challenged by nonadherence, leading to an inability to demonstrate product efficacy. The work here provides new methodology and results to begin to address this limitation. We created validated scales that allow users to characterize sensory perceptions and experiences when using vaginal gel formulations. In this study, we sought to understand the user sensory perceptions and experiences (USPEs) that characterize the preferred product experience for each participant. Two hundred four women evaluated four semisolid vaginal formulations using the USPE scales at four randomly ordered formulation evaluation visits. Women were asked to select their preferred formulation experience for HIV prevention among the four formulations evaluated. The scale scores on the Sex-associated USPE scales (e.g., Initial Penetration and Leakage) for each participant's selected formulation were used in a latent class model analysis. Four classes of preferred formulation experiences were identified. Sociodemographic and sexual history variables did not predict class membership; however, four specific scales were significantly related to class: Initial Penetration, Perceived Wetness, Messiness, and Leakage. The range of preferred user experiences represented by the scale scores creates a potential target range for product development, such that products that elicit scale scores that fall within the preferred range may be more acceptable, or tolerable, to the population under study. It is recommended that similar analyses should be conducted with other semisolid vaginal formulations, and in other cultures, to determine product property and development targets.

  16. Modeling and Analysis of Realistic Fire Scenarios in Spacecraft

    NASA Technical Reports Server (NTRS)

    Brooker, J. E.; Dietrich, D. L.; Gokoglu, S. A.; Urban, D. L.; Ruff, G. A.

    2015-01-01

    An accidental fire inside a spacecraft is an unlikely, but very real emergency situation that can easily have dire consequences. While much has been learned over the past 25+ years of dedicated research on flame behavior in microgravity, a quantitative understanding of the initiation, spread, detection and extinguishment of a realistic fire aboard a spacecraft is lacking. Virtually all combustion experiments in microgravity have been small-scale, by necessity (hardware limitations in ground-based facilities and safety concerns in space-based facilities). Large-scale, realistic fire experiments are unlikely for the foreseeable future (unlike in terrestrial situations). Therefore, NASA will have to rely on scale modeling, extrapolation of small-scale experiments and detailed numerical modeling to provide the data necessary for vehicle and safety system design. This paper presents the results of parallel efforts to better model the initiation, spread, detection and extinguishment of fires aboard spacecraft. The first is a detailed numerical model using the freely available Fire Dynamics Simulator (FDS). FDS is a CFD code that numerically solves a large eddy simulation form of the Navier-Stokes equations. FDS provides a detailed treatment of the smoke and energy transport from a fire. The simulations provide a wealth of information, but are computationally intensive and not suitable for parametric studies where the detailed treatment of the mass and energy transport is unnecessary. The second path extends a model previously documented at ICES meetings that attempted to predict maximum survivable fires aboard spacecraft. This one-dimensional model simplifies the treatment of heat and mass transfer as well as toxic species production from a fire. These simplifications result in a code that is faster and more suitable for parametric studies (having already been used to help in the hatch design of the Multi-Purpose Crew Vehicle, MPCV).

  17. Modelling landscape evolution at the flume scale

    NASA Astrophysics Data System (ADS)

    Cheraghi, Mohsen; Rinaldo, Andrea; Sander, Graham C.; Barry, D. Andrew

    2017-04-01

    The ability of a large-scale Landscape Evolution Model (LEM) to simulate the soil surface morphological evolution as observed in a laboratory flume (1-m × 2-m surface area) was investigated. The soil surface was initially smooth, and was subjected to heterogeneous rainfall in an experiment designed to avoid rill formation. Low-cohesive fine sand was placed in the flume while the slope and relief height were 5% and 20 cm, respectively. Non-uniform rainfall with an average intensity of 85 mm h⁻¹ and a standard deviation of 26% was applied to the sediment surface for 16 h. We hypothesized that the complex overland water flow can be represented by a drainage discharge network, which was calculated via the micro-morphology and the rainfall distribution. Measurements included high resolution Digital Elevation Models that were captured at intervals during the experiment. The calibrated LEM captured the migration of the main flow path from the low precipitation area into the high precipitation area. Furthermore, both model and experiment showed a steep transition zone in soil elevation that moved upstream during the experiment. We conclude that the LEM is applicable under non-uniform rainfall and in the absence of surface incisions, thereby extending its applicability beyond that shown in previous applications. Keywords: Numerical simulation, Flume experiment, Particle Swarm Optimization, Sediment transport, River network evolution model.

  18. NBC Hazard Prediction Model Capability Analysis

    DTIC Science & Technology

    1999-09-01

    tactical units surveyed, only the 82nd Airborne Division indicated any real experience with either model. The tactical units surveyed did use some form... Tracer Experiment (1987) and ETEX = European Tracer Experiment (1994). These data sets include Phase I Dugway data, the Prairie Grass data set...

  19. Dynamic and impact contact mechanics of geologic materials: Grain-scale experiments and modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, David M.; Hopkins, Mark A.; Ketcham, Stephen A.

    2013-06-18

    High fidelity treatments of the generation and propagation of seismic waves in naturally occurring granular materials are becoming more practical given recent advancements in our ability to model complex particle shapes and their mechanical interaction. Of particular interest are the grain-scale processes that are activated by impact events and the characteristics of force transmission through grain contacts. To address this issue, we have developed a physics-based approach that involves laboratory experiments to quantify the dynamic contact and impact behavior of granular materials and incorporation of the observed behavior in discrete element models. The dynamic experiments do not involve particle damage, and emphasis is placed on measured values of contact stiffness and frictional loss. The normal stiffness observed in dynamic contact experiments at low frequencies (e.g., 10 Hz) is shown to be in good agreement with quasistatic experiments on quartz sand. The results of impact experiments, which involve moderate to extensive levels of particle damage, are presented for several types of naturally occurring granular materials (several quartz sands, magnesite and calcium carbonate ooids). Implementation of the experimental findings in discrete element models is discussed and the results of impact simulations involving up to 5 × 10⁵ grains are presented.

  20. Validation and Application of Pharmacokinetic Models for Interspecies Extrapolations in Toxicity Risk Assessments of Volatile Organics

    DTIC Science & Technology

    1989-07-21

    formulation of physiologically-based pharmacokinetic models. Adult male Sprague-Dawley rats and male beagle dogs will be administered equal doses... experiments in the dog. Physiologically-based pharmacokinetic models will be developed and validated for oral and inhalation exposures to halocarbons... of conducting experiments in dogs. The original physiologic model for the rat will be scaled up to predict halocarbon pharmacokinetics in the dog.

  1. Extreme value statistics and finite-size scaling at the ecological extinction/laminar-turbulence transition

    NASA Astrophysics Data System (ADS)

    Shih, Hong-Yan; Goldenfeld, Nigel

    Experiments on transitional turbulence in pipe flow seem to show that turbulence is a transient metastable state since the measured mean lifetime of turbulence puffs does not diverge asymptotically at a critical Reynolds number. Yet measurements reveal that the lifetime scales with Reynolds number in a super-exponential way reminiscent of extreme value statistics, and simulations and experiments in Couette and channel flow exhibit directed percolation type scaling phenomena near a well-defined transition. This universality class arises from the interplay between small-scale turbulence and a large-scale collective zonal flow, which exhibit predator-prey behavior. Why is asymptotically divergent behavior not observed? Using directed percolation and a stochastic individual level model of predator-prey dynamics related to transitional turbulence, we investigate the relation between extreme value statistics and power law critical behavior, and show that the paradox is resolved by carefully defining what is measured in the experiments. We theoretically derive the super-exponential scaling law, and using finite-size scaling, show how the same data can give both super-exponential behavior and power-law critical scaling.

  2. Elimination of the Reaction Rate "Scale Effect": Application of the Lagrangian Reactive Particle-Tracking Method to Simulate Mixing-Limited, Field-Scale Biodegradation at the Schoolcraft (MI, USA) Site

    NASA Astrophysics Data System (ADS)

    Ding, Dong; Benson, David A.; Fernández-Garcia, Daniel; Henri, Christopher V.; Hyndman, David W.; Phanikumar, Mantha S.; Bolster, Diogo

    2017-12-01

    Measured (or empirically fitted) reaction rates at groundwater remediation sites are typically much lower than those found in the same material at the batch or laboratory scale. The reduced rates are commonly attributed to poorer mixing at the larger scales. A variety of methods have been proposed to account for this scaling effect in reactive transport. In this study, we use the Lagrangian particle-tracking and reaction (PTR) method to simulate a field bioremediation experiment at the Schoolcraft, MI site. A denitrifying bacterium, Pseudomonas stutzeri strain KC, was injected into the aquifer, along with sufficient substrate, to degrade the contaminant, carbon tetrachloride (CT), under anaerobic conditions. The PTR method simulates chemical reactions through probabilistic rules of particle collisions, interactions, and transformations to address the scale effect (lower apparent reaction rates for each level of upscaling, from batch to column to field scale). In contrast to a prior Eulerian reaction model, the PTR method is able to match the field-scale experiment using the rate coefficients obtained from batch experiments.
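    The probabilistic particle-reaction idea can be illustrated with a heavily simplified 1-D sketch of A + B → C: particles random-walk, and each A reacts with its nearest B with a probability built from the diffusive co-location density of the pair. This is a generic illustration of the class of scheme, with nearest-neighbor pairing and invented parameters; it is not the Schoolcraft model or its calibration.

```python
import numpy as np

# Simplified 1-D particle-tracking-reaction (PTR) sketch for A + B -> C.
# Reaction probability per pair uses the Gaussian co-location density of two
# diffusing particles at separation s. All parameters are illustrative.

rng = np.random.default_rng(0)
D, dt, kf = 1e-4, 1.0, 1.0                 # diffusion coeff., time step, rate const.
A = list(rng.uniform(0, 1, 300))           # A-particle positions
B = list(rng.uniform(0, 1, 300))           # B-particle positions

def react_step(A, B):
    # diffusion step: independent Brownian increments
    A = [a + rng.normal(0, np.sqrt(2 * D * dt)) for a in A]
    B = [b + rng.normal(0, np.sqrt(2 * D * dt)) for b in B]
    survivors_A = []
    for a in A:
        if not B:
            survivors_A.append(a)
            continue
        j = int(np.argmin([abs(a - b) for b in B]))      # nearest B particle
        s = abs(a - B[j])
        v = np.exp(-s**2 / (8 * D * dt)) / np.sqrt(8 * np.pi * D * dt)
        if rng.random() < 1 - np.exp(-kf * dt * v):      # probabilistic reaction
            B.pop(j)                                     # the pair converts to C
        else:
            survivors_A.append(a)
    return survivors_A, B

for _ in range(20):
    A, B = react_step(A, B)
print(len(A), len(B))  # equal counts remain; fewer than the initial 300
```

    Because reactions only fire when particles actually co-locate, poorly mixed (segregated) particle fields react more slowly, which is how such schemes reproduce the reduced apparent rates seen at the field scale.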

  3. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    NASA Astrophysics Data System (ADS)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as the concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize their patterns through the dissimilarity matrix based on modified cross-sample entropy, and then provide three-dimensional perceptual maps of the results through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta based cross-sample entropy and permutation based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparisons. Our analysis reveals a clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to have similar irregularity than others, and that differences between stock indices, which are caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can the time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions: Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China and Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in experiments than MDSC.
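    Classical MDS, the baseline whose dissimilarity measure the proposed methods replace, embeds points from a dissimilarity matrix by double centering and eigendecomposition. A minimal sketch follows, using Chebyshev distance as in the MDSC reference method; the series values are synthetic, not stock data.

```python
import numpy as np

# Classical MDS from a dissimilarity matrix via double centering and
# eigendecomposition. Chebyshev distance plays the role of the MDSC
# reference measure; the input series are synthetic.

def classical_mds(D, dim=3):
    """Embed points in `dim` dimensions from a square dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Bmat = -0.5 * J @ (D**2) @ J                 # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(Bmat)            # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:dim]         # keep the largest `dim`
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

series = np.array([[0.0, 1.0, 2.0, 3.0],
                   [0.1, 1.1, 2.1, 3.1],         # near-copy of the first series
                   [5.0, 4.0, 3.0, 2.0]])        # very different series
D = np.max(np.abs(series[:, None, :] - series[None, :, :]), axis=2)  # Chebyshev
coords = classical_mds(D, dim=2)
print(np.round(np.linalg.norm(coords[0] - coords[1]), 3))  # ~0.1: similar series stay close
```

    The paper's MDS-KCSE and MDS-PCSE variants would substitute a cross-sample-entropy dissimilarity for `D` while keeping this embedding step unchanged.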

  4. Measuring engagement in nurses: the psychometric properties of the Persian version of Utrecht Work Engagement Scale

    PubMed Central

    Torabinia, Mansour; Mahmoudi, Sara; Dolatshahi, Mojtaba; Abyaz, Mohamad Reza

    2017-01-01

    Background: Considering the overall tendency in psychology, researchers in the field of work and organizational psychology have become progressively interested in employees’ positive and optimistic experiences at work, such as work engagement. This study was conducted with 2 main purposes: assessing the psychometric properties of the Utrecht Work Engagement Scale (UWES), and examining any association between work engagement and burnout in nurses. Methods: The present methodological study was conducted in 2015 and included 248 females and 34 males with 6 months to 30 years of job experience. After the translation process, face and content validity were established by qualitative and quantitative methods. Moreover, the content validity ratio, scale-level content validity index, and item-level content validity index were measured for this scale. Construct validity was determined by factor analysis. Internal consistency and stability reliability were also assessed. Factor analysis, test-retest, Cronbach’s alpha, and association analysis were used as statistical methods. Results: Face and content validity were acceptable. Exploratory factor analysis suggested a new 3-factor model. In this new model, some items were relocated relative to the construct model of the original version, while the same 17 items were retained. The new model was confirmed as the Persian version of the UWES through divergent validity against the Copenhagen Burnout Inventory. Internal consistency reliability for the total scale and the subscales ranged from 0.76 to 0.89. Results from the Pearson correlation test indicated a high degree of test-retest reliability (r = 0.89); the ICC was 0.91. Engagement was negatively related to burnout and overtime per month, whereas it was positively related to age and job experience. Conclusion: The Persian 3-factor model of the Utrecht Work Engagement Scale is a valid and reliable instrument to measure work engagement in Iranian nurses as well as in other medical professionals. PMID:28955665
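    The internal-consistency figure reported above is Cronbach's alpha, a standard statistic. A minimal sketch of the computation on synthetic item scores (not the study's data) is:

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents x n_items) score matrix:
        alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)        # per-item sample variance
        total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scale
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    # Synthetic responses: five items tracking one common factor closely,
    # which yields high internal consistency by construction.
    rng = np.random.default_rng(0)
    factor = rng.normal(size=200)
    scores = np.column_stack(
        [factor + 0.3 * rng.normal(size=200) for _ in range(5)]
    )
    alpha = cronbach_alpha(scores)
    ```

    Values near 1 indicate that the items covary strongly, as with the 0.76 to 0.89 range reported for the scale and subscales.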

  5. Technical note: Coordination and harmonization of the multi-scale, multi-model activities HTAP2, AQMEII3, and MICS-Asia3: simulations, emission inventories, boundary conditions, and model output formats.

    PubMed

    Galmarini, Stefano; Koffi, Brigitte; Solazzo, Efisio; Keating, Terry; Hogrefe, Christian; Schulz, Michael; Benedictow, Anna; Griesfeller, Jan Jurgen; Janssens-Maenhout, Greet; Carmichael, Greg; Fu, Joshua; Dentener, Frank

    2017-01-31

    We present an overview of the coordinated global numerical modelling experiments performed during 2012-2016 by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP), the regional experiments by the Air Quality Model Evaluation International Initiative (AQMEII) over Europe and North America, and the Model Intercomparison Study for Asia (MICS-Asia). To improve model estimates of the impacts of intercontinental transport of air pollution on climate, ecosystems, and human health and to answer a set of policy-relevant questions, these three initiatives performed emission perturbation modelling experiments consistent across the global, hemispheric, and continental/regional scales. In all three initiatives, model results are extensively compared against monitoring data for a range of variables (meteorological, trace gas concentrations, and aerosol mass and composition) from different measurement platforms (ground measurements, vertical profiles, airborne measurements) collected from a number of sources. Approximately 10 to 25 modelling groups have contributed to each initiative, and model results have been managed centrally through three data hubs maintained by each initiative. Given the organizational complexity of bringing together these three initiatives to address a common set of policy-relevant questions, this publication provides the motivation for the modelling activity, the rationale for specific choices made in the model experiments, and an overview of the organizational structures for both the modelling and the measurements used and analysed in a number of modelling studies in this special issue.

  8. SCALE TSUNAMI Analysis of Critical Experiments for Validation of 233U Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Don; Rearden, Bradley T

    2009-01-01

    Oak Ridge National Laboratory (ORNL) staff used the SCALE TSUNAMI tools to provide a demonstration evaluation of critical experiments considered for use in validation of current and anticipated operations involving 233U at the Radiochemical Development Facility (RDF). This work was reported in ORNL/TM-2008/196, issued in January 2009. This paper presents the analysis of two representative safety analysis models provided by RDF staff.

  9. Geotechnical centrifuge use at University of Cambridge Geotechnical Centre, August-September 1991

    NASA Astrophysics Data System (ADS)

    Gilbert, Paul A.

    1992-01-01

    A geotechnical centrifuge applies elevated acceleration to small-scale soil models to simulate the body forces and stress levels characteristic of full-size soil structures. Since the constitutive behavior of soil is stress-level dependent, the centrifuge offers considerable advantage in studying soil structures using models. Several experiments were observed and described in relative detail, including experiments in soil dynamics and liquefaction, an experiment investigating leaning towers on soft foundations, and an experiment investigating the migration of hot pollutants through soils.
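    The standard similarity rule behind this technique is that a 1/N-scale model spun at N times Earth gravity reproduces prototype self-weight stresses at homologous depths. A minimal sketch with illustrative numbers (not values from the experiments described above):

    ```python
    # Geotechnical centrifuge scaling: vertical stress = unit_weight * depth * g_level,
    # so a 1/N geometric model at N g matches the full-scale stress field.

    def vertical_stress(unit_weight, depth, g_level=1.0):
        """Vertical total stress (kPa) at `depth` (m) for soil of
        `unit_weight` (kN/m^3) under `g_level` times Earth gravity."""
        return unit_weight * depth * g_level

    N = 100                  # geometric scale factor (illustrative)
    gamma = 18.0             # soil unit weight, kN/m^3 (illustrative)

    proto = vertical_stress(gamma, depth=10.0)                  # full scale, 1 g
    model = vertical_stress(gamma, depth=10.0 / N, g_level=N)   # 1/N scale, N g
    ```

    Both calls return the same stress, which is the sense in which the small model is mechanically similar to the full-size structure.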

  10. Development of a Dynamically Scaled Generic Transport Model Testbed for Flight Research Experiments

    NASA Technical Reports Server (NTRS)

    Jordan, Thomas; Langford, William; Belcastro, Christine; Foster, John; Shah, Gautam; Howland, Gregory; Kidd, Reggie

    2004-01-01

    This paper details the design and development of the Airborne Subscale Transport Aircraft Research (AirSTAR) test-bed at NASA Langley Research Center (LaRC). The aircraft is a 5.5% dynamically scaled, remotely piloted, twin-turbine, swept wing, Generic Transport Model (GTM) which will be used to provide an experimental flight test capability for research experiments pertaining to dynamics modeling and control beyond the normal flight envelope. The unique design challenges arising from the dimensional, weight, dynamic (inertial), and actuator scaling requirements necessitated by the research community are described along with the specific telemetry and control issues associated with a remotely piloted subscale research aircraft. Development of the necessary operational infrastructure, including operational and safety procedures, test site identification, and research pilots is also discussed. The GTM is a unique vehicle that provides significant research capacity due to its scaling, data gathering, and control characteristics. By combining data from this testbed with full-scale flight and accident data, wind tunnel data, and simulation results, NASA will advance and validate control upset prevention and recovery technologies for transport aircraft, thereby reducing vehicle loss-of-control accidents resulting from adverse and upset conditions.

  11. Modeling nearshore morphological evolution at seasonal scale

    USGS Publications Warehouse

    Walstra, D.-J.R.; Ruggiero, P.; Lesser, G.; Gelfenbaum, G.

    2006-01-01

    A process-based model is compared with field measurements to test and improve our ability to predict nearshore morphological change at seasonal time scales. The field experiment, along the dissipative beaches adjacent to Grays Harbor, Washington USA, successfully captured the transition between the high-energy erosive conditions of winter and the low-energy beach-building conditions typical of summer. The experiment documented shoreline progradation on the order of 20 m and as much as 175 m of onshore bar migration. Significant alongshore variability was observed in the morphological response of the sandbars over a 4 km reach of coast. A detailed sensitivity analysis suggests that the model results are more sensitive to adjusting the sediment transport associated with asymmetric oscillatory wave motions than to adjusting the transport due to mean currents. Initial results suggest that alongshore variations in the initial bathymetry are partially responsible for the observed alongshore variable morphological response during the experiment. Copyright ASCE 2006.

  12. Evaluation of the feasibility of scale modeling to quantify wind and terrain effects on low-angle sound propagation

    NASA Technical Reports Server (NTRS)

    Anderson, G. S.; Hayden, R. E.; Thompson, A. R.; Madden, R.

    1985-01-01

    The feasibility of acoustical scale modeling techniques for modeling wind effects on long range, low frequency outdoor sound propagation was evaluated. Upwind and downwind propagation was studied in 1/100 scale for flat ground and simple hills with both rigid and finite ground impedance over a full scale frequency range from 20 to 500 Hz. Results are presented as 1/3-octave frequency spectra of differences in propagation loss between the case studied and a free-field condition. Selected sets of these results were compared with validated analytical models for propagation loss, when such models were available. When they were not, results were compared with predictions from approximate models developed. Comparisons were encouraging in many cases considering the approximations involved in both the physical modeling and analysis methods. Of particular importance was the favorable comparison between theory and experiment for propagation over soft ground.
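    The frequency mapping implied by 1/100-scale acoustic modeling is simple: lengths shrink by the scale factor, so frequencies must rise by the same factor to preserve wavelength-to-geometry ratios. A minimal sketch:

    ```python
    # Acoustic scale-model frequency mapping: a 1/scale_factor geometric model
    # requires test frequencies scale_factor times higher than full scale.

    def model_frequency(full_scale_hz, scale_factor=100):
        """Model-scale frequency corresponding to a full-scale frequency."""
        return full_scale_hz * scale_factor

    # The 20-500 Hz full-scale band studied above maps to 2-50 kHz in a
    # 1/100-scale model.
    band = [model_frequency(f) for f in (20.0, 500.0)]
    ```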

  13. Scaling an in situ network for high resolution modeling during SMAPVEX15

    NASA Astrophysics Data System (ADS)

    Coopersmith, E. J.; Cosh, M. H.; Jacobs, J. M.; Jackson, T. J.; Crow, W. T.; Holifield Collins, C.; Goodrich, D. C.; Colliander, A.

    2015-12-01

    Among the greatest challenges within the field of soil moisture estimation is that of scaling sparse point measurements within a network to produce higher-resolution map products. Large-scale field experiments present an ideal opportunity to develop methodologies for this scaling, by coupling in situ networks, temporary networks, and aerial mapping of soil moisture. During the Soil Moisture Active Passive Validation Experiments in 2015 (SMAPVEX15) in and around the USDA-ARS Walnut Gulch Experimental Watershed and LTAR site in southeastern Arizona, USA, a high-density network of soil moisture stations was deployed across a sparse, permanent in situ network in coordination with intensive soil moisture sampling and an aircraft campaign. This watershed is also densely instrumented with precipitation gauges (one gauge per 0.57 km2) to monitor the North American Monsoon System, which dominates the hydrologic cycle during the summer months in this region. Using the precipitation and soil moisture time series values provided, a physically based model is calibrated to provide estimates at the 3 km, 9 km, and 36 km scales. The results from this model will be compared with the point-scale gravimetric samples, the aircraft-based sensor, and the satellite-based products retrieved from NASA's Soil Moisture Active Passive mission.

  14. Modeling the MJO rain rates using parameterized large scale dynamics: vertical structure, radiation, and horizontal advection of dry air

    NASA Astrophysics Data System (ADS)

    Wang, S.; Sobel, A. H.; Nie, J.

    2015-12-01

    Two Madden-Julian Oscillation (MJO) events were observed during October and November 2011 in the equatorial Indian Ocean during the DYNAMO field campaign. Precipitation rates and large-scale vertical motion profiles derived from the DYNAMO northern sounding array are simulated in a small-domain cloud-resolving model using parameterized large-scale dynamics. Three parameterizations of large-scale dynamics are employed: the conventional weak temperature gradient (WTG) approximation, vertical-mode-based spectral WTG (SWTG), and damped gravity wave coupling (DGW). The target temperature profiles and radiative heating rates are taken from a control simulation in which the large-scale vertical motion is imposed (rather than directly from observations), and the model itself is significantly modified from that used in previous work. These methodological changes lead to significant improvement in the results. Simulations using all three methods, with imposed time-dependent radiation and horizontal moisture advection, capture the time variations in precipitation associated with the two MJO events well. The three methods produce significant differences in the large-scale vertical motion profile, however. WTG produces the most top-heavy and noisy profiles, while DGW's is smoother with a peak at midlevels. SWTG produces a smooth profile, somewhere between WTG and DGW, and in better agreement with observations than either of the others. Numerical experiments without horizontal advection of moisture suggest that this process significantly reduces the precipitation and suppresses the top-heaviness of large-scale vertical motion during the MJO active phases, while experiments in which the effect of cloud on radiation is disabled indicate that cloud-radiative interaction significantly amplifies the MJO. Experiments in which interactive radiation is used produce poorer agreement with observations than those with imposed time-varying radiative heating. Our results highlight the importance of both horizontal advection of moisture and cloud-radiative feedback to the dynamics of the MJO, as well as to accurate simulation and prediction of it in models.

  15. Simulation of pump-turbine prototype fast mode transition for grid stability support

    NASA Astrophysics Data System (ADS)

    Nicolet, C.; Braun, O.; Ruchonnet, N.; Hell, J.; Béguin, A.; Avellan, F.

    2017-04-01

    The paper explores the additional services that the Full Size Frequency Converter (FSFC) solution can provide for the case of an existing pumped-storage power plant of 2x210 MW, for which conversion from fixed speed to variable speed is investigated with a focus on fast mode transition. First, reduced-scale model experiments on fast transitions of a Francis pump-turbine, performed at the ANDRITZ HYDRO hydraulic laboratory in Linz, Austria, are presented. The tests consist of linear speed transitions from pump to turbine and vice versa performed at constant guide vane opening. Then the existing pumped-storage power plant, whose pump-turbines are quasi-homologous to the reduced-scale model, is modelled using the simulation software SIMSEN, considering the reservoirs, penstocks, the two Francis pump-turbines, the two downstream surge tanks, and the tailrace tunnel. For the electrical part, an FSFC configuration is considered with a detailed electrical model. The transitions from turbine to pump and vice versa are simulated, and similarities between prototype simulation results and reduced-scale model experiments are highlighted.

  16. Forest landscape models, a tool for understanding the effect of the large-scale and long-term landscape processes

    Treesearch

    Hong S. He; Robert E. Keane; Louis R. Iverson

    2008-01-01

    Forest landscape models have become important tools for understanding large-scale and long-term landscape (spatial) processes such as climate change, fire, windthrow, seed dispersal, insect outbreak, disease propagation, forest harvest, and fuel treatment, because controlled field experiments designed to study the effects of these processes are often not possible (...

  17. Phenomenological Study of Business Models Used to Scale Online Enrollment at Institutions of Higher Education

    ERIC Educational Resources Information Center

    Williams, Dana E.

    2012-01-01

    The purpose of this qualitative phenomenological study was to explore factors for selecting a business model for scaling online enrollment by institutions of higher education. The goal was to explore the lived experiences of academic industry experts involved in the selection process. The research question for this study was: What were the lived…

  18. CRBR pump water test experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, M.E.; Huber, K.A.

    1983-01-01

    The hydraulic design features and water testing of the hydraulic scale model and prototype pump of the sodium pumps used in the primary and intermediate sodium loops of the Clinch River Breeder Reactor Plant (CRBRP) are described. The hydraulic scale model tests and the prototype pump tests are described, and the results of both test series are discussed.

  19. Tropical precipitation extremes: Response to SST-induced warming in aquaplanet simulations

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ritthik; Bordoni, Simona; Teixeira, João.

    2017-04-01

    Scaling of tropical precipitation extremes in response to warming is studied in aquaplanet experiments using the global Weather Research and Forecasting (WRF) model. We show how the scaling of precipitation extremes is highly sensitive to spatial and temporal averaging: while instantaneous grid-point extreme precipitation scales more strongly than the percentage increase (~7% K^-1) predicted by the Clausius-Clapeyron (CC) relationship, extremes of zonally and temporally averaged precipitation follow a slightly sub-CC scaling, in agreement with results from Coupled Model Intercomparison Project (CMIP) models. The scaling depends crucially on the employed convection parameterization. This is particularly true when grid-point instantaneous extremes are considered. These results highlight how understanding the response of precipitation extremes to warming requires consideration of dynamic changes in addition to the thermodynamic response. Changes in grid-scale precipitation, unlike those in convective-scale precipitation, scale linearly with the resolved flow. Hence, dynamic changes include changes in both large-scale and convective-scale motions.
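    The ~7% K^-1 benchmark comes from the Clausius-Clapeyron relation, d(ln e_s)/dT = L / (R_v T^2). A minimal sketch of that rate, using standard approximate constants:

    ```python
    # Clausius-Clapeyron fractional increase of saturation vapor pressure
    # with temperature. Constants are standard approximations.

    L_V = 2.5e6      # latent heat of vaporization, J/kg
    R_V = 461.5      # gas constant for water vapor, J/(kg K)

    def cc_rate(temp_k):
        """Fractional change in saturation vapor pressure per kelvin:
        d(ln e_s)/dT = L / (R_v * T^2)."""
        return L_V / (R_V * temp_k ** 2)

    rate = cc_rate(280.0)   # close to the ~7% per kelvin quoted above
    ```

    The rate decreases slowly with temperature, which is why quoted values range between roughly 6% and 7.5% per kelvin across tropospheric temperatures.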

  20. The development of methods for predicting and measuring distribution patterns of aerial sprays

    NASA Technical Reports Server (NTRS)

    Ormsbee, A. I.; Bragg, M. B.; Maughmer, M. D.

    1979-01-01

    The capability of conducting scale-model experiments which involve the ejection of small particles into the wake of an aircraft close to the ground is developed. A set of relationships used to scale small-sized dispersion studies to full-size results is experimentally verified and, with some qualifications, basic deposition patterns are presented. In the process of validating these scaling laws, the basic experimental techniques used in conducting such studies, both with and without an operational propeller, were developed. The procedures that evolved are outlined. The envelope of test conditions that can be accommodated in the Langley Vortex Research Facility, developed theoretically, is verified using a series of vortex trajectory experiments that help to define the limitations due to wall interference effects for models of different sizes.

  1. Dynamic response characteristics of two transport models tested in the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Young, Clarence P., Jr.

    1993-01-01

    This paper documents recent experiences with measuring the dynamic response characteristics of a commercial transport and a military transport model during full scale Reynolds number tests in the National Transonic Facility. Both models were limited in angle of attack while testing at full scale Reynolds number and cruise Mach number due to pitch or stall buffet response. Roll buffet (wing buzz) was observed for both models at certain Mach numbers while testing at high Reynolds number. Roll buffet was more severe and more repeatable for the military transport model at cruise Mach number. Miniature strain-gage type accelerometers were used for the first time for obtaining dynamic data as a part of the continuing development of miniature dynamic measurements instrumentation for cryogenic applications. This paper presents the results of vibration measurements obtained for both the commercial and military transport models and documents the experience gained in the use of miniature strain gage type accelerometers.

  2. The ATLAS diboson resonance in non-supersymmetric SO(10)

    DOE PAGES

    Evans, Jason L.; Nagata, Natsumi; Olive, Keith A.; ...

    2016-02-18

    SO(10) grand unification accommodates intermediate gauge symmetries with which gauge coupling unification can be realized without supersymmetry. In this paper, we discuss the possibility that a new massive gauge boson associated with an intermediate gauge symmetry explains the excess observed in the diboson resonance search recently reported by the ATLAS experiment. The model we find has two intermediate symmetries, SU(4)C × SU(2)L × SU(2)R and SU(3)C × SU(2)L × SU(2)R × U(1)B-L, where the latter gauge group is broken at the TeV scale. This model achieves gauge coupling unification with a unification scale sufficiently high to avoid proton decay. In addition, this model provides a good dark matter candidate, whose stability is guaranteed by a Z2 symmetry present after the spontaneous breaking of the intermediate gauge symmetries. Finally, we discuss prospects for testing these models in the forthcoming LHC experiments and dark matter detection experiments.

  3. Understanding the shock and detonation response of high explosives at the continuum and meso scales

    NASA Astrophysics Data System (ADS)

    Handley, C. A.; Lambourn, B. D.; Whitworth, N. J.; James, H. R.; Belfield, W. J.

    2018-03-01

    The shock and detonation response of high explosives has been an active research topic for more than a century. In recent years, high quality data from experiments using embedded gauges and other diagnostic techniques have inspired the development of a range of new high-fidelity computer models for explosives. The experiments and models have led to new insights, both at the continuum scale applicable to most shock and detonation experiments, and at the mesoscale relevant to hotspots and burning within explosive microstructures. This article reviews the continuum and mesoscale models, and their application to explosive phenomena, gaining insights to aid future model development and improved understanding of the physics of shock initiation and detonation propagation. In particular, it is argued that "desensitization" and the effect of porosity on high explosives can both be explained by the combined effect of thermodynamics and hydrodynamics, rather than the traditional hotspot-based explanations linked to pressure-dependent reaction rates.

  4. Cloud computing and validation of expandable in silico livers.

    PubMed

    Ropella, Glen E P; Hunt, C Anthony

    2010-12-03

    In Silico Livers (ISLs) are works in progress. They are used to challenge multilevel, multi-attribute, mechanistic hypotheses about the hepatic disposition of xenobiotics coupled with hepatic responses. To enhance ISL-to-liver mappings, we added discrete-time metabolism, biliary elimination, and bolus dosing features to a previously validated ISL and initiated re-validation experiments that required scaling the experiments to use more simulated lobules than previously, more than could be achieved using the local cluster technology. Rather than dramatically increasing the size of our local cluster, we undertook the re-validation experiments using the Amazon EC2 cloud platform. Doing so required demonstrating the efficacy of scaling a simulation to use more cluster nodes and assessing the scientific equivalence of local cluster validation experiments with those executed using the cloud platform. The local cluster technology was duplicated in the Amazon EC2 cloud platform. Synthetic modeling protocols were followed to identify a successful parameterization. Experiment sample sizes (number of simulated lobules) on both platforms were 49, 70, 84, and 152 (cloud only). Experimental indistinguishability was demonstrated for ISL outflow profiles of diltiazem using both platforms for experiments consisting of 84 or more samples. The process was analogous to demonstrating results equivalency from two different wet-labs. The results provide additional evidence that disposition simulations using ISLs can cover the behavior space of liver experiments in distinct experimental contexts (there is in silico-to-wet-lab phenotype similarity). The scientific value of experimenting with multiscale biomedical models has been limited to research groups with access to computer clusters. The availability of cloud technology, coupled with the evidence of scientific equivalency, has lowered that barrier and will greatly facilitate model sharing as well as provide straightforward tools for scaling simulations to encompass greater detail with no extra investment in hardware.

  5. Multi-scale modeling of microstructure dependent intergranular brittle fracture using a quantitative phase-field based method

    DOE PAGES

    Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.

    2015-12-07

    The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field-based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.

  7. Scaling tunable network model to reproduce the density-driven superlinear relation

    NASA Astrophysics Data System (ADS)

    Gao, Liang; Shan, Xiaoya; Qin, Yuhao; Yu, Senbin; Xu, Lida; Gao, Zi-You

    2018-03-01

    Previous works have shown the universality of allometric scaling under total and density values at the city level, but our understanding of the size effects of regions on the universality of allometric scaling remains inadequate. Here, we revisit the scaling relations between gross domestic product (GDP) and population based on total and density values, and first reveal that the allometric scaling under density values for different regions is universal. The scaling exponent β under the density value is in the range (1.0, 2.0], which unexpectedly exceeds the range observed by Pan et al. [Nat. Commun. 4, 1961 (2013)]. For this wider range, we propose a network model based on a 2D lattice space with the spatial correlation factor α as a parameter. Numerical experiments show that the generated scaling exponent β in our model is fully tunable by the spatial correlation factor α. Our model will furnish a general platform for extensive urban and regional studies.
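    An allometric exponent such as β is typically estimated by ordinary least squares in log-log space. A minimal sketch on synthetic data with a known exponent (illustrative values, not the study's data):

    ```python
    import numpy as np

    # Fit beta in Y ~ X**beta by linear regression of log(Y) on log(X).
    rng = np.random.default_rng(1)
    pop = 10 ** rng.uniform(4, 7, size=300)     # synthetic city populations
    beta_true = 1.15                            # assumed superlinear exponent
    gdp = pop ** beta_true * np.exp(0.1 * rng.normal(size=300))  # lognormal noise

    # polyfit returns [slope, intercept]; the slope is the exponent estimate.
    beta_fit, intercept = np.polyfit(np.log(pop), np.log(gdp), 1)
    ```

    An exponent above 1 recovered this way is what "superlinear" means in the abstract: doubling the population more than doubles GDP.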

  8. A stochastic two-scale model for pressure-driven flow between rough surfaces

    PubMed Central

    Larsson, Roland; Lundström, Staffan; Wall, Peter; Almqvist, Andreas

    2016-01-01

    Seal surface topography typically consists of global-scale geometric features as well as local-scale roughness details, and homogenization-based approaches are, therefore, readily applied. These provide for resolving the global scale (large domain) with a relatively coarse mesh, while resolving the local scale (small domain) in high detail. As the total flow decreases, however, the flow pattern becomes tortuous and this requires a larger local-scale domain to obtain a converged solution. Therefore, a classical homogenization-based approach might not be feasible for simulation of very small flows. In order to study small flows, a model allowing feasibly sized local domains at very small flow rates is developed. Realization was made possible by coupling the two scales with a stochastic element. Results from numerical experiments show that the present model is in better agreement with the direct deterministic one than the conventional homogenization type of model, both quantitatively in terms of flow rate and qualitatively in reflecting the flow pattern. PMID:27436975

  9. Propulsion mechanisms for Leidenfrost solids on ratchets.

    PubMed

    Baier, Tobias; Dupeux, Guillaume; Herbert, Stefan; Hardt, Steffen; Quéré, David

    2013-02-01

    We propose a model for the propulsion of Leidenfrost solids on ratchets based on viscous drag due to the flow of evaporating vapor. The model assumes pressure-driven flow described by the Navier-Stokes equations and is mainly studied in lubrication approximation. A scaling expression is derived for the dependence of the propulsive force on geometric parameters of the ratchet surface and properties of the sublimating solid. We show that the model results as well as the scaling law compare favorably with experiments and are able to reproduce the experimentally observed scaling with the size of the solid.

  10. Do we have the right models for scaling up health services to achieve the Millennium Development Goals?

    PubMed

    Subramanian, Savitha; Naimoli, Joseph; Matsubayashi, Toru; Peters, David H

    2011-12-14

    There is widespread agreement on the need for scaling up in the health sector to achieve the Millennium Development Goals (MDGs). But many countries are not on track to reach the MDG targets. The dominant approach used by global health initiatives promotes uniform interventions and targets, assuming that specific technical interventions tested in one country can be replicated across countries to rapidly expand coverage. Yet countries scale up health services and progress against the MDGs at very different rates. Global health initiatives need to take advantage of what has been learned about scaling up. A systematic literature review was conducted to identify conceptual models for scaling up health in developing countries, with the articles assessed according to the practical concerns of how to scale up, including the planning, monitoring and implementation approaches. We identified six conceptual models for scaling up in health based on experience with expanding pilot projects and diffusion of innovations. They place importance on paying attention to enhancing organizational, functional, and political capabilities through experimentation and adaptation of strategies in addition to increasing the coverage and range of health services. These scaling up approaches focus on fostering sustainable institutions and the constructive engagement between end users and the provider and financing organizations. The current approaches to scaling up health services to reach the MDGs are overly simplistic and not working adequately. Rather than relying on blueprint planning and raising funds, an approach characteristic of current global health efforts, experience with alternative models suggests that more promising pathways involve "learning by doing" in ways that engage key stakeholders, use data to address constraints, and incorporate results from pilot projects. Such approaches should be applied to current strategies to achieve the MDGs.

  11. HAPEX-Sahel: A large-scale study of land-atmosphere interactions in the semi-arid tropics

    NASA Technical Reports Server (NTRS)

    Gutorbe, J-P.; Lebel, T.; Tinga, A.; Bessemoulin, P.; Brouwer, J.; Dolman, A.J.; Engman, E. T.; Gash, J. H. C.; Hoepffner, M.; Kabat, P.

    1994-01-01

    The Hydrologic Atmospheric Pilot EXperiment in the Sahel (HAPEX-Sahel) was carried out in Niger, West Africa, during 1991-1992, with an intensive observation period (IOP) in August-October 1992. It aims at improving the parameterization of land surface-atmosphere interactions at the Global Circulation Model (GCM) gridbox scale. The experiment combines remote sensing and ground-based measurements with hydrological and meteorological modeling to develop aggregation techniques for use in large scale estimates of the hydrological and meteorological behavior of large areas in the Sahel. The experimental strategy consisted of a period of intensive measurements during the transition period of the rainy to the dry season, backed up by a series of long term measurements in a 1 by 1 deg square in Niger. Three 'supersites' were instrumented with a variety of hydrological and (micro) meteorological equipment to provide detailed information on the surface energy exchange at the local scale. Boundary layer measurements and aircraft measurements were used to provide information at scales of 100-500 sq km. All relevant remote sensing images were obtained for this period. This program of measurements is now being analyzed and an extensive modelling program is under way to aggregate the information at all scales up to the GCM grid box scale. The experimental strategy and some preliminary results of the IOP are described.

  12. Memory Transmission in Small Groups and Large Networks: An Agent-Based Model.

    PubMed

    Luhmann, Christian C; Rajaram, Suparna

    2015-12-01

    The spread of social influence in large social networks has long been an interest of social scientists. In the domain of memory, collaborative memory experiments have illuminated cognitive mechanisms that allow information to be transmitted between interacting individuals, but these experiments have focused on small-scale social contexts. In the current study, we took a computational approach, circumventing the practical constraints of laboratory paradigms and providing novel results at scales unreachable by laboratory methodologies. Our model embodied theoretical knowledge derived from small-group experiments and replicated foundational results regarding collaborative inhibition and memory convergence in small groups. Ultimately, we investigated large-scale, realistic social networks and found that agents are influenced by the agents with which they interact, but we also found that agents are influenced by nonneighbors (i.e., the neighbors of their neighbors). The similarity between these results and the reports of behavioral transmission in large networks offers a major theoretical insight by linking behavioral transmission to the spread of information. © The Author(s) 2015.
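    The transmission mechanism described above can be caricatured in a few lines: agents on a ring network repeatedly retell remembered items to neighbors, and pairwise memory overlap (Jaccard similarity) grows with interaction. All names and parameters below are illustrative, not the authors' model:

```python
import random

random.seed(3)
N_AGENTS, N_ITEMS, ROUNDS = 30, 50, 2000

# Each agent starts with a private subset of "studied" items; the network is a ring.
memories = [set(random.sample(range(N_ITEMS), 10)) for _ in range(N_AGENTS)]
neighbors = [((i - 1) % N_AGENTS, (i + 1) % N_AGENTS) for i in range(N_AGENTS)]

def mean_overlap(memories):
    """Average Jaccard similarity over all agent pairs."""
    pairs = [(i, j) for i in range(N_AGENTS) for j in range(i + 1, N_AGENTS)]
    return sum(len(memories[i] & memories[j]) / len(memories[i] | memories[j])
               for i, j in pairs) / len(pairs)

before = mean_overlap(memories)
for _ in range(ROUNDS):
    i = random.randrange(N_AGENTS)
    j = random.choice(neighbors[i])
    memories[j].add(random.choice(tuple(memories[i])))  # i retells one item to j
after = mean_overlap(memories)  # convergence: overlap rises with interaction
```

Even this toy version shows memory convergence spreading beyond direct interaction partners, the qualitative effect the study reports for nonneighbors.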

  13. Mesocell study area snow distributions for the Cold Land Processes Experiment (CLPX)

    Treesearch

    Glen E. Liston; Christopher A. Hiemstra; Kelly Elder; Donald W. Cline

    2008-01-01

    The Cold Land Processes Experiment (CLPX) had a goal of describing snow-related features over a wide range of spatial and temporal scales. This required linking disparate snow tools and datasets into one coherent, integrated package. Simulating realistic high-resolution snow distributions and features requires a snow-evolution modeling system (SnowModel) that can...

  14. Applying the scientific method to small catchment studies: A review of the Panola Mountain experience

    USGS Publications Warehouse

    Hooper, R.P.

    2001-01-01

    A hallmark of the scientific method is its iterative application to a problem to increase and refine the understanding of the underlying processes controlling it. A successful iterative application of the scientific method to catchment science (including the fields of hillslope hydrology and biogeochemistry) has been hindered by two factors. First, the scale at which controlled experiments can be performed is much smaller than the scale of the phenomenon of interest. Second, computer simulation models generally have not been used as hypothesis-testing tools as rigorously as they might have been. Model evaluation often has gone only so far as evaluation of goodness of fit, rather than a full structural analysis, which is more useful when treating the model as a hypothesis. An iterative application of a simple mixing model to the Panola Mountain Research Watershed is reviewed to illustrate the increase in understanding gained by this approach and to discern general principles that may be applicable to other studies. The lessons learned include the need for an explicitly stated conceptual model of the catchment, the definition of objective measures of its applicability, and a clear linkage between the scale of observations and the scale of predictions. Published in 2001 by John Wiley & Sons. Ltd.
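    The end-member mixing idea behind such a model can be sketched compactly: with two tracers plus a mass-balance constraint, end-member fractions follow from a small least-squares solve. The concentrations below are illustrative, not Panola data:

```python
import numpy as np

# Hypothetical end-member compositions (rows: tracers, columns: end-members,
# e.g. groundwater vs. soil water); values are made up for illustration.
endmembers = np.array([[120.0, 20.0],   # tracer 1 concentration
                       [15.0, 60.0]])   # tracer 2 concentration

def mixing_fractions(stream_conc, endmembers):
    """Solve stream = f1*E1 + f2*E2 subject to f1 + f2 = 1 by least squares."""
    A = np.vstack([endmembers, np.ones(endmembers.shape[1])])
    b = np.append(stream_conc, 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

stream = 0.7 * endmembers[:, 0] + 0.3 * endmembers[:, 1]
f = mixing_fractions(stream, endmembers)  # recovers the fractions [0.7, 0.3]
```

Treating such a mixing model as a hypothesis, as the abstract urges, means testing whether observed stream chemistry stays inside the polygon spanned by the end-members, not just whether the fit is good.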

  15. Towards a representation of priming on soil carbon decomposition in the global land biosphere model ORCHIDEE (version 1.9.5.2)

    NASA Astrophysics Data System (ADS)

    Guenet, Bertrand; Esteban Moyano, Fernando; Peylin, Philippe; Ciais, Philippe; Janssens, Ivan A.

    2016-03-01

    Priming of soil carbon decomposition encompasses different processes through which the decomposition of native (already present) soil organic matter is amplified through the addition of new organic matter, with new inputs typically being more labile than the native soil organic matter. Evidence for priming comes from laboratory and field experiments, but to date there is no estimate of its impact at global scale and under the current anthropogenic perturbation of the carbon cycle. Current soil carbon decomposition models do not include priming mechanisms, thereby introducing uncertainty when extrapolating short-term local observations to ecosystem and regional to global scale. In this study we present a simple conceptual model of decomposition priming, called PRIM, able to reproduce laboratory (incubation) and field (litter manipulation) priming experiments. Parameters for this model were first optimized against data from 20 soil incubation experiments using a Bayesian framework. The optimized parameter values were evaluated against another set of soil incubation data independent from the ones used for calibration and the PRIM model reproduced the soil incubation data better than the original, CENTURY-type soil decomposition model, whose decomposition equations are based only on first-order kinetics. We then compared the PRIM model and the standard first-order decay model incorporated into the global land biosphere model ORCHIDEE (Organising Carbon and Hydrology In Dynamic Ecosystems). A test of both models was performed at ecosystem scale using litter manipulation experiments from five sites. Although both versions were equally able to reproduce observed decay rates of litter, only ORCHIDEE-PRIM could simulate the observed priming (R2 = 0.54) in cases where litter was added or removed. 
This result suggests that a conceptually simple and numerically tractable representation of priming adapted to global models is able to capture the sign and magnitude of the priming of litter and soil organic matter.
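    As a caricature of the priming idea (not the actual PRIM equations), one can attach a fresh-organic-carbon-dependent modifier to a first-order decay rate, here assumed to take the form k·SOC·(1 - exp(-c·FOC)); the parameters k and c below are illustrative:

```python
import numpy as np

def soc_decay(soc0, foc, k=0.01, c=0.5, days=365):
    """Integrate SOC loss with a priming-modified first-order rate:
    dSOC/dt = -k * SOC * (1 - exp(-c * FOC)), with FOC held constant."""
    soc = soc0
    rate_mod = 1.0 - np.exp(-c * foc)   # no fresh carbon -> no decomposition
    for _ in range(days):
        soc -= k * soc * rate_mod       # simple explicit daily step
    return soc

no_litter = soc_decay(100.0, foc=0.0)    # native SOC is inert in this sketch
with_litter = soc_decay(100.0, foc=5.0)  # fresh inputs prime native SOC loss
```

The contrast between the two runs is the qualitative behavior a pure first-order model cannot produce: adding litter accelerates the loss of native soil carbon.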

  16. Towards a representation of priming on soil carbon decomposition in the global land biosphere model ORCHIDEE (version 1.9.5.2)

    NASA Astrophysics Data System (ADS)

    Guenet, B.; Moyano, F. E.; Peylin, P.; Ciais, P.; Janssens, I. A.

    2015-10-01

    Priming of soil carbon decomposition encompasses different processes through which the decomposition of native (already present) soil organic matter is amplified through the addition of new organic matter, with new inputs typically being more labile than the native soil organic matter. Evidence for priming comes from laboratory and field experiments, but to date there is no estimate of its impact at global scale and under the current anthropogenic perturbation of the carbon cycle. Current soil carbon decomposition models do not include priming mechanisms, thereby introducing uncertainty when extrapolating short-term local observations to ecosystem and regional to global scale. In this study we present a simple conceptual model of decomposition priming, called PRIM, able to reproduce laboratory (incubation) and field (litter manipulation) priming experiments. Parameters for this model were first optimized against data from 20 soil incubation experiments using a Bayesian framework. The optimized parameter values were evaluated against another set of soil incubation data independent from the ones used for calibration and the PRIM model reproduced the soil incubation data better than the original, CENTURY-type soil decomposition model, whose decomposition equations are based only on first-order kinetics. We then compared the PRIM model and the standard first-order decay model incorporated into the global land biosphere model ORCHIDEE. A test of both models was performed at ecosystem scale using litter manipulation experiments from five sites. Although both versions were equally able to reproduce observed decay rates of litter, only ORCHIDEE-PRIM could simulate the observed priming (R2 = 0.54) in cases where litter was added or removed. This result suggests that a conceptually simple and numerically tractable representation of priming adapted to global models is able to capture the sign and magnitude of the priming of litter and soil organic matter.

  17. High-scale SUSY from an R -invariant new inflation in the landscape

    NASA Astrophysics Data System (ADS)

    Kawasaki, Masahiro; Yamada, Masaki; Yanagida, Tsutomu T.; Yokozaki, Norimi

    2016-03-01

    We provide an anthropic reason for the supersymmetry breaking scale being much higher than the electroweak scale, as indicated by the null result of collider experiments and the observed 125 GeV Higgs boson. We focus on a new inflation model as a typical low-scale inflation model that may be expected in the string landscape. In this model, R symmetry is broken at the minimum of the inflaton potential, and its breaking scale is related to the reheating temperature. Once we admit that the anthropic principle requires thermal leptogenesis, we obtain a lower bound for the gravitino mass, which is related to the R symmetry breaking scale. This scenario and resulting gravitino mass predict the consistent amplitude of density perturbations. We also find that string axions and saxions are consistently implemented in this scenario.

  18. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model Uncertainty Quantification remains one of the central challenges of effective Data Assimilation (DA) in complex partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned conditional on the previous model state during the minimization process, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
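    The Lorenz '96 system is the standard testbed for such experiments. A minimal single-scale version (the slow variables only, without the fast sub-grid coupling used in the study) can be integrated with a fourth-order Runge-Kutta step:

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """Tendency of the single-scale Lorenz '96 model:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt=0.01, forcing=8.0):
    """One classical Runge-Kutta 4 step."""
    k1 = lorenz96_tendency(x, forcing)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_tendency(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = 8.0 * np.ones(40)
x[19] += 0.01              # a tiny perturbation triggers chaotic dynamics
for _ in range(1000):      # integrate 10 model time units
    x = rk4_step(x)
```

With forcing F = 8 the model is chaotic, which is what makes it a demanding benchmark for model-error estimation and DA methods.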

  19. Downscaling of Global Climate Change Estimates to Regional Scales: An Application to Iberian Rainfall in Wintertime.

    NASA Astrophysics Data System (ADS)

    von Storch, Hans; Zorita, Eduardo; Cubasch, Ulrich

    1993-06-01

    A statistical strategy to deduce regional-scale features from climate general circulation model (GCM) simulations has been designed and tested. The main idea is to interrelate the characteristic patterns of observed simultaneous variations of regional climate parameters and of large-scale atmospheric flow using the canonical correlation technique. The large-scale North Atlantic sea level pressure (SLP) is related to the regional variable, the winter (DJF) mean Iberian Peninsula rainfall. The skill of the resulting statistical model is shown by reproducing, to a good approximation, the winter mean Iberian rainfall from 1900 to the present from the observed North Atlantic mean SLP distributions. It is shown that this observed relationship between the two variables is not well reproduced in the output of a general circulation model (GCM). The implications for Iberian rainfall changes in response to increasing atmospheric greenhouse-gas concentrations simulated by two GCM experiments are examined with the proposed statistical model. In an instantaneous '2×CO2' doubling experiment, using the simulated change of the mean North Atlantic SLP field to predict Iberian rainfall yields an insignificant increase in area-averaged rainfall of 1 mm/month, with maximum values of 4 mm/month in the northwest of the peninsula. In contrast, for the four GCM grid points representing the Iberian Peninsula, the change is 10 mm/month, with a minimum of 19 mm/month in the southwest. In the second experiment, with the IPCC scenario A ("business as usual") increase of CO2, the statistical-model results partially differ from the directly simulated rainfall changes: over the experimental range of 100 years, the area-averaged rainfall decreases by 7 mm/month (statistical model) and by 9 mm/month (GCM); at the same time the amplitude of the interdecadal variability is quite different.
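    Canonical correlation analysis of two centered fields reduces, after whitening each field by its thin SVD, to an SVD of the whitened cross-product. A minimal numpy sketch with synthetic data (a single shared "circulation" signal standing in for SLP and rainfall; not the paper's data):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two data sets (rows: time samples)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Whiten each field via its thin SVD; the singular values of the
    # cross-product of the whitened fields are the canonical correlations.
    Ux, _, _ = np.linalg.svd(X, full_matrices=False)
    Uy, _, _ = np.linalg.svd(Y, full_matrices=False)
    return np.linalg.svd(Ux.T @ Uy, compute_uv=False)

rng = np.random.default_rng(1)
t = rng.normal(size=(500, 1))                                          # shared signal
X = t @ rng.normal(size=(1, 10)) + 0.3 * rng.normal(size=(500, 10))    # "SLP" field
Y = t @ rng.normal(size=(1, 4)) + 0.3 * rng.normal(size=(500, 4))      # "rainfall" stations
corr = canonical_correlations(X, Y)   # leading correlation is high, rest near zero
```

The downscaling step then regresses the regional canonical scores on the large-scale ones, which is how a GCM's simulated SLP change can be translated into an Iberian rainfall change.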

  20. Carbon Cycle Model Linkage Project (CCMLP): Evaluating Biogeochemical Process Models with Atmospheric Measurements and Field Experiments

    NASA Astrophysics Data System (ADS)

    Heimann, M.; Prentice, I. C.; Foley, J.; Hickler, T.; Kicklighter, D. W.; McGuire, A. D.; Melillo, J. M.; Ramankutty, N.; Sitch, S.

    2001-12-01

    Models of biophysical and biogeochemical processes are being used, either offline or in coupled climate-carbon cycle (C4) models, to assess climate- and CO2-induced feedbacks on atmospheric CO2. Observations of atmospheric CO2 concentration, and supplementary tracers including O2 concentrations and isotopes, offer unique opportunities to evaluate the large-scale behaviour of models. Global patterns, temporal trends, and interannual variability of the atmospheric CO2 concentration and its seasonal cycle provide crucial benchmarks for simulations of regionally-integrated net ecosystem exchange; flux measurements by eddy correlation allow a far more demanding model test at the ecosystem scale than conventional indicators, such as measurements of annual net primary production; and large-scale manipulations, such as the Duke Forest Free Air Carbon Enrichment (FACE) experiment, give a standard to evaluate modelled phenomena such as ecosystem-level CO2 fertilization. Model runs including historical changes of CO2, climate and land use allow comparison with regional-scale monthly CO2 balances as inferred from atmospheric measurements. Such comparisons are providing grounds for some confidence in current models, while pointing to processes that may still be inadequately treated. Current plans focus on (1) continued benchmarking of land process models against flux measurements across ecosystems and experimental findings on the ecosystem-level effects of enhanced CO2, reactive N inputs and temperature; (2) improved representation of land use, forest management and crop metabolism in models; and (3) a strategy for the evaluation of C4 models in a historical observational context.

  1. A large column analog experiment of stable isotope variations during reactive transport: I. A comprehensive model of sulfur cycling and δ34S fractionation

    NASA Astrophysics Data System (ADS)

    Druhan, Jennifer L.; Steefel, Carl I.; Conrad, Mark E.; DePaolo, Donald J.

    2014-01-01

    This study demonstrates a mechanistic incorporation of the stable isotopes of sulfur within the CrunchFlow reactive transport code to model the range of microbially-mediated redox processes affecting kinetic isotope fractionation. Previous numerical models of microbially mediated sulfate reduction using Monod-type rate expressions have lacked rigorous coupling of individual sulfur isotopologue rates, with the result that they cannot accurately simulate sulfur isotope fractionation over a wide range of substrate concentrations using a constant fractionation factor. Here, we derive a modified version of the dual-Monod or Michaelis-Menten formulation (Maggi and Riley, 2009, 2010) that successfully captures the behavior of the 32S and 34S isotopes over a broad range from high sulfate and organic carbon availability to substrate limitation using a constant fractionation factor. The new model developments are used to simulate a large-scale column study designed to replicate field scale conditions of an organic carbon (acetate) amended biostimulation experiment at the Old Rifle site in western Colorado. Results demonstrate an initial period of iron reduction that transitions to sulfate reduction, in agreement with field-scale behavior observed at the Old Rifle site. At the height of sulfate reduction, effluent sulfate concentrations decreased to 0.5 mM from an influent value of 8.8 mM over the 100 cm flow path, and thus were enriched in sulfate δ34S from 6.3‰ to 39.5‰. The reactive transport model accurately reproduced the measured enrichment in δ34S of both the reactant (sulfate) and product (sulfide) species of the reduction reaction using a single fractionation factor of 0.987 obtained independently from field-scale measurements. The model also accurately simulated the accumulation and δ34S signature of solid phase elemental sulfur over the duration of the experiment, providing a new tool to predict the isotopic signatures associated with reduced mineral pools. 
To our knowledge, this is the first rigorous treatment of sulfur isotope fractionation subject to Monod kinetics in a mechanistic reactive transport model that considers the isotopic spatial distribution of both dissolved and solid phase sulfur species during microbially-mediated sulfate reduction. Specifically, we (1) describe the design and results of the large-scale column experiment; (2) demonstrate incorporation of the stable isotopes of sulfur in a dual-Monod kinetic expression such that fractionation is accurately modeled at both high and low substrate availability; (3) verify accurate simulation of the chemical and isotopic gradients in reactant and product sulfur species using a kinetic fractionation factor obtained from field-scale analysis (Druhan et al., 2012); and (4) utilize the model to predict the final δ34S values of secondary sulfur minerals accumulated in the sediment over the course of the experiment. The development of rigorous isotope-specific Monod-type rate expressions is presented here in application to sulfur cycling during amended biostimulation, but is readily applicable to a variety of stable isotope systems associated with both steady-state and transient biogenic redox environments. In other words, the association of this model with a uranium remediation experiment does not limit its applicability to more general redox systems. Furthermore, the ability of this model treatment to predict the isotopic composition of secondary minerals accumulated as a result of fractionating processes (item 4) offers an important means of interpreting solid phase isotopic compositions and tracking long-term stability of precipitates.
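A batch caricature of the isotope-specific Monod idea (simplified to a single Monod term; vmax and km are illustrative, while the fractionation factor 0.987 and the 8.8 to 0.5 mM sulfate range come from the abstract) shows how a constant α produces the observed δ34S enrichment of residual sulfate:

```python
R_VCDT = 0.0441626   # 34S/32S ratio of the V-CDT reference standard
ALPHA = 0.987        # kinetic fractionation factor from the abstract

def delta34S(s34, s32):
    """delta-34S in permil relative to V-CDT."""
    return (s34 / s32 / R_VCDT - 1.0) * 1000.0

# ~8.8 mM sulfate at +6.3 permil, reduced until 0.5 mM remains. Each
# isotopologue gets its own Monod-type rate; the 34S rate is scaled by ALPHA.
s32 = 8.8
s34 = s32 * R_VCDT * (1.0 + 6.3 / 1000.0)
vmax, km, dt = 0.05, 0.5, 0.01
d0 = delta34S(s34, s32)
while s32 > 0.5:
    stot = s32 + s34
    s32 -= dt * vmax * s32 / (km + stot)
    s34 -= dt * ALPHA * vmax * s34 / (km + stot)
d_final = delta34S(s34, s32)   # residual sulfate is strongly enriched in 34S
```

In this closed-system (Rayleigh-like) sketch the enrichment is somewhat larger than the flow-through column's 6.3‰ to 39.5‰, as expected, since the column continuously receives fresh influent sulfate.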

  2. Impact Forces from Tsunami-Driven Debris

    NASA Astrophysics Data System (ADS)

    Ko, H.; Cox, D. T.; Riggs, H.; Naito, C. J.; Kobayashi, M. H.; Piran Aghl, P.

    2012-12-01

    Debris driven by tsunami inundation flow has been known to be a significant threat to structures, yet we lack the constitutive equations necessary to predict debris impact force. The objective of this research project is to improve our understanding of, and predictive capabilities for, tsunami-driven debris impact forces on structures. Of special interest are shipping containers, which are virtually everywhere and which will float even when fully loaded. The forces from such debris hitting structures, for example evacuation shelters and critical port facilities such as fuel storage tanks, are currently not known. This research project focuses on the impact of flexible shipping containers on rigid columns, investigated using large-scale laboratory testing. Full-scale in-air collision experiments were conducted at Lehigh University with 20 ft shipping containers to experimentally quantify the nonlinear behavior of full scale shipping containers as they collide with structural elements. The results from the full scale experiments were used to calibrate computer models and to design a series of simpler, 1:5 scale wave flume experiments at Oregon State University. Scaled in-air collision tests were conducted using 1:5 scale idealized containers to mimic the container behavior observed in the full scale tests and to provide a direct comparison to the hydraulic model tests. Two specimens were constructed using different materials (aluminum, acrylic) to vary the stiffness. The collision tests showed that at higher speeds the collision became inelastic, as the slope of maximum impact force versus velocity decreased with increasing velocity. Hydraulic model tests were conducted using the 1:5 scaled shipping containers to measure the impact load by the containers on a rigid column. The column was instrumented with a load cell to measure impact forces, strain gages to measure the column deflection, and a video camera was used to record the debris orientation and speed. 
The tsunami was modeled as a transient pulse command signal to the wavemaker to provide a low amplitude long wave. Results are expected to show the effect of the water on the debris collision by comparing water tests with the in-air tests. It is anticipated that the water will provide some combination of added mass and cushioning of the collision. Results will be compared with proposed equations for the new ASCE-7 standard and with numerical models at the University of Hawaii.

  3. Mature thunderstorm cloud-top structure and dynamics - A three-dimensional numerical simulation study

    NASA Technical Reports Server (NTRS)

    Schlesinger, R. E.

    1984-01-01

    The present investigation is concerned with results from an initial set of comparative experiments in a project which utilizes a three-dimensional convective storm model. The modeling results presented are related to four comparative experiments, designated Cases A through D. One of two scientific questions considered involves the dynamical processes, either near the cloud top or well within the cloud interior, which contribute to organized cloud thermal patterns such as those revealed by IR satellite imagery for some storms having strong internal cloud-scale rotation. The second question is concerned with differences, in cloud-top height and temperature field characteristics, between thunderstorms with and without significant internal cloud-scale rotation. The four experiments A-D are compared with regard to both interior and cloud-top configurations in the context of the second question. A particular strong-shear experiment, Case B, is analyzed to address question one.

  4. Analogue modeling for science outreach: glacier flows at Antarctic National Museum, Italy

    NASA Astrophysics Data System (ADS)

    Zeoli, A.; Corti, G.; Folco, L.; Ossola, C.

    2012-12-01

    Comprehension of internal deformation and ice flow in the Antarctic ice sheet, in relation to the bedrock topography and to thickness variations induced by climatic changes, represents an important target for the scientific community. The analogue modelling technique aims to analyze geological or geomorphological processes through physical models built at a reduced geometrical scale in the laboratory and deformed over reasonable time scales. Corti et al. (2003 and 2008) have shown that this technique can also be used successfully for ice flow dynamics. Moreover, this technique gives a three-dimensional view of the processes. The models, which obviously simplify the geometry and rheology of natural processes, represent a geometrically, kinematically, dynamically and rheologically scaled analogue of the natural glacial environment. Following a procedure described in previous papers, proper materials have been selected to simulate the rheological behaviour of ice. In particular, during the experiments polydimethylsiloxane (PDMS) has been used to simulate glacial flow. PDMS is a transparent Newtonian silicone with a viscosity of 1.4 × 10^4 Pa s and a density of 965 kg m^-3 (see material properties in Weijermars, 1986). The scaling of the model to natural conditions allows reliable results to be obtained for a correct comparison with the glacial processes under investigation. Models have been built with a geometrical scaling ratio of ~1.5 × 10^-5, such that 1 cm in the model represents ~700 m in nature. The physical models have been deformed in the terrestrial gravity field by allowing the PDMS to flow inside a Plexiglas box. In particular, the silicone has been poured inside the Plexiglas box and allowed to settle in order to obtain a flat free surface; the box has then been inclined by a few degrees to allow the silicone to flow. 
Several boxes illustrating different glacial processes have been realized; each of them can be easily performed in a short time and in standard laboratories. One of the main aims of the Antarctic National Museum in Siena (Italy) is to establish a strategy to deliver results to a broader community. The small temporal and spatial scales of the experiments make the analogue modelling technique easy to demonstrate to non-technical audiences through direct participation during Museum visits. All these experiments engage teachers and students from primary and secondary schools as well as the general public.
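The geometric scaling quoted in the abstract (1 cm of model for roughly 700 m of nature) is just a constant ratio; a small sketch of the conversions an exhibit designer might use, with a hypothetical glacier width as the example:

```python
# Geometric scaling ratio from the abstract: model length / natural length ~ 1.5e-5.
SCALE_RATIO = 1.5e-5

def model_to_nature_m(model_length_m):
    """Natural length (m) represented by a given model length (m)."""
    return model_length_m / SCALE_RATIO

def nature_to_model_cm(natural_length_m):
    """Model length (cm) needed to represent a natural length (m)."""
    return natural_length_m * SCALE_RATIO * 100.0

one_cm_in_nature = model_to_nature_m(0.01)        # ~667 m, i.e. the ~700 m quoted
glacier_width_m = 40_000.0                        # hypothetical 40 km outlet glacier
box_cm = nature_to_model_cm(glacier_width_m)      # 60 cm: fits a museum-sized box
```

Dynamic and rheological scaling (viscosity, density, gravity) must of course be satisfied separately, which is why PDMS is chosen rather than an arbitrary fluid.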

  5. Changes and Attribution of Extreme Precipitation in Climate Models: Subdaily and Daily Scales

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Villarini, G.; Scoccimarro, E.; Vecchi, G. A.

    2017-12-01

    Extreme precipitation events are responsible for numerous hazards, including flooding, soil erosion, and landslides. Because of their significant socio-economic impacts, the attribution and projection of these events is of crucial importance to improve our response, mitigation and adaptation strategies. Here we present results from our ongoing work. In terms of attribution, we use idealized experiments [pre-industrial control experiment (PI) and 1% per year increase (1%CO2) in atmospheric CO2] from ten general circulation models produced under the Coupled Model Intercomparison Project Phase 5 (CMIP5) and the fraction of attributable risk to examine the CO2 effects on extreme precipitation at the sub-daily and daily scales. We find that the increased CO2 concentration substantially increases the odds of the occurrence of sub-daily precipitation extremes compared to the daily scale in most areas of the world, with the exception of some regions in the sub-tropics, likely in relation to the subsidence of the Hadley Cell. These results point to the large role that atmospheric CO2 plays in extreme precipitation under an idealized framework. Furthermore, we investigate the changes in extreme precipitation events with the Community Earth System Model (CESM) climate experiments using the scenarios consistent with the 1.5°C and 2°C temperature targets. We find that the frequency of annual extreme precipitation at a global scale increases in both the 1.5°C and 2°C scenarios until around 2070, after which the magnitudes of the trend become much weaker or even negative. Overall, the frequency of global annual extreme precipitation is similar between 1.5°C and 2°C for the period 2006-2035, and the changes in extreme precipitation in individual seasons are consistent with those for the entire year. The frequency of extreme precipitation in the 2°C experiments is higher than for the 1.5°C experiment after the late 2030s, particularly for the period 2071-2100.
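    The fraction of attributable risk used in such attribution studies is FAR = 1 - P0/P1, where P0 and P1 are the probabilities of exceeding a threshold in the control and forced climates. A sketch with synthetic gamma-distributed precipitation (distribution parameters are illustrative, not CMIP5 output):

```python
import numpy as np

def fraction_of_attributable_risk(p_control, p_forced):
    """FAR = 1 - P0/P1: fraction of the exceedance risk attributable to forcing."""
    return 1.0 - p_control / p_forced

def exceedance_probability(sample, threshold):
    return float(np.mean(np.asarray(sample) > threshold))

# Illustrative samples of, say, annual-maximum hourly precipitation (mm):
rng = np.random.default_rng(2)
pi_run = rng.gamma(shape=4.0, scale=10.0, size=20000)    # pre-industrial control
co2_run = rng.gamma(shape=4.0, scale=12.0, size=20000)   # forced (warmer) climate
threshold = np.quantile(pi_run, 0.99)                    # a control 1-in-100 event
far = fraction_of_attributable_risk(
    exceedance_probability(pi_run, threshold),
    exceedance_probability(co2_run, threshold),
)
```

A FAR of, say, 0.7 would be read as: 70% of the occurrences of events beyond the control-climate threshold are attributable to the imposed forcing.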

  6. A dynamic multi-scale Markov model based methodology for remaining life prediction

    NASA Astrophysics Data System (ADS)

    Yan, Jihong; Guo, Chaozhong; Wang, Xing

    2011-05-01

    The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model, avoiding the uncertainty in state division caused by hard-division approaches. To account for the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model via a weighted coefficient. Multi-scale theory is employed to solve the state-division problem in multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed was designed to validate the model; the results illustrate the effectiveness of the methodology.
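
    The FCM-based state division and the count-based Markov step can be sketched as follows. This is a toy re-implementation under my own naming (1-D index, three states, quantile initialization), not the authors' code:

    ```python
    import numpy as np

    def fcm_states(x, n_states=3, m=2.0, iters=50):
        """Fuzzy C-Means on a 1-D degradation index; returns hard state
        labels (0 = least degraded) derived from the fuzzy memberships."""
        c = np.quantile(x, np.linspace(0, 1, n_states))      # initial centers
        for _ in range(iters):
            d = np.abs(x[:, None] - c[None, :]) + 1e-12      # sample-center distances
            u = 1.0 / (d**(2/(m-1)) * (d**(-2/(m-1))).sum(axis=1, keepdims=True))
            c = (u**m).T @ x / (u**m).sum(axis=0)            # membership-weighted centers
        ranks = np.argsort(np.argsort(c))                    # relabel so states are ordered
        return ranks[np.argmax(u, axis=1)]

    def transition_matrix(states, n_states):
        """Count-based Markov transition probabilities from a state sequence."""
        P = np.zeros((n_states, n_states))
        for a, b in zip(states[:-1], states[1:]):
            P[a, b] += 1
        rows = P.sum(axis=1, keepdims=True)
        return np.divide(P, rows, out=np.full_like(P, 1/n_states), where=rows > 0)
    ```

    The fuzzy memberships avoid the sharp, arbitrary state boundaries of a hard division; remaining-life prediction would then iterate the transition matrix from the current state until the failure state dominates.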

  7. Comparing field investigations with laboratory models to predict landfill leachate emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fellner, Johann; Doeberl, Gernot; Allgaier, Gerhard

    2009-06-15

    Laboratory reactor experiments and landfill investigations are used for simulating and predicting emissions from municipal solid waste landfills. We examined water flow and solute transport through the same waste body at different volumetric scales (laboratory experiment: 0.08 m³; landfill: 80,000 m³), and assessed the differences in water flow and leachate emissions of chloride, total organic carbon and Kjeldahl nitrogen. The results indicate that, due to preferential pathways, the flow of water in field-scale landfills is less uniform than in laboratory reactors. Based on tracer experiments, it can be discerned that in laboratory-scale experiments around 40% of the pore water participates in advective solute transport, whereas this fraction amounts to less than 0.2% in the investigated full-scale landfill. Consequences of this difference in water flow and moisture distribution are: (1) leachate emissions from full-scale landfills decrease faster than predicted by laboratory experiments, and (2) the stock of materials remaining in the landfill body, and thus the long-term emission potential, is likely to be underestimated by laboratory landfill simulations.

  8. A Simple Laboratory Scale Model of Iceberg Dynamics and its Role in Undergraduate Education

    NASA Astrophysics Data System (ADS)

    Burton, J. C.; MacAyeal, D. R.; Nakamura, N.

    2011-12-01

    Lab-scale models of geophysical phenomena have a long history in research and education. For example, at the University of Chicago, Dave Fultz developed laboratory-scale models of atmospheric flows. The results from his laboratory were so stimulating that similar laboratories were subsequently established at a number of other institutions. Today, the Dave Fultz Memorial Laboratory for Hydrodynamics (http://geosci.uchicago.edu/~nnn/LAB/) teaches general circulation of the atmosphere and oceans to hundreds of students each year. Following this tradition, we have constructed a lab model of iceberg-capsize dynamics for use in the Fultz Laboratory, which focuses on the interface between glaciology and physical oceanography. The experiment consists of a 2.5 meter long wave tank containing water and plastic "icebergs". The motion of the icebergs is tracked using digital video. Movies can be found at: http://geosci.uchicago.edu/research/glaciology_files/tsunamigenesis_research.shtml. We have had three successful undergraduate interns with backgrounds in mathematics, engineering, and geosciences perform experiments, analyze data, and interpret results. In addition to iceberg dynamics, the wave tank has served as a teaching tool in undergraduate classes studying dam-breaking and tsunami run-up. Motivated by the relatively inexpensive cost of our apparatus (~$1,000-$2,000) and the positive experiences of undergraduate students, we hope to serve as a model for undergraduate research and education that other universities may follow.

  9. Toward seamless hydrologic predictions across spatial scales

    NASA Astrophysics Data System (ADS)

    Samaniego, Luis; Kumar, Rohini; Thober, Stephan; Rakovec, Oldrich; Zink, Matthias; Wanders, Niko; Eisner, Stephanie; Müller Schmied, Hannes; Sutanudjaja, Edwin H.; Warrach-Sagi, Kirsten; Attinger, Sabine

    2017-09-01

    Land surface and hydrologic models (LSMs/HMs) are used at diverse spatial resolutions ranging from catchment-scale (1-10 km) to global-scale (over 50 km) applications. Applying the same model structure at different spatial scales requires that the model estimates similar fluxes independent of the chosen resolution, i.e., fulfills a flux-matching condition across scales. An analysis of state-of-the-art LSMs and HMs reveals that most do not have consistent hydrologic parameter fields. Multiple experiments with the mHM, Noah-MP, PCR-GLOBWB, and WaterGAP models demonstrate the pitfalls of deficient parameterization practices currently used in most operational models, which are insufficient to satisfy the flux-matching condition. These examples demonstrate that J. Dooge's 1982 statement on the unsolved problem of parameterization in these models remains true. Based on a review of existing parameter regionalization techniques, we postulate that the multiscale parameter regionalization (MPR) technique offers a practical and robust method that provides consistent (seamless) parameter and flux fields across scales. Herein, we develop a general model protocol to describe how MPR can be applied to a particular model and present an example application using the PCR-GLOBWB model. Finally, we discuss potential advantages and limitations of MPR in obtaining the seamless prediction of hydrological fluxes and states across spatial scales.
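
    The core idea of MPR can be sketched in a few lines. The snippet below is a schematic under assumed names and an invented linear transfer function, not the mHM implementation: a parameter is first computed from high-resolution terrain attributes via a transfer function with scale-independent global coefficients, and only then upscaled to the model grid with a suitable operator (here, a block harmonic mean):

    ```python
    import numpy as np

    def mpr_parameter_field(sand_frac, factor, a=10.0, b=40.0):
        """MPR sketch: (1) evaluate a transfer function on the fine attribute
        grid, (2) upscale the resulting parameter to the model resolution.
        The global coefficients (a, b) are what calibration adjusts; they
        stay the same regardless of the chosen model resolution."""
        k_fine = a + b * sand_frac                     # hypothetical transfer function
        n0 = (sand_frac.shape[0] // factor) * factor   # trim to multiples of factor
        n1 = (sand_frac.shape[1] // factor) * factor
        blocks = k_fine[:n0, :n1].reshape(n0 // factor, factor, n1 // factor, factor)
        return factor**2 / (1.0 / blocks).sum(axis=(1, 3))   # block harmonic mean
    ```

    Because the coefficients, not the parameter fields, are calibrated, regridding to a coarser or finer model resolution reuses the same (a, b) — which is what makes near flux-matching across scales attainable.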

  10. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    NASA Astrophysics Data System (ADS)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms play a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales, from the deeper subsurface, including groundwater dynamics, into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and surface-subsurface flow models in supercomputing environments, while ensuring a high degree of efficiency in the utilization of, e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis, including profiling and tracing, is crucial for understanding runtime behavior, identifying optimum model settings, and uncovering potential parallel deficiencies. On leadership-class supercomputers, such as the 28-rack, 5.9-petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but all the more important when complex coupled component models are to be analysed. Here we present our experience with coupling, application tuning (e.g., a 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service, the Community Land Model (CLM) of NCAR, and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model, in which the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models.
TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed on JUQUEEN with processor counts on the order of 10,000. The instrumentation is used in weak and strong scaling studies with real data cases and hypothetical idealized numerical experiments for detailed profiling and tracing analysis. The profiling is not only useful in identifying wait states that are due to the MPMD execution model, but also in fine-tuning resource allocation to the component models in search of the most suitable load balancing. This is especially necessary, as with numerical experiments that cover multiple (high resolution) spatial scales, the time stepping, coupling frequencies, and communication overheads are constantly shifting, which makes it necessary to re-determine the model setup with each new experimental design.

  11. Wafer-scale fabrication of glass-FEP-glass microfluidic devices for lipid bilayer experiments.

    PubMed

    Bomer, Johan G; Prokofyev, Alexander V; van den Berg, Albert; Le Gac, Séverine

    2014-12-07

    We report a wafer-scale fabrication process for the production of glass-FEP-glass microdevices using UV-curable adhesive (NOA81) as gluing material, which is applied using a novel "spin & roll" approach. Devices are characterized for the uniformity of the gluing layer, presence of glue in the microchannels, and alignment precision. Experiments on lipid bilayers with electrophysiological recordings using a model pore-forming polypeptide are demonstrated.

  12. What controls deposition rate in electron-beam chemical vapor deposition?

    PubMed

    White, William B; Rykaczewski, Konrad; Fedorov, Andrei G

    2006-08-25

    The key physical processes governing electron-beam-assisted chemical vapor deposition are analyzed via a combination of theoretical modeling and supporting experiments. The scaling laws that define growth of the nanoscale deposits are developed and verified using carefully designed experiments of carbon deposition from methane onto a silicon substrate. The results suggest that the chamber-scale continuous transport of the precursor gas is the rate controlling process in electron-beam chemical vapor deposition.

  13. Monodisperse granular flows in viscous dispersions in a centrifugal acceleration field

    NASA Astrophysics Data System (ADS)

    Cabrera, Miguel Angel; Wu, Wei

    2016-04-01

    Granular flows are encountered in geophysical settings and in innumerable industrial applications involving particulate materials. When mixed with a fluid, a complex network of interactions between the particle and fluid phases develops, resulting in a compound material whose physical behaviour remains poorly understood. In the study of granular suspensions mixed with a viscous dispersion, the scaling of the stress-strain characteristics of the fluid phase needs to account for the level of inertia developed in experiments. However, the required model dimensions and amount of material become a major limitation for their study. In recent years, centrifuge modelling has been presented as an alternative for studying particle-fluid flows in a reduced-scale model under an augmented acceleration field. By formulating simple scaling principles proportional to the equivalent acceleration Ng in the model, the resultant flows share many similarities with field events. In this work we study the scaling principles of the fluid phase and its effects on the flow of granular suspensions. We focus on the dense flow of a monodisperse granular suspension mixed with a viscous fluid phase, flowing down an inclined plane and driven by a centrifugal acceleration field. The scaled model allows continuous monitoring of flow heights, velocity fields, basal pressure and mass flow rates at different Ng levels. The experiments successfully identify the effects of scaling the plastic viscosity of the fluid phase and its relation to the deposition of particles on the inclined plane, and support a discussion of the suitability of simulating particle-fluid flows in a centrifugal acceleration field.

  14. Comparative Modeling Studies of Boreal Water and Carbon Balance

    NASA Technical Reports Server (NTRS)

    Coughlan, J.; Peterson, David L. (Technical Monitor)

    1997-01-01

    The coordination of the modeling and field efforts for an Intensive Field Campaign (IFC) may resemble the chicken-and-egg dilemma. This session's theme advocates that early and proactive involvement by modeling teams can produce scientific and operational benefits for the IFC and the experiment. This talk provides examples and suggestions originating from the NASA-funded IFCs of FIFE (the First ISLSCP [International Satellite Land Surface Climatology Project] Field Experiment), the Oregon Transect Ecosystem Research (OTTER) project and, predominantly, the Boreal Ecosystem-Atmosphere Study (BOREAS). In February 1994, prior to the final selection of the BOREAS study sites, a group of funded BOREAS investigators agreed to run their models with data for five community types representing the proposed tower flux sites. All participating models were given identical initial values and boundary conditions and driven with identical climate data. The objectives of the intercomparison exercise were to: 1) compare simulation results of participating terrestrial, hydrological, and atmospheric models over selected time frames; 2) learn about model behavior and sensitivity to estimated boreal site and vegetation definitions; 3) prioritize BOREAS field data collection efforts supporting modeling studies; and 4) identify individual model deficiencies as early as possible. Out of these objectives evolved some important coordination and science issues for the BOREAS experiment that can be generalized to IFCs and to long-term archiving of the data. Some problems are acceptable because they are endemic to maintaining fair and open competition prior to the peer review process. Others are logistical and addressable through the application of planning, management, and information sciences. This investigator has identified one source of measurement and model incompatibility that is manifest in the IFC scaling approach. 
Although intuitively obvious, scaling problems are already more formally defined in the Geography literature. An example of the scaling problem will be demonstrated with Vegetation/Ecosystem Mapping and Analysis Project (VEMAP) and OTTER data.

  15. Application of the SCALE TSUNAMI Tools for the Validation of Criticality Safety Calculations Involving 233U

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Don; Rearden, Bradley T; Hollenbach, Daniel F

    2009-02-01

    The Radiochemical Development Facility at Oak Ridge National Laboratory has been storing solid materials containing 233U for decades. Preparations are under way to process these materials into a form that is inherently safe from a nuclear criticality safety perspective. This will be accomplished by down-blending the 233U materials with depleted or natural uranium. At the request of the U.S. Department of Energy, a study has been performed using the SCALE sensitivity and uncertainty analysis tools to demonstrate how these tools could be used to validate nuclear criticality safety calculations of selected process and storage configurations. ISOTEK nuclear criticality safety staff provided four models that are representative of the criticality safety calculations for which validation will be needed. The SCALE TSUNAMI-1D and TSUNAMI-3D sequences were used to generate energy-dependent keff sensitivity profiles for each nuclide and reaction present in the four safety analysis models, also referred to as the applications, and in a large set of critical experiments. The SCALE TSUNAMI-IP module was used together with the sensitivity profiles and the cross-section uncertainty data contained in the SCALE covariance data files to propagate the cross-section uncertainties (Δσ/σ) to keff uncertainties (Δk/k) for each application model. The SCALE TSUNAMI-IP module was also used to evaluate the similarity of each of the 672 critical experiments with each application. Results of the uncertainty analysis and similarity assessment are presented in this report. A total of 142 experiments were judged to be similar to application 1, and 68 experiments were judged to be similar to application 2. None of the 672 experiments were judged to be adequately similar to applications 3 and 4. Discussion of the uncertainty analysis and similarity assessment is provided for each of the four applications. Example upper subcritical limits (USLs) were generated for application 1 based on trending of the energy of average lethargy of neutrons causing fission, trending of the TSUNAMI similarity parameters, and the use of data adjustment techniques.

  16. Reduced Complexity Modelling of Urban Floodplain Inundation

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Brasington, J.; Mihir, M.

    2004-12-01

    Significant recent advances in floodplain inundation modelling have been achieved by directly coupling 1d channel hydraulic models with a raster storage cell approximation for floodplain flows. The strengths of this reduced-complexity model structure derive from its explicit dependence on a digital elevation model (DEM) to parameterize flows through riparian areas, providing a computationally efficient algorithm to model heterogeneous floodplains. Previous applications of this framework have generally used mid-range grid scales (10¹-10² m), showing the capacity of the models to simulate long reaches (10³-10⁴ m). However, the increasing availability of precision DEMs derived from airborne laser altimetry (LIDAR) enables their use at very high spatial resolutions (10⁰-10¹ m). This spatial scale offers the opportunity to incorporate the complexity of the built environment directly within the floodplain DEM and simulate urban flooding. This poster describes a series of experiments designed to explore model functionality at these reduced scales. Important questions are considered, raised by this new approach, about the reliability and representation of the floodplain topography and built environment, and the resultant sensitivity of inundation forecasts. The experiments apply a raster floodplain model to reconstruct a 1:100 year flood event on the River Granta in eastern England, which flooded 72 properties in the town of Linton in October 2001. The simulations use a nested-scale model to maintain efficiency. A 2km by 4km urban zone is represented by a high-resolution DEM derived from single-pulse LIDAR data supplied by the UK Environment Agency, together with surveyed data and aerial photography. Novel methods of processing the raw data to provide the individual structure detail required are investigated and compared. 
This is then embedded within a lower-resolution model application at the reach scale which provides boundary conditions based on recorded flood stage. The high resolution predictions on a scale commensurate with urban structures make possible a multi-criteria validation which combines verification of reach-scale characteristics such as downstream flow and inundation extent with internal validation of flood depth at individual sites.
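
    The raster storage-cell approach can be illustrated with a minimal explicit update step. This is a simplified sketch of the general scheme (grid, Manning coefficient value, and function name are illustrative assumptions, not the authors' configuration): each DEM cell stores a water depth, and Manning-type unit-width fluxes between 4-connected neighbours are driven by the water-surface slope:

    ```python
    import numpy as np

    def storage_cell_step(z, h, dt, dx, n=0.03):
        """One explicit update of a raster storage-cell inundation model:
        z is the DEM (m), h the water depth (m). Fluxes across each cell
        face follow a Manning-type law on the water-surface slope."""
        eta = z + h                                   # water-surface elevation
        dh = np.zeros_like(h)
        for ax in (0, 1):                             # y- then x-direction faces
            lo = (slice(None, -1), slice(None)) if ax == 0 else (slice(None), slice(None, -1))
            hi = (slice(1, None), slice(None)) if ax == 0 else (slice(None), slice(1, None))
            slope = (eta[lo] - eta[hi]) / dx
            hflow = np.maximum(np.maximum(eta[lo], eta[hi]) - np.maximum(z[lo], z[hi]), 0.0)
            q = hflow**(5/3) * np.sign(slope) * np.sqrt(np.abs(slope)) / n   # m^2/s
            dh[lo] -= q * dt / dx                     # donor cell loses volume
            dh[hi] += q * dt / dx                     # receiver cell gains it
        return np.maximum(h + dh, 0.0)
    ```

    Because only depths and pairwise fluxes are stored, the scheme scales readily to the high-resolution urban DEMs discussed above; in practice the time step must be kept small enough for stability.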

  17. A methodology for least-squares local quasi-geoid modelling using a noisy satellite-only gravity field model

    NASA Astrophysics Data System (ADS)

    Klees, R.; Slobbe, D. C.; Farahani, H. H.

    2018-04-01

    The paper is about a methodology to combine a noisy satellite-only global gravity field model (GGM) with other noisy datasets to estimate a local quasi-geoid model using weighted least-squares techniques. In this way, we attempt to improve the quality of the estimated quasi-geoid model and to complement it with a full noise covariance matrix for quality control and further data processing. The methodology goes beyond the classical remove-compute-restore approach, which does not account for the noise in the satellite-only GGM. We suggest and analyse three different approaches of data combination. Two of them are based on a local single-scale spherical radial basis function (SRBF) model of the disturbing potential, and one is based on a two-scale SRBF model. Using numerical experiments, we show that a single-scale SRBF model does not fully exploit the information in the satellite-only GGM. We explain this by a lack of flexibility of a single-scale SRBF model to deal with datasets of significantly different bandwidths. The two-scale SRBF model performs well in this respect, provided that the model coefficients representing the two scales are estimated separately. The corresponding methodology is developed in this paper. Using the statistics of the least-squares residuals and the statistics of the errors in the estimated two-scale quasi-geoid model, we demonstrate that the developed methodology provides a two-scale quasi-geoid model, which exploits the information in all datasets.
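
    In a generic weighted least-squares combination of this kind (a schematic formulation, not the paper's exact functional model), each dataset contributes its own observation equation and noise covariance, and the satellite-only GGM enters as one such dataset rather than being removed deterministically:

    ```latex
    \begin{aligned}
      y_i &= A_i\,x + e_i, \qquad e_i \sim \mathcal{N}(0, C_i), \qquad i = 1,\dots,p,\\
      \hat{x} &= \Big(\sum_{i=1}^{p} A_i^{\mathsf T} C_i^{-1} A_i\Big)^{-1}
                 \sum_{i=1}^{p} A_i^{\mathsf T} C_i^{-1} y_i,\\
      C_{\hat{x}} &= \Big(\sum_{i=1}^{p} A_i^{\mathsf T} C_i^{-1} A_i\Big)^{-1},
    \end{aligned}
    ```

    where x collects the SRBF coefficients (in the two-scale model, the coarse- and fine-scale coefficients estimated separately, as the paper proposes) and the covariance of the estimate propagates into the full noise covariance matrix of the quasi-geoid used for quality control.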

  18. Scale-dependent compensational stacking of channelized sedimentary deposits

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Straub, K. M.; Hajek, E. A.

    2010-12-01

    Compensational stacking, the tendency of a sediment transport system to preferentially fill topographic lows and thus smooth out topographic relief, is a concept used in the interpretation of the stratigraphic record. Recently, a metric was developed to quantify the strength of compensation in sedimentary basins by comparing observed stacking patterns to what would be expected from simple, uncorrelated stacking. This method uses the rate of decay of spatial variability in sedimentation between picked depositional horizons as the vertical stratigraphic averaging distance increases. We explore how this metric varies as a function of stratigraphic scale using data from physical experiments, stratigraphy exposed in outcrops and numerical models. In an experiment conducted at Tulane University's Sediment Dynamics Laboratory, the topography of a channelized delta formed by weakly cohesive sediment was monitored along flow-perpendicular transects at a high temporal resolution relative to channel kinematics. Over the course of this experiment, a uniform relative subsidence pattern, designed to isolate autogenic processes, resulted in the construction of a stratigraphic package 25 times as thick as the depth of the experimental channels. We observe a scale dependence in the compensational stacking of deposits set by the system's avulsion time-scale. Above the avulsion time-scale, deposits stack purely compensationally, but below this time-scale deposits stack somewhere between randomly and deterministically. The well-exposed Ferris Formation (Cretaceous/Paleogene, Hanna Basin, Wyoming, USA) also shows scale-dependent stratigraphic organization, which appears to be set by an avulsion time-scale. Finally, we utilize simple object-based models to illustrate how channel avulsions influence compensation in alluvial basins.
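
    The compensation metric described above — the decay of spatial variability in sedimentation with vertical averaging distance — can be sketched as follows. This is a schematic re-implementation under my own naming, not the published code; the exponent κ ≈ 0.5 indicates uncorrelated stacking, while κ = 1 indicates purely compensational stacking:

    ```python
    import numpy as np

    def compensation_exponent(depo, windows):
        """Estimate the compensation index kappa from a (time x locations)
        array of per-interval deposit thicknesses: the standard deviation of
        basin-normalized sedimentation rates, sigma_ss(T), decays ~ T**-kappa."""
        sigmas = []
        for T in windows:
            n = depo.shape[0] // T
            rates = depo[:n * T].reshape(n, T, -1).sum(axis=1) / T   # rate per window
            rel = rates / rates.mean(axis=1, keepdims=True)          # normalize by basin mean
            sigmas.append(rel.std(axis=1).mean())                    # spatial variability
        slope, _ = np.polyfit(np.log(windows), np.log(sigmas), 1)
        return -slope
    ```

    Applied to uncorrelated random deposition, the fitted exponent sits near 0.5; deposits that actively fill lows lose spatial variability faster, approaching κ = 1 above the avulsion time-scale, which is the scale dependence the abstract reports.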

  19. Development of a Shipboard Remote Control and Telemetry Experimental System for Large-Scale Model’s Motions and Loads Measurement in Realistic Sea Waves

    PubMed Central

    Jiao, Jialong; Ren, Huilong; Adenya, Christiaan Adika; Chen, Chaohe

    2017-01-01

    Wave-induced motion and load responses are important criteria for ship performance evaluation. Physical experiments have long been an indispensable tool in the prediction of a ship's navigation state, speed, motions, accelerations, sectional loads and wave impact pressure. Currently, the majority of experiments are conducted in laboratory tanks, where the wave environments differ from realistic sea waves. In this paper, a laboratory tank testing system for ship motion and load measurement is reviewed first. Then, a novel large-scale model measurement technique is developed on these laboratory testing foundations to obtain accurate motion and load responses of ships in realistic sea conditions. For this purpose, an advanced remote control and telemetry experimental system was developed in-house to allow the implementation of large-scale model seakeeping measurements at sea. The experimental system includes a series of sensors, e.g., a Global Position System/Inertial Navigation System (GPS/INS) module, course top, optical fiber sensors, strain gauges, pressure sensors and accelerometers. The developed measurement system was tested in field experiments in coastal seas, which indicate that the proposed large-scale model testing scheme is capable and feasible. Meaningful data, including ocean environment parameters, ship navigation state, motions and loads, were obtained through the sea trial campaign. PMID:29109379

  20. Crustal evolution inferred from Apollo magnetic measurements

    NASA Technical Reports Server (NTRS)

    Dyal, P.; Daily, W. D.; Vanian, L. L.

    1978-01-01

    The topology of lunar remanent fields is investigated by analyzing simultaneous magnetometer and solar wind spectrometer data. The diffusion model proposed by Vanyan (1977) to describe the field-plasma interaction at the lunar surface is extended to describe the interaction with fields characterized by two scale lengths, and the extended model is compared with data from three Apollo landing sites (Apollo 12, 15 and 16) with crustal fields of differing intensity and topology. Local remanent field properties from this analysis are compared with high spatial resolution magnetic maps obtained from the electron reflection experiment. It is concluded that remanent fields over most of the lunar surface are characterized by spatial variations as small as a few kilometers. Large regions (50 to 100 km) of the lunar crust were probably uniformly magnetized early in the evolution of the crust. Smaller scale (5 to 10 km) magnetic sources close to the surface were left by bombardment and subsequent gardening of the upper layers of these magnetized regions. The small scale sized remanent fields of about 100 gammas are measured by surface experiments, whereas the larger scale sized fields of about 0.1 gammas are measured by the orbiting subsatellite experiments.

  1. Cold dark matter and degree-scale cosmic microwave background anisotropy statistics after COBE

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.; Stompor, Radoslaw; Juszkiewicz, Roman

    1993-01-01

    We conduct a Monte Carlo simulation of the cosmic microwave background (CMB) anisotropy in the UCSB South Pole 1991 degree-scale experiment. We examine cold dark matter cosmology with large-scale structure seeded by the Harrison-Zel'dovich hierarchy of Gaussian-distributed primordial inhomogeneities, normalized to the COBE-DMR measurement of large-angle CMB anisotropy. We find it statistically implausible (in the sense of a low cumulative probability, F < 5 percent, of not measuring a cosmological delta-T/T signal) that the degree-scale cosmological CMB anisotropy predicted in such models could have escaped detection at the level of sensitivity achieved in the South Pole 1991 experiment.

  2. Evaluating the Global Precipitation Measurement mission with NOAA/NSSL Multi-Radar Multisensor: current status and future directions.

    NASA Astrophysics Data System (ADS)

    Kirstetter, P. E.; Petersen, W. A.; Gourley, J. J.; Kummerow, C. D.; Huffman, G. J.; Turk, J.; Tanelli, S.; Maggioni, V.; Anagnostou, E. N.; Hong, Y.; Schwaller, M.

    2016-12-01


  3. Visualizing and measuring flow in shale matrix using in situ synchrotron X-ray microtomography

    NASA Astrophysics Data System (ADS)

    Kohli, A. H.; Kiss, A. M.; Kovscek, A. R.; Bargar, J.

    2017-12-01

    Natural gas production via hydraulic fracturing of shale has proliferated on a global scale, yet recovery factors remain low because production strategies are not based on the physics of flow in shale reservoirs. In particular, the physical mechanisms and time scales of depletion from the matrix into the simulated fracture network are not well understood, limiting the potential to optimize operations and reduce environmental impacts. Studying matrix flow is challenging because shale is heterogeneous and has porosity from the μm- to nm-scale. Characterizing nm-scale flow paths requires electron microscopy but the limited field of view does not capture the connectivity and heterogeneity observed at the mm-scale. Therefore, pore-scale models must link to larger volumes to simulate flow on the reservoir-scale. Upscaled models must honor the physics of flow, but at present there is a gap between cm-scale experiments and μm-scale simulations based on ex situ image data. To address this gap, we developed a synchrotron X-ray microscope with an in situ cell to simultaneously visualize and measure flow. We perform coupled flow and microtomography experiments on mm-scale samples from the Barnett, Eagle Ford and Marcellus reservoirs. We measure permeability at various pressures via the pulse-decay method to quantify effective stress dependence and the relative contributions of advective and diffusive mechanisms. Images at each pressure step document how microfractures, interparticle pores, and organic matter change with effective stress. Linking changes in the pore network to flow measurements motivates a physical model for depletion. To directly visualize flow, we measure imbibition rates using inert, high atomic number gases and image periodically with monochromatic beam. By imaging above/below X-ray adsorption edges, we magnify the signal of gas saturation in μm-scale porosity and nm-scale, sub-voxel features. 
Comparing vacuumed and saturated states yields image-based measurements of the distribution and time scales of imbibition. We also characterize nm-scale structure via focused ion beam tomography to quantify sub-voxel porosity and connectivity. The multi-scale image and flow data are used to develop a framework to upscale and benchmark pore-scale models.
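The pulse-decay measurement mentioned in the abstract reduces, in its simplest form, to fitting an exponential decay of the differential pressure across the sample. A minimal sketch, assuming the standard Brace-type single-exponential solution; all parameter values below are illustrative, not taken from the record:

```python
import numpy as np

def pulse_decay_permeability(t, dp, mu, c, L, A, V_up, V_down):
    """Permeability from the exponential decay of the differential pressure
    dp(t) across the sample (Brace-type single-exponential solution)."""
    alpha = -np.polyfit(t, np.log(dp), 1)[0]   # decay rate [1/s] from a log-linear fit
    return alpha * mu * c * L / (A * (1.0 / V_up + 1.0 / V_down))

# Synthetic check: generate a decay with a known permeability and recover it.
mu, c, L, A = 2e-5, 1e-8, 0.01, 1e-4   # viscosity [Pa s], compressibility [1/Pa], length [m], area [m^2]
V_up = V_down = 1e-5                   # upstream/downstream reservoir volumes [m^3]
k_true = 1e-19                         # ~0.1 microdarcy, shale-like
alpha = k_true * A / (mu * c * L) * (1.0 / V_up + 1.0 / V_down)
t = np.linspace(0.0, 3600.0, 200)
dp = 1.0e5 * np.exp(-alpha * t)
k_est = pulse_decay_permeability(t, dp, mu, c, L, A, V_up, V_down)
```

In real data the early-time transient would be excluded before the log-linear fit, and gas slippage corrections may be needed at low pressure.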

  4. Experiences Integrating Transmission and Distribution Simulations for DERs with the Integrated Grid Modeling System (IGMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias

    2016-08-11

    This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model 100s to 1000s of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.

  5. Unifying Pore Network Modeling, Continuous Time Random Walk (CTRW) Theory and Experiment to Describe Impact of Spatial Heterogeneities on Solute Dispersion at Multiple Length-scales

    NASA Astrophysics Data System (ADS)

    Bijeljic, B.; Blunt, M. J.; Rhodes, M. E.

    2009-04-01

    This talk will describe and highlight the advantages offered by a novel methodology that unifies pore network modeling, CTRW theory and experiment in the description of solute dispersion in porous media. Solute transport in a porous medium is characterized by the interplay of advection and diffusion (described by the Peclet number, Pe) that causes dispersion of solute particles. Dispersion is traditionally described by dispersion coefficients, D, that are commonly calculated from the spatial moments of the plume. Using a pore-scale network model based on particle tracking, the rich Peclet-number dependence of the dispersion coefficient is predicted from first principles and is shown to compare well with experimental data for the restricted diffusion, transition, power-law and mechanical dispersion regimes in the asymptotic limit. In the asymptotic limit D is constant and can be used in an averaged advection-dispersion equation. However, it is highly important to recognize that, until the velocity field is fully sampled, the particle transport is non-Gaussian and D possesses temporal or spatial variation. Furthermore, temporal probability density functions (PDFs) of tracer particles are studied in pore networks, and an excellent agreement for the spectrum of transition times for particles from pore to pore is obtained between network model results and CTRW theory. Based on the truncated power-law interpretation of the PDFs, the physical origin of the power-law scaling of dispersion coefficient vs. Peclet number has been explained for unconsolidated porous media, sands and a number of sandstones, arriving at the same conclusion from numerical network modelling, analytic CTRW theory and experiment. The length traveled by solute plumes before Gaussian behaviour is reached increases with an increase in heterogeneity and/or Pe. 
This opens up the question of the nature of dispersion in natural systems, where heterogeneities at larger scales will significantly increase the range of velocities in the reservoir, thus significantly delaying the asymptotic approach to Gaussian behaviour. As a consequence, asymptotic behaviour might not be reached at the field scale. This is illustrated by a multi-scale approach in which transport at the core, gridblock and field scales is viewed as a series of particle transitions between discrete nodes governed by probability distributions. At each scale of interest, a distribution that represents the transport physics (and the heterogeneity) is used as input to the model at the subsequent reservoir scale. Extensions to reactive transport are discussed.
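The dispersion coefficients discussed above are conventionally estimated from the growth of the plume's second spatial moment, D = (1/2) d(sigma^2)/dt. A minimal random-walk sketch of that Gaussian, asymptotic baseline with illustrative parameters; the pore-network and CTRW results in the record describe departures from this behaviour at pre-asymptotic times:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 20000, 1.0, 500
v, Dm = 1.0, 0.1               # mean pore velocity and local dispersion coefficient
x = np.zeros(n)
t = dt * np.arange(1, steps + 1)
var = np.empty(steps)
for i in range(steps):
    # advective drift plus an uncorrelated Gaussian (Fickian) step
    x += v * dt + np.sqrt(2.0 * Dm * dt) * rng.standard_normal(n)
    var[i] = x.var()

# Asymptotic dispersion coefficient from the second spatial moment: D = (1/2) d(var)/dt
D_est = 0.5 * np.polyfit(t, var, 1)[0]
```

In a CTRW setting the waiting-time distribution would have a power-law tail, and the variance would grow non-linearly in time instead of giving a constant slope.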

  6. High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6

    DOE PAGES

    Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; ...

    2016-11-22

    Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. 
The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950–2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, and the connection with the other CMIP6-endorsed MIPs, as well as the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, “what are the origins and consequences of systematic model biases?”, but we also discuss how it addresses the World Climate Research Programme (WCRP) grand challenges.

  7. Can tonne-scale direct detection experiments discover nuclear dark matter?

    NASA Astrophysics Data System (ADS)

    Butcher, Alistair; Kirk, Russell; Monroe, Jocelyn; West, Stephen M.

    2017-10-01

    Models of nuclear dark matter propose that the dark sector contains large composite states consisting of dark nucleons, in analogy to Standard Model nuclei. We examine the direct detection phenomenology of a particular class of nuclear dark matter model at the current generation of tonne-scale liquid noble experiments, in particular DEAP-3600 and XENON1T. In our chosen nuclear dark matter scenario, distinctive features arise in the recoil energy spectra due to the non-point-like nature of the composite dark matter state. We calculate the number of events required to distinguish these spectra from those of a standard point-like WIMP state with a decaying exponential recoil spectrum. In the most favourable regions of nuclear dark matter parameter space, we find that a few tens of events are needed to distinguish nuclear dark matter from WIMPs at the 3 σ level in a single experiment. Given the total exposure time of DEAP-3600 and XENON1T we find that at best a 2 σ distinction is possible by these experiments individually, while 3 σ sensitivity is reached for a range of parameters by the combination of the two experiments. We show that future upgrades of these experiments have the potential to distinguish a large range of nuclear dark matter models from that of a WIMP at greater than 3 σ.
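A back-of-the-envelope version of the event-count question posed above: given two normalized recoil-energy spectra, the expected chi-squared separation per event sets how many events are needed for a 3 σ distinction. The spectral shapes below are invented stand-ins, not the paper's nuclear dark matter model:

```python
import numpy as np

E = np.linspace(1.0, 50.0, 500)                     # recoil energy grid [keV]
wimp = np.exp(-E / 15.0)                            # plain falling exponential (WIMP-like)
ndm = np.exp(-E / 15.0) * np.sinc(E / 30.0) ** 2    # hypothetical extended-state suppression

# Normalize both shapes to unit probability over the analysis window.
dE = E[1] - E[0]
wimp /= wimp.sum() * dE
ndm /= ndm.sum() * dE

# Expected chi^2 separation per event between the shapes; ~9 units (~3 sigma
# for one parameter) fixes the number of events needed to tell them apart.
chi2_per_event = ((ndm - wimp) ** 2 / wimp).sum() * dE
n_events = 9.0 / chi2_per_event
```

With these invented shapes the estimate lands in the tens of events, the same order as the abstract quotes; a real analysis would use the experiments' efficiency curves and a full likelihood.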

  8. Can tonne-scale direct detection experiments discover nuclear dark matter?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butcher, Alistair; Kirk, Russell; Monroe, Jocelyn

    Models of nuclear dark matter propose that the dark sector contains large composite states consisting of dark nucleons in analogy to Standard Model nuclei. We examine the direct detection phenomenology of a particular class of nuclear dark matter model at the current generation of tonne-scale liquid noble experiments, in particular DEAP-3600 and XENON1T. In our chosen nuclear dark matter scenario distinctive features arise in the recoil energy spectra due to the non-point-like nature of the composite dark matter state. We calculate the number of events required to distinguish these spectra from those of a standard point-like WIMP state with a decaying exponential recoil spectrum. In the most favourable regions of nuclear dark matter parameter space, we find that a few tens of events are needed to distinguish nuclear dark matter from WIMPs at the 3 σ level in a single experiment. Given the total exposure time of DEAP-3600 and XENON1T we find that at best a 2 σ distinction is possible by these experiments individually, while 3 σ sensitivity is reached for a range of parameters by the combination of the two experiments. We show that future upgrades of these experiments have potential to distinguish a large range of nuclear dark matter models from that of a WIMP at greater than 3 σ.

  9. Understanding hydraulic fracturing: a multi-scale problem.

    PubMed

    Hyman, J D; Jiménez-Martínez, J; Viswanathan, H S; Carey, J W; Porter, M L; Rougier, E; Karra, S; Kang, Q; Frash, L; Chen, L; Lei, Z; O'Malley, D; Makedonska, N

    2016-10-13

    Despite the impact that hydraulic fracturing has had on the energy sector, the physical mechanisms that control its efficiency and environmental impacts remain poorly understood in part because the length scales involved range from nanometres to kilometres. We characterize flow and transport in shale formations across and between these scales using integrated computational, theoretical and experimental efforts/methods. At the field scale, we use discrete fracture network modelling to simulate production of a hydraulically fractured well from a fracture network that is based on the site characterization of a shale gas reservoir. At the core scale, we use triaxial fracture experiments and a finite-discrete element model to study dynamic fracture/crack propagation in low permeability shale. We use lattice Boltzmann pore-scale simulations and microfluidic experiments in both synthetic and shale rock micromodels to study pore-scale flow and transport phenomena, including multi-phase flow and fluids mixing. A mechanistic description and integration of these multiple scales is required for accurate predictions of production and the eventual optimization of hydrocarbon extraction from unconventional reservoirs. Finally, we discuss the potential of CO2 as an alternative working fluid, both in fracturing and re-stimulating activities, beyond its environmental advantages. This article is part of the themed issue 'Energy and the subsurface'. © 2016 The Author(s).

  10. Understanding hydraulic fracturing: a multi-scale problem

    PubMed Central

    Hyman, J. D.; Jiménez-Martínez, J.; Viswanathan, H. S.; Carey, J. W.; Porter, M. L.; Rougier, E.; Karra, S.; Kang, Q.; Frash, L.; Chen, L.; Lei, Z.; O’Malley, D.; Makedonska, N.

    2016-01-01

    Despite the impact that hydraulic fracturing has had on the energy sector, the physical mechanisms that control its efficiency and environmental impacts remain poorly understood in part because the length scales involved range from nanometres to kilometres. We characterize flow and transport in shale formations across and between these scales using integrated computational, theoretical and experimental efforts/methods. At the field scale, we use discrete fracture network modelling to simulate production of a hydraulically fractured well from a fracture network that is based on the site characterization of a shale gas reservoir. At the core scale, we use triaxial fracture experiments and a finite-discrete element model to study dynamic fracture/crack propagation in low permeability shale. We use lattice Boltzmann pore-scale simulations and microfluidic experiments in both synthetic and shale rock micromodels to study pore-scale flow and transport phenomena, including multi-phase flow and fluids mixing. A mechanistic description and integration of these multiple scales is required for accurate predictions of production and the eventual optimization of hydrocarbon extraction from unconventional reservoirs. Finally, we discuss the potential of CO2 as an alternative working fluid, both in fracturing and re-stimulating activities, beyond its environmental advantages. This article is part of the themed issue ‘Energy and the subsurface’. PMID:27597789

  11. Multidimensional analysis of data obtained in experiments with X-ray emulsion chambers and extensive air showers

    NASA Technical Reports Server (NTRS)

    Chilingaryan, A. A.; Galfayan, S. K.; Zazyan, M. Z.; Dunaevsky, A. M.

    1985-01-01

    Nonparametric statistical methods are used to carry out a quantitative comparison of the model and the experimental data. The same methods enable one to select the events initiated by heavy nuclei and to calculate the fraction of such events. For this purpose it is necessary to have data on artificial events that describe the experiment sufficiently well. At present, the model with small scaling violation in the fragmentation region is the closest to the experiments. Therefore, the treatment of gamma families obtained in the 'Pamir' experiment is currently being carried out with these models.

  12. Downscaling Ocean Conditions: Initial Results using a Quasigeostrophic and Realistic Ocean Model

    NASA Astrophysics Data System (ADS)

    Katavouta, Anna; Thompson, Keith

    2014-05-01

    Previous theoretical work (Henshaw et al, 2003) has shown that the small-scale modes of variability of solutions of the unforced, incompressible Navier-Stokes equation, and Burgers' equation, can be reconstructed with surprisingly high accuracy from the time history of a few of the large-scale modes. Motivated by this theoretical work we first describe a straightforward method for assimilating information on the large scales in order to recover the small-scale oceanic variability. The method is based on nudging in specific wavebands and frequencies and is similar to the so-called spectral nudging method that has been used successfully for atmospheric downscaling with limited area models (e.g. von Storch et al., 2000). The validity of the method is tested using a quasigeostrophic model configured to simulate a double ocean gyre separated by an unstable mid-ocean jet. It is shown that important features of the ocean circulation, including the position of the meandering mid-ocean jet and associated pinch-off eddies, can indeed be recovered from the time history of a small number of large-scale modes. The benefit of assimilating additional time series of observations from a limited number of locations, that alone are too sparse to significantly improve the recovery of the small scales using traditional assimilation techniques, is also demonstrated using several twin experiments. The final part of the study outlines the application of the approach using a realistic high resolution (1/36 degree) model, based on the NEMO (Nucleus for European Modelling of the Ocean) modeling framework, configured for the Scotian Shelf off the east coast of Canada. The large-scale conditions used in this application are obtained from the HYCOM (HYbrid Coordinate Ocean Model) + NCODA (Navy Coupled Ocean Data Assimilation) global 1/12 degree analysis product. Henshaw, W., Kreiss, H.-O., Ystrom, J., 2003. 
Numerical experiments on the interaction between the larger- and the small-scale motion of the Navier-Stokes equations. Multiscale Modeling and Simulation 1, 119-149. von Storch, H., Langenberg, H., Feser, F., 2000. A spectral nudging technique for dynamical downscaling purposes. Monthly Weather Review 128, 3664-3673.
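The spectral nudging idea described above, constraining only the large-scale Fourier modes while the small scales evolve freely, can be sketched in one dimension; the cutoff wavenumber and nudging strength here are illustrative:

```python
import numpy as np

def spectral_nudge(field, reference, k_cut, strength):
    """Nudge only the large-scale (low-wavenumber) Fourier modes of a
    1-D periodic field toward a reference state; small scales are untouched."""
    fh, rh = np.fft.rfft(field), np.fft.rfft(reference)
    k = np.arange(fh.size)
    low = k <= k_cut
    fh[low] += strength * (rh[low] - fh[low])   # strength=1 fully replaces low modes
    return np.fft.irfft(fh, n=field.size)

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
model = np.sin(3 * x) + 0.3 * np.sin(20 * x)    # large-scale + small-scale signal
ref = 2.0 * np.sin(3 * x)                       # "driver" carries large scales only
out = spectral_nudge(model, ref, k_cut=5, strength=1.0)
```

In a limited-area model the nudging term would be applied to the tendencies each time step, with strength well below 1, rather than as a one-shot replacement.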

  13. Preparation for Scaling Studies of Ice-Crystal Icing at the NRC Research Altitude Test Facility

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Bencic, Timothy J.; Tsao, Jen-Ching; Fuleki, Dan; Knezevici, Daniel C.

    2013-01-01

    This paper describes experiments conducted at the National Research Council (NRC) of Canada's Research Altitude Test Facility between March 26 and April 11, 2012. The tests, conducted collaboratively between NASA and NRC, focus on three key aspects in preparation for later scaling work to be conducted with a NACA 0012 airfoil model in the NRC Cascade rig: (1) cloud characterization, (2) scaling model development, and (3) ice-shape profile measurements. Regarding cloud characterization, the experiments focus on particle spectra measurements using two shadowgraphy methods, cloud uniformity via particle scattering from a laser sheet, and characterization of the SEA Multi-Element probe. Overviews of each aspect as well as detailed information on the diagnostic methods are presented. Selected results from the measurements and their interpretation are presented, which will help guide future work.

  14. Multiscale atomistic simulation of metal-oxygen surface interactions: Methodological development, theoretical investigation, and correlation with experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Judith C.

    The purpose of this grant is to develop the multi-scale theoretical methods to describe the nanoscale oxidation of metal thin films, as the PI (Yang) has extensive previous experience in the experimental elucidation of the initial stages of Cu oxidation, primarily by in situ transmission electron microscopy methods. Through the use and development of computational tools at varying length (and time) scales, from atomistic quantum mechanical calculations and force-field mesoscale simulations to large-scale Kinetic Monte Carlo (KMC) modeling, the fundamental underpinnings of the initial stages of Cu oxidation have been elucidated. The development of computational modeling tools allows for accelerated materials discovery. The theoretical tools developed in this program impact a wide range of technologies that depend on surface reactions, including corrosion, catalysis, and nanomaterials fabrication.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Sa V.; Athmer, C.J.; Sheridan, P.W.

    Contamination in low permeability soils poses a significant technical challenge to in-situ remediation efforts. Poor accessibility to the contaminants and difficulty in delivery of treatment reagents have rendered existing in-situ treatments such as bioremediation, vapor extraction, and pump-and-treat rather ineffective when applied to the low permeability soils present at many contaminated sites. This technology is an integrated in-situ treatment in which established geotechnical methods are used to install degradation zones directly in the contaminated soil, and electro-osmosis is utilized to move the contaminants back and forth through those zones until the treatment is completed. This topical report summarizes the results of the lab- and pilot-scale Lasagna™ experiments conducted at Monsanto. Experiments were conducted with kaolinite and an actual Paducah soil in units ranging from bench-scale, containing kg-quantities of soil, to pilot-scale, containing about half a ton of soil, with various treatment zone configurations. The data obtained support the feasibility of scaling up this technology with respect to electrokinetic parameters as well as removal of organic contaminants. A mathematical model was developed that was successful in predicting the temperature rises in the soil. The information and experience gained from these experiments, along with the modeling effort, enabled us to successfully design and operate a larger field experiment at a DOE TCE-contaminated clay site.
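For a sense of the time scales involved in moving contaminants between treatment zones by electro-osmosis, a rough order-of-magnitude estimate using v = k_e · E, with typical textbook values for clay (assumed here, not taken from the report):

```python
# All values are illustrative: a typical electro-osmotic permeability for clay,
# a field strength of ~1 V/cm, and a plausible spacing between degradation zones.
k_eo = 5e-9        # electro-osmotic permeability [m^2 / (V s)]
E_field = 100.0    # applied electric field [V/m]
spacing = 0.5      # distance between adjacent degradation zones [m]

v = k_eo * E_field               # electro-osmotic pore-water velocity [m/s]
t_days = spacing / v / 86400.0   # time for one pass between zones, in days
```

The roughly week-to-weeks transport time per pass is why the process cycles the contaminants back and forth through the zones over extended operation, and why Joule heating over such durations matters enough to model.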

  16. Various Numerical Applications on Tropical Convective Systems Using a Cloud Resolving Model

    NASA Technical Reports Server (NTRS)

    Shie, C.-L.; Tao, W.-K.; Simpson, J.

    2003-01-01

    In recent years, increasing attention has been given to cloud resolving models (CRMs, or cloud ensemble models-CEMs) for their ability to simulate the radiative-convective system, which plays a significant role in determining the regional heat and moisture budgets in the Tropics. The growing popularity of CRM usage can be credited to its inclusion of crucial and physically relatively realistic features such as explicit cloud-scale dynamics, sophisticated microphysical processes, and explicit cloud-radiation interaction. On the other hand, impacts of the environmental conditions (for example, the large-scale wind fields, heat and moisture advections as well as sea surface temperature) on the convective system can also be plausibly investigated using the CRMs with imposed explicit forcing. In this paper, using the Goddard Cumulus Ensemble (GCE) model, three different studies on tropical convective systems are briefly presented. Each of these studies serves a different goal and uses a different approach. The first study, which uses a more idealized approach, examines the respective impacts of the large-scale horizontal wind shear and surface fluxes on the modeled tropical quasi-equilibrium states of temperature and water vapor. In this 2-D study, the imposed large-scale horizontal wind shear is ideally either nudged (wind shear maintained strong) or mixed (wind shear weakened), while the minimum surface wind speed used for computing surface fluxes varies among the numerical experiments. In the second study, a handful of real tropical episodes (TRMM Kwajalein Experiment - KWAJEX, 1999; TRMM South China Sea Monsoon Experiment - SCSMEX, 1998) have been simulated, and several major atmospheric characteristics such as the rainfall amount and its associated stratiform contribution and the Q1/heat and Q2/moisture budgets are investigated. 
In this study, the observed large-scale heat and moisture advections are continuously applied to the 2-D model. The modeled cloud generated from such an approach is termed continuously forced convection or continuous large-scale forced convection. A third study, which focuses on the respective impacts of atmospheric components on upper-ocean heat and salt budgets, is presented at the end. Unlike the two previous 2-D studies, this study employs the 3-D GCE-simulated diabatic source terms (using TOGA COARE observations) - radiation (longwave and shortwave), surface fluxes (sensible and latent heat, and wind stress), and precipitation - as input for the ocean mixed-layer (OML) model.

  17. Seismic and aseismic fault slip in response to fluid injection observed during field experiments at meter scale

    NASA Astrophysics Data System (ADS)

    Cappa, F.; Guglielmi, Y.; De Barros, L.; Wynants-Morel, N.; Duboeuf, L.

    2017-12-01

    During fluid injection, the observation of an enlarging cloud of seismicity is generally explained as a direct response to pore pressure diffusion in a permeable fractured rock. However, fluid injection can also induce large aseismic deformations, which provide an alternative mechanism for triggering and driving seismicity. Despite the importance of these two mechanisms during fluid injection, there are few studies of the effects of fluid pressure on the partitioning between seismic and aseismic motions under controlled field experiments. Here, we describe in-situ meter-scale experiments synchronously measuring the fluid pressure, the fault motions and the seismicity directly in a fault zone stimulated by controlled fluid injection at 280 m depth in carbonate rocks. The experiments were conducted in a gallery of an underground laboratory in the south of France (LSBB, http://lsbb.eu). Thanks to the proximal high-frequency monitoring, our data show that the fluid overpressure mainly induces a dilatant aseismic slip (several tens of microns up to a millimeter) at the injection. Sparse seismicity (-4 < Mw < -3) is observed several meters away from the injection, in a part of the fault zone where the fluid overpressure is null or very low. Using hydromechanical modeling with friction laws, we simulated an experiment and investigated the relative contributions of fluid pressure diffusion and stress transfer to the seismic and aseismic fault behavior. The model reproduces the hydromechanical data measured at the injection, and shows that the aseismic slip induced by fluid injection propagates outside the pressurized zone, where accumulated shear stress develops and potentially triggers seismicity. Our models also show that permeability enhancement and friction evolution are essential to explain the fault slip behavior. 
Our experimental results are consistent with large-scale observations of fault motions at geothermal sites (Wei et al., 2015; Cornet, 2016), and suggest that controlled field experiments at meter scale are important for better assessing the role of fluid pressure in natural and human-induced earthquakes.

  18. Transfer of movement sequences: bigger is better.

    PubMed

    Dean, Noah J; Kovacs, Attila J; Shea, Charles H

    2008-02-01

    Experiment 1 was conducted to determine if proportional transfer from "small to large" scale movements is as effective as transferring from "large to small." We hypothesize that the learning of larger scale movements will require the participant to learn to manage the generation, storage, and dissipation of forces better than when practicing smaller scale movements. Thus, we predict an advantage for transfer of larger scale movements to smaller scale movements relative to transfer from smaller to larger scale movements. Experiment 2 was conducted to determine if adding a load to a smaller scale movement would enhance later transfer to a larger scale movement sequence. It was hypothesized that the added load would require the participants to consider the dynamics of the movement to a greater extent than without the load. The results replicated earlier findings of effective transfer from large to small movements, but consistent with our hypothesis, transfer was less effective from small to large (Experiment 1). However, when a load was added during acquisition, transfer from small to large was enhanced even though the load was removed during the transfer test. These results are consistent with the notion that the transfer asymmetry noted in Experiment 1 was due to factors related to movement dynamics that were enhanced during practice of the larger scale movement sequence, but not during practice of the smaller scale movement sequence. The finding that the movement structure is unaffected by transfer direction while the movement dynamics are influenced by it is consistent with hierarchical models of sequence production.

  19. Cloud/climate sensitivity experiments

    NASA Technical Reports Server (NTRS)

    Roads, J. O.; Vallis, G. K.; Remer, L.

    1982-01-01

    A study of the relationships between large-scale cloud fields and large-scale circulation patterns is presented. The basic tool is a multi-level numerical model comprising conservation equations for temperature, water vapor and cloud water, and appropriate parameterizations for evaporation, condensation, precipitation and radiative feedbacks. Incorporating an equation for cloud water in a large-scale model is somewhat novel and allows the formation and advection of clouds to be treated explicitly. The model is run on a two-dimensional, vertical-horizontal grid with constant winds. It is shown that cloud cover increases with decreased eddy vertical velocity, decreased horizontal advection, decreased atmospheric temperature, increased surface temperature, and decreased precipitation efficiency. The cloud field is found to be well correlated with the relative humidity field except at the highest levels. When radiative feedbacks are incorporated and the temperature is increased by raising the CO2 content, cloud amounts decrease at upper levels, or equivalently, cloud-top height falls. This reduces the temperature response, especially at upper levels, compared with an experiment in which cloud cover is fixed.

  20. The Method of Fundamental Solutions using the Vector Magnetic Dipoles for Calculation of the Magnetic Fields in the Diagnostic Problems Based on Full-Scale Modelling Experiment

    NASA Astrophysics Data System (ADS)

    Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.

    2016-04-01

    The article describes the calculation of magnetic fields in diagnostic problems for technical systems, based on full-scale modelling experiments. Using the gridless method of fundamental solutions and its variants in combination with grid methods (finite differences and finite elements) considerably reduces the dimensionality of the field-calculation task and hence the calculation time. When implementing the method, fictitious magnetic charges are used. In addition, much attention is given to calculation accuracy: errors occur when the distance between the charges is chosen poorly. The authors propose using vector magnetic dipoles to improve the accuracy of magnetic field calculations, and examples of this approach are given. The article presents the results of this research, which allow the authors to recommend this approach within the method of fundamental solutions for full-scale modelling tests of technical systems.
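The method of fundamental solutions places fictitious sources outside the domain and fits their strengths to the boundary data, so no volume grid is needed. A minimal sketch for Laplace's equation on the unit disk using simple monopole charges, i.e. the baseline that the article improves on with vector magnetic dipoles; geometry and boundary data are illustrative:

```python
import numpy as np

# Collocation points on the boundary of the unit disk; fictitious charges
# placed on a larger circle outside the domain.
n_col, n_src = 80, 40
bnd = np.exp(1j * 2 * np.pi * np.arange(n_col) / n_col)      # unit circle
src = 2.0 * np.exp(1j * 2 * np.pi * np.arange(n_src) / n_src)

# Free-space Green's function of the 2-D Laplacian, boundary point vs. source.
G = -np.log(np.abs(bnd[:, None] - src[None, :])) / (2 * np.pi)
u_bnd = bnd.real                      # Dirichlet data u = x; exact interior solution is u = x
q, *_ = np.linalg.lstsq(G, u_bnd, rcond=None)   # charge strengths by least squares

# Evaluate the fitted potential at an interior point and compare with u = x.
z = 0.3 + 0.4j
u = (-np.log(np.abs(z - src)) / (2 * np.pi)) @ q
```

The least-squares system is notoriously ill-conditioned, and accuracy degrades if the charge circle is placed too close to or too far from the boundary; replacing the monopole kernel with dipole kernels is one way to improve this, as the article discusses.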

  1. Experiments with data assimilation in comprehensive air quality models: Impacts on model predictions and observation requirements (Invited)

    NASA Astrophysics Data System (ADS)

    Mathur, R.

    2009-12-01

    Emerging regional scale atmospheric simulation models must address the increasing complexity arising from new model applications that treat multi-pollutant interactions. Sophisticated air quality modeling systems are needed to develop effective abatement strategies that focus on simultaneously controlling multiple criteria pollutants, as well as for use in providing short-term air quality forecasts. In recent years the applications of such models are continuously being extended to address atmospheric pollution phenomena from local to hemispheric spatial scales, over time scales ranging from episodic to annual. The need to represent interactions between physical and chemical atmospheric processes occurring at these disparate spatial and temporal scales requires the use of observational data beyond traditional in-situ networks so that the model simulations can be reasonably constrained. Preliminary applications of assimilation of remote sensing and aloft observations within a comprehensive regional scale atmospheric chemistry-transport modeling system will be presented: (1) A methodology is developed to assimilate MODIS aerosol optical depths in the model to represent the impacts of long-range transport associated with the summer 2004 Alaskan fires on surface-level regional fine particulate matter (PM2.5) concentrations across the Eastern U.S. The episodic impact of this pollution transport event on PM2.5 concentrations over the eastern U.S. during mid-July 2004 is quantified through the complementary use of the model with remotely-sensed, aloft, and surface measurements; (2) Simple nudging experiments with limited aloft measurements are performed to identify uncertainties in model representations of physical processes and assess the potential use of such measurements in improving the predictive capability of atmospheric chemistry-transport models. 
The results from these early applications will be discussed in context of uncertainties in the model and in the remote sensing data and needs for defining a future optimum observing strategy.
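    The nudging experiments mentioned in (2) rely on Newtonian relaxation, in which the model state is pulled toward observations over a chosen relaxation time scale. A minimal sketch under stated assumptions (a scalar state and illustrative dt/tau/observation values, none of them from the study):

```python
# Newtonian nudging sketch: the state is relaxed toward an observed value
# with relaxation time scale tau; where no observation exists, the model
# tendency alone advances the state. All numbers are illustrative.

def nudge_step(state, obs, dt, tau, tendency=0.0):
    """Advance one step; relax `state` toward `obs` when an observation exists."""
    increment = (obs - state) / tau if obs is not None else 0.0
    return state + dt * (tendency + increment)

state = 10.0   # model value aloft (arbitrary units)
obs = 12.0     # aloft measurement to assimilate
for _ in range(100):
    state = nudge_step(state, obs, dt=60.0, tau=3600.0)
# after many steps the state has moved most of the way toward the observation
```

With dt much smaller than tau the increment stays small at each step, which is what keeps nudging from shocking the model dynamics.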

  2. An imperative need for global change research in tropical forests.

    PubMed

    Zhou, Xuhui; Fu, Yuling; Zhou, Lingyan; Li, Bo; Luo, Yiqi

    2013-09-01

    Tropical forests play a crucial role in regulating regional and global climate dynamics, and model projections suggest that rapid climate change may result in forest dieback or savannization. However, these predictions are largely based on results from leaf-level studies. How tropical forests respond and feed back to climate change is largely unknown at the ecosystem level. Several complementary approaches have been used to evaluate the effects of climate change on tropical forests, but the results are conflicting, largely due to the confounding effects of multiple factors. Although altered precipitation and nitrogen deposition experiments have been conducted in tropical forests, large-scale warming and elevated carbon dioxide (CO2) manipulations are completely lacking, leaving many hypotheses and model predictions untested. Ecosystem-scale experiments that manipulate temperature and CO2 concentration individually or in combination are thus urgently needed to examine their main and interactive effects on tropical forests. Such experiments will provide indispensable data and help gain essential knowledge on the biogeochemical, hydrological and biophysical responses and feedbacks of tropical forests to climate change. These datasets can also inform regional and global models for predicting future states of tropical forests and climate systems. The success of such large-scale experiments in natural tropical forests will require an international framework to coordinate collaboration so as to meet the challenges in cost, technological infrastructure and scientific endeavor.

  3. A catchment scale water balance model for FIFE

    NASA Technical Reports Server (NTRS)

    Famiglietti, J. S.; Wood, E. F.; Sivapalan, M.; Thongs, D. J.

    1992-01-01

    A catchment scale water balance model is presented and used to predict evaporation from the King's Creek catchment at the First ISLSCP Field Experiment site on the Konza Prairie, Kansas. The model incorporates spatial variability in topography, soils, and precipitation to compute the land surface hydrologic fluxes. A network of 20 rain gages was employed to measure rainfall across the catchment in the summer of 1987. These data were spatially interpolated and used to drive the model during storm periods. During interstorm periods the model was driven by the estimated potential evaporation, which was calculated using net radiation data collected at site 2. Model-computed evaporation is compared to that observed, both at site 2 (grid location 1916-BRS) and the catchment scale, for the simulation period from June 1 to October 9, 1987.
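    The storm/interstorm forcing logic described above can be caricatured with a single-bucket store; the following sketch is a hypothetical simplification (capacity, initial storage, and forcing series are invented), not the spatially distributed model used in the study:

```python
# Single-bucket water balance sketch: rain fills a soil store, evaporation
# proceeds at the potential rate while water is available, and saturation
# excess leaves as runoff. All values are illustrative (mm per day).

def water_balance(rain, pet, capacity=100.0, storage=50.0):
    """Step daily rain/PET series through a bucket; return totals and final store."""
    runoff = et = 0.0
    for r, pe in zip(rain, pet):
        storage += r
        if storage > capacity:      # saturation excess becomes runoff
            runoff += storage - capacity
            storage = capacity
        e = min(pe, storage)        # evaporate at the potential rate if possible
        et += e
        storage -= e
    return runoff, et, storage

runoff, et, storage = water_balance(rain=[20, 0, 0, 60, 0], pet=[4, 4, 4, 4, 4])
# mass balance: initial storage + total rain == runoff + ET + final storage
```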

  4. Thermochemical conversion of biomass in smouldering combustion across scales: The roles of heterogeneous kinetics, oxygen and transport phenomena.

    PubMed

    Huang, Xinyan; Rein, Guillermo

    2016-05-01

    The thermochemical conversion of biomass in smouldering combustion is investigated here by combining experiments and modeling at two scales: matter (1 mg) and bench (100 g). Emphasis is put on the effect of oxygen (0-33 vol.%) and oxidation reactions because these are poorly studied in the literature in comparison to pyrolysis. The results are obtained for peat as a representative biomass for which high-quality experimental data have been published previously. Three kinetic schemes are explored, including various steps of drying, pyrolysis and oxidation. The kinetic parameters are found using the Kissinger-Genetic Algorithm method, and then implemented in a one-dimensional model of heat and mass transfer. The predictions are validated with thermogravimetric and bench-scale experiments and then analyzed to unravel the role of heterogeneous reactions. This is the first time that the influence of oxygen on biomass smouldering is explained in terms of both chemistry and transport phenomena across scales.

  5. Moisture balance over the Iberian Peninsula computed using a high resolution regional climate model. The impact of 3DVAR data assimilation.

    NASA Astrophysics Data System (ADS)

    González-Rojí, Santos J.; Sáenz, Jon; Ibarra-Berastegi, Gabriel

    2016-04-01

    A numerical downscaling exercise over the Iberian Peninsula has been run nesting the WRF model inside ERA Interim. The Iberian Peninsula has been covered by a 15 km x 15 km grid with 51 vertical levels. Two model configurations have been tested in two experiments spanning the period 2010-2014, after a one-year spin-up (2009). In both cases, the model uses high-resolution daily-varying SST fields and the Noah land surface model. In the first experiment (N), after the model is initialised, boundary conditions drive the model, as usual in numerical downscaling experiments. The second experiment (D) is configured the same way as the N case, but 3DVAR data assimilation is run every six hours (00Z, 06Z, 12Z and 18Z) using observations from the PREPBUFR dataset (NCEP ADP Global Upper Air and Surface Weather Observations) in a 120-minute window around analysis times. For the data assimilation experiment (D), seasonally (monthly) varying background error covariance matrices have been prepared according to the parameterisations used and the mesoscale model domain. For both the N and D runs, the moisture balance has been evaluated over the Iberian Peninsula, both internally, according to the model results, and against observed moisture fields from observational datasets (particularly precipitable water and precipitation). Verification has been performed at both the daily and monthly time scales, and also for ERA Interim, the coarse-scale dataset used to drive the regional model. Results show that the leading terms that must be considered over the area are the tendency in the precipitable water column, the divergence of moisture flux, evaporation (computed from the latent heat flux at the surface) and precipitation. In the case of ERA Interim, the divergence of Qc is also relevant, although still a minor player in the moisture balance.
    Both mesoscale model runs close the moisture balance over the whole Iberian Peninsula more effectively than ERA Interim. The N experiment (no data assimilation) shows a better closure than the D case, as could be expected from its lack of analysis increments. This result is robust at both the daily and monthly time scales. Both ERA Interim and the D experiment produce a negative residual in the balance equation (compatible with excess evaporation or increased convergence of moisture over the Iberian Peninsula). This is a result of the data assimilation process in the D dataset, since in the N experiment the residual is mainly positive. The seasonal cycle of evaporation in the D experiment is much closer to that in ERA Interim than in the N case, with higher evaporation during summer months. However, both regional climate model runs show a lower evaporation rate than ERA Interim, particularly during summer months.
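    The closure being tested can be written as d(PW)/dt + div(Q) = E - P + residual. A sketch with synthetic scalar values (in the study these terms are area averages over the Iberian Peninsula, not single numbers):

```python
# Column moisture budget sketch: the residual of
#   d(PW)/dt + div(Q) - (E - P)
# is zero for a closed budget; a negative residual is consistent with excess
# evaporation or extra moisture convergence. Values below are synthetic.

def moisture_residual(dpw_dt, div_q, evap, precip):
    """Residual of the column moisture budget; zero means closure."""
    return dpw_dt + div_q - (evap - precip)

r_closed = moisture_residual(dpw_dt=0.1, div_q=0.4, evap=2.0, precip=1.5)  # closes
r_neg = moisture_residual(dpw_dt=0.1, div_q=0.2, evap=2.0, precip=1.5)     # negative
```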

  6. High flexibility of DNA on short length scales probed by atomic force microscopy.

    PubMed

    Wiggins, Paul A; van der Heijden, Thijn; Moreno-Herrero, Fernando; Spakowitz, Andrew; Phillips, Rob; Widom, Jonathan; Dekker, Cees; Nelson, Philip C

    2006-11-01

    The mechanics of DNA bending on intermediate length scales (5-100 nm) plays a key role in many cellular processes, and is also important in the fabrication of artificial DNA structures, but previous experimental studies of DNA mechanics have focused on longer length scales than these. We use high-resolution atomic force microscopy on individual DNA molecules to obtain a direct measurement of the bending energy function appropriate for scales down to 5 nm. Our measurements imply that the elastic energy of highly bent DNA conformations is lower than predicted by classical elasticity models such as the worm-like chain (WLC) model. For example, we found that on short length scales, spontaneous large-angle bends are many times more prevalent than predicted by the WLC model. We test our data and model with an interlocking set of consistency checks. Our analysis also shows how our model is compatible with previous experiments, which have sometimes been viewed as confirming the WLC.
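    The harmonic-elasticity baseline the measurements are compared against can be sketched directly: for a segment of contour length l bent by angle theta, the WLC model assigns E/kT = (Lp/2l)·theta², so large-angle bends on short segments are exponentially rare. The persistence length of 50 nm below is the textbook value for double-stranded DNA; everything else is illustrative:

```python
import math

# WLC bending-energy sketch: E/kT = (Lp / (2*l)) * theta**2 for a segment of
# contour length l (nm) bent by angle theta; the Boltzmann weight relative to
# the straight configuration shows how strongly large bends are suppressed.

def wlc_bend_energy_kT(theta_rad, segment_nm, lp_nm=50.0):
    return lp_nm / (2.0 * segment_nm) * theta_rad ** 2

def relative_probability(theta_rad, segment_nm, lp_nm=50.0):
    """Boltzmann weight of a bend relative to the straight configuration."""
    return math.exp(-wlc_bend_energy_kT(theta_rad, segment_nm, lp_nm))

# a 90-degree bend over a 5 nm segment costs roughly 12 kT in the WLC model
p90 = relative_probability(math.radians(90.0), segment_nm=5.0)
```

Observing such bends far more often than this weight predicts is what motivates the sub-elastic behavior reported above.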

  7. An economy of scale system's mensuration of large spacecraft

    NASA Technical Reports Server (NTRS)

    Deryder, L. J.

    1981-01-01

    The systems technology and cost particulars of using multipurpose platforms versus several sizes of bus-type free-flyer spacecraft to accomplish the same space experiment missions are examined. Computer models of these spacecraft bus designs were created to obtain data on size, weight, power, performance, and cost. To answer the question of whether or not large scale does produce economy, the dominant cost factors were determined and the programmatic effect on individual experiment costs was evaluated.

  8. The new climate data record of total and spectral solar irradiance: Current progress and future steps

    NASA Astrophysics Data System (ADS)

    Coddington, Odele; Lean, Judith; Rottman, Gary; Pilewskie, Peter; Snow, Martin; Lindholm, Doug

    2016-04-01

    We present a climate data record of Total Solar Irradiance (TSI) and Solar Spectral Irradiance (SSI), with associated time- and wavelength-dependent uncertainties, from 1610 to the present. The data record was developed jointly by the Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado Boulder and the Naval Research Laboratory (NRL) as part of the National Oceanic and Atmospheric Administration's (NOAA) National Centers for Environmental Information (NCEI) Climate Data Record (CDR) Program, where the data record, source code, and supporting documentation are archived. TSI and SSI are constructed from models that determine the changes from quiet-Sun conditions arising from bright faculae and dark sunspots on the solar disk, using linear regression of proxies of solar magnetic activity against observations from the SOlar Radiation and Climate Experiment (SORCE) Total Irradiance Monitor (TIM), Spectral Irradiance Monitor (SIM), and SOlar Stellar Irradiance Comparison Experiment (SOLSTICE). We show that TSI can be separately modeled to within TIM's measurement accuracy from solar rotational to solar cycle time scales, and we assume that SSI measurements are reliable on solar rotational time scales. We discuss the model formulation, uncertainty estimates, and operational implementation, and present comparisons of the modeled TSI and SSI with the measurement record and with other solar irradiance models. We also discuss ongoing work to assess the sensitivity of the modeled irradiances to model assumptions, namely, the scaling of solar variability from rotational to cycle time scales and the representation of the sunspot darkening index.
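    The regression idea, facular brightening and sunspot darkening proxies fit to an irradiance record by ordinary least squares, can be sketched as follows. The proxy series and coefficients here are synthetic, not the NRL/NOAA CDR inputs:

```python
# Proxy-regression sketch in the spirit of NRLTSI-type models:
# TSI = quiet + a*(facular index) + b*(sunspot darkening index),
# with coefficients fit by ordinary least squares. All data are synthetic.

def solve3(A, b):
    """Gauss-Jordan elimination for a 3x3 linear system (no pivoting)."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = M[i][i]
        M[i] = [x / p for x in M[i]]
        for j in range(3):
            if j != i:
                f = M[j][i]
                M[j] = [xj - f * xi for xj, xi in zip(M[j], M[i])]
    return [M[k][3] for k in range(3)]

def fit_tsi(fac, spot, tsi):
    # normal equations X^T X beta = X^T y for columns [1, fac, spot]
    n = len(tsi)
    sf, ss = sum(fac), sum(spot)
    sff = sum(f * f for f in fac)
    sss = sum(s * s for s in spot)
    sfs = sum(f * s for f, s in zip(fac, spot))
    sy = sum(tsi)
    sfy = sum(f * y for f, y in zip(fac, tsi))
    ssy = sum(s * y for s, y in zip(spot, tsi))
    A = [[n, sf, ss], [sf, sff, sfs], [ss, sfs, sss]]
    return solve3(A, [sy, sfy, ssy])

fac  = [0.0, 1.0, 2.0, 1.5, 0.5]     # synthetic facular brightening index
spot = [0.0, 0.5, 1.5, 2.0, 0.2]     # synthetic sunspot darkening index
tsi  = [1360.5 + 0.8 * f - 0.6 * s for f, s in zip(fac, spot)]
quiet, a, b = fit_tsi(fac, spot, tsi)
```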

  9. Crustal evolution inferred from Apollo magnetic measurements

    NASA Technical Reports Server (NTRS)

    Dyal, P.; Daily, W. D.; Vanyan, L. L.

    1978-01-01

    Magnetic field and solar wind plasma density measurements were analyzed to determine the scale-size characteristics of remanent fields at the Apollo 12, 15, and 16 landing sites. Theoretical model calculations of the field-plasma interaction, involving diffusion of the remanent field into the solar plasma, were compared to the data. The information provided by all these experiments shows that remanent fields over most of the lunar surface are characterized by spatial variations as small as a few kilometers. Large regions (50 to 100 km) of the lunar crust were probably uniformly magnetized during early crustal evolution. Bombardment and subsequent gardening of the upper layers of these magnetized regions left randomly oriented, smaller-scale (5 to 10 km) magnetic sources close to the surface. The larger-scale fields, of magnitude approximately 0.1 gamma, are measured by the orbiting subsatellite experiments, and the smaller-scale remanent fields, of magnitude approximately 100 gammas, are measured by the surface experiments.

  10. Crystal Plasticity Model of Reactor Pressure Vessel Embrittlement in GRIZZLY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Pritam; Biner, Suleyman Bulent; Zhang, Yongfeng

    2015-07-01

    The integrity of reactor pressure vessels (RPVs) is of utmost importance to ensure safe operation of nuclear reactors under extended lifetime. Microstructure-scale models at various length and time scales, coupled concurrently or through homogenization methods, can play a crucial role in understanding and quantifying irradiation-induced defect production, growth and their influence on the mechanical behavior of RPV steels. A multi-scale approach, involving atomistic, meso- and engineering-scale models, is currently being pursued within the GRIZZLY project to understand and quantify irradiation-induced embrittlement of RPV steels. Within this framework, a dislocation-density based crystal plasticity model has been developed in GRIZZLY that captures the effect of irradiation-induced defects on the flow stress behavior; it is presented in this report. The present formulation accounts for the interaction between self-interstitial loops and matrix dislocations. The model predictions have been validated with experiments and dislocation dynamics simulations.

  11. Modeling of impulsive propellant reorientation

    NASA Technical Reports Server (NTRS)

    Hochstein, John I.; Patag, Alfredo E.; Chato, David J.

    1988-01-01

    The impulsive propellant reorientation process is modeled using the Energy Calculations for Liquid Propellants in a Space Environment (ECLIPSE) code. A brief description of the process and the computational model is presented. Code validation is documented via comparison to experimentally derived data for small-scale tanks. Predictions of reorientation performance are presented for two tanks designed for use in flight experiments and for a proposed full-scale OTV tank. A new dimensionless parameter is developed to correlate reorientation performance in geometrically similar tanks. Its success is demonstrated.

  12. Prediction of flyover jet noise spectra from static tests

    NASA Technical Reports Server (NTRS)

    Michel, U.; Michalke, A.

    1981-01-01

    A scaling law is derived for predicting the flyover noise spectra of a single-stream shock-free circular jet from static experiments. The theory is based on the Lighthill approach to jet noise. Density terms are retained to include the effects of jet heating. The influence of flight on the turbulent flow field is considered by an experimentally supported similarity assumption. The resulting scaling laws for the difference between one-third-octave spectra and the overall sound pressure level compare very well with flyover experiments with a jet engine and with wind tunnel experiments with a heated model jet.

  13. Simulating flow around scaled model of a hypersonic vehicle in wind tunnel

    NASA Astrophysics Data System (ADS)

    Markova, T. V.; Aksenov, A. A.; Zhluktov, S. V.; Savitsky, D. V.; Gavrilov, A. D.; Son, E. E.; Prokhorov, A. N.

    2016-11-01

    A prospective hypersonic HEXAFLY aircraft is considered in this paper. In order to obtain the aerodynamic characteristics of a new construction design of the aircraft, experiments with a scaled model have been carried out in a wind tunnel under different conditions. The runs have been performed at different angles of attack, with and without hydrogen combustion in the scaled propulsion engine. However, the measured physical quantities do not provide all the information about the flowfield. Numerical simulation can complement the experimental data as well as reduce the number of wind tunnel experiments. Besides that, reliable CFD software can be used to calculate the aerodynamic characteristics of any possible design of the full-scale aircraft under different operating conditions. The reliability of the numerical predictions must be confirmed in a verification study of the software. The present work is aimed at numerical investigation of the flowfield around and inside the scaled model of the HEXAFLY-CIAM module under wind tunnel conditions. A cold run (without combustion) was selected for this study. The calculations are performed in the FlowVision CFD software. The flow characteristics are compared against the available experimental data. The verification study confirms the capability of the FlowVision CFD software to calculate the flows discussed.

  14. Monthly streamflow forecasting at varying spatial scales in the Rhine basin

    NASA Astrophysics Data System (ADS)

    Schick, Simon; Rössler, Ole; Weingartner, Rolf

    2018-02-01

    Model output statistics (MOS) methods can be used to empirically relate an environmental variable of interest to predictions from earth system models (ESMs). This variable often belongs to a spatial scale not resolved by the ESM. Here, using the linear model fitted by least squares, we regress monthly mean streamflow of the Rhine River at Lobith and Basel against seasonal predictions of precipitation, surface air temperature, and runoff from the European Centre for Medium-Range Weather Forecasts. To address potential effects of a scale mismatch between the ESM's horizontal grid resolution and the hydrological application, the MOS method is further tested in an experiment conducted at the subcatchment scale. This experiment applies the MOS method to 133 additional gauging stations located within the Rhine basin and combines the forecasts from the subcatchments to predict streamflow at Lobith and Basel. In doing so, the MOS method is tested for catchment areas covering 4 orders of magnitude. Using data from the period 1981-2011, the results show that skill, with respect to climatology, is on average restricted to the first month ahead. This result holds both for the predictor combination that mimics the initial conditions and for the predictor combinations that additionally include the dynamical seasonal predictions. The latter, however, reduce the mean absolute error of the former in the range of 5 to 12 %, which is consistently reproduced at the subcatchment scale. An additional experiment conducted for 5-day mean streamflow indicates that the dynamical predictions help to reduce uncertainties up to about 20 days ahead, but it also reveals some shortcomings of the present MOS method.
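    The MOS step, streamflow regressed on an ESM predictor by least squares and scored against a climatology baseline, can be sketched as follows. The data and the single-predictor setup are synthetic simplifications of the multi-predictor regression used in the study:

```python
# MOS sketch: fit monthly streamflow to a seasonal-forecast predictor by
# ordinary least squares, then measure skill as the MAE reduction relative
# to a climatology forecast. All series below are synthetic.

def ols(x, y):
    """Return (intercept, slope) of the least-squares line through (x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return my - slope * mx, slope

def mae(pred, obs):
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

precip = [50, 80, 120, 60, 90, 110, 70, 100]      # predictor (mm/month)
flow   = [0.9 * p + 10 for p in precip]           # perfectly related, for the sketch

a, b = ols(precip, flow)
mos_fc = [a + b * p for p in precip]
clim_fc = [sum(flow) / len(flow)] * len(flow)
skill = 1 - mae(mos_fc, flow) / mae(clim_fc, flow)   # 1 = perfect, 0 = no skill
```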

  15. Large scale and cloud-based multi-model analytics experiments on climate change data in the Earth System Grid Federation

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Płóciennik, Marcin; Doutriaux, Charles; Blanquer, Ignacio; Barbera, Roberto; Donvito, Giacinto; Williams, Dean N.; Anantharaj, Valentine; Salomoni, Davide D.; Aloisio, Giovanni

    2017-04-01

    In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and ecosystems where petabytes (PB) of data can be available and data can be distributed and/or replicated, such as the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5 PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). A case study on climate model intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of large amounts of data (of multi-terabyte order) related to the output of several climate model simulations, as well as the exploitation of scientific data management tools for large-scale data analytics. More specifically, the talk discusses in detail a use case on precipitation trend analysis in terms of requirements, architectural design solution, and infrastructural implementation. The experiment has been tested and validated on CMIP5 datasets, in the context of a large-scale distributed testbed across the EU and US involving three ESGF sites (LLNL, ORNL, and CMCC) and one central orchestrator site (PSNC). The general setting of the case study involves (i) multi-model data analysis intercomparison challenges, (ii) addressed on CMIP5 data, (iii) made available through the IS-ENES/ESGF infrastructure.
    The added value of the solution proposed in the INDIGO-DataCloud project is summarized in the following: (i) it implements a different paradigm (from client- to server-side); (ii) it intrinsically reduces data movement; (iii) it makes the end-user setup lightweight; (iv) it fosters re-usability (of data, final/intermediate products, workflows, sessions, etc.) since everything is managed on the server side; (v) it complements, extends and interoperates with the ESGF stack; (vi) it provides a "tool" for scientists to run multi-model experiments; and, finally, (vii) it can drastically reduce the time-to-solution for these experiments from weeks to hours. At the time of writing, the proposed testbed represents the first concrete implementation of a distributed multi-model experiment in the ESGF/CMIP context joining server-side and parallel processing, end-to-end workflow management and cloud computing. As opposed to the current scenario based on search & discovery, data download, and client-based data analysis, the INDIGO-DataCloud architectural solution described in this contribution addresses the scientific computing and analytics requirements by providing a paradigm shift based on server-side, high-performance big data frameworks jointly with two-level workflow management systems realized at the PaaS level via a cloud infrastructure.

  16. Design and results of the ice sheet model initialisation experiments initMIP-Greenland: an ISMIP6 intercomparison

    NASA Astrophysics Data System (ADS)

    Goelzer, Heiko; Nowicki, Sophie; Edwards, Tamsin; Beckley, Matthew; Abe-Ouchi, Ayako; Aschwanden, Andy; Calov, Reinhard; Gagliardini, Olivier; Gillet-Chaulet, Fabien; Golledge, Nicholas R.; Gregory, Jonathan; Greve, Ralf; Humbert, Angelika; Huybrechts, Philippe; Kennedy, Joseph H.; Larour, Eric; Lipscomb, William H.; Le clec'h, Sébastien; Lee, Victoria; Morlighem, Mathieu; Pattyn, Frank; Payne, Antony J.; Rodehacke, Christian; Rückamp, Martin; Saito, Fuyuki; Schlegel, Nicole; Seroussi, Helene; Shepherd, Andrew; Sun, Sainan; van de Wal, Roderik; Ziemen, Florian A.

    2018-04-01

    Earlier large-scale Greenland ice sheet sea-level projections (e.g. those run during the ice2sea and SeaRISE initiatives) have shown that ice sheet initial conditions have a large effect on the projections and give rise to important uncertainties. The goal of this initMIP-Greenland intercomparison exercise is to compare, evaluate, and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties in modelled mass changes. initMIP-Greenland is the first in a series of ice sheet model intercomparison activities within ISMIP6 (the Ice Sheet Model Intercomparison Project for CMIP6), which is the primary activity within the Coupled Model Intercomparison Project Phase 6 (CMIP6) focusing on the ice sheets. Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of (1) the initial present-day state of the ice sheet and (2) the response in two idealised forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without additional forcing) and in response to a large perturbation (prescribed surface mass balance anomaly); they should not be interpreted as sea-level projections. We present and discuss results that highlight the diversity of data sets, boundary conditions, and initialisation techniques used in the community to generate initial states of the Greenland ice sheet. We find good agreement across the ensemble for the dynamic response to surface mass balance changes in areas where the simulated ice sheets overlap, but differences arise from the initial size of the ice sheet. The model drift in the control experiment is reduced for models that participated in earlier intercomparison exercises.

  19. Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: Scale-awareness and application to single-column model experiments

    DOE PAGES

    Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...

    2015-01-20

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy’s Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  20. Application of Multiple Regression and Design of Experiments for Modelling the Effect of Monoethylene Glycol in the Calcium Carbonate Scaling Process.

    PubMed

    Kartnaller, Vinicius; Venâncio, Fabrício; F do Rosário, Francisca; Cajaiba, João

    2018-04-10

    To avoid gas hydrate formation during oil and gas production, companies usually employ thermodynamic inhibitors consisting of hydroxyl compounds, such as monoethylene glycol (MEG). However, these inhibitors may cause other types of fouling during production, such as inorganic salt deposits (scale). Calcium carbonate is one of the main scaling salts and is a great concern, especially for the new pre-salt wells being explored in Brazil. Hence, it is important to understand how inhibitors used to control gas hydrate formation may interact with the scale formation process. Multiple regression and design of experiments were used to mathematically model the calcium carbonate scaling process and its evolution in the presence of MEG. It was seen that MEG, although inducing precipitation by increasing the supersaturation ratio, actually works as a scale inhibitor for calcium carbonate at concentrations over 40%. This effect was not due to changes in viscosity, as suggested in the literature, but possibly to the binding of MEG to the surface of the CaCO₃ particles. The interaction of the MEG inhibition effect with the system's variables was also assessed; temperature and calcium concentration were found to be the most relevant.
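    The multiple-regression/design-of-experiments approach can be sketched with a coded two-level factorial, where each coefficient of y = b0 + b1·x1 + b2·x2 + b12·x1·x2 is a simple contrast. The factor roles (MEG level, temperature) follow the abstract, but the coded levels and responses below are invented:

```python
# Two-level full-factorial sketch with an interaction term. On coded (+/-1)
# factors the design is orthogonal, so each regression coefficient is just
# an average of signed responses. Responses are synthetic.
from itertools import product

def fit_factorial(runs):
    """Fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2 on coded (+/-1) factors."""
    n = len(runs)
    b0  = sum(y for _, _, y in runs) / n
    b1  = sum(x1 * y for x1, _, y in runs) / n
    b2  = sum(x2 * y for _, x2, y in runs) / n
    b12 = sum(x1 * x2 * y for x1, x2, y in runs) / n
    return b0, b1, b2, b12

# coded factors: x1 = MEG level (low/high), x2 = temperature (low/high)
runs = [(x1, x2, 10 + 4 * x1 - 2 * x2 + 1 * x1 * x2)
        for x1, x2 in product((-1, 1), repeat=2)]
b0, b1, b2, b12 = fit_factorial(runs)
```

Because the design is orthogonal, the fitted coefficients recover the generating model exactly; a significant b12 is what flags an interaction between the factors.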

  1. Improved understanding of geologic CO{sub 2} storage processes requires risk-driven field experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oldenburg, C.M.

    2011-06-01

    The need for risk-driven field experiments for CO{sub 2} geologic storage processes to complement ongoing pilot-scale demonstrations is discussed. These risk-driven field experiments would be aimed at understanding the circumstances under which things can go wrong with a CO{sub 2} capture and storage (CCS) project and cause it to fail, as distinguished from accomplishing this end using demonstration and industrial scale sites. Such risk-driven tests would complement risk-assessment efforts that have already been carried out by providing opportunities to validate risk models. In addition to experimenting with high-risk scenarios, these controlled field experiments could help validate monitoring approaches to improve performance assessment and guide development of mitigation strategies.

  2. Ability of an ensemble of regional climate models to reproduce weather regimes over Europe-Atlantic during the period 1961-2000

    NASA Astrophysics Data System (ADS)

    Sanchez-Gomez, Emilia; Somot, S.; Déqué, M.

    2009-10-01

    One of the main concerns in regional climate modeling is to what extent limited-area regional climate models (RCMs) reproduce the large-scale atmospheric conditions of their driving general circulation model (GCM). In this work we investigate the ability of a multi-model ensemble of regional climate simulations to reproduce the large-scale weather regimes of the driving conditions. The ensemble consists of a set of 13 RCMs on a European domain, driven at their lateral boundaries by the ERA40 reanalysis for the time period 1961-2000. Two sets of experiments have been completed, with horizontal resolutions of 50 and 25 km, respectively. The spectral nudging technique has been applied to one of the models within the ensemble. The RCMs reproduce the weather-regime behavior reasonably well in terms of composite pattern, mean frequency of occurrence, and persistence. The models also simulate the long-term trends and the inter-annual variability of the frequency of occurrence well. However, there is a non-negligible spread among the models, which is stronger in summer than in winter. This spread has two causes: (1) we are dealing with different models, and (2) each RCM produces its own internal variability. As far as the day-to-day weather-regime history is concerned, the ensemble shows large discrepancies. At the daily time scale, the model spread also has a seasonal dependence, being stronger in summer than in winter. Results also show that the spectral nudging technique improves the model performance in reproducing the large-scale circulation of the driving field. In addition, the impact of increasing the number of grid points has been addressed by comparing the 25 and 50 km experiments. We show that the horizontal resolution does not significantly affect the model performance for large-scale circulation.

  3. Eurodelta-Trends, a Multi-Model Experiment of Air Quality Hindcast in Europe over 1990-2010. Experiment Design and Key Findings

    NASA Astrophysics Data System (ADS)

    Colette, A.; Ciarelli, G.; Otero, N.; Theobald, M.; Solberg, S.; Andersson, C.; Couvidat, F.; Manders-Groot, A.; Mar, K. A.; Mircea, M.; Pay, M. T.; Raffort, V.; Tsyro, S.; Cuvelier, K.; Adani, M.; Bessagnet, B.; Bergstrom, R.; Briganti, G.; Cappelletti, A.; D'isidoro, M.; Fagerli, H.; Ojha, N.; Roustan, Y.; Vivanco, M. G.

    2017-12-01

    The Eurodelta-Trends multi-model chemistry-transport experiment has been designed to better understand the evolution of air pollution and its drivers for the period 1990-2010 in Europe. The main objective of the experiment is to assess the efficiency of air pollutant emission mitigation measures in improving regional-scale air quality. The experiment is designed in three tiers with increasing computational demand in order to facilitate the participation of as many modelling teams as possible. The basic experiment consists of simulations for the years 1990, 2000 and 2010. It is complemented by sensitivity analyses for the same three years using various combinations of (i) anthropogenic emissions, (ii) chemical boundary conditions and (iii) meteorology. The most demanding tier consists of two complete time series from 1990 to 2010, simulated using either time-varying emissions for the corresponding years or constant emissions. Eight chemistry-transport models have contributed calculation results to at least one experiment tier, and six models have completed the 21-year trend simulations. The modelling results are publicly available for further use by the scientific community. We assess the skill of the models in capturing observed air pollution trends for the 1990-2010 time period. The average particulate matter relative trends are well captured by the models, even if they display the usual low bias in reproducing absolute levels. Ozone trends are also well reproduced, yet slightly overestimated in the 1990s. The attribution study emphasizes the efficiency of mitigation measures in reducing air pollution over Europe, although a strong impact of long-range transport is pointed out for ozone trends. Meteorological variability is also an important factor in some regions of Europe. The results of the first health and ecosystem impact studies building upon a regional-scale multi-model ensemble over a 20-year time period will also be presented.

  4. Geotechnical Centrifuge Experiments to Evaluate Piping in Foundation Soils

    DTIC Science & Technology

    2014-05-01

    verifiable results. These tests were successful in design, construction, and execution of a realistic simulation of internal erosion leading to failure... possible “scale effects,” “modeling of models” testing protocol should be included in the test program. Also, the model design should minimize the scale... recommendations for improving the centrifuge tests include the following: • Design improved system for reservoir control to provide definitive and

  5. For how long can we predict the weather? - Insights into atmospheric predictability from global convection-allowing simulations

    NASA Astrophysics Data System (ADS)

    Judt, Falko

    2017-04-01

    A tremendous increase in computing power has facilitated the advent of global convection-resolving numerical weather prediction (NWP) models. Although this technological breakthrough allows for the seamless prediction of weather from local to global scales, the predictability of multiscale weather phenomena in these models is not well known. To address this issue, we conducted a global high-resolution (4-km) predictability experiment using the Model for Prediction Across Scales (MPAS), a state-of-the-art global NWP model developed at the National Center for Atmospheric Research. The goals of this experiment are to investigate error growth from convective to planetary scales and to quantify the intrinsic, scale-dependent predictability limits of atmospheric motions. The globally uniform resolution of 4 km allows for the explicit treatment of organized deep moist convection, alleviating grave limitations of previous predictability studies that either used high-resolution limited-area models or global simulations with coarser grids and cumulus parameterization. Error growth is analyzed within the context of an "identical twin" experiment setup: the error is defined as the difference between a 20-day "nature run" and a simulation that was perturbed with small-amplitude noise but is otherwise identical. It is found that in convectively active regions, errors grow by several orders of magnitude within the first 24 h ("super-exponential growth"). The errors then spread to larger scales and, once they contaminate the baroclinic zones after 2-3 days, begin a phase of exponential growth. After 16 days, the globally averaged error saturates, suggesting that the intrinsic limit of atmospheric predictability (in a general sense) is about two weeks, in line with earlier estimates. However, error growth rates differ between the tropics and mid-latitudes as well as between the troposphere and stratosphere, highlighting that atmospheric predictability is a complex problem. The comparatively slower error growth in the tropics and in the stratosphere indicates that certain weather phenomena could potentially have longer predictability than currently thought.
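    The "identical twin" error-growth diagnostic can be illustrated with a low-order chaotic system standing in for the global model; the sketch below uses Lorenz-63 with a fourth-order Runge-Kutta integrator, and all settings are illustrative:

```python
import numpy as np

def rhs(u, s=10.0, r=28.0, b=8.0 / 3.0):
    """Lorenz-63 tendencies, a standard low-order stand-in for chaotic weather."""
    x, y, z = u
    return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

def rk4_step(u, dt=0.01):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Spin the "nature run" onto the attractor.
nature = np.array([1.0, 1.0, 1.0])
for _ in range(2000):
    nature = rk4_step(nature)

# Identical twin: same model, small-amplitude initial perturbation.
twin = nature + np.array([1e-8, 0.0, 0.0])

errors = []
for _ in range(3000):
    nature = rk4_step(nature)
    twin = rk4_step(twin)
    errors.append(float(np.linalg.norm(nature - twin)))

growth = errors[1500] / errors[0]           # rapid early amplification
saturation = float(np.mean(errors[2500:]))  # error levels off at attractor size
```

The error amplifies by orders of magnitude and then saturates near the attractor's size rather than growing without bound, the same qualitative behavior the abstract reports for the global model.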

  6. Advance and application of the stratigraphic simulation model 2D-SedFlux: From tank experiment to geological scale simulation

    NASA Astrophysics Data System (ADS)

    Kubo, Yu'suke; Syvitski, James P. M.; Hutton, Eric W. H.; Paola, Chris

    2005-07-01

    The stratigraphic simulation model 2D-SedFlux is further developed and applied to a turbidite experiment in a subsiding minibasin. The new module dynamically simulates evolving hyperpycnal flows and their interaction with the basin bed. Comparison between the numerical results and the experimental results verifies the ability of 2D-SedFlux to predict the distribution of the sediments and the possible feedback from subsidence. The model was subsequently applied to geological-scale minibasins such as those located in the Gulf of Mexico. Distance from the sediment source is determined to be more influential than sediment entrapment in the upstream minibasin. The results suggest that the efficiency of sediment entrapment by a basin was not influenced by the distance from the sediment source.

  7. Collaborative simulations and experiments for a novel yield model of coal devolatilization in oxy-coal combustion conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iavarone, Salvatore; Smith, Sean T.; Smith, Philip J.

    Oxy-coal combustion is an emerging low-cost “clean coal” technology for emissions reduction and Carbon Capture and Sequestration (CCS). The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of cost-effective oxy-fuel technologies and the minimization of environmental concerns at industrial scale. The coupling of detailed chemistry models and CFD simulations is still challenging, especially for large-scale plants, because of the high computational effort required. The development of scale-bridging models is therefore necessary to find a good compromise between computational effort and physical-chemical modeling precision. This paper presents a procedure for scale-bridging modeling of coal devolatilization, in the presence of experimental error, that puts emphasis on the thermodynamic aspect of devolatilization, namely the final volatile yield of coal, rather than kinetics. The procedure consists of an engineering approach based on dataset consistency and Bayesian methodology including Gaussian-Process Regression (GPR). Experimental data from devolatilization tests carried out in an oxy-coal entrained flow reactor were considered, and CFD simulations of the reactor were performed. Jointly evaluating experiments and simulations, a novel yield model was validated against the data via consistency analysis. In parallel, a Gaussian-Process Regression was performed to improve the understanding of the uncertainty associated with devolatilization, based on the experimental measurements. Potential model forms that could predict yield during devolatilization were obtained. The set of model forms obtained via GPR includes the yield model that was proven to be consistent with the data. Finally, the overall procedure has resulted in a novel yield model for coal devolatilization and in a valuable evaluation of uncertainty in the data, in the model form, and in the model parameters.
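    A minimal Gaussian-Process Regression of volatile yield against a single variable can be sketched with a hand-rolled squared-exponential kernel; the temperatures, yields, noise level, and hyperparameters below are invented for illustration and are not the paper's data:

```python
import numpy as np

def rbf(a, b, ell=100.0, sf=0.1):
    """Squared-exponential kernel (length scale ell in K, signal std sf)."""
    d = a[:, None] - b[None, :]
    return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)

# Hypothetical yield measurements vs. peak particle temperature (K).
T_obs = np.array([1200.0, 1400.0, 1600.0, 1800.0])
y_obs = np.array([0.45, 0.55, 0.62, 0.66])
noise = 0.02                       # assumed experimental error (std)

T_new = np.linspace(1200.0, 1800.0, 7)
y_bar = y_obs.mean()               # center the data before regression
K = rbf(T_obs, T_obs) + noise ** 2 * np.eye(len(T_obs))
Ks = rbf(T_new, T_obs)
alpha = np.linalg.solve(K, y_obs - y_bar)
mean = Ks @ alpha + y_bar                       # posterior mean yield
var = rbf(T_new, T_new).diagonal() - np.einsum(
    "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))   # posterior variance
```

The posterior variance quantifies the data-driven uncertainty in the yield curve, which is the role GPR plays in the procedure described above.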

  8. Collaborative simulations and experiments for a novel yield model of coal devolatilization in oxy-coal combustion conditions

    DOE PAGES

    Iavarone, Salvatore; Smith, Sean T.; Smith, Philip J.; ...

    2017-06-03

    Oxy-coal combustion is an emerging low-cost “clean coal” technology for emissions reduction and Carbon Capture and Sequestration (CCS). The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of cost-effective oxy-fuel technologies and the minimization of environmental concerns at industrial scale. The coupling of detailed chemistry models and CFD simulations is still challenging, especially for large-scale plants, because of the high computational effort required. The development of scale-bridging models is therefore necessary to find a good compromise between computational effort and physical-chemical modeling precision. This paper presents a procedure for scale-bridging modeling of coal devolatilization, in the presence of experimental error, that puts emphasis on the thermodynamic aspect of devolatilization, namely the final volatile yield of coal, rather than kinetics. The procedure consists of an engineering approach based on dataset consistency and Bayesian methodology including Gaussian-Process Regression (GPR). Experimental data from devolatilization tests carried out in an oxy-coal entrained flow reactor were considered, and CFD simulations of the reactor were performed. Jointly evaluating experiments and simulations, a novel yield model was validated against the data via consistency analysis. In parallel, a Gaussian-Process Regression was performed to improve the understanding of the uncertainty associated with devolatilization, based on the experimental measurements. Potential model forms that could predict yield during devolatilization were obtained. The set of model forms obtained via GPR includes the yield model that was proven to be consistent with the data. Finally, the overall procedure has resulted in a novel yield model for coal devolatilization and in a valuable evaluation of uncertainty in the data, in the model form, and in the model parameters.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun

    This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using a variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes developed only near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation, but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.

  10. Analytical Solution for Reactive Solute Transport Considering Incomplete Mixing

    NASA Astrophysics Data System (ADS)

    Bellin, A.; Chiogna, G.

    2013-12-01

    The laboratory experiments of Gramling et al. (2002) showed that incomplete mixing at the pore scale exerts a significant impact on transport of reactive solutes and that assuming complete mixing leads to overestimation of product concentration in bimolecular reactions. We consider here the family of equilibrium reactions for which the concentrations of the reactants and the product can be expressed as a function of the mixing ratio, the concentration of a fictitious nonreactive solute. For this type of reaction we propose, in agreement with previous studies, to model the effect of incomplete mixing at scales smaller than the Darcy scale by assuming that the mixing ratio is distributed within an REV according to a Beta distribution. We compute the parameters of the Beta model by imposing that the mean concentration is equal to the value that the concentration assumes at the continuum Darcy scale, while the variance decays with time as a power law. We show that our model reproduces the concentration profiles of the reaction product measured in the Gramling et al. (2002) experiments using the transport parameters obtained from conservative experiments and instantaneous reaction kinetics. The results are obtained by applying analytical solutions for both conservative and reactive solute transport, thereby providing a method to handle the effect of incomplete mixing on multispecies reactive solute transport that is simpler than other previously developed methods. Gramling, C. M., C. F. Harvey, and L. C. Meigs (2002), Reactive transport in porous media: A comparison of model prediction with laboratory visualization, Environ. Sci. Technol., 36(11), 2508-2514.
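    The Beta closure can be checked numerically for an instantaneous bimolecular reaction: average the product concentration over a Beta-distributed mixing ratio and compare with the complete-mixing value. The concentrations and the decaying variance sequence below are illustrative, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(2)
cA0 = cB0 = 1.0                               # inlet concentrations (arbitrary units)
product = lambda x: np.minimum(cA0 * x, cB0 * (1.0 - x))  # instantaneous A + B -> C

mu = 0.5                                      # Darcy-scale mean mixing ratio
results = []
for var in (0.08, 0.02, 0.001):               # variance decaying in time (power law)
    nu = mu * (1.0 - mu) / var - 1.0          # Beta parameters matched to the
    a, b = mu * nu, (1.0 - mu) * nu           # prescribed mean and variance
    x = rng.beta(a, b, 200_000)               # sub-REV mixing-ratio samples
    results.append(float(product(x).mean()))  # incomplete-mixing product

complete = float(product(mu))                 # complete-mixing prediction
```

As the variance decays, the REV-averaged product approaches the complete-mixing value from below, reproducing the overestimation that complete-mixing models exhibit.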

  11. Final Report, University of California Merced: Uranium and strontium fate in waste-weathered sediments: Scaling of molecular processes to predict reactive transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chorover, Jon; Mueller, Karl; O'Day, Peggy Anne

    2016-06-30

    Objectives of the Project: 1. Determine the process coupling that occurs between mineral transformation and contaminant (U and Sr) speciation in acid-uranium waste weathered Hanford sediments. 2. Establish linkages between molecular-scale contaminant speciation and meso-scale contaminant lability, release and reactive transport. 3. Make conjunctive use of molecular- to bench-scale data to constrain the development of a mechanistic, reactive transport model that includes coupling of contaminant sorption-desorption and mineral transformation reactions. Hypotheses Tested: Uranium and strontium speciation in legacy sediments from the U-8 and U-12 Crib sites can be reproduced in bench-scale weathering experiments conducted on unimpacted Hanford sediments from the same formations; Reactive transport modeling of future uranium and strontium releases from the vadose zone of acid-waste weathered sediments can be effectively constrained by combining molecular-scale information on contaminant bonding environment with grain-scale information on contaminant phase partitioning, and meso-scale kinetic data on contaminant release from the waste-weathered porous media; Although field contamination and laboratory experiments differ in their diagenetic time scales (decades for field vs. months to years for lab), sediment dissolution, neophase nucleation, and crystal growth reactions that occur during the initial disequilibrium induced by waste-sediment interaction leave a strong imprint that persists over subsequent longer-term equilibration time scales and, therefore, give rise to long-term memory effects. Enabling Capabilities Developed: Our team developed an iterative measure-model approach that is broadly applicable to elucidate the mechanistic underpinnings of reactive contaminant transport in geomedia subject to active weathering.

  12. Effects of Precipitation on Ocean Mixed-Layer Temperature and Salinity as Simulated in a 2-D Coupled Ocean-Cloud Resolving Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Li, Xiaofan; Sui, C.-H.; Lau, K-M.; Adamec, D.

    1999-01-01

    A two-dimensional coupled ocean-cloud resolving atmosphere model is used to investigate possible roles of convective-scale ocean disturbances induced by atmospheric precipitation in ocean mixed-layer heat and salt budgets. The model couples a cloud resolving model with an embedded mixed layer-ocean circulation model. Five experiments are performed under imposed large-scale atmospheric forcing, in terms of vertical velocity, derived from the TOGA COARE observations during a selected seven-day period. The dominant variability of mixed-layer temperature and salinity is simulated by the coupled model with imposed large-scale forcing. The mixed-layer temperatures in the coupled experiments with 1-D and 2-D ocean models show similar variations when salinity effects are not included. When salinity effects are included, however, differences in the domain-mean mixed-layer salinity and temperature between coupled experiments with 1-D and 2-D ocean models can be as large as 0.3 PSU and 0.4 °C, respectively. Without fresh-water effects, the nocturnal heat loss over the ocean surface causes deep mixed layers and weak cooling rates, so that the nocturnal mixed-layer temperatures tend to be horizontally uniform. The fresh-water flux, however, causes shallow mixed layers over convective areas while the nocturnal heat loss causes deep mixed layers over convection-free areas, so that the mixed-layer temperatures have large horizontal fluctuations. Furthermore, fresh-water flux exhibits larger spatial fluctuations than surface heat flux because heavy rainfall occurs over convective areas embedded in broad non-convective or clear areas, whereas diurnal signals over the whole model area yield high spatial correlation of surface heat flux. As a result, mixed-layer salinities contribute more to the density differences than do mixed-layer temperatures.

  13. A hysteretic model considering Stribeck effect for small-scale magnetorheological damper

    NASA Astrophysics Data System (ADS)

    Zhao, Yu-Liang; Xu, Zhao-Dong

    2018-06-01

    The magnetorheological (MR) damper is an ideal semi-active control device for vibration suppression. The mechanical properties of this type of device show strong nonlinear characteristics, especially for small-scale dampers. Therefore, developing a model that can accurately describe the nonlinearity of such a device is crucial for control design. In this paper, the dynamic characteristics of a small-scale MR damper developed by our research group are tested, and the Stribeck effect is observed in the low-velocity region. Then, an improved model based on the sigmoid model is proposed to describe the Stribeck effect observed in the experiment. After that, the parameters of this model are identified by genetic algorithms, and the mathematical relationship between these parameters and the input current, excitation frequency and amplitude is regressed. Finally, the predicted forces of the proposed model are validated against the experimental data. The results show that this model can well predict the mechanical properties of the small-scale damper, especially the Stribeck effect in the low-velocity region.
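    An illustrative force-velocity backbone of this kind (not the paper's identified model; all parameters are invented) adds a term that decays with velocity to a sigmoid, producing the characteristic Stribeck dip just after breakaway:

```python
import numpy as np

def damper_force(v, f0=400.0, f_st=150.0, v_s=0.02, v0=0.005, c=800.0):
    """Sigmoid backbone plus a Stribeck term that decays with |v|:
    the force rises sharply at breakaway, dips in the low-velocity
    region, then grows viscously. Parameters are hypothetical."""
    stribeck = f_st * np.exp(-((np.abs(v) / v_s) ** 2))
    return (f0 + stribeck) * np.tanh(v / v0) + c * v

v = np.linspace(0.0, 0.2, 2001)   # piston velocity (m/s), loading branch only
F = damper_force(v)               # damping force (N)
i_peak = int(np.argmax(F[:1000])) # location of the low-velocity force peak
```

The curve shows a local force maximum at low velocity followed by a dip and a viscous rise, which is the Stribeck signature an identified model of this family would be fitted to reproduce.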

  14. Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions revisited and found inadequate

    NASA Astrophysics Data System (ADS)

    Coon, Max; Kwok, Ron; Levy, Gad; Pruis, Matthew; Schreyer, Howard; Sulsky, Deborah

    2007-11-01

    This paper revisits the Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions about pack ice behavior with an eye to modeling sea ice dynamics. The AIDJEX assumptions were that (1) enough leads were present in a 100 km by 100 km region to make the ice isotropic on that scale; (2) the ice had no tensile strength; and (3) the ice behavior could be approximated by an isotropic yield surface. These assumptions were made during the development of the AIDJEX model in the 1970s, and are now found inadequate. The assumptions were made in part because of insufficient large-scale (10 km) deformation and stress data, and in part because of computer capability limitations. Upon reviewing deformation and stress data, it is clear that a model including deformation on discontinuities and an anisotropic failure surface with tension would better describe the behavior of pack ice. A model based on these assumptions is needed to represent the deformation and stress in pack ice on scales from 10 to 100 km, and would need to explicitly resolve discontinuities. Such a model would require a different class of metrics to validate discontinuities against observations.

  15. Evaluation of a scale-model experiment to investigate long-range acoustic propagation

    NASA Technical Reports Server (NTRS)

    Parrott, Tony L.; Mcaninch, Gerry L.; Carlberg, Ingrid A.

    1987-01-01

    Tests were conducted to evaluate the feasibility of using a scale-model experiment situated in an anechoic facility to investigate long-range sound propagation over ground terrain. For a nominal scale factor of 100:1, attenuations along a linear array of six microphones colinear with a continuous-wave type of sound source were measured over a range of 10 to 160 wavelengths for a nominal test frequency of 10 kHz. Most tests were made for a hard model surface (plywood), but limited tests were also made for a soft model surface (plywood with felt). For grazing-incidence propagation over the hard surface, measured and predicted attenuation trends were consistent for microphone locations out to between 40 and 80 wavelengths. Beyond 80 wavelengths, significant variability was observed that was caused by disturbances in the propagation medium. Also, there was evidence of extraneous propagation-path contributions to data irregularities at the more remote microphones. Sensitivity studies for the hard-surface configuration indicated a 2.5 dB change in the relative excess attenuation for a systematic error in source and microphone elevations on the order of 1 mm. For the soft-surface model, no comparable sensitivity was found.

  16. Development and Testing of Neutron Cross Section Covariance Data for SCALE 6.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Williams, Mark L; Wiarda, Dorothea

    2015-01-01

    Neutron cross-section covariance data are essential for many sensitivity/uncertainty and uncertainty quantification assessments performed both within the TSUNAMI suite and more broadly throughout the SCALE code system. The release of ENDF/B-VII.1 included a more complete set of neutron cross-section covariance data: these data form the basis for a new cross-section covariance library to be released in SCALE 6.2. A range of testing is conducted to investigate the properties of these covariance data and ensure that the data are reasonable. These tests include examination of the uncertainty in critical experiment benchmark model keff values due to nuclear data uncertainties, as well as similarity assessments of irradiated pressurized water reactor (PWR) and boiling water reactor (BWR) fuel with suites of critical experiments. The contents of the new covariance library, the testing performed, and the behavior of the new covariance data are described in this paper. The neutron cross-section covariances can be combined with a sensitivity data file generated using the TSUNAMI suite of codes within SCALE to determine the uncertainty in system keff caused by nuclear data uncertainties. The Verified, Archived Library of Inputs and Data (VALID) maintained at Oak Ridge National Laboratory (ORNL) contains over 400 critical experiment benchmark models, and sensitivity data are generated for each of these models. The nuclear data uncertainty in keff is generated for each experiment, and the resulting uncertainties are tabulated and compared to the differences in measured and calculated results. The magnitude of the uncertainty for categories of nuclides (such as actinides, fission products, and structural materials) is calculated for irradiated PWR and BWR fuel to quantify the effect of covariance library changes between the SCALE 6.1 and 6.2 libraries.
    One of the primary applications of sensitivity/uncertainty methods within SCALE is the assessment of similarities between benchmark experiments and safety applications. This is described by a ck value for each experiment with each application. Several studies have analyzed typical ck values for a range of critical experiments compared with hypothetical irradiated fuel applications. The ck value is sensitive to the cross-section covariance data because the contribution of each nuclide is influenced by its uncertainty; large uncertainties indicate more likely bias sources and are thus given more weight. Changes in ck values resulting from different covariance data can be used to examine and assess underlying data changes. These comparisons are performed for PWR and BWR fuel in storage and transportation systems.
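    The propagation of covariance data through sensitivities follows the standard "sandwich rule", and the ck index is the corresponding covariance-weighted correlation between an experiment and an application. A toy three-group sketch, with invented sensitivity profiles and covariance values:

```python
import numpy as np

# Toy 3-group sensitivity profiles (dk/k per unit relative cross-section
# change) for an application and a benchmark experiment; values invented.
S_app = np.array([0.12, -0.05, 0.30])
S_exp = np.array([0.10, -0.04, 0.28])

# Illustrative relative covariance matrix of the underlying nuclear data.
C = np.array([
    [4.0e-4, 1.0e-4, 0.0],
    [1.0e-4, 9.0e-4, 2.0e-4],
    [0.0,    2.0e-4, 2.5e-4],
])

# Sandwich rule: variance of k-eff induced by nuclear data uncertainty.
var_app = S_app @ C @ S_app
var_exp = S_exp @ C @ S_exp

# ck: covariance-weighted correlation between experiment and application;
# values near 1 indicate the experiment shares the application's bias sources.
ck = (S_app @ C @ S_exp) / np.sqrt(var_app * var_exp)
```

Because the two sensitivity profiles are similar, the toy ck comes out close to (but below) 1, which is the regime in which an experiment is considered applicable for bias estimation.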

  17. Latent constructs of the autobiographical memory questionnaire: a recollection-belief model of autobiographical experience.

    PubMed

    Fitzgerald, Joseph M; Broadbridge, Carissa L

    2013-01-01

    Many researchers employ single-item scales of subjective experiences such as imagery and confidence to assess autobiographical memory. We tested the hypothesis that four latent constructs, recollection, belief, impact, and rehearsal, account for the variance in commonly used scales across four different types of autobiographical memory: earliest childhood memory, cue word memory of personal experience, highly vivid memory, and most stressful memory. Participants rated each memory on scales hypothesised to be indicators of one of four latent constructs. Multi-group confirmatory factor analyses and structural analyses confirmed the similarity of the latent constructs of recollection, belief, impact, and rehearsal, as well as the similarity of the structural relationships among those constructs across memory type. The observed pattern of mean differences between the varieties of autobiographical experiences was consistent with prior research and theory in the study of autobiographical memory.

  18. Terry Turbopump Analytical Modeling Efforts in Fiscal Year 2016 - Progress Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Douglas; Ross, Kyle; Cardoni, Jeffrey N

    This document details the Fiscal Year 2016 modeling efforts to define the true operating limitations (margins) of the Terry turbopump systems used in the nuclear industry, in support of Milestone 3 (full-scale component experiments) and Milestone 4 (Terry turbopump basic science experiments). The overall multinational-sponsored program creates the technical basis to: (1) reduce and defer additional utility costs, (2) simplify plant operations, and (3) provide a better understanding of the true margin, which could reduce the overall risk of operations.

  19. Laplacian scale-space behavior of planar curve corners.

    PubMed

    Zhang, Xiaohong; Qu, Ying; Yang, Dan; Wang, Hongxing; Kymer, Jeff

    2015-11-01

    Scale-space behavior of corners is important for developing an efficient corner detection algorithm. In this paper, we analyze the scale-space behavior of the Laplacian of Gaussian (LoG) operator on a planar curve, which constructs the Laplacian Scale Space (LSS). The analytical expression of a Laplacian Scale-Space map (LSS map) is obtained, demonstrating the Laplacian Scale-Space behavior of planar curve corners, based on a newly defined unified corner model. With this formula, several properties of the Laplacian Scale-Space behavior are summarized. Although the LSS demonstrates some similarities to the Curvature Scale Space (CSS), there are still differences. First, no new extreme points are generated in the LSS. Second, the behavior of the different cases of the corner model is consistent and simple. This makes it easy to trace a corner through scale space. Finally, the behavior of the LSS is verified in an experiment on a digital curve.
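    The first property, that coarsening the scale should not create new extrema, can be probed numerically on a tangent-angle representation of a two-corner curve; the corner positions, noise level, and scales below are invented for illustration:

```python
import numpy as np

def log_kernel(sigma):
    """Sampled 1-D second derivative of a Gaussian (LoG along the curve)."""
    t = np.arange(-int(4 * sigma), int(4 * sigma) + 1, dtype=float)
    g = np.exp(-t ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))
    return (t ** 2 / sigma ** 4 - 1.0 / sigma ** 2) * g

def log_response(signal, sigma):
    """LoG filtering with edge padding to avoid boundary artifacts."""
    k = log_kernel(sigma)
    padded = np.pad(signal, len(k) // 2, mode="edge")
    return np.convolve(padded, k, mode="valid")

def count_extrema(x):
    d = np.diff(x)
    return int(np.sum(np.sign(d[:-1]) * np.sign(d[1:]) < 0))

# A two-corner planar curve represented by its tangent-angle signal
# (a step at each corner), plus small noise; geometry is hypothetical.
rng = np.random.default_rng(0)
n = 1200
s = np.arange(n)
theta = (np.where(s < 420, 0.0, np.pi / 3)
         + np.where(s > 840, np.pi / 4, 0.0)
         + 0.05 * rng.standard_normal(n))

counts = [count_extrema(log_response(theta, sig)) for sig in (5, 10, 20, 40)]
```

The extrema count of the LoG response should not increase as the scale grows, consistent with the stated LSS behavior; the corner responses themselves persist across scales.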

  20. Integrated Tokamak modeling: When physics informs engineering and research planning

    NASA Astrophysics Data System (ADS)

    Poli, Francesca Maria

    2018-05-01

    Modeling tokamaks enables a deeper understanding of how to run and control our experiments and how to design stable and reliable reactors. We model tokamaks to understand the nonlinear dynamics of plasmas embedded in magnetic fields and contained by finite size, conducting structures, and the interplay between turbulence, magneto-hydrodynamic instabilities, and wave propagation. This tutorial guides through the components of a tokamak simulator, highlighting how high-fidelity simulations can guide the development of reduced models that can be used to understand how the dynamics at a small scale and short time scales affects macroscopic transport and global stability of plasmas. It discusses the important role that reduced models have in the modeling of an entire plasma discharge from startup to termination, the limits of these models, and how they can be improved. It discusses the important role that efficient workflows have in the coupling between codes, in the validation of models against experiments and in the verification of theoretical models. Finally, it reviews the status of integrated modeling and addresses the gaps and needs towards predictions of future devices and fusion reactors.

  3. The Counseling Competencies Scale: Validation and Refinement

    ERIC Educational Resources Information Center

    Lambie, Glenn W.; Mullen, Patrick R.; Swank, Jacqueline M.; Blount, Ashley

    2018-01-01

    Supervisors evaluated counselors-in-training at multiple points during their practicum experience using the Counseling Competencies Scale (CCS; N = 1,070). The CCS evaluations were randomly split to conduct exploratory factor analysis and confirmatory factor analysis, resulting in a 2-factor model (61.5% of the variance explained).

  4. Simulation of Left Atrial Function Using a Multi-Scale Model of the Cardiovascular System

    PubMed Central

    Pironet, Antoine; Dauby, Pierre C.; Paeme, Sabine; Kosta, Sarah; Chase, J. Geoffrey; Desaive, Thomas

    2013-01-01

    During a full cardiac cycle, the left atrium successively behaves as a reservoir, a conduit and a pump. This complex behavior makes it unrealistic to apply the time-varying elastance theory to characterize the left atrium, first, because this theory has known limitations, and second, because it is still uncertain whether the load independence hypothesis holds. In this study, we aim to bypass this uncertainty by relying on another kind of mathematical model of the cardiac chambers. In the present work, we describe both the left atrium and the left ventricle with a multi-scale model. The multi-scale property of this model comes from the fact that pressure inside a cardiac chamber is derived from a model of sarcomere behavior. Macroscopic model parameters are identified from reference dog hemodynamic data. The multi-scale model of the cardiovascular system including the left atrium is then simulated to show that the physiological roles of the left atrium are correctly reproduced. These include a biphasic pressure wave and an eight-shaped pressure-volume loop. We also test the validity of our model in non-basal conditions by reproducing a preload reduction experiment by inferior vena cava occlusion with the model. We compute the variation of eight indices before and after this experiment and obtain the same variation as experimentally observed for seven out of the eight indices. In summary, the multi-scale mathematical model presented in this work is able to correctly account for the three roles of the left atrium and also exhibits a realistic left atrial pressure-volume loop. Furthermore, the model has been previously presented and validated for the left ventricle. This makes it a proper alternative to the time-varying elastance theory if the focus is set on precisely representing the left atrial and left ventricular behaviors. PMID:23755183

  5. Micromechanics of sea ice gouge in shear zones

    NASA Astrophysics Data System (ADS)

    Sammonds, Peter; Scourfield, Sally; Lishman, Ben

    2015-04-01

    The deformation of sea ice is a key control on Arctic Ocean dynamics. Shear displacement on all scales is an important deformation process in the sea ice cover. Shear deformation is a dominant mechanism from the scale of basin-scale shear lineaments, through floe-floe interaction and block sliding in ice ridges, down to micro-scale mechanics. Shear deformation will depend not only on the speed of movement of ice surfaces but also on the degree to which the surfaces have bonded during thermal consolidation and compaction. Recent observations made during fieldwork in the Barents Sea show that shear produces a gouge similar to a fault gouge in a shear zone in the crust. The gouge exhibits a range of fragment sizes. The consolidation of these fragments has a profound influence on the shear strength and the rate of the processes involved. We review experimental results in sea ice mechanics from mid-scale experiments, conducted in the Hamburg model ship ice tank, simulating sea ice floe motion and interaction, compare these with laboratory experiments on ice friction done in direct shear, and upscale to field measurements of sea ice friction and gouge deformation made during experiments off Svalbard. We find that consolidation, fragmentation and bridging play important roles in the overall dynamics and fit the model of Sammis and Ben-Zion, developed for understanding the micro-mechanics of rock fault gouge, to the sea ice problem.

  6. Experiences with a high-blockage model tested in the NASA Ames 12-foot pressure wind tunnel

    NASA Technical Reports Server (NTRS)

    Coder, D. W.

    1984-01-01

    Representation of the flow around full-scale ships was sought in subsonic wind tunnels in order to attain Reynolds numbers as high as possible. As part of the quest to attain the largest possible Reynolds number, large models with high blockage are used, which results in significant wall interference effects. Some experiences with such a high blockage model tested in the NASA Ames 12-foot pressure wind tunnel are summarized. The main results of the experiment relating to wind tunnel wall interference effects are also presented.

  7. Global Simulations of Dynamo and Magnetorotational Instability in Madison Plasma Experiments and Astrophysical Disks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebrahimi, Fatima

    2014-07-31

    Large-scale magnetic fields have been observed in widely different types of astrophysical objects. These magnetic fields are believed to be caused by the so-called dynamo effect. Could a large-scale magnetic field grow out of turbulence (i.e. the alpha dynamo effect)? How could the topological properties and the complexity of the magnetic field as a global quantity, the so-called magnetic helicity, be important in the dynamo effect? In addition to understanding the dynamo mechanism in astrophysical accretion disks, anomalous angular momentum transport has also been a longstanding problem in accretion disks and laboratory plasmas. To investigate both dynamo and momentum transport, we have performed both numerical modeling of laboratory experiments that are intended to simulate nature and modeling of configurations with direct relevance to astrophysical disks. Our simulations use fluid approximations (Magnetohydrodynamics - MHD model), where plasma is treated as a single fluid, or two fluids, in the presence of electromagnetic forces. Our major physics objective is to study the possibility of magnetic field generation (so-called MRI small-scale and large-scale dynamos) and its role in Magneto-rotational Instability (MRI) saturation through nonlinear simulations in both MHD and Hall regimes.

  8. EPOS-WP16: A Platform for European Multi-scale Laboratories

    NASA Astrophysics Data System (ADS)

    Spiers, Chris; Drury, Martyn; Kan-Parker, Mirjam; Lange, Otto; Willingshofer, Ernst; Funiciello, Francesca; Rosenau, Matthias; Scarlato, Piergiorgio; Sagnotti, Leonardo; WP16 Participants

    2016-04-01

    The participant countries in EPOS embody a wide range of world-class laboratory infrastructures, ranging from high temperature and pressure experimental facilities to electron microscopy, micro-beam analysis, analogue modeling and paleomagnetic laboratories. Most data produced by the various laboratory centres and networks are presently available only in limited "final form" in publications. As such, many data remain inaccessible and/or poorly preserved. However, the data produced at the participating laboratories are crucial to serving society's need for geo-resources exploration and for protection against geo-hazards. Indeed, to model resource formation and system behaviour during exploitation, we need an understanding from the molecular to the continental scale, based on experimental data. This contribution describes the work plans that the laboratory community in Europe is developing in the context of EPOS. The main objectives are:
    - To collect and harmonize available and emerging laboratory data on the properties and processes controlling rock system behaviour at multiple scales, in order to generate products accessible and interoperable through services for supporting research activities.
    - To co-ordinate the development, integration and trans-national usage of the major solid Earth Science laboratory centres and specialist networks. The length scales encompassed by the infrastructures range from the nano- and micrometer levels (electron microscopy and micro-beam analysis) to the scale of experiments on centimetre-sized samples, and to analogue model experiments simulating the reservoir scale, the basin scale and the plate scale.
    - To provide products and services supporting research into Geo-resources and Geo-storage, Geo-hazards and Earth System Evolution.

  9. A practical approach for the scale-up of roller compaction process.

    PubMed

    Shi, Weixian; Sprockel, Omar L

    2016-09-01

    An alternative approach for the scale-up of ribbon formation during roller compaction was investigated, which required only one batch at the commercial scale to set the operational conditions. The scale-up of ribbon formation was based on a probability method. It was sufficient to describe the mechanism of ribbon formation at both scales. In this method, a statistical relationship between roller compaction parameters and ribbon attributes (thickness and density) was first defined with DoE using a pilot Alexanderwerk WP120 roller compactor. While the milling speed was included in the design, it had no practical effect on granule properties within the study range despite its statistical significance. The statistical relationship was then adapted to a commercial Alexanderwerk WP200 roller compactor with one experimental run. The experimental run served as a calibration of the statistical model parameters. The proposed transfer method was then confirmed by conducting a mapping study on the Alexanderwerk WP200 using a factorial DoE, which showed a match between the predictions and the verification experiments. The study demonstrates the applicability of the roller compaction transfer method using the statistical model from the development scale calibrated with a single experimental point at the commercial scale. Copyright © 2016 Elsevier B.V. All rights reserved.
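The one-point transfer described here can be sketched as fitting a response model at the pilot scale, then re-anchoring its intercept with the single commercial-scale run. All numbers below are hypothetical placeholders, not data from the study:

```python
import numpy as np

# Pilot-scale DoE data (hypothetical): ribbon density vs. roll force.
force = np.array([4.0, 6.0, 8.0, 10.0, 12.0])       # kN/cm
density = np.array([0.95, 1.02, 1.08, 1.15, 1.21])  # g/cm^3
slope, intercept = np.polyfit(force, density, 1)

# One calibration run on the commercial press shifts the model so it
# passes through the single large-scale observation.
f_cal, d_cal = 8.0, 1.12
offset = d_cal - (slope * f_cal + intercept)

def predict_commercial(f):
    """Pilot-scale response model re-anchored to the commercial scale."""
    return slope * f + intercept + offset
```

The design choice mirrors the abstract: the pilot-scale DoE supplies the shape of the response, and the single commercial batch calibrates its level.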

  10. The impact of nudging coefficient for the initialization on the atmospheric flow field and the photochemical ozone concentration of Seoul, Korea

    NASA Astrophysics Data System (ADS)

    Choi, Hyun-Jung; Lee, Hwa Woon; Sung, Kyoung-Hee; Kim, Min-Jung; Kim, Yoo-Keun; Jung, Woo-Sik

    To incorporate large- or local-scale circulations correctly in the model, a nudging term is introduced into the equation of motion. Nudging effects should be included properly in the model to reduce uncertainties and improve the simulated air flow field. To improve the meteorological components, the nudging coefficient should exert an adequate influence over the complex study area through the model initialization technique, which is tied to data reliability and error suppression. Several numerical experiments were undertaken to evaluate the effects on air quality modeling by comparing the performance of the meteorological results across experiments with varying nudging coefficients. All experiments were run under their respective upper-wind conditions (synoptic or asynoptic). Consequently, it is important to examine the model response to the nudging of wind and mass information. The MM5-CMAQ model was used to assess the ozone differences in each case during an episode day in Seoul, Korea, and we found large differences in ozone concentration between runs. These results suggest that, for the appropriate simulation of large- or small-scale circulations, nudging with coefficients chosen for the synoptic and asynoptic conditions has a clear advantage over dynamic initialization alone, so appropriate limits on the nudging coefficient values for the prevailing upper-wind conditions are necessary before making an assessment. Statistical verification showed that adequate nudging coefficients for both wind and temperature data had a consistently positive impact on the simulated atmospheric and air quality fields. In cases dominated by large-scale circulation, a large nudging coefficient yields only a minor improvement; however, when small-scale convection is present, a large nudging coefficient produces consistent improvement in the atmospheric and air quality fields.
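The nudging term itself is just a Newtonian relaxation added to the tendency equation, du/dt = f(u) + G(u_obs - u). A toy scalar sketch (the damping term and coefficient values are illustrative assumptions, not the MM5 configuration):

```python
def integrate_with_nudging(u0, u_obs, g_nudge, dt=0.1, steps=200):
    """Toy scalar wind equation du/dt = f(u) + G * (u_obs - u), where
    G (the nudging coefficient, 1/s) relaxes the model state toward
    the observed value u_obs. f stands in for the model dynamics;
    here it is a weak linear damping."""
    f = lambda u: -0.01 * u
    u = u0
    for _ in range(steps):
        u = u + dt * (f(u) + g_nudge * (u_obs - u))
    return u

u_obs = 5.0  # "observed" wind component, m/s
weak = integrate_with_nudging(0.0, u_obs, g_nudge=1e-3)
strong = integrate_with_nudging(0.0, u_obs, g_nudge=1.0)
# A larger coefficient pulls the state much closer to the observation,
# at the cost of overriding the model's own dynamics.
```

This illustrates the trade-off the abstract discusses: the coefficient sets how strongly analyses constrain the simulated flow.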

  11. Comparison of experiment with calculations using curvature-corrected zero and two equation turbulence models for a two-dimensional U-duct

    NASA Astrophysics Data System (ADS)

    Monson, D. J.; Seegmiller, H. L.; McConnaughey, P. K.

    1990-06-01

    In this paper, experimental measurements are compared with Navier-Stokes calculations using seven different turbulence models for the internal flow in a two-dimensional U-duct. The configuration is representative of many internal flows of engineering interest that experience strong curvature. In an effort to improve agreement, this paper tests several versions of the two-equation k-epsilon turbulence model, including the standard version, an extended version with a production-range time scale, and a version that includes curvature time scales. Each is tested in its high and low Reynolds number formulations. Calculations using these new models and the original mixing length model are compared here with measurements of mean and turbulence velocities, static pressure and skin friction in the U-duct at two Reynolds numbers. The comparisons show that only the low Reynolds number version of the extended k-epsilon model does a reasonable job of predicting the important features of this flow at both Reynolds numbers tested.
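For reference, the standard k-epsilon closure that these variants build on obtains an eddy viscosity from the two transported quantities. A minimal sketch (SI units assumed):

```python
def eddy_viscosity(k, eps, c_mu=0.09):
    """Standard k-epsilon eddy viscosity, nu_t = C_mu * k**2 / eps,
    with the conventional closure constant C_mu = 0.09."""
    if eps <= 0.0:
        raise ValueError("dissipation rate must be positive")
    return c_mu * k ** 2 / eps

# e.g. k = 0.5 m^2/s^2, eps = 2.0 m^2/s^3 gives nu_t in m^2/s
nu_t = eddy_viscosity(0.5, 2.0)
```

The curvature-corrected variants discussed in the abstract modify the time scales entering this closure rather than its basic form.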

  12. Insufficiency of avoided crossings for witnessing large-scale quantum coherence in flux qubits

    NASA Astrophysics Data System (ADS)

    Fröwis, Florian; Yadin, Benjamin; Gisin, Nicolas

    2018-04-01

    Do experiments based on superconducting loops segmented with Josephson junctions (e.g., flux qubits) show macroscopic quantum behavior in the sense of Schrödinger's cat example? Various arguments based on microscopic and phenomenological models were recently adduced in this debate. We approach this problem by adapting (to flux qubits) the framework of large-scale quantum coherence, which was already successfully applied to spin ensembles and photonic systems. We show that contemporary experiments might show quantum coherence more than 100 times larger than experiments in the classical regime. However, we argue that the often-used demonstration of an avoided crossing in the energy spectrum is not sufficient to make a conclusion about the presence of large-scale quantum coherence. Alternative, rigorous witnesses are proposed.

  13. EFEDA - European field experiment in a desertification-threatened area

    NASA Technical Reports Server (NTRS)

    Bolle, H.-J.; Andre, J.-C.; Arrue, J. L.; Barth, H. K.; Bessemoulin, P.; Brasa, A.; De Bruin, H. A. R.; Cruces, J.; Dugdale, G.; Engman, E. T.

    1993-01-01

    During June 1991 more than 30 scientific teams worked in Castilla-La Mancha, Spain, studying the energy and water transfer processes between soil, vegetation, and the atmosphere in semiarid conditions within the coordinated European research project EFEDA (European Field Experiment in Desertification-threatened Areas). Measurements were made from the microscale (e.g., measurements on single plants) up to a scale compatible with the grid size of global models. For this purpose three sites were selected 70 km apart and heavily instrumented at a scale in the order of 30 sq km. Aircraft missions, satellite data, and movable equipment were deployed to provide a bridge to the larger scale. This paper gives a description of the experimental design along with some of the preliminary results of this successful experiment.

  14. Modeling the relaxation of internal DNA segments during genome mapping in nanochannels.

    PubMed

    Jain, Aashish; Sheats, Julian; Reifenberger, Jeffrey G; Cao, Han; Dorfman, Kevin D

    2016-09-01

    We have developed a multi-scale model describing the dynamics of internal segments of DNA in nanochannels used for genome mapping. In addition to the channel geometry, the model takes as its inputs the DNA properties in free solution (persistence length, effective width, molecular weight, and segmental hydrodynamic radius) and buffer properties (temperature and viscosity). Using pruned-enriched Rosenbluth simulations of a discrete wormlike chain model with circa 10 base pair resolution and a numerical solution for the hydrodynamic interactions in confinement, we convert these experimentally available inputs into the necessary parameters for a one-dimensional, Rouse-like model of the confined chain. The resulting coarse-grained model resolves the DNA at a length scale of approximately 6 kilobase pairs in the absence of any global hairpin folds, and is readily studied using a normal-mode analysis or Brownian dynamics simulations. The Rouse-like model successfully reproduces both the trends and order of magnitude of the relaxation time of the distance between labeled segments of DNA obtained in experiments. The model also provides insights that are not readily accessible from experiments, such as the role of the molecular weight of the DNA and location of the labeled segments that impact the statistical models used to construct genome maps from data acquired in nanochannels. The multi-scale approach used here, while focused towards a technologically relevant scenario, is readily adapted to other channel sizes and polymers.
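The Rouse-mode structure underlying such a model can be sketched directly; the parameter values below are placeholders, not the paper's fitted inputs:

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def rouse_relaxation_times(N, b, zeta, T, pmax=8):
    """Relaxation times of the Rouse normal modes (continuous-chain
    limit): tau_p = zeta * N**2 * b**2 / (3 * pi**2 * kB * T * p**2),
    so higher modes relax faster as 1/p**2."""
    p = np.arange(1, pmax + 1)
    return zeta * N**2 * b**2 / (3.0 * np.pi**2 * kB * T * p**2)

# Placeholder parameters: N beads, Kuhn length b (m), bead friction
# coefficient zeta (kg/s), temperature T (K).
taus = rouse_relaxation_times(N=100, b=6e-9, zeta=1e-8, T=298)
```

In a normal-mode analysis like the one described, the relaxation of the distance between two labeled segments is then a weighted sum over these mode times.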

  15. Propulsion simulation for magnetically suspended wind tunnel models

    NASA Technical Reports Server (NTRS)

    Joshi, Prakash B.; Beerman, Henry P.; Chen, James; Krech, Robert H.; Lintz, Andrew L.; Rosen, David I.

    1990-01-01

    The feasibility of simulating propulsion-induced aerodynamic effects on scaled aircraft models in wind tunnels employing Magnetic Suspension and Balance Systems. The investigation concerned itself with techniques of generating exhaust jets of appropriate characteristics. The objectives were to: (1) define thrust and mass flow requirements of jets; (2) evaluate techniques for generating propulsive gas within volume limitations imposed by magnetically-suspended models; (3) conduct simple diagnostic experiments for techniques involving new concepts; and (4) recommend experiments for demonstration of propulsion simulation techniques. Various techniques of generating exhaust jets of appropriate characteristics were evaluated on scaled aircraft models in wind tunnels with MSBS. Four concepts of remotely-operated propulsion simulators were examined. Three conceptual designs involving innovative adaptation of convenient technologies (compressed gas cylinders, liquid, and solid propellants) were developed. The fourth innovative concept, namely, the laser-assisted thruster, which can potentially simulate both inlet and exhaust flows, was found to require very high power levels for small thrust levels.

  16. PIC Simulations of Hypersonic Plasma Instabilities

    NASA Astrophysics Data System (ADS)

    Niehoff, D.; Ashour-Abdalla, M.; Niemann, C.; Decyk, V.; Schriver, D.; Clark, E.

    2013-12-01

    The plasma sheaths formed around hypersonic aircraft (Mach number, M > 10) are relatively unexplored and of interest today to both further the development of new technologies and solve long-standing engineering problems. Both laboratory experiments and analytical/numerical modeling are required to advance the understanding of these systems; it is advantageous to perform these tasks in tandem. There has already been some work done to study these plasmas by experiments that create a rapidly expanding plasma through ablation of a target with a laser. In combination with a preformed magnetic field, this configuration leads to a magnetic "bubble" formed behind the front as particles travel at about Mach 30 away from the target. Furthermore, the experiment was able to show the generation of fast electrons which could be due to instabilities on electron scales. To explore this, future experiments will have more accurate diagnostics capable of observing time- and length-scales below typical ion scales, but simulations are a useful tool to explore these plasma conditions theoretically. Particle in Cell (PIC) simulations are necessary when phenomena are expected to be observed at these scales, and also have the advantage of being fully kinetic with no fluid approximations. However, if the scales of the problem are not significantly below the ion scales, then the initialization of the PIC simulation must be very carefully engineered to avoid unnecessary computation and to select the minimum window where structures of interest can be studied. One method of doing this is to seed the simulation with either experiment or ion-scale simulation results. Previous experiments suggest that a useful configuration for studying hypersonic plasma configurations is a ring of particles rapidly expanding transverse to an external magnetic field, which has been simulated on the ion scale with an ion-hybrid code. 
This suggests that the PIC simulation should have an equivalent configuration; however, modeling a plasma expanding radially in every direction is computationally expensive. In order to reduce the computational expense, we use a radial density profile from the hybrid simulation results to seed a self-consistent PIC simulation in one direction (x), while creating a current in the direction (y) transverse to both the drift velocity and the magnetic field (z) to create the magnetic bubble observed in experiment. The simulation will be run in two spatial dimensions but retain three velocity dimensions, and the results will be used to explore the growth of micro-instabilities present in hypersonic plasmas in the high-density region as it moves through the simulation box. This will still require a significantly large box in order to compare with experiment, as the experiments are being performed over distances of 10^4 λDe and durations of 10^5 ωpe^-1.
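Seeding a particle code from a precomputed density profile is commonly done by inverse-CDF sampling. A minimal 1-D sketch (the profile below is a hypothetical stand-in for hybrid-code output, not the actual simulation data):

```python
import numpy as np

rng = np.random.default_rng(1)

def load_particles(x_grid, density, n_particles):
    """Inverse-CDF sampling: draw particle positions whose histogram
    follows a prescribed density profile, a common way to seed a PIC
    simulation from a coarser (e.g. hybrid) result."""
    cdf = np.cumsum(density)
    cdf = cdf / cdf[-1]
    u = rng.random(n_particles)
    return np.interp(u, cdf, x_grid)

x = np.linspace(0.0, 10.0, 256)
profile = np.exp(-((x - 3.0) / 1.0) ** 2)  # hypothetical shell profile
pos = load_particles(x, profile, 100_000)
```

Loading particles this way concentrates computational effort where the hybrid run predicts plasma, instead of filling the whole radial domain uniformly.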

  17. Multi-scale interactions affecting transport, storage, and processing of solutes and sediments in stream corridors (Invited)

    NASA Astrophysics Data System (ADS)

    Harvey, J. W.; Packman, A. I.

    2010-12-01

    Surface water and groundwater flow interact with the channel geomorphology and sediments in ways that determine how material is transported, stored, and transformed in stream corridors. Solute and sediment transport affect important ecological processes such as carbon and nutrient dynamics and stream metabolism, processes that are fundamental to stream health and function. Many individual mechanisms of transport and storage of solute and sediment have been studied, including surface water exchange between the main channel and side pools, hyporheic flow through shallow and deep subsurface flow paths, and sediment transport during both baseflow and floods. A significant challenge arises from non-linear and scale-dependent transport resulting from natural, fractal fluvial topography and associated broad, multi-scale hydrologic interactions. Connections between processes and linkages across scales are not well understood, imposing significant limitations on system predictability. The whole-stream tracer experimental approach is popular because of the spatial averaging of heterogeneous processes; however the tracer results, implemented alone and analyzed using typical models, cannot usually predict transport beyond the very specific conditions of the experiment. Furthermore, the results of whole stream tracer experiments tend to be biased due to unavoidable limitations associated with sampling frequency, measurement sensitivity, and experiment duration. We recommend that whole-stream tracer additions be augmented with hydraulic and topographic measurements and also with additional tracer measurements made directly in storage zones. We present examples of measurements that encompass interactions across spatial and temporal scales and models that are transferable to a wide range of flow and geomorphic conditions. 
These results show how the competitive effects between the different forces driving hyporheic flow, operating at different spatial scales, create a situation where hyporheic fluxes cannot be accurately estimated without considering multi-scale effects. Our modeling captures the dominance of small-scale features such as bedforms that drive the majority of hyporheic flow, but it also captures how hyporheic flow is substantially modified by relatively small changes in streamflow or groundwater flow. The additional field measurements add sensitivity and power to whole-stream tracer additions by improving resolution of the relative importance of storage at different scales (e.g. bar-scale versus bedform-scale). This information is critical in identifying hot spots where important biogeochemical reactions occur. In summary, interpreting multi-scale interactions in streams requires models that are physically based and that incorporate non-linear process dynamics. Such models can take advantage of increasingly comprehensive field data to integrate transport processes across spatially variable flow and geomorphic conditions. The most useful field and modeling approaches will be those that are simple enough to be easily implemented by users from various disciplines but comprehensive enough to produce meaningful predictions for a wide range of flow and geomorphic scenarios. This capability is needed to support improved strategies for protecting stream ecological health in the face of accelerating land use and climate change.

  18. Cluster-cluster clustering

    NASA Technical Reports Server (NTRS)

    Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C. S.

    1985-01-01

    The cluster correlation function ξ_c(r) is compared with the particle correlation function ξ(r) in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, ξ_c and ξ are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of ξ_c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), ξ_c is steeper than ξ, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of ξ_c found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales.
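A pair-count estimator of the kind behind such correlation measurements can be sketched as follows (box size, bin edges, and catalogue sizes are arbitrary illustrative choices, and the "natural" DD/RR estimator is one of several in use):

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_counts(pos, edges):
    """Brute-force histogram of pairwise separations."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)  # count each pair once
    return np.histogram(d[iu], bins=edges)[0].astype(float)

def xi_estimate(data, box, edges, n_random=1600):
    """Natural estimator xi(r) = (DD/RR) * nR(nR-1)/(nD(nD-1)) - 1,
    with RR from a random catalogue in the same box so that geometric
    edge effects cancel in the ratio."""
    rand = rng.uniform(0.0, box, size=(n_random, 3))
    dd = pair_counts(data, edges)
    rr = pair_counts(rand, edges)
    nd, nr = len(data), len(rand)
    return (dd / rr) * (nr * (nr - 1)) / (nd * (nd - 1)) - 1.0

# An unclustered (Poisson) catalogue should give xi(r) consistent with 0.
data = rng.uniform(0.0, 100.0, size=(800, 3))
edges = np.linspace(5.0, 25.0, 5)
xi = xi_estimate(data, 100.0, edges)
```

Applying the same estimator to cluster catalogues versus all particles is what yields the ξ_c versus ξ comparison in the abstract.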

  19. Evaluation of integrated assessment model hindcast experiments: a case study of the GCAM 3.0 land use module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.

    Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable–region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.

  20. Evaluation of integrated assessment model hindcast experiments: a case study of the GCAM 3.0 land use module

    DOE PAGES

    Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.

    2017-11-29

Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable–region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.

  1. Evaluation of integrated assessment model hindcast experiments: a case study of the GCAM 3.0 land use module

    NASA Astrophysics Data System (ADS)

    Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.

    2017-11-01

    Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable-region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. 
The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.
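The deviation-based measures and observation-derived benchmarks described above can be sketched briefly. The specific measure (RMS relative deviation), the interannual-variability benchmark, and all numbers below are illustrative assumptions, not the definitions used in the paper:

```python
import numpy as np

def rms_relative_deviation(model, obs):
    """RMS of (model - obs) / obs over time periods: one deviation-based measure."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.sqrt(np.mean(((model - obs) / obs) ** 2))

def interannual_benchmark(obs):
    """Benchmark from the observations themselves: RMS of period-to-period
    relative change. A hindcast beating this tracks the data better than
    its own observed variability."""
    obs = np.asarray(obs, float)
    return np.sqrt(np.mean(((obs[1:] - obs[:-1]) / obs[:-1]) ** 2))

# Hypothetical cropland areas (Mha) for two regions over five periods.
obs = {"region_A": [100, 104, 103, 108, 110], "region_B": [50, 49, 53, 52, 55]}
mod = {"region_A": [ 98, 105, 101, 110, 113], "region_B": [52, 50, 51, 54, 54]}

for r in obs:
    score = rms_relative_deviation(mod[r], obs[r])
    bench = interannual_benchmark(obs[r])
    print(r, "score=%.3f benchmark=%.3f" % (score, bench),
          "pass" if score < bench else "fail")

# Global aggregate can mask regional errors: regional deviations of opposite
# sign cancel when summed before scoring.
glob_obs = np.sum([obs[r] for r in obs], axis=0)
glob_mod = np.sum([mod[r] for r in obs], axis=0)
print("global score=%.3f" % rms_relative_deviation(glob_mod, glob_obs))
```

Scoring each region separately before any aggregation is what exposes the cancellation that makes global aggregates insufficient for supply-equals-demand models such as GCAM.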

  2. Construction and Setup of a Bench-scale Algal Photosynthetic Bioreactor with Temperature, Light, and pH Monitoring for Kinetic Growth Tests.

    PubMed

    Karam, Amanda L; McMillan, Catherine C; Lai, Yi-Chun; de Los Reyes, Francis L; Sederoff, Heike W; Grunden, Amy M; Ranjithan, Ranji S; Levis, James W; Ducoste, Joel J

    2017-06-14

    The optimal design and operation of photosynthetic bioreactors (PBRs) for microalgal cultivation is essential for improving the environmental and economic performance of microalgae-based biofuel production. Models that estimate microalgal growth under different conditions can help to optimize PBR design and operation. To be effective, the growth parameters used in these models must be accurately determined. Algal growth experiments are often constrained by the dynamic nature of the culture environment, and control systems are needed to accurately determine the kinetic parameters. The first step in setting up a controlled batch experiment is live data acquisition and monitoring. This protocol outlines a process for the assembly and operation of a bench-scale photosynthetic bioreactor that can be used to conduct microalgal growth experiments. This protocol describes how to size and assemble a flat-plate, bench-scale PBR from acrylic. It also details how to configure a PBR with continuous pH, light, and temperature monitoring using a data acquisition and control unit, analog sensors, and open-source data acquisition software.
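As a toy example of the kinetic-parameter estimation that such monitored batch experiments feed, the specific growth rate can be recovered from logged biomass data by a log-linear fit. All numbers below are synthetic, not protocol values:

```python
import numpy as np

# Hypothetical biomass densities (g/L) logged by the PBR's acquisition system
# at hourly intervals during the exponential phase.
t = np.arange(0.0, 6.0)          # time, h
x = 0.05 * np.exp(0.12 * t)      # synthetic data generated with mu = 0.12 1/h

# For exponential growth x(t) = x0 * exp(mu * t), the slope of ln(x) vs t
# is the specific growth rate mu.
mu, ln_x0 = np.polyfit(t, np.log(x), 1)
print("mu = %.3f 1/h, doubling time = %.2f h" % (mu, np.log(2) / mu))
```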

  3. Construction and Setup of a Bench-scale Algal Photosynthetic Bioreactor with Temperature, Light, and pH Monitoring for Kinetic Growth Tests

    PubMed Central

    Karam, Amanda L.; McMillan, Catherine C.; Lai, Yi-Chun; de los Reyes, Francis L.; Sederoff, Heike W.; Grunden, Amy M.; Ranjithan, Ranji S.; Levis, James W.; Ducoste, Joel J.

    2017-01-01

    The optimal design and operation of photosynthetic bioreactors (PBRs) for microalgal cultivation is essential for improving the environmental and economic performance of microalgae-based biofuel production. Models that estimate microalgal growth under different conditions can help to optimize PBR design and operation. To be effective, the growth parameters used in these models must be accurately determined. Algal growth experiments are often constrained by the dynamic nature of the culture environment, and control systems are needed to accurately determine the kinetic parameters. The first step in setting up a controlled batch experiment is live data acquisition and monitoring. This protocol outlines a process for the assembly and operation of a bench-scale photosynthetic bioreactor that can be used to conduct microalgal growth experiments. This protocol describes how to size and assemble a flat-plate, bench-scale PBR from acrylic. It also details how to configure a PBR with continuous pH, light, and temperature monitoring using a data acquisition and control unit, analog sensors, and open-source data acquisition software. PMID:28654054

  4. Limiting majoron self-interactions from gravitational wave experiments

    NASA Astrophysics Data System (ADS)

    Addazi, Andrea; Marcianò, Antonino

    2018-01-01

We show how majoron models may be tested/limited in gravitational wave experiments. In particular, the majoron self-interaction potential may induce a first order phase transition, producing gravitational waves from bubble collisions. We dub such a new scenario the violent majoron model, because it would be associated with a violent phase transition in the early Universe. Sphaleron constraints can be avoided if the global U(1)B-L symmetry is broken at scales lower than the electroweak scale, provided that the B-L spontaneous breaking scale is lower than 10 TeV in order to satisfy the cosmological mass density bound. The possibility of a sub-electroweak phase transition is practically unconstrained by cosmological bounds, and it may be detected within the sensitivity of the next generation of gravitational wave experiments: eLISA, DECIGO and BBO. We also comment on its possible detection in the next generation of electron-positron colliders, where majoron production can be observed from the Higgs portals in missing transverse energy channels. Supported by the Shanghai Municipality, through the grant No. KBH1512299, and by Fudan University, through the grant No. JJH1512105.

  5. Microfluidic Experiments Studying Pore Scale Interactions of Microbes and Geochemistry

    NASA Astrophysics Data System (ADS)

    Chen, M.; Kocar, B. D.

    2016-12-01

Understanding how physical phenomena, chemical reactions, and microbial behavior interact at the pore-scale is crucial to understanding larger scale trends in groundwater chemistry. Recent studies illustrate the utility of microfluidic devices for illuminating pore-scale physical-biogeochemical processes and their control(s) on the cycling of iron, uranium, and other important elements [1-3]. These experimental systems are ideal for examining geochemical reactions mediated by microbes, which include processes governed by complex biological phenomena (e.g. biofilm formation) [4]. We present results of microfluidic experiments using a model metal-reducing bacterium and varying pore geometries, exploring the limitations of the microorganisms' ability to access tight pore spaces, and examining coupled biogeochemical-physical controls on the cycling of redox-sensitive metals. Experimental results will provide an enhanced understanding of coupled physical-biogeochemical processes transpiring at the pore-scale, and will constrain and complement continuum models used to predict and describe the subsurface cycling of redox-sensitive elements [5]. 1. Vrionis, H. A. et al. Microbiological and geochemical heterogeneity in an in situ uranium bioremediation field site. Appl. Environ. Microbiol. 71, 6308-6318 (2005). 2. Pearce, C. I. et al. Pore-scale characterization of biogeochemical controls on iron and uranium speciation under flow conditions. Environ. Sci. Technol. 46, 7992-8000 (2012). 3. Zhang, C., Liu, C. & Shi, Z. Micromodel investigation of transport effect on the kinetics of reductive dissolution of hematite. Environ. Sci. Technol. 47, 4131-4139 (2013). 4. Ginn, T. R. et al. Processes in microbial transport in the natural subsurface. Adv. Water Resour. 25, 1017-1042 (2002). 5. Scheibe, T. D. et al. Coupling a genome-scale metabolic model with a reactive transport model to describe in situ uranium bioremediation. Microb. Biotechnol. 2, 274-286 (2009).

  6. Laser absorption, power transfer, and radiation symmetry during the first shock of inertial confinement fusion gas-filled hohlraum experiments

    NASA Astrophysics Data System (ADS)

    Pak, A.; Dewald, E. L.; Landen, O. L.; Milovich, J.; Strozzi, D. J.; Berzak Hopkins, L. F.; Bradley, D. K.; Divol, L.; Ho, D. D.; MacKinnon, A. J.; Meezan, N. B.; Michel, P.; Moody, J. D.; Moore, A. S.; Schneider, M. B.; Town, R. P. J.; Hsing, W. W.; Edwards, M. J.

    2015-12-01

    Temporally resolved measurements of the hohlraum radiation flux asymmetry incident onto a bismuth coated surrogate capsule have been made over the first two nanoseconds of ignition relevant laser pulses. Specifically, we study the P2 asymmetry of the incoming flux as a function of cone fraction, defined as the inner-to-total laser beam power ratio, for a variety of hohlraums with different scales and gas fills. This work was performed to understand the relevance of recent experiments, conducted in new reduced-scale neopentane gas filled hohlraums, to full scale helium filled ignition targets. Experimental measurements, matched by 3D view factor calculations, are used to infer differences in symmetry, relative beam absorption, and cross beam energy transfer (CBET), employing an analytic model. Despite differences in hohlraum dimensions and gas fill, as well as in laser beam pointing and power, we find that laser absorption, CBET, and the cone fraction, at which a symmetric flux is achieved, are similar to within 25% between experiments conducted in the reduced and full scale hohlraums. This work demonstrates a close surrogacy in the dynamics during the first shock between reduced-scale and full scale implosion experiments and is an important step in enabling the increased rate of study for physics associated with inertial confinement fusion.
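The P2 asymmetry studied in these experiments is the quadrupole Legendre moment of the flux incident on the capsule. A minimal sketch of extracting P2/P0 from a polar flux profile, using an invented profile rather than experimental data:

```python
import numpy as np

def _trapz(y, x):
    """Simple trapezoidal quadrature (kept explicit for portability)."""
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0)

def p2_over_p0(theta, flux):
    """Ratio of the 2nd to 0th Legendre moments of a polar flux profile,
    a_l = (2l + 1)/2 * integral of F(mu) P_l(mu) d(mu), with mu = cos(theta)."""
    mu = np.cos(theta)
    order = np.argsort(mu)                 # integrate over ascending mu
    mu, flux = mu[order], np.asarray(flux)[order]
    p2 = 0.5 * (3.0 * mu ** 2 - 1.0)
    a0 = 0.5 * _trapz(flux, mu)
    a2 = 2.5 * _trapz(flux * p2, mu)
    return a2 / a0

# Hypothetical capsule flux with an imposed 10% pole-hot P2 component.
theta = np.linspace(0.0, np.pi, 721)
flux = 1.0 + 0.1 * 0.5 * (3.0 * np.cos(theta) ** 2 - 1.0)
print("P2/P0 = %.3f" % p2_over_p0(theta, flux))
```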

  7. Scaling of Sediment Dynamics in a Reach-Scale Laboratory Model of a Sand-Bed Stream with Riparian Vegetation

    NASA Astrophysics Data System (ADS)

    Gorrick, S.; Rodriguez, J. F.

    2011-12-01

A movable bed physical model was designed in a laboratory flume to simulate both bed and suspended load transport in a mildly sinuous sand-bed stream. Model simulations investigated the impact of different vegetation arrangements along the outer bank to evaluate rehabilitation options. Preserving similitude in the 1:16 laboratory model was very important. In this presentation the scaling approach, as well as the successes and challenges of the strategy, are outlined. First, a near-bankfull flow event was chosen for laboratory simulation. In nature, bankfull events at the field site deposit new in-channel features but cause only small amounts of bank erosion; thus the fixed banks in the model were not a drastic simplification. Next, and as in other studies, flow velocity and turbulence measurements were collected in separate fixed bed experiments. The scaling of flow in these experiments was maintained simply by matching the Froude number and roughness levels. The subsequent movable bed experiments were then conducted under similar hydrodynamic conditions. In nature, the sand-bed stream is fairly typical; in high flows most sediment transport occurs in suspension and migrating dunes cover the bed. To achieve similar dynamics in the model, equivalent values of the dimensionless bed shear stress and the particle Reynolds number were important. Close values of the two dimensionless numbers were achieved with lightweight sediments (R=0.3), including coal and apricot pips, with a particle size distribution similar to that of the field site. Overall, the movable bed experiments were able to replicate the dominant sediment dynamics present in the stream during a bankfull flow and yielded relevant information for the analysis of the effects of riparian vegetation. One potential conflict in the strategy was that grain roughness was exaggerated with respect to nature; however, the similarity of bedforms and the resulting drag returned overall roughness levels close to those at the field site.
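The two dimensionless numbers named above can be computed directly. This sketch uses hypothetical field and model values (depths, slope, grain sizes) solely to illustrate the similitude calculation with a lightweight sediment (R = 0.3):

```python
import math

G = 9.81       # gravitational acceleration, m/s^2
NU = 1.0e-6    # kinematic viscosity of water, m^2/s

def shear_velocity(h, S):
    """u* = sqrt(g h S) for uniform open-channel flow of depth h and slope S."""
    return math.sqrt(G * h * S)

def shields_number(h, S, R, d):
    """Dimensionless bed shear stress theta = h S / (R d),
    with R = (rho_s - rho) / rho the submerged specific gravity."""
    return h * S / (R * d)

def particle_reynolds(h, S, d):
    """Re* = u* d / nu."""
    return shear_velocity(h, S) * d / NU

# Hypothetical field (sand, R = 1.65) vs 1:16 model (lightweight, R = 0.3).
field = dict(h=1.0,      S=4e-4, R=1.65, d=3.0e-4)
model = dict(h=1.0 / 16, S=4e-4, R=0.30, d=1.5e-4)
for name, p in (("field", field), ("model", model)):
    print(name,
          "theta=%.2f" % shields_number(p["h"], p["S"], p["R"], p["d"]),
          "Re*=%.1f" % particle_reynolds(p["h"], p["S"], p["d"]))
```

In general the reduced scale lowers the particle Reynolds number even when the Shields number is matched via lightweight grains; that trade-off is what the scaling strategy above negotiates.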

  8. Grain-scale investigations of deformation heterogeneities in aluminum alloys

    NASA Astrophysics Data System (ADS)

    Güler, Baran; Şimşek, Ülke; Yalçınkaya, Tuncay; Efe, Mert

    2018-05-01

The anisotropic deformation of aluminum alloys at the micron scale is localized, which has negative implications for the macroscale mechanical and forming behavior. The scope of this work is twofold. First, micro-scale deformation heterogeneities affecting the forming behavior of aluminum alloys are investigated through experimental microstructure analysis at large strains and various strain paths. The effects of initial texture, local grain misorientation, and strain paths on the strain localizations are established. In addition to the uniaxial tension condition, deformation heterogeneities are also investigated under equibiaxial tension to determine the strain path effects on the localization behavior. Second, the morphology and the crystallographic data obtained from the experiments are transferred to Abaqus software in order to predict both the macroscopic response and the microstructure evolution through crystal plasticity finite element simulations. The model parameters are identified through comparison with experiments, and the capability of the model to capture the real material response is discussed as well.

  9. Uranium Hydride Nucleation and Growth Model FY'16 ESC Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, Mary Ann; Richards, Andrew Walter; Holby, Edward F.

    2016-12-20

    Uranium hydride corrosion is of great interest to the nuclear industry. Uranium reacts with water and/or hydrogen to form uranium hydride which adversely affects material performance. Hydride nucleation is influenced by thermal history, mechanical defects, oxide thickness, and chemical defects. Information has been gathered from past hydride experiments to formulate a uranium hydride model to be used in a Canned Subassembly (CSA) lifetime prediction model. This multi-scale computer modeling effort started in FY’13, and the fourth generation model is now complete. Additional high-resolution experiments will be run to further test the model.
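The abstract does not state the model's functional form. As a hedged illustration of nucleation-and-growth kinetics generally, the classical Johnson-Mehl-Avrami-Kolmogorov (JMAK) expression is often used for such transformations; the rate constant and exponent below are invented:

```python
import math

def jmak_fraction(t, k, n):
    """Transformed (e.g. hydrided) fraction under the JMAK form
    X(t) = 1 - exp(-k * t^n); the exponent n reflects the nucleation
    mode and growth dimensionality."""
    return 1.0 - math.exp(-k * t ** n)

# Hypothetical rate constant and exponent; n ~ 3 corresponds to 3-D growth
# from site-saturated nuclei.
k, n = 1e-3, 3.0
for t in (1, 5, 10, 20):   # arbitrary time units
    print("t=%4d  X=%.3f" % (t, jmak_fraction(t, k, n)))
```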

  10. The Full Scale Seal Experiment - A Seal Industrial Prototype for Cigeo - 13106

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lebon, P.; Bosgiraud, J.M.; Foin, R.

    2013-07-01

The Full Scale Seal (FSS) Experiment is one of various experiments implemented by Andra, within the frame of the Cigeo (the French Deep Geological Repository) Project development, to demonstrate the technical construction feasibility and performance of the seals to be constructed at the time of the progressive closure of Repository components (shafts, ramps, drifts, disposal vaults). FSS is built inside a drift model fabricated on the surface for the purpose. Prior to the scale 1:1 seal construction test, various design tasks are scheduled. They include the engineering work on the drift model to make it fit the experimental needs, on the various work sequences anticipated for the swelling clay core emplacement and the concrete containment plugs construction, on the specialized handling tools (and installation equipment) manufactured and delivered for the purpose, and of course on the various swelling clay materials and low pH (below 11) concrete formulations developed for the application. The engineering of the 'seal-as-built' commissioning means (tools and methodology) must also be dealt with. The FSS construction experiment is a technological demonstrator; it is therefore not focused on the phenomenological survey (and, by consequence, on the performance and behaviour forecast). As such, no hydration (forced or natural) is planned. However, the FSS implementation (in particular via the construction and commissioning activities carried out) is a key milestone in view of supporting phenomenological extrapolation in time and scale. The FSS experiment also allows for qualifying the commissioning methods of a real sealing system in the Repository, as built, at the time of industrial operations. (authors)

  11. Bounds on low scale gravity from RICE data and cosmogenic neutrino flux models

    NASA Astrophysics Data System (ADS)

    Hussain, Shahid; McKay, Douglas W.

    2006-03-01

    We explore limits on low scale gravity models set by results from the Radio Ice Cherenkov Experiment's (RICE) ongoing search for cosmic ray neutrinos in the cosmogenic, or GZK, energy range. The bound on M, the fundamental scale of gravity, depends upon cosmogenic flux model, black hole formation and decay treatments, inclusion of graviton mediated elastic neutrino processes, and the number of large extra dimensions, d. Assuming proton-based cosmogenic flux models that cover a broad range of flux possibilities, we find bounds in the interval 0.9 TeV

  12. A test of the hierarchical model of litter decomposition.

    PubMed

    Bradford, Mark A; Veen, G F Ciska; Bonis, Anne; Bradford, Ella M; Classen, Aimee T; Cornelissen, J Hans C; Crowther, Thomas W; De Long, Jonathan R; Freschet, Gregoire T; Kardol, Paul; Manrubia-Freixa, Marta; Maynard, Daniel S; Newman, Gregory S; Logtestijn, Richard S P; Viketoft, Maria; Wardle, David A; Wieder, William R; Wood, Stephen A; van der Putten, Wim H

    2017-12-01

Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls regulating the rate at which plant biomass is decomposed into products such as CO2. Here we test underlying assumptions of the dominant conceptual model of litter decomposition. The model posits that a primary control on the rate of decomposition at regional to global scales is climate (temperature and moisture), with the controlling effects of decomposers negligible at such broad spatial scales. Using a regional-scale litter decomposition experiment at six sites spanning from northern Sweden to southern France, and capturing both within- and among-site variation in putative controls, we find that contrary to predictions from the hierarchical model, decomposer (microbial) biomass strongly regulates decomposition at regional scales. Furthermore, the size of the microbial biomass dictates the absolute change in decomposition rates with changing climate variables. Our findings suggest the need for revision of the hierarchical model, with decomposers acting as both local- and broad-scale controls on litter decomposition rates, necessitating their explicit consideration in global biogeochemical models.
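A minimal sketch of the contrast the study draws: a first-order decay model in which the rate depends on climate and also on a saturating microbial biomass term, so that decomposer biomass scales the climate response. The functional forms and parameter values are invented for illustration, not taken from the paper:

```python
import math

def decay_rate(temp_c, moisture, microbial_biomass, k_ref=0.5, q10=2.0, km=1.0):
    """Hypothetical first-order litter decay rate (1/yr) combining a Q10
    temperature response, a moisture scalar in [0, 1], and a saturating
    (Michaelis-Menten-like) microbial biomass term."""
    f_temp = q10 ** ((temp_c - 10.0) / 10.0)
    f_mic = microbial_biomass / (km + microbial_biomass)
    return k_ref * f_temp * moisture * f_mic

def mass_remaining(t_yr, k):
    """Fraction of litter mass left after t_yr years of first-order decay."""
    return math.exp(-k * t_yr)

# Same climate, different decomposer biomass: the microbial term changes
# both the rate and the absolute sensitivity of the rate to climate.
for m in (0.2, 1.0, 5.0):
    k = decay_rate(temp_c=15.0, moisture=0.8, microbial_biomass=m)
    print("biomass=%.1f  k=%.3f 1/yr  remaining after 1 yr=%.2f"
          % (m, k, mass_remaining(1.0, k)))
```

A climate-only (hierarchical) model is the special case where the microbial term is held constant; the experiment's finding is that it cannot be.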

  13. A Galilean and tensorial invariant k-epsilon model for near wall turbulence

    NASA Technical Reports Server (NTRS)

    Yang, Z.; Shih, T. H.

    1993-01-01

A k-epsilon model is proposed for wall bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation rate equation is reformulated using this time scale and no singularity exists at the wall. A new parameter R = k/(S nu) is introduced to characterize the damping function in the eddy viscosity. This parameter is determined by local properties of both the mean and the turbulent flow fields and is free from any geometry parameter. The proposed model is therefore Galilean and tensorially invariant. The model constants used are the same as in the high Reynolds number standard k-epsilon model; thus, the proposed model is also suitable for flows far from the wall. Turbulent channel flows and turbulent boundary layer flows with and without pressure gradients are calculated. Comparisons with data from direct numerical simulations and experiments show that the model predictions are excellent for turbulent channel flows and turbulent boundary layers with favorable pressure gradients, good for turbulent boundary layers with zero pressure gradient, and fair for turbulent boundary layers with adverse pressure gradients.
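The bounded time scale at the heart of the model can be written as nu_t = C_mu * f_mu * k * T with T = max(k/eps, C_T * sqrt(nu/eps)), the second argument being the Kolmogorov time scale. A sketch, where the constant C_T and the trivial damping function f_mu = 1 are placeholders rather than the paper's calibrated forms:

```python
import math

C_MU = 0.09   # standard k-epsilon constant

def eddy_viscosity(k, eps, nu, c_t=1.0, f_mu=1.0):
    """nu_t = C_mu * f_mu * k * T with the turbulent time scale bounded from
    below by the Kolmogorov time scale sqrt(nu/eps), so no singularity arises
    as k -> 0 at the wall."""
    t_turb = k / eps
    t_kolm = c_t * math.sqrt(nu / eps)
    return C_MU * f_mu * k * max(t_turb, t_kolm)

# Far from the wall, k/eps dominates; very near the wall (k small), the
# Kolmogorov bound takes over.
print(eddy_viscosity(k=1.0, eps=10.0, nu=1e-5))    # bulk flow
print(eddy_viscosity(k=1e-6, eps=10.0, nu=1e-5))   # near wall
```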

  14. On the influences of key modelling constants of large eddy simulations for large-scale compartment fires predictions

    NASA Astrophysics Data System (ADS)

    Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy

    2017-09-01

An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four fully coupled major components: subgrid-scale turbulence, combustion, soot and radiation models. It is designed to simulate the temporal and fluid-dynamical effects of turbulent reacting flow for non-premixed diffusion flames. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m long test hall facility. Turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5, and Smagorinsky constants ranging from 0.18 to 0.23, were investigated. The temperature and flow field predictions were most accurate with the turbulent Prandtl and Schmidt numbers both set to 0.3 and a Smagorinsky constant of 0.2. In addition, by utilising this set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.
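For reference, the subgrid model that the Smagorinsky constant Cs parameterizes, and the role of the turbulent Prandtl/Schmidt number, can be sketched as follows. The filter width and strain-rate magnitude are hypothetical grid/flow numbers, not values from the test hall experiment:

```python
def smagorinsky_viscosity(cs, delta, strain_rate_mag):
    """Subgrid-scale eddy viscosity nu_sgs = (Cs * Delta)^2 * |S|."""
    return (cs * delta) ** 2 * strain_rate_mag

def sgs_diffusivity(nu_sgs, pr_t):
    """Subgrid heat (or species) diffusivity via a turbulent Prandtl
    (or Schmidt) number: alpha_sgs = nu_sgs / Pr_t."""
    return nu_sgs / pr_t

# Cs values span the parametric range studied in the abstract.
delta, s_mag = 0.1, 50.0   # filter width (m) and strain-rate magnitude (1/s)
for cs in (0.18, 0.20, 0.23):
    nu_sgs = smagorinsky_viscosity(cs, delta, s_mag)
    print("Cs=%.2f  nu_sgs=%.4f m^2/s  alpha_sgs(Pr_t=0.3)=%.4f"
          % (cs, nu_sgs, sgs_diffusivity(nu_sgs, 0.3)))
```

Lowering the turbulent Prandtl/Schmidt number at fixed nu_sgs raises the modelled subgrid mixing of heat and species, which is why these constants dominate temperature-field accuracy.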

  15. Standardization of a spinal cord lesion model and neurologic evaluation using mice

    PubMed Central

    Borges, Paulo Alvim; Cristante, Alexandre Fogaça; de Barros-Filho, Tarcísio Eloy Pessoa; Natalino, Renato Jose Mendonça; dos Santos, Gustavo Bispo; Marcon, Raphael Marcus

    2018-01-01

OBJECTIVE: To standardize a spinal cord lesion mouse model. METHODS: Thirty BALB/c mice were divided into five groups: four experimental groups and one control group (sham). The experimental groups were subjected to spinal cord lesion by a weight drop from different heights after laminectomy, whereas the sham group only underwent laminectomy. Mice were observed for six weeks, and functional behavior scales were applied. The mice were then euthanized, and histological investigations were performed to confirm and score the spinal cord lesion. The findings were evaluated to determine whether the method of administering spinal cord lesion was effective and differed among the groups. Additionally, we correlated the results of the functional scales with the results from the histology evaluations to identify which scale is more reliable. RESULTS: One mouse presented autophagia, and six mice died during the experiment. Because four of the mice that died were in Group 5, Group 5 was excluded from the study. All the functional scales assessed proved to be significantly different from each other, and mice presented functional evolution during the experiment. Spinal cord lesion was confirmed by histology, and the results showed a high correlation between the Basso, Beattie, Bresnahan Locomotor Rating Scale and the Basso Mouse Scale. The mouse function scale showed a moderate to high correlation with the histological findings, and the horizontal ladder test had a high correlation with neurologic degeneration but no correlation with the other histological parameters evaluated. CONCLUSION: This spinal cord lesion mouse model proved to be effective and reliable, with the exception of lesions caused by a 10-g drop from 50 mm, which resulted in unacceptable mortality. The Basso, Beattie, Bresnahan Locomotor Rating Scale and the Basso Mouse Scale are the most reliable functional assessments, but the horizontal ladder test is not recommended. PMID:29561931

  16. Laboratory and theoretical models of planetary-scale instabilities and waves

    NASA Technical Reports Server (NTRS)

    Hart, John E.; Toomre, Juri

    1990-01-01

    Meteorologists and planetary astronomers interested in large-scale planetary and solar circulations recognize the importance of rotation and stratification in determining the character of these flows. In the past it has been impossible to accurately model the effects of sphericity on these motions in the laboratory because of the invariant relationship between the uni-directional terrestrial gravity and the rotation axis of an experiment. Researchers studied motions of rotating convecting liquids in spherical shells using electrohydrodynamic polarization forces to generate radial gravity, and hence centrally directed buoyancy forces, in the laboratory. The Geophysical Fluid Flow Cell (GFFC) experiments performed on Spacelab 3 in 1985 were analyzed. Recent efforts at interpretation led to numerical models of rotating convection with an aim to understand the possible generation of zonal banding on Jupiter and the fate of banana cells in rapidly rotating convection as the heating is made strongly supercritical. In addition, efforts to pose baroclinic wave experiments for future space missions using a modified version of the 1985 instrument led to theoretical and numerical models of baroclinic instability. Rather surprising properties were discovered, which may be useful in generating rational (rather than artificially truncated) models for nonlinear baroclinic instability and baroclinic chaos.

  17. Cosmology in Mirror Twin Higgs and neutrino masses

    NASA Astrophysics Data System (ADS)

    Chacko, Zackaria; Craig, Nathaniel; Fox, Patrick J.; Harnik, Roni

    2017-07-01

    We explore a simple solution to the cosmological challenges of the original Mirror Twin Higgs (MTH) model that leads to interesting implications for experiment. We consider theories in which both the standard model and mirror neutrinos acquire masses through the familiar seesaw mechanism, but with a low right-handed neutrino mass scale of order a few GeV. In these νMTH models, the right-handed neutrinos leave the thermal bath while still relativistic. As the universe expands, these particles eventually become nonrelativistic, and come to dominate the energy density of the universe before decaying. Decays to standard model states are preferred, with the result that the visible sector is left at a higher temperature than the twin sector. Consequently the contribution of the twin sector to the radiation density in the early universe is suppressed, allowing the current bounds on this scenario to be satisfied. However, the energy density in twin radiation remains large enough to be discovered in future cosmic microwave background experiments. In addition, the twin neutrinos are significantly heavier than their standard model counterparts, resulting in a sizable contribution to the overall mass density in neutrinos that can be detected in upcoming experiments designed to probe the large scale structure of the universe.

  18. Assessing the value of variational assimilation of streamflow data into distributed hydrologic models for improved streamflow monitoring and prediction at ungauged and gauged locations in the catchment

    NASA Astrophysics Data System (ADS)

    Lee, Hak Su; Seo, Dong-Jun; Liu, Yuqiong; McKee, Paul; Corby, Robert

    2010-05-01

State updating of distributed hydrologic models via assimilation of streamflow data is subject to "overfitting" because the large dimensionality of the model's state space may render the assimilation problem seriously underdetermined. To examine the issue in the context of operational hydrology, we carried out a set of real-world experiments in which we assimilated streamflow data at interior and/or outlet locations into the gridded SAC and kinematic-wave routing models of the U.S. National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM). We used nine basins in the southern plains of the U.S. for the experiments. The experiments consist of selectively assimilating streamflow at different gauge locations, outlet and/or interior, and carrying out both dependent and independent validation. To assess the sensitivity of the quality of assimilation-aided streamflow simulation to the reduced dimensionality of the state space, we carried out data assimilation at spatially semi-distributed or lumped scales and by adjusting biases in precipitation and potential evaporation at a 6-hourly or larger scale. In this talk, we present the results and findings.
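The underdetermination the abstract describes is easy to see in a toy variational problem: one outlet gauge constrains only the sum of two subbasin states, so the analysis can only spread the increment according to the background covariance. This sketch is a generic linear-Gaussian analysis step with invented numbers, not the NWS RDHM assimilation scheme:

```python
import numpy as np

def var_analysis(xb, B, H, R, y):
    """Closed-form minimizer of the variational cost
    J(x) = (x - xb)^T B^-1 (x - xb) + (H x - y)^T R^-1 (H x - y)
    for linear H: x = xb + K (y - H xb), with K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# Two subbasin states observed only through their sum at the outlet
# (one gauge, two unknowns): a seriously underdetermined problem.
xb = np.array([1.0, 3.0])      # background subbasin states
B = np.diag([0.5, 0.5])        # background error covariance
H = np.array([[1.0, 1.0]])     # the outlet observes the sum
R = np.array([[0.1]])          # observation error covariance
y = np.array([5.0])            # observed outlet flow

xa = var_analysis(xb, B, H, R, y)
print(xa)   # with equal background variances, the increment is split equally
```

Adding an interior gauge (a second row in H) is exactly what makes the problem better determined, which is the design question the experiments above probe.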

  19. Computational fluid dynamics analysis of cyclist aerodynamics: performance of different turbulence-modelling and boundary-layer modelling approaches.

    PubMed

    Defraeye, Thijs; Blocken, Bert; Koninckx, Erwin; Hespel, Peter; Carmeliet, Jan

    2010-08-26

    This study aims at assessing the accuracy of computational fluid dynamics (CFD) for applications in sports aerodynamics, for example for drag predictions of swimmers, cyclists or skiers, by evaluating the applied numerical modelling techniques by means of detailed validation experiments. In this study, a wind-tunnel experiment on a 1:2 scale model of a cyclist is presented. In addition to three-component forces and moments, high-resolution surface pressure measurements are performed at 115 locations on the scale model's surface to provide detailed information on the flow field. These data are used to compare the performance of different turbulence-modelling techniques, namely steady Reynolds-averaged Navier-Stokes (RANS) with several k-epsilon and k-omega turbulence models and unsteady large-eddy simulation (LES), as well as of two boundary-layer modelling techniques, wall functions and low-Reynolds-number modelling (LRNM). The commercial CFD code Fluent 6.3 is used for the simulations. The RANS shear-stress transport (SST) k-omega model shows the best overall performance, followed by the more computationally expensive LES. Furthermore, LRNM is clearly preferred over wall functions for modelling the boundary layer. This study shows that there are more accurate alternatives for evaluating flow around bluff bodies with CFD than the standard k-epsilon model combined with wall functions, which is often used in CFD studies in sports.

  20. Measurement of unsteady loading and power output variability in a micro wind farm model in a wind tunnel

    NASA Astrophysics Data System (ADS)

    Bossuyt, Juliaan; Howland, Michael F.; Meneveau, Charles; Meyers, Johan

    2017-01-01

    Unsteady loading and spatiotemporal characteristics of power output are measured in a wind tunnel experiment of a microscale wind farm model with 100 porous disk models. The model wind farm is placed in a scaled turbulent boundary layer, and six different layouts, varied from aligned to staggered, are considered. The measurements are made with a specially designed small-scale porous disk model instrumented with strain gages. The frequency response of the measurements goes up to the natural frequency of the model, which corresponds to a reduced frequency of 0.6 when normalized by the diameter and the mean hub height velocity. The equivalent range of timescales, scaled to field-scale values, is 15 s and longer. The accuracy and limitations of the acquisition technique are documented and verified with hot-wire measurements. The spatiotemporal measurement capabilities of the experimental setup are used to study the cross-correlation in the power output of various porous disk models of wind turbines. A significant correlation is confirmed between streamwise-aligned models, while staggered models show an anti-correlation.
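
    The cross-correlation analysis mentioned above can be sketched as a lagged cross-correlation of two power time series. This is a generic estimator, not the paper's actual processing chain; the signal lengths, lag range and synthetic advection delay are illustrative assumptions:

```python
import numpy as np

def lagged_crosscorr(p_up, p_down, max_lag):
    """Normalized cross-correlation of upstream and downstream power
    signals for lags 0..max_lag (downstream assumed to lag upstream)."""
    a = p_up - p_up.mean()
    b = p_down - p_down.mean()
    denom = a.std() * b.std() * len(a)
    return np.array([np.dot(a[:len(a) - k], b[k:]) / denom
                     for k in range(max_lag + 1)])

# Synthetic example: the downstream signal repeats the upstream
# fluctuations 5 samples later (an advection-like delay).
rng = np.random.default_rng(0)
x = np.convolve(rng.standard_normal(2100), np.ones(5) / 5, mode="valid")
p_up, p_down = x[5:2005], x[:2000]
r = lagged_crosscorr(p_up, p_down, max_lag=20)
lag_peak = int(np.argmax(r))
```

    For streamwise-aligned turbines the correlation peaks at a positive lag set by the advection time; an anti-correlation, as reported for staggered layouts, would show up as a negative extremum instead.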

  1. Chaotic Lagrangian models for turbulent relative dispersion.

    PubMed

    Lacorata, Guglielmo; Vulpiani, Angelo

    2017-04-01

    A deterministic multiscale dynamical system is introduced and discussed as a prototype model for relative dispersion in stationary, homogeneous, and isotropic turbulence. Unlike stochastic diffusion models, here trajectory transport and mixing properties are entirely controlled by Lagrangian chaos. The anomalous "sweeping effect," a known drawback common to kinematic simulations, is removed through the use of quasi-Lagrangian coordinates. Lagrangian dispersion statistics of the model are accurately analyzed by computing the finite-scale Lyapunov exponent (FSLE), which is the optimal measure of the scaling properties of dispersion. FSLE scaling exponents provide a severe test to decide whether model simulations are in agreement with theoretical expectations and/or observation. The results of our numerical experiments cover a wide range of "Reynolds numbers" and show that chaotic deterministic flows can be very efficient, and numerically low-cost, models of turbulent trajectories in stationary, homogeneous, and isotropic conditions. The mathematics of the model is relatively simple, and, in a geophysical context, potential applications may regard small-scale parametrization issues in general circulation models, mixed layer, and/or boundary layer turbulence models as well as Lagrangian predictability studies.
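
    The FSLE named above is defined as λ(δ) = ln r / ⟨τ(δ)⟩, where τ(δ) is the time for a pair separation to grow from δ to rδ. A minimal sketch of this estimator, applied here to synthetic exponentially separating pairs rather than to the paper's chaotic model:

```python
import numpy as np

def fsle(separations, times, thresholds, r=np.sqrt(2)):
    """Estimate lambda(delta) = ln(r)/<tau(delta)> for each threshold.
    `separations` has shape (n_pairs, n_times); thresholds must lie
    within the sampled range of separations."""
    lam = []
    for delta in thresholds:
        taus = []
        for sep in separations:                    # one pair at a time
            reached = sep >= r * delta
            if not reached.any():
                continue                           # scale never reached
            t0 = times[np.argmax(sep >= delta)]    # first crossing of delta
            t1 = times[np.argmax(reached)]         # first crossing of r*delta
            taus.append(t1 - t0)
        lam.append(np.log(r) / np.mean(taus))
    return np.array(lam)

# Pairs separating exponentially at rate 0.5 (a stand-in for a chaotic
# flow): the FSLE should recover ~0.5 at both scales.
t = np.linspace(0.0, 20.0, 4001)
sep = 1e-3 * np.exp(0.5 * t) * np.ones((50, 1))
lam = fsle(sep, t, thresholds=[1e-2, 1e-1])
```

    For a real turbulence model the FSLE flattens at a scale-independent value in the chaotic (small-scale) range and decays at larger scales, which is what the scaling-exponent test in the abstract exploits.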

  2. Chaotic Lagrangian models for turbulent relative dispersion

    NASA Astrophysics Data System (ADS)

    Lacorata, Guglielmo; Vulpiani, Angelo

    2017-04-01

    A deterministic multiscale dynamical system is introduced and discussed as a prototype model for relative dispersion in stationary, homogeneous, and isotropic turbulence. Unlike stochastic diffusion models, here trajectory transport and mixing properties are entirely controlled by Lagrangian chaos. The anomalous "sweeping effect," a known drawback common to kinematic simulations, is removed through the use of quasi-Lagrangian coordinates. Lagrangian dispersion statistics of the model are accurately analyzed by computing the finite-scale Lyapunov exponent (FSLE), which is the optimal measure of the scaling properties of dispersion. FSLE scaling exponents provide a severe test to decide whether model simulations are in agreement with theoretical expectations and/or observation. The results of our numerical experiments cover a wide range of "Reynolds numbers" and show that chaotic deterministic flows can be very efficient, and numerically low-cost, models of turbulent trajectories in stationary, homogeneous, and isotropic conditions. The mathematics of the model is relatively simple, and, in a geophysical context, potential applications may regard small-scale parametrization issues in general circulation models, mixed layer, and/or boundary layer turbulence models as well as Lagrangian predictability studies.

  3. Late Cretaceous climate simulations with different CO2 levels and subarctic gateway configurations: A model-data comparison

    NASA Astrophysics Data System (ADS)

    Niezgodzki, Igor; Knorr, Gregor; Lohmann, Gerrit; Tyszka, Jarosław; Markwick, Paul J.

    2017-09-01

    We investigate the impact of different CO2 levels and different subarctic gateway configurations on surface temperatures during the latest Cretaceous using the Earth System Model COSMOS. The simulated temperatures are compared with surface temperature reconstructions based on a recent compilation of latest Cretaceous proxies. In our numerical experiments, the CO2 level ranges from 1 to 6 times the preindustrial (PI) CO2 level of 280 ppm. On a global scale, the most reasonable match between model and proxy data is obtained for the experiments with 3 to 5 × PI CO2 concentrations. However, the simulated low- (high-) latitude temperatures are too high (low) compared to the proxy data. The moderate CO2 level scenarios might be more realistic if we take into account proxy data and the dead-zone effect criterion. Furthermore, we test whether the model-data discrepancies can be caused by overly simplistic proxy-data interpretations. This is most distinct at high latitudes, where most proxies are biased toward summer temperatures. Additional sensitivity experiments with different ocean gateway configurations and a constant CO2 level indicate only minor surface temperature changes (< 1°C) on a global scale, with higher values (up to 8°C) on a regional scale. These findings imply that modeled and reconstructed temperature gradients are to a large degree only qualitatively comparable, posing challenges for the interpretation of proxy data and/or model sensitivity. With respect to the latter, our results suggest that an assessment of greenhouse worlds is best constrained by temperatures in the midlatitudes.

  4. Multi-scale computational modeling of developmental biology.

    PubMed

    Setty, Yaki

    2012-08-01

    Normal development of multicellular organisms is regulated by a highly complex process in which a set of precursor cells proliferate, differentiate and move, forming over time a functioning tissue. To handle their complexity, developmental systems can be studied over distinct scales. The dynamics of each scale is determined by the collective activity of entities at the scale below it. I describe a multi-scale computational approach for modeling developmental systems and detail the methodology through a synthetic example of a developmental system that retains key features of real developmental systems. I discuss the simulation of the system as it emerges from cross-scale and intra-scale interactions and describe how an in silico study can be carried out by modifying these interactions in a way that mimics in vivo experiments. I highlight biological features of the results through a comparison with findings in Caenorhabditis elegans germline development, and finally discuss applications of the approach to real developmental systems and propose future extensions. The source code of the model of the synthetic developmental system can be found at www.wisdom.weizmann.ac.il/~yaki/MultiScaleModel. Contact: yaki.setty@gmail.com. Supplementary data are available at Bioinformatics online.

  5. Hands-On Exercise in Environmental Structural Geology Using a Fracture Block Model.

    ERIC Educational Resources Information Center

    Gates, Alexander E.

    2001-01-01

    Describes the use of a scale analog model of an actual fractured rock reservoir to replace paper copies of fracture maps in the structural geology curriculum. Discusses the merits of the model in enabling students to gain experience performing standard structural analyses. (DDR)

  6. Stratospheric controlled perturbation experiment: a small-scale experiment to improve understanding of the risks of solar geoengineering.

    PubMed

    Dykema, John A; Keith, David W; Anderson, James G; Weisenstein, Debra

    2014-12-28

    Although solar radiation management (SRM) through stratospheric aerosol methods has the potential to mitigate impacts of climate change, our current knowledge of stratospheric processes suggests that these methods may entail significant risks. In addition to the risks associated with current knowledge, the possibility of 'unknown unknowns' exists that could significantly alter the risk assessment relative to our current understanding. While laboratory experimentation can improve the current state of knowledge and atmospheric models can assess large-scale climate response, they cannot capture possible unknown chemistry or represent the full range of interactive atmospheric chemical physics. Small-scale, in situ experimentation under well-regulated circumstances can begin to remove some of these uncertainties. This experiment, provisionally titled the stratospheric controlled perturbation experiment, is under development and will only proceed with transparent and predominantly governmental funding and independent risk assessment. We describe the scientific and technical foundation for performing, under external oversight, small-scale experiments to quantify the risks posed by SRM to activation of halogen species and subsequent erosion of stratospheric ozone. The paper's scope includes selection of the measurement platform, relevant aspects of stratospheric meteorology, operational considerations and instrument design and engineering.

  7. Cloud computing and validation of expandable in silico livers

    PubMed Central

    2010-01-01

Background In Silico Livers (ISLs) are works in progress. They are used to challenge multilevel, multi-attribute, mechanistic hypotheses about the hepatic disposition of xenobiotics coupled with hepatic responses. To enhance ISL-to-liver mappings, we added discrete-time metabolism, biliary elimination, and bolus dosing features to a previously validated ISL and initiated re-validation experiments that required scaling to more simulated lobules than previously used, more than could be achieved with the local cluster technology. Rather than dramatically increasing the size of our local cluster, we undertook the re-validation experiments using the Amazon EC2 cloud platform. Doing so required demonstrating the efficacy of scaling a simulation to use more cluster nodes and assessing the scientific equivalence of local cluster validation experiments with those executed on the cloud platform. Results The local cluster technology was duplicated in the Amazon EC2 cloud platform. Synthetic modeling protocols were followed to identify a successful parameterization. Experiment sample sizes (number of simulated lobules) on both platforms were 49, 70, 84, and 152 (cloud only). Experimental indistinguishability was demonstrated for ISL outflow profiles of diltiazem using both platforms for experiments consisting of 84 or more samples. The process was analogous to demonstrating results equivalency from two different wet-labs. Conclusions The results provide additional evidence that disposition simulations using ISLs can cover the behavior space of liver experiments in distinct experimental contexts (there is in silico-to-wet-lab phenotype similarity). The scientific value of experimenting with multiscale biomedical models has been limited to research groups with access to computer clusters.
The availability of cloud technology coupled with the evidence of scientific equivalency has lowered the barrier and will greatly facilitate model sharing as well as provide straightforward tools for scaling simulations to encompass greater detail with no extra investment in hardware. PMID:21129207

  8. Atmospheric-like rotating annulus experiment: gravity wave emission from baroclinic jets

    NASA Astrophysics Data System (ADS)

    Rodda, Costanza; Borcia, Ion; Harlander, Uwe

    2017-04-01

    Large-scale balanced flows can spontaneously radiate meso-scale inertia-gravity waves (IGWs) and are thus in fact unbalanced. While flow-dependent parameterizations for the radiation of IGWs from orographic and convective sources do exist, the situation is less developed for spontaneously emitted IGWs. Observations identify increased IGW activity in the vicinity of jet exit regions. A direct interpretation of those based on geostrophic adjustment might be tempting. However, directly applying this concept to the parameterization of spontaneous imbalance is difficult since the dynamics itself is continuously re-establishing an unbalanced flow which then sheds imbalances by GW radiation. Examining spontaneous IGW emission in the atmosphere and validating parameterization schemes confronts the scientist with particular challenges. Due to its extreme complexity, GW emission will always be embedded in the interaction of a multitude of interdependent processes, many of which are hardly detectable from analysis or campaign data. The benefits of repeated and more detailed measurements, while representing the only source of information about the real atmosphere, are limited by the non-repeatability of an atmospheric situation. The same event never occurs twice. This argues for complementary laboratory experiments, which can provide a more focused dialogue between experiment and theory. Indeed, life cycles are also examined in rotating-annulus laboratory experiments. Thus, these experiments might form a useful empirical benchmark for theoretical and modelling work that is also independent of any sort of subgrid model. In addition, the more direct correspondence between experimental and model data and the data reproducibility makes lab experiments a powerful testbed for parameterizations. Joint laboratory experiments and numerical simulations have been conducted.
The comparison between the data obtained from the experiment and the numerical simulations shows very good agreement for the large-scale baroclinic wave regime. Moreover, in both cases a clear signal of horizontal divergence, embedded in the baroclinic wave front, appears, suggesting IGW emission.

  9. Earthquake source properties from instrumented laboratory stick-slip

    USGS Publications Warehouse

    Kilgore, Brian D.; McGarr, Arthur F.; Beeler, Nicholas M.; Lockner, David A.; Thomas, Marion Y.; Mitchell, Thomas M.; Bhat, Harsha S.

    2017-01-01

    Stick-slip experiments were performed to determine the influence of the testing apparatus on source properties, develop methods to relate stick-slip to natural earthquakes, and examine the hypothesis of McGarr [2012] that the product of stiffness, k, and slip duration, Δt, is scale-independent and of the same order as for earthquakes. The experiments use the double-direct shear geometry, Sierra White granite at 2 MPa normal stress, and a remote slip rate of 0.2 µm/s. To determine apparatus effects, disc springs were added to the loading column to vary k. Duration, slip, slip rate, and stress drop decrease with increasing k, consistent with a spring-block slider model. However, neither for the data nor the model is kΔt constant; this results from varying stiffness at fixed scale. In contrast, additional analysis of laboratory stick-slip studies from a range of standard testing apparatuses is consistent with McGarr's hypothesis: kΔt is scale-independent, similar to that of earthquakes, equivalent to the ratio of static stress drop to average slip velocity, and similar to the ratio of shear modulus to wavespeed of rock. These properties result from conducting experiments over a range of sample sizes, using rock samples with the same elastic properties as the Earth, and scale-independent design practices.
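
    The spring-block comparison can be illustrated with the textbook constant-friction-drop slider (a generic sketch, not the authors' analysis; the mass, stiffness and friction values are made up). During slip the block is a harmonic oscillator about its new equilibrium, so an event lasts half a period, Δt = π√(m/k), with total slip 2ΔF/k. Both decrease with increasing k, while kΔt = π√(mk) grows, so kΔt cannot be constant when only the stiffness is varied at fixed scale:

```python
import math

def spring_block_event(m, k, dF):
    """One slip event of an inertial spring-block slider: block mass m,
    spring stiffness k, friction force drop dF (static minus dynamic)."""
    duration = math.pi * math.sqrt(m / k)   # half an oscillation period
    slip = 2.0 * dF / k                     # block arrests after a half-cycle
    return duration, slip

# Stiffening the apparatus at fixed block size (illustrative numbers):
events = {k: spring_block_event(m=10.0, k=k, dF=1.0e3)
          for k in (1e6, 4e6, 16e6)}
```

    In this toy model duration and slip both shrink as k grows, mirroring the experimental trend, while k·Δt = π√(mk) increases, consistent with the abstract's finding that varying stiffness alone does not keep kΔt constant.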

  10. Physically based modeling in catchment hydrology at 50: Survey and outlook

    NASA Astrophysics Data System (ADS)

    Paniconi, Claudio; Putti, Mario

    2015-09-01

    Integrated, process-based numerical models in hydrology are rapidly evolving, spurred by novel theories in mathematical physics, advances in computational methods, insights from laboratory and field experiments, and the need to better understand and predict the potential impacts of population, land use, and climate change on our water resources. At the catchment scale, these simulation models are commonly based on conservation principles for surface and subsurface water flow and solute transport (e.g., the Richards, shallow water, and advection-dispersion equations), and they require robust numerical techniques for their resolution. Traditional (and still open) challenges in developing reliable and efficient models are associated with heterogeneity and variability in parameters and state variables; nonlinearities and scale effects in process dynamics; and complex or poorly known boundary conditions and initial system states. As catchment modeling enters a highly interdisciplinary era, new challenges arise from the need to maintain physical and numerical consistency in the description of multiple processes that interact over a range of scales and across different compartments of an overall system. This paper first gives an historical overview (past 50 years) of some of the key developments in physically based hydrological modeling, emphasizing how the interplay between theory, experiments, and modeling has contributed to advancing the state of the art. The second part of the paper examines some outstanding problems in integrated catchment modeling from the perspective of recent developments in mathematical and computational science.

  11. Global Scale Atmospheric Processes Research Program Review

    NASA Technical Reports Server (NTRS)

    Worley, B. A. (Editor); Peslen, C. A. (Editor)

    1984-01-01

    Global modeling; satellite data assimilation and initialization; simulation of future observing systems; model and observed energetics; dynamics of planetary waves; First Global Atmospheric Research Program Global Experiment (FGGE) diagnosis studies; and National Research Council Research Associateship Program are discussed.

  12. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term, high-spatial-resolution simulation is a common issue in environmental modeling. A gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG), which integrates a grid modeling scheme with different spatial representations, faces the same problem: the computational cost limits applications of very-high-resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (the result is called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500 m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed with one CPU and a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale, high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
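
    The reported nine-fold speedup on 15 threads can be put in context with Amdahl's law (a back-of-the-envelope reading, not an analysis from the paper): inverting S = 1/(f + (1 − f)/p) for the serial fraction f gives roughly f ≈ 0.05 for S = 9 and p = 15.

```python
def amdahl_speedup(f, p):
    """Amdahl's law: speedup on p threads given serial fraction f."""
    return 1.0 / (f + (1.0 - f) / p)

def serial_fraction(speedup, p):
    """Invert Amdahl's law for the serial fraction implied by a
    measured speedup on p threads."""
    return (p / speedup - 1.0) / (p - 1.0)

f = serial_fraction(speedup=9.0, p=15)   # reported 9x on 15 threads
```

    This kind of estimate suggests how much of the per-timestep work (I/O, routing dependencies) remains sequential and bounds the benefit of adding further threads.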

  13. Multi-scale hydrometeorological observation and modelling for flash flood understanding

    NASA Astrophysics Data System (ADS)

    Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.

    2014-09-01

    This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2), where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2), where the river routing and flooding processes become important. These observations are part of the HyMeX (HYdrological cycle in the Mediterranean EXperiment) enhanced observation period (EOP), which will last 4 years (2012-2015). In terms of hydrological modelling, the objective is to set up regional-scale models, while addressing small and generally ungauged catchments, which represent the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set-up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes on various scales.

  14. Multi-scale hydrometeorological observation and modelling for flash-flood understanding

    NASA Astrophysics Data System (ADS)

    Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.

    2014-02-01

    This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2), where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2), where the river routing and flooding processes become important. These observations are part of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) Enhanced Observation Period (EOP), which lasts four years (2012-2015). In terms of hydrological modelling, the objective is to set up models at the regional scale, while addressing small and generally ungauged catchments, which represent the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses, in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set-up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes at various scales.

  15. Field-aligned currents and large-scale magnetospheric electric fields

    NASA Technical Reports Server (NTRS)

    Dangelo, N.

    1979-01-01

    The existence of field-aligned currents (FAC) at northern and southern high latitudes was confirmed by a number of observations, most clearly by experiments on the TRIAD and ISIS 2 satellites. The high-latitude FAC system is used to relate what is presently known about the large-scale pattern of high-latitude ionospheric electric fields and their relation to solar wind parameters. Recently a simplified model was presented for polar cap electric fields. The model is of considerable help in visualizing the large-scale features of FAC systems. A summary of the FAC observations is given. The simplified model is used to visualize how the FAC systems are driven by their generators.

  16. Validating a two-high-threshold measurement model for confidence rating data in recognition.

    PubMed

    Bröder, Arndt; Kellen, David; Schütz, Julia; Rohrmeier, Constanze

    2013-01-01

    Signal detection models as well as the two-high-threshold model (2HTM) have been used successfully as measurement models in recognition tasks to disentangle memory performance and response biases. A popular method in recognition memory is to elicit confidence judgements about the presumed old/new status of an item, allowing for the easy construction of ROCs. Since the 2HTM assumes fewer latent memory states than there are response options in confidence ratings, it has to be extended by a mapping function that models individual rating scale usage. Unpublished data from two experiments in Bröder and Schütz (2009) validate the core memory parameters of the model, and three new experiments show that the response mapping parameters are selectively affected by manipulations intended to affect rating scale use, independently of overall old/new bias. Comparisons with SDT show that both models behave similarly, highlighting that both modelling approaches can be valuable (and complementary) elements in a researcher's toolbox.
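
    The mapping function extending the 2HTM to confidence ratings can be sketched as follows: each latent state (detection, guessing "old", guessing "new") carries its own probability distribution over the rating categories. The parameter values and category layout below are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

def twoht_old_item_probs(d, g, map_detect, map_guess_old, map_guess_new):
    """Predicted rating-category probabilities for old items under a
    two-high-threshold model: detect with probability d, otherwise guess
    'old' with probability g; each latent state has its own mapping
    distribution over the rating categories."""
    detect = d * np.asarray(map_detect)
    guess = (1.0 - d) * (g * np.asarray(map_guess_old)
                         + (1.0 - g) * np.asarray(map_guess_new))
    return detect + guess

# Hypothetical 6-point scale (index 0 = sure new ... index 5 = sure old):
p = twoht_old_item_probs(
    d=0.6, g=0.5,
    map_detect=[0, 0, 0, 0.1, 0.3, 0.6],    # detection maps to 'old' side
    map_guess_old=[0, 0, 0, 0.5, 0.3, 0.2],
    map_guess_new=[0.2, 0.3, 0.5, 0, 0, 0],
)
```

    Because the memory parameters (d, g) and the mapping parameters enter the prediction separately, manipulations of rating scale use can shift the map_* distributions while leaving d and g untouched, which is the selective-influence logic the validation experiments test.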

  17. Compaction of North-sea chalk by pore-failure and pressure solution in a producing reservoir

    NASA Astrophysics Data System (ADS)

    Keszthelyi, Daniel; Dysthe, Dag; Jamtveit, Bjorn

    2016-02-01

    The Ekofisk field, Norwegian North Sea, is an example of a compacting chalk reservoir with considerable seafloor subsidence due to petroleum production. Previously, a number of models were created to predict the compaction using different phenomenological approaches. Here we present a different approach: we use a new creep model based on microscopic mechanisms, with no fitting parameters, to predict strain rate at core scale and at reservoir scale. The model is able to reproduce creep experiments and the magnitude of the observed subsidence, making it the first microstructural model that can explain the Ekofisk compaction.

  18. Local Climate Changes Forced by Changes in Land Use and topography in the Aburrá Valley, Colombia.

    NASA Astrophysics Data System (ADS)

    Zapata Henao, M. Z.; Hoyos Ortiz, C. D.

    2017-12-01

    One of the challenges in numerical weather models is the adequate representation of soil-vegetation-atmosphere interactions at different spatial scales, including scenarios with heterogeneous land cover and complex mountainous terrain. This interaction determines the energy, mass and momentum exchange at the surface and can affect variables including precipitation, temperature and wind. In order to quantify the long-term climate impact of changes in local land use and to assess the role of topography, two numerical experiments were conducted. The first experiment assesses the continuous growth of urban areas within the Aburrá Valley, a complex terrain region located in the Colombian Andes. The Weather Research and Forecasting (WRF) model is used as the basis of the experiment. The basic setup involves two nested domains, one representing the continental scale (18 km) and the other the regional scale (2 km). The second experiment drastically modifies the topography, changing the valley configuration into a plateau. The control run for both experiments corresponds to a climatological scenario, and in both experiments the boundary conditions correspond to the climatological continental domain output. Surface temperature, surface winds and precipitation are used as the main variables to compare both experiments relative to the control run. The results of the first experiment show a strong relationship between land cover and these variables, especially surface temperature and wind speed, owing to the strong forcing land cover imposes on albedo, heat capacity and surface roughness. The second experiment removes the spatial variability of the winds related to the hill slopes; wind direction and magnitude are then modulated only by the trade winds and the roughness of the land cover.

  19. Structural and Practical Identifiability Issues of Immuno-Epidemiological Vector-Host Models with Application to Rift Valley Fever.

    PubMed

    Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia

    2016-09-01

    In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. 
This is a crucial step in developing multi-scale models which explain multi-scale data.
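
    The Monte Carlo practical-identifiability check described above can be sketched in a few lines: refit the model to many noisy realisations of synthetic data and call a parameter practically identifiable when its estimates stay tightly clustered. The toy exponential-clearance model, the noise level, and the 20% coefficient-of-variation cutoff below are illustrative assumptions, not the authors' actual model or thresholds.

```python
# Illustrative Monte Carlo practical-identifiability check (not the paper's code).
import numpy as np
from scipy.optimize import curve_fit

def viral_load(t, v0, c):
    """Toy within-host stand-in: exponential viral clearance."""
    return v0 * np.exp(-c * t)

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 25)
true_params = (1e3, 0.7)          # hypothetical "true" (v0, c)
clean = viral_load(t, *true_params)

estimates = []
for _ in range(200):              # Monte Carlo replicates with 5% noise
    noisy = clean * (1 + 0.05 * rng.standard_normal(t.size))
    popt, _ = curve_fit(viral_load, t, noisy, p0=(500.0, 0.5))
    estimates.append(popt)

# Coefficient of variation of the estimates: small CV -> practically identifiable.
cv = np.std(estimates, axis=0) / np.abs(np.mean(estimates, axis=0))
for name, c_ in zip(("v0", "c"), cv):
    print(f"{name}: CV = {c_:.3f} -> {'identifiable' if c_ < 0.2 else 'poorly identified'}")
```

The same loop, run with some parameters fixed and others free, is how one probes which subsets of a nested model remain identifiable for a given data set.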

  20. Models of chromatin spatial organisation in the cell nucleus

    NASA Astrophysics Data System (ADS)

    Nicodemi, Mario

    2014-03-01

    In the cell nucleus chromosomes have a complex architecture serving vital functional purposes. Recent experiments have started unveiling the interaction map of DNA sites genome-wide, revealing different levels of organisation at different scales. The principles, though, which orchestrate such a complex 3D structure still remain mysterious. I will give an overview of the scenario emerging from classical polymer physics models of the general aspects of chromatin spatial organisation. The available experimental data, which can be rationalised in a single framework, support a picture where chromatin is a complex mixture of differently folded regions, self-organised across spatial scales according to basic physical mechanisms. I will also discuss applications to specific DNA loci, e.g. the HoxB locus, where models informed with biological details, and tested against targeted experiments, can help identify the determinants of folding.

  1. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabert, Kasimir; Burns, Ian; Elliott, Steven

    2016-09-01

    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  2. Toward a more efficient and scalable checkpoint/restart mechanism in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine

    2015-04-01

    The number of cores (both CPU and accelerator) in large-scale systems has been increasing rapidly over the past several years. In 2008, there were only 5 systems in the Top500 list that had over 100,000 total cores (including accelerator cores), whereas the number of systems with such capability had jumped to 31 by Nov 2014. This growth, however, has also increased hardware failure rates, necessitating the implementation of fault tolerance mechanisms in applications. The checkpoint and restart (C/R) approach is commonly used to save the state of the application and restart at a later time, either after failure or to continue execution of experiments. The implementation of an efficient C/R mechanism will make it more affordable to output the necessary C/R files more frequently. The availability of larger systems (more nodes, memory and cores) has also facilitated the scaling of applications. Nowadays, it is more common to conduct coupled global climate simulation experiments at 1 deg horizontal resolution (atmosphere), often requiring about 10³ cores. At the same time, a few climate modeling teams that have access to a dedicated cluster and/or large-scale systems are involved in modeling experiments at 0.25 deg horizontal resolution (atmosphere) and 0.1 deg resolution for the ocean. These ultrascale configurations require on the order of 10⁴ to 10⁵ cores. It is not only necessary for the numerical algorithms to scale efficiently but the input/output (IO) mechanism must also scale accordingly. An ongoing series of ultrascale climate simulations, using the Titan supercomputer at the Oak Ridge Leadership Computing Facility (ORNL), is based on the spectral element dynamical core of the Community Atmosphere Model (CAM-SE), which is a component of the Community Earth System Model and the DOE Accelerated Climate Model for Energy (ACME). The CAM-SE dynamical core for a 0.25 deg configuration has been shown to scale efficiently across 100,000 CPU cores.
    At this scale, there is an increased risk that the simulation could be terminated due to hardware failures, resulting in a loss that could be as high as 10⁵ to 10⁶ Titan core hours. Increasing the frequency of the output of C/R files could mitigate this loss, but at the cost of additional C/R overhead. We are testing a more efficient C/R mechanism in CAM-SE. Our early implementation has demonstrated a nearly 3X performance improvement for a 1 deg CAM-SE (with CAM5 physics and MOZART chemistry) configuration using nearly 10³ cores. We are in the process of scaling our implementation to 10⁵ cores. This would allow us to run ultrascale simulations with more sophisticated physics and chemistry options while making better utilization of resources.
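
    The checkpoint/restart pattern the abstract relies on can be illustrated generically: serialise the model state every few steps, and on restart resume from the latest checkpoint rather than step zero. This is a minimal single-process sketch with a toy state, not the CAM-SE implementation or its parallel I/O layer.

```python
# Minimal checkpoint/restart (C/R) sketch: generic illustration, not CAM-SE.
import os
import pickle
import tempfile

CKPT = os.path.join(tempfile.mkdtemp(), "model.ckpt")

def step(state):
    """One toy model step: advance a field and the step counter."""
    return {"step": state["step"] + 1, "field": state["field"] * 1.01}

def run(n_steps, interval=10):
    # Warm-start from the checkpoint if one exists, else cold-start.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            state = pickle.load(f)
    else:
        state = {"step": 0, "field": 1.0}
    while state["step"] < n_steps:
        state = step(state)
        if state["step"] % interval == 0:   # periodic checkpoint write
            with open(CKPT, "wb") as f:
                pickle.dump(state, f)
    return state

final = run(25)       # writes checkpoints at steps 10 and 20
restarted = run(40)   # resumes from the step-20 checkpoint, not from scratch
print(restarted["step"])
```

The trade-off the abstract describes is exactly the `interval` parameter here: a shorter interval loses fewer steps on failure but pays more serialisation overhead per step.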

  3. Constructing an everywhere and locally relevant predictive model of the West-African critical zone

    NASA Astrophysics Data System (ADS)

    Hector, B.; Cohard, J. M.; Pellarin, T.; Maxwell, R. M.; Cappelaere, B.; Demarty, J.; Grippa, M.; Kergoat, L.; Lebel, T.; Mamadou, O.; Mougin, E.; Panthou, G.; Peugeot, C.; Vandervaere, J. P.; Vischel, T.; Vouillamoz, J. M.

    2017-12-01

    Considering water resources and hydrologic hazards, West Africa is among the most vulnerable regions to face both climatic changes (e.g. the observed intensification of precipitation) and anthropogenic changes. With a demographic growth rate of about +3% per year, the region experiences rapid land use changes and increased pressure on surface and groundwater resources, with observed consequences on the hydrological cycle (water table rise resulting from the Sahelian paradox, increase in flood occurrence, etc.). Managing large hydrosystems (such as transboundary aquifers or river basins like the Niger) requires anticipation of such changes. However, the region significantly lacks observations for constructing and validating critical zone (CZ) models able to predict future hydrologic regimes, and it also comprises hydrosystems which encompass strong environmental gradients (e.g. geological, climatic, ecological) with highly different dominating hydrological processes. We address these issues by constructing a high resolution (1 km²) regional scale physically-based model using ParFlow-CLM, which allows modeling a wide range of processes without prior knowledge of their relative dominance. Our approach combines multiple-scale modeling from local to meso and regional scales within the same theoretical framework. Local and meso-scale models are evaluated thanks to the rich AMMA-CATCH CZ observation database which covers 3 supersites with contrasted environments in Benin (Lat.: 9.8°N), Niger (Lat.: 13.3°N) and Mali (Lat.: 15.3°N). At the regional scale, the lack of a relevant map of soil hydrodynamic parameters is addressed using remote sensing data assimilation. Our first results show the model's ability to reproduce the known dominant hydrological processes (runoff generation, ET, groundwater recharge…) across the major West-African regions and allow us to conduct virtual experiments to explore the impact of global changes on the hydrosystems.
    This approach is a first step toward the construction of a reference model for studying regional CZ sensitivity to global changes, and will help to identify the required prior parameters and to construct meta-models for deeper investigations of interactions within the CZ.

  4. Predictive Model for Particle Residence Time Distributions in Riser Reactors. Part 1: Model Development and Validation

    DOE PAGES

    Foust, Thomas D.; Ziegler, Jack L.; Pannala, Sreekanth; ...

    2017-02-28

    Here in this computational study, we model the mixing of biomass pyrolysis vapor with solid catalyst in circulating riser reactors, with a focus on the determination of solid catalyst residence time distributions (RTDs). A comprehensive set of 2D and 3D simulations were conducted for a pilot-scale riser using the Eulerian-Eulerian two-fluid modeling framework with and without sub-grid-scale models for the gas-solids interaction. A validation test case was also simulated and compared to experiments, showing agreement in the pressure gradient and RTD mean and spread. For the simulation cases, it was found that for accurate RTD prediction, the Johnson and Jackson partial slip solids boundary condition was required for all models, and a sub-grid model is useful so that ultra-high-resolution grids, which are very computationally intensive, are not required. Finally, we discovered a 2/3 scaling relation for the RTD mean and spread when comparing resolved 2D simulations to validated unresolved 3D sub-grid-scale model simulations.
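
    The RTD post-processing step behind "mean and spread" can be sketched directly: given simulated particle exit times, bin them into a normalised distribution E(t) and take its first moment (mean residence time) and second central moment (spread). The log-normal exit-time sample below is purely illustrative, not data from the paper.

```python
# Hedged sketch of residence time distribution (RTD) moments from exit times.
import numpy as np

def rtd_moments(exit_times, bins=50):
    """Return (mean, variance) of the residence time distribution E(t)."""
    counts, edges = np.histogram(exit_times, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    dt = np.diff(edges)
    mean = np.sum(centers * counts * dt)                # first moment of E(t)
    var = np.sum((centers - mean) ** 2 * counts * dt)   # second central moment
    return mean, var

rng = np.random.default_rng(1)
# Hypothetical exit times, e.g. broadly distributed catalyst residence times.
exit_times = rng.lognormal(mean=1.0, sigma=0.4, size=10_000)
rtd_mean, rtd_var = rtd_moments(exit_times)
print(f"mean residence time = {rtd_mean:.2f}, spread (variance) = {rtd_var:.2f}")
```

A scaling relation like the 2/3 factor reported above would be checked by computing these two moments for each simulation and taking their ratio.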

  5. Rate and state dependent processes in sea ice deformation

    NASA Astrophysics Data System (ADS)

    Sammonds, P. R.; Scourfield, S.; Lishman, B.

    2014-12-01

    Realistic models of sea ice processes and properties are needed to assess sea ice thickness, extent and concentration and, when run within GCMs, provide prediction of climate change. The deformation of sea ice is a key control on the Arctic Ocean dynamics. But the deformation of sea ice depends not only on the rate of the processes involved but also on the state of the sea ice, in particular its evolution with time and temperature. Shear deformation is a dominant mechanism from the scale of basin-scale shear lineaments, through floe-floe interaction, to block sliding in ice ridges. The shear deformation will depend not only on the speed of movement of ice surfaces but also on the degree to which the surfaces have bonded during thermal consolidation and compaction. Frictional resistance to sliding can vary by more than two orders of magnitude depending on the state of the interface. But this in turn is dependent upon both imposed conditions and sea ice properties such as the size distribution of interfacial broken ice, angularity, porosity, salinity, etc. We review experimental results in sea ice mechanics from mid-scale experiments, conducted in the Hamburg model ship ice tank, simulating sea ice floe motion and interaction, and compare these with laboratory experiments on ice friction done in direct shear, from which a rate and state constitutive relation for shear deformation is derived. Finally, we apply this to field measurements of sea ice friction made during experiments in the Barents Sea to assess the other environmental factors, the state terms, that need to be modelled in order to up-scale to Arctic Ocean-scale dynamics.

  6. No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.

    PubMed

    Liu, Tsung-Jung; Liu, Kuan-Hsien

    2018-03-01

    A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) which can predict scores. Scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, the ensemble method is used to combine the prediction results from selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one. They turn out to have better performance than the original single-scale method. Because of having features from five different domains at multiple image scales and using the outputs (scores) from selected score prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They can also be used for the evaluation of images with authentic distortions. Extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.
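
    The scorer-ensemble idea generalises well beyond this paper: train one regressor ("scorer") per feature domain and fuse their predictions. The sketch below uses synthetic features, generic ridge-regression scorers, and a plain mean as the fusion step; the domain names match the abstract, but everything else is an illustrative assumption.

```python
# Hedged sketch of a per-domain scorer ensemble (synthetic data, generic models).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
n = 400
# Hypothetical per-domain feature blocks, 4 features each.
domains = {name: rng.standard_normal((n, 4)) for name in
           ("brightness", "contrast", "color", "distortion", "texture")}
# Synthetic "true" quality: each domain contributes through its first feature.
quality = sum(f[:, 0] for f in domains.values()) + 0.2 * rng.standard_normal(n)

scorers = {}
for name, feats in domains.items():
    m = Ridge(alpha=1.0)
    m.fit(feats[:300], quality[:300])      # one scorer per perceptual domain
    scorers[name] = m

# Fusion: average the scorers' predictions on held-out images.
preds = np.mean([m.predict(domains[k][300:]) for k, m in scorers.items()], axis=0)
corr = np.corrcoef(preds, quality[300:])[0, 1]
print(f"ensemble correlation with true quality: {corr:.2f}")
```

The paper's scorer-selection step would correspond to dropping weak scorers from the dictionary before averaging.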

  7. Three-Dimensional Multiscale Modeling of Dendritic Spacing Selection During Al-Si Directional Solidification

    NASA Astrophysics Data System (ADS)

    Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain

    2015-08-01

    We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. We focus on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.

  8. Flow and contaminant transport in an airliner cabin induced by a moving body: Model experiments and CFD predictions

    NASA Astrophysics Data System (ADS)

    Poussou, Stephane B.; Mazumdar, Sagnik; Plesniak, Michael W.; Sojka, Paul E.; Chen, Qingyan

    2010-08-01

    The effects of a moving human body on flow and contaminant transport inside an aircraft cabin were investigated. Experiments were performed in a one-tenth scale, water-based model. The flow field and contaminant transport were measured using the Particle Image Velocimetry (PIV) and Planar Laser-Induced Fluorescence (PLIF) techniques, respectively. Measurements were obtained with (ventilation case) and without (baseline case) the cabin environmental control system (ECS). The PIV measurements show strong intermittency in the instantaneous near-wake flow. A symmetric downwash flow was observed along the vertical centerline of the moving body in the baseline case. The evolution of this flow pattern is profoundly perturbed by the flow from the ECS. Furthermore, a contaminant originating from the moving body is observed to convect to higher vertical locations in the presence of ventilation. These experimental data were used to validate a Computational Fluid Dynamics (CFD) model. The CFD model can effectively capture the characteristic flow features and contaminant transport observed in the small-scale model.

  9. Combining experimentalist knowledge with modelling approaches to evaluate a controlled herbicide application experiment in an agricultural headwater catchment

    NASA Astrophysics Data System (ADS)

    Ammann, Lorenz; Fenicia, Fabrizio; Doppler, Tobias; Reichert, Peter; Stamm, Christian

    2017-04-01

    Although only a small fraction of the herbicide mass sprayed on agricultural fields reaches the stream under usual conditions, concentrations in streams may reach levels proven to affect organisms. Therefore, diffuse pollution of water bodies by herbicides in catchments dominated by agricultural land-use is a major concern. The process of herbicide wash off has been studied through experiments at lab and field scales. Fewer studies are available at the scales of small catchments and larger watersheds, as the lack of spatial measurements at these scales hinders model parameterization and evaluation. Even fewer studies make explicit use of the combined knowledge of experimentalists and modellers. As a result, the dynamics and interactions of processes responsible for herbicide mobilization and transport at the catchment scale are insufficiently understood. In this work, we integrate preexisting experimentalist knowledge acquired in a large controlled herbicide application experiment into the model development process. The experimental site was a small (1.2 km2) agricultural catchment with subdued topography (423 to 473 m a.s.l.), typical for the Swiss Plateau. The experiment consisted of an application of multiple herbicides, distributed in-stream concentration measurements at high temporal resolution as well as soil and ponding water samples. The measurements revealed considerable spatio-temporal variation in herbicide loss rates. The objective of our study is to better understand the processes that caused this variation. In an iterative dialogue between modellers and experimentalists, we constructed a simple hydrological model structure with multiple reservoirs, considering degradation and sorption of herbicides. Spatial heterogeneity was accounted for through Hydrological Response Units (HRUs). Different model structures were used for distinct HRUs to account for spatial variability in the perceived dominant processes.
Some parameters were linked between HRUs to constrain the parameter space and facilitate inference. The Superflex hydrological modelling framework provided the flexibility needed for the distributed iterative approach. The model was jointly calibrated to streamflow data and time series of herbicide concentrations. Our preliminary results indicate that herbicide loss rates are generally higher for soils which are prone to saturation or when maximum rainfall intensity is high. While a very simple model is sufficient to characterize the hydrological response of the catchment, considerable extensions are needed to include the major conceptual herbicide transport paths in a physically reasonable way. With the current model we are able to reproduce streamflow dynamics, whereas identifying generalizable mechanisms that drive the wash off dynamics of different herbicides from different fields is challenging.

  10. Downscaling of global climate change estimates to regional scales: An application to Iberian rainfall in wintertime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Storch, H.; Zorita, E.; Cubasch, U.

    A statistical strategy to deduce regional-scale features from climate general circulation model (GCM) simulations has been designed and tested. The main idea is to interrelate the characteristic patterns of observed simultaneous variations of regional climate parameters and of large-scale atmospheric flow using the canonical correlation technique. The large-scale North Atlantic sea level pressure (SLP) is related to the regional variable, winter (DJF) mean Iberian Peninsula rainfall. The skill of the resulting statistical model is shown by reproducing, to a good approximation, the winter mean Iberian rainfall from 1900 to the present from the observed North Atlantic mean SLP distributions. It is shown that this observed relationship between these two variables is not well reproduced in the output of a general circulation model (GCM). The implications for Iberian rainfall changes as the response to increasing atmospheric greenhouse-gas concentrations simulated by two GCM experiments are examined with the proposed statistical model. In an instantaneous "2 x CO2" doubling experiment, using the simulated change of the mean North Atlantic SLP field to predict Iberian rainfall yields an insignificant increase of area-averaged rainfall of 1 mm/month, with maximum values of 4 mm/month in the northwest of the peninsula. In contrast, for the four GCM grid points representing the Iberian Peninsula, the change is -10 mm/month, with a minimum of -19 mm/month in the southwest. In the second experiment, with the IPCC scenario A ("business as usual") increase of CO2, the statistical-model results partially differ from the directly simulated rainfall changes: in the experimental range of 100 years, the area-averaged rainfall decreases by 7 mm/month (statistical model) and by 9 mm/month (GCM); at the same time the amplitude of the interdecadal variability is quite different. 17 refs., 10 figs.

  11. The Variance of Intraclass Correlations in Three- and Four-Level Models

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Hedberg, E. C.; Kuyper, Arend M.

    2012-01-01

    Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
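
    For a two-level design, the intraclass correlation the abstract refers to is the share of total variance lying between clusters, commonly estimated with the one-way ANOVA moment estimator. The balanced synthetic clusters below (true ICC = 0.2) are an illustrative assumption.

```python
# Minimal one-way ANOVA estimator of the intraclass correlation (illustrative).
import numpy as np

def icc_oneway(groups):
    """groups: list of 1-D arrays, one per cluster, all the same size n."""
    n = len(groups[0])
    k = len(groups)
    means = np.array([g.mean() for g in groups])
    grand = np.concatenate(groups).mean()
    msb = n * np.sum((means - grand) ** 2) / (k - 1)                        # between-cluster MS
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))  # within-cluster MS
    return (msb - msw) / (msb + (n - 1) * msw)

rng = np.random.default_rng(3)
# Synthetic clusters: between-variance 1, within-variance 4 -> true ICC = 1/5 = 0.2.
groups = [rng.normal(rng.normal(0, 1.0), 2.0, size=30) for _ in range(50)]
icc = icc_oneway(groups)
print(f"estimated ICC = {icc:.2f}")
```

The sampling variability of this estimate across surveys is exactly the quantity whose variance the article derives for three- and four-level designs.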

  12. Complementarity of Symmetry Tests at the Energy and Intensity Frontiers

    NASA Astrophysics Data System (ADS)

    Peng, Tao

    We studied several symmetries and interactions beyond the Standard Model and their phenomenology in both high energy colliders and low energy experiments. Lepton number conservation is not a fundamental symmetry of the Standard Model (SM). The nature of the neutrino depends on whether or not lepton number is violated. Leptogenesis also requires lepton number violation (LNV). So we want to know whether lepton number is a good symmetry or not, and we want to compare the sensitivity of high energy colliders and low energy neutrinoless double-beta decay (0nubetabeta) experiments. To do this, we included the QCD running effects, the background analysis, and the long-distance contributions to nuclear matrix elements. Our result shows that the reach of future tonne-scale 0nubetabeta decay experiments generally exceeds the reach of the 14 TeV LHC for a class of simplified models. For a range of heavy particle masses at the TeV scale, the high luminosity 14 TeV LHC and tonne-scale 0nubetabeta decay experiments may provide complementary probes. The 100 TeV collider with a luminosity of 30 ab-1 exceeds the reach of the tonne-scale 0nubetabeta experiments for most of the range of heavy particle masses at the TeV scale. We considered a non-Abelian kinetic mixing between the Standard Model gauge bosons and a U(1)' gauge group dark photon, with the existence of an SU(2)L scalar triplet. The coupling constant epsilon between the dark photon and the SM gauge bosons is determined by the triplet vacuum expectation value (vev), the scale of the effective theory Lambda, and the effective operator Wilson coefficient. The triplet vev is constrained to ≤ 4 GeV. By taking the effective operator Wilson coefficient to be O(1) and Lambda > 1 TeV, we will have a small value of epsilon which is consistent with the experimental constraint. We outlined the possible LHC signatures and recast the current ATLAS dark photon experimental results into our non-Abelian mixing scenario.
    We analyzed the QCD corrections to dark matter (DM) interactions with SM quarks and gluons. Because we would like to connect the new physics at a high scale with the effects seen in the direct detection of DM at a low scale, we studied the QCD running for a list of dark matter effective operators. These corrections are important in precision DM physics. Currently little is known about the short-distance physics of DM. We find that the short-distance QCD corrections generate a finite matching correction when integrating out the electroweak gauge bosons. High precision measurements of electroweak precision observables can provide crucial input in the search for supersymmetry (SUSY) and play an important role in testing the universality of the SM charged current interaction. We studied the SUSY corrections to such observables, DeltaCKM and Deltae/mu, with the experimental constraints on the parameter space. Their corrections are generally of order O(10⁻⁴). Future experiments need to reach this precision to search for SUSY using these observables.

  13. PIV study of flow through porous structure using refractive index matching

    NASA Astrophysics Data System (ADS)

    Häfeli, Richard; Altheimer, Marco; Butscher, Denis; Rudolf von Rohr, Philipp

    2014-05-01

    An aqueous solution of sodium iodide and zinc iodide is proposed as a fluid that matches the refractive index of a solid manufactured by rapid prototyping. This enabled optical measurements in single-phase flow through porous structures. Experiments were also done with an organic index-matching fluid (anisole) in porous structures of different dimensions. To compare experiments with different viscosities and dimensions, we employed Reynolds similarity to deduce the scaling laws. One of the target quantities of our investigation was the dissipation rate of turbulent kinetic energy. Different models for the dissipation rate estimation were evaluated by comparing isotropy ratios. As in many other studies, our experiments were not capable of resolving the velocity field down to the Kolmogorov length scale, and therefore the dissipation rate has to be considered as underestimated. This is visible in experiments of different relative resolutions. However, being near the Kolmogorov scale allows estimating a reproducible, yet underestimated, spatial distribution of the dissipation rate inside the porous structure. Based on these results, the model was used to estimate the turbulent diffusivity. Comparing it to the dispersion coefficient obtained in the same porous structure, we conclude that the turbulent diffusivity makes up only a small part of mass transfer in the axial direction. The main part is therefore attributed to Taylor dispersion.
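
    A common gradient-based dissipation estimate from planar PIV data, under the local-isotropy assumption, is epsilon ≈ 15 ν ⟨(∂u/∂x)²⟩; when the vector spacing is coarser than the Kolmogorov scale, the gradients and hence epsilon are biased low, as the abstract notes. The velocity field and parameter values below are synthetic stand-ins, not a specific model from the paper.

```python
# Hedged sketch of an isotropic, gradient-based dissipation-rate estimate.
import numpy as np

def dissipation_isotropic(u, dx, nu):
    """Isotropic surrogate from one longitudinal velocity gradient."""
    dudx = np.gradient(u, dx, axis=1)      # finite-difference gradient, as in PIV
    return 15.0 * nu * np.mean(dudx ** 2)

rng = np.random.default_rng(4)
nu = 1e-6                                  # kinematic viscosity of water, m^2/s
dx = 1e-3                                  # PIV vector spacing, m (assumed)
u = 0.01 * rng.standard_normal((64, 64))   # synthetic fluctuating velocity, m/s

eps = dissipation_isotropic(u, dx, nu)
print(f"epsilon estimate: {eps:.2e} W/kg")
```

Comparing such estimates at several vector spacings `dx` is how the resolution-dependent underestimation described above becomes visible.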

  14. The generation and amplification of intergalactic magnetic fields in analogue laboratory experiments with high power lasers

    NASA Astrophysics Data System (ADS)

    Gregori, G.; Reville, B.; Miniati, F.

    2015-11-01

    The advent of high-power laser facilities has, in the past two decades, opened a new field of research where astrophysical environments can be scaled down to laboratory dimensions while preserving the essential physics. This is due to the invariance of the equations of magneto-hydrodynamics under a class of similarity transformations. Here we review the relevant scaling relations and their application in laboratory astrophysics experiments, with a focus on the generation and amplification of magnetic fields in cosmic environments. The standard model for the origin of magnetic fields is a multi-stage process whereby a vanishingly small magnetic seed is first generated by a rotational electric field and is then amplified by turbulent dynamo action to the characteristic values observed in astronomical bodies. We thus discuss the relevant seed generation mechanisms in cosmic environments, including resistive mechanisms, collisionless and fluid instabilities, as well as novel laboratory experiments using high power laser systems aimed at investigating the amplification of magnetic energy by magneto-hydrodynamic (MHD) turbulence. Future directions, including efforts to model in the laboratory the process of diffusive shock acceleration, are also discussed, with an emphasis on the potential of laboratory experiments to further our understanding of plasma physics on cosmic scales.

  15. Cold dark matter confronts the cosmic microwave background - Large-angular-scale anisotropies in Omega sub 0 + lambda = 1 models

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.; Silk, Joseph; Vittorio, Nicola

    1992-01-01

    A new technique is used to compute the correlation function for large-angle cosmic microwave background anisotropies resulting from both the space and time variations in the gravitational potential in flat, vacuum-dominated, cold dark matter cosmological models. Such models, with Omega sub 0 of about 0.2, fit the excess power, relative to the standard cold dark matter model, observed in the large-scale galaxy distribution and allow a high value for the Hubble constant. The low-order multipoles and quadrupole anisotropy that are potentially observable by COBE and other ongoing experiments should definitively test these models.

  16. Farmland Drought Evaluation Based on the Assimilation of Multi-Temporal Multi-Source Remote Sensing Data into AquaCrop Model

    NASA Astrophysics Data System (ADS)

    Yang, Guijun; Yang, Hao; Jin, Xiuliang; Pignatti, Stefano; Casa, Raffaele; Silvestro, Paolo Cosmo

    2016-08-01

    Drought is among the most costly natural disasters in China and worldwide. It is very important to evaluate drought-induced crop yield losses and further improve water use efficiency at the regional scale. Firstly, crop biomass was estimated by the combined use of Synthetic Aperture Radar (SAR) and optical remote sensing data. Then the estimated biophysical variable was assimilated into a crop growth model (FAO AquaCrop) by the Particle Swarm Optimization (PSO) method, from farmland scale to regional scale. At the farmland scale, the most important crop parameters of the AquaCrop model were determined, to reduce the number of parameters used in the assimilation procedure. The Extended Fourier Amplitude Sensitivity Test (EFAST) method was used for assessing the contribution of different crop parameters to model output. Moreover, the AquaCrop model was calibrated using the experiment data in Xiaotangshan, Beijing. At the regional scale, our methods were applied spatially and validated in the rural area of Yangling, Shaanxi Province, in 2014. This study will provide guidance for irrigation decisions that balance water consumption against yield loss.
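
    The PSO assimilation step above amounts to letting swarm particles search crop-model parameter space for the values that minimise the misfit between simulated and observed biomass. The sketch below uses a toy logistic biomass curve and synthetic observations in place of AquaCrop and remote-sensing retrievals; swarm hyperparameters are conventional defaults, not the paper's settings.

```python
# Illustrative PSO parameter assimilation against a toy biomass model (not AquaCrop).
import numpy as np

def biomass(t, growth_rate, capacity):
    """Toy logistic stand-in for the crop model's biomass output."""
    return capacity / (1 + np.exp(-growth_rate * (t - 50)))

rng = np.random.default_rng(5)
t = np.arange(0, 100, 5.0)
obs = biomass(t, 0.12, 8.0) + 0.1 * rng.standard_normal(t.size)  # "observations"

def misfit(p):
    return np.sum((biomass(t, p[0], p[1]) - obs) ** 2)

# Basic PSO: positions x, velocities v, personal bests pb, one global best gb.
n, dims, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
lo, hi = np.array([0.01, 1.0]), np.array([0.5, 20.0])
x = rng.uniform(lo, hi, (n, dims))
v = np.zeros((n, dims))
pb, pb_cost = x.copy(), np.array([misfit(p) for p in x])

for _ in range(200):
    gb = pb[np.argmin(pb_cost)]
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)   # inertia + attraction
    x = np.clip(x + v, lo, hi)                            # keep particles in bounds
    cost = np.array([misfit(p) for p in x])
    better = cost < pb_cost
    pb[better], pb_cost[better] = x[better], cost[better]

best = pb[np.argmin(pb_cost)]
print(f"recovered growth_rate={best[0]:.3f}, capacity={best[1]:.2f}")
```

Restricting `dims` to the few parameters flagged as influential by a sensitivity analysis (EFAST in the paper) is what keeps this search tractable.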

  17. Effects of large-scale wind driven turbulence on sound propagation

    NASA Technical Reports Server (NTRS)

    Noble, John M.; Bass, Henry E.; Raspet, Richard

    1990-01-01

    Acoustic measurements made in the atmosphere have shown significant fluctuations in amplitude and phase resulting from the interaction with time-varying meteorological conditions. The observed variations appear to have short-term and long-term (1 to 5 minutes) components, at least in the phase of the acoustic signal. One possible way to account for this long-term variation is the use of a large-scale wind-driven turbulence model. From a Fourier analysis of the phase variations, the outer scales for the large-scale turbulence are 200 meters and greater, which corresponds to turbulence in the energy-containing subrange. The large-scale turbulence is assumed to consist of elongated longitudinal vortex pairs roughly aligned with the mean wind. Due to the size of a vortex pair compared to the scale of the present experiment, the effect of the vortex pair on the acoustic field can be modeled as the sound speed of the atmosphere varying with time. The model provides results with the same trends and variations in phase observed experimentally.

  18. Reactive transport of CO2-rich fluids in simulated wellbore interfaces: Experiments and models exploring behaviour on length scales of 1 to 6 m

    NASA Astrophysics Data System (ADS)

    Wolterbeek, T. K. T.; Raoof, A.; Peach, C. J.; Spiers, C. J.

    2016-12-01

    Defects present at casing-cement interfaces in wellbores constitute potential pathways for CO2 to migrate from geological storage systems. It is essential to understand how the transport properties of such pathways evolve when penetrated by CO2-rich fluids. While numerous studies have explored this problem at the decimetre length-scale, the 1-10-100 m scales relevant for real wellbores have received little attention. The present work addresses the effects of long-range reactive transport on a length scale of 1-6 m, by means of a combined experimental and modelling study. The experimental work consisted of flow-through tests, performed on cement-filled steel tubes, 1-6 m in length, containing artificially debonded cement interfaces. Four tests were performed, at 60-80 °C, imposing flow-through of CO2-rich fluid at mean pressures of 10-15 MPa, controlling the pressure difference at 0.12-4.8 MPa, while measuring flow-rate. In the modelling work, we developed a numerical model to explore reactive transport in CO2-exposed defects on a similar length scale. The formulation adopted incorporates fluid flow, advective and diffusive solute transport, and CO2-cement chemical reactions. Our results show that long-range reactive transport strongly affects the permeability evolution of CO2-exposed defects. In the experiments, sample permeability decreased by 2-4 orders of magnitude, which microstructural observations revealed was associated with downstream precipitation of carbonates, possibly aided by migration of fines. The model simulations show that precipitation in initially open defects produces a sharp decrease in flow rate, causing a transition from advection-dominated to diffusion-dominated reactive transport. While the modelling results broadly reproduce the experimental observations, it is further demonstrated that non-uniformity in initial defect aperture has a profound impact on self-sealing behaviour and system permeability evolution at the metre scale.
The implication is that future reactive transport models and wellbore scale analyses must include defects with variable aperture in order to obtain reliable upscaling relations.
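The aperture-flow coupling that drives this self-sealing can be sketched with the cubic law for flow between parallel plates; all numbers below are illustrative, not values from the study:

```python
def cubic_law_flow(aperture_m, width_m, length_m, dp_pa, viscosity_pa_s):
    """Volumetric flow rate (m^3/s) through a parallel-plate defect.

    Cubic law: Q = (w * b**3 / (12 * mu)) * (dP / L), so flow scales with
    the cube of the aperture b.
    """
    return (width_m * aperture_m ** 3 / (12.0 * viscosity_pa_s)) * (dp_pa / length_m)

# Carbonate precipitation that halves a 100-micron aperture cuts flow 8-fold:
q_open = cubic_law_flow(100e-6, 0.01, 1.0, 1.0e6, 1.0e-3)
q_sealed = cubic_law_flow(50e-6, 0.01, 1.0, 1.0e6, 1.0e-3)
```

The cubic dependence is why even partial precipitation produces the sharp drop in flow rate, and hence the advection-to-diffusion transition, described in the abstract.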

  19. Fluid dynamic mechanisms and interactions within separated flows and their effects on missile aerodynamics

    NASA Astrophysics Data System (ADS)

    Addy, A. L.; Chow, W. L.; Korst, H. H.; White, R. A.

    1983-05-01

Significant data and detailed results of a joint research effort investigating the fluid dynamic mechanisms and interactions within separated flows are presented. The results were obtained through analytical, experimental, and computational investigations of base flow related configurations. The research objectives focus on understanding the component mechanisms and interactions which establish and maintain separated flow regions. Flow models and theoretical analyses were developed to describe the base flowfield. The research approach has been to conduct extensive small-scale experiments on base flow configurations and to analyze these flows by component models and finite-difference techniques. Base flows of missiles (both powered and unpowered) in transonic and supersonic freestreams have been successfully modeled with component models. Research on plume effects and plume modeling indicated the need to match initial plume slope and plume surface curvature for valid wind tunnel simulation of an actual rocket plume. The assembly and development of a state-of-the-art laser Doppler velocimeter (LDV) system for experiments with two-dimensional small-scale models has been completed and detailed velocity and turbulence measurements are underway. The LDV experiments include the entire range of base flowfield mechanisms - shear layer development, recompression/reattachment, shock-induced separation, and plume-induced separation.

  20. Experimental measurements of hydrodynamic instabilities on NOVA of relevance to astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budil, K S; Cherfils, C; Drake, R P

    1998-09-11

Large lasers such as Nova allow the possibility of achieving regimes of high energy densities in plasmas of millimeter spatial scales and nanosecond time scales. In those plasmas where thermal conductivity and viscosity do not play a significant role, the hydrodynamic evolution is suitable for benchmarking hydrodynamics modeling in astrophysical codes. Several experiments on Nova examine hydrodynamically unstable interfaces. A typical Nova experiment uses a gold millimeter-scale hohlraum to convert the laser energy to a 200 eV blackbody source lasting about a nanosecond. The x-rays ablate a planar target, generating a series of shocks and accelerating the target. The evolving areal density is diagnosed by time-resolved radiography, using a second x-ray source. Data from several experiments are presented and diagnostic techniques are discussed.

  1. COMMUNITY SCALE AIR TOXICS MODELING WITH CMAQ

    EPA Science Inventory

    Consideration and movement for an urban air toxics control strategy is toward a community, exposure and risk-based modeling approach, with emphasis on assessments of areas that experience high air toxic concentration levels, the so-called "hot spots". This strategy will requir...

  2. Variational assimilation of streamflow into operational distributed hydrologic models: effect of spatiotemporal adjustment scale

    NASA Astrophysics Data System (ADS)

    Lee, H.; Seo, D.-J.; Liu, Y.; Koren, V.; McKee, P.; Corby, R.

    2012-01-01

State updating of distributed rainfall-runoff models via streamflow assimilation is subject to overfitting because the large dimensionality of the state space of the model may render the assimilation problem seriously under-determined. To examine the issue in the context of operational hydrology, we carry out a set of real-world experiments in which streamflow data are assimilated into gridded Sacramento Soil Moisture Accounting (SAC-SMA) and kinematic-wave routing models of the US National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM) with the variational data assimilation technique. Study basins include four basins in Oklahoma and five basins in Texas. To assess the sensitivity of data assimilation performance to dimensionality reduction in the control vector, we used nine different spatiotemporal adjustment scales, where state variables are adjusted in a lumped, semi-distributed, or distributed fashion and biases in precipitation and potential evaporation (PE) are adjusted hourly, 6-hourly, or kept time-invariant. For each adjustment scale, three different streamflow assimilation scenarios are explored, where streamflow observations at basin interior points, at the basin outlet, or at both interior points and the outlet are assimilated. The streamflow assimilation experiments with nine different basins show that the optimum spatiotemporal adjustment scale varies from one basin to another and may differ between streamflow analysis and prediction in all three streamflow assimilation scenarios. The most preferred adjustment scale for seven out of nine basins is found to be the distributed, hourly scale, despite the fact that several independent validation results at this adjustment scale indicated the occurrence of overfitting. Basins with highly correlated interior and outlet flows tend to be less sensitive to the adjustment scale and could benefit more from streamflow assimilation.
In comparison to outlet flow assimilation, interior flow assimilation at any adjustment scale produces streamflow predictions with a spatial correlation structure more consistent with that of streamflow observations. We also describe diagnosing the complexity of the assimilation problem using the spatial correlation information associated with the streamflow process, and discuss the effect of timing errors in a simulated hydrograph on the performance of the data assimilation procedure.

  3. Experimental Plan for Crystal Accumulation Studies in the WTP Melter Riser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D.; Fowley, M.

    2015-04-28

This experimental plan defines crystal settling experiments to be performed in support of the U.S. Department of Energy – Office of River Protection crystal tolerant glass program. The road map for development of crystal-tolerant high level waste glasses recommends that fluid dynamic modeling be used to better understand the accumulation of crystals in the melter riser and mechanisms of removal. A full-scale version of the Hanford Waste Treatment and Immobilization Plant (WTP) melter riser constructed with transparent material will be used to provide data in support of model development. The system will also provide a platform to demonstrate mitigation or recovery strategies in off-normal events where crystal accumulation impedes melter operation. Test conditions and material properties will be chosen to provide results over a variety of parameters, which can be used to guide validation experiments with the Research Scale Melter at the Pacific Northwest National Laboratory, and that will ultimately lead to the development of a process control strategy for the full scale WTP melter. The experiments described in this plan are divided into two phases. Bench scale tests will be used in Phase 1 (using the appropriate solid and fluid simulants to represent molten glass and spinel crystals) to verify the detection methods and analytical measurements prior to their use in a larger scale system. In Phase 2, a full scale, room temperature mockup of the WTP melter riser will be fabricated. The mockup will provide dynamic measurements of flow conditions, including resistance to pouring, as well as allow visual observation of crystal accumulation behavior.

  4. Prospects for discovering the Higgs-like pseudo-Nambu-Goldstone boson of the classical scale symmetry

    NASA Astrophysics Data System (ADS)

    Farzinnia, Arsham

    2015-11-01

We examine the impact of the expected reach of the LHC and the XENON1T experiments on the parameter space of the minimal classically scale invariant extension of the standard model (SM), where all the mass scales are induced dynamically by means of the Coleman-Weinberg mechanism. In this framework, the SM content is enlarged by the addition of one complex gauge-singlet scalar with a scale invariant and CP-symmetric potential. The massive pseudoscalar component, protected by the CP symmetry, forms a viable dark matter candidate, and three flavors of the right-handed Majorana neutrinos are included to account for the nonzero masses of the SM neutrinos via the seesaw mechanism. The projected constraints on the parameter space arise by applying the ATLAS heavy Higgs discovery prospects, with an integrated luminosity of 300 and 3000 fb-1 at √s = 14 TeV, to the pseudo-Nambu-Goldstone boson of the (approximate) scale symmetry, as well as by utilizing the expected reach of the XENON1T direct detection experiment for the discovery of the pseudoscalar dark matter candidate. Even in the absence of a signal, these future experiments will thoroughly explore vast regions of the model's parameter space; the combined projections are expected to confine the mixing between the SM and the singlet sector to very small values while probing the viability of the TeV scale pseudoscalar's thermal relic abundance as the dominant dark matter component in the Universe. Furthermore, the vacuum stability and triviality requirements of the framework up to the Planck scale are studied, and the viable region of the parameter space is identified. The results are summarized in extensive exclusion plots, incorporating additionally the prior theoretical and experimental bounds for comparison.

  5. Nonlocal transport in the presence of transport barriers

    NASA Astrophysics Data System (ADS)

    Del-Castillo-Negrete, D.

    2013-10-01

There is experimental, numerical, and theoretical evidence that transport in plasmas can, under certain circumstances, depart from the standard local, diffusive description. Examples include fast pulse propagation phenomena in perturbative experiments, non-diffusive scaling in L-mode plasmas, and non-Gaussian statistics of fluctuations. From the theoretical perspective, non-diffusive transport descriptions follow from the relaxation of the restrictive assumptions (locality, scale separation, and Gaussian/Markovian statistics) at the foundation of diffusive models. We discuss an alternative class of models able to capture some of the observed non-diffusive transport phenomenology. The models are based on a class of nonlocal, integro-differential operators that provide a unifying framework to describe non-Fickian scale-free transport and non-Markovian (memory) effects. We study the interplay between nonlocality and internal transport barriers (ITBs) in perturbative transport, including cold edge pulses and power modulation. Of particular interest is the nonlocal ``tunnelling'' of perturbations through ITBs. Also, flux-gradient diagrams are discussed as diagnostics to detect nonlocal transport processes in numerical simulations and experiments. Work supported by the US Department of Energy.

  6. A scaled-ionic-charge simulation model that reproduces enhanced and suppressed water diffusion in aqueous salt solutions.

    PubMed

    Kann, Z R; Skinner, J L

    2014-09-14

    Non-polarizable models for ions and water quantitatively and qualitatively misrepresent the salt concentration dependence of water diffusion in electrolyte solutions. In particular, experiment shows that the water diffusion coefficient increases in the presence of salts of low charge density (e.g., CsI), whereas the results of simulations with non-polarizable models show a decrease of the water diffusion coefficient in all alkali halide solutions. We present a simple charge-scaling method based on the ratio of the solvent dielectric constants from simulation and experiment. Using an ion model that was developed independently of a solvent, i.e., in the crystalline solid, this method improves the water diffusion trends across a range of water models. When used with a good-quality water model, e.g., TIP4P/2005 or E3B, this method recovers the qualitative behaviour of the water diffusion trends. The model and method used were also shown to give good results for other structural and dynamic properties including solution density, radial distribution functions, and ion diffusion coefficients.
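The dielectric-ratio charge scaling can be written in a few lines; a minimal sketch, where the square-root form of the scaling factor and the dielectric constants used are assumptions for illustration:

```python
import math

def scaled_ionic_charge(formal_charge_e, eps_sim, eps_exp):
    """Scale a formal ionic charge by sqrt(eps_sim / eps_exp).

    Hypothetical illustration of the dielectric-ratio scaling idea; the
    square-root form and the dielectric values below are assumptions.
    """
    return formal_charge_e * math.sqrt(eps_sim / eps_exp)

# e.g. a water model with simulated dielectric constant 58 versus the
# experimental value of about 78:
q_na = scaled_ionic_charge(+1.0, 58.0, 78.0)   # scaled below the formal +1
```

A scaled charge below the formal value weakens ion-water electrostatics, which is the qualitative effect needed to recover enhanced water diffusion around low-charge-density salts.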

  7. Multi-Scale Experiments to Evaluate Mobility Control Methods for Enhancing the Sweep Efficiency of Injected Subsurface Remediation Amendments

    DTIC Science & Technology

    2010-08-01

    petroleum industry. Moreover, heterogeneity control strategies can be applied to improve the efficiency of a variety of in situ remediation technologies...conditions that differ significantly from those found in environmental systems . Therefore many of the design criteria used by the petroleum industry for...were helpful in constructing numerical models in up-scaled systems (2-D tanks). The UTCHEM model was able to successfully simulate 2-D experimental

  8. The Numerical Studies Program for the Atmospheric General Circulation Experiment (AGCE) for Spacelab Flights

    NASA Technical Reports Server (NTRS)

    Fowlis, W. W. (Editor); Davis, M. H. (Editor)

    1981-01-01

The atmospheric general circulation experiment (AGCE) numerical design for Spacelab flights was studied. A spherical baroclinic flow experiment which models the large scale circulations of the Earth's atmosphere was proposed. Gravity is simulated by a radial dielectric body force. The major objective of the AGCE is to study nonlinear baroclinic wave flows in spherical geometry. Numerical models must be developed which accurately predict the basic axisymmetric states and the stability of nonlinear baroclinic wave flows. A three-dimensional, fully nonlinear numerical model of the AGCE, based on the complete set of equations, is required. Progress in the AGCE numerical design studies program is reported.

  9. Moving Contact Lines: Linking Molecular Dynamics and Continuum-Scale Modeling.

    PubMed

    Smith, Edward R; Theodorakis, Panagiotis E; Craster, Richard V; Matar, Omar K

    2018-05-17

    Despite decades of research, the modeling of moving contact lines has remained a formidable challenge in fluid dynamics whose resolution will impact numerous industrial, biological, and daily life applications. On the one hand, molecular dynamics (MD) simulation has the ability to provide unique insight into the microscopic details that determine the dynamic behavior of the contact line, which is not possible with either continuum-scale simulations or experiments. On the other hand, continuum-based models provide a link to the macroscopic description of the system. In this Feature Article, we explore the complex range of physical factors, including the presence of surfactants, which governs the contact line motion through MD simulations. We also discuss links between continuum- and molecular-scale modeling and highlight the opportunities for future developments in this area.

  10. Micro-scale finite element modeling of ultrasound propagation in aluminum trabecular bone-mimicking phantoms: A comparison between numerical simulation and experimental results.

    PubMed

    Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S

    2016-05-01

The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples as trabecular bone-mimicking phantoms were utilized for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated to be implemented in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and the average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6% respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound with maximum relative errors of 20 m/s and 11 m/s respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures.

  11. Towards Year-round Estimation of Terrestrial Water Storage over Snow-Covered Terrain via Multi-sensor Assimilation of GRACE/GRACE-FO and AMSR-E/AMSR-2.

    NASA Astrophysics Data System (ADS)

    Wang, J.; Xue, Y.; Forman, B. A.; Girotto, M.; Reichle, R. H.

    2017-12-01

The Gravity Recovery and Climate Experiment (GRACE) has revolutionized large-scale remote sensing of the Earth's terrestrial hydrologic cycle and has provided an unprecedented observational constraint for global land surface models. However, the coarse-scale (in space and time), vertically-integrated measure of terrestrial water storage (TWS) limits GRACE's applicability to smaller scale hydrologic applications. In order to enhance model-based estimates of TWS while effectively adding resolution (in space and time) to the coarse-scale TWS retrievals, a multi-variate, multi-sensor data assimilation framework is presented here that simultaneously assimilates gravimetric retrievals of TWS in conjunction with passive microwave (PMW) brightness temperature (Tb) observations over snow-covered terrain. The framework uses the NASA Catchment Land Surface Model (Catchment) and an ensemble Kalman filter (EnKF). A synthetic assimilation experiment is presented for the Volga river basin in Russia. The skill of the output from the assimilation of synthetic observations is compared with that of model estimates generated without the benefit of assimilating the synthetic observations. It is shown that the EnKF framework improves modeled estimates of TWS, snow depth, and snow mass (a.k.a. snow water equivalent). The data assimilation routine produces a conditioned (updated) estimate that is more accurate and contains less uncertainty during both the snow accumulation and the snow ablation phases of the snow season.
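The EnKF update at the core of such a framework can be sketched in a few lines; a minimal, textbook perturbed-observation analysis step with a linear observation operator (the operational Catchment/RDHM configuration is of course far richer):

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_var, rng):
    """One EnKF analysis step with perturbed observations.

    ensemble: (n_state, n_members) prior states.
    obs_operator: (n_obs, n_state) linear observation matrix H.
    Minimal textbook sketch, not the operational implementation.
    """
    n_obs, n_mem = obs_operator.shape[0], ensemble.shape[1]
    Hx = obs_operator @ ensemble
    X = ensemble - ensemble.mean(axis=1, keepdims=True)     # state anomalies
    HX = Hx - Hx.mean(axis=1, keepdims=True)                # obs-space anomalies
    P_hh = HX @ HX.T / (n_mem - 1) + obs_err_var * np.eye(n_obs)
    K = (X @ HX.T / (n_mem - 1)) @ np.linalg.inv(P_hh)      # Kalman gain
    obs_pert = obs[:, None] + rng.normal(0.0, np.sqrt(obs_err_var), (n_obs, n_mem))
    return ensemble + K @ (obs_pert - Hx)

# Demo: a 200-member ensemble of one state, updated with one observation:
rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(1, 200))
posterior = enkf_update(prior, np.array([1.0]), np.array([[1.0]]), 0.01, rng)
```

Assimilating even a single observation pulls the ensemble toward it and shrinks its spread, which is the mechanism behind the "conditioned (updated) estimate" above.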

  12. Fast proton decay

    NASA Astrophysics Data System (ADS)

    Li, Tianjun; Nanopoulos, Dimitri V.; Walker, Joel W.

    2010-10-01

    We consider proton decay in the testable flipped SU(5)×U(1)X models with TeV-scale vector-like particles which can be realized in free fermionic string constructions and F-theory model building. We significantly improve upon the determination of light threshold effects from prior studies, and perform a fresh calculation of the second loop for the process p→eπ from the heavy gauge boson exchange. The cumulative result is comparatively fast proton decay, with a majority of the most plausible parameter space within reach of the future Hyper-Kamiokande and DUSEL experiments. Because the TeV-scale vector-like particles can be produced at the LHC, we predict a strong correlation between the most exciting particle physics experiments of the coming decade.

  13. Scalability of the muscular action in a parametric 3D model of the index finger.

    PubMed

    Sancho-Bru, Joaquín L; Vergara, Margarita; Rodríguez-Cervantes, Pablo-Jesús; Giurintano, David J; Pérez-González, Antonio

    2008-01-01

A method for scaling the muscle action is proposed and used to achieve a 3D inverse dynamic model of the human finger with all its components scalable. This method is based on scaling the physiological cross-sectional area (PCSA) in a Hill muscle model. Different anthropometric parameters and maximal grip force data have been measured, and their correlations have been analyzed and used for scaling the PCSA of each muscle. A linear relationship between the normalized PCSA and the product of the length and breadth of the hand was finally used for scaling, with a slope of 0.01315 cm(-2), where the length and breadth of the hand are expressed in centimeters. The parametric muscle model has been included in a parametric finger model previously developed by the authors, and it has been validated by reproducing the results of an experiment in which subjects from different population groups exerted maximal voluntary forces with their index finger in a controlled posture.
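The reported scaling relation is simple enough to state directly; a sketch assuming the normalized PCSA equals the reported slope times the hand length-breadth product, with the per-muscle reference PCSA a hypothetical placeholder (the abstract does not give the normalization baseline):

```python
def normalized_pcsa(hand_length_cm, hand_breadth_cm, slope_per_cm2=0.01315):
    """Normalized PCSA from the reported linear relation (slope 0.01315 cm^-2).

    Dimensionless: cm^-2 slope times a cm^2 hand length-breadth product.
    """
    return slope_per_cm2 * hand_length_cm * hand_breadth_cm

def scaled_pcsa(reference_pcsa_cm2, hand_length_cm, hand_breadth_cm):
    """Scale a hypothetical per-muscle reference PCSA by the subject factor."""
    return reference_pcsa_cm2 * normalized_pcsa(hand_length_cm, hand_breadth_cm)

# e.g. an 18 cm long, 8 cm broad hand:
factor = normalized_pcsa(18.0, 8.0)
```

Because maximal muscle force in a Hill model is proportional to PCSA, this single factor scales every muscle's force capacity with hand size.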

  14. DEVELOPMENT AND OPTIMIZATION OF GAS-ASSISTED GRAVITY DRAINAGE (GAGD) PROCESS FOR IMPROVED LIGHT OIL RECOVERY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dandina N. Rao; Subhash C. Ayirala; Madhav M. Kulkarni

This report describes the progress of the project ''Development And Optimization of Gas-Assisted Gravity Drainage (GAGD) Process for Improved Light Oil Recovery'' for the duration of the thirteenth project quarter (Oct 1, 2005 to Dec 30, 2005). There are three main tasks in this research project. Task 1 is a scaled physical model study of the GAGD process. Task 2 is further development of a vanishing interfacial tension (VIT) technique for miscibility determination. Task 3 is determination of multiphase displacement characteristics in reservoir rocks. Section I reports experimental work designed to investigate wettability effects of the porous medium on secondary and tertiary mode GAGD performance. The experiments showed a significant improvement of oil recovery in the oil-wet experiments versus the water-wet runs, in both secondary and tertiary mode. When comparing experiments conducted in secondary mode to those run in tertiary mode, an improvement in oil recovery was also evident. Additionally, this section summarizes progress made with regard to the scaled physical model construction and experimentation. The purpose of building a scaled physical model, which attempts to include various multiphase mechanics and fluid dynamic parameters operational in the field scale, was to incorporate visual verification of the gas front for viscous instabilities, capillary fingering, and stable displacement. Preliminary experimentation suggested that construction of the 2-D model from sintered glass beads was a feasible alternative. During this reporting quarter, several sintered glass mini-models were prepared and some preliminary experiments designed to visualize gas bubble development were completed.
In Section II, the gas-oil interfacial tensions measured in the decane-CO{sub 2} system at 100 F, and in a live decane mixture (25 mole% methane, 30 mole% n-butane and 45 mole% n-decane) against CO{sub 2} gas at 160 F, have been modeled using the Parachor and newly proposed mechanistic Parachor models. In the decane-CO{sub 2} binary system, the Parachor model was found to be sufficient for interfacial tension calculations. The predicted miscibility from the Parachor model deviated only by about 2.5% from the measured VIT miscibility. However, in the multicomponent live decane-CO{sub 2} system, the performance of the Parachor model was poor, while a good match of the measured interfacial tensions was obtained using the proposed mechanistic Parachor model. The predicted miscibility from the mechanistic Parachor model accurately matched the measured VIT miscibility in the live decane-CO{sub 2} system, which indicates the suitability of this model for predicting miscibility in complex multicomponent hydrocarbon systems. In the previous reports to the DOE (15323R07, Oct 2004; 15323R08, Jan 2005; 15323R09, Apr 2005; 15323R10, July 2005 and 154323, Oct 2005), the 1-D experimental results from dimensionally scaled GAGD and WAG corefloods were reported for Section III. Additionally, since Section I reports the experimental results from 2-D physical model experiments, this section attempts to extend this 2-D GAGD study to 3-D (4-phase) flow through porous media and evaluate the performance of these processes using reservoir simulation. Section IV includes the technology transfer efforts undertaken during the quarter. This research work resulted in one international paper presentation in Tulsa, OK; one journal publication; three pending abstracts for the SCA 2006 Annual Conference; and an invitation to present at the Independents Day session at the IOR Symposium 2006.

  15. Characterization of convective heating in full scale wildland fires

    Treesearch

    Bret Butler

    2010-01-01

Data collected in the International Crown Fire Modelling Experiment during 1999 are evaluated to characterize the magnitude and duration of convective energy heating in full scale crown fires. To accomplish this objective, data on total and radiant incident heat flux, air temperature, and horizontal and vertical gas velocities were evaluated. Total and radiant energy...

  16. Investigating the Mercalli Intensity Scale through "Lived Experience"

    ERIC Educational Resources Information Center

    Jones, Richard

    2012-01-01

    The modified Mercalli (MM) intensity scale is composed of 12 increasing levels of intensity that range from imperceptible shaking to catastrophic destruction and is designated by Roman numerals I through XII. Although qualitative in nature, it can provide a more concrete model for middle and high school students striving to understand the dynamics…

  17. Gravitational waves at interferometer scales and primordial black holes in axion inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    García-Bellido, Juan; Peloso, Marco; Unal, Caner, E-mail: juan.garciabellido@uam.es, E-mail: peloso@physics.umn.edu, E-mail: unal@physics.umn.edu

We study the prospects of detection at terrestrial and space interferometers, as well as at pulsar timing array experiments, of a stochastic gravitational wave background which can be produced in models of axion inflation. This potential signal, and the development of these experiments, open a new window on inflation on scales much smaller than those currently probed with Cosmic Microwave Background and Large Scale Structure measurements. The sourced signal generated in axion inflation is an ideal candidate for such searches, since it naturally grows at small scales, and it has specific properties (chirality and non-gaussianity) that can distinguish it from an astrophysical background. We study under which conditions such a signal can be produced at an observable level, without the simultaneous overproduction of scalar perturbations in excess of what is allowed by the primordial black hole limits. We also explore the possibility that scalar perturbations generated in a modified version of this model may provide a distribution of primordial black holes compatible with the current bounds, which can act as seeds of the present black holes in the universe.

  18. Why do lab-scale experiments ever resemble geological scale patterning?

    NASA Astrophysics Data System (ADS)

    Ferdowsi, Behrooz; Jones, Brandon C.; Stein, Jeremy L.; Shinbrot, Troy

    2017-11-01

    The Earth and other planets are abundant with curious and poorly understood surface patterns. Examples include sand dunes, periodic and aperiodic ridges and valleys, and networks of river and submarine channels. We make the minimalist proposition that the dominant mechanism governing these varied patterns is mass conservation: notwithstanding detailed particulars, the universal rule is mass conservation and there are only a finite number of surface patterns that can result from this process. To test this minimalist proposition, we perform experiments in a vertically vibrated bed of fine grains, and we show that every one of a wide variety of patterns seen in the laboratory is also seen in recorded geomorphologies. We explore a range of experimental driving frequencies and amplitudes, and we complement these experimental results with a non-local cellular automata model that reproduces the surface patterns seen using a minimalist approach that allows a free surface to deform subject to mass conservation and simple known forces such as gravity. These results suggest a common cause for the effectiveness of lab-scale models for geological scale patterning that otherwise ought to have no reasonable correspondence.
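The role of mass conservation can be illustrated with a toy relaxation rule (not the authors' cellular automaton): material moves one unit downhill wherever the local slope is too steep, so the total mass never changes while the surface reorganizes:

```python
def relax_step(heights, slope_max=2):
    """One sweep of a toy mass-conserving surface rule: wherever the slope
    between neighbours exceeds slope_max, move one unit of material downhill.
    Total mass is conserved exactly; only its arrangement changes."""
    h = list(heights)
    for i in range(len(h) - 1):
        if h[i] - h[i + 1] > slope_max:
            h[i] -= 1
            h[i + 1] += 1
        elif h[i + 1] - h[i] > slope_max:
            h[i + 1] -= 1
            h[i] += 1
    return h

profile = [9, 1, 7, 0, 5, 2, 8, 1]   # arbitrary rough surface, total mass 33
for _ in range(50):                   # relax until all slopes are gentle
    profile = relax_step(profile)
```

However the initial profile is arranged, the rule can only redistribute material, so every pattern it produces is a consequence of mass conservation plus a local slope threshold, in the spirit of the minimalist proposition above.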

  19. Scaling of ion expansion energy with laser flux in moderate-Z plasmas produced by lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, P.D.; Goel, S.K.; Uppal, J.S.

    1982-09-01

Ion expansion energy measurements in plasmas created by focusing a 1-GW, 5-nsec Nd:glass laser on plane solid targets of polythene, carbon, and aluminum are reported. It is observed that the scaling of ion expansion energy with laser flux Phi varies between Phi^0.28 and Phi^0.66 for polythene, Phi^0.28 and Phi^0.70 for carbon, and Phi^0.51 and Phi^0.44 for aluminum in the flux range 5 x 10^10 - 5 x 10^12 W/cm^2 of our experiment. The scaling is either much slower or faster than the Phi^(4/9) scaling expected from a self-regulating model for plasmas created in the low flux range. It is shown that this behavior, as well as results of experiments on similar plasmas reported by other authors, can be explained when radiation losses and the energy spent in ionization are also considered in the self-regulating model.
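The flux-scaling exponents quoted above are slopes of log-log fits; a sketch of recovering such an exponent from synthetic, illustrative ion-energy data:

```python
import math

def power_law_exponent(flux, energy):
    """Least-squares slope of log(energy) vs log(flux), i.e. the exponent
    alpha in E ~ Phi^alpha."""
    xs = [math.log(f) for f in flux]
    ys = [math.log(e) for e in energy]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic data following E = 2 * Phi^(4/9), the self-regulating-model slope:
flux = [5e10, 2e11, 8e11, 5e12]
energy = [2.0 * f ** (4.0 / 9.0) for f in flux]
alpha = power_law_exponent(flux, energy)
```

On real measurements the fitted alpha would carry scatter, and a systematic departure from 4/9, as in the aluminum data, signals physics missing from the simple self-regulating model.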

  20. Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stander, Nielen; Basudhar, Anirban; Basu, Ushnish

    2015-09-14

Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining sufficient strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steels (i.e., 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure, volume fraction, size and morphology of phase constituents as well as stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale microstructure-based modeling approach following the ICME strategy [0] was therefore chosen in this project.
Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996, which banned surface testing of nuclear devices [1]. As a result, experimental work shifted from large-scale tests to multi-scale experiments that provide material models with validation at different length scales. In the subsequent years, industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of multi-scale modeling, among them the reduction of product development time by alleviating costly trial-and-error iterations, and the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large-scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focused on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrate material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed, and the parameter identification of the individual material models at different length scales is demonstrated. The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.
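    The CP-to-SV calibration chain described above can be illustrated with a toy identification step: fitting a homogenized power-law flow stress to output from a finer-scale model by least squares. All names and values below are hypothetical and are not taken from the USAMP project.

```python
import numpy as np

# Toy two-scale identification (illustrative only): fit a homogenized
# state-variable flow stress sigma = A * eps**n to "data" produced by a
# finer-scale model, mirroring the CP -> SV calibration chain described above.
eps = np.linspace(0.01, 0.2, 50)          # plastic strain samples
sigma_cp = 900.0 * eps ** 0.18            # stand-in for crystal-plasticity output, MPa

# Linearize: log(sigma) = log(A) + n * log(eps), then solve by least squares.
n, logA = np.polyfit(np.log(eps), np.log(sigma_cp), 1)
A = np.exp(logA)                          # recovers A ~ 900 MPa, n ~ 0.18
```

    In a real ICME workflow the fit would run over many stress states and temperatures, but the structure (fine-scale model generates data, coarse model parameters identified against it) is the same.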

  1. Basin-scale estimates of oceanic primary production by remote sensing - The North Atlantic

    NASA Technical Reports Server (NTRS)

    Platt, Trevor; Caverhill, Carla; Sathyendranath, Shubha

    1991-01-01

    The monthly averaged CZCS data for 1979 are used to estimate annual primary production at ocean-basin scales in the North Atlantic. The principal supplementary data used were 873 vertical profiles of chlorophyll and 248 sets of parameters derived from photosynthesis-light experiments. Four different procedures were tested for the calculation of primary production. The spectral model with nonuniform biomass was taken as the benchmark against which the other three models were compared. The less complete models gave results that differed by as much as 50 percent from the benchmark. Vertically uniform models tended to underestimate primary production by about 20 percent compared to the nonuniform models. At the horizontal scale, the differences between spectral and nonspectral models were negligible. The linear correlation between biomass and estimated production was poor outside the tropics, suggesting caution against the indiscriminate use of biomass as a proxy variable for primary production.
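    The vertically uniform, nonspectral case referred to above can be sketched with the saturating photosynthesis-light (P-I) response commonly associated with Platt and co-workers; every parameter value below is illustrative, not taken from the paper.

```python
import numpy as np

# Saturating P-I curve (Jassby-Platt form): P = Pmax * tanh(alpha * I / Pmax)
def production_profile(I0, k_d, z, Pmax, alpha, B):
    """Production (mg C m^-3 per unit time) at depths z for uniform biomass B."""
    I = I0 * np.exp(-k_d * z)             # Beer-Lambert light attenuation
    return B * Pmax * np.tanh(alpha * I / Pmax)

z = np.linspace(0.0, 100.0, 1001)         # depth grid, m
P = production_profile(I0=350.0, k_d=0.04, z=z,
                       Pmax=3.0, alpha=0.05, B=0.5)

# Depth-integrated (water-column) production, mg C m^-2, by trapezoid rule
dz = z[1] - z[0]
column_production = float(np.sum(0.5 * (P[:-1] + P[1:]) * dz))
```

    The spectral, nonuniform-biomass benchmark would replace the scalar attenuation and the uniform B with wavelength-resolved light and a chlorophyll profile, which is where the 20-50 percent differences reported above arise.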

  2. Genome-to-Watershed Predictive Understanding of Terrestrial Environments

    NASA Astrophysics Data System (ADS)

    Hubbard, S. S.; Agarwal, D.; Banfield, J. F.; Beller, H. R.; Brodie, E.; Long, P.; Nico, P. S.; Steefel, C. I.; Tokunaga, T. K.; Williams, K. H.

    2014-12-01

    Although terrestrial environments play a critical role in cycling water, greenhouse gases, and other life-critical elements, the complexity of interactions among component microbes, plants, minerals, migrating fluids and dissolved constituents hinders predictive understanding of system behavior. The 'Sustainable Systems 2.0' project is developing genome-to-watershed scale predictive capabilities to quantify how the microbiome affects biogeochemical watershed functioning, how watershed-scale hydro-biogeochemical processes affect microbial functioning, and how these interactions co-evolve with climate and land-use changes. Development of such predictive capabilities is critical for guiding the optimal management of water resources, contaminant remediation, carbon stabilization, and agricultural sustainability - now and with global change. Initial investigations are focused on floodplains in the Colorado River Basin, and include iterative model development, experiments and observations with an early emphasis on subsurface aspects. Field experiments include local-scale experiments at Rifle, CO to quantify spatiotemporal metabolic and geochemical responses to O2 and nitrate amendments as well as floodplain-scale monitoring to quantify genomic and biogeochemical response to natural hydrological perturbations. Information obtained from such experiments is represented within GEWaSC, a Genome-Enabled Watershed Simulation Capability, which is being developed to allow mechanistic interrogation of how genomic information stored in a subsurface microbiome affects biogeochemical cycling. This presentation will describe the genome-to-watershed scale approach as well as early highlights associated with the project.
Highlights include: first insights into the diversity of the subsurface microbiome and metabolic roles of organisms involved in subsurface nitrogen, sulfur and hydrogen and carbon cycling; the extreme variability of subsurface DOC and hydrological controls on carbon and nitrogen cycling; geophysical identification of floodplain hotspots that are useful for model parameterization; and GEWaSC demonstration of how incorporation of identified microbial metabolic processes improves prediction of the larger system biogeochemical behavior.

  3. Numerical modeling of the simulated gas hydrate production test at Mallik 2L-38 in the pilot scale pressure reservoir LARS - Applying the "foamy oil" model

    NASA Astrophysics Data System (ADS)

    Abendroth, Sven; Thaler, Jan; Klump, Jens; Schicks, Judith; Uddin, Mafiz

    2014-05-01

    In the context of the German joint project SUGAR (Submarine Gas Hydrate Reservoirs: exploration, extraction and transport) we conducted a series of experiments in the LArge Reservoir Simulator (LARS) at the German Research Centre for Geosciences (GFZ) Potsdam. These experiments allow us to investigate the formation and dissociation of hydrates under large-scale laboratory conditions. We performed an experiment similar to the field-test conditions of the production test in the Mallik gas hydrate field (Mallik 2L-38) in the Beaufort Mackenzie Delta of the Canadian Arctic. The aim of this experiment was to study the transport behavior of fluids in gas hydrate reservoirs during depressurization (see also Heeschen et al. and Priegnitz et al., this volume). The experimental results from LARS are used to provide details about processes inside the pressure vessel, to validate the models through history matching, and to feed back into the design of future experiments. In the LARS experiments, the amount of methane produced from gas hydrates was much lower than expected. Previously published models predict a methane production rate higher than the one observed in experiments and field studies (Uddin et al. 2010; Wright et al. 2011). The authors of the aforementioned studies point out that the current modeling approach overestimates the gas production rate when modeling gas production by depressurization. They suggest that trapping of gas bubbles inside the porous medium is responsible for the reduced gas production rate. They point out that this behavior of multi-phase flow is not well explained by a "residual oil" model, but rather resembles a "foamy oil" model. Our study applies Uddin's (2010) "foamy oil" model and combines it with history matches of our experiments in LARS. Our results indicate better agreement between experimental and model results when using the "foamy oil" model instead of conventional models of gas flow in water. References: Uddin M., Wright J.F. 
and Coombe D. (2010) - Numerical Study of gas evolution and transport behaviors in natural gas hydrate reservoirs; CSUG/SPE 137439. Wright J.F., Uddin M., Dallimore S.R. and Coombe D. (2011) - Mechanisms of gas evolution and transport in a producing gas hydrate reservoir: an unconventional basis for successful history matching of observed production flow data; International Conference on Gas Hydrates (ICGH 2011).

  4. Physico-chemo-mechanical coupling mechanisms in soil behavior

    NASA Astrophysics Data System (ADS)

    Hu, Liangbo

    Many processes in geomechanics or geotechnical/geomechanical system engineering involve phenomena that are physical and/or chemical in nature, the understanding of which is crucial to modeling the mechanical responses of soils to various loads. Such physico-chemo-mechanical coupling mechanisms are prevalent in the two types of geomechanical processes studied in this dissertation: long-term soil/sediment compaction and desiccation cracking. Most commonly the underlying physical and chemical phenomena are explained, formulated and quantified at the microscopic level. In addition to the necessity of capturing the coupling mechanisms, another common thread that emerges in formulating their respective mathematical models is the necessity of linking phenomena occurring at different scales with a theory formulated at a macroscopic continuum level. Part I of this dissertation is focused on the long-term compaction behavior of soils and sediments. The interest in this subject arises from the need to evaluate reservoir compaction and land subsidence that may result from oil/gas extraction in petroleum engineering. First, a damage-enhanced reactive chemo-plasticity model is developed to simulate creep of saturated geomaterials, a long-term strain developed at constant stress. Both open and closed systems are studied. The deformation at a constant load in a closed system exhibits most of the characteristics of classical creep. Primary, secondary and tertiary creep can be interpreted in terms of the dominant mechanisms in each phase, emphasizing the role of the rates of dissolution and precipitation, variable reaction areas and chemical softening intensity. The rest of Part I is devoted to the study of soil aging, an effect of localized, mineral-dissolution-related creep strain and subsequent material stiffening. 
A three-scale mathematical model is developed to numerically simulate the scenarios proposed based on macroscopic experiments and geochemical evidence. These scales are: the micro-scale for intra-grain dissolution, the meso-scale for processes within the grain assembly, and the macro-scale of a granular continuum. This model makes it possible to predict the porosity evolution starting from a very simple grain assembly under different pressures at the meso-scale, and to evaluate the evolution of the stiffness as a function of the aging duration and the associated stress at the macro-scale. The results are qualitative but reproduce well the main phenomena and tendencies. Subsequently, this model is further examined to study the feedback mechanisms in multi-scale phenomena of sediment compaction and their role in chemo-hydro-geomechanical modeling. Part II of this dissertation deals with desiccation cracking of soils. The presence of cracks is a major cause of deteriorated and compromised engineering properties of soils in earthworks, such as a dramatic increase in permeability or a decrease in strength. Desiccation cracking is first addressed in an experimental study of shrinkage and cracking of a soil slab with water removed by isothermal drying. This study is followed by a numerical simulation of a solid phase continuum based on hygro-elastic theory. The experiments confirm that a substantial part of shrinkage occurs in the saturated phase and that the kinematic boundary constraints play a crucial role in generating tensile stress and eventually cracks. Subsequently a novel experimental parametric study is performed using different liquids for the pore fluids to further investigate the role of solid-fluid-gas interaction. Biot's theory is employed to perform a numerical parametric study. 
The amount of shrinkage depends mainly on the soil compressibility; the rate of fluid removal and the rate of shrinkage, on the other hand, are found to be controlled by evaporative and permeability properties. Additionally, a microscopic experimental and phenomenological study is performed to link the engineering properties and macroscopic variables to the phenomena occurring at the pore scale. The Mercury Intrusion Porosimetry (MIP) technique is used to reveal the evolution of the pore sizes. The large pores are found to be mainly responsible for the shrinking deformation. A microscopic model is developed to simulate the possible scenarios during the entire desaturated phase. A quantitative comparison with MIP results and macroscopic experiments is made possible by using the averaging method to upscale the variables obtained at the micro-scale. The main characteristics of shrinkage behavior observed in macroscopic experiments are generally reproduced.

  5. A Prescriptive, Intergenerational-Tension Ageism Scale: Succession, Identity, and Consumption (SIC)

    PubMed Central

    North, Michael S.; Fiske, Susan T.

    2014-01-01

    We introduce a novel ageism scale, focusing on prescriptive beliefs concerning potential intergenerational tensions: active, envied resource Succession, symbolic Identity avoidance, and passive, shared-resource Consumption (SIC). Four studies (2,010 total participants) developed the scale. EFA formed an initial 20-item, three-factor solution (Study 1). The scale converges appropriately with other prejudice measures and diverges from other social control measures (Study 2). It diverges from anti-youth ageism (Study 3). Study 4’s experiment yielded both predictive and divergent validity apropos another ageism measure. Structural equation modeling confirmed model fit across all studies. Per an intergenerational-tension focus, younger people consistently scored the highest. As generational equity issues intensify, the scale provides a contemporary tool for current and future ageism research. PMID:23544391

  6. The effect of latent heat release on synoptic-to-planetary wave interactions and its implication for satellite observations: Theoretical modeling

    NASA Technical Reports Server (NTRS)

    Branscome, Lee E.; Bleck, Rainer; Obrien, Enda

    1990-01-01

    The project objectives are to develop process models to investigate the interaction of planetary and synoptic-scale waves including the effects of latent heat release (precipitation), nonlinear dynamics, physical and boundary-layer processes, and large-scale topography; to determine the importance of latent heat release for temporal variability and time-mean behavior of planetary and synoptic-scale waves; to compare the model results with available observations of planetary and synoptic wave variability; and to assess the implications of the results for monitoring precipitation in oceanic-storm tracks by satellite observing systems. Researchers have utilized two different models for this project: a two-level quasi-geostrophic model to study intraseasonal variability, anomalous circulations and the seasonal cycle, and a 10-level, multi-wave primitive equation model to validate the two-level Q-G model and examine effects of convection, surface processes, and spherical geometry. It explicitly resolves several planetary and synoptic waves and includes specific humidity (as a predicted variable), moist convection, and large-scale precipitation. In the past year researchers have concentrated on experiments with the multi-level primitive equation model. The dynamical part of that model is similar to the spectral model used by the National Meteorological Center for medium-range forecasts. The model includes parameterizations of large-scale condensation and moist convection. To test the validity of results regarding the influence of convective precipitation, researchers can use either one of two different convective schemes in the model, a Kuo convective scheme or a modified Arakawa-Schubert scheme which includes downdrafts. By choosing one or the other scheme, they can evaluate the impact of the convective parameterization on the circulation. In the past year researchers performed a variety of initial-value experiments with the primitive-equation model. 
Using initial conditions typical of climatological winter conditions, they examined the behavior of synoptic and planetary waves growing in moist and dry environments. Surface conditions were representative of a zonally averaged ocean. They found that moist convection associated with baroclinic wave development was confined to the subtropics.

  7. A sprinkling experiment to quantify celerity-velocity differences at the hillslope scale

    EPA Science Inventory

    The difference between celerity and velocity of hillslope water flow is poorly understood. We assessed these differences by combining a 24-day hillslope sprinkling experiment with a spatially explicit hydrologic model analysis. We focused our work at Watershed 10 at the H.J. And...

  8. Psychometric Properties of Work-Related Behavior and Experience Patterns (AVEM) Scale

    ERIC Educational Resources Information Center

    Gencer, R. Timucin; Boyacioglu, Hayal; Kiremitci, Olcay; Dogan, Birol

    2010-01-01

    "Work-Related Behaviour and Experience Patterns" (AVEM) has been developed with the intention of determining the occupation related behaviour and lifestyle models of professionals. This study has been conducted to test the validity and reliability of MEDYAM, the abbreviated Turkish equivalent of AVEM. 373 teachers from 10 different…

  9. A Simple Boyle's Law Experiment.

    ERIC Educational Resources Information Center

    Lewis, Don L.

    1997-01-01

    Describes an experiment to demonstrate Boyle's law that provides pressure measurements in a familiar unit (psi) and makes no assumptions concerning atmospheric pressure. Items needed include bathroom scales and a 60-ml syringe, castor oil, disposable 3-ml syringe and needle, modeling clay, pliers, and a wooden block. Commercial devices use a…

  10. Friction in debris flows: inferences from large-scale flume experiments

    USGS Publications Warehouse

    Iverson, Richard M.; LaHusen, Richard G.; ,

    1993-01-01

    A recently constructed flume, 95 m long and 2 m wide, permits systematic experimentation with unsteady, nonuniform flows of poorly sorted geological debris. Preliminary experiments with water-saturated mixtures of sand and gravel show that they flow in a manner consistent with Coulomb frictional behavior. The Coulomb flow model of Savage and Hutter (1989, 1991), modified to include quasi-static pore-pressure effects, predicts flow-front velocities and flow depths reasonably well. Moreover, simple scaling analyses show that grain friction, rather than liquid viscosity or grain collisions, probably dominates shear resistance and momentum transport in the experimental flows. The same scaling indicates that grain friction is also important in many natural debris flows.
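    The scaling argument can be illustrated with simplified forms of the Savage and Bagnold numbers, which compare grain-collision stress to grain-friction stress and to viscous fluid stress, respectively. Exact definitions vary by author, and the input values below are assumed for illustration rather than taken from the paper.

```python
# Simplified dimensionless numbers used in debris-flow scaling arguments.
def savage_number(shear_rate, grain_d, rho_s, rho_f, g, depth):
    """Ratio of grain-collision stress to grain-friction stress
    (simplified form; exact definitions vary by author)."""
    return (rho_s * (shear_rate * grain_d) ** 2) / ((rho_s - rho_f) * g * depth)

def bagnold_number(shear_rate, grain_d, rho_s, mu_f):
    """Ratio of grain-collision stress to viscous fluid stress (simplified)."""
    return rho_s * grain_d ** 2 * shear_rate / mu_f

# Illustrative sand-and-gravel flow values (assumed, not from the experiments):
# shear rate 20 1/s, 1 mm grains, 0.1 m flow depth, water pore fluid.
N_sav = savage_number(shear_rate=20.0, grain_d=0.001,
                      rho_s=2700.0, rho_f=1000.0, g=9.81, depth=0.1)
N_bag = bagnold_number(shear_rate=20.0, grain_d=0.001,
                       rho_s=2700.0, mu_f=0.001)
# N_sav << 1 indicates that grain friction, not grain collisions,
# dominates shear resistance, consistent with the conclusion above.
```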

  11. Approximate similarity principle for a full-scale STOVL ejector

    NASA Astrophysics Data System (ADS)

    Barankiewicz, Wendy S.; Perusek, Gail P.; Ibrahim, Mounir B.

    1994-03-01

    Full-scale ejector experiments are expensive and difficult to implement at engine exhaust temperatures. For this reason the utility of using similarity principles, in particular the Munk and Prim principle for isentropic flow, was explored. Static performance test data for a full-scale thrust-augmenting ejector were analyzed for primary flow temperatures up to 1560 R. At different primary temperatures, exit pressure contours were compared for similarity. A nondimensional flow parameter is then used to eliminate primary nozzle temperature dependence and verify similarity between the hot and cold flow experiments. Under the assumption that an appropriate similarity principle can be established, properly chosen performance parameters were found to be similar for both hot flow and cold flow model tests.
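    The Munk and Prim principle implies that, with geometry and the pressure field matched, the Mach number distribution is unchanged and velocities scale with the square root of stagnation temperature. A minimal sketch of mapping a hot-flow velocity to its cold-flow equivalent (values illustrative, not from the tests):

```python
import math

def scale_velocity(v_hot, Tt_hot, Tt_cold):
    """Munk-Prim similarity: with matched pressures and geometry, the Mach
    field is invariant, so velocities scale with sqrt(stagnation temperature)."""
    return v_hot * math.sqrt(Tt_cold / Tt_hot)

# Map a primary-flow velocity at 1560 R to an ambient-temperature (520 R) test.
v_cold = scale_velocity(v_hot=1800.0, Tt_hot=1560.0, Tt_cold=520.0)  # ft/s, illustrative
```

    This is the kind of temperature-dependence removal that a nondimensional flow parameter accomplishes: hot and cold tests collapse onto one curve when velocities are normalized this way.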

  12. Apparent Dependence of Rate- and State-Dependent Friction Parameters on Loading Velocity and Cumulative Displacement Inferred from Large-Scale Biaxial Friction Experiments

    NASA Astrophysics Data System (ADS)

    Urata, Yumi; Yamashita, Futoshi; Fukuyama, Eiichi; Noda, Hiroyuki; Mizoguchi, Kazuo

    2017-06-01

    We investigated the constitutive parameters of the rate- and state-dependent friction (RSF) law by conducting numerical simulations using the friction data from large-scale biaxial rock friction experiments on Indian metagabbro. The sliding surface was 1.5 m long and 0.5 m wide and slid for 400 s under a normal stress of 1.33 MPa at a loading velocity of either 0.1 or 1.0 mm/s. During the experiments, many stick-slip events were observed, with the following features. (1) The friction drop and recurrence time of the stick-slip events increased with cumulative slip displacement in an experiment before which the gouges on the surface were removed, but they became almost constant throughout an experiment conducted after several experiments without gouge removal. (2) The friction drop was larger and the recurrence time was shorter in the experiments with faster loading velocity. We applied a one-degree-of-freedom spring-slider model with mass to estimate the RSF parameters by fitting the stick-slip intervals and slip-weakening curves measured from the spring force and the acceleration of the specimens. We developed an efficient algorithm for the numerical time integration, and we conducted forward modeling over the evolution parameter (b) and the state-evolution distance (L_c), keeping the direct-effect parameter (a) constant. We then identified the confident range of b and L_c values. Comparison between the results of the experiments and our simulations suggests that both b and L_c increase as the cumulative slip displacement increases, and that b increases and L_c decreases as the loading velocity increases. Conventional RSF laws could not explain the large-scale friction data, and more complex state evolution laws are needed.
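    Two standard RSF relations are enough to illustrate the roles of a, b, and L_c: the steady-state friction law, and the critical spring stiffness from linear stability analysis of the spring-slider, below which steady sliding gives way to stick-slip. The parameter values below are assumed for illustration and are not the estimates from the paper.

```python
import math

# Steady-state rate-and-state friction (aging law):
#   mu_ss(v) = mu0 + (a - b) * ln(v / v0)
def mu_steady_state(v, mu0=0.6, a=0.005, b=0.009, v0=1e-4):
    return mu0 + (a - b) * math.log(v / v0)

def critical_stiffness(sigma_n, a=0.005, b=0.009, L_c=1e-5):
    """Spring stiffness (Pa/m) below which steady sliding of a 1-DOF
    spring-slider is unstable, i.e. stick-slip occurs: k_c = sigma_n(b-a)/L_c."""
    return sigma_n * (b - a) / L_c

sigma_n = 1.33e6                      # Pa, the normal stress used in the experiments
k_c = critical_stiffness(sigma_n)     # hypothetical a, b, L_c values

# Velocity weakening (b > a): steady-state friction drops at faster slip rates.
assert mu_steady_state(1e-3) < mu_steady_state(1e-4)
```

    Fitting observed stick-slip intervals and weakening curves, as done in the study, effectively constrains where (b, L_c) must sit for the simulated instability to match the measured one.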

  13. Enrichment scale determines herbivore control of primary producers.

    PubMed

    Gil, Michael A; Jiao, Jing; Osenberg, Craig W

    2016-03-01

    Anthropogenic nutrient enrichment stimulates primary production and threatens natural communities worldwide. Herbivores may counteract deleterious effects of enrichment by increasing their consumption of primary producers. However, field tests of herbivore control are often done by adding nutrients at small (e.g., sub-meter) scales, while enrichment in real systems often occurs at much larger scales (e.g., kilometers). Therefore, experimental results may be driven by processes that are not relevant at larger scales. Using a mathematical model, we show that herbivores can control primary producer biomass in experiments by concentrating their foraging in small enriched plots; however, at larger, realistic scales, the same mechanism may not lead to herbivore control of primary producers. Instead, other demographic mechanisms are required, but these are not examined in most field studies (and may not operate in many systems). This mismatch between experiments and natural processes suggests that many ecosystems may be less resilient to degradation via enrichment than previously believed.
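    A generic two-patch sketch (not the authors' model) shows the mechanism: when herbivores can concentrate their grazing on a small enriched plot, local grazing pressure rises and producer biomass is suppressed there, whereas landscape-scale enrichment offers no such concentration. All parameter values are hypothetical.

```python
# Logistic producer with linear grazing: dP/dt = r*P*(1 - P/K) - g*P,
# which has the steady state P* = K * (1 - g/r) (or 0 if grazing exceeds growth).
def steady_producer(r, K, grazing):
    return max(K * (1.0 - grazing / r), 0.0)

r = 1.0          # producer growth rate (hypothetical)
H = 0.4          # background grazing pressure (hypothetical)
K_enriched = 1.5 # enrichment raises carrying capacity

# Small enriched plot: herbivores aggregate, doubling local grazing pressure.
p_small_plot = steady_producer(r, K_enriched, grazing=2.0 * H)
# Landscape-scale enrichment: no aggregation possible, grazing stays at H.
p_landscape = steady_producer(r, K_enriched, grazing=H)

# Producer biomass is held lower in the small plot than at the large scale,
# so small-plot experiments can overstate herbivore control.
```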

  14. On the Representation of Subgrid Microtopography Effects in Process-based Hydrologic Models

    NASA Astrophysics Data System (ADS)

    Jan, A.; Painter, S. L.; Coon, E. T.

    2017-12-01

    Increased availability of high-resolution digital elevation data is enabling process-based hydrologic modeling on finer and finer scales. However, spatial variability in surface elevation (microtopography) exists below the scale of a typical hyper-resolution grid cell and has the potential to play a significant role in water retention, runoff, and surface/subsurface interactions. Though the concept of microtopographic features (depressions, obstructions) and the associated implications for flow and discharge are well established, representing those effects in watershed-scale integrated surface/subsurface hydrology models remains a challenge. Using the complex and coupled hydrologic environment of the Arctic polygonal tundra as an example, we study the effects of submeter topography and present a subgrid model parameterized by small-scale spatial heterogeneities for use in hyper-resolution models with polygons at a scale of 15-20 meters forming the surface cells. The subgrid model alters the flow and storage terms in the diffusion wave equation for surface flow. We compare our results against sub-meter scale simulations (which act as a benchmark) and hyper-resolution models without the subgrid representation. The initiation of runoff in the fine-scale simulations is delayed and the recession curve is slowed relative to simulated runoff using the hyper-resolution model with no subgrid representation. Our subgrid modeling approach improves the representation of runoff and water retention relative to models that ignore subgrid topography. We evaluate different strategies for parameterizing the subgrid model and present a classification-based method to extend efficiently to larger landscapes. This work was supported by the Interoperable Design of Extreme-scale Application Software (IDEAS) project and the Next-Generation Ecosystem Experiments-Arctic (NGEE Arctic) project. 
NGEE-Arctic is supported by the Office of Biological and Environmental Research in the DOE Office of Science.
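    One simple way to build the subgrid storage term described above is to integrate the positive water depth over a distribution of subgrid surface elevations; storage then grows smoothly with stage instead of switching on at a single flat-bed elevation. The sketch below uses a synthetic Gaussian microtopography (the 5 cm relief is assumed, not taken from the study).

```python
import numpy as np

def subgrid_storage(eta, z_subgrid):
    """Stored water volume per unit cell area for water-surface elevation eta,
    given sampled subgrid ground elevations: the mean positive depth."""
    depth = np.clip(eta - z_subgrid, 0.0, None)
    return float(depth.mean())

rng = np.random.default_rng(0)
z = rng.normal(0.0, 0.05, 10_000)   # submeter relief, sigma = 5 cm (assumed)

# Even at eta = 0 (the mean ground elevation) depressions already hold water,
# and storage increases smoothly as the stage rises.
s_low = subgrid_storage(0.00, z)
s_high = subgrid_storage(0.05, z)
```

    In a hyper-resolution model, a storage function like this (and an analogous wetted-fraction correction to the flux term) replaces the flat-cell assumption in the diffusion wave equation.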

  15. Fast inertial particle manipulation in oscillating flows

    NASA Astrophysics Data System (ADS)

    Thameem, Raqeeb; Rallabandi, Bhargav; Hilgenfeldt, Sascha

    2017-05-01

    It is demonstrated that micron-sized particles suspended in fluid near oscillating interfaces experience strong inertial displacements above and beyond the fluid streaming. Experiments with oscillating bubbles show rectified particle lift over extraordinarily short (millisecond) times. A quantitative model on both the oscillatory and the steady time scales describes the particle displacement relative to the fluid motion. The formalism yields analytical predictions confirming the observed scaling behavior with particle size and experimental control parameters. It applies to a large class of oscillatory flows with applications from particle trapping to size sorting.

  16. Comparisons of the Impact Responses of a 1/5-Scale Model and a Full-Scale Crashworthy Composite Fuselage Section

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Fasanella, Edwin L.; Lyle, Karen H.

    2003-01-01

    A 25-fps vertical drop test of a 1/5-scale model composite fuselage section was conducted to replicate a previous test of a full-scale fuselage section. The purpose of the test was to obtain experimental data characterizing the impact response of the 1/5-scale model fuselage section for comparison with the corresponding full-scale data. This comparison is performed to assess the scaling procedures and to determine if scaling effects are present. For the drop test, the 1/5-scale model fuselage section was configured similarly to the full-scale section, with lead masses attached to the floor through simulated seat rails. Scaled acceleration and velocity responses are compared and a general assessment of structural damage is made. To further quantify the data correlation, comparisons of the average acceleration data are made as a function of floor location and longitudinal position. Also, the percentage differences in the velocity change (area under the acceleration curve) and the velocity change squared (proportional to kinetic energy) are compared as a function of floor location. Finally, correlation coefficients are calculated for corresponding 1/5- and full-scale data channels and these values are plotted versus floor location. From a scaling perspective, the differences between the 1/5- and full-scale tests are relatively small, indicating that appropriate scaling procedures were used in fabricating the test specimens and in conducting the experiments. The small differences in the scaled test data are attributed to minor scaling anomalies in mass, potential energy, and impact attitude.
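    For a replica model with identical materials and identical impact velocity, standard similitude relations give the factors used in such comparisons; a minimal sketch for a 1/5-scale model (the relations are the generic replica-scaling ones, not details from the paper):

```python
# Replica-model scale factors for a geometrically scaled drop test with
# identical materials and identical impact velocity (standard similitude):
# lengths and times scale by lam, velocities are unchanged, accelerations
# scale by 1/lam, and masses/energies scale by lam^3.
def replica_factors(lam):
    return {
        "length": lam,
        "time": lam,
        "velocity": 1.0,
        "acceleration": 1.0 / lam,
        "mass": lam ** 3,
        "force": lam ** 2,
        "energy": lam ** 3,
    }

f = replica_factors(1 / 5)
# e.g. a 25 ft/s full-scale impact is also tested at 25 ft/s in the model,
# but model accelerations are 5x the full-scale values and pulse durations
# are 1/5 as long, which is why the data must be rescaled before comparison.
```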

  17. Laser absorption, power transfer, and radiation symmetry during the first shock of inertial confinement fusion gas-filled hohlraum experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pak, A.; Dewald, E. L.; Landen, O. L.

    2015-12-15

    Temporally resolved measurements of the hohlraum radiation flux asymmetry incident onto a bismuth coated surrogate capsule have been made over the first two nanoseconds of ignition relevant laser pulses. Specifically, we study the P2 asymmetry of the incoming flux as a function of cone fraction, defined as the inner-to-total laser beam power ratio, for a variety of hohlraums with different scales and gas fills. This work was performed to understand the relevance of recent experiments, conducted in new reduced-scale neopentane gas filled hohlraums, to full scale helium filled ignition targets. Experimental measurements, matched by 3D view factor calculations, are used to infer differences in symmetry, relative beam absorption, and cross beam energy transfer (CBET), employing an analytic model. Despite differences in hohlraum dimensions and gas fill, as well as in laser beam pointing and power, we find that laser absorption, CBET, and the cone fraction, at which a symmetric flux is achieved, are similar to within 25% between experiments conducted in the reduced and full scale hohlraums. This work demonstrates a close surrogacy in the dynamics during the first shock between reduced-scale and full scale implosion experiments and is an important step in enabling the increased rate of study for physics associated with inertial confinement fusion.

  18. Investigation of multi-scale flash-weakening of rock surfaces during high speed slip

    NASA Astrophysics Data System (ADS)

    Barbery, M. R.; Saber, O.; Chester, F. M.; Chester, J. S.

    2017-12-01

    A significant reduction in the coefficient of friction of rock can occur if sliding velocity approaches seismic rates, as a consequence of the weakening of microscopic sliding contacts by flash heating. Using a high-acceleration, high-speed biaxial apparatus equipped with a high-speed infrared (IR) camera to capture thermographs of the sliding surface, we have documented the heterogeneous distribution of temperature on flash-heated decimetric surfaces, characterized by linear arrays of high-temperature, mm-size spots and streaks. Numerical models that are informed by the character of flash-heated surfaces and that consider the coupling between changes in temperature and changes in contact friction support the hypothesis that independent mechanisms of flash weakening operate at different contact scales. Here, we report on new experiments that provide additional constraints on the life-times and rest-times of populations of millimeter-scale contacts. Rock friction experiments conducted on Westerly granite samples in a double-direct shear configuration achieve velocity steps from 1 mm/s to 900 mm/s at 100g accelerations over 2 mm of displacement, with normal stresses of 22-36 MPa and 30 mm of displacement during sustained high-speed sliding. Sliding surfaces are machined to a roughness similar to that of natural fault surfaces, which allows us to control the characteristics of millimeter-scale contact populations. Thermographs of the sliding surface show temperatures up to 200 C on millimeter-scale contacts, in agreement with 1-D heat conduction model estimates of 180 C. Preliminary comparison of thermal modeling results and experimental observations demonstrates that we can distinguish the different life-times and rest-times of contacts in thermographs and the corresponding frictional weakening behaviors. 
Continued work on machined surfaces that lead to different contact population characteristics will be used to test the multi-scale and multi-mechanism hypothesis for flash weakening during seismic slip on rough fault surfaces.
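
    The 1-D heat conduction estimate mentioned above can be sketched with the classic constant-flux half-space solution. This is a generic textbook formula, not the authors' model: the granite properties (k ≈ 2.5 W/m·K, α ≈ 1.1e-6 m²/s), friction coefficient, and contact lifetime below are illustrative assumptions, and partitioning of heat between the two sliding surfaces is ignored.

```python
import math

def flash_temperature_rise(mu, sigma_n, v, t_contact, k=2.5, alpha=1.1e-6):
    """Temperature rise (K) of a sliding contact modeled as a 1-D half-space
    heated by a constant frictional flux q = mu * sigma_n * v.
    Constant-flux half-space solution: dT = (2*q/k) * sqrt(alpha * t / pi)."""
    q = mu * sigma_n * v  # frictional heat flux, W/m^2
    return (2.0 * q / k) * math.sqrt(alpha * t_contact / math.pi)

# Conditions loosely matching the experiment (all assumed, not from the
# abstract except the 30 MPa stress and ~0.9 m/s slip rate): friction ~0.6,
# and a millimeter-scale contact swept over in ~1 ms at 0.9 m/s.
dT = flash_temperature_rise(mu=0.6, sigma_n=30e6, v=0.9, t_contact=1.1e-3)
```

    With these inputs the estimate lands in the low hundreds of degrees, the same ballpark as the 180-200 C contact temperatures reported above.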

  19. Direct comparison of elastic incoherent neutron scattering experiments with molecular dynamics simulations of DMPC phase transitions.

    PubMed

    Aoun, Bachir; Pellegrini, Eric; Trapp, Marcus; Natali, Francesca; Cantù, Laura; Brocca, Paola; Gerelli, Yuri; Demé, Bruno; Marek Koza, Michael; Johnson, Mark; Peters, Judith

    2016-04-01

    Neutron scattering techniques have been employed to investigate 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC) membranes in the form of multilamellar vesicles (MLVs) and deposited, stacked multilamellar bilayers (MLBs), covering transitions from the gel to the liquid phase. Neutron diffraction was used to characterise the samples in terms of transition temperatures, whereas elastic incoherent neutron scattering (EINS) demonstrates that the dynamics on the sub-macromolecular length-scale and pico- to nano-second time-scale are correlated with the structural transitions through a discontinuity in the observed elastic intensities and the derived mean square displacements. Molecular dynamics simulations have been performed in parallel, focussing on the length-, time- and temperature-scales of the neutron experiments. They correctly reproduce the structural features of the main gel-liquid phase transition. Particular emphasis is placed on the dynamical amplitudes derived from experiment and simulations. Two methods are used to analyse the experimental data and mean square displacements. They agree within a factor of 2 irrespective of the probed time-scale, i.e. the instrument utilized. Mean square displacements computed from simulations show a comparable level of agreement with the experimental values, although the best match between the two methods varies for the two instruments. Consequently, experiments and simulations together give a consistent picture of the structural and dynamical aspects of the main lipid transition and provide a basis for future theoretical modelling of dynamics and phase behaviour in membranes. The need for more detailed analytical models is pointed out by the remaining variation in the dynamical amplitudes derived in two different ways from experiments on the one hand and simulations on the other.
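
    Mean square displacements are commonly extracted from EINS elastic intensities via the Gaussian approximation, under which ln S(Q) falls linearly with Q². The sketch below uses one common convention (slope = -<u²>/3); conventions differ by constant factors, and this is not necessarily either of the two analysis methods used in the paper.

```python
import math

def msd_from_elastic(q_vals, intensities):
    """Estimate the mean square displacement <u^2> (in A^2 if Q is in 1/A)
    from elastic intensities via the Gaussian approximation
    S(Q) ~ exp(-Q^2 <u^2> / 3): a least-squares line fit of ln S against Q^2
    has slope -<u^2>/3."""
    x = [q * q for q in q_vals]
    y = [math.log(i) for i in intensities]
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
             / sum((xi - xm) ** 2 for xi in x))
    return -3.0 * slope

# Synthetic check: noiseless data generated with <u^2> = 0.9 A^2.
q = [0.5, 1.0, 1.5, 2.0]
s = [math.exp(-qi * qi * 0.9 / 3.0) for qi in q]
msd = msd_from_elastic(q, s)
```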

  20. Extracting Models in Single Molecule Experiments

    NASA Astrophysics Data System (ADS)

    Presse, Steve

    2013-03-01

    Single molecule experiments can now monitor the journey of a protein from its assembly near a ribosome to its proteolytic demise. Ideally, all single molecule data should be self-explanatory. However, data originating from single molecule experiments are particularly challenging to interpret on account of fluctuations and noise at such small scales. Realistically, basic understanding comes from models carefully extracted from the noisy data. Statistical mechanics, and maximum entropy in particular, provide a powerful framework for accomplishing this task in a principled fashion. Here I will discuss our work in extracting conformational memory from single molecule force spectroscopy experiments on large biomolecules. One clear advantage of this method is that we let the data tend towards the correct model; we do not fit the data. I will show that the dynamical model of the single molecule dynamics which emerges from this analysis is often more textured and complex than one obtained by fitting the data to a preconceived model.

  1. Comparison of theory and experiment for NAPL dissolution in porous media

    NASA Astrophysics Data System (ADS)

    Bahar, T.; Golfier, F.; Oltéan, C.; Lefevre, E.; Lorgeoux, C.

    2018-04-01

    Contamination of groundwater resources by an immiscible organic phase, commonly called NAPL (Non-Aqueous Phase Liquid), represents a major scientific challenge considering the residence time of such a pollutant. This contamination leads to the formation of NAPL blobs trapped in the soil, and the impact of this residual saturation cannot be ignored for correct predictions of the contaminant fate. In this paper, we present results of micromodel experiments on the dissolution of a pure hydrocarbon phase (toluene). They were conducted for two values of the Péclet number. These experiments provide data for comparison and validation of a two-phase non-equilibrium theoretical model developed by Quintard and Whitaker (1994) using the volume averaging method. The model was directly upscaled from the averaged pore-scale mass balance equations. The effective properties of the macroscopic model were calculated over periodic unit cells designed from images of the experimental flow cell. Comparison of experimental and numerical results shows that the transport model correctly predicts - with no fitting parameters - the main mechanisms of NAPL mass transfer. The study highlights the crucial need for a faithful recovery of pore-scale characteristic lengths to predict the mass transfer coefficient with accuracy.
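
    Non-equilibrium dissolution models of the kind validated here exchange mass between phases at a rate proportional to the departure of the aqueous concentration from solubility. A minimal linear-driving-force sketch (not the upscaled Quintard-Whitaker model itself); the mass-transfer coefficient is an illustrative placeholder and the toluene solubility is a nominal literature value of roughly 515 mg/L:

```python
def dissolve(c0, c_sat, k_transfer, dt, steps):
    """Aqueous concentration under first-order (linear-driving-force)
    interphase mass transfer, dC/dt = k * (C_sat - C), integrated with an
    explicit Euler scheme."""
    c = c0
    for _ in range(steps):
        c += dt * k_transfer * (c_sat - c)
    return c

# Starting from clean water, the concentration relaxes toward solubility at
# a rate set entirely by the (assumed) mass-transfer coefficient.
c = dissolve(c0=0.0, c_sat=515.0, k_transfer=0.05, dt=0.1, steps=5000)
```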

  2. An Overview of the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE)

    NASA Astrophysics Data System (ADS)

    Sellers, P. J.; Hall, F. G.; Asrar, G.; Strebel, D. E.; Murphy, R. E.

    1992-11-01

    In the summer of 1983 a group of scientists working in the fields of meteorology, biology, and remote sensing met to discuss methods for modeling and observing land-surface—atmosphere interactions on regional and global scales. They concluded, first, that the existing climate models contained poor representations of the processes controlling the exchanges of energy, water, heat, and carbon between the land surface and the atmosphere and, second, that satellite remote sensing had been underutilized as a means of specifying global fields of the governing biophysical parameters. Accordingly, a multiscale, multidisciplinary experiment, FIFE, was initiated to address these two issues. The objectives of FIFE were specified as follows: (1) Upscale integration of models: The experiment was designed to test the soil-plant-atmosphere models developed by biometeorologists for small-scale applications (millimeters to meters) and to develop methods to apply them at the larger scales (kilometers) appropriate to atmospheric models and satellite remote sensing. (2) Application of satellite remote sensing: Even if the first goal were achieved to yield a "perfect" model of vegetation-atmosphere exchanges, it would have very limited applications without a global observing system for initialization and validation. As a result, the experiment was tasked with exploring methods for using satellite data to quantify important biophysical states and rates for model input. The experiment was centered on a 15 × 15 km grassland site near Manhattan, Kansas. This area became the focus for an extended monitoring program of satellite, meteorological, biophysical, and hydrological data acquisition from early 1987 through October 1989 and a series of 12- to 20-day intensive field campaigns (IFCs), four in 1987 and one in 1989. 
During the IFCs the fluxes of heat, moisture, carbon dioxide, and radiation were measured with surface and airborne equipment in coordination with measurements of surface and atmospheric parameters and satellite overpasses. The resulting data are held in a single integrated data base and continue to be analyzed by the participating scientists and others. The first two sections of this paper recount the history and scientific background leading up to FIFE; the third and fourth sections review the experiment design, the scientific teams and equipment involved, and the actual execution of the experiment; the fifth section provides an overview of the contents of this special issue; the sixth section summarizes the management and resources of the project; and the last section lists the acknowledgments.

  3. Nuclear Power Plant Mechanical Component Flooding Fragility Experiments Status

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, C. L.; Savage, B.; Johnson, B.

    This report describes progress on Nuclear Power Plant mechanical component flooding fragility experiments and supporting research. The progress includes execution of full scale fragility experiments using hollow-core doors, design of improvements to the Portal Evaluation Tank, equipment procurement and initial installation of PET improvements, designation of experiments exploiting the improved PET capabilities, fragility mathematical model development, Smoothed Particle Hydrodynamic simulations, wave impact simulation device research, and pipe rupture mechanics research.

  4. Internal structure of the Community Assessment of Psychic Experiences-Positive (CAPE-P15) scale: Evidence for a general factor.

    PubMed

    Núñez, D; Arias, V; Vogel, E; Gómez, L

    2015-07-01

    Psychotic-like experiences (PLEs) are prevalent in the general population and are associated with poor mental health and a higher risk of psychiatric disorders. The Community Assessment of Psychic Experiences-Positive (CAPE-P15) scale is a self-screening questionnaire to address subclinical positive psychotic experiences (PPEs) in community contexts. Although its psychometric properties seem to be adequate to screen PLEs, further research is needed to evaluate certain validity aspects, particularly its internal structure and its functioning in different populations. The aim was to uncover the optimal factor structure of the CAPE-P15 scale in adolescents aged 13 to 18 years, using factor-analysis methods suitable for categorical variables. A sample of 727 students from six secondary public schools and 245 university students completed the CAPE-P15. The dimensionality of the CAPE-P15 was tested through exploratory structural equation models (ESEMs). Based on the ESEM results, we conducted a confirmatory factor analysis (CFA) to contrast two factorial structures that potentially underlie the symptoms described by the scale: a) three correlated factors and b) a hierarchical model composed of a general PLE factor plus three specific factors (persecutory ideation, bizarre experiences, and perceptual abnormalities). The underlying structure of PLEs assessed by the CAPE-P15 is consistent with both multidimensional and hierarchical solutions; however, the latter shows the best fit. Our findings reveal the existence of a strong general factor underlying scale scores. Compared with the specific factors, the general factor explains most of the common variance observed in subjects' responses. The findings suggest that the factor structure of subthreshold psychotic experiences addressed by the CAPE-P15 can be adequately represented by a general factor and three separable specific traits, supporting the hypothesis that there might be a common source underlying PLEs.
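
    The claim that the general factor "explains most of the common variance" is typically quantified by the explained common variance (ECV) of a hierarchical or bifactor solution. A sketch with made-up loadings for 15 items (not the published CAPE-P15 estimates):

```python
def explained_common_variance(general, specific):
    """Explained common variance (ECV) of a general factor: the share of
    common variance carried by squared general-factor loadings relative to
    all squared loadings (general plus specific factors)."""
    g2 = sum(l * l for l in general)
    s2 = sum(l * l for group in specific for l in group)
    return g2 / (g2 + s2)

# Hypothetical pattern: a strong general factor (loadings 0.6) plus three
# weaker specific factors of five items each (loadings 0.3).
g = [0.6] * 15
spec = [[0.3] * 5, [0.3] * 5, [0.3] * 5]
ecv = explained_common_variance(g, spec)
```

    With these illustrative loadings the general factor carries 80% of the common variance; an ECV well above 0.5 is the usual signal that scores are dominated by a single underlying dimension.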

  5. Collisionless coupling of a high-β expansion to an ambient, magnetized plasma. I. Rayleigh model and scaling

    NASA Astrophysics Data System (ADS)

    Bonde, Jeffrey

    2018-04-01

    The dynamics of a magnetized, expanding plasma with a high ratio of kinetic energy density to ambient magnetic field energy density, or β, are examined by adapting a model of gaseous bubbles expanding in liquids as developed by Lord Rayleigh. New features include scale magnitudes and evolution of the electric fields in the system. The collisionless coupling between the expanding and ambient plasma due to these fields is described as well as the relevant scaling relations. Several different responses of the ambient plasma to the expansion are identified in this model, and for most laboratory experiments, ambient ions should be pulled inward, against the expansion due to the dominance of the electrostatic field.

  6. Development of a structure-validated Sexual Dream Experience Questionnaire (SDEQ) in Chinese university students.

    PubMed

    Chen, Wanzhen; Qin, Ke; Su, Weiwei; Zhao, Jialian; Zhu, Zhouyu; Fang, Xiangming; Wang, Wei

    2015-01-01

    Sexual dreams reflect waking-day life, social problems and ethical concerns. The related experience involves different people and settings and brings various feelings, but no systematic measure has been available to date. We developed a statement-matrix measuring sexual dream experience and trialed it in a sample of 390 young Chinese university students who had experienced sexual dreams in their lifetime. After both exploratory and confirmatory factor analyses, we established a satisfactory four-factor model (32 items). Together with an item measuring sexual dream frequency, we developed a Sexual Dream Experience Questionnaire (SDEQ) based on the 32 items, and subsequently named the four factors (scales) joyfulness, aversion, familiarity and bizarreness. No gender differences were found on the four scale scores, and no correlations were found between the four scales and sexual dream frequency or sexual experience in real life. The SDEQ might help to characterize sexual dreams in healthy people and psychiatric patients.

  7. A Lagrangian subgrid-scale model with dynamic estimation of Lagrangian time scale for large eddy simulation of complex flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman; Mahesh, Krishnan

    2012-08-01

    The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
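
    Lagrangian averaging of Germano-identity quantities is typically implemented as an exponentially weighted average advanced along pathlines, with the relaxation rate set by the (here dynamically computed) Lagrangian time scale T. A scalar sketch of the standard update, omitting the upstream interpolation/material-derivative step that the paper replaces it with:

```python
def lagrangian_average(phi_history, dt, timescale):
    """Exponentially weighted Lagrangian average I of a quantity phi sampled
    along a pathline:
        I_{n+1} = eps * phi_{n+1} + (1 - eps) * I_n,
        eps = (dt/T) / (1 + dt/T),
    so contributions older than a few time scales T are forgotten."""
    eps = (dt / timescale) / (1.0 + dt / timescale)
    avg = phi_history[0]
    for phi in phi_history[1:]:
        avg = eps * phi + (1.0 - eps) * avg
    return avg

# A long run of constant samples relaxes the average onto that constant;
# T controls how quickly the initial value (0.0 here) is forgotten.
avg = lagrangian_average([0.0] + [1.0] * 200, dt=0.01, timescale=0.1)
```

    The model described above removes the adjustable parameter θ by computing T itself from a surrogate correlation of the Germano-identity error, rather than prescribing it.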

  8. Modelling polymeric deformable granular materials - from experimental data to numerical models at the grain scale

    NASA Astrophysics Data System (ADS)

    Teil, Maxime; Harthong, Barthélémy; Imbault, Didier; Peyroux, Robert

    2017-06-01

    Polymeric deformable granular materials are widely used in industry, and the understanding and modelling of their shaping process is a point of interest. This kind of material often presents a viscoelastic-plastic behaviour, and the present study promotes a joint approach between numerical simulations and experiments in order to derive the behaviour law of such a granular material. The experiment is conducted on a polystyrene powder to which a confining pressure of 7 MPa and an axial pressure reaching 30 MPa are applied. Between different steps of the in-situ test, the sample is scanned in an X-ray microtomograph in order to determine the structure of the material as a function of density. From the tomographic images, and by using specific algorithms to improve image quality, grains are automatically identified and separated, and a finite element mesh is generated. The long-term objective of this study is to derive a representative sample directly from the experiments in order to run numerical simulations using a viscoelastic or viscoelastic-plastic constitutive law and compare numerical and experimental results at the particle scale.

  9. Scaling properties of ballistic nano-transistors

    PubMed Central

    2011-01-01

    Recently, we have suggested a scale-invariant model for a nano-transistor. In agreement with experiments, a close-to-linear threshold trace was found in the calculated ID-VD traces, separating the regimes of classically allowed transport and tunneling transport. In this conference contribution, the relevant physical quantities in our model and its range of applicability are discussed in more detail. Extending the temperature range of our studies, it is shown that a close-to-linear threshold trace results at room temperature as well. In qualitative agreement with the experiments, the ID-VG traces for small drain voltages show thermally activated transport below the threshold gate voltage. In contrast, at large drain voltages the gate-voltage dependence is weaker. As can be expected in our relatively simple model, the theoretical drain current is larger than the experimental one by a little less than a decade. PMID:21711899

  10. Impact of Aquifer Heterogeneities on Autotrophic Denitrification.

    NASA Astrophysics Data System (ADS)

    McCarthy, A.; Roques, C.; Selker, J. S.; Istok, J. D.; Pett-Ridge, J. C.

    2015-12-01

    Nitrate contamination in groundwater is a major challenge that will need to be addressed by hydrogeologists throughout the world. With a drinking water standard of 10 mg/L of NO3-, innovative techniques will need to be pursued to ensure a decrease in drinking water nitrate concentrations. At the pumping-site scale, the influence of, and relationship between, heterogeneous flow, mixing, and reactivity is not well understood. The purpose of this project is to incorporate both physical and chemical modeling techniques to better understand the effect of aquifer heterogeneities on autotrophic denitrification. We will investigate the link between heterogeneous hydraulic properties, transport, and the rate of autotrophic denitrification. Data collected in previous laboratory experiments and pumping-site-scale experiments will be used to validate the models. The ultimate objective of this project is to develop a model in which such coupled processes are better understood, resulting in best management practices for groundwater.

  11. An Intercomparison and Evaluation of Aircraft-Derived and Simulated CO from Seven Chemical Transport Models During the TRACE-P Experiment

    NASA Technical Reports Server (NTRS)

    Kiley, C. M.; Fuelberg, Henry E.; Palmer, P. I.; Allen, D. J.; Carmichael, G. R.; Jacob, D. J.; Mari, C.; Pierce, R. B.; Pickering, K. E.; Tang, Y.

    2002-01-01

    Four global scale and three regional scale chemical transport models are intercompared and evaluated during NASA's TRACE-P experiment. Model simulated and measured CO are statistically analyzed along aircraft flight tracks. Results for the combination of eleven flights show an overall negative bias in simulated CO. Biases are most pronounced during large CO events. Statistical agreements vary greatly among the individual flights. Those flights with the greatest range of CO values tend to be the worst simulated. However, for each given flight, the models generally provide similar relative results. The models exhibit difficulties simulating intense CO plumes. CO error is found to be greatest in the lower troposphere. Convective mass flux is shown to be very important, particularly near emissions source regions. Occasionally meteorological lift associated with excessive model-calculated mass fluxes leads to an overestimation of mid- and upper- tropospheric mixing ratios. Planetary Boundary Layer (PBL) depth is found to play an important role in simulating intense CO plumes. PBL depth is shown to cap plumes, confining heavy pollution to the very lowest levels.

  12. User-experience surveys with maternity services: a randomized comparison of two data collection models.

    PubMed

    Bjertnaes, Oyvind Andresen; Iversen, Hilde Hestad

    2012-08-01

    To compare two ways of combining postal and electronic data collection for a maternity services user-experience survey. Cross-sectional survey. Maternity services in Norway. All women who gave birth at a university hospital in Norway between 1 June and 27 July 2010. Participants were randomized into the following groups (n = 752): Group A, who were posted questionnaires with both electronic and paper response options for both the initial and reminder postal requests; and Group B, who were posted questionnaires with an electronic response option for the initial request, and both electronic and paper response options for the reminder postal request. Response rate, the degree of difference in background variables between respondents and non-respondents, main study results and estimated cost-effectiveness. The final response rate was significantly higher in Group A (51.9%) than in Group B (41.1%). None of the background variables differed significantly between respondents and non-respondents in Group A, while two variables differed significantly between respondents and non-respondents in Group B. None of the 11 user-experience scales differed significantly between Groups A and B. The estimated cost per response for the forthcoming national survey was €11.70 for data collection Model A and €9.00 for Model B. The model with the electronic-only response option in the first request had the lowest response rate. However, this model performed equally well on non-response bias, performed better on estimated cost-effectiveness, and is the better of the two models for large-scale user-experience surveys with maternity services.
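
    The significance of the 51.9% versus 41.1% response-rate difference can be checked with a standard two-proportion z-test. A sketch assuming the 752 participants were split evenly between the groups (the abstract gives only the total n, so the 376/376 split is an assumption):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z statistic with a pooled variance estimate;
    |z| > 1.96 corresponds to two-sided significance at the 5% level."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Group A: 51.9% of ~376; Group B: 41.1% of ~376 (assumed equal split).
z = two_proportion_z(0.519, 376, 0.411, 376)
```

    Under this assumed split the statistic comfortably exceeds 1.96, consistent with the significantly higher response rate reported for Group A.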

  13. Statistical analysis of vegetation and stormwater runoff in an urban watershed during summer and winter storms in Portland, Oregon, U.S.

    Treesearch

    Geoffrey H. Donovan; David T. Butry; Megan Y. Mao

    2016-01-01

    Past research has examined the effect of urban trees, and other vegetation, on stormwater runoff using hydrological models or small-scale experiments. However, there has been no statistical analysis of the influence of vegetation on runoff in an intact urban watershed, and it is not clear how results from small-scale studies scale up to the city level. Researchers...

  14. Quantitative In Vivo Imaging of Breast Tumor Extracellular Matrix

    DTIC Science & Technology

    2010-05-01

    …breast cancer, and dermis from mouse models of Osteogenesis Imperfecta (OIM) [1–5,7]. The F/B ratio revealed the length scale of ordering in the fibers.

  15. A study on assimilating potential vorticity data

    NASA Astrophysics Data System (ADS)

    Li, Yong; Ménard, Richard; Riishøjgaard, Lars Peter; Cohn, Stephen E.; Rood, Richard B.

    1998-08-01

    The correlation that exists between the potential vorticity (PV) field and the distribution of chemical tracers such as ozone suggests the possibility of using tracer observations as proxy PV data in atmospheric data assimilation systems. The correlation is most pronounced in the stratosphere, where tracer observations are plentiful but reliable wind observations are generally lacking. The issue investigated in this study is how model dynamics would respond to the assimilation of PV data. First, numerical experiments of identical-twin type were conducted with a simple univariate nudging algorithm and a global shallow water model based on PV and divergence (PV-D model). All model fields are successfully reconstructed through the insertion of complete PV data alone if an appropriate value for the nudging coefficient is used. A simple linear analysis suggests that slow modes are recovered rapidly, at a rate nearly independent of spatial scale. In a more realistic experiment, appropriately scaled total ozone data from the NIMBUS-7 TOMS instrument were assimilated as proxy PV data into the PV-D model over a 10-day period. The resulting model PV field matches the observed total ozone field relatively well on large spatial scales, and the PV, geopotential and divergence fields are dynamically consistent. These results indicate the potential usefulness that tracer observations, as proxy PV data, may offer in a data assimilation system.
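
    Nudging (Newtonian relaxation) adds a term proportional to the observation-minus-model mismatch to the model tendency, with the nudging coefficient setting the relaxation rate. A minimal scalar sketch of a single explicit step, with the model's own tendency omitted to isolate the nudging term (the actual PV-D model is of course multivariate and spatially distributed):

```python
def nudge(model_state, observation, gain, dt):
    """One explicit Euler step of Newtonian relaxation (nudging):
    d(state)/dt = ... + gain * (obs - state), with the model tendency
    dropped here so only the relaxation toward the observation remains."""
    return model_state + dt * gain * (observation - model_state)

# Repeated insertion of the same 'observed PV' value drives the model state
# onto the data; the gain (nudging coefficient) sets the e-folding rate.
state = 0.0
for _ in range(1000):
    state = nudge(state, observation=1.0, gain=0.5, dt=0.1)
```

    The sensitivity to the nudging coefficient noted above appears here directly: too small a gain and the state never reaches the data within the assimilation window; too large and the discrete step overshoots.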

  16. Model studies on the role of moist convection as a mechanism for interaction between the mesoscales

    NASA Technical Reports Server (NTRS)

    Waight, Kenneth T., III; Song, J. Aaron; Zack, John W.; Price, Pamela E.

    1991-01-01

    A three year research effort is described which had as its goal the development of techniques to improve the numerical prediction of cumulus convection on the meso-beta and meso-gamma scales. Two MESO models are used, the MASS (mesoscale) and TASS (cloud scale) models. The primary meteorological situation studied is the 28-29 Jun. 1986 Cooperative Huntsville Meteorological Experiment (COHMEX) study area on a day with relatively weak large scale forcing. The problem of determining where and when convection should be initiated is considered to be a major problem of current approaches. Assimilation of moisture data from satellite, radar, and surface data is shown to significantly improve mesoscale simulations. The TASS model is shown to reproduce some observed mesoscale features when initialized with 3-D observational data. Convection evolution studies center on comparison of the Kuo and Fritsch-Chappell cumulus parameterization schemes to each other, and to cloud model results. The Fritsch-Chappell scheme is found to be superior at about 30 km resolution, while the Kuo scheme does surprisingly well in simulating convection down to 10 km in cases where convergence features are well-resolved by the model grid. Results from MASS-TASS interaction experiments are presented and discussed. A discussion of the future of convective simulation is given, with the conclusion that significant progress is possible on several fronts in the next few years.

  17. Uranium plume persistence impacted by hydrologic and geochemical heterogeneity in the groundwater and river water interaction zone of Hanford site

    NASA Astrophysics Data System (ADS)

    Chen, X.; Zachara, J. M.; Vermeul, V. R.; Freshley, M.; Hammond, G. E.

    2015-12-01

    The behavior of a persistent uranium plume in an extended groundwater-river water (GW-SW) interaction zone at the DOE Hanford site is dominantly controlled by river stage fluctuations in the adjacent Columbia River. The plume behavior is further complicated by substantial heterogeneity in physical and geochemical properties of the host aquifer sediments. Multi-scale field and laboratory experiments and reactive transport modeling were integrated to understand the complex plume behavior influenced by highly variable hydrologic and geochemical conditions in time and space. In this presentation we (1) describe multiple data sets from field-scale uranium adsorption and desorption experiments performed at our experimental well-field, (2) develop a reactive transport model that incorporates hydrologic and geochemical heterogeneities characterized from multi-scale and multi-type datasets and a surface complexation reaction network based on laboratory studies, and (3) compare the modeling and observation results to provide insights on how to refine the conceptual model and reduce prediction uncertainties. The experimental results revealed significant spatial variability in uranium adsorption/desorption behavior, while modeling demonstrated that ambient hydrologic and geochemical conditions and heterogeneities in sediment physical and chemical properties both contributed to complex plume behavior and its persistence. Our analysis provides important insights into the characterization, understanding, modeling, and remediation of groundwater contaminant plumes influenced by surface water and groundwater interactions.

  18. Cosmic microwave background anisotropies in cold dark matter models with cosmological constant: The intermediate versus large angular scales

    NASA Technical Reports Server (NTRS)

    Stompor, Radoslaw; Gorski, Krzysztof M.

    1994-01-01

    We obtain predictions for cosmic microwave background anisotropies at angular scales near 1 deg in the context of cold dark matter models with a nonzero cosmological constant, normalized to the Cosmic Background Explorer (COBE) Differential Microwave Radiometer (DMR) detection. The results are compared to those computed in the matter-dominated models. We show that the coherence length of the Cosmic Microwave Background (CMB) anisotropy is almost insensitive to cosmological parameters, and the rms amplitude of the anisotropy increases moderately with decreasing total matter density, while being most sensitive to the baryon abundance. We apply these results in the statistical analysis of the published data from the UCSB South Pole (SP) experiment (Gaier et al. 1992; Schuster et al. 1993). We reject most of the Cold Dark Matter (CDM)-Lambda models at the 95% confidence level when both SP scans are simulated together (although the combined data set renders less stringent limits than the Gaier et al. data alone). However, the Schuster et al. data considered alone as well as the results of some other recent experiments (MAX, MSAM, Saskatoon), suggest that typical temperature fluctuations on degree scales may be larger than is indicated by the Gaier et al. scan. If so, CDM-Lambda models may indeed provide, from a point of view of CMB anisotropies, an acceptable alternative to flat CDM models.

  19. Integrated multiscale biomaterials experiment and modelling: a perspective

    PubMed Central

    Buehler, Markus J.; Genin, Guy M.

    2016-01-01

    Advances in multiscale models and computational power have enabled a broad toolset to predict how molecules, cells, tissues and organs behave and develop. A key theme in biological systems is the emergence of macroscale behaviour from collective behaviours across a range of length and timescales, and a key element of these models is therefore hierarchical simulation. However, this predictive capacity has far outstripped our ability to validate predictions experimentally, particularly when multiple hierarchical levels are involved. The state of the art represents careful integration of multiscale experiment and modelling, and yields not only validation, but also insights into deformation and relaxation mechanisms across scales. We present here a sampling of key results that highlight both challenges and opportunities for integrated multiscale experiment and modelling in biological systems. PMID:28981126

  20. Three-dimensional multiscale modeling of dendritic spacing selection during Al-Si directional solidification

    DOE PAGES

    Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; ...

    2015-05-27

    We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.% Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. The focus is on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.

  1. Test of φ² model predictions near the ³He liquid-gas critical point

    NASA Technical Reports Server (NTRS)

    Barmatz, M.; Zhong, F.; Hahn, I.

    2000-01-01

    NASA is supporting the development of an experiment called MISTE (Microgravity Scaling Theory Experiment) for a future International Space Station mission. The main objective of this flight experiment is to perform in-situ PVT, heat capacity at constant volume (C_v), and χ_τ measurements in the asymptotic region near the ³He liquid-gas critical point.

  2. Hierarchical multi-scale approach to validation and uncertainty quantification of hyper-spectral image modeling

    NASA Astrophysics Data System (ADS)

    Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.

    2016-05-01

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  3. Application of conditional simulation of heterogeneous rock properties to seismic scattering and attenuation analysis in gas hydrate reservoirs

    NASA Astrophysics Data System (ADS)

    Huang, Jun-Wei; Bellefleur, Gilles; Milkereit, Bernd

    2012-02-01

    We present a conditional simulation algorithm to parameterize three-dimensional heterogeneities and construct heterogeneous petrophysical reservoir models. The models match the data at borehole locations, simulate heterogeneities at the same resolution as borehole logging data elsewhere in the model space, and simultaneously honor the correlations among multiple rock properties. The model provides a heterogeneous environment in which a variety of geophysical experiments can be simulated. This includes the estimation of petrophysical properties and the study of geophysical response to the heterogeneities. As an example, we model the elastic properties of a gas hydrate accumulation located at Mallik, Northwest Territories, Canada. The modeled properties include compressional and shear-wave velocities that primarily depend on the saturation of hydrate in the pore space of the subsurface lithologies. We introduce the conditional heterogeneous petrophysical models into a finite difference modeling program to study seismic scattering and attenuation due to multi-scale heterogeneity. Similarities between resonance scattering analysis of synthetic and field Vertical Seismic Profile data reveal heterogeneity with a horizontal-scale of approximately 50 m in the shallow part of the gas hydrate interval. A cross-borehole numerical experiment demonstrates that apparent seismic energy loss can occur in a pure elastic medium without any intrinsic attenuation of hydrate-bearing sediments. This apparent attenuation is largely attributed to attenuative leaky mode propagation of seismic waves through large-scale gas hydrate occurrence as well as scattering from patchy distribution of gas hydrate.

  4. Lab and Pore-Scale Study of Low Permeable Soils Diffusional Tortuosity

    NASA Astrophysics Data System (ADS)

    Lekhov, V.; Pozdniakov, S. P.; Denisova, L.

    2016-12-01

    Diffusion plays an important role in contaminant spreading in low-permeability units. The effective diffusion coefficient of a saturated porous medium depends on the diffusion coefficient in water, the porosity, and a structural parameter of the pore space: the tortuosity. Theoretical models of the relationship between porosity and diffusional tortuosity are usually derived for conceptual granular media filled with solid particles of simple geometry; such models usually do not represent soils with complex microstructure. Empirical models, such as Archie's law, based on experimental electrical conductivity data are more useful for practical applications, but they contain empirical parameters that must be determined experimentally for a given soil type. In this work, we compared tortuosity values obtained in lab-scale diffusion experiments with pore-scale diffusion simulations of the studied soil microstructure and examined the relationship between tortuosity and porosity. Samples were taken from borehole cores of a low-permeability silt-clay formation. Using 50 cm³ samples, we performed lab-scale diffusion experiments and estimated the lab-scale tortuosity. We then studied the microstructure of these samples with an X-ray microtomograph. Imaging was performed on undisturbed microsamples of size 1.5³ mm³ at ×300 resolution (1024³ voxels). After binarization of each 3-D structure, a spatial correlation analysis was performed, which showed that the spatial correlation scale of the indicator variogram is considerably smaller than the microsample length. The Laplace equation with binary coefficients was then solved numerically for each microsample; the total number of simulations on the 175³-cell finite-difference grid was 3500. As a result, effective diffusion coefficient, tortuosity, and porosity values were obtained for all studied microsamples. The results were analyzed as a graph of tortuosity versus porosity. The 6 experimental tortuosity values agree well with the pore-scale simulations, falling within the general pattern of a nonlinear decrease of tortuosity with decreasing porosity. Fitting this graph with the Archie model, we found an exponent value in the range between 1.8 and 2.4. This work was supported by RFBR via grant 14-05-00409.
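    The power-law fit described above can be sketched as a log-log least-squares regression. This is a minimal illustration with hypothetical data points; the function name and the exact form τ = τ₀·φ^m are assumptions, since the paper's precise tortuosity definition and fitting procedure are not given here.

```python
import math

def fit_power_law_exponent(porosity, tortuosity):
    """Least-squares fit of tau = tau0 * phi**m in log-log space.

    Returns the Archie-type exponent m (the slope of log(tau) vs log(phi)).
    """
    xs = [math.log(p) for p in porosity]
    ys = [math.log(t) for t in tortuosity]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope of the ordinary least-squares line through (log phi, log tau)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Hypothetical porosity/tortuosity pairs generated with an exponent of 2.1,
# i.e. inside the 1.8-2.4 range reported in the abstract.
phi = [0.05, 0.10, 0.20, 0.30]
tau = [p ** 2.1 for p in phi]
m = fit_power_law_exponent(phi, tau)
```

    On noiseless synthetic data the regression recovers the generating exponent exactly; with real experimental scatter the same slope estimate simply becomes the best-fit Archie exponent.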

  5. A Rasch scaling validation of a 'core' near-death experience.

    PubMed

    Lange, Rense; Greyson, Bruce; Houran, James

    2004-05-01

    For those with true near-death experiences (NDEs), Greyson's (1983, 1990) NDE Scale satisfactorily fits the Rasch rating scale model, thus yielding a unidimensional measure with interval-level scaling properties. With increasing intensity, NDEs reflect peace, joy and harmony, followed by insight and mystical or religious experiences, while the most intense NDEs involve an awareness of things occurring in a different place or time. The semantics of this variable are invariant across True-NDErs' gender, current age, age at time of NDE, and latency and intensity of the NDE, thus identifying NDEs as 'core' experiences whose meaning is unaffected by external variables, regardless of variations in NDEs' intensity. Significant qualitative and quantitative differences were observed between True-NDErs and other respondent groups, mostly revolving around the differential emphasis on paranormal/mystical/religious experiences vs. standard reactions to threat. The findings further suggest that False-Positive respondents reinterpret other profound psychological states as NDEs. Accordingly, the Rasch validation of the typology proposed by Greyson (1983) also provides new insights into previous research, including the possibility of embellishment over time (as indicated by the finding of positive, as well as negative, latency effects) and the potential roles of religious affiliation and religiosity (as indicated by the qualitative differences surrounding paranormal/mystical/religious issues).
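    The Rasch rating scale model referred to above (the Andrich formulation commonly used for polytomous scales such as the NDE Scale) assigns each response category a probability from a person parameter θ, an item difficulty β, and category thresholds τ_k. The sketch below is a generic textbook implementation with illustrative parameter values, not the authors' fitted model.

```python
import math

def rating_scale_probs(theta, beta, thresholds):
    """Andrich rating-scale model: probabilities of categories 0..m.

    theta      -- person location (e.g. NDE intensity)
    beta       -- item difficulty
    thresholds -- category thresholds tau_1..tau_m, shared across items
    """
    logits = [0.0]          # category 0 corresponds to an empty sum
    running = 0.0
    for tau in thresholds:  # cumulative sum of (theta - beta - tau_k)
        running += theta - beta - tau
        logits.append(running)
    z = [math.exp(l) for l in logits]
    s = sum(z)
    return [v / s for v in z]

# Illustrative values: a 4-category item with three ordered thresholds.
probs = rating_scale_probs(theta=1.0, beta=0.0, thresholds=[-1.0, 0.0, 1.0])
```

    As θ increases, probability mass shifts monotonically toward higher categories, which is what makes the model yield a unidimensional, interval-level measure when it fits.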

  6. Scaling and modeling of three-dimensional, end-wall, turbulent boundary layers. Ph.D. Thesis - Final Report

    NASA Technical Reports Server (NTRS)

    Goldberg, U. C.; Reshotko, E.

    1984-01-01

    The method of matched asymptotic expansions was employed to identify the various subregions in three-dimensional, turbomachinery end-wall turbulent boundary layers and to determine the proper scaling of these regions. The two parts of the boundary layer investigated are the 3D pressure-driven part over the endwall and the 3D part located at the blade/endwall juncture. Models are proposed for the 3D law of the wall and law of the wake. These models are compared with the data of van den Berg and Elsenaar and of Mueller, showing good agreement between models and experiments.

  7. Scientific management and implementation of the geophysical fluid flow cell for Spacelab missions

    NASA Technical Reports Server (NTRS)

    Hart, J.; Toomre, J.

    1980-01-01

    Scientific support for the spherical convection experiment to be flown on Spacelab 3 was developed. This experiment takes advantage of the zero gravity environment of the orbiting space laboratory to conduct fundamental fluid flow studies concerned with thermally driven motions inside a rotating spherical shell with radial gravity. Such a system is a laboratory analog of large scale atmospheric and solar circulations. The radial body force necessary to model gravity correctly is obtained by using dielectric polarization forces in a radially varying electric field to produce radial accelerations proportional to temperature. This experiment will answer fundamental questions concerned with establishing the preferred modes of large scale motion in planetary and stellar atmospheres.

  8. Simulating Small-Scale Experiments of In-Tunnel Airblast Using STUN and ALE3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neuscamman, Stephanie; Glenn, Lewis; Schebler, Gregory

    2011-09-12

    This report details continuing validation efforts for the Sphere and Tunnel (STUN) and ALE3D codes. STUN has been validated previously for blast propagation through tunnels using several sets of experimental data with varying charge sizes and tunnel configurations, including the MARVEL nuclear driven shock tube experiment (Glenn, 2001). The DHS-funded STUNTool version is compared to experimental data and the LLNL ALE3D hydrocode. In this particular study, we compare the performance of the STUN and ALE3D codes in modeling an in-tunnel airblast to experimental results obtained by Lunderman and Ohrt in a series of small-scale high explosive experiments (1997).

  9. MODELS FOR SUBMARINE OUTFALL - VALIDATION AND PREDICTION UNCERTAINTIES

    EPA Science Inventory

    This address reports on some efforts to verify and validate dilution models, including those found in Visual Plumes. This is done in the context of problem experience: a range of problems, including different pollutants such as bacteria; scales, including near-field and far-field...

  10. AIR QUALITY MODELING AT COARSE-TO-FINE SCALES IN URBAN AREAS

    EPA Science Inventory

    Urban air toxics control strategies are moving towards a community based modeling approach, with an emphasis on assessing those areas that experience high air toxic concentration levels, the so-called "hot spots". This approach will require information that accurately maps and...

  11. a Structure of Experienced Time

    NASA Astrophysics Data System (ADS)

    Havel, Ivan M.

    2005-10-01

    The subjective experience of time will be taken as a primary motivation for an alternative, essentially discontinuous conception of time. Two types of such experience will be discussed, one based on personal episodic memory, the other on the theoretical fine texture of experienced time below the threshold of phenomenal awareness. The former case implies a discrete structure of temporal episodes on a large scale, while the latter case suggests endowing psychological time with a granular structure on a small scale, i.e. interpreting it as a semi-ordered flow of smeared (not point-like) subliminal time grains. Only on an intermediate temporal scale would the subjectively felt continuity and fluency of time emerge. Consequently, there is no locally smooth mapping of phenomenal time onto the real number continuum. Such a model has certain advantages; for instance, it avoids counterintuitive interpretations of some neuropsychological experiments (e.g. Libet's measurement) in which the temporal order of events is crucial.

  12. Chemistry Resolved Kinetic Flow Modeling of TATB Based Explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitello, P A; Fried, L E; Howard, W M

    2011-07-21

    Detonation waves in insensitive, TATB-based explosives are believed to have multi-time-scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow release of energy is believed to occur due to diffusion-limited growth of carbon. On the intermediate time scale, concentrations of product species likely change from being in equilibrium to being kinetic-rate controlled. The authors use the thermo-chemical code CHEETAH linked to an ALE hydrodynamics code to model detonations. They term their model chemistry-resolved kinetic flow, as CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on those concentrations. An HE-validation suite of model simulations compared to experiments at ambient, hot, and cold temperatures has been developed. They present here a new rate model and a comparison with experimental data.

  13. Use of d-³He proton spectroscopy as a diagnostic of shell ρr in capsule implosion experiments with approximately 0.2 NIF-scale high temperature Hohlraums at Omega.

    PubMed

    Delamater, N D; Wilson, D C; Kyrala, G A; Seifter, A; Hoffman, N M; Dodd, E; Singleton, R; Glebov, V; Stoeckl, C; Li, C K; Petrasso, R; Frenje, J

    2008-10-01

    We present the calculations and preliminary results from experiments on the Omega laser facility using d-³He filled plastic capsule implosions in gold Hohlraums. These experiments aim to develop a technique to measure shell ρr and capsule unablated mass with proton spectroscopy and will be applied to future National Ignition Facility (NIF) experiments with ignition-scale capsules. The Omega Hohlraums are 1900 μm in length x 1200 μm in diameter and have a 70% laser entrance hole. This is approximately a 0.2 NIF-scale ignition Hohlraum and reaches temperatures of 265-275 eV, similar to those during the peak of the NIF drive. These capsules can be used as a diagnostic of shell ρr, since the d-³He gas fill produces 14.7 MeV protons in the implosion, which escape through the shell and produce a proton spectrum that depends on the integrated ρr of the remaining shell mass. The neutron yield, proton yield, and spectra change with capsule shell thickness as the unablated mass or remaining capsule ρr changes. Proton stopping models are used to infer shell unablated mass and shell ρr from the proton spectra measured with different filter thicknesses. The experiment is well modeled with respect to Hohlraum energetics, neutron yields, and x-ray imploded core image size, but there are discrepancies between the observed and simulated proton spectra.

  14. Flow Quality Measurements in an Aerodynamic Model of NASA Lewis' Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Canacci, Victor A.; Gonsalez, Jose C.

    1999-01-01

    As part of an ongoing effort to improve the aerodynamic flow characteristics of the Icing Research Tunnel (IRT), a modular scale model of the facility was fabricated. This 1/10th-scale model was used to gain further understanding of the flow characteristics in the IRT. The model was outfitted with instrumentation and data acquisition systems to determine pressures, velocities, and flow angles in the settling chamber and test section. Parametric flow quality studies involving the insertion and removal of a model of the IRT's distinctive heat exchanger (cooler) and/or of a honeycomb in the settling chamber were performed. These experiments illustrate the resulting improvement or degradation in flow quality.

  15. Estimation of the sea surface's two-scale backscatter parameters

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1978-01-01

    The relationship between the sea-surface normalized radar cross section and the friction velocity vector is determined using a parametric two-scale scattering model. The model parameters are found from a nonlinear maximum likelihood estimation. The estimation is based on aircraft scatterometer measurements and the sea-surface anemometer measurements collected during the JONSWAP '75 experiment. The estimates of the ten model parameters converge to realistic values that are in good agreement with the available oceanographic data. The rms discrepancy between the model and the cross section measurements is 0.7 db, which is the rms sum of a 0.3 db average measurement error and a 0.6 db modeling error.
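    A standard building block of such two-scale scatterometer models is the harmonic dependence of the normalized radar cross section on azimuth angle relative to the wind, σ0(φ) = A + B·cos φ + C·cos 2φ. The sketch below recovers those three coefficients by Fourier projection from evenly spaced synthetic samples; it is an illustration of this one sub-step, not the paper's ten-parameter maximum likelihood estimation, and all names and values are invented.

```python
import math

def fit_azimuth_harmonics(phis, sigma0):
    """Fit sigma0(phi) = A + B*cos(phi) + C*cos(2*phi).

    Uses Fourier-mode projection, which is exact (equivalent to least
    squares) when the angles are evenly spaced over a full period.
    """
    n = len(phis)
    A = sum(sigma0) / n
    B = 2.0 * sum(s * math.cos(p) for p, s in zip(phis, sigma0)) / n
    C = 2.0 * sum(s * math.cos(2 * p) for p, s in zip(phis, sigma0)) / n
    return A, B, C

# Synthetic, noise-free measurements at 36 evenly spaced azimuth angles.
phis = [2.0 * math.pi * k / 36 for k in range(36)]
true_A, true_B, true_C = 0.1, 0.03, 0.02        # hypothetical coefficients
sigma0 = [true_A + true_B * math.cos(p) + true_C * math.cos(2 * p) for p in phis]
A, B, C = fit_azimuth_harmonics(phis, sigma0)
```

    With noisy measurements the projection becomes a least-squares estimate, and the residual rms plays the role of the 0.6 dB modeling error quoted in the abstract.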

  16. Computational modeling of electrostatic charge and fields produced by hypervelocity impact

    DOE PAGES

    Crawford, David A.

    2015-05-19

    Following prior experimental evidence of electrostatic charge separation and of electric and magnetic fields produced by hypervelocity impact, we have developed a model of electrostatic charge separation based on plasma sheath theory and implemented it into the CTH shock physics code. Preliminary assessment of the model shows good qualitative and quantitative agreement between the model and prior experiments, at least in the hypervelocity regime for the porous carbonate material tested. The model agrees with the scaling analysis of experimental data performed in the prior work, suggesting that electric charge separation and the resulting electric and magnetic fields can be a substantial effect at larger scales, higher impact velocities, or both.

  17. Unifying Pore Network Modeling, Continuous Time Random Walk Theory and Experiment - Accomplishments and Future Directions

    NASA Astrophysics Data System (ADS)

    Bijeljic, B.

    2008-05-01

    This talk will describe and highlight the advantages offered by a methodology that unifies pore network modeling, CTRW theory, and experiment in the description of solute dispersion in porous media. Solute transport in a porous medium is characterized by the interplay of advection and diffusion (described by the Peclet number, Pe) that causes spreading of solute particles. This spreading is traditionally described by dispersion coefficients, D, defined by σ² = 2Dt, where σ² is the variance of the solute position and t is the time. Using a pore-scale network model based on particle tracking, the rich Peclet-number dependence of the dispersion coefficient is predicted from first principles and is shown to compare well with experimental data for the restricted diffusion, transition, power-law, and mechanical dispersion regimes in the asymptotic limit. In the asymptotic limit D is constant and can be used in an averaged advection-dispersion equation. However, it is highly important to recognize that, until the velocity field is fully sampled, the particle transport is non-Gaussian and D possesses temporal or spatial variation. Furthermore, temporal probability density functions (PDFs) of tracer particles are studied in pore networks, and excellent agreement for the spectrum of pore-to-pore transition times is obtained between network model results and CTRW theory. Based on the truncated power-law interpretation of the PDFs, the physical origin of the power-law scaling of dispersion coefficient vs. Peclet number has been explained for unconsolidated porous media, sands, and a number of sandstones, arriving at the same conclusion from numerical network modelling, analytic CTRW theory, and experiment. Future directions for further applications of the presented methodology are discussed in relation to scale-dependent solute dispersion and reactive transport.
    The significance of pre-asymptotic dispersion in porous media is addressed from the pore scale upwards, and the impact of heterogeneity is discussed. The length traveled by solute plumes before Gaussian behaviour is reached increases with increasing heterogeneity and/or Pe. This opens up the question of the nature of dispersion in natural systems, where heterogeneities at larger scales profoundly increase the range of velocities in the aquifer, considerably delaying the approach to Gaussian behaviour. As a consequence, asymptotic behaviour might not be reached at the field scale.
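    The definition σ² = 2Dt used above can be demonstrated with a minimal particle-tracking sketch: release an ensemble of 1-D random walkers and recover D from the variance of their positions. This is a generic illustration of the estimator, not the pore-network model described in the abstract; all parameter values are arbitrary.

```python
import random

def estimate_dispersion(n_particles=4000, n_steps=400, step=1.0, dt=1.0, seed=1):
    """Estimate D from sigma^2 = 2*D*t for an ensemble of 1-D random walkers.

    Each walker moves +/-step per time interval dt; theory gives
    D = step**2 / (2*dt), i.e. 0.5 for the defaults used here.
    """
    rng = random.Random(seed)
    x = [0.0] * n_particles
    for _ in range(n_steps):
        for i in range(n_particles):
            x[i] += step if rng.random() < 0.5 else -step
    mean = sum(x) / n_particles
    var = sum((xi - mean) ** 2 for xi in x) / n_particles  # sigma^2
    t = n_steps * dt
    return var / (2.0 * t)

D = estimate_dispersion()
```

    In a pore network the same estimator applies, but the walkers sample a heterogeneous velocity field, which is exactly why D only becomes constant once that field is fully sampled.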

  18. Category Rating Is Based on Prototypes and Not Instances: Evidence from Feedback-Dependent Context Effects

    ERIC Educational Resources Information Center

    Petrov, Alexander A.

    2011-01-01

    Context effects in category rating on a 7-point scale are shown to reverse direction depending on feedback. Context (skewed stimulus frequencies) was manipulated between and feedback within subjects in two experiments. The diverging predictions of prototype- and exemplar-based scaling theories were tested using two representative models: ANCHOR…

  19. The Use of Experiments and Modeling to Evaluate ...

    EPA Pesticide Factsheets

    Symposium Paper. This paper reports on a study to examine the thermal decomposition of surrogate CWAs (in this case, Malathion) in a laboratory reactor, the analysis of the results using reactor design theory, and the subsequent scale-up of the results to a computer simulation of a full-scale commercial hazardous waste incinerator processing ceiling tile contaminated with residual Malathion.

  20. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; ...

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. 
    On one of the flights (Saffire-2), nine smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests. The first flight (Saffire-1) is scheduled for July 2015, with the other two following at six-month intervals. A computer modeling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the first examination of fire behavior on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation.

  1. Information processing occurs via critical avalanches in a model of the primary visual cortex

    NASA Astrophysics Data System (ADS)

    Bortolotto, G. S.; Girardi-Schappo, M.; Gonsalves, J. J.; Pinto, L. T.; Tragtenberg, M. H. R.

    2016-01-01

    We study a new biologically motivated model for the Macaque monkey primary visual cortex which presents power-law avalanches after a visual stimulus. The signal propagates through all the layers of the model via avalanches that depend on the network structure and synaptic parameters. We identify four different avalanche profiles as a function of the excitatory postsynaptic potential. The avalanches follow a size-duration scaling relation and present critical exponents that match experiments. The structure of the network gives rise to a regime with two characteristic spatial scales, one of which vanishes in the thermodynamic limit.
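    Power-law avalanches of the kind measured above are classically produced by a branching process at its critical point. The sketch below simulates such a process and records avalanche sizes and durations; it is a generic toy model for illustration, not the layered visual-cortex network of the paper, and every name and parameter is invented.

```python
import random

def avalanche(branching_ratio, rng, max_size=10_000):
    """Simulate one avalanche of a binary branching process.

    Each active unit independently triggers 0, 1, or 2 descendants with
    mean branching_ratio. At criticality (branching_ratio = 1.0) avalanche
    sizes follow a power law, truncated here at max_size for safety.
    Returns (size, duration).
    """
    active, size, duration = 1, 1, 0
    while active and size < max_size:
        duration += 1
        nxt = 0
        for _ in range(active):
            for _ in range(2):                       # two potential offspring
                if rng.random() < branching_ratio / 2.0:
                    nxt += 1
        size += nxt
        active = nxt
    return size, duration

rng = random.Random(7)
stats = [avalanche(1.0, rng) for _ in range(2000)]   # critical regime
sizes = [s for s, _ in stats]
```

    A histogram of `sizes` on log-log axes shows the heavy tail; moving the branching ratio off 1.0 in either direction cuts the tail off exponentially, which is the usual signature of criticality.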

  2. Describing Ecosystem Complexity through Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J. D.; Peiffer, S.

    2011-12-01

    Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot- and field-scale experiments within the catchment. Results from each of the local-scale models identify sensitive local-scale parameters, which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. The premise of our study is that the range in local-scale model parameter results can be used to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured for scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all stakeholders.

  3. Inertial particle manipulation in microscale oscillatory flows

    NASA Astrophysics Data System (ADS)

    Agarwal, Siddhansh; Rallabandi, Bhargav; Raju, David; Hilgenfeldt, Sascha

    2017-11-01

    Recent work has shown that inertial effects in oscillating flows can be exploited for simultaneous transport and differential displacement of microparticles, enabling size sorting of such particles on extraordinarily short time scales. Generalizing previous theory efforts, we here derive a two-dimensional time-averaged version of the Maxey-Riley equation that includes the effect of an oscillating interface to model particle dynamics in such flows. Separating the steady transport time scale from the oscillatory time scale results in a simple and computationally efficient reduced model that preserves all slow-time features of the full unsteady Maxey-Riley simulations, including inertial particle displacement. Comparison is made not only to full simulations, but also to experiments using oscillating bubbles as the driving interfaces. In this case, the theory predicts either an attraction to or a repulsion from the bubble interface due to inertial effects, so that versatile particle manipulation is possible using differences in particle size, particle/fluid density contrast and streaming strength. We also demonstrate that these predictions are in agreement with experiments.

  4. Modeling Transport of Cesium in Grimsel Granodiorite With Micrometer Scale Heterogeneities and Dynamic Update of Kd

    NASA Astrophysics Data System (ADS)

    Voutilainen, Mikko; Kekäläinen, Pekka; Siitari-Kauppi, Marja; Sardini, Paul; Muuri, Eveliina; Timonen, Jussi; Martin, Andrew

    2017-11-01

    Transport and retardation of cesium in Grimsel granodiorite, taking into account the heterogeneity of the mineral and pore structure, were studied using rock samples overcored from an in situ diffusion test at the Grimsel Test Site. The field test was part of the Long-Term Diffusion (LTD) project designed to characterize retardation properties (diffusion and distribution coefficients) under in situ conditions. Results of the LTD experiment for cesium showed that in-diffusion profiles and spatial concentration distributions were strongly influenced by the heterogeneous pore structure and mineral distribution. In order to study the effect of heterogeneity on the in-diffusion profile and spatial concentration distribution, a Time Domain Random Walk (TDRW) method was applied along with a feature for modeling chemical sorption in geological materials. A heterogeneous mineral structure of Grimsel granodiorite was constructed using X-ray microcomputed tomography (X-μCT), and the mineral map was linked to previous results for mineral-specific porosities and distribution coefficients (Kd) that were determined using C-14-PMMA autoradiography and batch sorption experiments, respectively. The resulting heterogeneous structure contains information on local porosity and Kd in 3-D. It was found that the heterogeneity of the mineral structure at the micrometer scale significantly affects the diffusion and sorption of cesium in Grimsel granodiorite at the centimeter scale. Furthermore, the modeled in-diffusion profiles and spatial concentration distributions show shapes and patterns similar to those from the LTD experiment. It was concluded that the use of detailed structure characterization and quantitative data on heterogeneity can significantly improve the interpretation and evaluation of transport experiments.
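    The core of a TDRW scheme with equilibrium sorption is that each particle transition draws a random transit time scaled by the local retardation factor R = 1 + ρ_b·Kd/φ. The 1-D sketch below illustrates only that scaling; the per-cell Kd values, porosity, bulk density, and function name are all hypothetical, and the actual model of the paper walks particles through a 3-D X-μCT structure.

```python
import random

def tdrw_arrival_time(kd_field, porosity, rho_b=2.7, d_pore=1.0, dx=1.0, seed=3):
    """Time-domain random walk across a 1-D row of cells.

    Each step advances one cell of width dx and adds an exponentially
    distributed transit time whose mean is the diffusive time dx^2/(2*d_pore)
    multiplied by the local retardation factor R = 1 + rho_b*Kd/porosity
    (linear equilibrium sorption). Returns the total arrival time.
    """
    rng = random.Random(seed)
    t = 0.0
    for kd in kd_field:
        retardation = 1.0 + rho_b * kd / porosity
        mean_step_time = retardation * dx * dx / (2.0 * d_pore)
        t += rng.expovariate(1.0 / mean_step_time)
    return t

kd_hetero = [0.0, 0.5, 0.0, 2.0, 0.1]   # hypothetical heterogeneous Kd per cell
t_sorbing = tdrw_arrival_time(kd_hetero, porosity=0.1)
t_tracer = tdrw_arrival_time([0.0] * 5, porosity=0.1)  # conservative tracer
```

    Because sorbing cells stretch the local transit times, the sorbing particle always lags the conservative tracer along the same path, which is how heterogeneous Kd reshapes the in-diffusion profile.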

  5. Multiscale Laboratory Infrastructure and Services to users: Plans within EPOS

    NASA Astrophysics Data System (ADS)

    Spiers, Chris; Willingshofer, Ernst; Drury, Martyn; Funiciello, Francesca; Rosenau, Matthias; Scarlato, Piergiorgio; Sagnotti, Leonardo; EPOS WG6, Corrado Cimarelli

    2015-04-01

    The participant countries in EPOS embody a wide range of world-class laboratory infrastructures ranging from high temperature and pressure experimental facilities, to electron microscopy, micro-beam analysis, analogue modeling and paleomagnetic laboratories. Most data produced by the various laboratory centres and networks are presently available only in limited "final form" in publications. Many data remain inaccessible and/or poorly preserved. However, the data produced at the participating laboratories are crucial to serving society's need for geo-resources exploration and for protection against geo-hazards. Indeed, to model resource formation and system behaviour during exploitation, we need an understanding from the molecular to the continental scale, based on experimental data. This contribution will describe the plans that the laboratories community in Europe is making, in the context of EPOS. The main objectives are: • To collect and harmonize available and emerging laboratory data on the properties and processes controlling rock system behaviour at multiple scales, in order to generate products accessible and interoperable through services for supporting research activities. • To co-ordinate the development, integration and trans-national usage of the major solid Earth Science laboratory centres and specialist networks. The length scales encompassed by the infrastructures included range from the nano- and micrometer levels (electron microscopy and micro-beam analysis) to the scale of experiments on centimetre sized samples, and to analogue model experiments simulating the reservoir scale, the basin scale and the plate scale. • To provide products and services supporting research into Geo-resources and Geo-storage, Geo-hazards and Earth System Evolution. If the EPOS Implementation Phase proposal presently under construction is successful, then a range of services and transnational activities will be put in place to realize these objectives.

  6. The Ship Tethered Aerostat Remote Sensing System (STARRS): Observations of Small-Scale Surface Lateral Transport During the LAgrangian Submesoscale ExpeRiment (LASER)

    NASA Astrophysics Data System (ADS)

    Carlson, D. F.; Novelli, G.; Guigand, C.; Özgökmen, T.; Fox-Kemper, B.; Molemaker, M. J.

    2016-02-01

    The Consortium for Advanced Research on the Transport of Hydrocarbon in the Environment (CARTHE) will carry out the LAgrangian Submesoscale ExpeRiment (LASER) to study the role of small-scale processes in the transport and dispersion of oil and passive tracers. The Ship-Tethered Aerostat Remote Sensing System (STARRS) was developed to produce observational estimates of small-scale surface dispersion in the open ocean. STARRS is built around a high-lift-capacity (30 kg) helium-filled aerostat and is equipped with a high-resolution digital camera. An integrated GNSS receiver and inertial navigation system permit direct geo-rectification of the imagery. Thousands of drift cards deployed in the field of view of STARRS and tracked over time provide the first observational estimates of small-scale (1-500 m) surface dispersion in the open ocean. The STARRS imagery will be combined with GPS-tracked surface drifter trajectories, shipboard observations, and aerial surveys of sea surface temperature in the DeSoto Canyon. In addition to obvious applications to oil spill modelling, the STARRS observations will provide essential benchmarks for high-resolution numerical models.

  7. Statistical model of exotic rotational correlations in emergent space-time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Craig; Kwon, Ohkyung; Richardson, Jonathan

    2017-06-06

    A statistical model is formulated to compute exotic rotational correlations that arise as inertial frames and causal structure emerge on large scales from entangled Planck scale quantum systems. Noncommutative quantum dynamics are represented by random transverse displacements that respect causal symmetry. Entanglement is represented by covariance of these displacements in Planck scale intervals defined by future null cones of events on an observer's world line. Light that propagates in a nonradial direction inherits a projected component of the exotic rotational correlation that accumulates as a random walk in phase. A calculation of the projection and accumulation leads to exact predictions for statistical properties of exotic Planck scale correlations in an interferometer of any configuration. The cross-covariance for two nearly co-located interferometers is shown to depart only slightly from the autocovariance. Specific examples are computed for configurations that approximate realistic experiments, and show that the model can be rigorously tested.

  8. GEWEX Continental-scale International Project (GCIP)

    NASA Technical Reports Server (NTRS)

    Try, Paul

    1993-01-01

    The Global Energy and Water Cycle Experiment (GEWEX) represents the World Climate Research Program activities on clouds, radiation, and land-surface processes. The goal of the program is to reproduce and predict, by means of suitable models, the variations of the global hydrological regime and its impact on atmospheric and oceanic dynamics. However, GEWEX is also concerned with variations in regional hydrological processes and water resources and their response to changes in the environment such as increasing greenhouse gases. In fact, GEWEX contains a major new international project called the GEWEX Continental-scale International Project (GCIP), which is designed to bridge the gap between the small scales represented by hydrological models and those scales that are practical for predicting the regional impacts of climate change. The development and use of coupled mesoscale-hydrological models for this purpose is a high priority in GCIP. The objectives of GCIP are presented.

  9. Assessing Attitudes toward Mathematics across Teacher Education Contexts

    ERIC Educational Resources Information Center

    Jong, Cindy; Hodges, Thomas E.

    2015-01-01

    This article reports on the development of attitudes toward mathematics among pre-service elementary teachers (n = 146) in relation to their experiences as K-12 learners of mathematics and experiences within a teacher education program. Using a combination of the Rasch Rating Scale Model and traditional parametric analyses, results indicate that…

  10. Increasing Teacher Confidence in Teaching and Technology Use through Vicarious Experiences within an Environmental Education Context

    ERIC Educational Resources Information Center

    Willis, Jana; Weiser, Brenda; Smith, Donna

    2016-01-01

    Providing teacher candidates opportunities to engage in experiences modeling effective technology integration could improve confidence/comfort in using technology and support skill development and transfer. A purposeful sample of 424 candidates in an educational technology course was administered the Technology and Teaching Efficacy Scale to…

  11. Combining sprinkling experiments and superconducting gravimetry in the field: a qualitative approach to identify dominant infiltration patterns

    NASA Astrophysics Data System (ADS)

    Reich, Marvin; Mikolaj, Michal; Blume, Theresa; Güntner, Andreas

    2017-04-01

    Hydrological process research at the plot to catchment scale commonly involves invasive field methods, leading to a large amount of point data. A promising alternative, which has gained increasing interest in the hydrological community in recent years, is gravimetry. The combination of its non-invasive and integrative nature opens up new possibilities for hydrological process research. In this study we combine a field-scale sprinkling experiment with continuous superconducting gravity (SG) measurements. The experimental design consists of 8 sprinkler units, arranged symmetrically within a radius of about ten meters around an iGrav (SG) in a field enclosure. The gravity signal of the infiltrating sprinkling water is analyzed using a simple 3D water mass distribution model. We first conducted a number of virtual sprinkling experiments resulting in different idealized infiltration patterns and determined the pattern-specific gravity response. In the next step we determined which combination of idealized infiltration patterns was able to reproduce the gravity response of our real-world experiment at the Wettzell Observatory (Germany). This process hypothesis is then evaluated with measured point-scale soil moisture responses and the results of the time-lapse electrical resistivity survey carried out during the sprinkling experiment. This study demonstrates that a controlled sprinkling experiment around a gravimeter, in combination with a simple infiltration model, is sufficient to identify subsurface flow patterns and thus the dominant infiltration processes. As gravimeters become more portable and can actually be deployed in the field, their combination with sprinkling experiments as shown here constitutes a promising way to investigate hydrological processes non-invasively.
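    A minimal sketch of the kind of 3D water-mass distribution model described above (the geometry, masses, and discretization here are invented for illustration, not those of the Wettzell setup): each model cell is treated as a point mass, and its vertical Newtonian attraction at the gravimeter is summed.

```python
import numpy as np

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def gravity_effect(xy, depth, masses):
    """Downward gravity effect (m/s^2) at a sensor at the origin, from point
    masses at horizontal offsets `xy` (N,2) and depths `depth` (N) below the
    sensor: sum of G * m * d / (rho^2 + d^2)^(3/2)."""
    dist2 = xy[:, 0]**2 + xy[:, 1]**2 + depth**2
    return np.sum(G * masses * depth / dist2**1.5)

# Idealized infiltration pattern: 1 m^3 of water (1000 kg) spread uniformly
# over a 10 m x 10 m patch, 0.5 m below the gravimeter.
gx, gy = np.meshgrid(np.linspace(-5, 5, 21), np.linspace(-5, 5, 21))
xy = np.column_stack([gx.ravel(), gy.ravel()])
masses = np.full(xy.shape[0], 1000.0 / xy.shape[0])
dg = gravity_effect(xy, np.full(xy.shape[0], 0.5), masses)
print(f"{dg * 1e9:.2f} nm/s^2")  # a few nm/s^2 for this configuration
```

    Comparing such forward-modelled responses for different idealized infiltration patterns against the measured SG signal is the essence of the approach.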

  12. New Models and Methods for the Electroweak Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, Linda

    2017-09-26

    This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much minimal-model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and annihilation in space.
    Accomplishments include creating new tools for analyses of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states such as pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac Gaugino models.

  13. Anomalous scaling of a passive scalar advected by the Navier-Stokes velocity field: two-loop approximation.

    PubMed

    Adzhemyan, L Ts; Antonov, N V; Honkonen, J; Kim, T L

    2005-01-01

    The field theoretic renormalization group and operator-product expansion are applied to the model of a passive scalar quantity advected by a non-Gaussian velocity field with finite correlation time. The velocity is governed by the Navier-Stokes equation, subject to an external random stirring force with the correlation function proportional to delta(t-t') k^(4-d-2epsilon). It is shown that the scalar field is intermittent already for small epsilon, its structure functions display anomalous scaling behavior, and the corresponding exponents can be systematically calculated as series in epsilon. The practical calculation is accomplished to order epsilon^2 (two-loop approximation), including anisotropic sectors. As for the well-known Kraichnan rapid-change model, the anomalous scaling results from the existence in the model of composite fields (operators) with negative scaling dimensions, identified with the anomalous exponents. Thus the mechanism of the origin of anomalous scaling appears similar for the Gaussian model with zero correlation time and the non-Gaussian model with finite correlation time. It should be emphasized that, in contrast to Gaussian velocity ensembles with finite correlation time, the model and the perturbation theory discussed here are manifestly Galilean covariant. The relevance of these results for real passive advection and comparison with the Gaussian models and experiments are briefly discussed.
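    Schematically, the anomalous-scaling statement takes the generic operator-product-expansion form (this is the standard textbook shape of the result, not the paper's explicit two-loop expressions):

```latex
S_n(r) \equiv \langle [\theta(x+r) - \theta(x)]^n \rangle
\;\propto\; r^{\zeta_n^{(0)}} \left(\frac{r}{L}\right)^{\Delta_n},
\qquad
\Delta_n = \Delta_n^{(1)}\,\epsilon + \Delta_n^{(2)}\,\epsilon^2 + O(\epsilon^3),
```

    where L is the integral scale, zeta_n^(0) is the "normal" dimensional-analysis exponent, and Delta_n < 0 is the (negative) scaling dimension of the dominant composite operator; the two-loop calculation supplies the epsilon^2 coefficients.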

  14. From Tomography to Material Properties of Thermal Protection Systems

    NASA Technical Reports Server (NTRS)

    Mansour, Nagi N.; Panerai, Francesco; Ferguson, Joseph C.; Borner, Arnaud; Barnhardt, Michael; Wright, Michael

    2017-01-01

    A NASA Ames Research Center (ARC) effort, under the Entry Systems Modeling (ESM) project, aims at developing micro-tomography (micro-CT) experiments and simulations for studying materials used in hypersonic entry systems. X-ray micro-tomography allows for non-destructive 3D imaging of a material's micro-structure at the sub-micron scale, providing fiber-scale representations of porous thermal protection system (TPS) materials. The technique has also allowed for in-situ experiments that can resolve response phenomena under realistic environmental conditions such as high temperatures, mechanical loads, and oxidizing atmospheres. Simulation tools have been developed at the NASA Ames Research Center to determine material properties and material response from the high-fidelity tomographic representations of the porous materials, with the goal of informing macroscopic TPS response models and guiding future TPS design.

  15. Regional scale hydrology with a new land surface processes model

    NASA Technical Reports Server (NTRS)

    Laymon, Charles; Crosson, William

    1995-01-01

    Through the CaPE Hydrometeorology Project, we have developed an understanding of some of the unique data quality issues involved in assimilating data of disparate types for regional-scale hydrologic modeling within a GIS framework. The issues addressed here include, among others, the development of adequate validation of the surface water budget, implementation of the STATSGO soil data set, and implementation of a remote sensing-derived landcover data set to account for surface heterogeneity. A model of land surface processes has been developed and used in studies of the sensitivity of surface fluxes and runoff to soil and landcover characterization. Results of these experiments have raised many questions about how to treat the scale dependence of land surface-atmosphere interactions on spatial and temporal variability. In light of these questions, additional modifications are being considered for the Marshall Land Surface Processes Model. It is anticipated that these techniques can be tested and applied in conjunction with GCIP activities over regional scales.

  16. Asynchronously Coupled Models of Ice Loss from Airless Planetary Bodies

    NASA Astrophysics Data System (ADS)

    Schorghofer, N.

    2016-12-01

    Ice is found near the surface of dwarf planet Ceres, in some main-belt asteroids, and perhaps in NEOs that will be explored or even mined in the future. The simple but important question of how fast ice is lost from airless bodies can present computational challenges. The thermal cycle on the surface repeats on much shorter time-scales than those over which ice retreats; one process acts on the time-scale of hours, the other over billions of years. This multi-scale situation is addressed with asynchronous coupling, where models with different time steps are woven together. The sharp contrast at the retreating ice table is handled with explicit interface tracking. For Ceres, which is covered with a thermally insulating dust mantle, desiccation rates are orders of magnitude slower than had been calculated with simpler models. More model challenges remain: the role of impact devolatilization and the time-scale for complete desiccation of an asteroid. I will also share my experience with code distribution using GitHub and Zenodo.
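    The asynchronous coupling strategy can be sketched as follows (a hypothetical toy: the "physics" and all parameter values are invented; the point is the structure of nesting a fast model inside large slow steps, with the ice-table depth as the explicitly tracked interface):

```python
import math

def mean_loss_rate(depth, t_start, n_cycles=5, dt=0.1):
    """Fast model: average ice-loss rate over a few diurnal thermal cycles
    for an ice table at `depth` below an insulating mantle. The physics
    (rate ~ insolation * exp(-depth)) is a toy stand-in."""
    total, steps, t = 0.0, 0, t_start
    while t < t_start + n_cycles:
        insolation = max(0.0, math.sin(2 * math.pi * t))  # "daytime" only
        total += 1e-6 * insolation * math.exp(-depth)
        steps += 1
        t += dt
    return total / steps

def retreat(t_end, big_dt=1e4):
    """Slow model: advance the explicitly tracked ice-table depth with large
    asynchronous steps, re-running the fast model only once per big step."""
    depth, t = 0.01, 0.0
    while t < t_end:
        depth += mean_loss_rate(depth, t) * big_dt  # one fast window per big step
        t += big_dt
    return depth

final_depth = retreat(1e6)
print(final_depth)
```

    The fast model is sampled over a few cycles only, and its averaged rate is extrapolated across a step many orders of magnitude longer, which is what makes billion-year integrations tractable.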

  17. Turbulent and Laminar Flow in Karst Conduits Under Unsteady Flow Conditions: Interpretation of Pumping Tests by Discrete Conduit-Continuum Modeling

    NASA Astrophysics Data System (ADS)

    Giese, M.; Reimann, T.; Bailly-Comte, V.; Maréchal, J.-C.; Sauter, M.; Geyer, T.

    2018-03-01

    Due to the duality in terms of (1) the groundwater flow field and (2) the discharge conditions, flow patterns of karst aquifer systems are complex. Estimated aquifer parameters may differ by several orders of magnitude from the local (borehole) to the regional (catchment) scale because of the large contrast in hydraulic parameters between matrix and conduit, as well as their heterogeneity and anisotropy. One approach to dealing with the scale-effect problem in the estimation of hydraulic parameters of karst aquifers is the application of large-scale experiments, such as long-term high-abstraction conduit pumping tests, which stimulate measurable groundwater drawdown in both the karst conduit system and the fractured matrix. The numerical discrete conduit-continuum modeling approach MODFLOW-2005 Conduit Flow Process Mode 1 (CFPM1) is employed to simulate laminar and nonlaminar conduit flow, induced by large-scale experiments, in combination with Darcian matrix flow. Effects of large-scale experiments were simulated for idealized settings. Subsequently, diagnostic plots and analyses of different fluxes are applied to interpret differences in the simulated conduit drawdown and general flow patterns. The main focus is on the question of to what extent different conduit flow regimes affect the drawdown in conduit and matrix, depending on the hydraulic properties of the conduit system, i.e., conduit diameter and relative roughness. In this context, CFPM1 is applied to investigate the importance of considering turbulent conditions for the simulation of karst conduit flow. This work quantifies the relative error that results from assuming laminar conduit flow in the interpretation of a synthetic large-scale pumping test in karst.
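    The laminar-versus-turbulent distinction can be illustrated with the standard Darcy-Weisbach relation, using the laminar 64/Re friction factor versus the Swamee-Jain approximation of the Colebrook-White equation for turbulent flow (a generic sketch with invented parameter values, not the CFPM1 implementation):

```python
import math

def head_loss(Q, D, L, k=1e-3, nu=1.3e-6, g=9.81, assume_laminar=False):
    """Darcy-Weisbach head loss (m) in a circular conduit.
    Q: discharge (m^3/s), D: diameter (m), L: length (m),
    k: roughness height (m), nu: kinematic viscosity (m^2/s)."""
    A = math.pi * D**2 / 4
    v = Q / A
    Re = v * D / nu
    if assume_laminar or Re < 2300:
        f = 64 / Re  # Hagen-Poiseuille (laminar)
    else:
        # Swamee-Jain explicit approximation of Colebrook-White (turbulent)
        f = 0.25 / math.log10(k / (3.7 * D) + 5.74 / Re**0.9)**2
    return f * (L / D) * v**2 / (2 * g)

# Pumping 50 L/s through a 0.5 m conduit over 1 km:
h_turb = head_loss(0.05, 0.5, 1000.0)
h_lam = head_loss(0.05, 0.5, 1000.0, assume_laminar=True)
print(h_turb, h_lam)  # the laminar assumption underestimates the loss
```

    For these invented parameters the Reynolds number is around 10^5, and the laminar assumption understates the head loss by well over an order of magnitude, which is the kind of relative error the study quantifies.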

  18. Psychometric properties of a Chinese version of the Stigma Scale: examining the complex experience of stigma and its relationship with self-esteem and depression among people living with mental illness in Hong Kong.

    PubMed

    Ho, Andy H Y; Potash, Jordan S; Fong, Ted C T; Ho, Vania F L; Chen, Eric Y H; Lau, Robert H W; Au Yeung, Friendly S W; Ho, Rainbow T H

    2015-01-01

    Stigma of mental illness is a global public health concern, but a standardized and cross-culturally validated instrument for assessing the complex experience of stigma among people living with mental illness (PLMI) in the Chinese context has been lacking. This study examines the psychometric properties of a Chinese version of the Stigma Scale (CSS), and explores the relationships between stigma, self-esteem and depression. A cross-sectional survey was conducted with a community sample of 114 Chinese PLMI in Hong Kong. Participants completed the CSS, the Chinese Self-Stigma of Mental Illness Scale, the Chinese Rosenberg Self-Esteem Scale, and the Chinese Patient Health Questionnaire-9. An exploratory factor analysis was conducted to identify the underlying factors of the CSS; concurrent validity assessment was performed via correlation analysis. The original 28-item three-factor structure of the Stigma Scale was found to be a poor fit to the data, whereas a revised 14-item three-factor model provided a good fit, with all 14 items loading significantly onto the original factors: discrimination, disclosure and positive aspects of mental illness. The revised model also displayed moderate to good internal consistency and good construct validity. Further findings revealed that the total stigma scale score and all three of its subscale scores correlated negatively with self-esteem, but only total stigma, discrimination and disclosure correlated positively with depression. The CSS is a short and user-friendly self-administered questionnaire that proves valuable for understanding the multifaceted stigma experiences among PLMI as well as their impact on psychiatric recovery and community integration in Chinese communities. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Evaluation of Normalization Methods to Pave the Way Towards Large-Scale LC-MS-Based Metabolomics Profiling Experiments

    PubMed Central

    Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya

    2013-01-01

    Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data have been model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data analysis originating from multiple experimental runs. In the second part, we apply cyclic loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607
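    The data-driven idea can be sketched as follows (a zeroth-order stand-in: full cyclic loess fits and subtracts a smooth intensity-dependent curve between each pair of runs, whereas this toy removes only a constant between-block offset; all data below are simulated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated log-intensities: two measurement blocks of the same 500 features,
# where block B carries a systematic +0.8 shift (e.g. instrument drift).
true = rng.normal(10.0, 2.0, 500)
block_a = true + rng.normal(0.0, 0.1, 500)
block_b = true + 0.8 + rng.normal(0.0, 0.1, 500)

def median_normalize(x, y):
    """Data-driven normalization without internal standards: shift each block
    so the median difference between them is zero (a zeroth-order cousin of
    cyclic loess, which removes an intensity-dependent curve instead)."""
    m = np.median(x - y) / 2
    return x - m, y + m

a_n, b_n = median_normalize(block_a, block_b)
print(np.median(block_b - block_a))  # systematic offset before normalization
print(np.median(b_n - a_n))          # offset removed after normalization
```

    Because the correction is estimated from the bulk of the features themselves, most of which are assumed unchanged, genuinely differential metabolites survive the adjustment.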

  20. Relating Ab Initio Mechanical Behavior of Intergranular Glassy Films in Γ-Si3N4 to Continuum Scales

    NASA Astrophysics Data System (ADS)

    Ouyang, L.; Chen, J.; Ching, W.; Misra, A.

    2006-05-01

    Nanometer-thin intergranular glassy films (IGFs) form in polycrystalline ceramics during sintering at high temperatures. The structure and properties of these IGFs are significantly changed by doping with rare earth elements. We have performed highly accurate large-scale ab initio calculations of the mechanical properties of both undoped and Yttria-doped (Y-IGF) models by theoretical uniaxial tensile experiments. Uniaxial strain was applied by incrementally stretching the supercell in one direction, while the other two dimensions were kept constant. At each strain, all atoms in the model were fully relaxed using the Vienna Ab initio Simulation Package (VASP). The relaxed model at a given strain serves as the starting position for the next increment of strain. This process is repeated until the total energy (TE) and stress data show that the "sample" is fully fractured. Interesting differences are seen between the stress-strain responses of the undoped and Y-doped models. For the undoped model, the stress-strain behavior indicates that the initial atomic structure of the IGF is such that there is negligible coupling between the x- and the y-z directions. However, once the behavior becomes nonlinear the lateral stresses increase, indicating that the atomic structure evolves with loading [1]. To relate the ab initio calculations to the continuum scales we analyze the atomic-scale deformation field under this uniaxial loading [1]. The applied strain in the x-direction is mostly accommodated by the IGF part of the model, and the crystalline part experiences almost negligible strain. As the overall strain on the sample is incrementally increased, the local strain field evolves such that locations proximal to the softer spots attract higher strains. As the load progresses, the strain concentration spots coalesce and eventually form a persistent strain localization zone across the IGF.
    The deformation pattern obtained through ab initio calculations indicates that it is possible to construct discrete grain-scale models that may be used to bridge these calculations to the continuum scale for finite element analysis. Reference: 1. J. Chen, L. Ouyang, P. Rulis, A. Misra, W. Y. Ching, Phys. Rev. Lett. 95, 256103 (2005).
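    The incremental stretch-and-relax protocol can be mimicked with a toy model (the study used VASP on IGF supercells; the 1D Morse-bond chain here is entirely an invented stand-in that reproduces the protocol's structure: stretch, relax fully, record stress, repeat until fracture, with strain eventually localizing in the weakest bond):

```python
import numpy as np

D_e, a, r0, n = 1.0, 2.0, 1.0, 10     # Morse depth, width, rest length, atoms

def bond_tension(r):
    e = np.exp(-a * (r - r0))
    return 2.0 * D_e * a * e * (1.0 - e)   # dV/dr for V = D_e * (1 - e)^2

def relax(x, steps=2000, lr=1e-3):
    """Gradient-descent relaxation with the end atoms held fixed."""
    for _ in range(steps):
        f = bond_tension(np.diff(x))       # tension in each bond
        grad = np.zeros_like(x)
        grad[1:] += f                      # dE/dx for the right atom of a bond
        grad[:-1] -= f                     # ...and for the left atom
        x[1:-1] -= lr * grad[1:-1]
    return x

x = np.arange(n, dtype=float)              # bonds start at rest length
stress = []
for _ in range(100):
    x[-1] += 0.05                          # increment the applied elongation
    x = relax(x)                           # relaxed state seeds the next step
    stress.append(bond_tension(x[-1] - x[-2]))
peak = max(stress)
# tension rises toward the Morse bond strength, then drops as strain localizes
print(peak, stress[-1])
```

    The qualitative outcome mirrors the abstract's description: the stress-strain curve rises, peaks, and then collapses as the applied strain concentrates in a single "soft spot" rather than remaining uniformly distributed.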

  1. Large-scale geomorphology: Classical concepts reconciled and integrated with contemporary ideas via a surface processes model

    NASA Astrophysics Data System (ADS)

    Kooi, Henk; Beaumont, Christopher

    1996-02-01

    Linear systems analysis is used to investigate the response of a surface processes model (SPM) to tectonic forcing. The SPM calculates subcontinental scale denudational landscape evolution on geological timescales (1 to hundreds of millions of years) as the result of simultaneous hillslope transport, modeled by diffusion, and fluvial transport, modeled by advection and reaction. The tectonically forced SPM accommodates the large-scale behavior envisaged in classical and contemporary conceptual geomorphic models and provides a framework for their integration and unification. The following three model scales are considered: micro-, meso-, and macroscale. The concepts of dynamic equilibrium and grade are quantified at the microscale for segments of uniform gradient subject to tectonic uplift. At the larger meso- and macroscales (which represent individual interfluves and landscapes including a number of drainage basins, respectively) the system response to tectonic forcing is linear for uplift geometries that are symmetric with respect to baselevel and which impose a fully integrated drainage to baselevel. For these linear models the response time and the transfer function as a function of scale characterize the model behavior. Numerical experiments show that the styles of landscape evolution depend critically on the timescales of the tectonic processes in relation to the response time of the landscape. When tectonic timescales are much longer than the landscape response time, the resulting dynamic equilibrium landscapes correspond to those envisaged by Hack (1960). When tectonic timescales are of the same order as the landscape response time and when tectonic variations take the form of pulses (much shorter than the response time), evolving landscapes conform to the Penck type (1972) and to the Davis (1889, 1899) and King (1953, 1962) type frameworks, respectively.
The behavior of the SPM highlights the importance of phase shifts or delays of the landform response and sediment yield in relation to the tectonic forcing. Finally, nonlinear behavior resulting from more general uplift geometries is discussed. A number of model experiments illustrate the importance of "fundamental form," which is an expression of the conformity of antecedent topography with the current tectonic regime. Lack of conformity leads to models that exhibit internal thresholds and a complex response.
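    The microscale notion of dynamic equilibrium, where tectonic uplift is balanced by diffusive hillslope transport, can be sketched numerically (an invented minimal example in the SPM's diffusion limit, not the authors' model; all parameter values are illustrative):

```python
import numpy as np

# Dynamic equilibrium of a diffusive hillslope under uniform uplift:
# dh/dt = U + kappa * d2h/dx2, with h = 0 at both baselevels.
L, n = 1000.0, 101                 # hillslope length (m), grid points
dx = L / (n - 1)
kappa, U = 0.01, 1e-5              # diffusivity (m^2/yr), uplift rate (m/yr)
dt = 0.4 * dx**2 / kappa           # stable explicit time step (yr)

h = np.zeros(n)
for _ in range(50000):             # ~2e8 yr, many landscape response times
    d2h = (np.roll(h, 1) - 2 * h + np.roll(h, -1)) / dx**2
    h[1:-1] += dt * (U + kappa * d2h[1:-1])

# At equilibrium, uplift balances diffusion: h(x) = U/(2*kappa) * x * (L - x)
x = np.linspace(0.0, L, n)
h_exact = U / (2 * kappa) * x * (L - x)
print(np.max(np.abs(h - h_exact)))  # transient has decayed; residual is tiny
```

    The time the transient takes to decay, on the order of L^2/kappa here, plays the role of the landscape response time against which the tectonic timescales are compared in the abstract.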

  2. Downscaling ocean conditions: Experiments with a quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Katavouta, A.; Thompson, K. R.

    2013-12-01

    The predictability of small-scale ocean variability, given the time history of the associated large scales, is investigated using a quasi-geostrophic model of two wind-driven gyres separated by an unstable mid-ocean jet. Motivated by the recent theoretical study of Henshaw et al. (2003), we propose a straightforward method for assimilating information on the large scales in order to recover the small-scale details of the quasi-geostrophic circulation. The similarity of this method to the spectral nudging of limited-area atmospheric models is discussed. Results from the spectral nudging of the quasi-geostrophic model, and an independent multivariate regression-based approach, show that important features of the ocean circulation, including the position of the meandering mid-ocean jet and the associated pinch-off eddies, can be recovered from the time history of a small number of large-scale modes. We next propose a hybrid approach for assimilating both the large scales and additional observed time series from a limited number of locations that alone are too sparse to recover the small scales using traditional assimilation techniques. The hybrid approach significantly improved the recovery of the small scales. The results highlight the importance of the coupling between length scales in downscaling applications, and the value of assimilating limited point observations after the large scales have been set correctly. The application of the hybrid and spectral nudging approaches to practical ocean forecasting, and to projecting changes in ocean conditions on climate time scales, is discussed briefly.
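    The spectral-nudging idea can be sketched in one dimension (a schematic illustration, not the quasi-geostrophic setup of the paper): only the Fourier modes up to a cutoff wavenumber are relaxed toward the reference, leaving the small scales free to evolve.

```python
import numpy as np

def spectral_nudge(field, reference, k_cut, alpha=1.0):
    """Relax only the large-scale (|k| <= k_cut) Fourier modes of `field`
    toward `reference`; smaller scales are left untouched."""
    fk, rk = np.fft.fft(field), np.fft.fft(reference)
    k = np.fft.fftfreq(field.size, d=1.0 / field.size)  # integer wavenumbers
    large = np.abs(k) <= k_cut
    fk[large] += alpha * (rk[large] - fk[large])
    return np.real(np.fft.ifft(fk))

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
truth = np.sin(2 * x) + 0.3 * np.sin(25 * x)         # large + small scale
model = np.sin(2 * x + 1.0) + 0.3 * np.sin(25 * x)   # large-scale phase error
nudged = spectral_nudge(model, truth, k_cut=5)
print(np.max(np.abs(nudged - truth)))  # large-scale error removed
```

    With alpha=1.0 the large scales are replaced outright; in practice a weaker relaxation is applied at every time step so the model dynamics can still adjust.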

  3. Nanodosimetry of electrons: analysis by experiment and modelling.

    PubMed

    Bantsar, A; Pszona, S

    2015-09-01

    Nanodosimetry experiments for high-energy electrons from a (131)I radioactive source interacting with gaseous nitrogen with sizes on a scale equivalent to the mass per area of a segment of DNA and nucleosome are described. The discrete ionisation cluster-size distributions were measured in experiments carried out with the Jet Counter. The experimental results were compared with those obtained by Monte Carlo modelling. The descriptors of radiation damages have been derived from the data obtained from ionisation cluster-size distributions. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. On unravelling mechanism of interplay between cloud and large scale circulation: a grey area in climate science

    NASA Astrophysics Data System (ADS)

    De, S.; Agarwal, N. K.; Hazra, Anupam; Chaudhari, Hemantkumar S.; Sahai, A. K.

    2018-04-01

    The interaction between cloud and large-scale circulation is a much less explored area in climate science. Unfolding the mechanism of coupling between these two parameters is imperative for improved simulation of the Indian summer monsoon (ISM) and for reducing imprecision in the climate sensitivity of global climate models. This work has made an effort to explore this mechanism with CFSv2 climate model experiments in which cloud has been modified by changing the critical relative humidity (CRH) profile of the model during ISM. The study reveals that the variable CRH in CFSv2 has improved the nonlinear interactions between high- and low-frequency oscillations in the wind field (revealed as internal dynamics of monsoon) and realistically modulates the spatial distribution of interactions over the Indian landmass during contrasting monsoon seasons, compared with the existing CRH profile of CFSv2. The lower-tropospheric wind error energy in the variable CRH simulation of CFSv2 appears to be minimum due to the reduced nonlinear convergence of error to the planetary-scale range from long and synoptic scales (another facet of internal dynamics), compared with the other CRH experiments in normal and deficient monsoons. Hence, the interplay between cloud and large-scale circulation through CRH may be manifested as a change in the internal dynamics of ISM, revealed from scale-interactive quasi-linear and nonlinear kinetic energy exchanges in the frequency as well as the wavenumber domain during the monsoon period, that eventually modifies the internal variance of the CFSv2 model. Conversely, the reduced wind bias and proper modulation of the spatial distribution of scale interaction between the synoptic and low-frequency oscillations improve the eastward and northward extent of water vapour flux over the Indian landmass, which in turn feeds back into the realistic simulation of cloud condensates, contributing to improved ISM rainfall in CFSv2.

  5. When small changes matter: the role of cross-scale interactions between habitat and ecological connectivity in recovery.

    PubMed

    Thrush, Simon F; Hewitt, Judi E; Lohrer, Andrew M; Chiaroni, Luca D

    2013-01-01

    Interaction between the diversity of local communities and the degree of connectivity between them has the potential to influence local recovery rates and thus profoundly affect community dynamics in the face of the cumulative impacts that occur across regions. Although such complex interactions have been modeled, field experiments in natural ecosystems to investigate the importance of interactions between local and regional processes are rare, especially so in coastal marine seafloor habitats subjected to many types of disturbance. We conducted a defaunation experiment at eight subtidal sites, incorporating manipulation of habitat structure, to test the relative importance of local habitat features and colonist supply in influencing macrobenthic community recovery rate. Our sites varied in community composition, habitat characteristics, and hydrodynamic conditions, and we conducted the experiment in two phases, exposing defaunated plots to colonists during periods of either high or low larval colonist supply. In both phases of the experiment, five months after disturbance, we were able to develop models that explained a large proportion of variation in community recovery rate between sites. Our results emphasize that the connectivity to the regional species pool influences recovery rate, and although local habitat effects were important, the strength of these effects was affected by broader-scale site characteristics and connectivity. Empirical evidence that cross-scale interactions are important in disturbance-recovery dynamics emphasizes the complex dynamics underlying seafloor community responses to cumulative disturbance.

  6. Using Ecosystem Experiments to Improve Vegetation Models

    DOE PAGES

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; ...

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. Identifying and evaluating the main assumptions that caused differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  7. APEX Model Simulation for Row Crop Watersheds with Agroforestry and Grass Buffers

    USDA-ARS?s Scientific Manuscript database

    Watershed model simulation has become an important tool in studying ways and means to reduce transport of agricultural pollutants. Conducting field experiments to assess buffer influences on water quality are constrained by the large-scale nature of watersheds, high experimental costs, private owner...

  8. A generalized methodology to characterize composite materials for pyrolysis models

    NASA Astrophysics Data System (ADS)

    McKinnon, Mark B.

    The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model that provides a mathematical representation of the rate of gaseous fuel production from condensed phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. Coupled with the increase in the number of parameters required to accurately represent the pyrolysis of materials is the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. This methodology has been applied to four common composites in this work that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. 
Data collected in microscale combustion calorimetry experiments were analyzed to determine the heats of complete combustion of the volatiles produced in each reaction. Inverse analyses were conducted on sample temperature data collected in bench-scale tests to determine the thermal transport parameters of each component through degradation. Simulations of quasi-one-dimensional bench-scale gasification tests generated from the resultant models using the ThermaKin modeling environment were compared to experimental data to independently validate the models.
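
    The abstract above describes extracting reaction kinetics from thermogravimetric data. As a minimal sketch of the kind of kinetic model such analyses fit, the following simulates mass-loss conversion for a single nth-order Arrhenius decomposition reaction under a constant heating ramp; the parameter values (A, E, heating rate) are purely illustrative, not taken from the thesis.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def conversion_rate(alpha, T, A, E, n=1.0):
    """Rate of conversion for an nth-order Arrhenius decomposition:
    dalpha/dt = A * exp(-E/(R*T)) * (1 - alpha)**n."""
    return A * np.exp(-E / (R * T)) * (1.0 - alpha) ** n

def simulate_tga(A, E, beta=10.0 / 60.0, T0=300.0, T_end=900.0, dt=0.1):
    """Integrate conversion over a constant-heating-rate ramp
    (beta in K/s) with simple forward-Euler stepping, mimicking a
    thermogravimetric (TGA) experiment."""
    T, alpha = T0, 0.0
    history = []
    while T < T_end and alpha < 0.999:
        alpha += conversion_rate(alpha, T, A, E) * dt
        T += beta * dt
        history.append((T, alpha))
    return history

# Illustrative (not measured) kinetic parameters: A = 1e12 1/s, E = 160 kJ/mol
hist = simulate_tga(A=1e12, E=1.6e5)
```

    In practice the inverse problem is solved: A and E are adjusted until curves like this reproduce the measured mass-loss data at several heating rates.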

  9. Adaptive Multiscale Modeling of Geochemical Impacts on Fracture Evolution

    NASA Astrophysics Data System (ADS)

    Molins, S.; Trebotich, D.; Steefel, C. I.; Deng, H.

    2016-12-01

    Understanding fracture evolution is essential for many subsurface energy applications, including subsurface storage, shale gas production, fracking, CO2 sequestration, and geothermal energy extraction. Geochemical processes in particular play a significant role in the evolution of fractures through dissolution-driven widening, fines migration, and/or fracture sealing due to precipitation. One obstacle to understanding and exploiting geochemical fracture evolution is that it is a multiscale process. However, current geochemical modeling of fractures cannot capture this multi-scale nature of geochemical and mechanical impacts on fracture evolution, and is limited to either a continuum or pore-scale representation. Conventional continuum-scale models treat fractures as preferential flow paths, with their permeability evolving as a function (often, a cubic law) of the fracture aperture. This approach has the limitation that it oversimplifies flow within the fracture in its omission of pore scale effects while also assuming well-mixed conditions. More recently, pore-scale models along with advanced characterization techniques have allowed for accurate simulations of flow and reactive transport within the pore space (Molins et al., 2014, 2015). However, these models, even with high performance computing, are currently limited in their ability to treat tractable domain sizes (Steefel et al., 2013). Thus, there is a critical need to develop an adaptive modeling capability that can account for separate properties and processes, emergent and otherwise, in the fracture and the rock matrix at different spatial scales. Here we present an adaptive modeling capability that treats geochemical impacts on fracture evolution within a single multiscale framework. Model development makes use of the high performance simulation capability, Chombo-Crunch, leveraged by high resolution characterization and experiments. 
The modeling framework is based on the adaptive capability in Chombo, which enables not only mesh refinement but also refinement of the model itself (pore scale or continuum Darcy scale) in a dynamic way, such that the appropriate model is used only when and where it is needed. Explicit flux matching provides coupling between the scales.
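
    The cubic law mentioned in the abstract above, by which continuum-scale models evolve fracture permeability as a function of aperture, can be sketched in a few lines. This is the textbook parallel-plate relation, not code from the Chombo-Crunch capability; the viscosity default is an assumption (water at room temperature).

```python
def cubic_law_transmissivity(aperture, viscosity=1.0e-3):
    """Flow per unit pressure gradient and unit fracture width for an
    idealized parallel-plate fracture: T = b**3 / (12 * mu).
    The b**3 dependence is the 'cubic law' used in continuum models."""
    return aperture ** 3 / (12.0 * viscosity)

# Doubling the aperture increases transmissivity eightfold
t1 = cubic_law_transmissivity(1.0e-4)   # 100 micron aperture
t2 = cubic_law_transmissivity(2.0e-4)   # 200 micron aperture
ratio = t2 / t1
```

    The strong cubic sensitivity is why small geochemical changes in aperture (dissolution widening, precipitation sealing) have outsized effects on fracture flow, and why the well-mixed continuum treatment can mislead.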

  10. Scale Interactions in the Tropics from a Simple Multi-Cloud Model

    NASA Astrophysics Data System (ADS)

    Niu, X.; Biello, J. A.

    2017-12-01

    Our lack of a complete understanding of the interaction between moist convection and equatorial waves remains an impediment to the numerical simulation of large-scale organization, such as the Madden-Julian Oscillation (MJO). The aim of this project is to understand interactions across spatial scales in the tropics within a simplified framework for scale interactions, while using a simplified description of the basic features of moist convection. Using multiple asymptotic scales, Biello and Majda [1] derived a multi-scale model of moist tropical dynamics (IMMD [1]), which separates three regimes: the planetary-scale climatology, the synoptic-scale waves, and the planetary-scale anomalies. The scales and strength of the observed MJO would place it in the regime of planetary-scale anomalies, which are themselves forced by nonlinear upscale fluxes from the synoptic-scale waves. In order to close this model and determine whether it provides a self-consistent theory of the MJO, a model for diabatic heating due to moist convection must be implemented along with the IMMD. The multi-cloud parameterization is a model proposed by Khouider and Majda [2] to describe the three basic cloud types (congestus, deep, and stratiform) that are most responsible for tropical diabatic heating. We implement a simplified version of the multi-cloud model that is based on results derived from large eddy simulations of convection [3]. We present this simplified multi-cloud model and show results of numerical experiments beginning with a variety of convective forcing states. Preliminary results on upscale fluxes, from synoptic scales to planetary-scale anomalies, will be presented. [1] Biello J A, Majda A J. Intraseasonal multi-scale moist dynamics of the tropical atmosphere[J]. Communications in Mathematical Sciences, 2010, 8(2): 519-540. [2] Khouider B, Majda A J. A simple multicloud parameterization for convectively coupled tropical waves. 
Part I: Linear analysis[J]. Journal of the atmospheric sciences, 2006, 63(4): 1308-1323. [3] Dorrestijn J, Crommelin D T, Biello J A, et al. A data-driven multi-cloud model for stochastic parametrization of deep convection[J]. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 2013, 371(1991): 20120374.

  11. Testing the low scale seesaw and leptogenesis

    NASA Astrophysics Data System (ADS)

    Drewes, Marco; Garbrecht, Björn; Gueter, Dario; Klarić, Juraj

    2017-08-01

    Heavy neutrinos with masses below the electroweak scale can simultaneously generate the light neutrino masses via the seesaw mechanism and the baryon asymmetry of the universe via leptogenesis. The requirement to explain these phenomena imposes constraints on the mass spectrum of the heavy neutrinos, their flavour mixing pattern and their CP properties. We first combine bounds from different experiments in the past to map the viable parameter regions in which the minimal low scale seesaw model can explain the observed neutrino oscillations, while being consistent with the negative results of past searches for physics beyond the Standard Model. We then study which additional predictions for the properties of the heavy neutrinos can be made based on the requirement to explain the observed baryon asymmetry of the universe. Finally, we comment on the perspectives to find traces of heavy neutrinos in future experimental searches at the LHC, NA62, BELLE II, T2K, SHiP or a future high energy collider, such as ILC, CEPC or FCC-ee. If any heavy neutral leptons are discovered in the future, our results can be used to assess whether these particles are indeed the common origin of the light neutrino masses and the baryon asymmetry of the universe. If the magnitude of their couplings to all Standard Model flavours can be measured individually, and if the Dirac phase in the lepton mixing matrix is determined in neutrino oscillation experiments, then all model parameters can in principle be determined from this data. This makes the low scale seesaw a fully testable model of neutrino masses and baryogenesis.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wieder, William R.; Allison, Steven D.; Davidson, Eric A.

    Microbes influence soil organic matter (SOM) decomposition and the long-term stabilization of carbon (C) in soils. We contend that by revising the representation of microbial processes and their interactions with the physicochemical soil environment, Earth system models (ESMs) may make more realistic global C cycle projections. Explicit representation of microbial processes presents considerable challenges due to the scale at which these processes occur. Thus, applying microbial theory in ESMs requires a framework to link micro-scale process-level understanding and measurements to macro-scale models used to make decadal- to century-long projections. Here, we review the diversity, advantages, and pitfalls of simulating soil biogeochemical cycles using microbial-explicit modeling approaches. We present a roadmap for how to begin building, applying, and evaluating reliable microbial-explicit model formulations that can be applied in ESMs. Drawing from experience with traditional decomposition models we suggest: (1) guidelines for common model parameters and output that can facilitate future model intercomparisons; (2) development of benchmarking and model-data integration frameworks that can be used to effectively guide, inform, and evaluate model parameterizations with data from well-curated repositories; and (3) the application of scaling methods to integrate microbial-explicit soil biogeochemistry modules within ESMs. With contributions across scientific disciplines, we feel this roadmap can advance our fundamental understanding of soil biogeochemical dynamics and more realistically project likely soil C response to environmental change at global scales.

  13. Probing high scale physics with top quarks at the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Dong, Zhe

    With the Large Hadron Collider (LHC) running at the TeV scale, we expect to find deviations from the Standard Model in the experiments, and to understand the origin of these deviations. As the heaviest elementary particle observed so far in experiments, with a mass at the electroweak scale, the top quark is a powerful probe for new phenomena of high scale physics at the LHC. We therefore concentrate on studying high scale physics phenomena with top quark pair production or decay at the LHC. In this thesis, we study the discovery potential of string resonances decaying to the t/tbar final state, and examine the possibility of observing baryon-number-violating top-quark production or decay, at the LHC. We point out that string resonances for a string scale below 4 TeV can be detected via the t/tbar channel, by reconstructing the center-of-mass frame kinematics of the resonances from either the t/tbar semi-leptonic decay or recent techniques for identifying highly boosted tops. For the study of baryon-number-violating processes, using a model-independent effective approach and focusing on operators of minimal mass dimension, we find that the corresponding effective coefficients could be directly probed at the LHC already with an integrated luminosity of 1 inverse femtobarn at 7 TeV, and further constrained with 30 (100) inverse femtobarns at 7 (14) TeV.

  14. Bioadsorber efficiency, design, and performance forecasting for alachlor removal.

    PubMed

    Badriyha, Badri N; Ravindran, Varadarajan; Den, Walter; Pirbazari, Massoud

    2003-10-01

    This study discusses a mathematical modeling and design protocol for bioactive granular activated carbon (GAC) adsorbers employed for purification of drinking water contaminated by chlorinated pesticides, exemplified by alachlor. A thin-biofilm model is discussed that incorporates the following phenomenological aspects: film transfer from the bulk fluid to the adsorbent particles, diffusion through the biofilm immobilized on the adsorbent surface, and adsorption of the contaminant into the adsorbent particle. The modeling approach involved independent laboratory-scale experiments to determine the model input parameters. These experiments included adsorption isotherm studies, adsorption rate studies, and biokinetic studies. Bioactive expanded-bed adsorber experiments were conducted to obtain realistic experimental data for assessing the ability of the model to predict adsorber dynamics under different operating conditions. The model equations were solved using a computationally efficient hybrid numerical technique combining orthogonal collocation and finite difference methods. The model provided accurate predictions of adsorber dynamics for bioactive and non-bioactive scenarios. Sensitivity analyses demonstrated the significance of various model parameters, and focused on enhancement of certain key parameters to improve the overall process efficiency. Scale-up simulation studies for bioactive and non-bioactive adsorbers provided comparisons between their performances, and illustrated the advantages of bioregeneration for extending their effective service life spans. Isolation of microbial species revealed that fungal strains were more efficient than bacterial strains in metabolizing alachlor. Microbial degradation pathways for alachlor were proposed and confirmed by the detection of biotransformation metabolites and byproducts using gas chromatography/mass spectrometry.

  15. Gamma-hadron families and scaling violation

    NASA Technical Reports Server (NTRS)

    Gaisser, T. K.; Stanev, T.; Wrotniak, J. A.

    1985-01-01

    For three different interaction models we have simulated gamma-hadron families, including the detector (Pamir emulsion chamber) response. Rates of gamma families, hadrons, and hadron-gamma ratios were compared with experiments.

  16. A sense of life: computational and experimental investigations with models of biochemical and evolutionary processes.

    PubMed

    Mishra, Bud; Daruwala, Raoul-Sam; Zhou, Yi; Ugel, Nadia; Policriti, Alberto; Antoniotti, Marco; Paxia, Salvatore; Rejali, Marc; Rudra, Archisman; Cherepinsky, Vera; Silver, Naomi; Casey, William; Piazza, Carla; Simeoni, Marta; Barbano, Paolo; Spivak, Marina; Feng, Jiawu; Gill, Ofer; Venkatesh, Mysore; Cheng, Fang; Sun, Bing; Ioniata, Iuliana; Anantharaman, Thomas; Hubbard, E Jane Albert; Pnueli, Amir; Harel, David; Chandru, Vijay; Hariharan, Ramesh; Wigler, Michael; Park, Frank; Lin, Shih-Chieh; Lazebnik, Yuri; Winkler, Franz; Cantor, Charles R; Carbone, Alessandra; Gromov, Mikhael

    2003-01-01

    We collaborate in a research program aimed at creating a rigorous framework, experimental infrastructure, and computational environment for understanding, experimenting with, manipulating, and modifying a diverse set of fundamental biological processes at multiple scales and spatio-temporal modes. The novelty of our research is based on an approach that (i) requires coevolution of experimental science and theoretical techniques and (ii) exploits a certain universality in biology guided by a parsimonious model of evolutionary mechanisms operating at the genomic level and manifesting at the proteomic, transcriptomic, phylogenic, and other higher levels. Our current program in "systems biology" endeavors to marry large-scale biological experiments with the tools to ponder and reason about large, complex, and subtle natural systems. To achieve this ambitious goal, ideas and concepts are combined from many different fields: biological experimentation, applied mathematical modeling, computational reasoning schemes, and large-scale numerical and symbolic simulations. From a biological viewpoint, the basic issues are many: (i) understanding common and shared structural motifs among biological processes; (ii) modeling biological noise due to interactions among a small number of key molecules or loss of synchrony; (iii) explaining the robustness of these systems in spite of such noise; and (iv) cataloging multistatic behavior and adaptation exhibited by many biological processes.

  17. Genome-scale reconstruction of the Streptococcus pyogenes M49 metabolic network reveals growth requirements and indicates potential drug targets.

    PubMed

    Levering, Jennifer; Fiedler, Tomas; Sieg, Antje; van Grinsven, Koen W A; Hering, Silvio; Veith, Nadine; Olivier, Brett G; Klett, Lara; Hugenholtz, Jeroen; Teusink, Bas; Kreikemeyer, Bernd; Kummer, Ursula

    2016-08-20

    Genome-scale metabolic models comprise stoichiometric relations between metabolites, as well as associations between genes and metabolic reactions and facilitate the analysis of metabolism. We computationally reconstructed the metabolic network of the lactic acid bacterium Streptococcus pyogenes M49. Initially, we based the reconstruction on genome annotations and already existing and curated metabolic networks of Bacillus subtilis, Escherichia coli, Lactobacillus plantarum and Lactococcus lactis. This initial draft was manually curated with the final reconstruction accounting for 480 genes associated with 576 reactions and 558 metabolites. In order to constrain the model further, we performed growth experiments of wild type and arcA deletion strains of S. pyogenes M49 in a chemically defined medium and calculated nutrient uptake and production fluxes. We additionally performed amino acid auxotrophy experiments to test the consistency of the model. The established genome-scale model can be used to understand the growth requirements of the human pathogen S. pyogenes and define optimal and suboptimal conditions, but also to describe differences and similarities between S. pyogenes and related lactic acid bacteria such as L. lactis in order to find strategies to reduce the growth of the pathogen and propose drug targets.
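
    The core computation behind genome-scale models like the one above is constraint-based flux analysis: steady state requires the stoichiometric matrix S to annihilate the flux vector (S v = 0), and fluxes are then chosen within uptake bounds to maximize growth. The following toy sketch (a three-reaction chain, nothing like the 576-reaction S. pyogenes reconstruction) finds the steady-state flux mode from the null space of S; real reconstructions use a linear-programming solver instead.

```python
import numpy as np

# Toy network: uptake -> A -> B -> biomass (columns = reactions v1..v3)
S = np.array([
    [1, -1,  0],   # metabolite A: produced by uptake, consumed by conversion
    [0,  1, -1],   # metabolite B: produced by conversion, consumed by biomass
], dtype=float)

# Steady state requires S @ v = 0; for this chain the null space of S
# is one-dimensional: all three fluxes must be equal.
_, _, Vt = np.linalg.svd(S)
mode = Vt[-1]            # basis vector of the null space
mode = mode / mode[0]    # normalize so the uptake flux is 1

# With uptake capped at 10 units, the maximal steady-state biomass flux
# is obtained by scaling the mode up to the uptake bound.
uptake_bound = 10.0
v_opt = mode * uptake_bound
```

    In a genuine reconstruction the null space is high-dimensional, so the biomass-maximizing flux distribution is selected by linear programming over the flux bounds, which is where measured uptake and production fluxes enter as constraints.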

  18. Understanding flavour at the LHC

    ScienceCinema

    Nir, Yosef

    2018-05-22

    Huge progress in flavour physics has been achieved by the two B-factories and the Tevatron experiments. This progress has, however, deepened the new physics flavour puzzle: If there is new physics at the TeV scale, why aren't flavour changing neutral current processes enhanced by orders of magnitude compared to the standard model predictions? The forthcoming ATLAS and CMS experiments can potentially solve this puzzle. Perhaps even more surprisingly, these experiments can potentially lead to progress in understanding the standard model flavour puzzle: Why is there smallness and hierarchy in the flavour parameters? Thus, a rich and informative flavour program is awaiting us not only in the flavour-dedicated LHCb experiment, but also in the high-pT ATLAS and CMS experiments.

  19. Matching time and spatial scales of rapid solidification: dynamic TEM experiments coupled to CALPHAD-informed phase-field simulations

    NASA Astrophysics Data System (ADS)

    Perron, Aurelien; Roehling, John D.; Turchi, Patrice E. A.; Fattebert, Jean-Luc; McKeown, Joseph T.

    2018-01-01

    A combination of dynamic transmission electron microscopy (DTEM) experiments and CALPHAD-informed phase-field simulations was used to study rapid solidification in Cu-Ni thin-film alloys. Experiments, conducted in the DTEM, consisted of in situ laser melting and determination of the solidification kinetics by monitoring the solid-liquid interface and the overall microstructure evolution (time-resolved measurements) during the solidification process. Modelling of the Cu-Ni alloy microstructure evolution was based on a phase-field model that included realistic Gibbs energies and diffusion coefficients from the CALPHAD framework (thermodynamic and mobility databases). DTEM and post mortem experiments highlighted the formation of microsegregation-free columnar grains with interface velocities varying from ~0.1 to ~0.6 m s-1. After an 'incubation' time, the velocity of the planar solid-liquid interface accelerated until solidification was complete. In addition, a decrease of the temperature gradient induced a decrease in the interface velocity. The modelling strategy permitted the simulation (in 1D and 2D) of the solidification process from the initially diffusion-controlled to the nearly partitionless regimes. Finally, results of DTEM experiments and phase-field simulations (grain morphology, solute distribution, and solid-liquid interface velocity) were consistent at similar time (μs) and spatial (μm) scales.

  20. Effect of concentration gradients on biodegradation in bench-scale sand columns with HYDRUS modeling of hydrocarbon transport and degradation.

    PubMed

    Horel, Agota; Schiewer, Silke; Misra, Debasmita

    2015-09-01

    The present research investigated to what extent results obtained in small microcosm experiments can be extrapolated to larger settings with non-uniform concentrations. Microbial hydrocarbon degradation in sandy sediments was compared for column experiments versus homogenized microcosms with varying concentrations of diesel, Syntroleum, and fish biodiesel as contaminants. Syntroleum and fish biodiesel had higher degradation rates than diesel fuel. Microcosms showed significantly higher overall hydrocarbon mineralization percentages (p < 0.006) than columns. Oxygen levels and moisture content were likely not responsible for that difference, which could, however, be explained by a strong gradient of fuel and nutrient concentrations through the column. The mineralization percentage in the columns was similar to small-scale microcosms at high fuel concentrations. While absolute hydrocarbon degradation increased, mineralization percentages decreased with increasing fuel concentration which was corroborated by saturation kinetics; the absolute CO2 production reached a steady plateau value at high substrate concentrations. Numerical modeling using HYDRUS 2D/3D simulated the transport and degradation of the investigated fuels in vadose zone conditions similar to those in laboratory column experiments. The numerical model was used to evaluate the impact of different degradation rate constants from microcosm versus column experiments.
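
    The saturation kinetics invoked above, in which absolute CO2 production plateaus at high substrate concentration while the mineralization percentage falls, can be sketched with a Monod/Michaelis-Menten-type rate law. The parameter values here are illustrative placeholders, not the fitted constants from the HYDRUS study.

```python
def mineralization_rate(C, v_max, K_s):
    """Saturation (Monod-type) kinetics: rate = v_max * C / (K_s + C).
    The absolute rate plateaus at v_max for C >> K_s, so the
    fractional rate (rate / C) falls as concentration grows."""
    return v_max * C / (K_s + C)

# Illustrative (not fitted) parameters
v_max, K_s = 5.0, 100.0          # plateau rate and half-saturation conc.
concs = [10, 100, 1000, 10000]   # increasing fuel concentrations
absolute = [mineralization_rate(C, v_max, K_s) for C in concs]
fractional = [r / C for r, C in zip(absolute, concs)]
# absolute rate rises toward the plateau v_max;
# fractional rate (mineralization percentage) steadily decreases
```

    This reproduces the qualitative observation in the abstract: more total CO2 at higher fuel loading, but a smaller fraction of the added hydrocarbon mineralized.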
