Sample records for physical scale models

  1. Biology meets physics: Reductionism and multi-scale modeling of morphogenesis.

    PubMed

    Green, Sara; Batterman, Robert

    2017-02-01

    A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors that forces researchers to combine different models relying on different scale-specific mathematical strategies and boundary conditions. Analyzing the ways in which different models are combined in multi-scale modeling also has implications for the relation between physics and biology. Contrary to the assumption that physical science approaches provide reductive explanations in biology, we exemplify how inputs from physics often reveal the importance of macro-scale models and explanations. We illustrate this through an examination of the role of biomechanical modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael A.

    Enabled by petascale supercomputing, the next generation of computer models for wind energy will simulate a vast range of scales and physics, spanning from turbine structural dynamics and blade-scale turbulence to mesoscale atmospheric flow. A single model covering all scales and physics is not feasible. Thus, these simulations will require the coupling of different models/codes, each for different physics, interacting at their domain boundaries.

  3. A Goddard Multi-Scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.

    2008-01-01

Numerical cloud-resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that numerical weather prediction (NWP) and regional-scale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve the understanding of the physical processes responsible for the variation in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellites and field campaigns can provide initial conditions as well as validation through the use of Earth satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF), a state-of-the-art Weather Research and Forecasting model (WRF), and a cloud-resolving model (the Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth satellite simulator has been developed at GSFC, which is designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with unified physics/simulator and examples is presented in this article.

  4. Comparing the Hydrologic and Watershed Processes between a Full Scale Stochastic Model Versus a Scaled Physical Model of Bell Canyon

    NASA Astrophysics Data System (ADS)

    Hernandez, K. F.; Shah-Fairbank, S.

    2016-12-01

The San Dimas Experimental Forest has been designated as a research area by the United States Forest Service for use as a hydrologic testing facility since 1933 to investigate the watershed hydrology of its 27 square miles of land. Incorporation of a computer model provides validity to the testing of the physical model. This study focuses on San Dimas Experimental Forest's Bell Canyon, one of the triad of watersheds contained within the Big Dalton watershed of the San Dimas Experimental Forest. A scaled physical model of Bell Canyon was constructed to highlight watershed characteristics and their effects on runoff. The physical model offers a comprehensive visualization of a natural watershed and can vary the characteristics of rainfall intensity, slope, and roughness through interchangeable parts and adjustments to the system. The scaled physical model is validated and calibrated through a HEC-HMS model to assure similitude of the system. Preliminary results of the physical model suggest that a 50-year storm event can be represented by a peak discharge of 2.2 × 10^-3 cfs. When comparing the results to HEC-HMS, this equates to a flow relationship of approximately 1:160,000, which can be used to model other return periods. The completed Bell Canyon physical model can be used for educational instruction in the classroom, outreach in the community, and further research using the model as an accurate representation of the watershed in the San Dimas Experimental Forest.
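The reported flow relationship makes the model-to-prototype conversion a one-line calculation. The sketch below is illustrative only, using the abstract's figures (a 2.2 × 10^-3 cfs model peak and the approximate 1:160,000 flow ratio); the function name is ours, not the study's.

```python
# Illustrative sketch, not the study's code: scale a model discharge up to
# the prototype watershed using the ~1:160,000 flow ratio from the abstract.

def prototype_discharge(model_q_cfs, flow_ratio=160_000):
    """Convert a physical-model discharge (cfs) to full-scale discharge."""
    return model_q_cfs * flow_ratio

# 50-year storm event in the scaled model: ~2.2e-3 cfs
q_full_scale = prototype_discharge(2.2e-3)  # roughly 352 cfs at prototype scale
```

With the ratio fixed by the HEC-HMS calibration, the same conversion could be applied to other return periods, as the abstract suggests.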

  5. Microphysics in Multi-scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE model), (2) a regional-scale model (a NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.

  6. Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation

    DTIC Science & Technology

    2016-08-02

Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation. Wasfy, Tamer M.; Jayakumar, Paramsothy; Dave... Outline: NRMM; objectives; soft soils; review of physics-based soil models; MBD/DEM modeling formulation (joint and contact constraints, DEM cohesive soil model); cone penetrometer experiment; vehicle-soil model; vehicle mobility DOE procedure; simulation results; concluding remarks.

  7. Physical consistency of subgrid-scale models for large-eddy simulation of incompressible turbulent flows

    NASA Astrophysics Data System (ADS)

    Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel

    2017-01-01

    We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach of constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, of which a new model of eddy viscosity type, that is based on the vortex stretching magnitude, is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.
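As a rough illustration of the kind of velocity-gradient-based closure discussed above, the sketch below computes a toy eddy viscosity driven by the vortex-stretching magnitude |Sω|. The coefficient, the square-root normalization, and the function name are our placeholders, not the paper's calibrated model; the construction is dimensionally consistent (units of m²/s) and vanishes for pure rotation, one example of the desirable physical properties the authors emphasize.

```python
import numpy as np

def vortex_stretching_eddy_viscosity(grad_u, delta, c=0.1):
    """Toy subgrid closure: nu_e = (c * delta)^2 * sqrt(|S w|), where S is
    the strain-rate tensor and w the vorticity vector, both derived from
    the local velocity gradient grad_u (grad_u[i, j] = du_i/dx_j).
    Coefficient and normalization are illustrative placeholders."""
    grad_u = np.asarray(grad_u, dtype=float)
    S = 0.5 * (grad_u + grad_u.T)                 # symmetric part: strain rate
    w = np.array([grad_u[2, 1] - grad_u[1, 2],    # vorticity vector from the
                  grad_u[0, 2] - grad_u[2, 0],    # antisymmetric part
                  grad_u[1, 0] - grad_u[0, 1]])
    stretching = np.linalg.norm(S @ w)            # vortex-stretching magnitude
    return (c * delta) ** 2 * np.sqrt(stretching)

# For a pure rotation the strain rate S vanishes, so the model predicts zero
# subgrid dissipation, consistent with the physical constraints in the paper.
```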

  8. Spatial calibration and temporal validation of flow for regional scale hydrologic modeling

USDA-ARS's Scientific Manuscript database

Physically based regional-scale hydrologic modeling is gaining importance for planning and management of water resources. Calibration and validation of such regional-scale models is necessary before applying them for scenario assessment. However, in most regional-scale hydrologic modeling, flow validat...

  9. Physical-scale models of engineered log jams in rivers

USDA-ARS's Scientific Manuscript database

Stream restoration and river engineering projects are increasingly employing engineered log jams for stabilization and in-stream improvements. To further advance the design of these structures and their morphodynamic effects on corridors, the basis for physical-scale models of rivers with engineere...

  10. Construct Validity of Selected Measures of Physical Activity Beliefs and Motives in Fifth and Sixth Grade Boys and Girls

    PubMed Central

    Saunders, Ruth P.; McIver, Kerry L.; Dowda, Marsha; Pate, Russell R.

    2013-01-01

Objective: Scales used to measure selected social-cognitive beliefs and motives for physical activity were tested among boys and girls. Methods: Covariance modeling was applied to responses obtained from large multi-ethnic samples of students in the fifth and sixth grades. Results: Theoretically and statistically sound models were developed, supporting the factorial validity of the scales in all groups. Multi-group longitudinal invariance was confirmed between boys and girls, overweight and normal weight students, and non-Hispanic black and white children. The construct validity of the scales was supported by hypothesized convergent and discriminant relationships within a measurement model that included correlations with physical activity (MET • min/day) measured by an accelerometer. Conclusions: Scores from the scales provide valid assessments of selected beliefs and motives that are putative mediators of change in physical activity among boys and girls, as they begin the understudied transition from the fifth grade into middle school, when physical activity naturally declines. PMID:23459310

  11. Construct validity of selected measures of physical activity beliefs and motives in fifth and sixth grade boys and girls.

    PubMed

    Dishman, Rod K; Saunders, Ruth P; McIver, Kerry L; Dowda, Marsha; Pate, Russell R

    2013-06-01

    Scales used to measure selected social-cognitive beliefs and motives for physical activity were tested among boys and girls. Covariance modeling was applied to responses obtained from large multi-ethnic samples of students in the fifth and sixth grades. Theoretically and statistically sound models were developed, supporting the factorial validity of the scales in all groups. Multi-group longitudinal invariance was confirmed between boys and girls, overweight and normal weight students, and non-Hispanic black and white children. The construct validity of the scales was supported by hypothesized convergent and discriminant relationships within a measurement model that included correlations with physical activity (MET • min/day) measured by an accelerometer. Scores from the scales provide valid assessments of selected beliefs and motives that are putative mediators of change in physical activity among boys and girls, as they begin the understudied transition from the fifth grade into middle school, when physical activity naturally declines.

  12. Microphysics in the Multi-Scale Modeling Systems with Unified Physics

    NASA Technical Reports Server (NTRS)

Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE model), (2) a regional-scale model (a NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study heavy precipitation processes will be presented.

  13. Light leptonic new physics at the precision frontier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Dall, Matthias, E-mail: mledall@uvic.ca

    2016-06-21

Precision probes of new physics are often interpreted through their indirect sensitivity to short-distance scales. In this proceedings contribution, we focus on the question of which precision observables, at current sensitivity levels, allow for an interpretation via either short-distance new physics or consistent models of long-distance new physics weakly coupled to the Standard Model. The electroweak scale is chosen to set the dividing line between these scenarios. In particular, we find that inverse see-saw models of neutrino mass allow for light new physics interpretations of most precision leptonic observables, such as lepton universality and lepton flavor violation, but not for the electron EDM.

  14. Psychometric Properties of the “Sport Motivation Scale (SMS)” Adapted to Physical Education

    PubMed Central

    Granero-Gallegos, Antonio; Baena-Extremera, Antonio; Gómez-López, Manuel; Sánchez-Fuentes, José Antonio; Abraldes, J. Arturo

    2014-01-01

The aim of this study was to investigate the factor structure of a Spanish version of the Sport Motivation Scale adapted to physical education. A second aim was to test which of three hypothesized models (three-, five- and seven-factor) provided the best model fit. 758 Spanish high school students completed the Sport Motivation Scale adapted for physical education and also completed the Learning and Performance Orientation in Physical Education Classes Questionnaire. We examined the factor structure of each model using confirmatory factor analysis and also assessed internal consistency and convergent validity. The results showed that all three models in Spanish produce good fit indices, but we suggest using the seven-factor model (χ2/df = 2.73; ECVI = 1.38), as it produces better values when adapted to physical education than the five-factor model (χ2/df = 2.82; ECVI = 1.44) and the three-factor model (χ2/df = 3.02; ECVI = 1.53). Key Points: Physical education research conducted in Spain has used the version of the SMS designed to assess motivation in sport, but reliability and validity results in physical education have not been reported. Results of the present study lend support to the factorial validity and internal reliability of three alternative factor structures (3, 5, and 7 factors) of the SMS adapted to physical education in Spanish. Although all three models in Spanish produce good fit indices, we suggest using the seven-factor model. PMID:25435772
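The model-selection logic reported above (prefer lower χ2/df and lower ECVI) can be sketched in a few lines. The helper below is hypothetical, using only the fit statistics quoted in the abstract; the 3.0 cutoff for χ2/df is a conventional rule of thumb, not a value from the study.

```python
# Illustrative only: rank candidate factor models by the fit statistics
# quoted in the abstract (chi^2/df and ECVI; lower is better for both).

models = {
    "three-factor": {"chi2_df": 3.02, "ecvi": 1.53},
    "five-factor":  {"chi2_df": 2.82, "ecvi": 1.44},
    "seven-factor": {"chi2_df": 2.73, "ecvi": 1.38},
}

def best_model(candidates, chi2_df_cutoff=3.0):
    """Keep models with chi^2/df under a conventional cutoff, then prefer
    the lowest ECVI (expected cross-validation index)."""
    ok = {k: v for k, v in candidates.items() if v["chi2_df"] <= chi2_df_cutoff}
    return min(ok, key=lambda k: ok[k]["ecvi"])

best_model(models)  # selects the seven-factor model, as the study recommends
```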

  15. Modelling strategies to predict the multi-scale effects of rural land management change

    NASA Astrophysics Data System (ADS)

    Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.; Marshall, M.; Reynolds, B.; Wheater, H. S.

    2011-12-01

    Changes to the rural landscape due to agricultural land management are ubiquitous, yet predicting the multi-scale effects of land management change on hydrological response remains an important scientific challenge. Much empirical research has been of little generic value due to inadequate design and funding of monitoring programmes, while the modelling issues challenge the capability of data-based, conceptual and physics-based modelling approaches. In this paper we report on a major UK research programme, motivated by a national need to quantify effects of agricultural intensification on flood risk. Working with a consortium of farmers in upland Wales, a multi-scale experimental programme (from experimental plots to 2nd order catchments) was developed to address issues of upland agricultural intensification. This provided data support for a multi-scale modelling programme, in which highly detailed physics-based models were conditioned on the experimental data and used to explore effects of potential field-scale interventions. A meta-modelling strategy was developed to represent detailed modelling in a computationally-efficient manner for catchment-scale simulation; this allowed catchment-scale quantification of potential management options. For more general application to data-sparse areas, alternative approaches were needed. Physics-based models were developed for a range of upland management problems, including the restoration of drained peatlands, afforestation, and changing grazing practices. Their performance was explored using literature and surrogate data; although subject to high levels of uncertainty, important insights were obtained, of practical relevance to management decisions. In parallel, regionalised conceptual modelling was used to explore the potential of indices of catchment response, conditioned on readily-available catchment characteristics, to represent ungauged catchments subject to land management change. 
Although based in part on speculative relationships, significant predictive power was derived from this approach. Finally, using a formal Bayesian procedure, these different sources of information were combined with local flow data in a catchment-scale conceptual model application, i.e. using small-scale physical properties, regionalised signatures of flow, and available flow measurements.
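As a minimal illustration of the kind of formal Bayesian combination described above, the sketch below merges two independent Gaussian estimates of the same parameter (say, a regionalised prior and a locally calibrated value) by precision weighting. This is a textbook identity, not the study's actual procedure, and the function name is ours.

```python
# Textbook precision-weighted fusion of two independent Gaussian estimates
# of one parameter (illustrative; not the study's actual procedure).

def combine_gaussian(mu1, var1, mu2, var2):
    """Posterior mean and variance when fusing N(mu1, var1) with N(mu2, var2)."""
    w1, w2 = 1.0 / var1, 1.0 / var2      # precisions (inverse variances)
    var = 1.0 / (w1 + w2)                # precisions add under independence
    mu = (w1 * mu1 + w2 * mu2) * var     # precision-weighted mean
    return mu, var

# e.g. a regionalised prior N(0, 1) fused with a local estimate N(2, 1)
# yields N(1, 0.5): the combined estimate is tighter than either source.
```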

  16. Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?

    DOE PAGES

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel; ...

    2016-10-20

Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud–aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities; where vertical velocities are represented in climate models, their scale dependence has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of the scale dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  17. Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel

Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud–aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities; where vertical velocities are represented in climate models, their scale dependence has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of the scale dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  18. Nonholonomic Hamiltonian Method for Meso-macroscale Simulations of Reacting Shocks

    NASA Astrophysics Data System (ADS)

    Fahrenthold, Eric; Lee, Sangyup

    2015-06-01

The seamless integration of macroscale, mesoscale, and molecular-scale models of reacting shock physics has been hindered by dramatic differences in the model formulation techniques normally used at different scales. In recent research the authors have developed the first unified discrete Hamiltonian approach to multiscale simulation of reacting shock physics. Unlike previous work, the formulation employs reacting thermomechanical Hamiltonian formulations at all scales, including the continuum, and it employs a nonholonomic modeling approach to systematically couple the models developed at all scales. Example applications of the method show meso-macroscale shock-to-detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.

  19. Should we trust build-up/wash-off water quality models at the scale of urban catchments?

    PubMed

    Bonhomme, Céline; Petrucci, Guido

    2017-01-01

Models of runoff water quality at the scale of an urban catchment usually rely on build-up/wash-off formulations obtained through small-scale experiments. Often, the physical interpretation of the model parameters, valid at the small scale, is transposed to large-scale applications. Testing different levels of spatial variability, the parameter distributions of a water quality model are obtained in this paper through a Monte Carlo Markov Chain algorithm and analyzed. The simulated variable is the total suspended solid concentration at the outlet of a periurban catchment in the Paris region (2.3 km²), for which high-frequency turbidity measurements are available. This application suggests that build-up/wash-off models applied at the catchment scale do not maintain their physical meaning, but should be considered as "black-box" models. Copyright © 2016 Elsevier Ltd. All rights reserved.
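The abstract's approach, inferring parameter distributions of a build-up/wash-off model via Markov chain Monte Carlo, can be sketched with a random-walk Metropolis sampler on synthetic data. Everything below is illustrative: the wash-off form W = c3 · i^c4 · B is a classic textbook formulation, the data are synthetic, and none of the values come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def washoff(mass, intensity, c3, c4=1.0):
    """Classic wash-off rate: W = c3 * i^c4 * B (illustrative form)."""
    return c3 * intensity**c4 * mass

# Synthetic "observations" generated with a known coefficient, plus noise
i_obs = np.array([2.0, 5.0, 10.0, 20.0])       # rainfall intensities
true_c3 = 0.05
y_obs = washoff(1.0, i_obs, true_c3) + rng.normal(0, 0.01, i_obs.size)

def log_post(c3, sigma=0.01):
    """Gaussian log-likelihood with a flat prior on c3 > 0."""
    if c3 <= 0:
        return -np.inf
    r = y_obs - washoff(1.0, i_obs, c3)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis: propose, then accept with probability
# min(1, posterior ratio); the kept states form the parameter distribution.
chain, c3 = [], 0.1
lp = log_post(c3)
for _ in range(5000):
    prop = c3 + rng.normal(0, 0.01)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        c3, lp = prop, lp_prop
    chain.append(c3)

posterior_mean = np.mean(chain[1000:])          # sits near true_c3
```

With real catchment-scale data, the paper's point is that the resulting parameter distributions need not retain the physical meaning the small-scale experiments assign them.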

  20. Continuum-Kinetic Models and Numerical Methods for Multiphase Applications

    NASA Astrophysics Data System (ADS)

    Nault, Isaac Michael

This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro- or mesoscopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation could be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper-elasto-plastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale Crystal Plasticity model.

  1. Scale Development for Perceived School Climate for Girls' Physical Activity

    ERIC Educational Resources Information Center

    Birnbaum, Amanda S.; Evenson, Kelly R.; Motl, Robert W.; Dishman, Rod K.; Voorhees, Carolyn C.; Sallis, James F.; Elder, John P.; Dowda, Marsha

    2005-01-01

    Objectives: To test an original scale assessing perceived school climate for girls' physical activity in middle school girls. Methods: Confirmatory factor analysis (CFA) and structural equation modeling (SEM). Results: CFA retained 5 of 14 original items. A model with 2 correlated factors, perceptions about teachers' and boys' behaviors,…

  2. Development and evaluation of social cognitive measures related to adolescent physical activity.

    PubMed

    Dewar, Deborah L; Lubans, David Revalds; Morgan, Philip James; Plotnikoff, Ronald C

    2013-05-01

    This study aimed to develop and evaluate the construct validity and reliability of modernized social cognitive measures relating to physical activity behaviors in adolescents. An instrument was developed based on constructs from Bandura's Social Cognitive Theory and included the following scales: self-efficacy, situation (perceived physical environment), social support, behavioral strategies, and outcome expectations and expectancies. The questionnaire was administered in a sample of 171 adolescents (age = 13.6 ± 1.2 years, females = 61%). Confirmatory factor analysis was employed to examine model-fit for each scale using multiple indices, including chi-square index, comparative-fit index (CFI), goodness-of-fit index (GFI), and the root mean square error of approximation (RMSEA). Reliability properties were also examined (ICC and Cronbach's alpha). Each scale represented a statistically sound measure: fit indices indicated each model to be an adequate-to-exact fit to the data; internal consistency was acceptable to good (α = 0.63-0.79); rank order repeatability was strong (ICC = 0.82-0.91). Results support the validity and reliability of social cognitive scales relating to physical activity among adolescents. As such, the developed scales have utility for the identification of potential social cognitive correlates of youth physical activity, mediators of physical activity behavior changes and the testing of theoretical models based on Social Cognitive Theory.
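Since the abstract reports internal consistency via Cronbach's alpha (α = 0.63-0.79), a minimal sketch of that statistic may be useful. The implementation below follows the standard definition; the function name and test data are ours, not the study's.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Perfectly consistent items (every respondent gives the same answer to
# both) yield alpha = 1; less correlated items pull alpha toward 0.
```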

  3. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.
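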

  4. Explanatory Power of Multi-scale Physical Descriptors in Modeling Benthic Indices Across Nested Ecoregions of the Pacific Northwest

    NASA Astrophysics Data System (ADS)

    Holburn, E. R.; Bledsoe, B. P.; Poff, N. L.; Cuhaciyan, C. O.

    2005-05-01

    Using over 300 R/EMAP sites in OR and WA, we examine the relative explanatory power of watershed, valley, and reach scale descriptors in modeling variation in benthic macroinvertebrate indices. Innovative metrics describing flow regime, geomorphic processes, and hydrologic-distance weighted watershed and valley characteristics are used in multiple regression and regression tree modeling to predict EPT richness, % EPT, EPT/C, and % Plecoptera. A nested design using seven ecoregions is employed to evaluate the influence of geographic scale and environmental heterogeneity on the explanatory power of individual and combined scales. Regression tree models are constructed to explain variability while identifying threshold responses and interactions. Cross-validated models demonstrate differences in the explanatory power associated with single-scale and multi-scale models as environmental heterogeneity is varied. Models explaining the greatest variability in biological indices result from multi-scale combinations of physical descriptors. Results also indicate that substantial variation in benthic macroinvertebrate response can be explained with process-based watershed and valley scale metrics derived exclusively from common geospatial data. This study outlines a general framework for identifying key processes driving macroinvertebrate assemblages across a range of scales and establishing the geographic extent at which various levels of physical description best explain biological variability. Such information can guide process-based stratification to avoid spurious comparison of dissimilar stream types in bioassessments and ensure that key environmental gradients are adequately represented in sampling designs.
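The threshold-detecting behavior of regression trees mentioned above comes from exhaustively searching predictor cut-points that minimize within-group variance. A minimal sketch of that core step, using hypothetical data (the variable names and values are illustrative, not the study's R/EMAP metrics):

```python
def sse(ys):
    """Sum of squared errors around the group mean."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(x, y):
    """Return (threshold, pooled_sse) for the single cut on predictor x
    that minimizes the summed SSE of the two resulting response groups."""
    pairs = sorted(zip(x, y))
    best = (None, sse(y))
    for i in range(1, len(pairs)):
        left = [p[1] for p in pairs[:i]]
        right = [p[1] for p in pairs[i:]]
        cost = sse(left) + sse(right)
        if cost < best[1]:
            thr = (pairs[i - 1][0] + pairs[i][0]) / 2
            best = (thr, cost)
    return best

# Hypothetical data: watershed % forest cover vs. EPT richness at 10 sites,
# with an apparent threshold response near 50% cover.
pct_forest   = [10, 20, 30, 40, 45, 55, 60, 70, 80, 90]
ept_richness = [ 3,  4,  3,  5,  4, 12, 14, 13, 15, 14]
threshold, cost = best_split(pct_forest, ept_richness)
```

A full regression tree applies this search recursively over all candidate predictors, which is what lets it expose threshold responses and interactions among multi-scale physical descriptors.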

  5. Rare b-hadron decays as probe of new physics

    NASA Astrophysics Data System (ADS)

    Lanfranchi, Gaia

    2018-05-01

    The unexpected absence of unambiguous signals of New Physics (NP) at the TeV scale at the Large Hadron Collider (LHC) today puts flavor physics at the forefront. In particular, rare decays of b-hadrons represent a unique probe to challenge the Standard Model (SM) paradigm and to test models of NP at scales much higher than those accessible by direct searches. This article reviews the status of the field.

  6. United States Temperature and Precipitation Extremes: Phenomenology, Large-Scale Organization, Physical Mechanisms and Model Representation

    NASA Astrophysics Data System (ADS)

    Black, R. X.

    2017-12-01

    We summarize results from a project focusing on regional temperature and precipitation extremes over the continental United States. Our project introduces a new framework for evaluating these extremes emphasizing their (a) large-scale organization, (b) underlying physical sources (including remote-excitation and scale-interaction) and (c) representation in climate models. Results to be reported include the synoptic-dynamic behavior, seasonality and secular variability of cold waves, dry spells and heavy rainfall events in the observational record. We also study how the characteristics of such extremes are systematically related to Northern Hemisphere planetary wave structures and thus planetary- and hemispheric-scale forcing (e.g., those associated with major El Nino events and Arctic sea ice change). The underlying physics of event onset are diagnostically quantified for different categories of events. Finally, the representation of these extremes in historical coupled climate model simulations is studied and the origins of model biases are traced using new metrics designed to assess the large-scale atmospheric forcing of local extremes.

  7. An experimental method to verify soil conservation by check dams on the Loess Plateau, China.

    PubMed

    Xu, X Z; Zhang, H W; Wang, G Q; Chen, S C; Dang, W Q

    2009-12-01

    A successful experiment with a physical model requires the necessary similarity conditions. This study presents an experimental method with a semi-scale physical model, used to monitor and verify soil conservation by check dams in a small watershed on the Loess Plateau of China. During experiments, the model-prototype ratio of geomorphic variables was kept constant under each rainfall event. Consequently, experimental data are available for verifying soil erosion processes in the field and for predicting soil loss in a model watershed with check dams. This study also proposes four similarity criteria: watershed geometry, grain size and bare land, Froude number (Fr) for rainfall events, and soil erosion in downscaled models. The efficacy of the proposed method was confirmed using these criteria in two different downscaled model experiments. The B-Model, a large-scale model, simulates the watershed prototype; the two small-scale models, D(a) and D(b), are the same size but have different erosion rates, and simulate the hydraulic processes in the B-Model. Experimental results show that when soil loss in the small-scale models was converted by multiplying by the soil-loss scale number, it was very close to that of the B-Model. Thus, with a semi-scale physical model, experiments can verify and predict soil loss in a small watershed with a check-dam system on the Loess Plateau, China.

  8. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

    One of the most significant challenges in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
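The surrogate idea can be sketched in a few dozen lines: train a small network on input-output pairs from an "expensive" model, then evaluate the cheap network in its place. The toy target below (a single exponential stress-relaxation curve) stands in for a full viscoelastic earthquake-cycle code; the architecture and hyperparameters are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expensive" model: normalized viscoelastic stress relaxation
# sigma(t) = exp(-t / tau). In the real application these targets would
# come from a full viscoelastic earthquake-cycle code.
tau = 1.0
t = np.linspace(0.0, 5.0, 200)[:, None]
y = np.exp(-t / tau)
x = t / 5.0                      # scale inputs to [0, 1] for stable training

# One-hidden-layer surrogate trained by full-batch gradient descent.
H, lr = 16, 0.2
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)
for _ in range(10000):
    h = np.tanh(x @ W1 + b1)             # hidden activations
    pred = h @ W2 + b2                   # surrogate prediction
    err = pred - y                       # residual
    # Mean-squared-error gradients, backpropagated by hand.
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = x.T @ gh / len(x); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

Once trained, evaluating the network is a handful of matrix multiplies, which is where the quoted orders-of-magnitude speedup over the original solver comes from.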

  9. An Idealized Test of the Response of the Community Atmosphere Model to Near-Grid-Scale Forcing Across Hydrostatic Resolutions

    NASA Astrophysics Data System (ADS)

    Herrington, A. R.; Reed, K. A.

    2018-02-01

    A set of idealized experiments is developed using the Community Atmosphere Model (CAM) to understand the vertical velocity response to the reductions in forcing scale that are known to occur when the horizontal resolution of the model is increased. The test consists of a set of rising bubble experiments, in which the horizontal radius of the bubble and the model grid spacing are simultaneously reduced. The test is performed with moisture, incorporating moist physics routines of varying complexity, although convection schemes are not considered. Results confirm that the vertical velocity in CAM is, to first order, proportional to the inverse of the horizontal forcing scale, which is consistent with a scale analysis of the dry equations of motion. In contrast, experiments in which the coupling time step between the moist physics routines and the dynamical core (i.e., the "physics" time step) is relaxed back to more conventional values result in severely damped vertical motion at high resolution, degrading the scaling. A set of aqua-planet simulations using different physics time steps is found to be consistent with the results of the idealized experiments.
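The first-order inverse scaling cited above follows from a standard scale analysis of the dry continuity equation (a textbook argument, not reproduced from the paper):

```latex
% Dry (Boussinesq) continuity:  \partial_x u + \partial_z w = 0.
% With horizontal scale L, depth scale H, horizontal velocity scale U:
\[
  \frac{U}{L} \sim \frac{W}{H}
  \quad\Longrightarrow\quad
  W \sim \frac{U\,H}{L} \;\propto\; \frac{1}{L}
  \quad \text{for fixed } U \text{ and } H .
\]
```

Halving the forcing scale L at fixed U and H therefore doubles the expected vertical velocity W, which is the behavior the rising-bubble tests recover.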

  10. Stability and UV completion of the Standard Model

    NASA Astrophysics Data System (ADS)

    Branchina, Vincenzo; Messina, Emanuele

    2017-03-01

    The knowledge of the electroweak vacuum stability condition is of the greatest importance for our understanding of beyond-Standard-Model physics. It is widely believed that new physics living at very high energy scales should have no impact on the stability analysis. This expectation has recently been challenged, but the results were controversial, as new physics was given in terms of non-renormalizable higher-order operators. Here we consider for the first time new physics at extremely high energy scales (say, close to the Planck scale) in terms of renormalizable operators; in other words, we consider a sort of toy UV completion of the Standard Model, and definitively show that its presence can be crucial in determining the vacuum stability condition. This result has important phenomenological consequences, as it provides useful guidance in studying beyond-Standard-Model theories. Moreover, it suggests that very popular speculations based on the so-called “criticality” of the Standard Model do not appear to be well founded.

  11. SU(2)×U(1) gauge invariance and the shape of new physics in rare B decays.

    PubMed

    Alonso, R; Grinstein, B; Martin Camalich, J

    2014-12-12

    New physics effects in B decays are routinely modeled through operators invariant under the strong and electromagnetic gauge symmetries. Assuming the scale for new physics is well above the electroweak scale, we further require invariance under the full standard model gauge symmetry group. Retaining up to dimension-six operators, we unveil new constraints between different new physics operators that are assumed to be independent in the standard phenomenological analyses. We illustrate this approach by analyzing the constraints on new physics from rare B(q) (semi-)leptonic decays.

  12. Polymer Physics of the Large-Scale Structure of Chromatin.

    PubMed

    Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2016-01-01

    We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments.

  13. Multi-scale heat and mass transfer modelling of cell and tissue cryopreservation

    PubMed Central

    Xu, Feng; Moon, Sangjun; Zhang, Xiaohui; Shao, Lei; Song, Young Seok; Demirci, Utkan

    2010-01-01

    Cells and tissues undergo complex physical processes during cryopreservation. Understanding the underlying physical phenomena is critical to improve current cryopreservation methods and to develop new techniques. Here, we describe multi-scale approaches for modelling cell and tissue cryopreservation including heat transfer at macroscale level, crystallization, cell volume change and mass transport across cell membranes at microscale level. These multi-scale approaches allow us to study cell and tissue cryopreservation. PMID:20047939

  14. Planck 2010: From the Planck Scale to the ElectroWeak Scale (Part 9)

    ScienceCinema

    None

    2018-06-27

    "Planck 2010: From the Planck Scale to the ElectroWeak Scale". The conference will be the twelfth one in a series of meetings on physics beyond the Standard Model, organized jointly by several European groups: Bonn, CERN, Ecole Polytechnique, ICTP, Madrid, Oxford, Padua, Pisa, SISSA and Warsaw as part of activities in the framework of the European network UNILHC. The main topic covered will be "Supersymmetry", with discussions on: supergravity and string phenomenology, extra dimensions, electroweak symmetry breaking, LHC and Tevatron physics, collider physics, flavor and neutrino physics, astroparticle and cosmology, gravity and holography, and strongly coupled physics and CFT.

  15. Planck 2010: From the Planck Scale to the ElectroWeak Scale (Part 5)

    ScienceCinema

    None

    2018-06-27

    "Planck 2010: From the Planck Scale to the ElectroWeak Scale". The conference will be the twelfth one in a series of meetings on physics beyond the Standard Model, organized jointly by several European groups: Bonn, CERN, Ecole Polytechnique, ICTP, Madrid, Oxford, Padua, Pisa, SISSA and Warsaw as part of activities in the framework of the European network UNILHC. The main topic covered will be "Supersymmetry", with discussions on: supergravity and string phenomenology, extra dimensions, electroweak symmetry breaking, LHC and Tevatron physics, collider physics, flavor and neutrino physics, astroparticle and cosmology, gravity and holography, and strongly coupled physics and CFT.

  16. Planck 2010: From the Planck Scale to the ElectroWeak Scale (Part 6)

    ScienceCinema

    None

    2018-06-28

    "Planck 2010: From the Planck Scale to the ElectroWeak Scale". The conference will be the twelfth one in a series of meetings on physics beyond the Standard Model, organized jointly by several European groups: Bonn, CERN, Ecole Polytechnique, ICTP, Madrid, Oxford, Padua, Pisa, SISSA and Warsaw as part of activities in the framework of the European network UNILHC. The main topic covered will be "Supersymmetry", with discussions on: supergravity and string phenomenology, extra dimensions, electroweak symmetry breaking, LHC and Tevatron physics, collider physics, flavor and neutrino physics, astroparticle and cosmology, gravity and holography, and strongly coupled physics and CFT.

  17. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    NASA Astrophysics Data System (ADS)

    Tao, W. K.

    2017-12-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, and land processes, as well as the explicit cloud-radiation and cloud-land surface interactions, are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator that uses NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented, and the use of the multi-satellite simulator to improve the representation of precipitation processes will be discussed.

  18. Correlation of experimentally measured atomic scale properties of EUV photoresist to modeling performance: an exploration

    NASA Astrophysics Data System (ADS)

    Kandel, Yudhishthir; Chandonait, Jonathan; Melvin, Lawrence S.; Marokkey, Sajan; Yan, Qiliang; Grzeskowiak, Steven; Painter, Benjamin; Denbeaux, Gregory

    2017-03-01

    Extreme ultraviolet (EUV) lithography at 13.5 nm stands at the crossroads of next-generation patterning technology for high-volume manufacturing of integrated circuits. Photoresist models, which form part of the overall pattern-transfer model for lithography, play a vital role in supporting this effort. The physics and chemistry of these resists must be understood to enable the construction of accurate models for EUV Optical Proximity Correction (OPC). In this study, we explore the possibility of improving EUV photoresist models by directly correlating model parameters with experimentally measured atomic-scale physical properties; namely, the effect of the interaction of EUV photons with photo acid generators in a standard chemically amplified EUV photoresist, and the associated electron energy loss events. Atomic-scale physical properties will be inferred from measurements carried out in the Electron Resist Interaction Chamber (ERIC). This study will use the measured physical parameters to establish a relationship with lithographically important properties, such as line edge roughness and CD variation. The data gathered from these measurements are used to construct OPC models of the resist.

  19. COMPARING AND LINKING PLUMES ACROSS MODELING APPROACHES

    EPA Science Inventory

    River plumes carry many pollutants, including microorganisms, into lakes and the coastal ocean. The physical scales of many stream and river plumes often lie between the scales for mixing zone plume models, such as the EPA Visual Plumes model, and larger-sized grid scales for re...

  20. Standard Model Background of the Cosmological Collider.

    PubMed

    Chen, Xingang; Wang, Yi; Xianyu, Zhong-Zhi

    2017-06-30

    The inflationary universe can be viewed as a "cosmological collider" with an energy of the Hubble scale, producing very massive particles and recording their characteristic signals in primordial non-Gaussianities. To utilize this collider to explore any new physics at very high scales, it is a prerequisite to understand the background signals from the particle physics standard model. In this Letter we describe the standard model background of the cosmological collider.

  1. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    NASA Astrophysics Data System (ADS)

    Huang, Dong; Liu, Yangang

    2014-12-01

    Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have begun to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored, so subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results at several orders of magnitude lower computational cost, allowing for a more realistic representation of cloud-radiation interactions in large-scale models.
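A spatial autocorrelation function of a cloud field can be computed efficiently with FFTs via the Wiener-Khinchin theorem. A minimal sketch on a synthetic cloud mask (periodic boundaries assumed; the field is illustrative, not from the paper):

```python
import numpy as np

def autocorr2d(field):
    """Normalized spatial autocorrelation of a 2D anomaly field,
    computed via FFT (Wiener-Khinchin), with periodic boundaries."""
    a = field - field.mean()               # remove the domain mean
    F = np.fft.fft2(a)
    r = np.fft.ifft2(F * np.conj(F)).real  # circular autocovariance
    return r / r[0, 0]                     # lag (0, 0) normalizes to 1

# Hypothetical cloud mask: one coherent block of cloudy pixels.
mask = np.zeros((32, 32))
mask[8:20, 8:20] = 1.0
rho = autocorr2d(mask)
```

The decay of `rho` with lag encodes the cloud field's spatial structure; a structureless (random) subgrid field would drop to near zero at lag one, while coherent cloud elements stay correlated over many grid points.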

  2. Coarse-grained, foldable, physical model of the polypeptide chain.

    PubMed

    Chakraborty, Promita; Zuckermann, Ronald N

    2013-08-13

    Although nonflexible, scaled molecular models like Pauling-Corey's and its descendants have made significant contributions in structural biology research and pedagogy, recent technical advances in 3D printing and electronics make it possible to go one step further in designing physical models of biomacromolecules: to make them conformationally dynamic. We report here the design, construction, and validation of a flexible, scaled, physical model of the polypeptide chain, which accurately reproduces the bond rotational degrees of freedom in the peptide backbone. The coarse-grained backbone model consists of repeating amide and α-carbon units, connected by mechanical bonds (corresponding to ϕ and ψ) that include realistic barriers to rotation that closely approximate those found at the molecular scale. Longer-range hydrogen-bonding interactions are also incorporated, allowing the chain to readily fold into stable secondary structures. The model is easily constructed with readily obtainable parts and promises to be a tremendous educational aid to the intuitive understanding of chain folding as the basis for macromolecular structure. Furthermore, this physical model can serve as the basis for linking tangible biomacromolecular models directly to the vast array of existing computational tools to provide an enhanced and interactive human-computer interface.

  3. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    NASA Astrophysics Data System (ADS)

    Barrios, M. I.

    2013-12-01

    Hydrological science requires the emergence of a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make it difficult to develop multiscale conceptualizations, so understanding scaling is a key issue for advancing the science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage over field experimentation of handling a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations for discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and averaging of the flow at the point scale. Results showed numerical stability issues under particular conditions, revealed the complex nature of the non-linear relationships between the models' parameters at the two scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects.
The findings of these simulations have been used by the students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved their ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
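The point-scale Green-Ampt model referenced above has a standard implicit relation for cumulative infiltration, F = Kt + ψΔθ ln(1 + F/(ψΔθ)), which is easy to solve by fixed-point iteration; averaging the point-scale response over a lognormal conductivity field then gives a crude picture of the grid-cell aggregation such a virtual experiment explores. The parameter values below are typical textbook loam-like values, not the study's:

```python
import math
import random

def green_ampt_F(t, K, psi, dtheta, tol=1e-10):
    """Cumulative infiltration F(t) from the implicit Green-Ampt relation
        F = K*t + psi*dtheta * ln(1 + F / (psi*dtheta)),
    solved by fixed-point iteration (a contraction: the iteration map's
    slope S/(S+F) is < 1).
    K: saturated hydraulic conductivity [cm/h]; psi: wetting-front suction
    head [cm]; dtheta: initial moisture deficit [-]; t: time [h]."""
    S = psi * dtheta
    F = max(K * t, tol)                       # initial guess
    while True:
        F_new = K * t + S * math.log(1.0 + F / S)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new

# Point scale, illustrative loam-like parameters.
F1h = green_ampt_F(t=1.0, K=0.65, psi=11.0, dtheta=0.3)   # cumulative [cm]
f1h = 0.65 * (1.0 + 11.0 * 0.3 / F1h)                     # rate f = K(1 + S/F)

# Grid-cell scale: average the point-scale response over a lognormal
# conductivity field, mimicking the assumed subgrid heterogeneity.
random.seed(1)
Ks = [math.exp(random.gauss(math.log(0.65), 0.5)) for _ in range(500)]
F_cell = sum(green_ampt_F(1.0, K, 11.0, 0.3) for K in Ks) / len(Ks)
```

Because F depends non-linearly on K, the grid-cell average `F_cell` generally differs from the infiltration computed at the mean conductivity, which is exactly the non-linear amplification effect the abstract points to.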

  4. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William

    2017-05-01

    A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design’s predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
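The final confidence statement can be illustrated with a generic Monte Carlo sketch: propagate sampled parameters through a capture-efficiency model and scan the operating variable until 95% of samples meet the 90% target. The saturating efficiency curve and the parameter distribution below are stand-ins, not the paper's calibrated CFD model:

```python
import math
import random

random.seed(7)

# Sample an uncertain rate parameter from an assumed calibrated posterior
# (lognormal; a stand-in for the laboratory-calibrated distributions).
ks = [math.exp(random.gauss(0.0, 0.2)) for _ in range(2000)]

def capture_confidence(Q, target=0.90):
    """Fraction of parameter samples achieving the capture target at
    operating value Q, under the toy efficiency model 1 - exp(-k*Q)."""
    hits = sum(1 for k in ks if 1.0 - math.exp(-k * Q) >= target)
    return hits / len(ks)

# Scan the operating variable until 95% of samples meet 90% capture.
Q = 0.0
while capture_confidence(Q) < 0.95:
    Q += 0.05
```

The returned `Q` is the smallest scanned operating value at which the 90% capture target holds for at least 95% of the sampled parameter draws, i.e. a 95%-confidence design margin rather than a best-estimate one.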

  5. Balance Confidence: A Predictor of Perceived Physical Function, Perceived Mobility, and Perceived Recovery 1 Year After Inpatient Stroke Rehabilitation.

    PubMed

    Torkia, Caryne; Best, Krista L; Miller, William C; Eng, Janice J

    2016-07-01

    To estimate the effect of balance confidence measured at 1 month poststroke rehabilitation on perceived physical function, mobility, and stroke recovery 12 months later. Longitudinal study (secondary analysis). Multisite, community-based. Community-dwelling individuals (N=69) with stroke living in a home setting. Not applicable. Activities-specific Balance Confidence scale; physical function and mobility subscales of the Stroke Impact Scale 3.0; and a single item from the Stroke Impact Scale for perceived recovery. Balance confidence at 1 month postdischarge from inpatient rehabilitation predicts perceived physical function (model 1), mobility (model 2), and recovery (model 3) 12 months later after adjusting for important covariates. The covariates included in model 1 were age, sex, basic mobility, and depression. The covariates selected for model 2 were age, sex, balance capacity, and anxiety, and the covariates in model 3 were age, sex, walking capacity, and social support. The amount of variance in perceived physical function, perceived mobility, and perceived recovery that balance confidence accounted for was 12%, 9%, and 10%, respectively. After discharge from inpatient rehabilitation poststroke, balance confidence predicts individuals' perceived physical function, mobility, and recovery 12 months later. There is a need to address balance confidence at discharge from inpatient stroke rehabilitation. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  6. Heat Source Characterization In A TREAT Fuel Particle Using Coupled Neutronics Binary Collision Monte-Carlo Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunert, Sebastian; Schwen, Daniel; Ghassemi, Pedram

    This work presents a multi-physics, multi-scale approach to modeling the Transient Test Reactor (TREAT) currently being prepared for restart at the Idaho National Laboratory. TREAT fuel is made up of microscopic fuel grains (r ≈ 20 µm) dispersed in a graphite matrix. The novelty of this work is in coupling a binary collision Monte-Carlo (BCMC) model to the finite-element-based code Moose for solving a microscopic heat-conduction problem whose driving source is provided by the BCMC model tracking fission fragment energy deposition. This microscopic model is driven by a transient, engineering-scale neutronics model coupled to an adiabatic heating model. The macroscopic model provides local power densities and neutron energy spectra to the microscopic model. Currently, no feedback from the microscopic to the macroscopic model is considered. TREAT transient 15 is used to exemplify the capabilities of the multi-physics, multi-scale model, and it is found that the average fuel grain temperature differs from the average graphite temperature by 80 K despite the low-power transient. The large temperature difference has strong implications for the Doppler feedback a potential LEU TREAT core would see, and it underpins the need for multi-physics, multi-scale modeling of a TREAT LEU core.

  7. Influence of a health-related physical fitness model on students' physical activity, perceived competence, and enjoyment.

    PubMed

    Fu, You; Gao, Zan; Hannon, James; Shultz, Barry; Newton, Maria; Sibthorp, Jim

    2013-12-01

    This study was designed to explore the effects of a health-related physical fitness physical education model on students' physical activity, perceived competence, and enjoyment. 61 students (25 boys, 36 girls; M age = 12.6 yr., SD = 0.6) were assigned to two groups (health-related physical fitness physical education group, and traditional physical education group), and participated in one 50-min. weekly basketball class for 6 wk. Students' in-class physical activity was assessed using NL-1000 pedometers. The physical subscale of the Perceived Competence Scale for Children was employed to assess perceived competence, and children's enjoyment was measured using the Sport Enjoyment Scale. The findings suggest that students in the intervention group increased their perceived competence, enjoyment, and physical activity over a 6-wk. intervention, while the comparison group simply increased physical activity over time. Children in the intervention group had significantly greater enjoyment.

  8. Scaling laws for ignition at the National Ignition Facility from first principles.

    PubMed

    Cheng, Baolian; Kwan, Thomas J T; Wang, Yi-Ming; Batha, Steven H

    2013-10-01

    We have developed an analytical physics model from fundamental physics principles and used the reduced one-dimensional model to derive a thermonuclear ignition criterion and implosion energy scaling laws applicable to inertial confinement fusion capsules. The scaling laws relate the fuel pressure and the minimum implosion energy required for ignition to the peak implosion velocity and the equation of state of the pusher and the hot fuel. When a specific low-entropy adiabat path is used for the cold fuel, our scaling laws recover the ignition threshold factor dependence on the implosion velocity, but when a high-entropy adiabat path is chosen, the model agrees with recent measurements.

  9. Unitarity and predictiveness in new Higgs inflation

    NASA Astrophysics Data System (ADS)

    Fumagalli, Jacopo; Mooij, Sander; Postma, Marieke

    2018-03-01

    In new Higgs inflation the Higgs kinetic terms are non-minimally coupled to the Einstein tensor, allowing the Higgs field to play the role of the inflaton. The new interaction is non-renormalizable, and the model only describes physics below some cutoff scale. Even if the unknown UV physics does not affect the tree level inflaton potential significantly, it may still enter at loop level and modify the running of the Standard Model (SM) parameters. This is analogous to what happens in the original model for Higgs inflation. A key difference, though, is that in new Higgs inflation the inflationary predictions are sensitive to this running. Thus the boundary conditions at the EW scale as well as the unknown UV completion may leave a signature on the inflationary parameters. However, this dependence can be evaded if the kinetic terms of the SM fermions and gauge fields are non-minimally coupled to gravity as well. Our approach to determine the model's UV dependence and the connection between low and high scale physics can be used in any particle physics model of inflation.

  10. A paradigm for modeling and computation of gas dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Kun; Liu, Chang

    2017-02-01

In the continuum flow regime, the Navier-Stokes (NS) equations are usually used for the description of gas dynamics, while the Boltzmann equation is applied to rarefied flow. These two equations are based on distinct modeling scales for flow physics. Fortunately, owing to the separation between the hydrodynamic and kinetic scales, both the Navier-Stokes equations and the Boltzmann equation are applicable in their respective domains. In real science and engineering applications, however, there may be no such distinct scale separation. For example, around a hypersonic flying vehicle, the flow physics in different regions may correspond to different regimes, and the local Knudsen number can vary by several orders of magnitude. With such a variation of flow physics, a governing equation that transitions continuously from kinetic Boltzmann modeling to hydrodynamic Navier-Stokes dynamics would in principle be needed for an efficient description. However, because of the difficulty of directly modeling flow physics at scales between the kinetic and hydrodynamic ones, there is essentially no reliable theory or valid governing equation covering the whole transition regime, short of resolving the flow physics all the way down to the mean-free-path scale, as in direct Boltzmann solvers and the Direct Simulation Monte Carlo (DSMC) method. In fact, the exact scale at which the NS equations remain valid is an unresolved problem, especially at small Reynolds numbers. Computational fluid dynamics (CFD) is usually based on the numerical solution of partial differential equations (PDEs), and it targets recovering the exact solution of the PDEs as the mesh size and time step converge to zero. This methodology can hardly be applied to solve multiple-scale problems efficiently, because no single complete PDE describes flow physics across a continuous variation of scales. 
For the study of non-equilibrium flow, direct modeling methods such as DSMC, particle-in-cell, and smoothed particle hydrodynamics play a dominant role by incorporating the flow physics directly into the algorithm construction. It is fully legitimate to combine modeling and computation without going through the process of constructing PDEs. In other words, CFD research is not only about obtaining numerical solutions of governing equations but about modeling flow dynamics as well. This methodology leads to the unified gas-kinetic scheme (UGKS) for flow simulation in all flow regimes. Based on UGKS, the boundary of validity of the Navier-Stokes equations can be quantitatively evaluated. The combination of modeling and computation provides a paradigm for the description of multiscale transport processes.
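The regime boundaries discussed in the abstract are conventionally drawn from the local Knudsen number Kn = λ/L. A minimal sketch using the commonly quoted rule-of-thumb thresholds (the thresholds and the numerical examples are illustrative textbook values, not taken from this paper):

```python
def knudsen_regime(mean_free_path_m, length_scale_m):
    """Classify a flow regime by local Knudsen number Kn = lambda / L.

    The cutoffs are the usual rule-of-thumb values; real flows
    transition smoothly between regimes, which is the paper's point.
    """
    kn = mean_free_path_m / length_scale_m
    if kn < 0.001:
        regime = "continuum (Navier-Stokes valid)"
    elif kn < 0.1:
        regime = "slip flow (NS with slip boundary conditions)"
    elif kn < 10.0:
        regime = "transition (kinetic modeling, e.g. UGKS/DSMC)"
    else:
        regime = "free molecular"
    return kn, regime

# Air at sea level (lambda ~ 68 nm) around a 1 m vehicle: continuum.
print(knudsen_regime(68e-9, 1.0))
# A rarefied gas (lambda ~ 1 mm) around a 5 mm probe: transition regime.
print(knudsen_regime(1e-3, 5e-3))
```

Around a single hypersonic vehicle both cases can occur simultaneously in different regions, which is why a unified scheme is needed.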

  11. A multi-scale model for geared transmission aero-thermodynamics

    NASA Astrophysics Data System (ADS)

    McIntyre, Sean M.

A multi-scale, multi-physics computational tool for the simulation of high-performance gearbox aero-thermodynamics was developed and applied to equilibrium and pathological loss-of-lubrication performance simulation. The physical processes at play in these systems include multiphase compressible flow of the air and lubricant within the gearbox, meshing kinematics and tribology, as well as heat transfer by conduction and by free and forced convection. These physics are coupled across their representative space and time scales in the computational framework developed in this dissertation. These scales span eight orders of magnitude, from the thermal response of the full gearbox O(10^0 m; 10^2 s), through effects at the tooth passage time scale O(10^-2 m; 10^-4 s), down to tribological effects on the meshing gear teeth O(10^-6 m; 10^-6 s). Direct numerical simulation of these coupled physics and scales is intractable. Accordingly, a scale-segregated simulation strategy was developed by partitioning and treating the contributing physical mechanisms as sub-problems, each with associated space and time scales and appropriate coupling mechanisms. These are: (1) the long time scale thermal response of the system, (2) the multiphase (air, droplets, and film) aerodynamic flow and convective heat transfer within the gearbox, (3) the high-frequency, time-periodic thermal effects of gear tooth heating while in mesh and its subsequent cooling through the rest of rotation, and (4) meshing effects including tribology and contact mechanics. The overarching goal of this dissertation was to develop software and analysis procedures for gearbox loss-of-lubrication performance. To accommodate these four physical effects and their coupling, each is treated in the CFD code as a sub-problem. These physics modules are coupled algorithmically. 
Specifically, the high-frequency conduction analysis derives its local heat transfer coefficient and near-wall air temperature boundary conditions from a quasi-steady cyclic-symmetric simulation of the internal flow. This high-frequency conduction solution is coupled directly with a model for the meshing friction, developed by a collaborator, which was adapted for use in a finite-volume CFD code. The local surface heat flux on solid surfaces is calculated by time-averaging the heat flux in the high-frequency analysis. This serves as a fixed-flux boundary condition in the long time scale conduction module. The temperature distribution from this long time scale heat transfer calculation serves as a boundary condition for the internal convection simulation, and as the initial condition for the high-frequency heat transfer module. Using this multi-scale model, simulations were performed for equilibrium and loss-of-lubrication operation of the NASA Glenn Research Center test stand. Results were compared with experimental measurements. In addition to the multi-scale model itself, several other specific contributions were made. Eulerian models for droplets and wall-films were developed and implemented in the CFD code. A novel approach to retaining liquid film on the solid surfaces, and strategies for its mass exchange with droplets, were developed and verified. Models for interfacial transfer between droplets and wall-film were implemented, and include the effects of droplet deposition, splashing, bouncing, as well as film breakup. These models were validated against airfoil data. To mitigate the observed slow convergence of CFD simulations of the enclosed aerodynamic flows within gearboxes, Fourier stability analysis was applied to the SIMPLE-C fractional-step algorithm. From this, recommendations to accelerate the convergence rate through enhanced pressure-velocity coupling were made. These were shown to be effective. 
A fast-running finite-volume reduced-order model of the gearbox aero-thermodynamics was developed and coupled with the tribology model to investigate the sensitivity of loss-of-lubrication predictions to various model and physical parameters. This sensitivity study was instrumental in guiding efforts toward improving the accuracy of the multi-scale model without undue increase in computational cost. In addition, the reduced-order model is now used extensively by a collaborator in tribology model development and testing. Experimental measurements of high-speed gear windage in partially- and fully-shrouded configurations were performed to address the paucity of available validation data. This measurement program provided measurements of windage loss for a gear of design-relevant size and operating speed, as well as guidance for increasing the accuracy of future measurements.

  12. Effects of pore-scale physics on uranium geochemistry in Hanford sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Qinhong; Ewing, Robert P.

Overall, this work examines a key scientific issue, mass-transfer limitation at the pore scale, using both new instruments with high spatial resolution and new conceptual and modeling paradigms. The complementary laboratory and numerical approaches connect pore-scale physics to macroscopic measurements, providing a previously elusive scale integration. This exploratory research project produced five peer-reviewed journal publications and eleven scientific presentations. The work provides new scientific understanding, allowing the DOE to better incorporate coupled physical and chemical processes into decision making for environmental remediation and long-term stewardship.

  13. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Dong; Liu, Yangang

    2014-12-18

Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have started to address subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored, so subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach faithfully reproduces Monte Carlo 3D simulation results at several orders of magnitude lower computational cost, allowing for a more realistic representation of cloud-radiation interactions in large-scale models.
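The central statistic in this approach is the spatial autocorrelation function of the cloud field. As an illustration of the statistic itself (the synthetic field and the simple along-row estimator below are a generic sketch, not the authors' parameterization):

```python
import numpy as np

def spatial_autocorrelation(field, max_lag):
    """Estimate the along-x autocorrelation function of a 2-D field.

    Returns r(lag) for lag = 0..max_lag, with r(0) == 1 by construction.
    """
    f = field - field.mean()
    var = (f * f).mean()
    acf = []
    for lag in range(max_lag + 1):
        if lag == 0:
            acf.append(1.0)
        else:
            # Covariance between points separated by `lag` cells in x.
            cov = (f[:, :-lag] * f[:, lag:]).mean()
            acf.append(cov / var)
    return np.array(acf)

# Synthetic "cloud" field: white noise smoothed by a width-5 moving
# average, which imposes a correlation length of a few grid cells.
rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
kernel = np.ones(5) / 5.0
smooth = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), 1, noise)
acf = spatial_autocorrelation(smooth, 10)
# acf decays from 1 toward ~0 over roughly the kernel width.
```

In the parameterization context, such a function summarizes how cloudy cells cluster, which is the structural information a plain subgrid PDF discards.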

  14. Progress on Implementing Additional Physics Schemes into ...

    EPA Pesticide Factsheets

    The U.S. Environmental Protection Agency (USEPA) has a team of scientists developing a next generation air quality modeling system employing the Model for Prediction Across Scales – Atmosphere (MPAS-A) as its meteorological foundation. Several preferred physics schemes and options available in the Weather Research and Forecasting (WRF) model are regularly used by the USEPA with the Community Multiscale Air Quality (CMAQ) model to conduct retrospective air quality simulations. These include the Pleim surface layer, the Pleim-Xiu (PX) land surface model with fractional land use for a 40-class National Land Cover Database (NLCD40), the Asymmetric Convective Model 2 (ACM2) planetary boundary layer scheme, the Kain-Fritsch (KF) convective parameterization with subgrid-scale cloud feedback to the radiation schemes and a scale-aware convective time scale, and analysis nudging four-dimensional data assimilation (FDDA). All of these physics modules and options have already been implemented by the USEPA into MPAS-A v4.0, tested, and evaluated (please see the presentations of R. Gilliam and R. Bullock at this workshop). Since the release of MPAS v5.1 in May 2017, work has been under way to implement these preferred physics options into the MPAS-A v5.1 code. Test simulations of a summer month are being conducted on a global variable resolution mesh with the higher resolution cells centered over the contiguous United States. Driving fields for the FDDA and soil nudging are

  15. Evaluating CONUS-Scale Runoff Simulation across the National Water Model WRF-Hydro Implementation to Disentangle Regional Controls on Streamflow Generation and Model Error Contribution

    NASA Astrophysics Data System (ADS)

    Dugger, A. L.; Rafieeinasab, A.; Gochis, D.; Yu, W.; McCreight, J. L.; Karsten, L. R.; Pan, L.; Zhang, Y.; Sampson, K. M.; Cosgrove, B.

    2016-12-01

Evaluation of physically-based hydrologic models applied across large regions can provide insight into dominant controls on runoff generation and how these controls vary with climatic, biological, and geophysical setting. To make this leap, however, we need to combine knowledge of regional forcing skill, model parameter and physics assumptions, and hydrologic theory. If we can successfully do this, we also gain information on how well our current approximations of these dominant physical processes are represented in continental-scale models. In this study, we apply this diagnostic approach to a 5-year retrospective implementation of the WRF-Hydro community model configured for the U.S. National Weather Service's National Water Model (NWM). The NWM is a water prediction model in operation over the contiguous U.S. since summer 2016, providing real-time estimates and forecasts out to 30 days of streamflow across 2.7 million stream reaches, as well as distributed snowpack, soil moisture, and evapotranspiration at 1-km resolution. The WRF-Hydro system not only permits the standard simulation of vertical energy and water fluxes common in continental-scale models, but also augments these processes with lateral redistribution of surface and subsurface water, simple groundwater dynamics, and channel routing. We evaluate 5 years of NLDAS-2 precipitation forcing and WRF-Hydro streamflow and evapotranspiration simulation across the contiguous U.S. at a range of spatial (gage, basin, ecoregion) and temporal (hourly, daily, monthly) scales and look for consistencies and inconsistencies in performance in terms of bias, timing, and extremes. Leveraging results from other CONUS-scale hydrologic evaluation studies, we translate our performance metrics into a matrix of likely dominant process controls and error sources (forcings, parameter estimates, and model physics). 
We test our hypotheses in a series of controlled model experiments on a subset of representative basins from distinct "problem" environments (Southeast U.S. Coastal Plain, Central and Coastal Texas, Northern Plains, and Arid Southwest). The results from these longer-term model diagnostics will inform future improvements in forcing bias correction, parameter calibration, and physics developments in the National Water Model.

  16. Multi-scale Modeling of Chromosomal DNA in Living Cells

    NASA Astrophysics Data System (ADS)

    Spakowitz, Andrew

    The organization and dynamics of chromosomal DNA play a pivotal role in a range of biological processes, including gene regulation, homologous recombination, replication, and segregation. Establishing a quantitative theoretical model of DNA organization and dynamics would be valuable in bridging the gap between the molecular-level packaging of DNA and genome-scale chromosomal processes. Our research group utilizes analytical theory and computational modeling to establish a predictive theoretical model of chromosomal organization and dynamics. In this talk, I will discuss our efforts to develop multi-scale polymer models of chromosomal DNA that are both sufficiently detailed to address specific protein-DNA interactions while capturing experimentally relevant time and length scales. I will demonstrate how these modeling efforts are capable of quantitatively capturing aspects of behavior of chromosomal DNA in both prokaryotic and eukaryotic cells. This talk will illustrate that capturing dynamical behavior of chromosomal DNA at various length scales necessitates a range of theoretical treatments that accommodate the critical physical contributions that are relevant to in vivo behavior at these disparate length and time scales. National Science Foundation, Physics of Living Systems Program (PHY-1305516).

  17. Evaluating the Global Precipitation Measurement mission with NOAA/NSSL Multi-Radar Multisensor: current status and future directions.

    NASA Astrophysics Data System (ADS)

    Kirstetter, P. E.; Petersen, W. A.; Gourley, J. J.; Kummerow, C. D.; Huffman, G. J.; Turk, J.; Tanelli, S.; Maggioni, V.; Anagnostou, E. N.; Hong, Y.; Schwaller, M.

    2016-12-01


  18. Visualizing and measuring flow in shale matrix using in situ synchrotron X-ray microtomography

    NASA Astrophysics Data System (ADS)

    Kohli, A. H.; Kiss, A. M.; Kovscek, A. R.; Bargar, J.

    2017-12-01

Natural gas production via hydraulic fracturing of shale has proliferated on a global scale, yet recovery factors remain low because production strategies are not based on the physics of flow in shale reservoirs. In particular, the physical mechanisms and time scales of depletion from the matrix into the simulated fracture network are not well understood, limiting the potential to optimize operations and reduce environmental impacts. Studying matrix flow is challenging because shale is heterogeneous and has porosity from the μm- to nm-scale. Characterizing nm-scale flow paths requires electron microscopy, but the limited field of view does not capture the connectivity and heterogeneity observed at the mm-scale. Therefore, pore-scale models must link to larger volumes to simulate flow on the reservoir scale. Upscaled models must honor the physics of flow, but at present there is a gap between cm-scale experiments and μm-scale simulations based on ex situ image data. To address this gap, we developed a synchrotron X-ray microscope with an in situ cell to simultaneously visualize and measure flow. We perform coupled flow and microtomography experiments on mm-scale samples from the Barnett, Eagle Ford and Marcellus reservoirs. We measure permeability at various pressures via the pulse-decay method to quantify effective stress dependence and the relative contributions of advective and diffusive mechanisms. Images at each pressure step document how microfractures, interparticle pores, and organic matter change with effective stress. Linking changes in the pore network to flow measurements motivates a physical model for depletion. To directly visualize flow, we measure imbibition rates using inert, high-atomic-number gases and image periodically with monochromatic beam. By imaging above/below X-ray absorption edges, we magnify the signal of gas saturation in μm-scale porosity and nm-scale, sub-voxel features. Comparing vacuumed and saturated states yields image-based measurements of the distribution and time scales of imbibition. We also characterize nm-scale structure via focused ion beam tomography to quantify sub-voxel porosity and connectivity. The multi-scale image and flow data are used to develop a framework to upscale and benchmark pore-scale models.
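The permeability measurements rest on Darcy's law. A simplified steady-state sketch is given below; note that the pulse-decay method cited in the abstract instead infers k from the transient equilibration of a pressure pulse, which suits ultra-tight shale where steady flow is impractical (sample dimensions and fluid properties here are hypothetical):

```python
def darcy_permeability(q_m3s, mu_pa_s, length_m, area_m2, dp_pa):
    """Steady-state Darcy's law: k = Q * mu * L / (A * dP), in m^2.

    q_m3s    : volumetric flow rate through the sample
    mu_pa_s  : fluid dynamic viscosity
    length_m : sample length along the flow direction
    area_m2  : cross-sectional area
    dp_pa    : pressure drop across the sample
    """
    return q_m3s * mu_pa_s * length_m / (area_m2 * dp_pa)

# Methane (mu ~ 1.1e-5 Pa s) through a 5 mm plug of 1 cm^2 area,
# with a 1 MPa pressure drop and a tiny measured flow rate:
k = darcy_permeability(1e-12, 1.1e-5, 5e-3, 1e-4, 1e6)
# k ~ 5.5e-22 m^2, i.e. sub-nanodarcy, typical of shale matrix.
```

The extremely small k is why transient (pulse-decay) rather than steady-state measurements are used on shale in practice.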

  19. EMBAYMENT CHARACTERISTIC TIME AND BIOLOGY VIA TIDAL PRISM MODEL

    EPA Science Inventory

    Transport time scales in water bodies are classically based on their physical and chemical aspects rather than on their ecological and biological character. The direct connection between a physical time scale and ecological effects has to be investigated in order to quantitativel...

  20. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts in adaptive core simulation and reduced order modeling algorithms and extends them toward coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, uncertainty/sensitivity/target accuracy assessment and data assimilation can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. 
Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty in single physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate-based UQ approach is developed, applied, and compared against the KL approach and the brute-force Monte Carlo (MC) approach. In addition, an efficient Data Assimilation (DA) algorithm is developed to assimilate information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex, high dimensional solution space with a lower dimensional subspace makes the sampling required for DA tractable for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further reduce the uncertainty in these sources. In this dissertation a subspace-based, gradient-free, nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE 6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). 
Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL progression problem 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL progression problem 9), modeled using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
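The subspace-construction step underlying such reduced order models can be illustrated with a snapshot-based proper orthogonal decomposition, which is closely related to the Karhunen-Loeve expansion. A generic sketch (not the ROMUSE implementation; the synthetic data are invented for the example):

```python
import numpy as np

def build_reduced_subspace(snapshots, energy=0.99):
    """Construct a low-dimensional basis from response snapshots via SVD.

    snapshots : (n_dof, n_samples) matrix of model evaluations.
    Keeps the leading left singular vectors that capture the requested
    fraction of the total variance -- the "elite" degrees of freedom.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    rank = int(np.searchsorted(cum, energy)) + 1
    return U[:, :rank], mean

# Synthetic high-dimensional responses that actually live on a
# 3-dimensional subspace of a 500-dimensional space.
rng = np.random.default_rng(1)
basis = rng.standard_normal((500, 3))
coeffs = rng.standard_normal((3, 40))
snapshots = basis @ coeffs
U, mean = build_reduced_subspace(snapshots, energy=0.99)
# U recovers an orthonormal basis for the true 3-D subspace.
```

Forward UQ then propagates uncertainty only through the coefficients of this basis, which is what makes sampling tractable for high dimensional coupled models.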

  1. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    NASA Astrophysics Data System (ADS)

    Nawalany, Marek; Sinicyn, Grzegorz

    2015-09-01

An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity; it is a brief summary of the conventional upscaling approach, with some attention paid to recently emerged approaches. The focus on essential aspects is intended as an advantage over the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of the hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system, and (iv) continuity/granularity of natural and man-related variables of the groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - the scale of pores; meso-scale - the scale of a laboratory sample; macro-scale - the scale of typical blocks in numerical models of groundwater flow; local-scale - the scale of an aquifer/aquitard; and regional-scale - the scale of a series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim of physically justifying deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well founded, it is a good candidate for upscaling block-scale models to local-scale models, and likewise for upscaling local-scale models to regional-scale models. The latest results in downscaling from block-scale to sample-scale are also briefly referred to.
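The sample-scale to block-scale upscaling discussed above can be illustrated with the classical bounds for a layered medium (a textbook sketch of the conventional approach, not this paper's specific procedure):

```python
def upscaled_conductivity(k_layers, thicknesses):
    """Classical bounds for upscaling layered hydraulic conductivity.

    Arithmetic (thickness-weighted) mean: effective K for flow
    parallel to the layering. Harmonic mean: effective K for flow
    perpendicular to it. Any valid block-scale value lies between.
    """
    total = sum(thicknesses)
    arithmetic = sum(k * d for k, d in zip(k_layers, thicknesses)) / total
    harmonic = total / sum(d / k for k, d in zip(k_layers, thicknesses))
    return arithmetic, harmonic

# Sand (1e-4 m/s) over clay (1e-8 m/s), equal thickness:
k_par, k_perp = upscaled_conductivity([1e-4, 1e-8], [1.0, 1.0])
# Parallel flow is dominated by the sand; perpendicular by the clay.
```

The four-orders-of-magnitude spread between the two bounds in this example is exactly why block-scale conductivity depends on flow direction and cannot be replaced by a single averaged sample-scale value.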

  2. Mathematical and physical modeling of thermal stratification phenomena in steel ladles

    NASA Astrophysics Data System (ADS)

    Putan, V.; Vilceanu, L.; Socalici, A.; Putan, A.

    2018-01-01

By means of CFD numerical modeling, a systematic analysis of the similarity between steel ladles and a hot-water model with regard to natural convection phenomena was performed. The key similarity criteria were found to depend on the dimensionless numbers Fr and βΔT. These criteria suggest that hot-water models with scales in the range between 1/5 and 1/3, using hot water at a temperature of 45 °C or higher, are appropriate for simulating natural convection in steel ladles. With this physical model, thermal stratification phenomena due to natural convection in steel ladles were investigated. By controlling the cooling intensity of the water model to correspond to the heat loss rate of steel ladles, which is governed by Fr and βΔT, the temperature profiles measured in the water bath of the model were used to deduce the extent of thermal stratification in the liquid steel bath in the ladles. Comparisons between mathematically simulated temperature profiles in the prototype steel ladles and those physically simulated by scaling up the measured temperature profiles in the water model showed good agreement. This proved that it is feasible to use a 1/5-scale water model with 45 °C hot water to simulate natural convection in steel ladles. Therefore, besides mathematical CFD models, the physical hot-water model provides an additional means of studying fluid flow and heat transfer in steel ladles.
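Froude similarity, one of the criteria the study identifies, fixes how velocities and times translate between the water model and the full-scale ladle. A sketch of the standard Fr-matching relations (the paper's full criterion also involves βΔT, which is not captured here):

```python
import math

def froude_scaling(length_ratio):
    """Standard Froude-number similarity: Fr = v / sqrt(g * L).

    Matching Fr between model and prototype with geometric scale
    length_ratio = L_model / L_prototype gives:
      velocity ratio v_m/v_p = sqrt(length_ratio)
      time ratio     t_m/t_p = sqrt(length_ratio)
    """
    velocity_ratio = math.sqrt(length_ratio)
    time_ratio = math.sqrt(length_ratio)
    return velocity_ratio, time_ratio

v_ratio, t_ratio = froude_scaling(1 / 5)
# A 1/5-scale model runs at ~0.45x the prototype velocities and times.
```

This is why stratification that develops over minutes in the ladle appears on a proportionally compressed time scale in the 1/5-scale water bath.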

  3. Overview of the Meso-NH model version 5.4 and its applications

    NASA Astrophysics Data System (ADS)

    Lac, Christine; Chaboureau, Jean-Pierre; Masson, Valéry; Pinty, Jean-Pierre; Tulet, Pierre; Escobar, Juan; Leriche, Maud; Barthe, Christelle; Aouizerats, Benjamin; Augros, Clotilde; Aumond, Pierre; Auguste, Franck; Bechtold, Peter; Berthet, Sarah; Bielli, Soline; Bosseur, Frédéric; Caumont, Olivier; Cohard, Jean-Martial; Colin, Jeanne; Couvreux, Fleur; Cuxart, Joan; Delautier, Gaëlle; Dauhut, Thibaut; Ducrocq, Véronique; Filippi, Jean-Baptiste; Gazen, Didier; Geoffroy, Olivier; Gheusi, François; Honnert, Rachel; Lafore, Jean-Philippe; Lebeaupin Brossier, Cindy; Libois, Quentin; Lunet, Thibaut; Mari, Céline; Maric, Tomislav; Mascart, Patrick; Mogé, Maxime; Molinié, Gilles; Nuissier, Olivier; Pantillon, Florian; Peyrillé, Philippe; Pergaud, Julien; Perraud, Emilie; Pianezze, Joris; Redelsperger, Jean-Luc; Ricard, Didier; Richard, Evelyne; Riette, Sébastien; Rodier, Quentin; Schoetter, Robert; Seyfried, Léo; Stein, Joël; Suhre, Karsten; Taufour, Marie; Thouron, Odile; Turner, Sandra; Verrelle, Antoine; Vié, Benoît; Visentin, Florian; Vionnet, Vincent; Wautelet, Philippe

    2018-05-01

This paper presents the Meso-NH model version 5.4. Meso-NH is an atmospheric non-hydrostatic research model that is applied to a broad range of resolutions, from synoptic to turbulent scales, and is designed for studies of physics and chemistry. It is a limited-area model employing advanced numerical techniques, including monotonic advection schemes for scalar transport and fourth-order centered or odd-order WENO advection schemes for momentum. The model includes state-of-the-art physics parameterization schemes that are important to represent convective-scale phenomena and turbulent eddies, as well as flows at larger scales. In addition, Meso-NH has been expanded to provide capabilities for a range of Earth system prediction applications such as chemistry and aerosols, electricity and lightning, hydrology, wildland fires, volcanic eruptions, and cyclones with ocean coupling. Here, we present the main innovations to the dynamics and physics of the code since the pioneering paper of Lafore et al. (1998) and provide an overview of recent applications and couplings.

  4. New physics at the TeV scale

    NASA Astrophysics Data System (ADS)

    Chakdar, Shreyashi

The Standard Model of particle physics is assumed to be a low-energy effective theory, with new physics theoretically motivated to lie around the TeV scale. This thesis presents theories with new physics beyond the Standard Model at the TeV scale that are testable at colliders. The work in chapters 2, 3 and 5 presents models incorporating different approaches to enlarging the Standard Model gauge group to a grand unified symmetry, each model offering unique signatures at colliders. The study of leptoquark gauge bosons in the TopSU(5) model in chapter 2 showed that their discovery mass range extends up to 1.5 TeV at the 14 TeV LHC with a luminosity of 100 fb^-1. In chapter 3 we studied the collider phenomenology of TeV-scale mirror fermions in the Left-Right Mirror model, finding that the reach for the mirror quarks extends up to 750 GeV at the 14 TeV LHC with 300 fb^-1 of luminosity. In chapter 4 we enlarged the bosonic symmetry to a fermi-bose symmetry, i.e., supersymmetry, and showed that SUSY with non-universalities in gaugino or scalar masses within a high-scale SUGRA setup can still be accessible at the 14 TeV LHC. In chapter 5, we performed a study for the e+e- collider and found that precise measurements of the Higgs boson mass splittings, down to ~100 MeV, may be possible at high luminosity at the International Linear Collider (ILC). In chapter 6 we showed that the experimental data on neutrino masses and mixings are consistent with the proposed 4/5-parameter Dirac neutrino models, yielding a solution for the neutrino masses with an inverted mass hierarchy and a large CP-violating phase delta, which can thus be tested experimentally. Chapter 7 of the thesis incorporates a warm dark matter candidate in the context of a two Higgs doublet model. The model has several testable consequences at colliders, with the charged scalar and pseudoscalar lying in the few-hundred-GeV mass range. 
This thesis presents an endeavor to study beyond-Standard-Model physics at the TeV scale with testable signals at colliders.

  5. Average vs item response theory scores: an illustration using neighbourhood measures in relation to physical activity in adults with arthritis.

    PubMed

    Mielenz, T J; Callahan, L F; Edwards, M C

    2017-01-01

    Our study had two main objectives: 1) to determine whether perceived neighbourhood physical features are associated with physical activity levels in adults with arthritis; and 2) to determine whether the conclusions are more precise when item response theory (IRT) scores are used instead of average scores for the perceived neighbourhood physical features scales. Information on health outcomes, neighbourhood characteristics, and physical activity levels was collected using a telephone survey of 937 participants with self-reported arthritis. Neighbourhood walkability and aesthetic features and physical activity levels were measured by self-report. Adjusted proportional odds models were constructed separately for each neighbourhood physical features scale. We found that among adults with arthritis, poorer perceived neighbourhood physical features (both walkability and aesthetics) are associated with decreased physical activity levels compared to better perceived neighbourhood features. This association was only observed in our adjusted models when IRT scoring was employed for the neighbourhood physical feature scales (walkability scale: odds ratio [OR] 1.20, 95% confidence interval [CI] 1.02, 1.41; aesthetics scale: OR 1.32, 95% CI 1.09, 1.62), not when average scoring was used (walkability scale: OR 1.14, 95% CI 1.00, 1.30; aesthetics scale: OR 1.16, 95% CI 1.00, 1.36). In adults with arthritis, those reporting poorer walkability and aesthetics features were found to have decreased physical activity levels compared to those reporting better features when IRT scores were used, but not when average scores were used. This study may inform public health interventions on the physical environment implemented to increase physical activity, especially since arthritis prevalence is expected to approach 20% of the population by 2020. Based on NIH initiatives, future health research will utilize IRT scores.
The differences found in this study may be a precursor for research on how past and future treatment effects may vary between these two types of measurement scores. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
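A toy contrast between the two scoring approaches can be sketched as follows. This is an illustrative simplification only: real IRT scoring (e.g. under a graded response model) estimates a latent trait by maximizing the response-pattern likelihood, and the function names and weights below are hypothetical.

```python
def average_score(responses):
    """Classical scoring: the plain mean of the item responses."""
    return sum(responses) / len(responses)

def weighted_score(responses, discriminations):
    """Crude IRT-flavored stand-in: items that discriminate better between
    respondents contribute more to the score. A real graded-response IRT
    model would instead estimate the latent trait by maximum likelihood."""
    total = sum(a * r for a, r in zip(discriminations, responses))
    return total / sum(discriminations)

# With equal item weights the two scores coincide; with unequal ones they
# diverge, which is the kind of difference the study's odds ratios reflect.
assert weighted_score([1, 2, 3, 4], [1, 1, 1, 1]) == average_score([1, 2, 3, 4])
```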

  6. The physics behind the larger scale organization of DNA in eukaryotes.

    PubMed

    Emanuel, Marc; Radja, Nima Hamedani; Henriksson, Andreas; Schiessel, Helmut

    2009-07-01

    In this paper, we discuss in detail the organization of chromatin during a cell cycle at several levels. We show that current experimental data on large-scale chromatin organization have not yet reached the level of precision to allow for detailed modeling. We speculate in some detail about the possible physics underlying the larger scale chromatin organization.

  7. Validation of the TTM processes of change measure for physical activity in an adult French sample.

    PubMed

    Bernard, Paquito; Romain, Ahmed-Jérôme; Trouillet, Raphael; Gernigon, Christophe; Nigg, Claudio; Ninot, Gregory

    2014-04-01

    Processes of change (POC) are constructs from the transtheoretical model that propose to examine how people engage in a behavior. However, there is no consensus on a leading model explaining POC, and there is no validated French POC scale in physical activity. This study aimed to compare the different existing models in order to validate a French POC scale. Three studies, with 748 subjects included, were carried out to translate the items and evaluate their clarity (study 1, n = 77), to assess factorial validity (n = 200) and invariance/equivalence (study 2, n = 471), and to analyze concurrent validity by stage × process analyses (study 3, n = 671). Two models displayed adequate fit to the data; however, based on the Akaike information criterion, the fully correlated five-factor model appeared to be the most appropriate for measuring POC in physical activity. Invariance/equivalence was also confirmed across genders and student status. Four of the five factors discriminated between pre-action and post-action stages. These data support the validation of the POC questionnaire in physical activity in a French sample. More research is needed to explore the longitudinal properties of this scale.

  8. Regionalization of subsurface stormflow parameters of hydrologic models: Up-scaling from physically based numerical simulations at hillslope scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Melkamu; Ye, Sheng; Li, Hongyi

    2014-07-19

    Subsurface stormflow is an important component of the rainfall-runoff response, especially in steep forested regions. However, its contribution is poorly represented in the current generation of land surface hydrological models (LSMs) and catchment-scale rainfall-runoff models. The lack of physical basis of common parameterizations precludes a priori estimation (i.e., without calibration), which is a major drawback for prediction in ungauged basins, or for use in global models. This paper is aimed at deriving physically based parameterizations of the storage-discharge relationship relating to subsurface flow. These parameterizations are derived through a two-step up-scaling procedure: firstly, through simulations with a physically based (Darcian) subsurface flow model for idealized three-dimensional rectangular hillslopes, accounting for within-hillslope random heterogeneity of soil hydraulic properties, and secondly, through subsequent up-scaling to the catchment scale by accounting for between-hillslope and within-catchment heterogeneity of topographic features (e.g., slope). These theoretical simulation results produced parameterizations of the storage-discharge relationship in terms of soil hydraulic properties, topographic slope, and their heterogeneities, consistent with results of previous studies. Yet, regionalization of the resulting storage-discharge relations across 50 actual catchments in the eastern United States, and a comparison of the regionalized results with equivalent empirical results obtained from analysis of observed streamflow recession curves, revealed a systematic inconsistency. It was found that the difference between the theoretical and empirically derived results could be explained, to first order, by climate in the form of a climatic aridity index.
This suggests a possible codependence of climate, soils, vegetation and topographic properties, and suggests that subsurface flow parameterization needed for ungauged locations must account for both the physics of flow in heterogeneous landscapes and the co-dependence of soil and topographic properties with climate, including possibly the mediating role of vegetation.
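The empirical side of the comparison, recession-curve analysis, is commonly done by fitting a power law -dQ/dt = a·Q^b to declining-flow periods. A minimal sketch (synthetic data and parameter values are hypothetical, not from the study) is:

```python
import numpy as np

def fit_recession(q, dt=1.0):
    """Fit -dQ/dt = a * Q**b over recession (declining-flow) steps by
    log-log least squares, in the spirit of Brutsaert-Nieber analysis."""
    dq = np.diff(q) / dt
    mask = dq < 0
    x = np.log(q[:-1][mask])   # discharge at the start of each step
    y = np.log(-dq[mask])
    b, log_a = np.polyfit(x, y, 1)
    return np.exp(log_a), b

# Synthetic recession generated from dQ/dt = -0.1 * Q**1.5 (forward Euler)
q = [10.0]
for _ in range(2000):
    q.append(q[-1] - 0.01 * 0.1 * q[-1] ** 1.5)
a_hat, b_hat = fit_recession(np.array(q), dt=0.01)
```

On this synthetic series the fit recovers the generating exponent; on real streamflow, scatter in the log-log cloud is what the paper's climate-dependent discrepancy shows up in.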

  9. CPMIP: measurements of real computational performance of Earth system models in CMIP6

    NASA Astrophysics Data System (ADS)

    Balaji, Venkatramani; Maisonnave, Eric; Zadeh, Niki; Lawrence, Bryan N.; Biercamp, Joachim; Fladrich, Uwe; Aloisio, Giovanni; Benson, Rusty; Caubel, Arnaud; Durachta, Jeffrey; Foujols, Marie-Alice; Lister, Grenville; Mocavero, Silvia; Underwood, Seth; Wright, Garrett

    2017-01-01

    A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O- and/or memory-bound. Such weak-scaling, I/O-bound, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency such as performance counters and scaling curves do not tell us enough about real sustained performance from climate models on different machines, nor do they provide a satisfactory basis for comparative information across models. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure actually attained performance of Earth system models on different machines, and to identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use them as the basis for CPMIP, a computational performance model intercomparison project (MIP).
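Two of the headline measures in this family are simulated years per day (SYPD) and core-hours per simulated year (CHSY); the paper defines several more (e.g. coupling cost and memory bloat). A minimal sketch of the bookkeeping, with illustrative numbers, is:

```python
def sypd(simulated_years, wallclock_hours):
    """Simulated years per wallclock day: raw model speed."""
    return simulated_years / (wallclock_hours / 24.0)

def chsy(core_count, sypd_value):
    """Core-hours per simulated year: the compute cost of one model year."""
    return core_count * 24.0 / sypd_value

# e.g. a run covering 10 model years in 48 h of wallclock on 1000 cores:
speed = sypd(10, 48)      # 5.0 simulated years per day
cost = chsy(1000, speed)  # 4800 core-hours per simulated year
```

Neither measure needs hardware counters, only job logs, which is what makes them accessible across centers and machines.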

  10. Microfluidic Experiments Studying Pore Scale Interactions of Microbes and Geochemistry

    NASA Astrophysics Data System (ADS)

    Chen, M.; Kocar, B. D.

    2016-12-01

    Understanding how physical phenomena, chemical reactions, and microbial behavior interact at the pore scale is crucial to understanding larger-scale trends in groundwater chemistry. Recent studies illustrate the utility of microfluidic devices for illuminating pore-scale physical-biogeochemical processes and their control(s) on the cycling of iron, uranium, and other important elements [1-3]. These experimental systems are ideal for examining geochemical reactions mediated by microbes, which include processes governed by complex biological phenomena (e.g., biofilm formation) [4]. We present results of microfluidic experiments using a model metal-reducing bacterium and varying pore geometries, exploring the limitations of the microorganisms' ability to access tight pore spaces, and examining coupled biogeochemical-physical controls on the cycling of redox-sensitive metals. Experimental results will provide an enhanced understanding of coupled physical-biogeochemical processes transpiring at the pore scale, and will constrain and complement continuum models used to predict and describe the subsurface cycling of redox-sensitive elements [5]. 1. Vrionis, H. A. et al. Microbiological and geochemical heterogeneity in an in situ uranium bioremediation field site. Appl. Environ. Microbiol. 71, 6308-6318 (2005). 2. Pearce, C. I. et al. Pore-scale characterization of biogeochemical controls on iron and uranium speciation under flow conditions. Environ. Sci. Technol. 46, 7992-8000 (2012). 3. Zhang, C., Liu, C. & Shi, Z. Micromodel investigation of transport effect on the kinetics of reductive dissolution of hematite. Environ. Sci. Technol. 47, 4131-4139 (2013). 4. Ginn, T. R. et al. Processes in microbial transport in the natural subsurface. Adv. Water Resour. 25, 1017-1042 (2002). 5. Scheibe, T. D. et al. Coupling a genome-scale metabolic model with a reactive transport model to describe in situ uranium bioremediation. Microb. Biotechnol. 2, 274-286 (2009).

  11. A transparently scalable visualization architecture for exploring the universe.

    PubMed

    Fu, Chi-Wing; Hanson, Andrew J

    2007-01-01

    Modern astronomical instruments produce enormous amounts of three-dimensional data describing the physical Universe. The currently available data sets range from the solar system to nearby stars and portions of the Milky Way Galaxy, including the interstellar medium and some extrasolar planets, and extend out to include galaxies billions of light years away. Because of its gigantic scale and the fact that it is dominated by empty space, modeling and rendering the Universe is very different from modeling and rendering ordinary three-dimensional virtual worlds at human scales. Our purpose is to introduce a comprehensive approach to an architecture solving this visualization problem that encompasses the entire Universe while seeking to be as scale-neutral as possible. One key element is the representation of model-rendering procedures using power scaled coordinates (PSC), along with various PSC-based techniques that we have devised to generalize and optimize the conventional graphics framework to the scale domains of astronomical visualization. Employing this architecture, we have developed an assortment of scale-independent modeling and rendering methods for a large variety of astronomical models, and have demonstrated scale-insensitive interactive visualizations of the physical Universe covering scales ranging from human scale to the Earth, to the solar system, to the Milky Way Galaxy, and to the entire observable Universe.
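The core idea of power scaled coordinates is to carry an explicit scale exponent with each point, so that positions spanning dozens of orders of magnitude stay numerically well conditioned. A minimal sketch follows; the base, class name, and operations are illustrative, not the paper's exact formulation.

```python
from dataclasses import dataclass

K = 10.0  # hypothetical scaling base; the base choice is an implementation detail

@dataclass
class PSC:
    """(x, y, z, s) represents the physical point (x, y, z) * K**s."""
    x: float
    y: float
    z: float
    s: float

    def to_physical(self):
        f = K ** self.s
        return (self.x * f, self.y * f, self.z * f)

def rescale(p, s_new):
    """Re-express the same physical point with a different exponent,
    e.g. to bring two points to a common scale before combining them."""
    f = K ** (p.s - s_new)
    return PSC(p.x * f, p.y * f, p.z * f, s_new)
```

Keeping the exponent separate is what lets a renderer move from human scale to gigaparsec scale without exhausting floating-point range.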

  12. USE OF REMOTE SENSING AIR QUALITY INFORMATION IN REGIONAL SCALE AIR POLLUTION MODELING: CURRENT USE AND REQUIREMENTS

    EPA Science Inventory

    In recent years the applications of regional air quality models are continuously being extended to address atmospheric pollution phenomena from local to hemispheric spatial scales over time scales ranging from episodic to annual. The need to represent interactions between physic...

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhinefrank, Kenneth E.; Lenee-Bluhm, Pukha; Prudell, Joseph H.

    The most prudent path to a full-scale design, build and deployment of a wave energy conversion (WEC) system involves establishment of validated numerical models using physical experiments in a methodical scaling program. This Project provides essential additional rounds of wave tank testing at 1:33 scale and ocean/bay testing at a 1:7 scale, necessary to validate numerical modeling that is essential to a utility-scale WEC design and associated certification.

  14. COMPUTATIONAL CHALLENGES IN BUILDING MULTI-SCALE AND MULTI-PHYSICS MODELS OF CARDIAC ELECTRO-MECHANICS

    PubMed Central

    Plank, G; Prassl, AJ; Augustin, C

    2014-01-01

    Despite the evident multiphysics nature of the heart – it is an electrically controlled mechanical pump – most modeling studies have considered electrophysiology and mechanics in isolation. In no small part, this is due to the formidable modeling challenges involved in building strongly coupled, anatomically accurate, and biophysically detailed multi-scale multi-physics models of cardiac electro-mechanics. Among the main challenges are the selection of model components and their adjustment to achieve integration into a consistent organ-scale model; dealing with technical difficulties such as the exchange of data between electrophysiological and mechanical models, particularly when using different spatio-temporal grids for discretization; and, finally, the implementation of advanced numerical techniques to deal with the substantial computational load. In this study we report on progress made in developing a novel modeling framework suited to tackle these challenges. PMID:24043050

  15. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy, and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity, and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise from the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A fully automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation, and post-processing techniques.
The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of Representative Elementary Volume size for arbitrary physics.
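The estimator at the heart of the method telescopes the fine-level expectation into a coarse estimate plus level-to-level corrections, E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], with each correction sampled from coupled coarse/fine pairs. A toy sketch on a synthetic quantity of interest (not the authors' pore-scale solver; the bias model is invented for illustration) is:

```python
import numpy as np

def qoi(x, level):
    """Toy quantity of interest with a discretization bias that shrinks
    as h_l = 2**-level (a stand-in for a pore-scale effective parameter)."""
    h = 2.0 ** -level
    return np.sin(x) ** 2 + h * (1.0 + 0.1 * x)

def mlmc(levels, samples_per_level, rng):
    """E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]; each coupled pair shares x,
    so the correction terms have small variance and need few samples."""
    estimate = 0.0
    for level, n in zip(range(levels + 1), samples_per_level):
        x = rng.standard_normal(n)
        if level == 0:
            estimate += qoi(x, 0).mean()
        else:
            estimate += (qoi(x, level) - qoi(x, level - 1)).mean()
    return estimate

rng = np.random.default_rng(0)
est = mlmc(5, [200_000, 40_000, 10_000, 4_000, 2_000, 1_000], rng)
# est approximates E[sin^2 X] for X ~ N(0,1), up to the residual level-5 bias
```

The sample counts shrink with level because the corrections are cheap to estimate, which is where the drastic cost reduction claimed above comes from.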

  16. The Australian Computational Earth Systems Simulator

    NASA Astrophysics Data System (ADS)

    Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.

    2001-12-01

    Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government together with a consortium of Universities and research institutions have funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator or computational virtual earth will provide the research infrastructure to the Australian earth systems science community required for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete elements/lattice solid model, particle-in-cell large deformation finite-element method, stress reconstruction models, multi-scale continuum models etc) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. 
When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global scale dynamics and mineralisation processes, crustal scale processes including plate tectonics, mountain building, interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic behaviour of earth systems. ACcESS represents a part of Australia's contribution to the APEC Cooperation for Earthquake Simulation (ACES) international initiative. Together with other national earth systems science initiatives including the Japanese Earth Simulator and US General Earthquake Model projects, ACcESS aims to provide a driver for scientific advancement and technological breakthroughs including: quantum leaps in understanding of earth evolution at global, crustal, regional and microscopic scales; new knowledge of the physics of crustal fault systems required to underpin the grand challenge of earthquake prediction; new understanding and predictive capabilities of geological processes such as tectonics and mineralisation.

  17. Validity evidence for the adaptation of the State Mindfulness Scale for Physical Activity (SMS-PA) in Spanish youth.

    PubMed

    Ullrich-French, Sarah; González Hernández, Juan; Hidalgo Montesinos, María D

    2017-02-01

    Mindfulness is an increasingly popular construct with promise in enhancing multiple positive health outcomes. Physical activity is an important behavior for enhancing overall health, but no Spanish-language scale exists to test how mindfulness during physical activity may facilitate physical activity motivation or behavior. This study examined the validity of a Spanish adaptation of a new scale, the State Mindfulness Scale for Physical Activity, to assess mindfulness during a specific experience of physical activity. Spanish youths (N = 502) completed a cross-sectional survey of state mindfulness during physical activity and physical activity motivation regulations based on Self-Determination Theory. A higher-order model fit the data well and supports the use of one general state mindfulness factor or the use of separate subscales of mindfulness of mental (e.g., thoughts, emotions) and body (physical movement, muscles) aspects of the experience. Internal consistency reliability was good for the general scale and both subscales. The pattern of correlations with motivation regulations provides further support for construct validity, with significant positive correlations with self-determined forms of motivation and significant negative correlations with external regulation and amotivation. Initial validity evidence is promising for the use of the adapted measure.

  18. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; et al.

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems.
BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics based space weather modeling and even forecasting.

  19. Spin determination at the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Yavin, Itay

    The quantum field theory describing the Electroweak sector demands some new physics at the TeV scale in order to unitarize the scattering of longitudinal W bosons. If this new physics takes the form of a scalar Higgs boson, then it is hard to understand the huge hierarchy between the Electroweak scale (~1 TeV) and the Planck scale (~10^19 GeV). This is known as the Naturalness problem. Normally, in order to solve this problem, new particles, in addition to the Higgs boson, are required to be present in the spectrum below a few TeV. If such particles are indeed discovered at the Large Hadron Collider, it will become important to determine their spin. Several classes of models for physics beyond the Electroweak scale exist, and determining the spin of any newly discovered particle could prove to be the only means of distinguishing between them. In the first part of this thesis, we present a thorough discussion of such a measurement. We survey the different potentially useful channels for spin determination, and a detailed analysis of the most promising channel is performed. The Littlest Higgs model offers a way to solve the Hierarchy problem by introducing heavy partners to Standard Model particles with the same spin and quantum numbers. However, this model is only valid up to ~10 TeV. In the second part of this thesis we present an extension of this model into a strongly coupled theory above ~10 TeV. We use the celebrated AdS/CFT correspondence to calculate properties of the low-energy physics in terms of high-energy parameters. We comment on some of the tensions inherent to such a construction involving a large-N CFT (or, equivalently, an AdS space).

  20. Physically based modeling in catchment hydrology at 50: Survey and outlook

    NASA Astrophysics Data System (ADS)

    Paniconi, Claudio; Putti, Mario

    2015-09-01

    Integrated, process-based numerical models in hydrology are rapidly evolving, spurred by novel theories in mathematical physics, advances in computational methods, insights from laboratory and field experiments, and the need to better understand and predict the potential impacts of population, land use, and climate change on our water resources. At the catchment scale, these simulation models are commonly based on conservation principles for surface and subsurface water flow and solute transport (e.g., the Richards, shallow water, and advection-dispersion equations), and they require robust numerical techniques for their resolution. Traditional (and still open) challenges in developing reliable and efficient models are associated with heterogeneity and variability in parameters and state variables; nonlinearities and scale effects in process dynamics; and complex or poorly known boundary conditions and initial system states. As catchment modeling enters a highly interdisciplinary era, new challenges arise from the need to maintain physical and numerical consistency in the description of multiple processes that interact over a range of scales and across different compartments of an overall system. This paper first gives an historical overview (past 50 years) of some of the key developments in physically based hydrological modeling, emphasizing how the interplay between theory, experiments, and modeling has contributed to advancing the state of the art. The second part of the paper examines some outstanding problems in integrated catchment modeling from the perspective of recent developments in mathematical and computational science.

  1. Large scale anomalies in the microwave background: causation and correlation.

    PubMed

    Aslanyan, Grigor; Easther, Richard

    2013-12-27

    Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.

  2. Coupled Mechanical-Electrochemical-Thermal Modeling for Accelerated Design of EV Batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santhanagopalan, Shriram; Zhang, Chao; Kim, Gi-Heon

    2015-05-03

    This presentation provides an overview of the mechanical electrochemical-thermal (M-ECT) modeling efforts. The physical phenomena occurring in a battery are many and complex and operate at different scales (particle, electrodes, cell, and pack). A better understanding of the interplay between different physics occurring at different scales through modeling could provide insight to design improved batteries for electric vehicles. Work funded by the U.S. DOE has resulted in development of computer-aided engineering (CAE) tools to accelerate electrochemical and thermal design of batteries; mechanical modeling is under way. Three competitive CAE tools are now commercially available.

  3. Physics textbooks from the viewpoint of network structures

    NASA Astrophysics Data System (ADS)

    Králiková, Petra; Teleki, Aba

    2017-01-01

    We can observe self-organized networks all around us. These are, in general, scale-invariant networks described by the Bianconi-Barabasi model. Self-organized networks (networks formed naturally when feedback acts on the system) show a certain universality: in simplified models, they have a scale-invariant degree distribution (Pareto distribution, type I) with parameter α taking values between 2 and 5. Textbooks are extremely important in the learning process, and for this reason we studied physics textbooks at the level of sentences and physics terms (a bipartite network). The nodes represent physics terms, sentences, pictures, and tables, connected by links (through physics terms and transitional words and phrases). We suppose that the learning process is more robust and proceeds faster and more easily if a physics textbook has a structure similar to that of self-organized networks.
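Checking whether textbook data exhibit such a tail exponent is typically done with the maximum-likelihood estimator for a Pareto (type I) distribution, α̂ = 1 + n / Σ ln(x_i/x_min). A minimal sketch follows; the threshold x_min is assumed given rather than estimated.

```python
import math
import random

def powerlaw_alpha(xs, x_min):
    """Continuous MLE for the exponent of p(x) ~ x**-alpha, x >= x_min
    (the standard Clauset-Shalizi-Newman estimator)."""
    tail = [x for x in xs if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

# Sanity check on synthetic Pareto samples with alpha = 2.5, x_min = 1,
# drawn by inverting the CDF: x = x_min * u**(-1/(alpha-1))
random.seed(42)
samples = [(1.0 - random.random()) ** (-1.0 / 1.5) for _ in range(20_000)]
alpha_hat = powerlaw_alpha(samples, 1.0)
```

For a textbook network, xs would be term or sentence degrees, and an α̂ in the 2-5 range would be consistent with the self-organized-network hypothesis above.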

  4. Direct modeling for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. 
Here, CFD becomes a direct construction of discrete numerical evolution equations, in which the mesh size and time step play dynamic roles in the modeling process. As the ratio of mesh size to local particle mean free path varies, the scheme captures flow physics ranging from kinetic particle transport and collision to hydrodynamic wave propagation. Based on direct modeling, a continuous dynamics of flow motion is captured in the unified gas-kinetic scheme. This scheme can be faithfully used to study the unexplored non-equilibrium flow physics in the transition regime.
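The role of the ratio between the time step and the local particle collision time can be illustrated with a toy weighting function. This is a schematic sketch of the regime-blending idea only, not the actual UGKS interface flux; the function names and the exponential weighting form are illustrative assumptions.

```python
import math

def regime_weight(dt, tau):
    """Fraction of free (collisionless) particle transport surviving a
    time step of length dt, given local mean collision time tau:
    dt << tau -> weight near 1 (kinetic, free-transport regime);
    dt >> tau -> weight near 0 (hydrodynamic, collision-dominated)."""
    return math.exp(-dt / tau)

def blended_flux(flux_kinetic, flux_hydro, dt, tau):
    """Illustrative cell-interface flux that varies continuously with
    dt/tau (a hypothetical blend, not the UGKS integral solution)."""
    w = regime_weight(dt, tau)
    return w * flux_kinetic + (1.0 - w) * flux_hydro
```

With a fixed mesh and time step, the same scheme thus behaves kinetically where the gas is rarefied (large tau) and hydrodynamically where it is dense (small tau).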

  5. Progress on Implementing Additional Physics Schemes into MPAS-A v5.1 for Next Generation Air Quality Modeling

    EPA Science Inventory

    The U.S. Environmental Protection Agency (USEPA) has a team of scientists developing a next generation air quality modeling system employing the Model for Prediction Across Scales – Atmosphere (MPAS-A) as its meteorological foundation. Several preferred physics schemes and ...

  6. Mesoscale Computational Investigation of Shocked Heterogeneous Materials with Application to Large Impact Craters

    NASA Technical Reports Server (NTRS)

    Crawford, D. A.; Barnouin-Jha, O. S.; Cintala, M. J.

    2003-01-01

The propagation of shock waves through target materials is strongly influenced by the presence of small-scale structure such as fractures and physical and chemical heterogeneities. Pre-existing fractures often create craters that appear square in outline (e.g. Meteor Crater). Reverberations behind the shock, arising from physical heterogeneity, have been proposed as a mechanism for transient weakening of target materials. Pre-existing fractures can also affect melt generation. In this study, we are attempting to bridge the gap in numerical modeling between the micro-scale and the continuum, the so-called meso-scale. To accomplish this, we are developing a methodology for the shock physics hydrocode (CTH) using Monte-Carlo-type methods to investigate the shock properties of heterogeneous materials. By comparing the results of numerical experiments at the micro-scale with experimental results and by using statistical techniques to evaluate the performance of simple constitutive models, we hope to embed the effect of physical heterogeneity into the field variables (pressure, stress, density, velocity), allowing us to directly imprint the effects of micro-scale heterogeneity at the continuum level without incurring high computational cost.

  7. Multi-scale Modeling, Design Strategies and Physical Properties of 2D Composite Sheets

    DTIC Science & Technology

    2014-09-22

The award supported talks and the training of two postdoctoral candidates and one graduate student. The theoretical work on thermal, electronic and optical properties of 2D materials led to several new efforts by experimentalists to validate our predictions. Subject terms: 2D materials, multi-scale modeling. This report describes the progress made as part of the subject contract.

  8. Physical modelling in biomechanics.

    PubMed Central

    Koehl, M A R

    2003-01-01

    Physical models, like mathematical models, are useful tools in biomechanical research. Physical models enable investigators to explore parameter space in a way that is not possible using a comparative approach with living organisms: parameters can be varied one at a time to measure the performance consequences of each, while values and combinations not found in nature can be tested. Experiments using physical models in the laboratory or field can circumvent problems posed by uncooperative or endangered organisms. Physical models also permit some aspects of the biomechanical performance of extinct organisms to be measured. Use of properly scaled physical models allows detailed physical measurements to be made for organisms that are too small or fast to be easily studied directly. The process of physical modelling and the advantages and limitations of this approach are illustrated using examples from our research on hydrodynamic forces on sessile organisms, mechanics of hydraulic skeletons, food capture by zooplankton and odour interception by olfactory antennules. PMID:14561350

  9. On the physical properties of volcanic rock masses

    NASA Astrophysics Data System (ADS)

    Heap, M. J.; Villeneuve, M.; Ball, J. L.; Got, J. L.

    2017-12-01

    The physical properties (e.g., elastic properties, porosity, permeability, cohesion, strength, amongst others) of volcanic rocks are crucial input parameters for modelling volcanic processes. These parameters, however, are often poorly constrained and there is an apparent disconnect between modellers and those who measure/determine rock and rock mass properties. Although it is well known that laboratory measurements are scale dependent, experimentalists, field volcanologists, and modellers should work together to provide the most appropriate model input parameters. Our pluridisciplinary approach consists of (1) discussing with modellers to better understand their needs, (2) using experimental know-how to build an extensive database of volcanic rock properties, and (3) using geotechnical and field-based volcanological know-how to address scaling issues. For instance, increasing the lengthscale of interest from the laboratory-scale to the volcano-scale will reduce the elastic modulus and strength and increase permeability, but to what extent? How variable are the physical properties of volcanic rocks, and is it appropriate to assume constant, isotropic, and/or homogeneous values for volcanoes? How do alteration, depth, and temperature influence rock physical and mechanical properties? Is rock type important, or do rock properties such as porosity exert a greater control on such parameters? How do we upscale these laboratory-measured properties to rock mass properties using the "fracturedness" of a volcano or volcanic outcrop, and how do we quantify fracturedness? We hope to discuss and, where possible, address some of these issues through active discussion between two (or more) scientific communities.

  10. A physics-based probabilistic forecasting model for rainfall-induced shallow landslides at regional scale

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojie; Zhao, Luqiang; Delgado-Tellez, Ricardo; Bao, Hongjun

    2018-03-01

Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models depend strongly on variables such as cohesion and internal friction angle, which are affected by a high degree of uncertainty, especially at a regional scale, resulting in unacceptable uncertainties in Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations, which are integrated into a single parameter. This parameter links the landslide probability to the uncertainties of the soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of the landslide disasters associated with the heavy rainfall of 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides at 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. These testing results indicate that the new model can be operated in a highly efficient way and yields reliable results, owing to its high prediction accuracy. Accordingly, the new model can potentially be packaged into a forecasting system for shallow landslides, providing technological support for the mitigation of these disasters at the regional scale.
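The link between soil-parameter uncertainty and landslide probability can be sketched as a Monte Carlo loop over a slope-stability formula. The infinite-slope (dry) Fs expression, the parameter ranges, and the function names below are illustrative assumptions, not the paper's actual model.

```python
import math
import random

def factor_of_safety(c, phi_deg, slope_deg, gamma=19e3, z=2.0):
    """Infinite-slope factor of safety for a dry soil column (a common
    simplification): Fs = c/(gamma*z*sin(b)*cos(b)) + tan(phi)/tan(b),
    with cohesion c (Pa), friction angle phi, slope angle b, unit
    weight gamma (N/m^3), and failure-plane depth z (m)."""
    b = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    return c / (gamma * z * math.sin(b) * math.cos(b)) + math.tan(phi) / math.tan(b)

def landslide_probability(n, c_range, phi_range, slope_deg, seed=0):
    """Estimate P(Fs < 1) for one pixel by sampling cohesion and
    friction angle uniformly inside their uncertainty intervals."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        c = rng.uniform(*c_range)
        phi = rng.uniform(*phi_range)
        if factor_of_safety(c, phi, slope_deg) < 1.0:
            failures += 1
    return failures / n
```

Run per pixel, this turns a deterministic Fs warning into a probability that directly reflects the width of the parameter intervals.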

  11. Hydrology or biology? Modeling simplistic physical constraints on lake carbon biogeochemistry to identify when and where biology is likely to matter

    NASA Astrophysics Data System (ADS)

    Jones, S.; Zwart, J. A.; Solomon, C.; Kelly, P. T.

    2017-12-01

    Current efforts to scale lake carbon biogeochemistry rely heavily on empirical observations and rarely consider physical or biological inter-lake heterogeneity that is likely to regulate terrestrial dissolved organic carbon (tDOC) decomposition in lakes. This may in part result from a traditional focus of lake ecologists on in-lake biological processes OR physical-chemical pattern across lake regions, rather than on process AND pattern across scales. To explore the relative importance of local biological processes and physical processes driven by lake hydrologic setting, we created a simple, analytical model of tDOC decomposition in lakes that focuses on the regulating roles of lake size and catchment hydrologic export. Our simplistic model can generally recreate patterns consistent with both local- and regional-scale patterns in tDOC concentration and decomposition. We also see that variation in lake hydrologic setting, including the importance of evaporation as a hydrologic export, generates significant, emergent variation in tDOC decomposition at a given hydrologic residence time, and creates patterns that have been historically attributed to variation in tDOC quality. Comparing predictions of this `biologically null model' to field observations and more biologically complex models could indicate when and where biology is likely to matter most.
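A "biologically null" steady-state sketch of such an analytical model, assuming a well-mixed lake with first-order tDOC decay, might look like the following. The decay formulation and the evaporation adjustment are illustrative assumptions, not the authors' exact model.

```python
def fraction_mineralized(k, hrt):
    """Fraction of the tDOC load decomposed in a well-mixed lake at
    steady state with first-order decay rate k (1/yr) and hydrologic
    residence time hrt (yr): k*hrt / (1 + k*hrt)."""
    return k * hrt / (1.0 + k * hrt)

def residence_time(volume, inflow, evap_fraction=0.0):
    """Residence time relevant to a dissolved constituent: evaporation
    exports water but not solute, so the solute-exporting outflow is
    inflow * (1 - evap_fraction), lengthening retention (illustrative)."""
    outflow = inflow * (1.0 - evap_fraction)
    return volume / outflow
```

Even this null model shows how two lakes with the same hydrologic residence time can decompose different tDOC fractions once evaporative export differs, without invoking any variation in tDOC quality.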

  12. Application of physical scaling towards downscaling climate model precipitation data

    NASA Astrophysics Data System (ADS)

    Gaur, Abhishek; Simonovic, Slobodan P.

    2018-04-01

The physical scaling (SP) method downscales climate model data to local or regional scales, taking into consideration the physical characteristics of the area under analysis. In this study, multiple SP-based models are tested for their effectiveness in downscaling North American Regional Reanalysis (NARR) daily precipitation data. Model performance is compared with two state-of-the-art downscaling methods: the statistical downscaling model (SDSM) and generalized linear modeling (GLM). The downscaled precipitation is evaluated with reference to recorded precipitation at 57 gauging stations located within the study region. The spatial and temporal robustness of the downscaling methods is evaluated using seven precipitation-based indices. Results indicate that the SP-based models perform best in downscaling precipitation, followed by GLM and then the SDSM model. The best-performing models are thereafter used to downscale future precipitation projections made by three general circulation models (GCMs) following two emission scenarios, representative concentration pathway (RCP) 2.6 and RCP 8.5, over the twenty-first century. The downscaled future precipitation projections indicate an increase in mean and maximum precipitation intensity as well as a decrease in the total number of dry days. Further, an increase in the frequency of short (1-day), moderately long (2-4 day), and long (more than 5-day) precipitation events is projected.
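Precipitation-based indices of the kind used for such evaluations can be computed directly from a daily series. The three indices and the 1 mm wet-day threshold below are common conventions chosen for illustration, not necessarily the paper's seven indices.

```python
def precip_indices(daily_mm, wet_threshold=1.0):
    """Compute a few evaluation indices from a daily precipitation
    series (mm/day): mean wet-day intensity, maximum 1-day total, and
    number of dry days (days below the wet-day threshold)."""
    wet = [p for p in daily_mm if p >= wet_threshold]
    return {
        "mean_intensity": sum(wet) / len(wet) if wet else 0.0,
        "max_1day": max(daily_mm),
        "dry_days": sum(1 for p in daily_mm if p < wet_threshold),
    }
```

Computing the same indices from downscaled and observed series at each gauge gives a like-for-like basis for ranking the downscaling methods.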

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel

Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud-aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities, and parameterizations which do provide vertical velocities have been subject to limited evaluation against what have until recently been scant observations. Atmospheric observations imply that the distribution of vertical velocities depends on the areas over which the vertical velocities are averaged. Distributions of vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of scale-dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  14. A Physically Based Runoff Routing Model for Land Surface and Earth System Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hongyi; Wigmosta, Mark S.; Wu, Huan

    2013-06-13

A new physically based runoff routing model, called the Model for Scale Adaptive River Transport (MOSART), has been developed to be applicable across local, regional, and global scales. Within each spatial unit, surface runoff is first routed across hillslopes and then discharged along with subsurface runoff into a "tributary subnetwork" before entering the main channel. The spatial units are thus linked via routing through the main channel network, which is constructed in a scale-consistent way across different spatial resolutions. All model parameters are physically based, and only a small subset requires calibration. MOSART has been applied to the Columbia River basin at 1/16°, 1/8°, 1/4°, and 1/2° spatial resolutions and was evaluated using naturalized or observed streamflow at a number of gauge stations. MOSART is compared to two other routing models widely used with land surface models, the River Transport Model (RTM) in the Community Land Model (CLM) and the Lohmann routing model, included as a postprocessor in the Variable Infiltration Capacity (VIC) model package, yielding consistent performance at multiple resolutions. MOSART is further evaluated using the channel velocities derived from field measurements or a hydraulic model at various locations and is shown to be capable of producing the seasonal variation and magnitude of channel velocities reasonably well at different resolutions. Moreover, the impacts of spatial resolution on model simulations are systematically examined at local and regional scales. Finally, the limitations of MOSART and future directions for improvements are discussed.
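The hillslope-to-tributary-to-channel ordering described above can be mimicked with a cascade of linear reservoirs. This is a minimal sketch of staged routing under assumed storage coefficients, not MOSART's actual physically based formulation.

```python
def linear_reservoir(inflow, storage, k, dt=1.0):
    """One step of a linear reservoir: outflow = storage / k, then
    update storage by the inflow-outflow imbalance (mass-conserving)."""
    outflow = storage / k
    storage += (inflow - outflow) * dt
    return outflow, storage

def route_unit(runoff_series, k_hill=5.0, k_trib=2.0):
    """Illustrative two-stage routing within one spatial unit, echoing
    the hillslope -> tributary subnetwork ordering before the main
    channel (storage coefficients are hypothetical)."""
    s_h = s_t = 0.0
    discharge = []
    for r in runoff_series:
        q_h, s_h = linear_reservoir(r, s_h, k_hill)   # hillslope stage
        q_t, s_t = linear_reservoir(q_h, s_t, k_trib)  # tributary stage
        discharge.append(q_t)
    return discharge
```

Each stage delays and attenuates the pulse while conserving total volume, which is the qualitative behavior the subnetwork routing provides before flows are handed to the main channel network.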

  15. Overview of lower length scale model development for accident tolerant fuels regarding U3Si2 fuel and FeCrAl cladding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yongfeng

    2016-09-01

U3Si2 and FeCrAl have been proposed as fuel and cladding concepts, respectively, for accident tolerant fuels with higher tolerance to accident scenarios than UO2. However, many key physics and material properties governing their in-pile performance are yet to be explored. To accelerate the understanding and reduce the cost of experimental studies, multiscale modeling and simulation are used to develop physics-based materials models to assist engineering-scale fuel performance modeling. In this report, the lower-length-scale efforts in method and material model development supported by the Accident Tolerant Fuel (ATF) high-impact-problem (HIP) under the NEAMS program are summarized. Significant progress has been made regarding interatomic potentials, phase field models for phase decomposition and gas bubble formation, and thermal conductivity for U3Si2 fuel, and precipitation in FeCrAl cladding. The accomplishments provide atomistic and mesoscale tools, improve the current understanding, and deliver engineering-scale models for these two ATF concepts.

  16. Continuous data assimilation for downscaling large-footprint soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.

    2016-10-01

Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture differs from the modeling scales for these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse-resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equations and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse-grid measurements and the fine-grid model solution, is added to the model equations to constrain the model's large-scale variability by available measurements. Soil moisture fields generated at a fine resolution by a physically based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse-resolution observations. This enables nudging of the model outputs towards values that honor the coarse-resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible for generating fine-scale soil moisture fields across large extents, based on coarse-scale observations. Application of this approach is likely in generating fine- and intermediate-resolution soil moisture fields conditioned on the radiometer-based, coarse-resolution products from remote sensing satellites.
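The nudging idea, penalizing the misfit between a coarse interpolant of the fine-grid solution and the coarse observations, can be sketched on a 1-D diffusion model. The block-average interpolant, periodic boundaries, and parameter values below are illustrative assumptions, not the paper's HYDRUS setup.

```python
def cda_step(u, obs_interp, dx, dt, nu=0.1, mu=1.0, block=4):
    """One explicit step of u_t = nu*u_xx - mu*(I_h(u) - I_h(u_obs)).
    obs_interp is the coarse observation already interpolated to the
    fine grid; I_h is approximated by block averages (illustrative)."""
    n = len(u)

    def block_average(v):
        # piecewise-constant coarse interpolant of a fine-grid field
        out = []
        for i in range(0, n, block):
            chunk = v[i:i + block]
            out.extend([sum(chunk) / len(chunk)] * len(chunk))
        return out

    iu = block_average(u)
    new = []
    for i in range(n):
        lap = (u[(i - 1) % n] - 2.0 * u[i] + u[(i + 1) % n]) / dx ** 2
        new.append(u[i] + dt * (nu * lap - mu * (iu[i] - obs_interp[i])))
    return new
```

Only coarse-scale averages are constrained toward the observations, so the fine-scale structure is free to develop from the model physics, which is exactly the downscaling behavior described above.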

  17. A Mixed-dimensional Model for Determining the Impact of Permafrost Polygonal Ground Degradation on Arctic Hydrology.

    NASA Astrophysics Data System (ADS)

    Coon, E.; Jan, A.; Painter, S. L.; Moulton, J. D.; Wilson, C. J.

    2017-12-01

Many permafrost-affected regions in the Arctic manifest a polygonal patterned ground, which contains large carbon stores and is vulnerable to climate change as warming temperatures drive melting ice wedges, polygon degradation, and thawing of the underlying carbon-rich soils. Understanding the fate of this carbon is difficult. The system is controlled by complex, nonlinear physics coupling biogeochemistry, thermal-hydrology, and geomorphology, and there is a strong spatial scale separation between microtopography (at the scale of an individual polygon) and the scale of landscape change (at the scale of many thousands of polygons). Physics-based models have come a long way and are now capable of representing the diverse set of processes, but only on individual polygons or a few polygons. Empirical models have been used to upscale across land types, including ecotypes evolving from low-centered (pristine) polygons to high-centered (degraded) polygons, and do so over large spatial extents, but are limited in their ability to discern causal process mechanisms. Here we present a novel strategy that uses physics-based models across scales, bringing together multiple capabilities to capture polygon degradation under a warming climate and its impacts on thermal-hydrology. We use fine-scale simulations on individual polygons to motivate a mixed-dimensional strategy that couples one-dimensional columns representing each individual polygon through two-dimensional surface flow. A subgrid model is used to incorporate the effects of surface microtopography on surface flow; this model is described and calibrated to fine-scale simulations. And critically, a subsidence model that tracks volume loss in bulk ice wedges is used to alter the subsurface structure and subgrid parameters, enabling the inclusion of the feedbacks associated with polygon degradation.
This combined strategy results in a model that is able to capture the key features of polygon permafrost degradation, but in a simulation across a large spatial extent of polygonal tundra.

  18. Future sensitivity to new physics in Bd, Bs, and K mixings

    NASA Astrophysics Data System (ADS)

    Charles, Jérôme; Descotes-Genon, Sébastien; Ligeti, Zoltan; Monteil, Stéphane; Papucci, Michele; Trabelsi, Karim

    2014-02-01

We estimate, in a large class of scenarios, the sensitivity to new physics in Bd and Bs mixings achievable with 50 ab⁻¹ of Belle II and 50 fb⁻¹ of LHCb data. We find that current limits on new physics contributions in both Bd,s systems can be improved by a factor of ~5 for all values of the CP-violating phases, corresponding to over a factor of 2 increase in the scale of new physics probed. Assuming the same suppressions by Cabibbo-Kobayashi-Maskawa matrix elements as those of the standard model box diagrams, the scale probed will be about 20 TeV for tree-level new physics contributions, and about 2 TeV for new physics arising at one loop. We also explore the future sensitivity to new physics in K mixing. Implications for generic new physics and for various specific scenarios, such as minimal flavor violation, light third-generation-dominated flavor violation, or U(2) flavor models, are studied.
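The quoted relation between the limit improvement and the scale probed follows from dimensional analysis: a dimension-six mixing operator enters observables as C/Λ², so tightening the bound on the coefficient C by a factor f extends the reach in Λ by √f. A minimal check of the abstract's numbers:

```python
import math

def scale_gain(limit_improvement):
    """Increase in the new-physics scale probed when the bound on a
    C/Lambda**2 contribution tightens by the given factor."""
    return math.sqrt(limit_improvement)

# factor ~5 improvement in the mixing limits -> sqrt(5) ~ 2.24,
# i.e. "over a factor of 2 increase in the scale of new physics probed"
```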

  19. Examples of data assimilation in mesoscale models

    NASA Technical Reports Server (NTRS)

    Carr, Fred; Zack, John; Schmidt, Jerry; Snook, John; Benjamin, Stan; Stauffer, David

    1993-01-01

The keynote address treated the problem of physical initialization of mesoscale models. The classic purpose of physical or diabatic initialization is to reduce or eliminate the spin-up error caused by the lack, at the initial time, of the fully developed vertical circulations required to support regions of large rainfall rates. However, even if a model has no spin-up problem, imposition of observed moisture and heating-rate information during assimilation can improve quantitative precipitation forecasts, especially early in the forecast. The two key issues in physical initialization are the choice of assimilation technique and the sources of hydrologic/hydrometeor data. Another example of data assimilation in mesoscale models was presented in a series of meso-beta-scale model experiments with an 11 km version of the MASS model, designed to investigate the sensitivity of convective initiation forced by thermally direct circulations resulting from differential surface heating to four-dimensional assimilation of surface and radar data. The results of these simulations underscore the need to accurately initialize and simulate grid- and sub-grid-scale clouds in meso-beta-scale models. The status of the application of the CSU-RAMS mesoscale model by the NOAA Forecast Systems Lab for producing real-time forecasts with 10-60 km mesh resolutions over (4000 km)² domains for use by the aviation community was reported. Either MAPS or LAPS model data are used to initialize the RAMS model on a 12-h cycle. The use of the MAPS (Mesoscale Analysis and Prediction System) model was discussed, as was meso-beta-scale data assimilation using a triply-nested nonhydrostatic version of the MM5 model.

  20. Burden and Cognitive Appraisal of Stroke Survivors' Informal Caregivers: An Assessment of Depression Model With Mediating and Moderating Effects.

    PubMed

    Tsai, Yi-Chen; Pai, Hsiang-Chu

    2016-04-01

This study proposes and evaluates a model of depression that concerns the role of burden and cognitive appraisal as mediators or moderators of outcomes among stroke survivors' caregivers. A total of 105 informal caregivers of stroke survivors completed the self-report measures of the Caregiver Burden Inventory, the Center for Epidemiologic Studies Depression Scale, and the Cognitive Impact of Appraisal Scale. The Glasgow Coma Scale and Barthel Index were used by the researcher to examine the physical functional status of the survivor. Partial least squares (PLS) path modeling was used to estimate the parameters of a depression model that included mediating or moderating effects. The model shows that burden and impact of cognitive appraisal have significant direct and indirect impacts on depression, while survivor physical functional status does not have a direct impact. The model also demonstrates that burden and impact of cognitive appraisal separately play a mediating role between survivor physical functional status and caregiver depression. In addition, cognitive appraisal has a moderating influence on the relationship between burden and depression. Overall, survivor physical functional status, burden, and cognitive appraisal were the predictors of caregiver depression, explaining 47.1% of the variance. This study has shown that burden and cognitive appraisal are mediators that more fully explain the relationship between patient severity and caregiver depression. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Lumped Parameter Models for Predicting Nitrogen Transport in Lower Coastal Plain Watersheds

    Treesearch

    Devendra M. Amatya; George M. Chescheir; Glen P. Fernandez; R. Wayne Skaggs; F. Birgand; J.W. Gilliam

    2003-01-01

In recent years, physically based, comprehensive, distributed watershed-scale hydrologic/water quality models have been developed and applied to evaluate cumulative effects of land and water management practices on receiving waters. Although these complex physically based models are capable of simulating the impacts of these changes in large watersheds, they are often...

  2. No-scale inflation

    NASA Astrophysics Data System (ADS)

    Ellis, John; Garcia, Marcos A. G.; Nanopoulos, Dimitri V.; Olive, Keith A.

    2016-05-01

Supersymmetry is the most natural framework for physics above the TeV scale, and the corresponding framework for early-Universe cosmology, including inflation, is supergravity. No-scale supergravity emerges from generic string compactifications and yields a non-negative potential, and is therefore a plausible framework for constructing models of inflation. No-scale inflation naturally yields predictions similar to those of the Starobinsky model based on R + R² gravity, with a tilted spectrum of scalar perturbations, n_s ≈ 0.96, and small values of the tensor-to-scalar perturbation ratio, r < 0.1, as favoured by Planck and other data on the cosmic microwave background (CMB). Detailed measurements of the CMB may provide insights into the embedding of inflation within string theory as well as its links to collider physics.

  3. Hierarchical multi-scale approach to validation and uncertainty quantification of hyper-spectral image modeling

    NASA Astrophysics Data System (ADS)

    Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.

    2016-05-01

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  4. A Multi-Scale Integrated Approach to Representing Watershed Systems: Significance and Challenges

    NASA Astrophysics Data System (ADS)

    Kim, J.; Ivanov, V. Y.; Katopodes, N.

    2013-12-01

A range of processes associated with supplying services and goods to human society originate at the watershed level. Predicting watershed response to forcing conditions is of high interest for many practical societal problems, but it remains challenging due to two significant properties of watershed systems: connectivity and non-linearity. Connectivity implies that disturbances arising at any larger scale will necessarily propagate and affect local-scale processes; their local effects consequently influence other processes, and often convey nonlinear relationships. Physically based, process-scale modeling is needed to approach the understanding and proper assessment of non-linear effects between watershed processes. We have developed an integrated model simulating hydrological processes, flow dynamics, and erosion and sediment transport: tRIBS-OFM-HRM (Triangulated Irregular Network-based Real-time Integrated Basin Simulator - Overland Flow Model - Hairsine and Rose Model). This coupled model offers the advantage of exploring the hydrological effects of watershed physical factors such as topography, vegetation, and soil, as well as their feedback mechanisms. Several examples investigating the effects of vegetation on flow movement, the role of the soil's substrate on sediment dynamics, and the driving role of topography on morphological processes are illustrated. We show how this comprehensive modeling tool can help understand interconnections and nonlinearities of the physical system, e.g., how vegetation affects hydraulic resistance depending on slope, vegetation cover fraction, discharge, and bed roughness condition; how the soil's substrate condition impacts erosion processes with a non-unique characteristic at the scale of a zero-order catchment; and how topographic changes affect spatial variations of morphologic variables.
Due to feedback and compensatory nature of mechanisms operating in different watershed compartments, our conclusion is that a key to representing watershed systems lies in an integrated, interdisciplinary approach, whereby a physically-based model is used for assessments/evaluations associated with future changes in landuse, climate, and ecosystems.

  5. Construct and test scale model box culvert design project.

    DOT National Transportation Integrated Search

    2010-11-01

The research team at the University of New Mexico's (UNM) hydraulics lab designed, constructed, and tested a 1:20 scale physical model of a proposed culvert in Jemez Springs, New Mexico. The culvert design was developed by the New Mexico Depart...
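For a free-surface hydraulic model like this one, measurements made on the model are typically converted to prototype values by Froude similarity. Assuming an undistorted 1:20 geometric scale, velocities scale as the square root of the length ratio and discharges as its 5/2 power (standard similitude relations, not details taken from this report):

```python
import math

def prototype_velocity(v_model, scale=20):
    """Froude similarity: velocity ratio = sqrt(length ratio)."""
    return v_model * math.sqrt(scale)

def prototype_discharge(q_model, scale=20):
    """Froude similarity: discharge = velocity * area, so the
    discharge ratio = scale**0.5 * scale**2 = scale**2.5."""
    return q_model * scale ** 2.5
```

So a discharge of 1 unit measured in the 1:20 model corresponds to 20^2.5 ≈ 1789 units at prototype scale, which is why small laboratory flumes can reproduce full-size culvert hydraulics.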

  6. Probing Higgs self-coupling of a classically scale invariant model in e+e- → Zhh: Evaluation at physical point

    NASA Astrophysics Data System (ADS)

    Fujitani, Y.; Sumino, Y.

    2018-04-01

A classically scale invariant extension of the standard model predicts large anomalous Higgs self-interactions. We compute contributions missing from previous studies of probing the Higgs triple coupling of a minimal model using the process e+e- → Zhh. Employing a proper order counting, we compute the total and differential cross sections at the leading order, incorporating the one-loop corrections between zero external momenta and their physical values. The discovery/exclusion potential of a future e+e- collider for this model is estimated. We also find a unique feature in the momentum dependence of the Higgs triple vertex for this class of models.

  7. [Influence of autonomy support, social goals and relatedness on amotivation in physical education classes].

    PubMed

    Moreno Murcia, Juan A; Parra Rojas, Nicolás; González-Cutre Coll, David

    2008-11-01

    The purpose of this study was to analyze some factors that influence amotivation in physical education classes. A sample of 399 students, aged 14 to 16 years, was used. They completed the Perceived Autonomy Support Scale in Exercise Settings (PASSES), the Social Goal Scale-Physical Education (SGS-PE), the relatedness factor of the Basic Psychological Needs in Exercise Scale (BPNES) adapted to physical education, and the amotivation factor of the Perceived Locus of Causality Scale (PLOC). The psychometric properties of the PASSES were analyzed, as this scale had not been validated in the Spanish context; in this analysis, the scale showed appropriate validity and reliability. The results of the structural equation model indicated that social responsibility and social relationship goals positively predicted perception of relatedness, whereas the context of autonomy support did not significantly predict it. In turn, perception of relatedness negatively predicted amotivation. The findings are discussed with regard to enhancing students' positive motivation.

  8. Item response modeling: A psychometric assessment of the children's fruit, vegetable, water, and physical activity self-efficacy scales among Chinese children

    USDA-ARS?s Scientific Manuscript database

    This study aimed to evaluate the psychometric properties of four self-efficacy scales (i.e., self-efficacy for fruit (FSE), vegetable (VSE), and water (WSE) intakes, and physical activity (PASE)) and to investigate their differences in item functioning across sex, age, and body weight status groups ...

  9. Astroparticle physics and cosmology.

    PubMed

    Mitton, Simon

    2006-05-20

    Astroparticle physics is an interdisciplinary field that explores the connections between the physics of elementary particles and the large-scale properties of the universe. Particle physicists have developed a standard model to describe the properties of matter in the quantum world. This model explains the bewildering array of particles in terms of constructs made from two or three quarks. Quarks, leptons, and three of the fundamental forces of physics are the main components of this standard model. Cosmologists have also developed a standard model to describe the bulk properties of the universe. In this new framework, ordinary matter, such as stars and galaxies, makes up only around 4% of the material universe. The bulk of the universe is dark matter (roughly 23%) and dark energy (about 73%). This dark energy drives an acceleration, meaning that the expanding universe will grow ever larger. String theory, in which the universe has several invisible dimensions, might offer an opportunity to unite the quantum description of the particle world with the gravitational properties of the large-scale universe.

  10. Simulation of nitrate reduction in groundwater - An upscaling approach from small catchments to the Baltic Sea basin

    NASA Astrophysics Data System (ADS)

    Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.

    2018-01-01

    This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale physically-based catchment models, use of such detailed models for the 1.8 million km2 Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that E-HYPE with this upscaling methodology can simulate, to the correct order of magnitude, the impact on N-loads of applying spatially targeted regulation at the Baltic Sea basin scale. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.

  11. The power of structural modeling of sub-grid scales - application to astrophysical plasmas

    NASA Astrophysics Data System (ADS)

    Georgiev Vlaykov, Dimitar; Grete, Philipp

    2015-08-01

    In numerous astrophysical phenomena the dynamical range can span tens of orders of magnitude. This implies billions of degrees of freedom or more and precludes direct numerical simulation from ever being a realistic possibility. A physical model is necessary to capture the unresolved physics occurring at the sub-grid scales (SGS). Structural modeling is a powerful concept applicable to various physical systems. It stems from the idea of capturing the structure of the SGS terms in the evolution equations based on the scale-separation mechanism, independently of the underlying physics, and originates in the large-eddy simulation field of hydrodynamics. We apply it to the study of astrophysical MHD. Here, we present a non-linear SGS model for compressible MHD turbulence. The model is validated a priori at the tensorial, vectorial, and scalar levels against a set of high-resolution simulations of stochastically forced homogeneous isotropic turbulence in a periodic box. The parameter space spans two decades in sonic Mach number (0.2-20) and approximately one decade in magnetic Mach number (~1-8). This covers the super-Alfvenic sub-, trans-, and hyper-sonic regimes, with plasma beta ranging from 0.05 to 25. The Reynolds number is of the order of 10^3. At the tensor level, the model components correlate well with the turbulence ones, at the level of 0.8 and above. Vectorially, the alignment with the true SGS terms is encouraging, with more than 50% of the model within 30° of the data. At the scalar level we look at the dynamics of the SGS energy and cross-helicity; the corresponding SGS flux terms have median correlations of ~0.8. Physically, the model represents well the two directions of the energy cascade. In comparison, traditional functional models exhibit poor local correlations with the data already at the scalar level. Vectorially, they are indifferent to the anisotropy of the SGS terms. They often struggle to represent the energy backscatter from small to large scales as well as the turbulent dynamo mechanism. Overall, the new model surpasses the traditional ones in all tests by a large margin.
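
    The a priori validation described above compares SGS terms computed from filtered high-resolution data against the model's prediction. As a schematic one-dimensional illustration (the synthetic field and filter here are invented for the sketch; the paper's comparison is for the full compressible-MHD tensors), one can filter a resolved field, form the exact SGS stress, and correlate it with a gradient-type structural model:

    ```python
    import numpy as np

    # A priori test sketch: exact SGS stress tau = <u u> - <u><u> from a
    # box filter <.>, correlated against a structural (gradient-type) model
    # tau_model ~ (Delta^2 / 12) * (du/dx)^2. Illustrative 1D scalar analogue.

    rng = np.random.default_rng(0)
    N, W = 4096, 16                     # grid points, box-filter width (samples)
    x = np.linspace(0, 2 * np.pi, N, endpoint=False)
    # synthetic multi-scale field: a sum of random-amplitude Fourier modes
    u = sum(rng.normal() / k * np.sin(k * x + rng.uniform(0, 2 * np.pi))
            for k in range(1, 64))

    def box_filter(f, w):
        """Periodic top-hat (box) filter of width w samples."""
        kernel = np.ones(w) / w
        padded = np.concatenate([f[-w:], f, f[:w]])
        return np.convolve(padded, kernel, mode="same")[w:-w]

    dx = x[1] - x[0]
    tau_exact = box_filter(u * u, W) - box_filter(u, W) ** 2
    dudx = np.gradient(box_filter(u, W), dx)
    tau_model = (W * dx) ** 2 / 12.0 * dudx ** 2   # gradient (structural) model

    corr = np.corrcoef(tau_exact, tau_model)[0, 1]
    print(f"a priori correlation: {corr:.2f}")
    ```

    The same procedure, applied component-wise to the SGS stress tensors of filtered MHD simulations, is what "tensor-level correlation" refers to in the abstract.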

  12. On the contributions of astroparticle physics to cosmology

    NASA Astrophysics Data System (ADS)

    Falkenburg, Brigitte

    2014-05-01

    Studying astroparticle physics sheds new light on scientific explanation and on the ways in which cosmology is or is not empirically underdetermined. Astroparticle physics extends the empirical domain of cosmology from purely astronomical data to "multi-messenger astrophysics", i.e., measurements of all kinds of cosmic rays, including very high-energy gamma rays, neutrinos, and charged particles. My paper investigates the ways in which these measurements contribute to cosmology and compares them with philosophical views about scientific explanation, the relation between theory and data, and scientific realism. The "standard models" of cosmology and particle physics lack unified foundations. Both are "piecemeal physics" in Cartwright's sense, but contrary to her metaphysics of a "dappled world", the work in both fields of research aims at unification. Cosmology proceeds "top-down", from models to data and from large-scale to small-scale structures of the universe. Astroparticle physics proceeds "bottom-up", from data taking to models and from subatomic particles to large-scale structures of the universe. In order to reconstruct the causal stories of cosmic rays and the nature of their sources, several pragmatic unifying strategies are employed. Standard views about scientific explanation and scientific realism do not cope with these "bottom-up" strategies and the way in which they contribute to cosmology. In addition, the shift to "multi-messenger astrophysics" transforms the relation between cosmological theory and astrophysical data in a mutually holistic way.

  13. Testing a self-determination theory model of children's physical activity motivation: a cross-sectional study.

    PubMed

    Sebire, Simon J; Jago, Russell; Fox, Kenneth R; Edwards, Mark J; Thompson, Janice L

    2013-09-26

    Understanding children's physical activity motivation, its antecedents and associations with behavior is important and can be advanced by using self-determination theory. However, research among youth is largely restricted to adolescents and studies of motivation within certain contexts (e.g., physical education). There are no measures of self-determination theory constructs (physical activity motivation or psychological need satisfaction) for use among children and no previous studies have tested a self-determination theory-based model of children's physical activity motivation. The purpose of this study was to test the reliability and validity of scores derived from scales adapted to measure self-determination theory constructs among children and test a motivational model predicting accelerometer-derived physical activity. Cross-sectional data from 462 children aged 7 to 11 years from 20 primary schools in Bristol, UK were analysed. Confirmatory factor analysis was used to examine the construct validity of adapted behavioral regulation and psychological need satisfaction scales. Structural equation modelling was used to test cross-sectional associations between psychological need satisfaction, motivation types and physical activity assessed by accelerometer. The construct validity and reliability of the motivation and psychological need satisfaction measures were supported. Structural equation modelling provided evidence for a motivational model in which psychological need satisfaction was positively associated with intrinsic and identified motivation types and intrinsic motivation was positively associated with children's minutes in moderate-to-vigorous physical activity. The study provides evidence for the psychometric properties of measures of motivation aligned with self-determination theory among children. 
Children's motivation that is based on enjoyment and inherent satisfaction of physical activity is associated with their objectively-assessed physical activity and such motivation is positively associated with perceptions of psychological need satisfaction. These psychological factors represent potential malleable targets for interventions to increase children's physical activity.

  14. Factor Structure and Measurement Invariance of the Need-Supportive Teaching Style Scale for Physical Education.

    PubMed

    Liu, Jing-Dong; Chung, Pak-Kwong

    2017-08-01

    The purpose of the current study was to examine the factor structure and measurement invariance of a scale measuring students' perceptions of need-supportive teaching (Need-Supportive Teaching Style Scale in Physical Education; NSTSSPE). We sampled 615 secondary school students in Hong Kong, 200 of whom also completed a follow-up assessment two months later. Factor structure of the scale was examined through exploratory structural equation modeling (ESEM). Further, nomological validity of the NSTSSPE was evaluated by examining the relationships between need-supportive teaching style and student satisfaction of psychological needs. Finally, four measurement models-configural, metric invariance, scalar invariance, and item uniqueness invariance-were assessed using multiple group ESEM to test the measurement invariance of the scale across gender, grade, and time. ESEM results suggested a three-factor structure of the NSTSSPE. Nomological validity was supported, and weak, strong, and strict measurement invariance of the NSTSSPE was evidenced across gender, grade, and time. The current study provides initial psychometric support for the NSTSSPE to assess student perceptions of teachers' need-supportive teaching style in physical education classes.

  15. Probing the scale of new physics by Advanced LIGO/VIRGO

    NASA Astrophysics Data System (ADS)

    Dev, P. S. Bhupal; Mazumdar, A.

    2016-05-01

    We show that if the new physics beyond the standard model is associated with a first-order phase transition around 10^7-10^8 GeV, the energy density stored in the resulting stochastic gravitational waves and the corresponding peak frequency are within the projected final sensitivity of the advanced LIGO/VIRGO detectors. We discuss some possible new physics scenarios that could arise at such energies, and in particular, the consequences for Peccei-Quinn and supersymmetry breaking scales.
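
    As a rough cross-check of why a 10^7-10^8 GeV transition lands in the ground-based detector band, one can use the commonly quoted redshift scaling for the peak frequency of phase-transition gravitational waves (a standard order-of-magnitude estimate from the literature, not a formula given in the abstract; beta/H is the inverse transition duration in Hubble units and g_* the number of relativistic degrees of freedom):

    ```python
    # Back-of-envelope estimate (standard scaling, assumed here):
    #   f_peak ~ 1.6e-5 Hz * (beta/H) * (T / 100 GeV) * (g_*/100)**(1/6)

    def f_peak_hz(T_gev, beta_over_H=100.0, g_star=100.0):
        """Approximate present-day peak frequency of gravitational waves
        from a first-order phase transition at temperature T_gev (GeV)."""
        return 1.6e-5 * beta_over_H * (T_gev / 100.0) * (g_star / 100.0) ** (1 / 6)

    # T ~ 1e7-1e8 GeV with beta/H of order 10 puts the peak at tens to
    # hundreds of Hz, the band probed by advanced LIGO/VIRGO
    print(f_peak_hz(1e7, beta_over_H=10.0))   # ~16 Hz
    print(f_peak_hz(1e8, beta_over_H=10.0))   # ~160 Hz
    ```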

  16. COMMUNITY-SCALE MODELING FOR AIR TOXICS AND HOMELAND SECURITY

    EPA Science Inventory

    The purpose of this task is to develop and evaluate numerical and physical modeling tools for simulating ambient concentrations of airborne substances in urban settings at spatial scales ranging from <1-10 km. Research under this task will support client needs in human exposure ...

  17. Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the recent developments and applications of the multi-scale modeling system will be presented. In particular, the results from using the multi-scale modeling system to study precipitating systems and hurricanes/typhoons will be presented. High-resolution spatial and temporal visualization will be utilized to show the evolution of precipitation processes. 
Also, how to use the multi-satellite simulator to improve the simulated precipitation processes will be discussed.

  18. Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2010-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 sq km in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the results from using multi-scale modeling systems to study the interactions between clouds, precipitation, and aerosols will be presented. Also, how to use the multi-satellite simulator to improve the simulated precipitation processes will be discussed.

  19. Using Multi-Scale Modeling Systems to Study the Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2010-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols will be presented. Also, how to use the multi-satellite simulator to improve the simulated precipitation processes will be discussed.

  20. Extracting Primordial Non-Gaussianity from Large Scale Structure in the Post-Planck Era

    NASA Astrophysics Data System (ADS)

    Dore, Olivier

    Astronomical observations have become a unique tool to probe fundamental physics. Cosmology, in particular, has emerged as a data-driven science whose phenomenological modeling has achieved great success: in the post-Planck era, key cosmological parameters are measured to percent precision. A single model reproduces a wealth of astronomical observations involving very distinct physical processes at different times. This success leads to fundamental physical questions. One of the most salient is the origin of the primordial perturbations that grew to form the large-scale structures we now observe. More and more cosmological observables point to inflationary physics as the origin of the structure observed in the universe. Inflationary physics predicts the statistical properties of the primordial perturbations, which are thought to be slightly non-Gaussian. The detection of this small deviation from Gaussianity represents the next frontier in early-universe physics. Measuring it would provide direct, unique, and quantitative insights into the physics at play when the universe was only a fraction of a second old, thus probing energies untouchable otherwise. On par with the well-known relic gravitational-wave radiation, the famous "B-modes", it is one of the few probes of inflation. This departure from Gaussianity leads to a very specific signature in the large-scale clustering of galaxies. By observing large-scale structure, we can thus establish a direct connection with fundamental theories of the early universe. In the post-Planck era, large-scale structures are our most promising pathway to measuring this primordial signal. Current estimates suggest that the next generation of space- or ground-based large-scale structure surveys (e.g., the ESA EUCLID or NASA WFIRST missions) might enable a detection of this signal. This potentially huge payoff requires us to solidify the theoretical predictions supporting these measurements. 
Even though the exact signal we are looking for is of unknown amplitude, it is clear that we must measure it as well as these groundbreaking data sets will permit. We propose to develop the supporting theoretical work to the point where the complete non-Gaussian signature can be extracted from these data sets. We will do so by developing three complementary directions: - We will develop the appropriate formalism to measure and model galaxy clustering on the largest scales. - We will study the impact of non-Gaussianity on higher-order statistics, the most promising statistics for our purpose. - We will make explicit the connection between these observables and the microphysics of a large class of inflation models, but also identify fundamental limitations to this interpretation.

  1. Linking Local Scale Ecosystem Science to Regional Scale Management

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J.; Peiffer, S.

    2012-04-01

    Ecosystem management with respect to sufficient water yield, a quality water supply, habitat and biodiversity conservation, and climate change effects requires substantial observational data at a range of scales. Complex interactions of local physical processes oftentimes vary over space and time, particularly in locations with extreme meteorological conditions. Modifications to local conditions (i.e., agricultural land use changes, nutrient additions, landscape management, water usage) can further affect regional ecosystem services. The international, interdisciplinary TERRECO research group is intensively investigating a variety of local processes, parameters, and conditions to link complex physical, economic, and social interactions at the regional scale. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. The data are used to parameterize a suite of models describing local- to landscape-level water, sediment, nutrient, and monetary relationships. We focus on using the agricultural and hydrological SWAT model to synthesize the experimental field data and local-scale models throughout the catchment. The approach of our study was to describe local scientific processes, link potential interrelationships between different processes, and predict environmentally efficient management efforts. The Haean catchment case study shows how research can be structured to provide cross-disciplinary scientific linkages describing complex ecosystems and landscapes that can be used for regional management evaluations and predictions.

  2. Evaluating crown fire rate of spread predictions from physics-based models

    Treesearch

    C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont

    2015-01-01

    Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...

  3. Are Atmospheric Updrafts a Key to Unlocking Climate Forcing and Sensitivity?

    DOE PAGES

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel; ...

    2016-06-08

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud-aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities, and parameterizations which do provide vertical velocities have been subject to limited evaluation against what have until recently been scant observations. Atmospheric observations imply that the distribution of vertical velocities depends on the areas over which the vertical velocities are averaged. Distributions of vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of scale-dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  4. Landscape-scale soil moisture heterogeneity and its influence on surface fluxes at the Jornada LTER site: Evaluating a new model parameterization for subgrid-scale soil moisture variability

    NASA Astrophysics Data System (ADS)

    Baker, I. T.; Prihodko, L.; Vivoni, E. R.; Denning, A. S.

    2017-12-01

    Arid and semiarid regions represent a large fraction of global land, with attendant importance of surface energy and trace gas flux to global totals. These regions are characterized by strong seasonality, especially in precipitation, that defines the level of ecosystem stress. Individual plants have been observed to respond non-linearly to increasing soil moisture stress, where plant function is generally maintained as soils dry down to a threshold at which rapid closure of stomates occurs. Incorporating this nonlinear mechanism into landscape-scale models can result in unrealistic binary "on-off" behavior that is especially problematic in arid landscapes. Subsequently, models have `relaxed' their simulation of soil moisture stress on evapotranspiration (ET). Unfortunately, these relaxations are not physically based, but are imposed upon model physics as a means to force a more realistic response. Previously, we have introduced a new method to represent soil moisture regulation of ET, whereby the landscape is partitioned into `BINS' of soil moisture wetness, each associated with a fractional area of the landscape or grid cell. A physically- and observationally-based nonlinear soil moisture stress function is applied, but when convolved with the relative area distribution represented by wetness BINS the system has the emergent property of `smoothing' the landscape-scale response without the need for non-physical impositions on model physics. In this research we confront BINS simulations of Bowen ratio, soil moisture variability and trace gas flux with soil moisture and eddy covariance observations taken at the Jornada LTER dryland site in southern New Mexico. We calculate the mean annual wetting cycle and associated variability about the mean state and evaluate model performance against this variability and time series of land surface fluxes from the highly instrumented Tromble Weir watershed. 
The BINS simulations capture the relatively rapid reaction to wetting events and more prolonged response to drying cycles, as opposed to binary behavior in the control.
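
    A minimal sketch of the BINS idea, with invented bin values (the paper's stress function and bin structure are more detailed): applying a threshold stress function to the single grid-mean soil moisture gives an all-or-nothing response, while area-weighting the same function over wetness bins yields a smooth, partial grid-scale response:

    ```python
    # Illustrative sketch of the BINS concept; parameter values are invented.

    def stress(theta, theta_crit=0.15):
        """Nonlinear plant stress factor: unstressed (1.0) above a
        wilting-type soil moisture threshold, stomates closed (0.0) below."""
        return 1.0 if theta > theta_crit else 0.0

    def grid_et_factor(bin_theta, bin_area):
        """Stress function convolved with the areal distribution of
        wetness bins: the emergent, smoothed grid-scale response."""
        return sum(a * stress(t) for t, a in zip(bin_theta, bin_area))

    # five wetness bins spanning the landscape, drier to wetter
    bin_theta = [0.05, 0.10, 0.15, 0.20, 0.30]
    bin_area = [0.10, 0.20, 0.30, 0.25, 0.15]

    mean_theta = sum(t * a for t, a in zip(bin_theta, bin_area))
    binary = stress(mean_theta)                   # control: grid-mean response
    binned = grid_et_factor(bin_theta, bin_area)  # BINS-style response
    print(mean_theta, binary, binned)
    ```

    With these numbers, the grid-mean approach reports a fully unstressed landscape (factor 1.0) even though most of the area is below the threshold, while the binned version reports the partial response, without any non-physical relaxation of the stress function itself.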

  5. On the physically based modeling of surface tension and moving contact lines with dynamic contact angles on the continuum scale

    NASA Astrophysics Data System (ADS)

    Huber, M.; Keller, F.; Säckel, W.; Hirschler, M.; Kunz, P.; Hassanizadeh, S. M.; Nieken, U.

    2016-04-01

    The description of wetting phenomena is a challenging problem on every considerable length-scale. The behavior of interfaces and contact lines on the continuum scale is caused by intermolecular interactions like the Van der Waals forces. Therefore, to describe surface tension and the resulting dynamics of interfaces and contact lines on the continuum scale, appropriate formulations must be developed. While the Continuum Surface Force (CSF) model is well-engineered for the description of interfaces, there is still a lack of treatment of contact lines, which are defined by the intersection of an ending fluid interface and a solid boundary surface. In our approach we use a balance equation for the contact line and extend the Navier-Stokes equations in analogy to the extension of a two-phase interface in the CSF model. Since this model depicts a physically motivated approach on the continuum scale, no fitting parameters are introduced and the deterministic description leads to a dynamical evolution of the system. As verification of our theory, we show a Smoothed Particle Hydrodynamics (SPH) model and simulate the evolution of droplet shapes and their corresponding contact angles.
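
    For context, the interfacial force that the CSF model adds to the Navier-Stokes equations can be written in its standard form (background from the CSF literature, not a formula given in the abstract; the contact-line balance described here is an analogous extension of this construction):

    ```latex
    % Volumetric surface-tension force of the CSF model: surface tension
    % sigma times interface curvature kappa, directed along the interface
    % normal and concentrated at the interface by a surface delta function.
    \[
      \mathbf{f}_{sv} = \sigma \, \kappa \, \hat{\mathbf{n}} \, \delta_s
    \]
    ```

    In analogous contact-line treatments, the surface delta function is typically replaced by a delta function supported on the contact line; how the present model closes that balance is detailed in the paper itself.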

  6. Resolving the problem of galaxy clustering on small scales: any new physics needed?

    NASA Astrophysics Data System (ADS)

    Kang, X.

    2014-02-01

    Galaxy clustering sets strong constraints on the physics governing galaxy formation and evolution. However, most current models fail to reproduce the clustering of low-mass galaxies on small scales (r < 1 Mpc h-1). In this paper, we study the galaxy clustering predicted from a few semi-analytical models. We first compare two Munich versions, Guo et al. (Guo11) and De Lucia & Blaizot (DLB07). The Guo11 model reproduces the galaxy stellar mass function well, but overpredicts the clustering of low-mass galaxies on small scales. The DLB07 model provides a better fit to the clustering on small scales, but overpredicts the stellar mass function. This seems puzzling. The clustering on small scales is dominated by galaxies in the same dark matter halo, and a slightly larger fraction of satellite galaxies resides in massive haloes in the Guo11 model; this is the dominant contribution to the clustering discrepancy between the two models. However, both models still overpredict the clustering at 0.1 < r < 10 Mpc h-1 for low-mass galaxies, because both overpredict the number of satellites in massive haloes by 30 per cent relative to the data. We show that the Guo11 model could be slightly modified to simultaneously fit the stellar mass function and clustering, but that this cannot be easily achieved in the DLB07 model. The better agreement of the DLB07 model with the data actually comes as a coincidence: it predicts too many low-mass central galaxies, which are less clustered and thus bring down the total clustering. Finally, we show the predictions from the semi-analytical model of Kang et al. We find that this model can simultaneously fit the stellar mass function and galaxy clustering if the supernova feedback in satellite galaxies is stronger. We conclude that semi-analytical models are now able to solve the small-scale clustering problem without invoking any new physics or changing the dark matter properties, such as the recently favoured warm dark matter.

  7. Impacts of spectral nudging on the simulated surface air temperature in summer compared with the selection of shortwave radiation and land surface model physics parameterization in a high-resolution regional atmospheric model

    NASA Astrophysics Data System (ADS)

    Park, Jun; Hwang, Seung-On

    2017-11-01

    The impact of a spectral nudging technique on the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulation sets built from combinations of two shortwave radiation and four land surface model schemes, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. An optimal choice of the two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results: the improvement is greater than that from using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data, by securing consistency with the large-scale forcing over the regional domain. This in turn indirectly helps the two physical parameterizations produce small-scale features closer to the observed values, leading to a better representation of the surface air temperature in a high-resolution downscaled climate.

  8. Hierarchical Multi-Scale Approach To Validation and Uncertainty Quantification of Hyper-Spectral Image Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
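    The hierarchical propagation step can be sketched as a Monte Carlo push of micro-scale uncertainty through a system-level model. Both model functions, the roughness parameter, and the gain below are hypothetical placeholders invented for the sketch, not the program's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical micro-scale forward model: reflectance of a solid surface
# as a function of an uncertain roughness parameter (illustrative only).
def micro_model(roughness):
    return 0.6 * np.exp(-2.0 * roughness)

# Hypothetical system model mapping reflectance to sensor-level counts.
def sensor_model(reflectance, gain=1000.0):
    return gain * reflectance

# Sample the uncertain micro-scale input, push each draw through both
# tiers, and summarize the propagated uncertainty at the sensor level.
roughness = rng.normal(loc=0.3, scale=0.05, size=100_000)
signal = sensor_model(micro_model(roughness))
mean, std = signal.mean(), signal.std(ddof=1)
```

    The sensor-level mean and spread then quantify how micro-scale parameter uncertainty manifests in the measured quantity, which is the comparison the hierarchical validation makes against physical experiments.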

  9. Perspectives on integrated modeling of transport processes in semiconductor crystal growth

    NASA Technical Reports Server (NTRS)

    Brown, Robert A.

    1992-01-01

    The wide range of length and time scales involved in industrial scale solidification processes is demonstrated here by considering the Czochralski process for the growth of large diameter silicon crystals that become the substrate material for modern microelectronic devices. The scales range in time from microseconds to thousands of seconds and in space from microns to meters. The physics and chemistry needed to model processes on these different length scales are reviewed.

  10. Describing Ecosystem Complexity through Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J. D.; Peiffer, S.

    2011-12-01

    Land use and climate change have been implicated in reduced ecosystem services (e.g., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot- and field-scale experiments within the catchment. Results from each of the local-scale models identify sensitive local-scale parameters, which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. Our approach is to use the range of local-scale model parameter results to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured to yield scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all stakeholders.

  11. Bridging the Gap Between the iLEAPS and GEWEX Land-Surface Modeling Communities

    NASA Technical Reports Server (NTRS)

    Bonan, Gordon; Santanello, Joseph A., Jr.

    2013-01-01

    Models of Earth's weather and climate require fluxes of momentum, energy, and moisture across the land-atmosphere interface to solve the equations of atmospheric physics and dynamics. Just as atmospheric models can, and do, differ between weather and climate applications, mostly related to issues of scale, resolved or parameterised physics, and computational requirements, so too can the land models that provide the required surface fluxes differ between weather and climate models. Here, however, the issue is less one of scale-dependent parameterisations. Computational demands can influence other minor land model differences, especially with respect to initialisation, data assimilation, and forecast skill. However, the distinction among land models (and their development and application) is largely driven by the different science and research needs of the weather and climate communities.

  12. Minimal Length Scale Scenarios for Quantum Gravity.

    PubMed

    Hossenfelder, Sabine

    2013-01-01

    We review the question of whether the fundamental laws of nature limit our ability to probe arbitrarily short distances. First, we examine what insights can be gained from thought experiments for probes of shortest distances, and summarize what can be learned from different approaches to a theory of quantum gravity. Then we discuss some models that have been developed to implement a minimal length scale in quantum mechanics and quantum field theory. These models have entered the literature as the generalized uncertainty principle or the modified dispersion relation, and have allowed the study of the effects of a minimal length scale in quantum mechanics, quantum electrodynamics, thermodynamics, black-hole physics and cosmology. Finally, we touch upon the question of ways to circumvent the manifestation of a minimal length scale in short-distance physics.
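    For orientation, the generalized uncertainty principle mentioned above is commonly written with a single deformation parameter β; minimizing the bound over Δp then yields a smallest resolvable distance. This is the standard textbook form, shown here as context rather than quoted from the review:

```latex
% GUP with deformation parameter \beta > 0:
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\Bigl(1 + \beta\,(\Delta p)^2\Bigr)
% Equivalently \Delta x \ge \frac{\hbar}{2}\bigl(1/\Delta p + \beta\,\Delta p\bigr),
% which is minimized at \Delta p = 1/\sqrt{\beta}, giving the minimal length
\Delta x_{\min} \;=\; \hbar\sqrt{\beta}
```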

  13. The Soccer Ball Model: A Useful Visualization Protocol for Scaling Concepts in Continua

    ERIC Educational Resources Information Center

    Arce, Pedro E.; Pascal, Jennifer; Torres, Cynthia

    2010-01-01

    When studying the physics of transport, it is necessary to develop conservation equations, and the concept of a continuum scale must be introduced. Most textbooks do not address this issue, assuming that the mathematical steps are familiar to the learner. In fact, students are introduced to physical concepts, such as mass, momentum, and energy for…

  14. Aspects of string phenomenology in particle physics and cosmology

    NASA Astrophysics Data System (ADS)

    Antoniadis, I.

    2017-12-01

    I discuss possible connections between several scales in particle physics and cosmology, such as the electroweak, inflation, dark energy and Planck scales. In particular, I discuss the physics of extra dimensions and low-scale gravity that are motivated by the problem of mass hierarchy, providing an alternative to low-energy supersymmetry. I describe their realization in type I string theory with D-branes and I present the main experimental predictions in particle accelerators and their implications in cosmology. I also show that low-mass-scale string compactifications, with a generic D-brane configuration that realizes the Standard Model by open strings, can explain the relatively broad peak in the diphoton invariant mass spectrum at 750 GeV recently reported by the ATLAS and CMS collaborations.

  15. Consistent Large-Eddy Simulation of a Temporal Mixing Layer Laden with Evaporating Drops. Part 2; A Posteriori Modelling

    NASA Technical Reports Server (NTRS)

    Leboissertier, Anthony; Okong'O, Nora; Bellan, Josette

    2005-01-01

    Large-eddy simulation (LES) is conducted of a three-dimensional temporal mixing layer whose lower stream is initially laden with liquid drops which may evaporate during the simulation. The gas-phase equations are written in an Eulerian frame for two perfect gas species (carrier gas and vapour emanating from the drops), while the liquid-phase equations are written in a Lagrangian frame. The effect of drop evaporation on the gas phase is considered through mass, species, momentum and energy source terms. The drop evolution is modelled using physical drops, or using computational drops to represent the physical drops. Simulations are performed using various LES models previously assessed on a database obtained from direct numerical simulations (DNS). These LES models are for: (i) the subgrid-scale (SGS) fluxes and (ii) the filtered source terms (FSTs) based on computational drops. The LES, which are compared to filtered-and-coarsened (FC) DNS results at the coarser LES grid, are conducted with 64 times fewer grid points than the DNS, and up to 64 times fewer computational than physical drops. It is found that both constant-coefficient and dynamic Smagorinsky SGS-flux models, though numerically stable, are overly dissipative and damp generated small-resolved-scale (SRS) turbulent structures. Although the global growth and mixing predictions of LES using Smagorinsky models are in good agreement with the FC-DNS, the spatial distributions of the drops differ significantly. In contrast, the constant-coefficient scale-similarity model and the dynamic gradient model perform well in predicting most flow features, with the latter model having the advantage of not requiring a priori calibration of the model coefficient. 
The ability of the dynamic models to determine the model coefficient during LES is found to be essential since the constant-coefficient gradient model, although more accurate than the Smagorinsky model, is not consistently numerically stable despite using DNS-calibrated coefficients. With accurate SGS-flux models, namely scale-similarity and dynamic gradient, the FST model allows up to a 32-fold reduction in computational drops compared to the number of physical drops, without degradation of accuracy; a 64-fold reduction leads to a slight decrease in accuracy.
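    The constant-coefficient Smagorinsky closure discussed above models the deviatoric SGS stress through an eddy viscosity built from the filter width and the resolved strain rate; its standard form is:

```latex
\tau_{ij} - \tfrac{1}{3}\delta_{ij}\,\tau_{kk}
  \;=\; -2\,\nu_t\,\bar{S}_{ij},
\qquad
\nu_t \;=\; (C_s\,\bar{\Delta})^2\,\lvert\bar{S}\rvert,
\qquad
\lvert\bar{S}\rvert \;=\; \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}
```

    The dynamic variant referred to above computes the coefficient from the resolved field (via the Germano identity between two filter levels) instead of fixing C_s a priori, which is why it needs no DNS calibration.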

  16. Acoustic Model Testing Chronology

    NASA Technical Reports Server (NTRS)

    Nesman, Tom

    2017-01-01

    Scale models have been used for decades to replicate liftoff environments, and in particular acoustics, for launch vehicles. It is assumed, and analyses support, that the key characteristics of noise generation, propagation, and measurement can be scaled. Over time, significant insight was gained not just into the effects of thruster details, pad geometry, and sound mitigation, but also into the physical processes involved. An overview of a selected set of scale model tests is compiled here to illustrate the variety of configurations that have been tested and the fundamental knowledge gained. The selected scale model tests are presented chronologically.

  17. Conformal standard model, leptogenesis, and dark matter

    NASA Astrophysics Data System (ADS)

    Lewandowski, Adrian; Meissner, Krzysztof A.; Nicolai, Hermann

    2018-02-01

    The conformal standard model is a minimal extension of the Standard Model (SM) of particle physics based on the assumed absence of large intermediate scales between the TeV scale and the Planck scale, which incorporates only right-chiral neutrinos and a new complex scalar in addition to the usual SM degrees of freedom, but no other features such as supersymmetric partners. In this paper, we present a comprehensive quantitative analysis of this model, and show that all outstanding issues of particle physics proper can in principle be solved "in one go" within this framework. This includes in particular the stabilization of the electroweak scale, "minimal" leptogenesis and the explanation of dark matter, with a small mass and very weakly interacting Majoron as the dark matter candidate (for which we propose to use the name "minoron"). The main testable prediction of the model is a new and almost sterile scalar boson that would manifest itself as a narrow resonance in the TeV region. We give a representative range of parameter values consistent with our assumptions and with observation.

  18. From Global to Cloud Resolving Scale: Experiments with a Scale- and Aerosol-Aware Physics Package and Impact on Tracer Transport

    NASA Astrophysics Data System (ADS)

    Grell, G. A.; Freitas, S. R.; Olson, J.; Bela, M.

    2017-12-01

    We will start by providing a summary of the latest cumulus parameterization modeling efforts at NOAA's Earth System Research Laboratory (ESRL), on both regional and global scales. The physics package includes a scale-aware parameterization of subgrid cloudiness feedback to radiation (coupled PBL, microphysics, radiation, shallow and congestus type convection), the stochastic Grell-Freitas (GF) scale- and aerosol-aware convective parameterization, and an aerosol-aware microphysics package. GF is based on a stochastic approach originally implemented by Grell and Devenyi (2002) and described in more detail in Grell and Freitas (2014, ACP). It was expanded to include PDFs for vertical mass flux, as well as modifications to improve the diurnal cycle. This physics package will be used on different scales, spanning global to cloud resolving, to look at the impact on scalar transport and numerical weather prediction.

  19. Neutrino mass, dark matter, and Baryon asymmetry via TeV-scale physics without fine-tuning.

    PubMed

    Aoki, Mayumi; Kanemura, Shinya; Seto, Osamu

    2009-02-06

    We propose an extended version of the standard model, in which neutrino oscillation, dark matter, and the baryon asymmetry of the Universe can be simultaneously explained by the TeV-scale physics without assuming a large hierarchy among the mass scales. Tiny neutrino masses are generated at the three-loop level due to the exact Z2 symmetry, by which the stability of the dark matter candidate is guaranteed. The extra Higgs doublet is required not only for the tiny neutrino masses but also for successful electroweak baryogenesis. The model provides discriminative predictions especially in Higgs phenomenology, so that it is testable at current and future collider experiments.

  20. An Illustrative Guide to the Minerva Framework

    NASA Astrophysics Data System (ADS)

    Flom, Erik; Leonard, Patrick; Hoeffel, Udo; Kwak, Sehyun; Pavone, Andrea; Svensson, Jakob; Krychowiak, Maciej; Wendelstein 7-X Team Collaboration

    2017-10-01

    Modern physics experiments require tracking and modelling data and their associated uncertainties on a large scale, as well as the combined implementation of multiple independent data streams for sophisticated modelling and analysis. The Minerva framework offers a centralized, user-friendly method of large-scale physics modelling and scientific inference. Currently used by teams at multiple large-scale fusion experiments including the Joint European Torus (JET) and Wendelstein 7-X (W7-X), the Minerva framework provides a forward-model-friendly architecture for developing and implementing models for large-scale experiments. One aspect of the framework involves so-called data sources, which are nodes in the graphical model. These nodes are supplied with engineering and physics parameters. When end-user-level code calls a node, it is checked network-wide against its dependent nodes for changes since its last implementation and returns version-specific data. Here, a filterscope data node is used as an illustrative example of the Minerva framework's data management structure and its further application to Bayesian modelling of complex systems. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under Grant Agreement No. 633053.

  1. Assessing the validity and reliability of family factors on physical activity: A case study in Turkey.

    PubMed

    Steenson, Sharalyn; Özcebe, Hilal; Arslan, Umut; Konşuk Ünlü, Hande; Araz, Özgür M; Yardim, Mahmut; Üner, Sarp; Bilir, Nazmi; Huang, Terry T-K

    2018-01-01

    Childhood obesity rates have been rising rapidly in developing countries. A better understanding of the risk factors and social context is necessary to inform public health interventions and policies. This paper describes the validation of several measurement scales for use in Turkey, which relate to child and parent perceptions of physical activity (PA) and enablers and barriers of physical activity in the home environment. The aim of this study was to assess the validity and reliability of several measurement scales in Turkey using a population sample across three socio-economic strata in the Turkish capital, Ankara. Surveys were conducted in Grade 4 children (mean age = 9.7 years for boys; 9.9 years for girls), and their parents, across 6 randomly selected schools, stratified by SES (n = 641 students, 483 parents). Construct validity of the scales was evaluated through exploratory and confirmatory factor analysis. Internal consistency of scales and test-retest reliability were assessed by Cronbach's alpha and intra-class correlation. The scales as a whole were found to have acceptable-to-good model fit statistics (PA Barriers: RMSEA = 0.076, SRMR = 0.0577, AGFI = 0.901; PA Outcome Expectancies: RMSEA = 0.054, SRMR = 0.0545, AGFI = 0.916, and PA Home Environment: RMSEA = 0.038, SRMR = 0.0233, AGFI = 0.976). The PA Barriers subscales showed good internal consistency and poor to fair test-retest reliability (personal α = 0.79, ICC = 0.29, environmental α = 0.73, ICC = 0.59). The PA Outcome Expectancies subscales showed good internal consistency and test-retest reliability (negative α = 0.77, ICC = 0.56; positive α = 0.74, ICC = 0.49). Only the PA Home Environment subscale on support for PA was validated in the final confirmatory model; it showed moderate internal consistency and test-retest reliability (α = 0.61, ICC = 0.48). This study is the first to validate measures of perceptions of physical activity and the physical activity home environment in Turkey. 
Our results support the originally hypothesized two-factor structures for Physical Activity Barriers and Physical Activity Outcome Expectancies. However, we found the one-factor rather than two-factor structure for Physical Activity Home Environment had the best model fit. This study provides general support for the use of these scales in Turkey in terms of validity, but test-retest reliability warrants further research.
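    The internal-consistency statistic (Cronbach's alpha) reported above can be computed directly from an item-score matrix. A minimal sketch follows; the Likert responses below are invented for illustration and are not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative 5-respondent, 3-item Likert responses (made-up data).
scores = np.array([[4, 5, 4],
                   [2, 3, 2],
                   [5, 5, 4],
                   [1, 2, 2],
                   [3, 3, 3]])
alpha = cronbach_alpha(scores)
```

    When items covary strongly, the total-score variance dominates the summed item variances and alpha approaches 1; fully identical items give exactly 1, while uncorrelated items drive alpha toward 0.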

  2. Shock Interaction of Metal Particles in Condensed Explosive Detonation

    NASA Astrophysics Data System (ADS)

    Ripley, Robert; Zhang, Fan; Lien, Fue-Sang

    2005-07-01

    For detonation propagation in a condensed explosive with metal particles, a macro-scale physical model describing the momentum transfer between the explosive and particles has yet to be completely established. Previous 1D and 2D meso-scale modeling studies indicated that significant momentum transfer from the explosive to the particles occurs as the leading shock front crosses the particles, thus influencing the initiation and detonation structure. In this work, 3D meso-scale modeling is conducted to further study the two-phase momentum transfer during the shock diffraction and subsequent detonation in liquid nitromethane containing packed metal particles. Detonation of the condensed explosive is computed using an Arrhenius reaction model and a hybrid EOS model that combines the Mie-Gruneisen equation for reactants and the JWL equation for products. The compressible particles are modeled using the Tait EOS, where the material strength is negligible. The effect of particle packing configuration and inter-particle spacing is shown by parametric studies. Finally, a physical description of the momentum transfer is discussed.

  3. A multiscale strength model for tantalum over an extended range of strain rates

    NASA Astrophysics Data System (ADS)

    Barton, N. R.; Rhee, M.

    2013-09-01

    A strength model for tantalum is developed and exercised across a range of conditions relevant to various types of experimental observations. The model is based on previous multiscale modeling work combined with experimental observations. As such, the model's parameterization includes a hybrid of quantities that arise directly from predictive sub-scale physics models and quantities that are adjusted to align the model with experimental observations. Given current computing and experimental limitations, the response regions for sub-scale physics simulations and detailed experimental observations have been largely disjoint. In formulating the new model and presenting results here, attention is paid to integrated experimental observations that probe strength response at the elevated strain rates where a previous version of the model has generally been successful in predicting experimental data [Barton et al., J. Appl. Phys. 109(7), 073501 (2011)].

  4. Principal axes estimation using the vibration modes of physics-based deformable models.

    PubMed

    Krinidis, Stelios; Chatzis, Vassilios

    2008-06-01

    This paper addresses the issue of accurate, effective, computationally efficient, fast, and fully automated 2-D object orientation and scaling factor estimation. The object orientation is calculated using object principal axes estimation. The approach relies on the object's frequency-based features, which are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method in both the orientation and scaling estimations.
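    The paper's estimator works on frequency-based deformable-model features, but the classical baseline it improves upon, principal-axis orientation from second-order image moments, can be sketched as follows (a conventional reference method, not the paper's algorithm):

```python
import numpy as np

def principal_orientation(mask):
    """Orientation (radians) of a binary object's major principal axis,
    computed from second-order central moments of its pixel coordinates."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu20 = np.mean(x * x)  # spread along x
    mu02 = np.mean(y * y)  # spread along y
    mu11 = np.mean(x * y)  # xy covariance
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

# A horizontal bar should have orientation 0; a diagonal line, pi/4.
bar = np.zeros((5, 11)); bar[2, :] = 1
diag = np.eye(8)
```

    Note that the angle is measured in image (row, column) coordinates; moment-based estimates degrade for noisy or near-isotropic shapes, which motivates the model-based features used in the paper.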

  5. Observations and Modelling of Winds and Waves during the Surface Wave Dynamics Experiment. Report 1. Intensive Observation Period IOP-1, 20-31 October 1990

    DTIC Science & Technology

    1993-04-01

    wave buoy provided by SEATEX, Norway (Figure 3). The modified Mills-cross array was designed to provide spatial estimates of the variation in wave, wind... designed for SWADE to examine the wave physics at different spatial and temporal scales, and the usefulness of a nested system. Each grid is supposed to...field specification. SWADE Model This high-resolution grid was designed to simulate the small scale wave physics and to improve and verify the source

  6. AIR TOXICS MODELING FROM LOCAL TO REGIONAL SCALES TO SUPPORT THE 2002 MULTIPOLLUTANT ASSESSMENT

    EPA Science Inventory

    This research focuses on developing models that can describe the chemical and physical processes affecting concentrations of toxic air pollutants in the atmosphere, at spatial scales, ranging from local (< 1 km) to regional (36 km). One objective of this task is to extend the ca...

  7. Probing the frontiers of particle physics with tabletop-scale experiments.

    PubMed

    DeMille, David; Doyle, John M; Sushkov, Alexander O

    2017-09-08

    The field of particle physics is in a peculiar state. The standard model of particle theory successfully describes every fundamental particle and force observed in laboratories, yet fails to explain properties of the universe such as the existence of dark matter, the amount of dark energy, and the preponderance of matter over antimatter. Huge experiments, of increasing scale and cost, continue to search for new particles and forces that might explain these phenomena. However, these frontiers also are explored in certain smaller, laboratory-scale "tabletop" experiments. This approach uses precision measurement techniques and devices from atomic, quantum, and condensed-matter physics to detect tiny signals due to new particles or forces. Discoveries in fundamental physics may well come first from small-scale experiments of this type. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  8. Simulating the Physical World

    NASA Astrophysics Data System (ADS)

    Berendsen, Herman J. C.

    2004-06-01

    The simulation of physical systems requires a simplified, hierarchical approach which models each level from the atomistic to the macroscopic scale. From quantum mechanics to fluid dynamics, this book systematically treats the broad scope of computer modeling and simulations, describing the fundamental theory behind each level of approximation. Berendsen evaluates each stage in relation to its applications, giving the reader insight into the possibilities and limitations of the models. Practical guidance for applications and sample programs in Python are provided. With a strong emphasis on molecular models in chemistry and biochemistry, this book will be suitable for advanced undergraduate and graduate courses on molecular modeling and simulation within physics, biophysics, physical chemistry and materials science. It will also be a useful reference to all those working in the field. Additional resources for this title, including solutions for instructors and programs, are available online at www.cambridge.org/9780521835275. This is the first book to cover the wide range of modeling and simulations, from the atomistic to the macroscopic scale, in a systematic fashion. Providing a wealth of background material, it does not assume advanced knowledge and is eminently suitable for course use, and it contains practical examples and sample programs in Python.

  9. a Model Study of Small-Scale World Map Generalization

    NASA Astrophysics Data System (ADS)

    Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.

    2018-04-01

    With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics, and there is a surging demand worldwide for small-scale world maps in large formats. Further study of automated mapping technology, especially the realization of small-scale production of a large-format global map, is a key problem that the cartographic field needs to solve. In light of this, this paper adopts an improved map generalization model in which the map and the data are separated, so that geographic data can be decoupled from mapping data. The model mainly comprises a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, each symbol and its physical counterpart in the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1086 subtypes, 21845 basic algorithms, and over 2500 relevant functional modules. To evaluate the accuracy and visual effect of our model for topographic and thematic maps, we take world map generalization at small scale as an example. After the generalization process, combining and simplifying the scattered islands makes the map more explicit at 1 : 2.1 billion scale, and the map features become more complete and accurate. The model not only enhances map generalization at various scales significantly, but also achieves integration among map products at various scales, suggesting that it provides a reference for cartographic generalization at various scales.

  10. Novel dark matter phenomenology at colliders

    NASA Astrophysics Data System (ADS)

    Wardlow, Kyle Patrick

    While a suitable candidate particle for dark matter (DM) has yet to be discovered, it is possible one will be found by experiments currently investigating physics at the weak scale. If discovered at that energy scale, the dark matter will likely be producible in significant quantities at colliders like the LHC, allowing the properties of, and the underlying physical model characterizing, the dark matter to be precisely determined. I assume that the dark matter will be produced as one of the decay products of a new massive resonance related to physics beyond the Standard Model (SM), and, using the energy distributions of the associated visible decay products, I develop techniques for determining the symmetry protecting these potential dark matter candidates from decaying into lighter SM particles and for simultaneously measuring the masses of both the dark matter candidate and the particle from which it decays.

  11. The Relationship Between Reminiscence Functions, Optimism, Depressive Symptoms, Physical Activity, and Pain in Older Adults.

    PubMed

    McDonald, Deborah Dillon; Shellman, Juliette M; Graham, Lindsey; Harrison, Lisa

    2016-09-01

    The study purpose was to examine the association between reminiscence functions, optimism, depressive symptoms, physical activity, and pain in older adults with chronic lower extremity osteoarthritis pain. One hundred ninety-five community-dwelling adults were interviewed using the Modified Reminiscence Functions Scale, Brief Pain Inventory, Life Orientation Test-Revised, Center for Epidemiologic Studies Short Depression Scale, and Physical Activity Scale for the Elderly in random counterbalanced order. Structural equation modeling supported chronic pain as positively associated with depressive symptoms and comorbidities and unrelated to physical activity. Depressive symptoms were positively associated with self-negative reminiscence and negatively associated with optimism. Spontaneous reminiscence was not associated with increased physical activity or reduced pain. Individuals may require facilitated integrative reminiscence to assist them in reinterpreting negative memories in a more positive way. Facilitated integrative reminiscence about enjoyed past physical activity is a potential way to increase physical activity, but must be tested in future research. [Res Gerontol Nurs. 2016; 9(5):223-231.]. Copyright 2016, SLACK Incorporated.

  12. FY10 Report on Multi-scale Simulation of Solvent Extraction Processes: Molecular-scale and Continuum-scale Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wardle, Kent E.; Frey, Kurt; Pereira, Candido

    2014-02-02

    This task is aimed at predictive modeling of solvent extraction processes in typical extraction equipment through multiple simulation methods at various scales of resolution. We have conducted detailed continuum fluid dynamics simulations at the process unit level as well as simulations of the molecular-level physical interactions which govern extraction chemistry. By combining information gained through simulations at each of these two tiers with advanced techniques such as the Lattice Boltzmann Method (LBM), which can bridge the two scales, we can develop the tools to work towards predictive simulation of solvent extraction at the equipment scale (Figure 1). The goal of such a tool, along with enabling optimized design and operation of extraction units, would be to allow prediction of stage extraction efficiency under specified conditions. Simulation efforts at each of the two scales are described below. As the initial application of FELBM in the work performed during FY10 was annular mixing, it is discussed in the context of the continuum scale. In the future, however, it is anticipated that the real value of FELBM will be as a tool for sub-grid model development through highly refined DNS-like multiphase simulations, facilitating exploration and development of droplet models (including breakup and coalescence) that will be needed for the large-scale simulations where droplet-level physics cannot be resolved. In this area, it can have a significant advantage over traditional CFD methods, as its high computational efficiency allows exploration of significantly greater physical detail, especially as computational resources increase in the future.

  13. Psychological factors related to physical education classes as predictors of students' intention to partake in leisure-time physical activity.

    PubMed

    Baena-Extremera, Antonio; Granero-Gallegos, Antonio; Ponce-de-León-Elizondo, Ana; Sanz-Arazuri, Eva; Valdemoros-San-Emeterio, María de Los Ángeles; Martínez-Molina, Marina

    2016-04-01

    In view of the rise in sedentary lifestyles amongst young people, knowledge regarding their intention to partake in physical activity can be decisive when it comes to instilling physical activity habits to improve the current and future health of school students. Therefore, the objective of this study was to find a predictive model of the intention to partake in leisure-time physical activity based on motivation, satisfaction, and competence. The sample consisted of 347 male and 411 female Spanish high school students aged between 13 and 18 years old. We used a questionnaire made up of the Sport Motivation Scale, the Sport Satisfaction Instrument, the competence factor of the Basic Psychological Needs in Exercise Scale, and the Intention to Partake in Leisure-Time Physical Activity scale, all of them adapted to school Physical Education. We carried out confirmatory factor analyses and structural equation models. The intention to partake in leisure-time physical activity was predicted by competence, and competence by satisfaction/fun. Intrinsic motivation was revealed to be the best predictor of satisfaction/fun. Intrinsic motivation should therefore be enhanced in order to foster an intention to partake in physical activity in Physical Education students.

  14. Physical controls and predictability of stream hyporheic flow evaluated with a multiscale model

    USGS Publications Warehouse

    Stonedahl, Susa H.; Harvey, Judson W.; Detty, Joel; Aubeneau, Antoine; Packman, Aaron I.

    2012-01-01

    Improved predictions of hyporheic exchange based on easily measured physical variables are needed to improve assessment of solute transport and reaction processes in watersheds. Here we compare physically based model predictions for an Indiana stream with stream tracer results interpreted using the Transient Storage Model (TSM). We parameterized the physically based Multiscale Model (MSM) of stream-groundwater interactions with measured stream planform and discharge, stream velocity, streambed hydraulic conductivity and porosity, and topography of the streambed at distinct spatial scales (i.e., ripple, bar, and reach scales). We predicted hyporheic exchange fluxes and hyporheic residence times using the MSM. A Continuous Time Random Walk (CTRW) model was used to convert the MSM output into predictions of in-stream solute transport, which we compared with field observations and TSM parameters obtained by fitting solute transport data. MSM simulations indicated that surface-subsurface exchange through smaller topographic features such as ripples was much faster than exchange through larger topographic features such as bars. However, hyporheic exchange varies nonlinearly with groundwater discharge owing to interactions between flows induced at different topographic scales. MSM simulations showed that groundwater discharge significantly decreased both the volume of water entering the subsurface and the time it spent in the subsurface. The MSM also characterized longer timescales of exchange than were observed by the tracer-injection approach. The tracer data, and corresponding TSM fits, were limited by tracer measurement sensitivity and uncertainty in estimates of background tracer concentrations. Our results indicate that rates and patterns of hyporheic exchange are strongly influenced by a continuum of surface-subsurface hydrologic interactions over a wide range of spatial and temporal scales rather than discrete processes.
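The CTRW framing above can be illustrated with a toy particle-tracking sketch: particles advect downstream at a constant velocity and occasionally enter a storage (hyporheic) zone for a heavy-tailed residence time, which produces the long breakthrough tails the MSM characterizes. All parameter values here are hypothetical, not calibrated to the Indiana stream:

```python
import random

def ctrw_arrival_times(n_particles, reach_length, velocity,
                       p_trap, alpha, t_min, seed=0):
    """Arrival times at the reach outlet for a simple CTRW:
    advection in unit-length steps at constant velocity, with a
    probability p_trap per step of entering the hyporheic zone for
    a Pareto-distributed residence time (tail exponent alpha)."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_particles):
        t, x = 0.0, 0.0
        while x < reach_length:
            x += 1.0                  # advect one step downstream
            t += 1.0 / velocity       # travel time of the step
            if rng.random() < p_trap:
                # heavy-tailed storage waits give long breakthrough tails
                t += t_min * rng.paretovariate(alpha)
        times.append(t)
    return times
```

Sorting the returned times yields a synthetic breakthrough curve; with alpha < 2 the mean residence time is dominated by rare, very long storage events.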

  15. A haptic model of vibration modes in spherical geometry and its application in atomic physics, nuclear physics and beyond

    NASA Astrophysics Data System (ADS)

    Ubben, Malte; Heusler, Stefan

    2018-07-01

    Vibration modes in spherical geometry can be classified by the number and position of their nodal planes. However, the geometry of these planes is non-trivial and cannot easily be displayed in two dimensions. We present 3D-printed models of these vibration modes, enabling a haptic approach to understanding essential features of bound states in quantum physics and beyond. In particular, when applied to atomic physics, atomic orbitals are obtained in a natural manner. Applied to nuclear physics, the same patterns of vibration modes emerge as the cornerstone of the nuclear shell model. The application of the very same model over a range of more than 5 orders of magnitude in length scale leads to a general discussion of the applicability and limits of validity of physical models.

  16. Different modelling approaches to evaluate nitrogen transport and turnover at the watershed scale

    NASA Astrophysics Data System (ADS)

    Epelde, Ane Miren; Antiguedad, Iñaki; Brito, David; Jauch, Eduardo; Neves, Ramiro; Garneau, Cyril; Sauvage, Sabine; Sánchez-Pérez, José Miguel

    2016-08-01

    This study presents the simulation of hydrological processes and nutrient transport and turnover processes using two integrated numerical models: the Soil and Water Assessment Tool (SWAT) (Arnold et al., 1998), an empirical and semi-distributed numerical model, and Modelo Hidrodinâmico (MOHID) (Neves, 1985), a physics-based and fully distributed numerical model. This work shows that both models reproduce water and nitrate export at the watershed scale satisfactorily on an annual and daily basis, with MOHID providing slightly better results. At the watershed scale, both SWAT and MOHID simulated the denitrification amount similarly and satisfactorily. However, as the MOHID numerical model was the only one able to reproduce adequately the spatial variation of the soil hydrological conditions and water table level fluctuations, it proved to be the only model capable of reproducing the spatial variation of nutrient cycling processes that depend on the soil hydrological conditions, such as denitrification. This demonstrates the strength of fully distributed, physics-based models in simulating the spatial variability of nutrient cycling processes that depend on the hydrological conditions of the soils.

  17. Stochastic Spatial Models in Ecology: A Statistical Physics Approach

    NASA Astrophysics Data System (ADS)

    Pigolotti, Simone; Cencini, Massimo; Molina, Daniel; Muñoz, Miguel A.

    2018-07-01

    Ecosystems display a complex spatial organization. Ecologists have long tried to characterize them by looking at how different measures of biodiversity change across spatial scales. Ecological neutral theory has provided simple predictions accounting for general empirical patterns in communities of competing species. However, while neutral theory in well-mixed ecosystems is mathematically well understood, spatial models still present several open problems, limiting the quantitative understanding of spatial biodiversity. In this review, we discuss the state of the art in spatial neutral theory. We emphasize the connection between spatial ecological models and the physics of non-equilibrium phase transitions and how concepts developed in statistical physics translate in population dynamics, and vice versa. We focus on non-trivial scaling laws arising at the critical dimension D = 2 of spatial neutral models, and their relevance for biological populations inhabiting two-dimensional environments. We conclude by discussing models incorporating non-neutral effects in the form of spatial and temporal disorder, and analyze how their predictions deviate from those of purely neutral theories.
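Neutral spatial dynamics of the kind reviewed above are often studied with the voter model, in which a site simply adopts the species of a random neighbor; with no speciation term, drift alone erodes diversity. A minimal sketch (grid size and update count are arbitrary choices, not from the review):

```python
import random

def voter_model(L, steps, seed=0):
    """Neutral (voter-model) dynamics on an L x L grid with periodic
    boundaries: each update, a randomly chosen site copies the species
    of a random nearest neighbor.  Copying never creates new species,
    so richness can only decay, the basic drift mechanism of spatial
    neutral theory."""
    rng = random.Random(seed)
    grid = [[rng.randrange(L * L) for _ in range(L)] for _ in range(L)]
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        grid[i][j] = grid[(i + di) % L][(j + dj) % L]
    return grid

def richness(grid):
    """Number of distinct species present on the grid."""
    return len({s for row in grid for s in row})
```

Tracking `richness` over time for varying L is one way to probe the D = 2 coarsening laws discussed in the review.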

  19. A watershed scale spatially-distributed model for streambank erosion rate driven by channel curvature

    NASA Astrophysics Data System (ADS)

    McMillan, Mitchell; Hu, Zhiyong

    2017-10-01

    Streambank erosion is a major source of fluvial sediment, but few large-scale, spatially distributed models exist to quantify streambank erosion rates. We introduce a spatially distributed model for streambank erosion applicable to sinuous, single-thread channels. We argue that such a model can adequately characterize streambank erosion rates, measured at the outsides of bends over a 2-year time period, throughout a large region. The model is based on the widely used excess-velocity equation and comprises three components: a physics-based hydrodynamic model, a large-scale 1-dimensional model of average monthly discharge, and an empirical bank erodibility parameterization. The hydrodynamic submodel requires inputs of channel centerline, slope, width, depth, friction factor, and a scour factor A; the large-scale watershed submodel utilizes watershed-averaged monthly outputs of the Noah-2.8 land surface model; bank erodibility is based on tree cover and bank height as proxies for root density. The model was calibrated with erosion rates measured in sand-bed streams throughout the northern Gulf of Mexico coastal plain. The calibrated model outperforms a purely empirical model, as well as a model based only on excess velocity, illustrating the utility of combining a physics-based hydrodynamic model with an empirical bank erodibility relationship. The model could be improved by incorporating spatial variability in channel roughness and the hydrodynamic scour factor, which are here assumed constant. A reach-scale application of the model is illustrated on ∼1 km of a medium-sized, mixed forest-pasture stream, where the model identifies streambank erosion hotspots on forested and non-forested bends.
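The excess-velocity closure named above can be sketched in a few lines. The linear curvature-to-velocity relation below follows the general shape of linear bend theory (with a scour factor A), but all symbols and constants are illustrative stand-ins, not the paper's calibrated model:

```python
def bank_erosion_rate(U, depth, curvature, A, k_erod):
    """Sketch of an excess-velocity bank-erosion closure: the
    near-bank velocity excess is taken proportional to the local
    centerline curvature (scour factor A), and the retreat rate is
    the bank erodibility times the positive part of that excess.

    U         : reach-average velocity [m/s]
    depth     : flow depth [m]
    curvature : centerline curvature [1/m], positive toward this bank
    A         : dimensionless scour factor
    k_erod    : bank erodibility per unit excess velocity
    """
    u_excess = A * U * depth * curvature   # linearized velocity excess
    return k_erod * max(0.0, u_excess)
```

A straight reach (zero curvature) erodes at zero rate, and the predicted rate scales linearly with curvature, which is why bend apexes emerge as hotspots.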

  20. Persistent Homology fingerprinting of microstructural controls on larger-scale fluid flow in porous media

    NASA Astrophysics Data System (ADS)

    Moon, C.; Mitchell, S. A.; Callor, N.; Dewers, T. A.; Heath, J. E.; Yoon, H.; Conner, G. R.

    2017-12-01

    Traditional subsurface continuum multiphysics models include useful yet limiting geometrical assumptions: penny- or disc-shaped cracks, spherical or elliptical pores, bundles of capillary tubes, cubic-law fracture permeability, etc. Each physics (flow, transport, mechanics) uses constitutive models with an increasing number of fit parameters that pertain to the microporous structure of the rock but bear no inter-physics relationships or self-consistency. Recent advances in digital rock physics and pore-scale modeling link complex physics to detailed pore-level geometries, but measures for upscaling are somewhat unsatisfactory and come at a high computational cost. Continuum mechanics relies on a separation between small-scale pore fluctuations and larger-scale heterogeneity (and perhaps anisotropy), but this can break down (particularly for shales). Algebraic topology offers powerful mathematical tools for describing the local-to-global structure of shapes. Persistent homology, in particular, analyzes the dynamics of topological features and summarizes them in numeric values. It offers a roadmap both to "fingerprint" topologies of pore structure and multiscale connectedness and to link pore structure to physical behavior, thus potentially providing a means of relating the constitutive behaviors of pore structures in a self-consistent way. We present a persistent homology (PH) analysis framework for 3D image sets, including a focused ion beam-scanning electron microscopy data set of the Selma Chalk. We extract structural characteristics of sampling volumes via persistent homology and fit a statistical model using the summarized values to estimate porosity, permeability, and connectivity; Lattice Boltzmann methods for single-phase flow modeling are used to obtain the relationships. These PH methods allow prediction of geophysical properties based on geometry and connectivity in a computationally efficient way.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
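As a concrete illustration of persistent homology's bookkeeping, here is 0-dimensional persistence (connected components under a sublevel-set filtration) for a 1-D signal, using union-find and the elder rule. Real pore-structure analyses work on 3-D images with higher-dimensional features, but the pairing logic is the same:

```python
def h0_persistence(values):
    """0-dimensional persistence of the sublevel-set filtration of a
    1-D function.  Returns (birth, death) pairs for each component;
    the globally deepest minimum never dies (death = None).  Elder
    rule: when two components merge, the younger one (higher birth)
    dies."""
    n = len(values)
    order = sorted(range(n), key=values.__getitem__)
    parent = [None] * n          # None: vertex not yet in filtration
    birth = {}
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in order:
        parent[i] = i
        birth[i] = values[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                pairs.append((birth[young], values[i]))
                parent[young] = old
    # drop zero-persistence pairs; add the essential (never-dying) class
    pairs = [p for p in pairs if p[1] > p[0]]
    pairs.append((min(values), None))
    return sorted(pairs, key=lambda p: p[0])
```

Each pair's persistence (death minus birth) measures how prominent a basin is; the vector of persistences is the kind of "fingerprint" summarized into the statistical model above.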

  1. Macroscopic modeling of heat and water vapor transfer with phase change in dry snow based on an upscaling method: Influence of air convection

    NASA Astrophysics Data System (ADS)

    Calonne, N.; Geindreau, C.; Flin, F.

    2015-12-01

    At the microscopic scale, i.e., pore scale, dry snow metamorphism is mainly driven by the heat and water vapor transfer and the sublimation-deposition process at the ice-air interface. Up to now, the description of these phenomena at the macroscopic scale, i.e., snow layer scale, in the snowpack models has been proposed in a phenomenological way. Here we used an upscaling method, namely, the homogenization of multiple-scale expansions, to derive theoretically the macroscopic equivalent modeling of heat and vapor transfer through a snow layer from the physics at the pore scale. The physical phenomena under consideration are steady state air flow, heat transfer by conduction and convection, water vapor transfer by diffusion and convection, and phase change (sublimation and deposition). We derived three different macroscopic models depending on the intensity of the air flow considered at the pore scale, i.e., on the order of magnitude of the pore Reynolds number and the Péclet numbers: (A) pure diffusion, (B) diffusion and moderate convection (Darcy's law), and (C) strong convection (nonlinear flow). The formulation of the models includes the exact expression of the macroscopic properties (effective thermal conductivity, effective vapor diffusion coefficient, and intrinsic permeability) and of the macroscopic source terms of heat and vapor arising from the phase change at the pore scale. Such definitions can be used to compute macroscopic snow properties from 3-D descriptions of snow microstructures. Finally, we illustrated the precision and the robustness of the proposed macroscopic models through 2-D numerical simulations.
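The three macroscopic regimes can be caricatured as a dimensionless-number lookup. The thresholds below are order-of-magnitude placeholders for illustration, not the paper's derived bounds:

```python
def pore_numbers(u, d_pore, nu, d_vapor):
    """Pore Reynolds and Péclet numbers for interstitial air speed u,
    characteristic pore size d_pore, kinematic viscosity nu, and
    water-vapor diffusivity d_vapor."""
    return u * d_pore / nu, u * d_pore / d_vapor

def transfer_regime(reynolds, peclet):
    """Pick the macroscopic model family from pore-scale flow
    intensity (illustrative cutoffs only):
    (A) pure diffusion, (B) diffusion plus moderate Darcy convection,
    (C) strong convection with nonlinear flow."""
    if reynolds < 0.1 and peclet < 0.1:
        return "A: pure diffusion"
    if reynolds < 10:
        return "B: diffusion + moderate convection (Darcy)"
    return "C: strong convection (nonlinear flow)"
```

For typical snow (pore sizes of order 0.1-1 mm and slow interstitial air flow), both numbers are small and the pure-diffusion model applies; only vigorous air convection pushes the system toward regimes B and C.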

  2. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    NASA Astrophysics Data System (ADS)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

    The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics (MHD)) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will advance our understanding of the physics of neutral and charged gases enormously. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle. 
2) To develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions. 3) To develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes had a transforming effect on space and astrophysics. We expect that our new generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.

  3. Physical-Biological-Optics Model Development and Simulation for the Pacific Ocean and Monterey Bay, California

    DTIC Science & Technology

    2011-09-30

    ... and easy to apply in large-scale physical-biogeochemical simulations. We also collaborate with Dr. Curt Mobley at Sequoia Scientific for the second ... we are collaborating with Dr. Curtis Mobley of Sequoia Scientific on improving the link between the radiative transfer model (EcoLight) within the ...

  4. Physical Activity Motivation in Late Adolescence: Refinement of a Recent Multidimensional Model

    ERIC Educational Resources Information Center

    Martin, Andrew J.

    2010-01-01

    Recent research (Martin et al., 2006) presented a new, multidimensional approach to physical activity motivation (using the Physical Activity Motivation Scale [PAMS]) operationalized through four factors: adaptive cognition, adaptive behavior, impeding/maladaptive cognition, and maladaptive behavior. The present study extends this early research…

  5. Planck scale boundary conditions and the Higgs mass

    NASA Astrophysics Data System (ADS)

    Holthausen, Martin; Lim, Kher Sham; Lindner, Manfred

    2012-02-01

    If the LHC finds only a Higgs boson in the low mass region and no other new physics, then one should reconsider scenarios where the Standard Model with three right-handed neutrinos is valid up to the Planck scale. We assume in this spirit that the Standard Model couplings are remnants of quantum gravity, which implies certain generic boundary conditions for the Higgs quartic coupling at the Planck scale. This leads to Higgs mass predictions at the electroweak scale via renormalization group equations. We find that several physically well motivated conditions yield a range of Higgs masses from 127 to 142 GeV. We also argue that a random quartic Higgs coupling at the Planck scale favours M_H > 150 GeV, which is clearly excluded. We also discuss the prospects for differentiating the boundary conditions imposed on λ(M_Pl) at the LHC. A striking example is M_H = 127 ± 5 GeV corresponding to λ(M_Pl) = 0, which would imply that the quartic Higgs coupling at the electroweak scale is entirely radiatively generated.
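The mechanism (a Planck-scale boundary condition on λ propagated down by renormalization group equations) can be caricatured numerically. The sketch below integrates a one-loop beta function for λ with the top Yukawa and gauge couplings frozen at illustrative electroweak values, so it reproduces the qualitative behavior (λ(M_Pl) = 0 still yields a positive, radiatively generated quartic at the electroweak scale) but not the quoted 127-142 GeV numbers, for which all couplings must run:

```python
import math

def run_lambda_down(lam_planck, yt=0.94, g=0.65, gp=0.36,
                    mu_ew=173.0, mu_pl=1.2e19, steps=20000):
    """Toy RGE evolution of the Higgs quartic coupling from a
    Planck-scale boundary value down to the electroweak scale,
    using the one-loop SM beta function with yt, g, g' held fixed
    (a crude simplification; illustrative only)."""
    def beta(lam):
        return (24 * lam**2
                + lam * (12 * yt**2 - 9 * g**2 - 3 * gp**2)
                - 6 * yt**4
                + 0.375 * (2 * g**4 + (g**2 + gp**2)**2)) / (16 * math.pi**2)

    dt = (math.log(mu_pl) - math.log(mu_ew)) / steps
    lam = lam_planck
    for _ in range(steps):          # Euler steps from mu_pl down to mu_ew
        lam -= dt * beta(lam)
    return lam
```

Because -6 yt^4 makes beta(0) negative, running downward from λ(M_Pl) = 0 drives λ positive at the electroweak scale, which is the qualitative content of the "entirely radiatively generated" boundary condition.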

  6. Bridging Empirical and Physical Approaches for Landslide Monitoring and Early Warning

    NASA Technical Reports Server (NTRS)

    Kirschbaum, Dalia; Peters-Lidard, Christa; Adler, Robert; Kumar, Sujay; Harrison, Ken

    2011-01-01

    Rainfall-triggered landslides typically occur and are evaluated at local scales, using slope-stability models to calculate coincident changes in driving and resisting forces at the hillslope level in order to anticipate slope failures. Over larger areas, detailed high resolution landslide modeling is often infeasible due to difficulties in quantifying the complex interaction between rainfall infiltration and surface materials as well as the dearth of available in situ soil and rainfall estimates and accurate landslide validation data. This presentation will discuss how satellite precipitation and surface information can be applied within a landslide hazard assessment framework to improve landslide monitoring and early warning by considering two disparate approaches to landslide hazard assessment: an empirical landslide forecasting algorithm and a physical slope-stability model. The goal of this research is to advance near real-time landslide hazard assessment and early warning at larger spatial scales. This is done by employing high resolution surface and precipitation information within a probabilistic framework to provide more physically-based grounding to empirical landslide triggering thresholds. The empirical landslide forecasting tool, running in near real-time at http://trmm.nasa.gov, considers potential landslide activity at the global scale and relies on Tropical Rainfall Measuring Mission (TRMM) precipitation data and surface products to provide a near real-time picture of where landslides may be triggered. The physical approach considers how rainfall infiltration on a hillslope affects the in situ hydro-mechanical processes that may lead to slope failure. Evaluation of these empirical and physical approaches are performed within the Land Information System (LIS), a high performance land surface model processing and data assimilation system developed within the Hydrological Sciences Branch at NASA's Goddard Space Flight Center. 
LIS provides the capabilities to quantify uncertainty from model inputs and calculate probabilistic estimates for slope failures. Results indicate that remote sensing data can provide many of the spatiotemporal requirements for accurate landslide monitoring and early warning; however, higher resolution precipitation inputs will help to better identify small-scale precipitation forcings that contribute to significant landslide triggering. Future missions, such as the Global Precipitation Measurement (GPM) mission will provide more frequent and extensive estimates of precipitation at the global scale, which will serve as key inputs to significantly advance the accuracy of landslide hazard assessment, particularly over larger spatial scales.
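A standard physical slope-stability ingredient in frameworks like the one described is the infinite-slope factor of safety. The formula is textbook, but the parameter names and defaults below are generic illustrations, not values from the LIS system:

```python
import math

def factor_of_safety(c, phi_deg, beta_deg, z, m,
                     gamma=18.0, gamma_w=9.81):
    """Infinite-slope stability:
      FS = [c + (gamma - m*gamma_w) * z * cos^2(beta) * tan(phi)]
           / [gamma * z * sin(beta) * cos(beta)]
    c: soil cohesion [kPa], phi: friction angle [deg],
    beta: slope angle [deg], z: failure-plane depth [m],
    m: saturated fraction of the soil column (0 dry, 1 saturated),
    gamma: soil unit weight [kN/m^3], gamma_w: water unit weight.
    FS < 1 indicates predicted slope failure."""
    phi, beta = math.radians(phi_deg), math.radians(beta_deg)
    resist = c + (gamma - m * gamma_w) * z * math.cos(beta)**2 * math.tan(phi)
    drive = gamma * z * math.sin(beta) * math.cos(beta)
    return resist / drive
```

Rainfall infiltration enters through m: as the column saturates, pore pressure reduces the effective normal stress and FS drops, which is how precipitation inputs translate into failure probability in a probabilistic ensemble.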

  7. Advances in cleavage fracture modelling in steels: Micromechanical, numerical and multiscale aspects

    NASA Astrophysics Data System (ADS)

    Pineau, André; Tanguy, Benoît

    2010-04-01

    Brittle cleavage fracture remains one of the major concerns for structural integrity assessment. The main characteristics of this mode of failure, in relation to the stress field ahead of a crack tip, are described in the introduction. Emphasis is placed on the physical origins of the scatter and the size effect observed in ferritic steels. It is shown that cleavage fracture is controlled by physical events occurring at different scales: initiation at (sub)micrometric particles, propagation across grain boundaries (10-50 microns), and final fracture at the centimetric scale. The first two scales are detailed in this paper. The statistical origin of cleavage is described quantitatively in terms of both microstructural defects and the stress-strain heterogeneities due to crystalline plasticity at the grain scale. Existing models are applied to predicting the variation of Charpy fracture toughness with temperature.
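The statistical, weakest-link picture of cleavage scatter is commonly formalized as a Beremin-type Weibull model; here is a minimal sketch with illustrative material parameters (sigma_u, m, V0 must be calibrated for a real steel):

```python
import math

def cleavage_failure_probability(stresses, volumes, sigma_u, m, v0=1.0):
    """Weakest-link (Beremin-type) estimate of cleavage probability:
    each sampling volume dV at maximum principal stress sigma
    contributes (sigma/sigma_u)^m * dV/V0 to the Weibull sum, and
    P_f = 1 - exp(-sum).  Only tensile (sigma > 0) regions count."""
    s = sum((sig / sigma_u) ** m * (dv / v0)
            for sig, dv in zip(stresses, volumes) if sig > 0)
    return 1.0 - math.exp(-s)
```

The large exponent m concentrates the failure probability in the most highly stressed volume near the crack tip, which is what produces both the scatter and the specimen-size effect noted in the abstract.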

  8. Classical Wave Model of Quantum-Like Processing in Brain

    NASA Astrophysics Data System (ADS)

    Khrennikov, A.

    2011-01-01

    We discuss the conjecture of quantum-like (QL) processing of information in the brain. It is not based on a physical quantum brain (e.g., Penrose) with quantum physical carriers of information. In our approach the brain creates a QL representation (QLR) of information in Hilbert space and uses quantum information rules in decision making. The existence of such a QLR has been (at least preliminarily) confirmed by experimental data from cognitive psychology. The violation of the law of total probability in these experiments is an important sign of the nonclassicality of the data. In the so-called "constructive wave function approach," such data can be represented by complex amplitudes. We previously presented [1, 2] a QL model of decision making. In this paper we speculate on a possible physical realization of the QLR in the brain: a classical wave model producing the QLR. It is based on the variety of time scales in the brain. Each pair of scales (fine: the background fluctuations of the electromagnetic field; rough: the cognitive image scale) induces a QL representation. The background field plays the crucial role in the creation of "superstrong QL correlations" in the brain.

  9. Physical Model Study of Flowerpot Discharge Outlet, Western Closure Complex, New Orleans, Louisiana

    DTIC Science & Technology

    2013-05-01

    Excerpts from the report's list of figures: Flowerpot Discharge Outlet (FPDO); Flowerpot model with straight pipe immediately downstream of a 90-degree elbow; Figure 18, the 1:20.377-scale preliminary FPDO model showing a 7-ft-long PVC pipe; Figure 23, the 1:20.377-scale preliminary model with a 1.3 in. lip (the black material at the base of the pipe was a sealant).

  10. Confirmatory factorial analysis of the Children's Attraction to Physical Activity scale (CAPA).

    PubMed

    Seabra, A C; Maia, J A; Parker, M; Seabra, A; Brustad, R; Fonseca, A M

    2015-03-27

    Attraction to physical activity (PA) is an important contributor to children's intrinsic motivation to engage in games and sports. Previous studies have supported the utility of the Children's Attraction to PA scale (CAPA) (Brustad, 1996), but the validity of this measure for use in Portugal has not been established. The purpose of this study was to cross-validate the shorter version of the CAPA scale in the Portuguese cultural context. A sample of 342 children (8-10 years of age) was used. Confirmatory factor analyses using EQS software (version 6.1) tested three competing measurement models: a single-factor model, a five-factor model, and a second-order factor model. The single-factor model and the second-order model showed a poor fit to the data. A five-factor model similar to the original one revealed good fit to the data (S-B χ²(67) = 94.27, p = 0.02; NNFI = 0.93; CFI = 0.95; RMSEA = 0.04; 90% CI = 0.02-0.05). The results indicated that the CAPA scale is valid and appropriate for use in the Portuguese cultural context. The availability of a valid scale to evaluate attraction to PA at schools should provide improved opportunities for better assessment and understanding of children's involvement in PA.

  11. Determinants of quality of life in patients with fibromyalgia: A structural equation modeling approach.

    PubMed

    Lee, Jeong-Won; Lee, Kyung-Eun; Park, Dong-Jin; Kim, Seong-Ho; Nah, Seong-Su; Lee, Ji Hyun; Kim, Seong-Kyu; Lee, Yeon-Ah; Hong, Seung-Jae; Kim, Hyun-Sook; Lee, Hye-Soon; Kim, Hyoun Ah; Joung, Chung-Il; Kim, Sang-Hyon; Lee, Shin-Seok

    2017-01-01

    Health-related quality of life (HRQOL) in patients with fibromyalgia (FM) is lower than in patients with other chronic diseases and the general population. Although various factors affect HRQOL, no study has examined a structural equation model of HRQOL as an outcome variable in FM patients. The present study assessed relationships among physical function, social factors, psychological factors, and HRQOL, and the effects of these variables on HRQOL in a hypothesized model using structural equation modeling (SEM). HRQOL was measured using SF-36, and the Fibromyalgia Impact Questionnaire (FIQ) was used to assess physical dysfunction. Social and psychological statuses were assessed using the Beck Depression Inventory (BDI), the State-Trait Anxiety Inventory (STAI), the Arthritis Self-Efficacy Scale (ASES), and the Social Support Scale. SEM analysis was used to test the structural relationships of the model using the AMOS software. Of the 336 patients, 301 (89.6%) were women with an average age of 47.9±10.9 years. The SEM results supported the hypothesized structural model (χ2 = 2.336, df = 3, p = 0.506). The final model showed that Physical Component Summary (PCS) was directly related to self-efficacy and inversely related to FIQ, and that Mental Component Summary (MCS) was inversely related to FIQ, BDI, and STAI. In our model of FM patients, HRQOL was affected by physical, social, and psychological variables. In these patients, higher levels of physical function and self-efficacy can improve the PCS of HRQOL, while physical function, depression, and anxiety negatively affect the MCS of HRQOL.

  13. A Unified Multi-scale Model for Cross-Scale Evaluation and Integration of Hydrological and Biogeochemical Processes

    NASA Astrophysics Data System (ADS)

    Liu, C.; Yang, X.; Bailey, V. L.; Bond-Lamberty, B. P.; Hinkle, C.

    2013-12-01

    Mathematical representations of hydrological and biogeochemical processes in soil, plant, aquatic, and atmospheric systems vary with scale. Process-rich models are typically used to describe hydrological and biogeochemical processes at the pore and small scales, while empirical, correlation-based approaches are often used at the watershed and regional scales. A major challenge for multi-scale modeling is that water flow, biogeochemical processes, and reactive transport are described using different physical laws and/or expressions at the different scales. For example, flow is governed by the Navier-Stokes equations at the pore scale in soils, by Darcy's law in soil columns and aquifers, and by the Navier-Stokes equations again in open water bodies (ponds, lakes, rivers) and the atmospheric surface layer. This research explores whether the physical laws at the different scales and in different physical domains can be unified into a unified multi-scale model (UMSM) to systematically investigate the cross-scale, cross-domain behavior of fundamental processes. This presentation will discuss our research on the concept, mathematical equations, and numerical execution of the UMSM. Three-dimensional, multi-scale hydrological processes at the Disney Wilderness Preservation (DWP) site, Florida, will be used as an example demonstrating the application of the UMSM. In this research, the UMSM was used to simulate hydrological processes in rooting zones at the pore and small scales, including water migration in soils under saturated and unsaturated conditions, root-induced hydrological redistribution, and the role of rooting-zone biogeochemical properties (e.g., root exudates and microbial mucilage) in water storage and wetting/draining. The small-scale simulation results were used to estimate effective water retention properties in soil columns, which were superimposed on the bulk soil water retention properties at the DWP site. The UMSM, parameterized from the smaller-scale simulations, was then used to simulate coupled flow and moisture migration in soils in saturated and unsaturated zones, surface and groundwater exchange, and surface water flow in streams and lakes at the DWP site under dynamic precipitation conditions. Laboratory measurements of soil hydrological and biogeochemical properties are used to parameterize the UMSM at the small scales, and field measurements are used to evaluate it.
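
    The pore-to-column transition described above hinges on switching between flow laws. As a minimal illustration of the Darcy-law regime, the sketch below computes a volumetric flux from a pressure gradient; the permeability, viscosity, and gradient values are hypothetical round numbers, not parameters from the DWP study.

```python
def darcy_flux(permeability, viscosity, pressure_gradient):
    """Darcy's law: volumetric flux q = -(k / mu) * dP/dx, in m/s."""
    return -(permeability / viscosity) * pressure_gradient

# Hypothetical values: k = 1e-12 m^2 (a permeable soil), mu = 1e-3 Pa*s
# (water), and pressure dropping 1 kPa per metre along the flow direction.
q = darcy_flux(1e-12, 1e-3, -1000.0)  # -> 1e-6 m/s
```

    At the pore scale the same problem would require resolving the Navier-Stokes equations in the pore geometry; Darcy's law replaces that detail with the single effective parameter k.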

  14. An investigation of viscous-mediated coupling of crickets cercal hair sensors using a scaled up model

    NASA Astrophysics Data System (ADS)

    Alagirisamy, Pasupathy S.; Jeronimidis, George; Le Moàl, Valerie

    2009-08-01

    Viscous coupling between the filiform hair sensors of insects and arthropods has gained considerable interest recently. Studying viscous coupling between hairs at the micro scale with current technologies is proving difficult, and hence the hair system has been physically scaled up by a factor of 100. For instance, a typical filiform hair of 10 μm diameter and 1000 μm length has been scaled up to 1 mm in diameter and 100 mm in length. At the base, a rotational spring with a bonded strain gauge provides the restoring force and measures the angle of deflection of the model hair. These model hairs were used in a glycerol-filled aquarium in which the flow velocity and fluid properties were chosen to impose Reynolds numbers compatible with the biological system. Experiments were conducted by varying the separation distance and relative position between movable model hairs of different lengths, and between movable and rigid hairs of different lengths, in steady flow at Reynolds numbers of 0.02 and 0.05. In this study, the viscous coupling between hairs has been characterised. The effect of the distance from physical boundaries, such as the tank walls, has also been quantified (wall effect). The purpose of this investigation is to provide relevant information for the design of MEMS systems mimicking the cricket's hair array.
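
    The Reynolds-number matching behind the 100x scale-up can be sketched numerically. The glycerol property values below are assumed nominal room-temperature figures, not numbers reported in the paper.

```python
def reynolds(rho, velocity, length, mu):
    """Reynolds number Re = rho * U * L / mu (dimensionless)."""
    return rho * velocity * length / mu

def velocity_for_re(re, rho, length, mu):
    """Flow speed that imposes a target Re on a hair of diameter `length`."""
    return re * mu / (rho * length)

rho_gly = 1260.0  # kg/m^3, assumed nominal density of glycerol
mu_gly = 1.4      # Pa*s, assumed nominal viscosity of glycerol
d_model = 1e-3    # 1 mm model hair diameter (100x the 10 um real hair)

# Speed needed in glycerol to reproduce the lower target Re of 0.02:
u = velocity_for_re(0.02, rho_gly, d_model, mu_gly)  # ~0.022 m/s
```

    Because Re is matched, the balance of viscous and inertial effects around the 1 mm model hair is the same as around the real 10 μm hair, which is what makes the scaled-up coupling measurements transferable.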

  15. Predicting chromatin architecture from models of polymer physics.

    PubMed

    Bianco, Simona; Chiariello, Andrea M; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2017-03-01

    We review the picture of chromatin large-scale 3D organization emerging from the analysis of Hi-C data and polymer modeling. In higher mammals, Hi-C contact maps reveal a complex higher-order organization, extending from the sub-Mb to chromosomal scales, hierarchically folded in a structure of domains-within-domains (metaTADs). The domain folding hierarchy is partially conserved throughout differentiation, and deeply correlated to epigenomic features. Rearrangements in the metaTAD topology relate to gene expression modifications: in particular, in neuronal differentiation models, topologically associated domains (TADs) tend to have coherent expression changes within architecturally conserved metaTAD niches. To identify the nature of architectural domains and their molecular determinants within a principled approach, we discuss models based on polymer physics. We show that basic concepts of interacting polymer physics explain chromatin spatial organization across chromosomal scales and cell types. The 3D structure of genomic loci can be derived with high accuracy and its molecular determinants identified by crossing information with epigenomic databases. In particular, we illustrate the case of the Sox9 locus, linked to human congenital disorders. The model's in-silico predictions of the effects of genomic rearrangements are confirmed by available 5C data. This can help establish new diagnostic tools for diseases linked to chromatin mis-folding, such as congenital disorders and cancer.

  16. Advanced Computing Tools and Models for Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryne, Robert; Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  17. Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

    NASA Astrophysics Data System (ADS)

    O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

    2015-12-01

    Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.

  18. Prototype of an Integrated Hurricane Information System for Research: Description and Illustration of its Use in Evaluating WRF Model Simulations

    NASA Astrophysics Data System (ADS)

    Hristova-Veleva, S.; Chao, Y.; Vane, D.; Lambrigtsen, B.; Li, P. P.; Knosp, B.; Vu, Q. A.; Su, H.; Dang, V.; Fovell, R.; Tanelli, S.; Garay, M.; Willis, J.; Poulsen, W.; Fishbein, E.; Ao, C. O.; Vazquez, J.; Park, K. J.; Callahan, P.; Marcus, S.; Haddad, Z.; Fetzer, E.; Kahn, R.

    2007-12-01

    In spite of recent improvements in hurricane track forecast accuracy, there are still many unanswered questions about the physical processes that determine hurricane genesis, intensity, track, and impact on the large-scale environment. Furthermore, a significant amount of work remains to be done in validating hurricane forecast models, understanding their sensitivities, and improving their parameterizations. None of this can be accomplished without a comprehensive set of multiparameter observations relevant to both the large-scale and the storm-scale processes in the atmosphere and in the ocean. To address this need, we have developed a prototype of a comprehensive hurricane information system of high-resolution satellite, airborne, and in-situ observations and model outputs pertaining to: i) the thermodynamic and microphysical structure of the storms; ii) the air-sea interaction processes; iii) the larger-scale environment as depicted by the SST, the ocean heat content, and the aerosol loading of the environment. Our goal was to create a one-stop resource that provides researchers with an extensive set of observed hurricane data, and their graphical representation, together with large-scale and convection-resolving model output, all organized to make it easy to determine when coincident observations from multiple instruments are available. Analysis tools will be developed in the next step. The analysis tools will be used to determine spatial, temporal, and multiparameter covariances that are needed to evaluate model performance, provide information for data assimilation, and characterize and compare observations from different platforms. We envision that the developed hurricane information system will help in the validation of hurricane models, in the systematic understanding of their sensitivities, and in the improvement of the physical parameterizations employed by the models. Furthermore, it will help in studying the physical processes that affect hurricane development and impact on the large-scale environment. This talk will describe the developed prototype of the hurricane information system. We will also use a set of WRF hurricane simulations, comparing simulated to observed structures, to illustrate how the information system can be used to discriminate between simulations that employ different physical parameterizations. The work described here was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

  19. Integrating Unified Gravity Wave Physics into the NOAA Next Generation Global Prediction System

    NASA Astrophysics Data System (ADS)

    Alpert, J. C.; Yudin, V.; Fuller-Rowell, T. J.; Akmaev, R. A.

    2017-12-01

    The Unified Gravity Wave Physics (UGWP) project for the Next Generation Global Prediction System (NGGPS) is a NOAA collaborative effort between the National Centers for Environmental Prediction (NCEP) Environmental Modeling Center (EMC) and the University of Colorado Cooperative Institute for Research in Environmental Sciences (CU-CIRES) to support upgrades and improvements of GW dynamics (resolved scales) and physics (sub-grid scales) in the NOAA Environmental Modeling System (NEMS)†. As envisioned, the global climate, weather, and space-weather models of NEMS will substantially improve their predictions and forecasts with the resolution-sensitive (scale-aware) formulations planned under the UGWP framework for both orographic and non-stationary waves. In particular, the planned improvements for the Global Forecast System (GFS) model of NEMS are: calibration of model physics for higher vertical and horizontal resolution and an extended vertical range of simulations; upgrades to GW schemes, including the turbulent heating and eddy mixing due to wave dissipation and breaking; and representation of the internally generated QBO. The main priority of the UGWP project is a unified parameterization of orographic and non-orographic GW effects, including momentum deposition in the middle atmosphere and turbulent heating and eddies due to wave dissipation and breaking. The latter effects are not currently represented in NOAA atmosphere models. The team has tested and evaluated four candidate GW solvers, integrating the selected GW schemes into the NGGPS model. Our current and planned work is to implement the UGWP schemes in the first available GFS/FV3 (open FV3) configuration, including the adapted GFDL modification for sub-grid orography in GFS. Initial global model results will be shown for the operational and research GFS configurations with spectral and FV3 dynamical cores. †http://www.emc.ncep.noaa.gov/index.php?branch=NEMS

  20. Inner space/outer space - The interface between cosmology and particle physics

    NASA Astrophysics Data System (ADS)

    Kolb, Edward W.; Turner, Michael S.; Lindley, David; Olive, Keith; Seckel, David

    A collection of papers covering the synthesis between particle physics and cosmology is presented. The general topics addressed include: standard models of particle physics and cosmology; microwave background radiation; origin and evolution of large-scale structure; inflation; massive magnetic monopoles; supersymmetry, supergravity, and quantum gravity; cosmological constraints on particle physics; Kaluza-Klein cosmology; and future directions and connections in particle physics and cosmology.

  1. High Fidelity Modeling of Turbulent Mixing and Chemical Kinetics Interactions in a Post-Detonation Flow Field

    NASA Astrophysics Data System (ADS)

    Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael

    2015-06-01

    Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first-principles models and solving large systems of equations on highly resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing, and shock interactions are captured across the spectrum of relevant time and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in the dispersion, mixing, ignition, and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establish a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.

  2. A crystal plasticity model for slip in hexagonal close packed metals based on discrete dislocation simulations

    NASA Astrophysics Data System (ADS)

    Messner, Mark C.; Rhee, Moono; Arsenlis, Athanasios; Barton, Nathan R.

    2017-06-01

    This work develops a method for calibrating a crystal plasticity model to the results of discrete dislocation (DD) simulations. The crystal model explicitly represents junction formation and annihilation mechanisms and applies these mechanisms to describe hardening in hexagonal close packed metals. The model treats these dislocation mechanisms separately from elastic interactions among populations of dislocations, which the model represents through a conventional strength-interaction matrix. This split between elastic interactions and junction formation mechanisms more accurately reproduces the DD data and results in a multi-scale model that better represents the lower-scale physics. The fitting procedure employs concepts of machine learning (feature selection by regularized regression and cross-validation) to develop a robust, physically accurate crystal model. The work also presents a method for ensuring the final, calibrated crystal model respects the physical symmetries of the crystal system. Calibrating the crystal model requires fitting two linear operators: one describing elastic dislocation interactions and another describing junction formation and annihilation dislocation reactions. The structure of these operators in the final, calibrated model reflects the crystal symmetry and slip system geometry of the DD simulations.
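
    The feature-selection step can be illustrated with a minimal sketch: a hand-rolled coordinate-descent LASSO on synthetic data (this is a generic illustration of regularized regression, not the authors' fitting pipeline or their DD data). The L1 penalty drives the coefficient of an uninformative feature exactly to zero, which is how weakly supported interaction-matrix terms would be pruned.

```python
import random

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate-descent LASSO: min (1/2n)||y - Xw||^2 + lam * ||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Residual with feature j excluded from the current fit.
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            # Soft-thresholding update: small correlations collapse to zero.
            if rho > lam:
                w[j] = (rho - lam) / z
            elif rho < -lam:
                w[j] = (rho + lam) / z
            else:
                w[j] = 0.0
    return w

random.seed(0)
# Synthetic data: feature 0 is informative, feature 1 is pure noise.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
y = [2.0 * x0 + random.gauss(0, 0.1) for x0, _ in X]
w = lasso_cd(X, y, lam=0.5)  # w[1] is shrunk exactly to zero
```

    Cross-validation would then choose `lam` so that the retained terms generalize across held-out simulations rather than fitting their noise.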

  3. Towards Characterization, Modeling, and Uncertainty Quantification in Multi-scale Mechanics of Organic-rich Shales

    NASA Astrophysics Data System (ADS)

    Abedi, S.; Mashhadian, M.; Noshadravan, A.

    2015-12-01

    Increasing the efficiency and sustainability of hydrocarbon recovery from organic-rich shales requires a fundamental understanding of the chemomechanical properties of these materials. This understanding is manifested in the form of physics-based predictive models capable of capturing the highly heterogeneous, multi-scale structure of organic-rich shales. In this work we present a framework of experimental characterization, micromechanical modeling, and uncertainty quantification that spans from the nanoscale to the macroscale. Applying experiments such as coupled grid nano-indentation and energy-dispersive X-ray spectroscopy, together with micromechanical modeling that attributes the role of organic maturity to the texture of the material, allows us to identify unique clay mechanical properties across different samples that are independent of the maturity of the shale formation and the total organic content. The results can then be used to inform a physically based multiscale model for organic-rich shales consisting of three levels, spanning from the scale of the elementary building blocks of organic-rich shales (e.g., clay minerals in clay-dominated formations) to the scale of the macroscopic inorganic/organic, hard/soft inclusion composite. Although this approach is powerful in capturing the effective properties of organic-rich shale in an average sense, it does not account for uncertainty in compositional and mechanical model parameters. Thus, we take this model one step further by systematically incorporating the main sources of uncertainty in modeling the multiscale behavior of organic-rich shales. In particular, we account for the uncertainty in the main model parameters at different scales, such as porosity, elastic properties, and mineral mass percentages. To that end, we use the Maximum Entropy Principle and random matrix theory to construct probabilistic descriptions of model inputs based on available information. Monte Carlo simulation is then carried out to propagate the uncertainty and construct probabilistic descriptions of properties at multiple length scales. The combination of experimental characterization and stochastic multi-scale modeling presented in this work improves the robustness of predictions of essential subsurface parameters at the engineering scale.
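
    The Monte Carlo propagation step can be sketched as follows. The effective-modulus formula and the input distributions here are toy assumptions for illustration (a Voigt-style upper bound with empty pores), not the paper's micromechanical model or its calibrated inputs.

```python
import random
import statistics

def toy_effective_modulus(porosity, k_mineral):
    """Toy Voigt (upper-bound) mixture of mineral and empty pore space."""
    return (1.0 - porosity) * k_mineral

random.seed(1)
samples = []
for _ in range(10_000):
    phi = random.uniform(0.05, 0.25)  # assumed porosity range
    k = random.gauss(60.0, 5.0)       # assumed mineral bulk modulus, GPa
    samples.append(toy_effective_modulus(phi, k))

mean_k = statistics.mean(samples)  # ~51 GPa for these assumed inputs
std_k = statistics.stdev(samples)  # spread induced by the input uncertainty
```

    Each draw plays the role of one realization of the multiscale model; the resulting sample statistics are the probabilistic description of the macroscale property.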

  4. Multi-scale image segmentation and numerical modeling in carbonate rocks

    NASA Astrophysics Data System (ADS)

    Alves, G. C.; Vanorio, T.

    2016-12-01

    Numerical methods based on computational simulations can be an important tool for estimating the physical properties of rocks. They can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield results that conflict with those of the physical laboratory. This problem is exacerbated in carbonate rocks because of their heterogeneity at all scales. We developed a multi-scale approach that performs segmentation of rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and elastic properties of a group of carbonate samples with varying micrite content. The samples were then imaged by Scanning Electron Microscope (SEM) and by optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave equation. Our results show that a multi-scale approach can efficiently account for micro-porosity in tight, micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by a larger grain/micrite ratio, results show that SEM-scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be better suited for numerical simulations.
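
    The segmentation-to-porosity step can be illustrated with a minimal grayscale threshold on a toy image (a hypothetical 4x4 pixel array, not actual SEM data); each of the three segmentation techniques in the study ultimately produces a pore/grain map like the one computed here.

```python
def segment(image, threshold):
    """Binary segmentation: pixels darker than `threshold` become pores (1)."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

def porosity(binary):
    """Pore fraction of a binary pore/grain map."""
    flat = [px for row in binary for px in row]
    return sum(flat) / len(flat)

# Toy 4x4 grayscale "image": low values are pores, high values are grains.
img = [[ 30, 200, 210,  40],
       [205,  35,  50, 215],
       [ 25, 230, 240, 250],
       [245,  20, 210, 200]]

phi = porosity(segment(img, threshold=128))  # -> 0.375 (6 of 16 pixels)
```

    The binary map is what feeds the wave-equation simulation; sub-resolution (e.g. intragranular) porosity that the imaging scale cannot capture is exactly what the multi-scale approach compensates for.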

  5. Testing a self-determination theory model of children’s physical activity motivation: a cross-sectional study

    PubMed Central

    2013-01-01

    Background Understanding children’s physical activity motivation, its antecedents and associations with behavior is important and can be advanced by using self-determination theory. However, research among youth is largely restricted to adolescents and studies of motivation within certain contexts (e.g., physical education). There are no measures of self-determination theory constructs (physical activity motivation or psychological need satisfaction) for use among children and no previous studies have tested a self-determination theory-based model of children’s physical activity motivation. The purpose of this study was to test the reliability and validity of scores derived from scales adapted to measure self-determination theory constructs among children and test a motivational model predicting accelerometer-derived physical activity. Methods Cross-sectional data from 462 children aged 7 to 11 years from 20 primary schools in Bristol, UK were analysed. Confirmatory factor analysis was used to examine the construct validity of adapted behavioral regulation and psychological need satisfaction scales. Structural equation modelling was used to test cross-sectional associations between psychological need satisfaction, motivation types and physical activity assessed by accelerometer. Results The construct validity and reliability of the motivation and psychological need satisfaction measures were supported. Structural equation modelling provided evidence for a motivational model in which psychological need satisfaction was positively associated with intrinsic and identified motivation types and intrinsic motivation was positively associated with children’s minutes in moderate-to-vigorous physical activity. Conclusions The study provides evidence for the psychometric properties of measures of motivation aligned with self-determination theory among children. 
Children’s motivation that is based on enjoyment and inherent satisfaction of physical activity is associated with their objectively-assessed physical activity and such motivation is positively associated with perceptions of psychological need satisfaction. These psychological factors represent potential malleable targets for interventions to increase children’s physical activity. PMID:24067078

  6. Validation of two scales for measuring participation and perceived stigma in Chinese community-based rehabilitation programs.

    PubMed

    Chung, Eva Yin-Han; Lam, Gigi

    2018-05-29

    The World Health Organization has asserted the importance of enhancing participation of people with disabilities within the International Classification of Functioning, Disability and Health framework. Participation is regarded as a vital outcome in community-based rehabilitation. The actualization of the right to participate is limited by social stigma and discrimination. To date, there is no validated instrument for use in Chinese communities to measure participation restriction or self-perceived stigma. This study aimed to translate and validate the Participation Scale and the Explanatory Model Interview Catalogue (EMIC) Stigma Scale for use in Chinese communities with people with physical disabilities. The Chinese versions of the Participation Scale and the EMIC stigma scale were administered to 264 adults with physical disabilities. The two scales were examined separately. The reliability analysis was studied in conjunction with the construct validity. Reliability analysis was conducted to assess the internal consistency and item-total correlation. Exploratory factor analysis was conducted to investigate the latent patterns of relationships among variables. A Rasch model analysis was conducted to test the dimensionality, internal validity, item hierarchy, and scoring category structure of the two scales. Both the Participation Scale and the EMIC stigma scale were confirmed to have good internal consistency and high item-total correlation. Exploratory factor analysis revealed the factor structure of the two scales, which demonstrated the fitting of a pattern of variables within the studied construct. The Participation Scale was found to be multidimensional, whereas the EMIC stigma scale was confirmed to be unidimensional. The item hierarchies of the Participation Scale and the EMIC stigma scale were discussed and were regarded as compatible with the cultural characteristics of Chinese communities. 
The Chinese versions of the Participation Scale and the EMIC stigma scale were thoroughly tested in this study to demonstrate their robustness and feasibility in measuring the participation restriction and perceived stigma of people with physical disabilities in Chinese communities. This is crucial as it provides valid measurements to enable comprehensive understanding and assessment of the participation and stigma among people with physical disabilities in Chinese communities.

  7. Using Rasch Analysis to Examine the Dimensionality Structure and Differential Item Functioning of the Arabic Version of the Perceived Physical Ability Scale for Children

    ERIC Educational Resources Information Center

    Abd-El-Fattah, Sabry M.; AL-Sinani, Yousra; El Shourbagi, Sahar; Fakhroo, Hessa A.

    2014-01-01

    This study uses the Rasch model technique to examine the dimensionality structure and differential item functioning of the Arabic version of the Perceived Physical Ability Scale for Children (PPASC). A sample of 220 Omani fourth graders (120 males and 100 females) responded to an Arabic translated version of the PPASC. Data on students'…

  8. Charge frustration in complex fluids and in electronic systems

    NASA Astrophysics Data System (ADS)

    Carraro, Carlo

    1997-02-01

    The idea of charge frustration is applied to describe the properties of such diverse physical systems as oil-water-surfactant mixtures and metal-ammonia solutions. The minimalist charge-frustrated model possesses one energy scale and two length scales. For oil-water-surfactant mixtures, these parameters have been determined starting from the microscopic properties of the physical systems under study. Thus, microscopic properties are successfully related to the observed mesoscopic structure.

  9. Physical Biology of Axonal Damage.

    PubMed

    de Rooij, Rijk; Kuhl, Ellen

    2018-01-01

    Excessive physical impacts to the head have direct implications on the structural integrity at the axonal level. Increasing evidence suggests that tau, an intrinsically disordered protein that stabilizes axonal microtubules, plays a critical role in the physical biology of axonal injury. However, the precise mechanisms of axonal damage remain incompletely understood. Here we propose a biophysical model of the axon to correlate the dynamic behavior of individual tau proteins under external physical forces to the evolution of axonal damage. To propagate damage across the scales, we adopt a consistent three-step strategy: First, we characterize the axonal response to external stretches and stretch rates for varying tau crosslink bond strengths using a discrete axonal damage model. Then, for each combination of stretch rates and bond strengths, we average the axonal force-stretch response of n = 10 discrete simulations, from which we derive and calibrate a homogenized constitutive model. Finally, we embed this homogenized model into a continuum axonal damage model of [1-d]-type in which d is a scalar damage parameter that is driven by the axonal stretch and stretch rate. We demonstrate that axonal damage emerges naturally from the interplay of physical forces and biological crosslinking. Our study reveals an emergent feature of the crosslink dynamics: With increasing loading rate, the axonal failure stretch increases, but axonal damage evolves earlier in time. For a wide range of physical stretch rates, from 0.1 to 10 /s, and biological bond strengths, from 1 to 100 pN, our model predicts a relatively narrow window of critical damage stretch thresholds, from 1.01 to 1.30, which agrees well with experimental observations. Our biophysical damage model can help explain the development and progression of axonal damage across the scales and will provide useful guidelines to identify critical damage level thresholds in response to excessive physical forces.
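    The [1-d]-type continuum model described above can be sketched as a single scalar damage variable scaling the undamaged elastic response; the onset stretch, failure stretch, and rate sensitivity below are illustrative placeholders, not the calibrated values from the study:

```python
def damage(stretch, stretch_rate, lam_onset=1.1, lam_fail=1.5, rate_sens=0.02):
    """Toy scalar damage d in [0, 1], driven by axonal stretch and stretch rate.
    lam_onset, lam_fail, and rate_sens are illustrative constants."""
    d = (stretch - lam_onset) / (lam_fail - lam_onset) + rate_sens * stretch_rate
    return min(max(d, 0.0), 1.0)

def axonal_force(stretch, stretch_rate, stiffness=1.0):
    """[1 - d]-type response: the undamaged elastic force scaled by (1 - d)."""
    d = damage(stretch, stretch_rate)
    return (1.0 - d) * stiffness * (stretch - 1.0)
```

The rate term reproduces the qualitative finding quoted above: at the same stretch, a higher loading rate yields more damage, i.e. damage evolves earlier in time.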

  10. Scales and scaling in turbulent ocean sciences; physics-biology coupling

    NASA Astrophysics Data System (ADS)

    Schmitt, Francois

    2015-04-01

    Geophysical fields possess huge fluctuations over many spatial and temporal scales. In the ocean, such property at smaller scales is closely linked to marine turbulence. The velocity field is varying from large scales to the Kolmogorov scale (mm) and scalar fields from large scales to the Batchelor scale, which is often much smaller. As a consequence, it is not always simple to determine at which scale a process should be considered. The scale question is hence fundamental in marine sciences, especially when dealing with physics-biology coupling. For example, marine dynamical models have typically a grid size of hundred meters or more, which is more than 10^5 times larger than the smallest turbulence scales (Kolmogorov scale). Such a scale is fine for the dynamics of a whale (around 100 m), but for a fish larva (1 cm) or a copepod (1 mm) a description at smaller scales is needed, due to the nonlinear nature of turbulence. The same holds for biogeochemical fields such as passive and active tracers (oxygen, fluorescence, nutrients, pH, turbidity, temperature, salinity...). In this framework, we will discuss the scale problem in turbulence modeling in the ocean, and the relation of the Kolmogorov and Batchelor scales of turbulence in the ocean to the sizes of marine animals. We will also consider scaling laws for organism-particle Reynolds numbers (from whales to bacteria), and possible scaling laws for organisms' accelerations.
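    Both turbulence scales mentioned above follow directly from the kinematic viscosity, the turbulent dissipation rate, and the scalar diffusivity. A quick sketch (the dissipation rate here is only an assumed open-ocean value; real values vary over orders of magnitude):

```python
def kolmogorov_scale(nu, eps):
    """Kolmogorov length scale eta = (nu^3 / eps)**0.25, in metres."""
    return (nu**3 / eps) ** 0.25

def batchelor_scale(nu, eps, D):
    """Batchelor scale lambda_B = (nu * D^2 / eps)**0.25, i.e. eta / sqrt(Sc)."""
    return (nu * D**2 / eps) ** 0.25

nu = 1e-6    # kinematic viscosity of seawater, m^2/s
eps = 1e-8   # assumed turbulent dissipation rate, W/kg
D = 1e-9     # molecular diffusivity of a scalar (e.g. salt), m^2/s

eta = kolmogorov_scale(nu, eps)        # millimetre scale
lam_b = batchelor_scale(nu, eps, D)    # smaller, since Sc = nu/D > 1
```

With these inputs eta comes out at a few millimetres, consistent with the "(mm)" quoted in the abstract, while the Batchelor scale is about 30 times smaller.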

  11. Modeling of metal thin film growth: Linking angstrom-scale molecular dynamics results to micron-scale film topographies

    NASA Astrophysics Data System (ADS)

    Hansen, U.; Rodgers, S.; Jensen, K. F.

    2000-07-01

    A general method for modeling ionized physical vapor deposition is presented. As an example, the method is applied to growth of an aluminum film in the presence of an ionized argon flux. Molecular dynamics techniques are used to examine the surface adsorption, reflection, and sputter reactions taking place during ionized physical vapor deposition. We predict their relative probabilities and discuss their dependence on energy and incident angle. Subsequently, we combine the information obtained from molecular dynamics with a line-of-sight transport model in a two-dimensional feature, incorporating all effects of reemission and resputtering. This provides a complete growth rate model that allows inclusion of energy- and angular-dependent reaction rates. Finally, a level-set approach is used to describe the morphology of the growing film. We thus arrive at a computationally highly efficient and accurate scheme to model the growth of thin films. We demonstrate the capabilities of the model by predicting the major differences in Al film topography between conventional and ionized sputter deposition techniques, studying thin film growth under ionized physical vapor deposition conditions with different Ar fluxes.

  12. Multi-scale and multi-domain computational astrophysics.

    PubMed

    van Elteren, Arjen; Pelupessy, Inti; Zwart, Simon Portegies

    2014-08-06

    Astronomical phenomena are governed by processes on all spatial and temporal scales, ranging from days to the age of the Universe (13.8 Gyr) as well as from kilometre size up to the size of the Universe. This enormous range in scales is contrived, but as long as there is a physical connection between the smallest and largest scales it is important to be able to resolve them all, and for the study of many astronomical phenomena this governance is present. Although covering all these scales is a challenge for numerical modellers, the most challenging aspect is the equally broad and complex range in physics, and the way in which these processes propagate through all scales. In our recent effort to cover all scales and all relevant physical processes on these scales, we have designed the Astrophysics Multipurpose Software Environment (AMUSE). AMUSE is a Python-based framework with production quality community codes and provides a specialized environment to connect this plethora of solvers to a homogeneous problem-solving environment. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  13. Scale models: A proven cost-effective tool for outage planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, R.; Segroves, R.

    1995-03-01

    As generation costs for operating nuclear stations have risen, more nuclear utilities have initiated efforts to improve cost effectiveness. Nuclear plant owners are also being challenged with lower radiation exposure limits and new revised radiation protection related regulations (10 CFR 20), which places further stress on their budgets. As source term reduction activities continue to lower radiation fields, reducing the amount of time spent in radiation fields becomes one of the most cost-effective ways of reducing radiation exposure. An effective approach for minimizing time spent in radiation areas is to use a physical scale model for worker orientation planning and monitoring maintenance, modifications, and outage activities. To meet the challenge of continued reduction in the annual cumulative radiation exposures, new cost-effective tools are required. One field-tested and proven tool is the physical scale model.

  14. A tilted cold dark matter cosmological scenario

    NASA Technical Reports Server (NTRS)

    Cen, Renyue; Gnedin, Nickolay Y.; Kofman, Lev A.; Ostriker, Jeremiah P.

    1992-01-01

    A new cosmological scenario based on CDM but with a power spectrum index of about 0.7-0.8 is suggested. This model is predicted by various inflationary models with no fine tuning. This tilted CDM model, if normalized to COBE, alleviates many problems of the standard CDM model related to both small-scale and large-scale power. A physical bias of galaxies over dark matter of about two is required to fit spatial observations.

  15. Methods of testing parameterizations: Vertical ocean mixing

    NASA Technical Reports Server (NTRS)

    Tziperman, Eli

    1992-01-01

    The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for the vertical mixing in the ocean is of scales a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We try to examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. 
We then discuss the role of the vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.
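    In its simplest form, a vertical mixing parameterization of the kind discussed above reduces to a diffusion equation for a tracer column, dT/dt = d/dz(K dT/dz), with the eddy diffusivity K standing in for all unresolved small-scale processes. A minimal explicit sketch (the grid spacing, diffusivity, and zero-flux boundaries are illustrative choices, not a production scheme):

```python
def mix_column(T, K, dz, dt, nsteps):
    """Explicit update of dT/dt = d/dz(K dT/dz) on a uniform column with
    zero-flux boundaries; stability requires K*dt/dz**2 <= 0.5."""
    T = list(T)
    for _ in range(nsteps):
        # flux[i] is the diffusive flux between cells i and i+1
        flux = [K * (T[i + 1] - T[i]) / dz for i in range(len(T) - 1)]
        new = T[:]
        new[0] = T[0] + dt * flux[0] / dz
        new[-1] = T[-1] - dt * flux[-1] / dz
        for i in range(1, len(T) - 1):
            new[i] = T[i] + dt * (flux[i] - flux[i - 1]) / dz
        T = new
    return T

# A four-level column: cool water in the lower cells, warm above (deg C)
T0 = [10.0, 10.0, 20.0, 20.0]
mixed = mix_column(T0, K=0.01, dz=1.0, dt=10.0, nsteps=200)
```

The zero-flux boundaries conserve column heat content exactly, which is one of the properties a mixing parameterization must preserve; the real difficulty, as the abstract stresses, is choosing K (constant, Richardson-number dependent, or from a turbulence closure).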

  16. Bridging the physical scales in evolutionary biology: From protein sequence space to fitness of organisms and populations

    PubMed Central

    Bershtein, Shimon; Serohijos, Adrian W.R.; Shakhnovich, Eugene I.

    2016-01-01

    Bridging the gap between the molecular properties of proteins and organismal/population fitness is essential for understanding evolutionary processes. This task requires the integration of the several physical scales of biological organization, each defined by a distinct set of mechanisms and constraints, into a single unifying model. The molecular scale is dominated by the constraints imposed by the physico-chemical properties of proteins and their substrates, which give rise to trade-offs and epistatic (non-additive) effects of mutations. At the systems scale, biological networks modulate protein expression and can either buffer or enhance the fitness effects of mutations. The population scale is influenced by the mutational input, selection regimes, and stochastic changes affecting the size and structure of populations, which eventually determine the evolutionary fate of mutations. Here, we summarize the recent advances in theory, computer simulations, and experiments that advance our understanding of the links between various physical scales in biology. PMID:27810574

  17. Bridging the physical scales in evolutionary biology: from protein sequence space to fitness of organisms and populations.

    PubMed

    Bershtein, Shimon; Serohijos, Adrian Wr; Shakhnovich, Eugene I

    2017-02-01

    Bridging the gap between the molecular properties of proteins and organismal/population fitness is essential for understanding evolutionary processes. This task requires the integration of the several physical scales of biological organization, each defined by a distinct set of mechanisms and constraints, into a single unifying model. The molecular scale is dominated by the constraints imposed by the physico-chemical properties of proteins and their substrates, which give rise to trade-offs and epistatic (non-additive) effects of mutations. At the systems scale, biological networks modulate protein expression and can either buffer or enhance the fitness effects of mutations. The population scale is influenced by the mutational input, selection regimes, and stochastic changes affecting the size and structure of populations, which eventually determine the evolutionary fate of mutations. Here, we summarize the recent advances in theory, computer simulations, and experiments that advance our understanding of the links between various physical scales in biology. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Forest gradient response in Sierran landscapes: the physical template

    USGS Publications Warehouse

    Urban, Dean L.; Miller, Carol; Halpin, Patrick N.; Stephenson, Nathan L.

    2000-01-01

    Vegetation pattern on landscapes is the manifestation of physical gradients, biotic response to these gradients, and disturbances. Here we focus on the physical template as it governs the distribution of mixed-conifer forests in California's Sierra Nevada. We extended a forest simulation model to examine montane environmental gradients, emphasizing factors affecting the water balance in these summer-dry landscapes. The model simulates the soil moisture regime in terms of the interaction of water supply and demand: supply depends on precipitation and water storage, while evapotranspirational demand varies with solar radiation and temperature. The forest cover itself can affect the water balance via canopy interception and evapotranspiration. We simulated Sierran forests as slope facets, defined as gridded stands of homogeneous topographic exposure, and verified simulated gradient response against sample quadrats distributed across Sequoia National Park. We then performed a modified sensitivity analysis of abiotic factors governing the physical gradient. Importantly, the model's sensitivity to temperature, precipitation, and soil depth varies considerably over the physical template, particularly relative to elevation. The physical drivers of the water balance have characteristic spatial scales that differ by orders of magnitude. Across large spatial extents, temperature and precipitation as defined by elevation primarily govern the location of the mixed conifer zone. If the analysis is constrained to elevations within the mixed-conifer zone, local topography comes into play as it influences drainage. Soil depth varies considerably at all measured scales, and is especially dominant at fine (within-stand) scales. Physical site variables can influence soil moisture deficit either by affecting water supply or water demand; these effects have qualitatively different implications for forest response. 
These results have clear implications about purely inferential approaches to gradient analysis, and bear strongly on our ability to use correlative approaches in assessing the potential responses of montane forests to anthropogenic climatic change.
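    The supply-versus-demand water balance described above can be sketched as a single monthly bookkeeping step; the units (mm) and the soil water capacity below are assumed values for illustration, not the model's calibrated ones:

```python
def monthly_deficit(precip, storage, pet, capacity=100.0):
    """Toy water-balance step: supply = precipitation + stored soil water,
    demand = potential evapotranspiration (PET); the unmet demand is the
    moisture deficit that the gradient model relates to forest response.
    All quantities in mm; capacity is an assumed soil water capacity."""
    supply = precip + storage
    aet = min(pet, supply)                    # actual evapotranspiration
    deficit = pet - aet                       # unmet demand
    new_storage = min(capacity, supply - aet) # recharge, capped by soil depth
    return deficit, new_storage
```

The capacity parameter is where soil depth enters: a shallow soil stores less water, so the same climate produces a larger deficit, matching the abstract's point that site variables can act through either supply or demand.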

  19. Naturalness of Electroweak Symmetry Breaking

    NASA Astrophysics Data System (ADS)

    Espinosa, J. R.

    2007-02-01

    After revisiting the hierarchy problem of the Standard Model and its implications for the scale of New Physics, I consider the fine tuning problem of electroweak symmetry breaking in two main scenarios beyond the Standard Model: SUSY and Little Higgs models. The main conclusions are that New Physics should appear within the reach of the LHC; that some SUSY models can solve the hierarchy problem with acceptable residual fine tuning; and, finally, that Little Higgs models generically suffer from large tunings, often hidden.

  20. Developing a Psychometric Instrument to Measure Physical Education Teachers' Job Demands and Resources.

    PubMed

    Zhang, Tan; Chen, Ang

    2017-01-01

    Based on the job demands-resources model, the study developed and validated an instrument that measures physical education teachers' job demands-resources perception. Expert review established content validity with the average item rating of 3.6/5.0. Construct validity and reliability were determined with a teacher sample ( n = 397). Exploratory factor analysis established a five-dimension construct structure matching the theoretical construct deliberated in the literature. The composite reliability scores for the five dimensions range from .68 to .83. Validity coefficients (intraclass correlational coefficients) are .69 for job resources items and .82 for job demands items. Inter-scale correlational coefficients range from -.32 to .47. Confirmatory factor analysis confirmed the construct validity with high dimensional factor loadings (ranging from .47 to .84 for job resources scale and from .50 to .85 for job demands scale) and adequate model fit indexes (root mean square error of approximation = .06). The instrument provides a tool to measure physical education teachers' perception of their working environment.
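    Internal-consistency results of the kind reported here are commonly summarized with Cronbach's alpha. A minimal sketch on hypothetical item scores (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency: items is a list of
    per-item score lists, each ordered by the same respondents."""
    k = len(items)
    n = len(items[0])
    def var(x):
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)
    # Per-respondent total scores across items
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1.0 - sum(var(i) for i in items) / var(totals))

# Hypothetical scores: 3 items answered by 4 respondents
scores = [[1, 2, 3, 4], [2, 1, 4, 3], [1, 2, 4, 4]]
alpha = cronbach_alpha(scores)
```

Alpha rises as items covary: identical items give exactly 1.0, while weakly related items pull the value down toward the .68 to .83 range the instrument reports.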

  1. Developing a Psychometric Instrument to Measure Physical Education Teachers’ Job Demands and Resources

    PubMed Central

    Zhang, Tan; Chen, Ang

    2017-01-01

    Based on the job demands–resources model, the study developed and validated an instrument that measures physical education teachers’ job demands–resources perception. Expert review established content validity with the average item rating of 3.6/5.0. Construct validity and reliability were determined with a teacher sample (n = 397). Exploratory factor analysis established a five-dimension construct structure matching the theoretical construct deliberated in the literature. The composite reliability scores for the five dimensions range from .68 to .83. Validity coefficients (intraclass correlational coefficients) are .69 for job resources items and .82 for job demands items. Inter-scale correlational coefficients range from −.32 to .47. Confirmatory factor analysis confirmed the construct validity with high dimensional factor loadings (ranging from .47 to .84 for job resources scale and from .50 to .85 for job demands scale) and adequate model fit indexes (root mean square error of approximation = .06). The instrument provides a tool to measure physical education teachers’ perception of their working environment. PMID:29200808

  2. High scale impact in alignment and decoupling in two-Higgs-doublet models

    NASA Astrophysics Data System (ADS)

    Basler, Philipp; Ferreira, Pedro M.; Mühlleitner, Margarete; Santos, Rui

    2018-05-01

    The two-Higgs-doublet model (2HDM) provides an excellent benchmark to study physics beyond the Standard Model (SM). In this work, we discuss how the behavior of the model at high-energy scales causes it to have a scalar with properties very similar to those of the SM—which means the 2HDM can be seen to naturally favor a decoupling or alignment limit. For a type II 2HDM, we show that requiring the model to be theoretically valid up to a scale of 1 TeV, by studying the renormalization group equations (RGE) of the parameters of the model, causes a significant reduction in the allowed magnitude of the quartic couplings. This, combined with B -physics bounds, forces the model to be naturally decoupled. As a consequence, any nondecoupling limits in type II, like the wrong-sign scenario, are excluded. On the contrary, even with the very constraining limits for the Higgs couplings from the LHC, the type I model can deviate substantially from alignment. An RGE analysis similar to that made for type II shows, however, that requiring a single scalar to be heavier than about 500 GeV would be sufficient for the model to be decoupled. Finally, we show that the 2HDM is stable up to the Planck scale independently of which of the C P -even scalars is the discovered 125 GeV Higgs boson.
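    The RGE argument above can be illustrated with a deliberately simplified one-coupling running, d(lambda)/d(ln mu) = 3 lambda^2 / (8 pi^2). This is a toy caricature, not the coupled 2HDM system studied in the paper, but it shows the mechanism: a larger quartic coupling at the electroweak scale grows faster under running, so demanding validity up to a higher scale shrinks the allowed low-scale coupling.

```python
import math

def run_quartic(lam0, mu0, mu1, steps=10000):
    """Toy one-loop running d(lam)/d(ln mu) = 3*lam**2 / (8*pi**2),
    integrated with forward Euler from scale mu0 to mu1 (GeV).
    Illustrative single-coupling sketch, not the full 2HDM RGEs."""
    dt = (math.log(mu1) - math.log(mu0)) / steps
    lam = lam0
    for _ in range(steps):
        lam += dt * 3.0 * lam**2 / (8.0 * math.pi**2)
    return lam

# Running an assumed quartic coupling from the Z mass up to 1 TeV
lam_at_tev = run_quartic(1.0, 91.0, 1000.0)
```

Imposing a validity condition such as lam < 4*pi at mu1 then translates into an upper bound on lam0, the qualitative effect the paper derives for the type II 2HDM.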

  3. A simulation study demonstrating the importance of large-scale trailing vortices in wake steering

    DOE PAGES

    Fleming, Paul; Annoni, Jennifer; Churchfield, Matthew; ...

    2018-05-14

    In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies, based on new control-oriented models with new physics, that target total flow control over wake redirection may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.

  4. A simulation study demonstrating the importance of large-scale trailing vortices in wake steering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleming, Paul; Annoni, Jennifer; Churchfield, Matthew

    In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies, based on new control-oriented models with new physics, that target total flow control over wake redirection may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.

  5. Stability of knotted vortices in wave chaos

    NASA Astrophysics Data System (ADS)

    Taylor, Alexander; Dennis, Mark

    Large scale tangles of disordered filaments occur in many diverse physical systems, from turbulent superfluids to optical volume speckle to liquid crystal phases. They can exhibit particular large scale random statistics despite very different local physics. We have previously used the topological statistics of knotting and linking to characterise the large scale tangling, using the vortices of three-dimensional wave chaos as a universal model system whose physical lengthscales are set only by the wavelength. Unlike geometrical quantities, the statistics of knotting depend strongly on the physical system and boundary conditions. Although knotting patterns characterise different systems, the topology of vortices is highly unstable to perturbation, under which they may reconnect with one another. In systems of constructed knots, these reconnections generally rapidly destroy the knot, but for vortex tangles the topological statistics must be stable. Using large scale simulations of chaotic eigenfunctions, we numerically investigate the prevalence and impact of reconnection events, and their effect on the topology of the tangle.

  6. Dimensionality and predictive validity of the HAM-Nat, a test of natural sciences for medical school admission

    PubMed Central

    2011-01-01

    Background: Knowledge in natural sciences generally predicts study performance in the first two years of the medical curriculum. In order to reduce delay and dropout in the preclinical years, Hamburg Medical School decided to develop a natural science test (HAM-Nat) for student selection. In the present study, two different approaches to scale construction are presented: a unidimensional scale and a scale composed of three subject specific dimensions. Their psychometric properties and relations to academic success are compared. Methods: 334 first year medical students of the 2006 cohort responded to 52 multiple choice items from biology, physics, and chemistry. For the construction of scales we generated two random subsamples, one for development and one for validation. In the development sample, unidimensional item sets were extracted from the item pool by means of weighted least squares (WLS) factor analysis, and subsequently fitted to the Rasch model. In the validation sample, the scales were subjected to confirmatory factor analysis and, again, Rasch modelling. The outcome measure was academic success after two years. Results: Although the correlational structure within the item set is weak, a unidimensional scale could be fitted to the Rasch model. However, psychometric properties of this scale deteriorated in the validation sample. A model with three highly correlated subject specific factors performed better. All summary scales predicted academic success with an odds ratio of about 2.0. Prediction was independent of high school grades and there was a slight tendency for prediction to be better in females than in males. Conclusions: A model separating biology, physics, and chemistry into different Rasch scales seems to be more suitable for item bank development than a unidimensional model, even when these scales are highly correlated and enter into a global score.
When such a combination scale is used to select the upper quartile of applicants, the proportion of successful completion of the curriculum after two years is expected to rise substantially. PMID:21999767

  7. Dimensionality and predictive validity of the HAM-Nat, a test of natural sciences for medical school admission.

    PubMed

    Hissbach, Johanna C; Klusmann, Dietrich; Hampe, Wolfgang

    2011-10-14

    Knowledge in natural sciences generally predicts study performance in the first two years of the medical curriculum. In order to reduce delay and dropout in the preclinical years, Hamburg Medical School decided to develop a natural science test (HAM-Nat) for student selection. In the present study, two different approaches to scale construction are presented: a unidimensional scale and a scale composed of three subject specific dimensions. Their psychometric properties and relations to academic success are compared. 334 first year medical students of the 2006 cohort responded to 52 multiple choice items from biology, physics, and chemistry. For the construction of scales we generated two random subsamples, one for development and one for validation. In the development sample, unidimensional item sets were extracted from the item pool by means of weighted least squares (WLS) factor analysis, and subsequently fitted to the Rasch model. In the validation sample, the scales were subjected to confirmatory factor analysis and, again, Rasch modelling. The outcome measure was academic success after two years. Although the correlational structure within the item set is weak, a unidimensional scale could be fitted to the Rasch model. However, psychometric properties of this scale deteriorated in the validation sample. A model with three highly correlated subject specific factors performed better. All summary scales predicted academic success with an odds ratio of about 2.0. Prediction was independent of high school grades and there was a slight tendency for prediction to be better in females than in males. A model separating biology, physics, and chemistry into different Rasch scales seems to be more suitable for item bank development than a unidimensional model, even when these scales are highly correlated and enter into a global score. 
When such a combination scale is used to select the upper quartile of applicants, the proportion of successful completion of the curriculum after two years is expected to rise substantially.

  8. Unforced decadal fluctuations in a coupled model of the atmosphere and ocean mixed layer

    NASA Technical Reports Server (NTRS)

    Barnett, T. P.; Del Genio, A. D.; Ruedy, R. A.

    1992-01-01

    Global average temperature in a 100-year control run of a model used for greenhouse gas response simulations showed low-frequency natural variability comparable in magnitude to that observed over the last 100 years. The model variability was found to be barotropic in the atmosphere, and located in the tropical strip with largest values near the equator in the Pacific. The model variations were traced to complex, low-frequency interactions between the meridional sea surface temperature gradients in the eastern equatorial Pacific, clouds at both high and low levels, and features of the tropical atmospheric circulation. The variations in these and other model parameters appear to oscillate between two limiting climate states. The physical scenario accounting for the oscillations on decadal time scales is almost certainly not found in the real world on shorter time scales due to limited resolution and the omission of key physics (e.g., equatorial ocean dynamics) in the model. The real message is that models with dynamical limitations can still produce significant long-term variability. Only a thorough physical diagnosis of such simulations and comparisons with decadal-length data sets will allow one to decide if faith in the model results is, or is not, warranted.

  9. A Goddard Multi-Scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2010-01-01

    A multi-scale modeling system with unified physics has been developed at NASA Goddard Space Flight Center (GSFC). The system consists of an MMF, the coupled NASA Goddard finite-volume GCM (fvGCM) and Goddard Cumulus Ensemble model (GCE, a CRM); the state-of-the-art Weather Research and Forecasting model (WRF); and the stand-alone GCE. These models can share the same microphysical schemes, radiation (including explicitly calculated cloud optical properties), and surface models that have been developed, improved, and tested for different environments. In this talk, I will present: (1) a brief review of the GCE model and its applications to the impact of aerosols on deep precipitation processes; (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs); and (3) a discussion of the Goddard WRF version (its developments and applications). We are also performing inline tracer calculations to comprehend the physical processes (i.e., the boundary layer and each quadrant in the boundary layer) related to the development and structure of hurricanes and mesoscale convective systems. In addition, high-resolution (spatial, 2 km; temporal, 1 min) visualization of the model results will be presented.

  10. Psychometric Properties of the Physical Educators' Self-Efficacy Toward Including Students With Disabilities-Autism Among Chinese Preservice Physical Education Teachers.

    PubMed

    Li, Chunxiao; Wang, Lijuan; Block, Martin E; Sum, Raymond K W; Wu, Yandan

    2018-04-01

    Teachers' self-efficacy is a critical predictor for successful inclusive physical education. However, little is known about preservice physical educators' self-efficacy toward teaching students with autism spectrum disorders in China. A sound instrument is necessary to measure their self-efficacy level. This validation study examined the psychometric properties of the Chinese version of the Physical Educators' Self-Efficacy Toward Including Students with Disabilities-Autism. A multisection survey form was administered to preservice physical educators in Mainland China (n = 205) and Hong Kong (n = 227). The results of confirmatory factor analysis confirmed the one-factor model of the scale in the total sample and each of the two samples. Invariance tests across the two samples supported configural and metric invariance but not scalar invariance. The scale scores showed good internal reliability and were correlated with theoretically relevant constructs (i.e., burnout and life satisfaction) in the total sample and subsamples. These findings generally support the utility of the scale for use among Chinese preservice physical educators.

  11. A review of numerical models to predict the atmospheric dispersion of radionuclides.

    PubMed

    Leelőssy, Ádám; Lagzi, István; Kovács, Attila; Mészáros, Róbert

    2018-02-01

    The field of atmospheric dispersion modeling has evolved together with nuclear risk assessment and emergency response systems. Atmospheric concentration and deposition of radionuclides originating from an unintended release provide the basis of dose estimations and countermeasure strategies. To predict the atmospheric dispersion and deposition of radionuclides, several numerical models are available, coupled with numerical weather prediction (NWP) systems. This work provides a review of the main concepts and different approaches of atmospheric dispersion modeling. Key processes of the atmospheric transport of radionuclides are emission, advection, turbulent diffusion, dry and wet deposition, radioactive decay and other physical and chemical transformations. A wide range of modeling software is available to simulate these processes, with different physical assumptions, numerical approaches and implementations. The most appropriate modeling tool for a specific purpose can be selected based on the spatial scale and the complexity of the meteorology, land surface, and physical and chemical transformations, also considering the available data and computational resources. For most regulatory and operational applications, offline-coupled NWP-dispersion systems are used, with either a local-scale Gaussian or a regional- to global-scale Eulerian or Lagrangian approach. The dispersion model results show large sensitivity to the accuracy of the coupled NWP model, especially through the description of planetary boundary layer turbulence, deep convection and wet deposition. Improvement of dispersion predictions can be achieved by online coupling of mesoscale meteorology and atmospheric transport models. The 2011 Fukushima event was the first large-scale nuclear accident where real-time prognostic dispersion modeling provided decision support. Dozens of dispersion models with different approaches were used for prognostic and retrospective simulations of the Fukushima release. The unknown release rate proved to be the largest source of uncertainty, underlining the importance of inverse modeling and data assimilation in future developments. Copyright © 2017 Elsevier Ltd. All rights reserved.
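The local-scale Gaussian approach mentioned above has a standard closed form for the steady-state plume concentration. The sketch below is a minimal illustration of that formula, not the implementation of any model from the review; the dispersion parameters sigma_y and sigma_z are passed in directly here, whereas in practice they would come from stability-class correlations:

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (kg/m^3) with ground
    reflection. q: emission rate (kg/s), u: wind speed (m/s),
    h: effective release height (m), y/z: crosswind and vertical
    coordinates (m), sigma_y/sigma_z: dispersion parameters (m) at the
    downwind distance of interest."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    # Image source below ground level enforces reflection at the surface.
    vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centerline concentration for a 50 m stack; the sigma
# values are assumed, roughly representative of ~1 km downwind.
c = gaussian_plume(q=1.0, u=5.0, y=0.0, z=0.0, h=50.0,
                   sigma_y=80.0, sigma_z=40.0)
```

Eulerian and Lagrangian models replace this analytic form with numerical transport on a grid or along trajectories, which is what makes them applicable at regional to global scales.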

  12. Evaluation of WRF physical parameterizations against ARM/ASR Observations in the post-cold-frontal region to improve low-level clouds representation in CAM5

    NASA Astrophysics Data System (ADS)

    Lamraoui, F.; Booth, J. F.; Naud, C. M.

    2017-12-01

    The representation of subgrid-scale processes of low-level marine clouds located in the post-cold-frontal region poses a serious challenge for climate models. In particular, boundary layer parameterizations are predominantly designed for individual regimes that evolve gradually over time and do not accommodate a cold-front passage, which can modify the boundary layer rapidly. Also, microphysics schemes respond differently to the quick boundary layer development produced by the boundary layer schemes, especially under unstable conditions. To improve the understanding of cloud physics in the post-cold-frontal region, the present study focuses on exploring the relationship between cloud properties, local processes and large-scale conditions. In order to address these questions, we explore the WRF sensitivity to the interaction between various combinations of boundary layer and microphysics parameterizations, including the Community Atmospheric Model version 5 (CAM5) physics package, in a perturbed-physics ensemble. We then evaluate these simulations against ground-based ARM observations over the Azores. The WRF-based simulations demonstrate particular sensitivities of the marine cold front passage and the associated post-cold-frontal clouds to the domain size, the resolution and the physical parameterizations. First, it is found that in multiple different case studies the model cannot generate the cold front passage when the domain size is larger than 3000 km2. Instead, the modeled cold front stalls, which shows the importance of properly capturing the synoptic-scale conditions. The simulations reveal a persistent delay in capturing the cold front passage and also an underestimated duration of the post-cold-frontal conditions. Analysis of the perturbed-physics ensemble shows that changing the microphysics scheme leads to larger differences in the modeled clouds than changing the boundary layer scheme. The in-cloud heating tendencies are analyzed to explain this sensitivity.

  13. Modeling of Thermochemical Behavior in an Industrial-Scale Rotary Hearth Furnace for Metallurgical Dust Recycling

    NASA Astrophysics Data System (ADS)

    Wu, Yu-Liang; Jiang, Ze-Yi; Zhang, Xin-Xin; Xue, Qing-Guo; Yu, Ai-Bing; Shen, Yan-Song

    2017-10-01

    Metallurgical dusts can be recycled through direct reduction in rotary hearth furnaces (RHFs) via addition into carbon-based composite pellets. While iron in the dust is recycled, several heavy and alkali metal elements harmful for blast furnace operation, including Zn, Pb, K, and Na, can also be separated and then recycled. However, there is a lack of understanding on thermochemical behavior related to direct reduction in an industrial-scale RHF, especially removal behavior of Zn, Pb, K, and Na, leading to technical issues in industrial practice. In this work, an integrated model of the direct reduction process in an industrial-scale RHF is described. The integrated model includes three mathematical submodels and one physical model, specifically, a three-dimensional (3-D) CFD model of gas flow and heat transfer in an RHF chamber, a one-dimensional (1-D) CFD model of direct reduction inside a pellet, an energy/mass equilibrium model, and a reduction physical experiment using a Si-Mo furnace. The model is validated by comparing the simulation results with measurements in terms of furnace temperature, furnace pressure, and pellet indexes. The model is then used for describing in-furnace phenomena and pellet behavior in terms of heat transfer, direct reduction, and removal of a range of heavy and alkali metal elements under industrial-scale RHF conditions. The results show that the furnace temperature in the preheating section should be kept at a higher level in an industrial-scale RHF compared with that in a pilot-scale RHF. The removal rates of heavy and alkali metal elements inside the composite pellet are all faster than iron metallization, specifically in the order of Pb, Zn, K, and Na.

  14. Validation of the Neurological Fatigue Index for stroke (NFI-Stroke)

    PubMed Central

    2012-01-01

    Background Fatigue is a common symptom in stroke. Several self-report scales are available to measure this debilitating symptom, but concern has been expressed about their construct validity. Objective To examine the reliability and validity of a recently developed scale for multiple sclerosis (MS) fatigue, the Neurological Fatigue Index (NFI-MS), in a sample of stroke patients. Method Six patients with stroke participated in qualitative interviews, which were analysed and the themes compared for equivalence to those derived from existing data on MS fatigue. 999 questionnaire packs were sent to those with a stroke within the past four years. Data from the four subscales and the Summary scale of the NFI-MS were fitted to the Rasch measurement model. Results Themes identified by stroke patients were consistent with those identified by those with MS. 282 questionnaires were returned; respondents had a mean age of 67.3 years, 62% were male, and they were on average 17.2 (SD 11.4, range 2–50) months post stroke. The Physical, Cognitive and Summary scales all showed good fit to the model, were unidimensional, and were free of differential item functioning by age, sex and time. The sleep scales failed to show adequate fit in their current format. Conclusion Post-stroke fatigue appears to be represented by a combination of physical and cognitive components, confirmed by both qualitative and quantitative processes. The NFI-Stroke, comprising a Physical and a Cognitive subscale and a 10-item Summary scale, meets the strictest measurement requirements. Fit to the Rasch model allows conversion of ordinal raw scores to a linear metric. PMID:22587411
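The final sentence refers to the defining property of the Rasch model: once items fit, each ordinal raw score maps to a single location on a linear logit metric. A minimal sketch of that conversion, using made-up item difficulties rather than the NFI-Stroke calibration:

```python
import math

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability of endorsing an item of
    difficulty b for a person at location theta (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def score_to_logit(raw, difficulties, lo=-6.0, hi=6.0, tol=1e-9):
    """Map an ordinal raw score (0 < raw < number of items) to a linear
    person measure by solving sum_i P(theta, b_i) = raw with bisection;
    the expected score is monotonically increasing in theta."""
    def expected_minus_raw(t):
        return sum(rasch_prob(t, b) for b in difficulties) - raw
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expected_minus_raw(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# With symmetric (hypothetical) item difficulties, a raw score at the
# middle of the range maps to a person measure of 0 logits.
theta = score_to_logit(1.5, [-1.0, 0.0, 1.0])
```

Published Rasch calibrations typically ship such raw-score-to-measure conversion tables precomputed, which is what makes the summed score usable as interval-level data.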

  15. Influence of galactic arm scale dynamics on the molecular composition of the cold and dense ISM. I. Observed abundance gradients in dense clouds

    NASA Astrophysics Data System (ADS)

    Ruaud, M.; Wakelam, V.; Gratier, P.; Bonnell, I. A.

    2018-04-01

    Aim. We study the effect of large scale dynamics on the molecular composition of the dense interstellar medium during the transition between diffuse to dense clouds. Methods: We followed the formation of dense clouds (on sub-parsec scales) through the dynamics of the interstellar medium at galactic scales. We used results from smoothed particle hydrodynamics (SPH) simulations from which we extracted physical parameters that are used as inputs for our full gas-grain chemical model. In these simulations, the evolution of the interstellar matter is followed for 50 Myr. The warm low-density interstellar medium gas flows into spiral arms where orbit crowding produces the shock formation of dense clouds, which are held together temporarily by the external pressure. Results: We show that depending on the physical history of each SPH particle, the molecular composition of the modeled dense clouds presents a high dispersion in the computed abundances even if the local physical properties are similar. We find that carbon chains are the most affected species and show that these differences are directly connected to differences in (1) the electronic fraction, (2) the C/O ratio, and (3) the local physical conditions. We argue that differences in the dynamical evolution of the gas that formed dense clouds could account for the molecular diversity observed between and within these clouds. Conclusions: This study shows the importance of past physical conditions in establishing the chemical composition of the dense medium.

  16. An ensemble constrained variational analysis of atmospheric forcing data and its application to evaluate clouds in CAM5: Ensemble 3DCVA and Its Application

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2016-01-05

    Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to the deficiencies of physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.
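The core of a constrained variational objective analysis can be sketched in a few lines: observed profiles are minimally adjusted, in a weighted least-squares sense, so that column budgets (e.g., mass, moisture, energy) close exactly. The linear-constraint, diagonal-weight version below is an illustrative simplification of such a scheme, not the actual 3DCVA implementation:

```python
import numpy as np

def constrained_adjust(x_obs, w, a_mat, b):
    """Find x minimizing (x - x_obs)^T diag(w) (x - x_obs) subject to
    A x = b, via the closed-form Lagrange-multiplier solution
    x = x_obs + W^-1 A^T (A W^-1 A^T)^-1 (b - A x_obs)."""
    w_inv = np.diag(1.0 / np.asarray(w, dtype=float))
    gain = w_inv @ a_mat.T @ np.linalg.inv(a_mat @ w_inv @ a_mat.T)
    return x_obs + gain @ (b - a_mat @ x_obs)

# Force three (made-up) column values to close a budget of 7 while
# staying as close as possible to the raw analysis [1, 2, 3].
x = constrained_adjust(np.array([1.0, 2.0, 3.0]),
                       np.ones(3),
                       np.array([[1.0, 1.0, 1.0]]),
                       np.array([7.0]))
```

The ensemble aspect of the study corresponds to repeating such an adjustment with perturbed background data, weights, and constraint values and examining the spread of the resulting forcing.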

  18. Towards a physically-based multi-scale ecohydrological simulator for semi-arid regions

    NASA Astrophysics Data System (ADS)

    Caviedes-Voullième, Daniel; Josefik, Zoltan; Hinz, Christoph

    2017-04-01

    The use of numerical models as tools for describing and understanding complex ecohydrological systems has made it possible to test hypotheses and propose fundamental, process-based explanations of the system's behaviour as a whole as well as of its internal dynamics. Reaction-diffusion equations have been used to describe and generate organized patterns such as bands, spots, and labyrinths using simple feedback mechanisms and boundary conditions. Alternatively, pattern-matching cellular-automaton models have been used to generate vegetation self-organization in arid and semi-arid regions, also using simple descriptions of surface hydrological processes. A key question is: how much physical realism is needed in order to adequately capture the pattern formation processes in semi-arid regions while reliably representing the water balance dynamics at the relevant time scales? In fact, redistribution of water by surface runoff at the hillslope scale occurs at a temporal resolution of minutes, while vegetation development requires much lower temporal resolution and longer time spans. This generates a fundamental spatio-temporal multi-scale problem, for which high-resolution rainfall and surface topography are required. Accordingly, the objective of this contribution is to provide proof-of-concept that the governing processes can be described numerically at those multiple scales. The requirements for simulating ecohydrological processes and pattern formation with increased physical realism are, amongst others: (i) high-resolution rainfall that adequately captures the triggers of growth, as the vegetation dynamics of arid regions respond as pulsed systems; (ii) complex, natural topography, in order to accurately model drainage patterns, as surface water redistribution is highly sensitive to topographic features; (iii) microtopography and hydraulic roughness, as small-scale variations do impact large-scale hillslope behaviour; and (iv) moisture-dependent infiltration, as the temporal dynamics of infiltration affect water storage under vegetation and in bare soil. Despite the volume of research in this field, fundamental limitations still exist in the models regarding the aforementioned issues. Topography and hydrodynamics have been strongly simplified. Infiltration has been modelled as dependent on depth but independent of soil moisture. Temporal rainfall variability has only been addressed for seasonal rain. Spatial heterogeneity of the topography, as well as of roughness and infiltration properties, has not been fully and explicitly represented. We hypothesize that physical processes must be robustly modelled and the drivers of complexity must be present at as high a resolution as possible in order to provide the realism necessary to improve transient simulations, perhaps leading the way to virtual laboratories and, arguably, predictive tools. This work provides a first approach to a model with explicit hydrological processes represented by physically-based hydrodynamic models, coupled with well-accepted vegetation models. The model aims to enable new possibilities relating to spatiotemporal variability, arbitrary topography and representation of spatial heterogeneity, including sub-daily (in fact, arbitrary) temporal variability of rain as the main forcing of the model, explicit representation of infiltration processes, and various feedback mechanisms between the hydrodynamics and the vegetation. Preliminary testing strongly suggests that the model is viable, has the potential to produce new information on the internal dynamics of the system, and successfully aggregates many of the sources of complexity. Initial benchmarking of the model also reveals strengths to be exploited, thus providing an interesting research outlook, as well as weaknesses to be addressed in the immediate future.
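Moisture-dependent infiltration, the fourth requirement listed above, is commonly captured by formulations such as Green-Ampt, in which infiltration capacity declines as cumulative infiltration (a proxy for soil wetting) grows. The sketch below illustrates that idea only; it is not the specific formulation used by the authors:

```python
def green_ampt_capacity(k_s, psi_f, delta_theta, f_cum):
    """Green-Ampt infiltration capacity f = Ks * (1 + psi_f * dtheta / F):
    high when the soil is dry (small cumulative infiltration F) and
    decaying toward the saturated conductivity Ks as F grows.
    k_s: saturated hydraulic conductivity (m/s); psi_f: wetting-front
    suction head (m); delta_theta: moisture deficit (-); f_cum:
    cumulative infiltration depth F (m)."""
    return k_s * (1.0 + psi_f * delta_theta / max(f_cum, 1e-9))
```

In a coupled surface-subsurface model, this capacity would be compared against rainfall plus ponded-water supply at each time step, which is precisely the interaction between runoff redistribution and soil moisture that depth-only infiltration models miss.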

  19. Comparison of Two Conceptually Different Physically-based Hydrological Models - Looking Beyond Streamflows

    NASA Astrophysics Data System (ADS)

    Rousseau, A. N.; Álvarez; Yu, X.; Savary, S.; Duffy, C.

    2015-12-01

    Most physically-based hydrological models simulate, to various extents, the relevant watershed processes occurring at different spatiotemporal scales. These models use different physical domain representations (e.g., hydrological response units, discretized control volumes) and numerical solution techniques (e.g., finite difference method, finite element method) as well as a variety of approximations for representing the physical processes. Despite the fact that several models have been developed so far, very few inter-comparison studies have been conducted to check, beyond streamflows, whether different modeling approaches simulate the other watershed-scale processes in a similar fashion. In this study, PIHM (Qu and Duffy, 2007), a fully coupled, distributed model, and HYDROTEL (Fortin et al., 2001; Turcotte et al., 2003, 2007), a pseudo-coupled, semi-distributed model, were compared to check whether the models could corroborate observed streamflows while equally representing other processes such as evapotranspiration, snow accumulation/melt, and infiltration. For this study, the Young Womans Creek watershed, PA, was used to compare: streamflows (channel routing), actual evapotranspiration, snow water equivalent (snow accumulation and melt), infiltration, recharge, shallow water depth above the soil surface (surface flow), lateral flow into the river (surface and subsurface flow) and height of the saturated soil column (subsurface flow). Despite a lack of observed data for contrasting most of the simulated processes, it can be said that the two models can be used as simulation tools for streamflows, actual evapotranspiration, infiltration, lateral flows into the river, and height of the saturated soil column. However, each process presents particular differences as a result of the physical parameters and modeling approaches used by each model. These differences should be the object of further analyses to definitively confirm or reject modeling hypotheses.

  20. Coarse-Grained Models for Protein-Cell Membrane Interactions

    PubMed Central

    Bradley, Ryan; Radhakrishnan, Ravi

    2015-01-01

    The physiological properties of biological soft matter are the product of collective interactions, which span many time and length scales. Recent computational modeling efforts have helped illuminate experiments that characterize the ways in which proteins modulate membrane physics. Linking these models across time and length scales in a multiscale model explains how atomistic information propagates to larger scales. This paper reviews continuum modeling and coarse-grained molecular dynamics methods, which connect atomistic simulations and single-molecule experiments with the observed microscopic or mesoscale properties of soft-matter systems essential to our understanding of cells, particularly those involved in sculpting and remodeling cell membranes. PMID:26613047

  1. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    NASA Astrophysics Data System (ADS)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is made of what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture, on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology for forecasting large earthquakes, the entire process of one cycle of a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within a framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II), and the phase of rupture nucleation at the critical stage where an adequate amount of elastic strain energy has been stored (phase III). Phase II plays a critical role in physical modeling for intermediate-term forecasting, and phase III in physical modeling for short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step toward establishing a methodology for forecasting large earthquakes.
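The slip-dependent constitutive law advocated here can be illustrated in its simplest, linear slip-weakening form, in which fault strength decays from a peak to a residual value over a critical slip distance D_c. The law discussed in the paper additionally allows the parameters to depend on slip rate or time, which this sketch deliberately omits:

```python
def slip_weakening_stress(slip, tau_p, tau_r, d_c):
    """Linear slip-weakening law: shear strength drops from the peak
    value tau_p to the residual value tau_r over the critical slip
    distance d_c, then stays at tau_r for any further slip."""
    if slip >= d_c:
        return tau_r
    return tau_p - (tau_p - tau_r) * slip / d_c
```

The breakdown work per unit fault area, (tau_p - tau_r) * d_c / 2 for this linear form, is one of the scale-dependent quantities the abstract refers to: laboratory and seismological estimates of d_c differ by orders of magnitude.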

  2. A Comparison of Curing Process-Induced Residual Stresses and Cure Shrinkage in Micro-Scale Composite Structures with Different Constitutive Laws

    NASA Astrophysics Data System (ADS)

    Li, Dongna; Li, Xudong; Dai, Jianfeng; Xi, Shangbin

    2018-02-01

    In this paper, three kinds of constitutive laws, elastic, "cure hardening instantaneously linear elastic" (CHILE) and viscoelastic, are used to predict curing-process-induced residual stress for thermoset polymer composites. A multi-physics coupled finite element analysis (FEA) model implementing the three proposed approaches is established in COMSOL Multiphysics Version 4.3b. The evolution of thermo-physical properties with temperature and degree of cure (DOC), which improves the accuracy of the numerical simulations, and cure shrinkage are taken into account in all three models. Subsequently, the three proposed constitutive models are each implemented in a 3D micro-scale composite laminate structure. A comparison of the three sets of numerical results indicates that the elastic model generates large errors in residual stress and cure shrinkage, whereas the results calculated by the modified CHILE model are in excellent agreement with those estimated by the viscoelastic model.
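As a rough illustration of how a CHILE-type law differs from a simple elastic one, the resin is treated as linearly elastic at every instant, but with a stiffness that develops monotonically with the cure state. The ramp below, linear in degree of cure between gelation and full cure, is an assumed functional form for illustration only, not the specific model of the paper:

```python
def chile_modulus(alpha, e_uncured, e_cured, alpha_gel=0.3):
    """CHILE-type cure-hardening modulus: the material behaves linearly
    elastically at each instant, with stiffness ramping from e_uncured
    to e_cured as the degree of cure alpha passes from gelation
    (alpha_gel, an assumed value) to full cure. Illustrative form only."""
    if alpha <= alpha_gel:
        return e_uncured
    frac = min((alpha - alpha_gel) / (1.0 - alpha_gel), 1.0)
    return e_uncured + frac * (e_cured - e_uncured)
```

Because stress increments are accumulated with the modulus current at each increment, stresses built up early in the cure are "frozen in" at low stiffness, which is why CHILE tracks the viscoelastic prediction far better than a single-modulus elastic model.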

  3. Unfolding an electronic integrate-and-fire circuit.

    PubMed

    Carrillo, Humberto; Hoppensteadt, Frank

    2010-01-01

    Many physical and biological phenomena involve accumulation and discharge processes that can occur on significantly different time scales. Models of these processes have contributed to the understanding of excitability, self-sustained oscillations, and synchronization in arrays of oscillators. Integrate-and-fire (I+F) models are popular minimal fill-and-flush mathematical models. They are used in neuroscience to study spiking and phase locking in single neuron membranes and large-scale neural networks, and in a variety of applications in physics and electrical engineering. We show here how the classical first-order I+F model fits into the theory of nonlinear oscillators of van der Pol type by demonstrating that a particular second-order oscillator having small parameters converges in a singular perturbation limit to the I+F model. In this sense, our study provides a novel unfolding of such models, and it identifies a constructible electronic circuit that is closely related to I+F.
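The classical first-order I+F model that the paper unfolds is easy to state concretely: a leaky accumulator that fires and resets on reaching a threshold. A minimal sketch with arbitrary parameter values (not taken from the paper):

```python
def integrate_and_fire(i_ext, tau=10.0, v_th=1.0, v_reset=0.0,
                       dt=0.1, t_max=200.0):
    """First-order leaky integrate-and-fire: tau * dv/dt = -v + i_ext,
    with an instantaneous fire-and-reset when v crosses v_th.
    Forward-Euler integration; returns the spike times (ms)."""
    n_steps = int(t_max / dt)
    v, spikes = v_reset, []
    for k in range(n_steps):
        v += dt / tau * (-v + i_ext)
        if v >= v_th:
            spikes.append(k * dt)
            v = v_reset
    return spikes

# A suprathreshold constant drive produces strictly periodic spiking.
spikes = integrate_and_fire(i_ext=1.5)
```

The discontinuous reset is exactly what the singular perturbation analysis smooths out: in the second-order van der Pol-type oscillator, the reset becomes a fast but continuous relaxation branch that collapses onto the jump as the small parameter goes to zero.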

  4. Comparison of Local Scale Measured and Modeled Brightness Temperatures and Snow Parameters from the CLPX 2003 by Means of a Dense Medium Radiative Transfer Theory Model

    NASA Technical Reports Server (NTRS)

    Tedesco, Marco; Kim, Edward J.; Cline, Don; Graf, Tobias; Koike, Toshio; Armstrong, Richard; Brodzik, Mary J.; Hardy, Janet

    2004-01-01

    Microwave remote sensing offers distinct advantages for observing the cryosphere. Solar illumination is not required, and spatial and temporal coverage are excellent from polar-orbiting satellites. Passive microwave measurements are sensitive to the two most useful physical quantities for many hydrological applications: physical temperature and water content/state. Sensitivity to the latter is a direct result of the microwave sensitivity to the dielectric properties of natural media, including snow, ice, soil (frozen or thawed), and vegetation. These considerations are factors motivating the development of future cryospheric satellite remote sensing missions, continuing and improving on a 26-year microwave measurement legacy. Perhaps the biggest issues regarding the use of such satellite measurements involve how to relate parameter values at spatial scales as small as a hectare to observations with sensor footprints that may be up to 25 x 25 km. The NASA Cold-land Processes Field Experiment (CLPX) generated a dataset designed to enhance understanding of such scaling issues. CLPX observations were made in February (dry snow) and March (wet snow), 2003 in Colorado, USA, at scales ranging from plot scale to 25 x 25 km satellite footprints. Of interest here are passive microwave observations from ground-based, airborne, and satellite sensors, as well as meteorological and snowpack measurements that will enable studies of the effects of spatial heterogeneity of surface conditions on the observations. Prior to performing such scaling studies, an evaluation of snowpack forward modelling at the plot scale (least heterogeneous scale) is in order. This is the focus of this paper. Many forward models of snow signatures (brightness temperatures) have been developed over the years. 
It is now recognized that a dense medium radiative transfer (DMRT) treatment represents a high degree of physical fidelity for snow modeling, yet dense medium models are particularly sensitive to snowpack structural parameters such as grain size, density, and depth, parameters that may vary substantially within a snowpack. Microwave radiometric data and snow pit measurements collected at the Local-Scale Observation Site (LSOS) during the third Intensive Observation Period (IOP3) of the CLPX have been used to test the capabilities of a DMRT model using the Quasi-Crystalline Approximation with Coherent Potential (QCA-CP). The radiometric measurements were made by the University of Tokyo's Ground Based Microwave Radiometer-7 (GBMR-7) system. We evaluate the degree to which a DMRT-based model can accurately reproduce the GBMR-7 brightness temperatures at different frequencies and incidence angles.

  5. ECOHAB - HYDROGRAPHY AND BIOLOGY TO PROVIDE INFORMATION FOR THE CONSTRUCTION OF A MODEL TO PREDICT THE INITIATION, MAINTENANCE AND DISPERSAL OF RED TIDE ON THE WEST COAST OF FLORIDA

    EPA Science Inventory

    This program is part of a larger program called ECOHAB: Florida that includes this study as well as physical oceanography, circulation patterns, and shelf scale modeling for predicting the occurrence and transport of Karenia brevis (=Gymnodinium breve) red tides. The physical par...

  6. Nicholas Metropolis Award for Outstanding Doctoral Thesis Work in Computational Physics Talk: Understanding Nano-scale Electronic Systems via Large-scale Computation

    NASA Astrophysics Data System (ADS)

    Cao, Chao

    2009-03-01

    Nano-scale physical phenomena and processes, especially those in electronics, have drawn great attention in the past decade. Experiments have shown that the electronic and transport properties of functionalized carbon nanotubes are sensitive to the adsorption of gas molecules such as H2, NO2, and NH3. Similar measurements have also been performed to study the adsorption of proteins on other semiconductor nano-wires. These experiments suggest that nano-scale systems can be useful for making future chemical and biological sensors. Aiming to understand the physical mechanisms underlying and governing property changes at the nano-scale, we start off by investigating, via first-principles methods, the electronic structure of Pd-CNT before and after hydrogen adsorption, and continue with coherent electronic transport using non-equilibrium Green's function techniques combined with density functional theory. Once our results are fully analyzed, they can be used to interpret and understand experimental data, with a few difficult issues still to be addressed. Finally, we discuss a newly developed multi-scale computing architecture, OPAL, that coordinates the simultaneous execution of multiple codes. Inspired by the capabilities of this computing framework, we present a scenario of future modeling and simulation of multi-scale, multi-physical processes.

  7. New Models and Methods for the Electroweak Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, Linda

    2017-09-26

    This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak-scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and in annihilation in space. Accomplishments include creating new tools for analyses of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states such as pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac Gaugino models.

  8. Understanding the West African Monsoon from the analysis of diabatic heating distributions as simulated by climate models

    NASA Astrophysics Data System (ADS)

    Martin, G. M.; Peyrillé, P.; Roehrig, R.; Rio, C.; Caian, M.; Bellon, G.; Codron, F.; Lafore, J.-P.; Poan, D. E.; Idelkadi, A.

    2017-03-01

    Vertical and horizontal distributions of diabatic heating in the West African monsoon (WAM) region as simulated by four model families are analyzed in order to assess the physical processes that affect the WAM circulation. For each model family, atmosphere-only runs of their CMIP5 configurations are compared with more recent configurations which are on the development path toward CMIP6. The various configurations of these models exhibit significant differences in their heating/moistening profiles, related to the different representation of physical processes such as boundary layer mixing, convection, large-scale condensation and radiative heating/cooling. There are also significant differences in the models' simulation of WAM rainfall patterns and circulations. The weaker the radiative cooling in the Saharan region, the larger the ascent in the rainband and the more intense the monsoon flow, while the latitude of the rainband is related to heating in the Gulf of Guinea region and on the northern side of the Saharan heat low. Overall, this work illustrates the difficulty experienced by current climate models in representing the characteristics of monsoon systems, but also that we can still use them to understand the interactions between local subgrid physical processes and the WAM circulation. Moreover, our conclusions regarding the relationship between errors in the large-scale circulation of the WAM and the structure of the heating by small-scale processes will motivate future studies and model development.

  9. Subgrid-scale Condensation Modeling for Entropy-based Large Eddy Simulations of Clouds

    NASA Astrophysics Data System (ADS)

    Kaul, C. M.; Schneider, T.; Pressel, K. G.; Tan, Z.

    2015-12-01

    An entropy- and total water-based formulation of LES thermodynamics, such as that used by the recently developed code PyCLES, is advantageous from physical and numerical perspectives. However, existing closures for subgrid-scale thermodynamic fluctuations assume more traditional choices for prognostic thermodynamic variables, such as liquid potential temperature, and are not directly applicable to entropy-based modeling. Since entropy and total water are generally nonlinearly related to diagnosed quantities like temperature and condensate amounts, neglecting their small-scale variability can lead to bias in simulation results. Here we present the development of a subgrid-scale condensation model suitable for use with entropy-based thermodynamic formulations.

  10. Probing high scale physics with top quarks at the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Dong, Zhe

    With the Large Hadron Collider (LHC) running at the TeV scale, we expect to find deviations from the Standard Model in the experiments and to understand the origin of these deviations. Being the heaviest elementary particle observed so far, with a mass at the electroweak scale, the top quark is a powerful probe for new phenomena of high-scale physics at the LHC. We therefore concentrate on studying high-scale physics phenomena with top quark pair production or decay at the LHC. In this thesis, we study the discovery potential of string resonances decaying to the t/tbar final state, and examine the possibility of observing baryon-number-violating top-quark production or decay, at the LHC. We point out that string resonances with a string scale below 4 TeV can be detected via the t/tbar channel, by reconstructing the center-of-mass frame kinematics of the resonances from either the t/tbar semi-leptonic decay or recent techniques for identifying highly boosted tops. For the study of baryon-number-violating processes, using a model-independent effective approach and focusing on operators of minimal mass dimension, we find that the corresponding effective coefficients could be directly probed at the LHC already with an integrated luminosity of 1 inverse femtobarn at 7 TeV, and further constrained with 30 (100) inverse femtobarns at 7 (14) TeV.

  11. Upscaling soil saturated hydraulic conductivity from pore throat characteristics

    NASA Astrophysics Data System (ADS)

    Ghanbarian, Behzad; Hunt, Allen G.; Skaggs, Todd H.; Jarvis, Nicholas

    2017-06-01

    Upscaling and/or estimating saturated hydraulic conductivity Ksat at the core scale from microscopic/macroscopic soil characteristics has been actively under investigation in the hydrology and soil physics communities for several decades. Numerous models have been developed based on different approaches, such as the bundle of capillary tubes model, pedotransfer functions, etc. In this study, we apply concepts from critical path analysis, an upscaling technique first developed in the physics literature, to estimate saturated hydraulic conductivity at the core scale from microscopic pore throat characteristics reflected in capillary pressure data. With this new model, we find Ksat estimations to be within a factor of 3 of the average measured saturated hydraulic conductivities reported by Rawls et al. (1982) for the eleven USDA soil texture classes.
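    The critical path analysis recipe can be sketched in a few lines. This is a hedged illustration of the general CPA idea (a Katz-Thompson-type scaling with an illustrative prefactor and percolation threshold), not the authors' specific model: find the critical pore-throat radius at which the largest throats first percolate, then scale Ksat with the square of that radius.

```python
import numpy as np

# Hedged CPA-style sketch. Pore-throat radii would come from capillary
# pressure data via Young-Laplace (r = 2*gamma*cos(theta)/Pc); here we use
# a toy lognormal distribution. The prefactor c and threshold are illustrative.
def critical_radius(radii, volume_fractions, threshold=0.15):
    """Largest radius r_c such that throats with r >= r_c occupy enough
    volume (the percolation threshold) to form a connected path."""
    order = np.argsort(radii)[::-1]            # largest throats first
    cum = np.cumsum(volume_fractions[order])
    idx = np.searchsorted(cum, threshold)
    return radii[order][min(idx, len(radii) - 1)]

def ksat_estimate(radii, volume_fractions, porosity, c=1.0 / 226.0):
    """Katz-Thompson-type scaling Ksat ~ c * r_c**2 * porosity (arbitrary units)."""
    r_c = critical_radius(radii, volume_fractions)
    return c * r_c**2 * porosity

rng = np.random.default_rng(1)
r = np.sort(rng.lognormal(mean=1.0, sigma=0.5, size=1000))  # radii (microns)
w = np.full(r.size, 1.0 / r.size)                           # equal volume weights
print(ksat_estimate(r, w, porosity=0.4))
```

Doubling every throat radius quadruples the estimate, reflecting the r_c**2 dependence that underlies this family of models.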

  12. Incorporation of a Generalized Data Assimilation Module within a Global Photospheric Flux Transport Model

    DTIC Science & Technology

    2016-03-31

    22 4.5.2.2 Sources and Physics of F10.7... INTRODUCTION The Sun's strong photospheric magnetic field plays a key role in the plasma physics of the solar atmosphere and thus significantly influences... coronal and solar wind physics; it is also the sole large-scale physical observable readily measured from Earth or spacecraft. The photospheric magnetic

  13. Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Duffy, C.

    2006-05-01

    Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface, subsurface properties and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes needs a smaller computational load, but this negatively affects the accuracy of model results and restricts the physical realization of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes; (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables; and (c) which is flexible enough to incorporate a different number and approximation of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89,000 sq. km) and complex in terms of hydrologic and geomorphic conditions. Also, the types and time scales of the hydrologic processes which are dominant in different parts of the basin differ. Part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradients along the Wasatch front.
Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient and accurate integration of the process model with the data model. The flexibility of this framework allows multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with the least computational burden. However, performing these simulations and the related calibration of these models over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelizing the existing serial integrated-hydrologic-model code. This translates to running the same model simulation on a network of a large number of processors, thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors: the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, the model data structure, and parallel numerical algorithms to obtain high performance.
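The expected payoff of such parallelization can be bounded with Amdahl's law. A minimal sketch (our illustration, not from the paper): if a fraction f of the model's work parallelizes across p processors, the achievable speedup is 1 / ((1 - f) + f / p), which saturates at 1 / (1 - f) no matter how many processors are added.

```python
# Amdahl's-law sketch of parallel speedup for a domain-decomposed model.
# The parallel fraction f = 0.95 is an assumed, illustrative value.
def speedup(f, p):
    """Speedup with parallel fraction f on p processors."""
    return 1.0 / ((1.0 - f) + f / p)

for p in (4, 16, 64):
    print(p, speedup(0.95, p))
```

The serial residue (here 5%, dominated in practice by interprocessor communication and I/O) caps the speedup at 20x, which is why communication-aware data structures and algorithms matter as much as the processor count.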

  14. Computational Thermomechanical Modelling of Early-Age Silicate Composites

    NASA Astrophysics Data System (ADS)

    Vala, J.; Št'astník, S.; Kozák, V.

    2009-09-01

    Strains and stresses in early-age silicate composites, widely used in civil engineering, especially in fresh concrete mixtures, in addition to those caused by exterior mechanical loads, are results of complicated non-deterministic physical and chemical processes. Their numerical prediction at the macro-scale level requires the non-trivial physical analysis based on the thermodynamic principles, making use of micro-structural information from both theoretical and experimental research. The paper introduces a computational model, based on a nonlinear system of macroscopic equations of evolution, supplied with certain effective material characteristics, coming from the micro-scale analysis, and sketches the algorithm for its numerical analysis.

  15. Short‐term time step convergence in a climate model

    PubMed Central

    Rasch, Philip J.; Taylor, Mark A.; Jablonowski, Christiane

    2015-01-01

    Abstract This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral‐element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process‐coupling time step is varied between 1800 and 1 s to reveal the convergence rate with respect to the temporal resolution. Special attention is paid to the behavior of the parameterized subgrid‐scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full‐physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4—considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid‐scale physical parameterizations, the stratiform cloud schemes are associated with the largest time‐stepping errors, and are the primary cause of slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify the time‐stepping errors and identify the related model sensitivities. PMID:27660669
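    The convergence-rate diagnostic used in such studies can be reproduced in miniature. This is a sketch of the assumed procedure, not the CAM5 analysis code: run with several time steps, measure the error against a reference solution, and fit the slope of log(error) versus log(dt) to estimate the order p in error ~ C * dt**p.

```python
import numpy as np

# Estimate the temporal convergence order from an error-vs-time-step series.
def convergence_order(dts, errors):
    """Slope of log(error) vs log(dt), i.e. p in error ~ C * dt**p."""
    slope, _ = np.polyfit(np.log(dts), np.log(errors), 1)
    return slope

# Synthetic errors for a scheme converging at order ~0.4, the rate the
# paper reports for full-physics CAM5 (values here are made up).
dts = np.array([1800.0, 900.0, 450.0, 225.0])
errors = 2.0e-3 * dts**0.4
p = convergence_order(dts, errors)
print(p)
```

A first-order-accurate coupling would give a slope near 1.0; a slope near 0.4, as found for CAM5, signals that some parameterization (here the stratiform cloud schemes) is degrading the formal accuracy.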

  16. Tests of the Weak Equivalence Principle Below Fifty Microns

    NASA Astrophysics Data System (ADS)

    Leopardi, Holly; Hoyle, C. D.; Smith, Dave; Cardenas, Crystal; Harter, Andrew Conrad

    2014-03-01

    Due to the incompatibility of the Standard Model and General Relativity, tests of gravity remain at the forefront of experimental physics research. The Weak Equivalence Principle (WEP), which states that in a uniform gravitational field all objects fall with the same acceleration regardless of composition, total mass, or structure, is fundamentally the result of the equality of inertial mass and gravitational mass. The WEP has been effectively studied since the time of Galileo, and is a central feature of General Relativity; its violation at any length scale would bring into question fundamental aspects of the current model of gravitational physics. A variety of scenarios predict possible mechanisms that could result in a violation of the WEP. The Humboldt State University Gravitational Physics Laboratory is using a torsion pendulum with equal masses of different materials (a ``composition dipole'' configuration) to determine whether the WEP holds below the 50-micron distance scale. The experiment will measure the twist of a torsion pendulum as an attractor mass is oscillated nearby in a parallel-plate configuration, providing a time-varying torque on the pendulum. The size and distance dependence of the torque variation will provide means to determine deviations from accepted models of gravity on untested distance scales.

  17. Correlation lengths in hydrodynamic models of active nematics.

    PubMed

    Hemingway, Ewan J; Mishra, Prashant; Marchetti, M Cristina; Fielding, Suzanne M

    2016-09-28

    We examine the scaling with activity of the emergent length scales that control the nonequilibrium dynamics of an active nematic liquid crystal, using two popular hydrodynamic models that have been employed in previous studies. In both models we find that the chaotic spatio-temporal dynamics in the regime of fully developed active turbulence is controlled by a single active scale determined by the balance of active and elastic stresses, regardless of whether the active stress is extensile or contractile in nature. The observed scaling of the kinetic energy and enstrophy with activity is consistent with our single-length scale argument and simple dimensional analysis. Our results provide a unified understanding of apparent discrepancies in the previous literature and demonstrate that the essential physics is robust to the choice of model.
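    The single-length-scale argument is pure dimensional analysis: balancing the elastic stress K / l**2 against the active stress |alpha| gives an active length l_a = sqrt(K / |alpha|). A minimal sketch (symbols follow common active-nematics usage; the numbers are arbitrary):

```python
import numpy as np

# Active length scale from the balance of elastic (K/l^2) and active
# (|alpha|) stresses; K is the nematic elastic constant, alpha the activity.
def active_length(K, alpha):
    return np.sqrt(K / abs(alpha))

K = 1.0
for alpha in (0.01, 0.04, 0.16):
    print(alpha, active_length(K, alpha))
```

The length halves each time the activity quadruples, the same |alpha|**(-1/2) scaling that organizes the kinetic energy and enstrophy data in both hydrodynamic models, for extensile and contractile stresses alike.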

  18. Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models

    NASA Astrophysics Data System (ADS)

    Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan

    2017-04-01

    Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced order modeling tools which couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
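    The NMF step at the heart of such blind source separation can be sketched with plain numpy. This is an illustrative Lee-Seung multiplicative-update implementation, not the MADS/Julia code: a nonnegative data matrix X (observation wells by geochemical species) is factored into a mixing matrix W and source signatures H.

```python
import numpy as np

# Minimal NMF via Lee-Seung multiplicative updates minimizing ||X - W H||_F.
# Nonnegativity of W and H is preserved because updates multiply by
# ratios of nonnegative quantities.
def nmf(X, n_sources, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, n_sources))
    H = rng.random((n_sources, m))
    eps = 1e-12                     # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Two synthetic "groundwater types" mixed into six observation wells.
rng = np.random.default_rng(1)
true_W = rng.random((6, 2))         # mixing fractions per well
true_H = rng.random((2, 5))         # source concentration signatures
X = true_W @ true_H

W, H = nmf(X, n_sources=2)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(rel_err)
```

In the full methodology the recovered sources from many restarts would then be grouped by the customized k-means step to identify robust groundwater types.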

  19. Rheological behavior of the crust and mantle in subduction zones over time scales from earthquake (minute) to million years inferred from thermomechanical models and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    The key achievement of the geodynamic modelling community, to which the work of Evgenii Burov and his students greatly contributed, is the application of "realistic" mineral-physics-based non-linear rheological models to simulate deformation processes in the crust and mantle. Subduction, a type example of such a process, is an essentially multi-scale phenomenon with time scales spanning from the geological scale to the earthquake scale, with the seismic cycle in between. In this study we test the possibility of simulating the entire subduction process from rupture (~1 min) to geological time (millions of years) with a single cross-scale thermomechanical model that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. First we generate a thermo-mechanical model of a subduction zone at the geological time scale, including a narrow subduction channel with "wet-quartz" visco-elasto-plastic rheology and low static friction. We next introduce into the same model the classic rate-and-state friction law in the subduction channel, leading to stick-slip instability. This model generates a spontaneous earthquake sequence. In order to follow the deformation process in detail during the entire seismic cycle, and over multiple seismic cycles, we use an adaptive time-step algorithm, changing the step from 40 s during an earthquake to minutes-to-5 years during postseismic and interseismic periods. We observe many interesting deformation patterns and demonstrate that, contrary to conventional ideas, this model predicts that postseismic deformation is controlled by visco-elastic relaxation in the mantle wedge already from hours to a day after great (M>9) earthquakes. We demonstrate that our results are consistent with the postseismic surface displacements after the Great Tohoku Earthquake over the day-to-4-year time range.
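    The adaptive time-step rule described above can be sketched simply. This is an assumed form, common in rate-and-state earthquake-cycle codes rather than taken from the paper: limit the slip per step to a tolerance, so the step shrinks toward tens of seconds at coseismic slip rates and grows toward years during interseismic creep.

```python
# Adaptive time step dt ~ dx_tol / v, clipped between coseismic and
# interseismic bounds. All constants are illustrative assumptions.
def adaptive_dt(v, dx_tol=1.0e-3, dt_min=40.0, dt_max=5 * 3.15e7):
    """Time step (s) limiting slip per step to dx_tol metres at slip rate v (m/s)."""
    return min(dt_max, max(dt_min, dx_tol / v))

print(adaptive_dt(1.0))       # coseismic slip rates (~1 m/s) clip to the 40 s floor
print(adaptive_dt(1.0e-9))    # near plate-rate creep yields steps of ~10 days or more
```

Spanning slip rates from 1 m/s to plate rates, the step varies over seven orders of magnitude, which is what makes a single cross-scale simulation of rupture, seismic cycle, and geological evolution tractable.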

  20. Advances in understanding and parameterization of small-scale physical processes in the marine Arctic climate system: a review

    NASA Astrophysics Data System (ADS)

    Vihma, T.; Pirazzini, R.; Fer, I.; Renfrew, I. A.; Sedlar, J.; Tjernström, M.; Lüpkes, C.; Nygård, T.; Notz, D.; Weiss, J.; Marsan, D.; Cheng, B.; Birnbaum, G.; Gerland, S.; Chechin, D.; Gascard, J. C.

    2014-09-01

    The Arctic climate system includes numerous highly interactive small-scale physical processes in the atmosphere, sea ice, and ocean. During and since the International Polar Year 2007-2009, significant advances have been made in understanding these processes. Here, these recent advances are reviewed, synthesized, and discussed. In atmospheric physics, the primary advances have been in cloud physics, radiative transfer, mesoscale cyclones, coastal, and fjordic processes as well as in boundary layer processes and surface fluxes. In sea ice and its snow cover, advances have been made in understanding of the surface albedo and its relationships with snow properties, the internal structure of sea ice, the heat and salt transfer in ice, the formation of superimposed ice and snow ice, and the small-scale dynamics of sea ice. For the ocean, significant advances have been related to exchange processes at the ice-ocean interface, diapycnal mixing, double-diffusive convection, tidal currents and diurnal resonance. Despite this recent progress, some of these small-scale physical processes are still not sufficiently understood: these include wave-turbulence interactions in the atmosphere and ocean, the exchange of heat and salt at the ice-ocean interface, and the mechanical weakening of sea ice. Many other processes are reasonably well understood as stand-alone processes but the challenge is to understand their interactions with and impacts and feedbacks on other processes. Uncertainty in the parameterization of small-scale processes continues to be among the greatest challenges facing climate modelling, particularly in high latitudes. Further improvements in parameterization require new year-round field campaigns on the Arctic sea ice, closely combined with satellite remote sensing studies and numerical model experiments.

  1. Advances in understanding and parameterization of small-scale physical processes in the marine Arctic climate system: a review

    NASA Astrophysics Data System (ADS)

    Vihma, T.; Pirazzini, R.; Renfrew, I. A.; Sedlar, J.; Tjernström, M.; Nygård, T.; Fer, I.; Lüpkes, C.; Notz, D.; Weiss, J.; Marsan, D.; Cheng, B.; Birnbaum, G.; Gerland, S.; Chechin, D.; Gascard, J. C.

    2013-12-01

    The Arctic climate system includes numerous highly interactive small-scale physical processes in the atmosphere, sea ice, and ocean. During and since the International Polar Year 2007-2008, significant advances have been made in understanding these processes. Here these advances are reviewed, synthesized and discussed. In atmospheric physics, the primary advances have been in cloud physics, radiative transfer, mesoscale cyclones, coastal and fjordic processes, as well as in boundary-layer processes and surface fluxes. In sea ice and its snow cover, advances have been made in understanding of the surface albedo and its relationships with snow properties, the internal structure of sea ice, the heat and salt transfer in ice, the formation of super-imposed ice and snow ice, and the small-scale dynamics of sea ice. In the ocean, significant advances have been related to exchange processes at the ice-ocean interface, diapycnal mixing, tidal currents and diurnal resonance. Despite this recent progress, some of these small-scale physical processes are still not sufficiently understood: these include wave-turbulence interactions in the atmosphere and ocean, the exchange of heat and salt at the ice-ocean interface, and the mechanical weakening of sea ice. Many other processes are reasonably well understood as stand-alone processes but the challenge is to understand their interactions with, and impacts and feedbacks on, other processes. Uncertainty in the parameterization of small-scale processes continues to be among the largest challenges facing climate modeling, and nowhere is this more true than in the Arctic. Further improvements in parameterization require new year-round field campaigns on the Arctic sea ice, closely combined with satellite remote sensing studies and numerical model experiments.

  2. Serum Vitamin E Concentrations and Recovery of Physical Function During the Year After Hip Fracture

    PubMed Central

    Miller, Ram R.; Hicks, Gregory E.; Orwig, Denise L.; Hochberg, Marc C.; Semba, Richard D.; Yu-Yahiro, Janet A.; Ferrucci, Luigi; Magaziner, Jay; Shardell, Michelle D.

    2011-01-01

    Background. Poor nutritional status after hip fracture is common and may contribute to physical function decline. Low serum concentrations of vitamin E have been associated with decline in physical function among older adults, but the role of vitamin E in physical recovery from hip fracture has never been explored. Methods. Serum concentrations of α- and γ-tocopherol, the two major forms of vitamin E, were measured in female hip fracture patients from the Baltimore Hip Studies cohort 4 at baseline and at 2-, 6-, and 12-month postfracture follow-up visits. Four physical function measures—Six-Minute Walk Distance, Lower Extremity Gain Scale, Short Form-36 Physical Functioning Domain, and Yale Physical Activity Survey—were assessed at 2, 6, and 12 months postfracture. Generalized estimating equations modeled the relationship between baseline and time-varying serum tocopherol concentrations and physical function after hip fracture. Results. A total of 148 women aged 65 years and older were studied. After adjusting for covariates, baseline vitamin E concentrations were positively associated with Six-Minute Walk Distance, Lower Extremity Gain Scale, and Yale Physical Activity Survey scores (p < .1) and faster improvement in Lower Extremity Gain Scale and Yale Physical Activity Survey scores (p < .008). Time-varying vitamin E was also positively associated with Six-Minute Walk Distance, Lower Extremity Gain Scale, Yale Physical Activity Survey, and Short Form-36 Physical Functioning Domain (p < .03) and faster improvement in Six-Minute Walk Distance and Short Form-36 Physical Functioning Domain (p < .07). Conclusions. Serum concentrations of both α- and γ-tocopherol were associated with better physical function after hip fracture. Vitamin E may represent a potentially modifiable factor related to recovery of postfracture physical function. PMID:21486921

  3. Advances in modelling of biomimetic fluid flow at different scales

    PubMed Central

    2011-01-01

    The biomimetic flow at different scales has been discussed at length. The need of looking into the biological surfaces and morphologies and both geometrical and physical similarities to imitate the technological products and processes has been emphasized. The complex fluid flow and heat transfer problems, the fluid-interface and the physics involved at multiscale and macro-, meso-, micro- and nano-scales have been discussed. The flow and heat transfer simulation is done by various CFD solvers including Navier-Stokes and energy equations, lattice Boltzmann method and molecular dynamics method. Combined continuum-molecular dynamics method is also reviewed. PMID:21711847

  4. A Self-Critique of Self-Organized Criticality in Astrophysics

    NASA Astrophysics Data System (ADS)

    Aschwanden, Markus J.

    2015-08-01

    The concept of ``self-organized criticality'' (SOC) was originally proposed as an explanation of 1/f-noise by Bak, Tang, and Wiesenfeld (1987), but turned out to have a far broader significance for scale-free nonlinear energy dissipation processes occurring in the entire universe. Over the last 30 years, an inspiring cross-fertilization from complexity theory to solar and astrophysics took place, where the SOC concept was initially applied to solar flares, stellar flares, and magnetospheric substorms, and later extended to the radiation belt, the heliosphere, lunar craters, the asteroid belt, the Saturn ring, pulsar glitches, soft X-ray repeaters, blazars, black-hole objects, cosmic rays, and boson clouds. The application of SOC concepts has been performed by numerical cellular automaton simulations, by analytical calculations of statistical (powerlaw-like) distributions based on physical scaling laws, and by observational tests of theoretically predicted size distributions and waiting time distributions. Attempts have been undertaken to import physical models into numerical SOC toy models. The novel applications stimulated also vigorous debates about the discrimination between SOC-related and non-SOC processes, such as phase transitions, turbulence, random-walk diffusion, percolation, branching processes, network theory, chaos theory, fractality, multi-scale, and other complexity phenomena. We review SOC models applied to astrophysical observations, attempt to describe what physics can be captured by SOC models, and offer a critique of weaknesses and strengths in existing SOC models.

  6. Inference from the small scales of cosmic shear with current and future Dark Energy Survey data

    DOE PAGES

    MacCrann, N.; Aleksić, J.; Amara, A.; ...

    2016-11-05

    Cosmic shear is sensitive to fluctuations in the cosmological matter density field, including on small physical scales, where matter clustering is affected by baryonic physics in galaxies and galaxy clusters, such as star formation, supernovae feedback and AGN feedback. While this muddies any cosmological information contained in small-scale cosmic shear measurements, it also means that cosmic shear has the potential to constrain baryonic physics and galaxy formation. We perform an analysis of the Dark Energy Survey (DES) Science Verification (SV) cosmic shear measurements, now extended to smaller scales, using the Mead et al. 2015 halo model to account for baryonic feedback. While the SV data has limited statistical power, we demonstrate using a simulated likelihood analysis that the final DES data will have the statistical power to differentiate among baryonic feedback scenarios. We also explore some of the difficulties in interpreting the small scales in cosmic shear measurements, presenting estimates of the size of several other systematic effects that make inference from small scales difficult, including uncertainty in the modelling of intrinsic alignment on nonlinear scales, `lensing bias', and shape measurement selection effects. For the latter two, we make use of novel image simulations. While future cosmic shear datasets have the statistical power to constrain baryonic feedback scenarios, there are several systematic effects that require improved treatments in order to make robust conclusions about baryonic feedback.

  7. Application of the Transtheoretical Model of behavior change to the physical activity behavior of WIC mothers.

    PubMed

    Fahrenwald, Nancy L; Walker, Susan Noble

    2003-01-01

    This descriptive-correlational study examined the Transtheoretical Model (TTM) of behavior change in relationship to the physical activity behavior of mothers receiving assistance from the Women, Infants, and Children program. A purposive sample (N = 30) of six women at each of the five stages of readiness for behavior change was used. Relationships between stage of behavior change (measured using the Stage of Exercise Adoption tool) and other TTM constructs were examined. The constructs and corresponding instruments included physical activity behavior (Seven-Day Physical Activity Recall), pros, cons, decisional balance (Exercise Benefits/Barriers Scale and two open-ended questions), self-efficacy (Self-efficacy for Exercise scale), and processes of behavior change (Processes of Exercise Adoption tool and the Social Support for Exercise scale). Significant relationships were found between stage of behavior change and two physical activity energy expenditure indices (rs = 0.71-0.73, p < 0.01), daily minutes of moderate to very hard physical activity (rs = 0.81, p < 0.01), pros (rs = 0.56, p < 0.01), cons (rs = -0.52, p < 0.05), decisional balance (rs = 0.56, p < 0.01), and self-efficacy (rs = 0.56, p < 0.01). Use of the 10 processes of change differed by stage of change. Pros to physical activity included a sense of accomplishment, increased strength, stress relief, and getting in shape after pregnancy. Cons included fatigue, childcare, and cold weather. Results support the TTM as relevant to WIC mothers and suggest strategies to increase physical activity in this population.

  8. Factor Structure and Measurement Invariance of a 10-Item Decisional Balance Scale: Longitudinal and Subgroup Examination within an Adult Diabetic Sample

    ERIC Educational Resources Information Center

    Pickering, Michael A.; Plotnikoff, Ronald C.

    2009-01-01

    This study explores the longitudinal and subgroup measurement properties of a 10-item, physical activity decisional balance scale, previously published by Plotnikoff, Blanchard, Hotz, and Rhodes (2001), within a diabetic sample of Canadian adults. Results indicated that a three-factor measurement model consistently improved model fit compared to…

  9. Surface Rupture Effects on Earthquake Moment-Area Scaling Relations

    NASA Astrophysics Data System (ADS)

    Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro

    2017-09-01

    Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M0) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M0 ∝ A^2, between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M0-A scaling relations for strike-slip earthquakes.
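    The scaling contrast at the heart of this abstract can be sketched numerically. The snippet below is illustrative only: the self-similar regime uses the constant-stress-drop relation M0 ∝ Δσ·A^(3/2) with a circular-crack-like prefactor, while the transition regime uses a generic M0 ∝ A^2 law with an invented prefactor; none of these constants come from the paper.

```python
def moment_self_similar(area_km2, stress_drop_mpa=3.0, c=0.4):
    """Self-similar regime: M0 = c * dsigma * A**1.5 (constant stress drop).
    c ~ 0.4 is a circular-crack-like shape factor; stress drop in MPa."""
    a_m2 = area_km2 * 1e6
    return c * (stress_drop_mpa * 1e6) * a_m2 ** 1.5  # N*m

def moment_transition(area_km2, k=1.0e5):
    """Transition regime: M0 proportional to A**2 (k is an invented prefactor
    chosen only to demonstrate the slope, not a calibrated constant)."""
    a_m2 = area_km2 * 1e6
    return k * a_m2 ** 2  # N*m

# Doubling the rupture area multiplies M0 by 2**1.5 in the self-similar
# regime but by 4 in the transition regime.
ratio_ss = moment_self_similar(200.0) / moment_self_similar(100.0)
ratio_tr = moment_transition(200.0) / moment_transition(100.0)
print(round(ratio_ss, 3), round(ratio_tr, 3))  # prints: 2.828 4.0
```

    The steeper moment growth with area is what makes the transition regime appear counter-intuitive relative to the self-similar regime.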

  10. Combination of statistical and physically based methods to assess shallow slide susceptibility at the basin scale

    NASA Astrophysics Data System (ADS)

    Oliveira, Sérgio C.; Zêzere, José L.; Lajas, Sara; Melo, Raquel

    2017-07-01

    Approaches used to assess shallow slide susceptibility at the basin scale are conceptually different depending on the use of statistical or physically based methods. The former are based on the assumption that the same causes are more likely to produce the same effects, whereas the latter are based on the comparison between forces which tend to promote movement along the slope and the counteracting forces that are resistant to motion. Within this general framework, this work tests two hypotheses: (i) although conceptually and methodologically distinct, the statistical and deterministic methods generate similar shallow slide susceptibility results regarding the model's predictive capacity and spatial agreement; and (ii) the combination of shallow slide susceptibility maps obtained with statistical and physically based methods, for the same study area, generate a more reliable susceptibility model for shallow slide occurrence. These hypotheses were tested at a small test site (13.9 km²) located north of Lisbon (Portugal), using a statistical method (the information value method, IV) and a physically based method (the infinite slope method, IS). The landslide susceptibility maps produced with the statistical and deterministic methods were combined into a new landslide susceptibility map. The latter was based on a set of integration rules defined by the cross tabulation of the susceptibility classes of both maps and analysis of the corresponding contingency tables. The results demonstrate a higher predictive capacity of the new shallow slide susceptibility map, which combines the independent results obtained with statistical and physically based models. Moreover, the combination of the two models allowed the identification of areas where the results of the information value and the infinite slope methods are contradictory. Thus, these areas were classified as uncertain and deserve additional investigation at a more detailed scale.
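    For readers unfamiliar with the physically based side of this comparison, the infinite slope method reduces to a closed-form factor of safety. The sketch below uses the standard textbook formula with made-up soil parameters, not the calibration of the Lisbon study:

```python
import math

def infinite_slope_fs(c_eff, phi_deg, gamma, gamma_w, z, beta_deg, m):
    """Factor of safety for an infinite planar slope (standard textbook form).
    c_eff: effective cohesion (kPa); phi_deg: friction angle (deg);
    gamma: soil unit weight (kN/m^3); gamma_w: water unit weight (kN/m^3);
    z: failure-plane depth (m); beta_deg: slope angle (deg);
    m: fraction of the soil column that is saturated (0..1)."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    normal = gamma * z * math.cos(beta) ** 2          # effective normal stress term
    pore = gamma_w * m * z * math.cos(beta) ** 2      # pore-pressure reduction
    shear = gamma * z * math.sin(beta) * math.cos(beta)
    return (c_eff + (normal - pore) * math.tan(phi)) / shear

# Illustrative values: a dry slope is stable (FS > 1); full saturation of the
# same slope drives FS below 1, i.e. predicted failure.
fs_dry = infinite_slope_fs(5.0, 32.0, 18.0, 9.81, 2.0, 30.0, 0.0)
fs_wet = infinite_slope_fs(5.0, 32.0, 18.0, 9.81, 2.0, 30.0, 1.0)
print(round(fs_dry, 2), round(fs_wet, 2))  # prints: 1.4 0.81
```

    Cells with FS below 1 under an assumed saturation scenario are the ones the IS method would flag as susceptible.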

  11. Partially-Averaged Navier Stokes Model for Turbulence: Implementation and Validation

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Abdol-Hamid, Khaled S.

    2005-01-01

    Partially-averaged Navier Stokes (PANS) is a suite of turbulence closure models of various modeled-to-resolved scale ratios ranging from Reynolds-averaged Navier Stokes (RANS) to Navier-Stokes (direct numerical simulations). The objective of PANS, like hybrid models, is to resolve large scale structures at reasonable computational expense. The modeled-to-resolved scale ratio or the level of physical resolution in PANS is quantified by two parameters: the unresolved-to-total ratios of kinetic energy (f_k) and dissipation (f_epsilon). The unresolved-scale stress is modeled with the Boussinesq approximation and modeled transport equations are solved for the unresolved kinetic energy and dissipation. In this paper, we first present a brief discussion of the PANS philosophy followed by a description of the implementation procedure and finally perform preliminary evaluation in benchmark problems.
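    The role of f_k and f_epsilon can be made concrete with the coefficient modification usually quoted for Girimaji's PANS closure. This is a sketch with standard k-epsilon constants assumed, not code from this paper:

```python
def pans_ce2_star(f_k, f_eps=1.0, c_e1=1.44, c_e2=1.92):
    """PANS-modified destruction coefficient in the unresolved-dissipation
    equation: C*_e2 = C_e1 + (f_k / f_eps) * (C_e2 - C_e1).
    c_e1, c_e2 are the standard k-epsilon constants (an assumption here);
    f_k, f_eps are the unresolved-to-total ratios of the abstract."""
    return c_e1 + (f_k / f_eps) * (c_e2 - c_e1)

# f_k = 1 recovers the parent RANS closure; a smaller f_k lowers the modeled
# fraction of kinetic energy and lets the simulation resolve more scales.
print(round(pans_ce2_star(1.0), 4), round(pans_ce2_star(0.5), 4))  # prints: 1.92 1.68
```

    Sweeping f_k between 1 and 0 is what moves the model continuously between the RANS and DNS ends of the suite.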

  12. Hidden sector behind the CKM matrix

    NASA Astrophysics Data System (ADS)

    Okawa, Shohei; Omura, Yuji

    2017-08-01

    The small quark mixing, described by the Cabibbo-Kobayashi-Maskawa (CKM) matrix in the standard model, may be a clue to reveal new physics around the TeV scale. We consider a simple scenario in which extra particles in a hidden sector radiatively mediate the flavor violation to the quark sector around the TeV scale and effectively realize the observed CKM matrix. The lightest particle in the hidden sector, whose contribution to the CKM matrix is expected to be dominant, is a good dark matter (DM) candidate. There are many possible setups to describe this scenario, so we investigate some universal predictions of this kind of model, focusing on the contribution of DM to the quark mixing and flavor physics. In this scenario, there is an explicit relation between the CKM matrix and flavor-violating couplings, such as four-quark couplings, because both are radiatively induced by the particles in the hidden sector. Then, we can explicitly find the DM mass region and the size of Yukawa couplings between the DM and quarks, based on the study of flavor physics and DM physics. In conclusion, we show that the DM mass in our scenario is around the TeV scale, and the Yukawa couplings are between O(0.01) and O(1). The spin-independent DM scattering cross section is estimated as O(10^-9) pb. An extra colored particle is also predicted at the O(10) TeV scale.

  13. Sixty-Year Career in Solar Physics

    NASA Astrophysics Data System (ADS)

    Fang, C.

    2018-05-01

    This memoir reviews my academic career in solar physics for 60 years, including my research on non-LTE modeling, white-light flares, and small-scale solar activities. Through this narrative, the reader can catch a glimpse of the development of solar physics research in mainland China from scratch. In the end, some prospects for future development are given.

  14. Physics on the Smallest Scales: An Introduction to Minimal Length Phenomenology

    ERIC Educational Resources Information Center

    Sprenger, Martin; Nicolini, Piero; Bleicher, Marcus

    2012-01-01

    Many modern theories which try to unify gravity with the Standard Model of particle physics, such as e.g. string theory, propose two key modifications to the commonly known physical theories: the existence of additional space dimensions; the existence of a minimal length distance or maximal resolution. While extra dimensions have received a wide…

  15. Strain localization in models and nature: bridging the gaps.

    NASA Astrophysics Data System (ADS)

    Burov, E.; Francois, T.; Leguille, J.

    2012-04-01

    Mechanisms of strain localization and their role in tectonic evolution are still largely debated. Indeed, laboratory data on strain localization processes are not abundant, do not cover the entire range of possible mechanisms, and have to be extrapolated, sometimes with great uncertainty, to geological scales, while observations of localization processes at outcrop scale are scarce, not always representative, and usually difficult to quantify. Numerical thermo-mechanical models allow us to investigate the relative importance of some of the localization processes, whether they are hypothesized or observed at laboratory or outcrop scale. The numerical models can test different observationally or analytically derived laws in terms of their applicability to natural scales and tectonic processes. The models are limited, however, in their capacity to reproduce physical mechanisms, and necessarily simplify the softening laws, leading to "numerical" localization. Numerical strain localization is also limited by grid resolution and the ability of specific numerical codes to handle large strains and the complexity of the associated physical phenomena. Hence, multiple iterations between observations and models are needed to elucidate the causes of strain localization in nature. We here investigate the relative impact of different weakening laws on localization of deformation using large-strain thermo-mechanical models. Using several "generic" rifting and collision settings, we test the implications of structural softening, tectonic heritage, shear heating, friction angle and cohesion softening, ductile softening (mimicking grain-size reduction), as well as a number of other mechanisms such as fluid-assisted phase changes. The results suggest that different mechanisms of strain localization may interfere in nature, yet in most cases it is not straightforward to establish quantifiable links between the laboratory data and the best-fitting parameters of the effective softening laws that reproduce large-scale tectonic evolution. For example, one of the most effective and widely used mechanisms of "numerical" strain localization is friction angle softening. Yet this very law appears to be the most difficult to justify on physical and observational grounds.

  16. The Nature of Global Large-scale Sea Level Variability in Relation to Atmospheric Forcing: A Modeling Study

    NASA Technical Reports Server (NTRS)

    Fukumori, I.; Raghunath, R.; Fu, L. L.

    1996-01-01

    The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability over periods from days to a year is examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.

  17. The biology and polymer physics underlying large‐scale chromosome organization

    PubMed Central

    2017-01-01

    Chromosome large‐scale organization is a beautiful example of the interplay between physics and biology. DNA molecules are polymers and thus belong to the class of molecules for which physicists have developed models and formulated testable hypotheses to understand their arrangement and dynamic properties in solution, based on the principles of polymer physics. Biologists documented and discovered the biochemical basis for the structure, function and dynamic spatial organization of chromosomes in cells. The underlying principles of chromosome organization have recently been revealed in unprecedented detail using high‐resolution chromosome capture technology that can simultaneously detect chromosome contact sites throughout the genome. These independent lines of investigation have now converged on a model in which DNA loops, generated by the loop extrusion mechanism, are the basic organizational and functional units of the chromosome. PMID:29105235

  18. A mixed multiscale model better accounting for the cross term of the subgrid-scale stress and for backscatter

    NASA Astrophysics Data System (ADS)

    Thiry, Olivier; Winckelmans, Grégoire

    2016-02-01

    In the large-eddy simulation (LES) of turbulent flows, models are used to account for the subgrid-scale (SGS) stress. We here consider LES with "truncation filtering only" (i.e., that due to the LES grid), thus without regular explicit filtering added. The SGS stress tensor is then composed of two terms: the cross term that accounts for interactions between resolved scales and unresolved scales, and the Reynolds term that accounts for interactions between unresolved scales. Both terms provide forward (dissipation) and backward (production, also called backscatter) energy transfer. Purely dissipative, eddy-viscosity type, SGS models are widely used: Smagorinsky-type models, or more advanced multiscale-type models. Dynamic versions have also been developed, where the model coefficient is determined using a dynamic procedure. Being dissipative by nature, those models do not provide backscatter. Even when using the dynamic version with local averaging, one typically uses clipping to forbid negative values of the model coefficient and hence ensure the stability of the simulation, thereby removing the backscatter produced by the dynamic procedure. More advanced SGS models that better conform to the physics of the true SGS stress, while remaining stable, are thus desirable. We here investigate, in decaying homogeneous isotropic turbulence, and using a de-aliased pseudo-spectral method, the behavior of the cross term and of the Reynolds term: in terms of dissipation spectra, and in terms of the probability density function (pdf) of dissipation in physical space, both positive and negative (backscatter). We then develop a new mixed model that better accounts for the physics of the SGS stress and for the backscatter. It has a cross term part which is built using a scale-similarity argument, further combined with a correction for Galilean invariance using a pseudo-Leonard term: this is the term that also provides backscatter.
It also has an eddy-viscosity multiscale model part that accounts for all the remaining phenomena (also for the incompleteness of the cross term model), that is dynamic and that adjusts the overall dissipation. The model is tested, both a priori and a posteriori, and is compared to the direct numerical simulation and to the exact SGS terms, also in time. The model is seen to provide accurate energy spectra, also in comparison to the dynamic Smagorinsky model. It also provides significant backscatter (although four times less than the real SGS stress), while remaining stable.
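    A toy 1-D illustration of the sign issue discussed above, assuming a Bardina-style similarity term and a simple top-hat filter (all fields and coefficients invented): the similarity term yields a two-signed pointwise transfer-like product, whereas an eddy-viscosity dissipation is single-signed by construction.

```python
import math

def box_filter(u, width=3):
    """Periodic top-hat (box) filter of odd width: a minimal LES-style filter."""
    n, h = len(u), width // 2
    return [sum(u[(i + j) % n] for j in range(-h, h + 1)) / width for i in range(n)]

def ddx(u, dx):
    """Central difference on a periodic grid."""
    n = len(u)
    return [(u[(i + 1) % n] - u[(i - 1) % n]) / (2 * dx) for i in range(n)]

n, dx = 64, 2 * math.pi / 64
u = [math.sin(i * dx) + 0.4 * math.sin(5 * i * dx + 1.0) for i in range(n)]

# Scale-similarity (Bardina-type) stress estimate: filt(u*u) - filt(u)*filt(u).
tau = [a - b * b for a, b in zip(box_filter([v * v for v in u]), box_filter(u))]
dudx = ddx(u, dx)

# The pointwise product -tau * du/dx takes both signs, i.e. the similarity
# term permits local backscatter; an eddy-viscosity dissipation
# nu_t * (du/dx)^2 (nu_t = 0.1 here, arbitrary) is never negative.
sim_transfer = [-t * g for t, g in zip(tau, dudx)]
ev_dissipation = [0.1 * g * g for g in dudx]
print(min(sim_transfer) < 0 < max(sim_transfer), min(ev_dissipation) >= 0)  # prints: True True
```

    This is only a 1-D caricature of the tensorial SGS stress, but it shows why a similarity-based cross term can supply the backscatter that clipped dynamic eddy-viscosity models remove.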

  19. Censored rainfall modelling for estimation of fine-scale extremes

    NASA Astrophysics Data System (ADS)

    Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro

    2018-01-01

    Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.
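    As a point of reference for the rectangular-pulse idea (not the authors' censored BLRP calibration, which additionally clusters cells within storms), a minimal un-clustered Poisson rectangular-pulse simulator with invented parameters looks like this:

```python
import random

def simulate_pulses(t_end_h, lam=0.1, mean_dur_h=2.0, mean_int_mmh=3.0, seed=1):
    """Pulse origins arrive as a Poisson process (rate lam per hour); each
    pulse is a rectangle with exponential duration (h) and intensity (mm/h).
    All parameter values are invented for illustration."""
    rng = random.Random(seed)
    pulses, t = [], 0.0
    while True:
        t += rng.expovariate(lam)
        if t >= t_end_h:
            break
        pulses.append((t, t + rng.expovariate(1.0 / mean_dur_h),
                       rng.expovariate(1.0 / mean_int_mmh)))
    return pulses

def aggregate(pulses, t_end_h, dt_h):
    """Accumulate pulse intensity into fixed-width bins, e.g. 5-min depths (mm)."""
    n = int(t_end_h / dt_h)
    depth = [0.0] * n
    for start, end, inten in pulses:
        for k in range(max(0, int(start / dt_h)), min(n, int(end / dt_h) + 1)):
            overlap = max(0.0, min(end, (k + 1) * dt_h) - max(start, k * dt_h))
            depth[k] += inten * overlap
    return depth

series = aggregate(simulate_pulses(1000.0), 1000.0, 5 / 60)
print(len(series), min(series) >= 0.0, sum(series) > 0.0)  # prints: 12000 True True
```

    Fitting such a model only to the heavy (censored) portion of the record, as the paper proposes, aims at exactly these fine-scale aggregated depths where rectangular-pulse models tend to underestimate extremes.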

  20. [Association of the physical activity of community-dwelling older adults with transportation modes, depression and social networks].

    PubMed

    Tsunoda, Kenji; Mitsuishi, Yasuhiro; Tsuji, Taishi; Yoon, Ji-Yeong; Muraki, Toshiaki; Hotta, Kazushi; Okura, Tomohiro

    2011-01-01

    The purpose of this study was to cross-sectionally examine the relationships of leisure, household and occupational physical activity with the frequency of going out by various transportation modes, depression and social networks in older adults. We randomly selected a total of 2,100 community-dwelling adults aged 65 to 85 years from the Basic Resident Register. Of these, 340 people were the subjects of this study. The scales of measurement used were the Physical Activity Scale for the Elderly, the Lubben Social Network Scale (LSNS) and the Geriatric Depression Scale (GDS). In a regression model, leisure-time physical activity significantly correlated with frequency of going out by bicycle (β=0.17) and LSNS score (β=0.17). Household physical activity and occupational physical activity were significantly correlated with LSNS score (β=0.21) and frequency of going out by motor vehicle (β=0.25), respectively. For total physical activity across the three above-mentioned domains, significant correlations were observed with frequency of going out by bicycle (β=0.10), by motor vehicle (β=0.23), GDS score (β=-0.16) and LSNS score (β=0.23). These results indicate that the frequency of going out by bicycle and by motor vehicle were significant factors to predict leisure and occupational physical activity. Furthermore, social networks appear to be important determinants of leisure and household physical activity in community-dwelling older adults.

  1. Disposal of Industrial and Domestic Wastes: Land and Sea Alternatives.

    DTIC Science & Technology

    1984-01-01

    square kilometers. The rough classification of physical, chemical, and biological processes into near field versus far field and short term versus...contaminants by sedimentation is slowed. Chemical Precipitation and Dissolution During the few minutes of the initial dilution of a buoyant plume...model. Time and space scales of physical, chemical, and biological processes often provide natural divisions in such modeling. Near-field and far-field

  2. Do Items that Measure Self-Perceived Physical Appearance Function Differentially across Gender Groups? An Application of the MACS Model

    ERIC Educational Resources Information Center

    Gonzalez-Roma, Vicente; Tomas, Ines; Ferreres, Doris; Hernandez, Ana

    2005-01-01

    The aims of this study were to investigate whether the 6 items of the Physical Appearance Scale (Marsh, Richards, Johnson, Roche, & Tremayne, 1994) show differential item functioning (DIF) across gender groups of adolescents, and to show how this can be done using the multigroup mean and covariance structure (MG-MACS) analysis model. Two samples…

  3. Anomalous transport theory for the reversed field pinch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terry, P.W.; Hegna, C.C.; Sovinec, C.R.

    1996-09-01

    Physically motivated transport models with predictive capabilities and significance beyond the reversed field pinch (RFP) are presented. It is shown that the ambipolar constrained electron heat loss observed in MST can be quantitatively modeled by taking account of the clumping in parallel streaming electrons and the resultant self-consistent interaction with collective modes; that the discrete dynamo process is a relaxation oscillation whose dependence on the tearing instability and profile relaxation physics leads to amplitude and period scaling predictions consistent with experiment; that the Lundquist number scaling in relaxed plasmas driven by magnetic turbulence has a weak S^(-1/4) scaling; and that radial E×B shear flow can lead to large reductions in the edge particle flux with little change in the heat flux, as observed in the RFP and tokamak. 24 refs.

  4. Everyday cognitive functioning and global cognitive performance are differentially associated with physical frailty and chronological age in older Chinese men and women.

    PubMed

    Liu, Tianyin; Wong, Gloria Hy; Luo, Hao; Tang, Jennifer Ym; Xu, Jiaqi; Choy, Jacky Cp; Lum, Terry Ys

    2017-05-02

    Intact cognition is a key determinant of quality of life. Here, we investigated the relative contribution of age and physical frailty to global and everyday cognition in older adults. Data came from 1396 community-dwelling, healthy Chinese older adults aged 65 or above. We measured their global cognition using the Cantonese Chinese Montreal Cognitive Assessment, everyday cognition with the short Chinese Lawton Instrumental Activities of Daily Living scale, and physical frailty using the Fatigue, Resistance, Ambulation, Illness, and Loss of Weight Scale and grip strength. Multiple regression analysis was used to evaluate the comparative roles of age and physical frailty. In the global cognition model, age explained 12% and physical frailty explained 8% of the unique variance. This pattern was only evident in women, while the reverse (physical frailty explaining a greater share of the variance) was evident in men. In the everyday cognition model, physical frailty explained 18% and chronological age explained 9% of the unique variance, with similar results across both genders. Physical frailty is a stronger indicator than age for everyday cognition in both genders and for global cognition in men. Our findings suggest that there are alternative indexes of cognitive aging besides chronological age.

  5. Overview of Icing Physics Relevant to Scaling

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Tsao, Jen-Ching

    2005-01-01

    An understanding of icing physics is required for the development of both scaling methods and ice-accretion prediction codes. This paper gives an overview of our present understanding of the important physical processes and the associated similarity parameters that determine the shape of Appendix C ice accretions. For many years it has been recognized that ice accretion processes depend on flow effects over the model, on droplet trajectories, on the rate of water collection and time of exposure, and, for glaze ice, on a heat balance. For scaling applications, equations describing these events have been based on analyses at the stagnation line of the model and have resulted in the identification of several non-dimensional similarity parameters. The parameters include the modified inertia parameter of the water drop, the accumulation parameter and the freezing fraction. Other parameters dealing with the leading edge heat balance have also been used for convenience. By equating scale expressions for these parameters to the values to be simulated, a set of equations is produced which can be solved for the scale test conditions. Studies in the past few years have shown that at least one parameter in addition to those mentioned above is needed to describe surface-water effects, and some of the traditional parameters may not be as significant as once thought. Insight into the importance of each parameter, and the physical processes it represents, can be gained by observing whether ice shapes change, and the extent of the change, when each parameter is varied. Experimental evidence is presented to establish the importance of each of the traditionally used parameters and to identify the possible form of a new similarity parameter to be used for scaling.
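    As one concrete example of how matching similarity parameters fixes scale test conditions, the accumulation parameter is commonly written Ac = LWC·V·t/(d·ρ_ice), and matching Ac between a reference and a subscale model then sets the scale spray time. The form of Ac and all numbers below are a sketch in the spirit of the icing-scaling literature, not values taken from this paper:

```python
def accumulation_parameter(lwc_gm3, v_ms, t_s, d_m, rho_ice=917.0):
    """Ac = (LWC * V * t) / (d * rho_ice): a non-dimensional measure of
    collected water. LWC in g/m^3 (converted to kg/m^3); d is a
    characteristic length, often taken as twice the leading-edge radius."""
    return (lwc_gm3 * 1e-3) * v_ms * t_s / (d_m * rho_ice)

def scale_time(t_ref_s, d_ref_m, d_scale_m, lwc_ratio=1.0, v_ratio=1.0):
    """Spray time for a subscale model obtained by matching Ac:
    t_scale = t_ref * (d_scale / d_ref) / (lwc_ratio * v_ratio)."""
    return t_ref_s * (d_scale_m / d_ref_m) / (lwc_ratio * v_ratio)

# Half-size model, same LWC and airspeed: matching Ac halves the spray time.
t_ref, d_ref, d_half = 600.0, 0.10, 0.05
t_half = scale_time(t_ref, d_ref, d_half)
same_ac = abs(accumulation_parameter(0.5, 67.0, t_ref, d_ref)
              - accumulation_parameter(0.5, 67.0, t_half, d_half)) < 1e-12
print(t_half, same_ac)  # prints: 300.0 True
```

    In practice the full scaling method solves a system of such matching conditions (inertia parameter, freezing fraction, heat-balance terms) simultaneously, not Ac alone.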

  6. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2010-09-30

    “Application of Earth Sciences Products” supports improvements in NAAPS physics and model initialization. The implementation of NAAPS, NAVDAS-AOD, FLAMBE ...Forecasting of Biomass-Burning Smoke: Description of and Lessons From the Fire Locating and Modeling of Burning Emissions (FLAMBE) Program, IEEE Journal of

  7. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  8. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE PAGES

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake; ...

    2017-03-24

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  9. From model conception to verification and validation, a global approach to multiphase Navier-Stoke models with an emphasis on volcanic explosive phenomenology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dartevelle, Sebastian

    2007-10-01

    Large-scale volcanic eruptions are hazardous events that cannot be described by detailed and accurate in situ measurement: hence, little to no real-time data exists to rigorously validate current computer models of these events. In addition, such phenomenology involves highly complex, nonlinear, and unsteady physical behaviors upon many spatial and time scales. As a result, volcanic explosive phenomenology is poorly understood in terms of its physics, and inadequately constrained in terms of initial, boundary, and inflow conditions. Nevertheless, code verification and validation become even more critical because more and more volcanologists use numerical data for assessment and mitigation of volcanic hazards. In this report, we evaluate the process of model and code development in the context of geophysical multiphase flows. We describe: (1) the conception of a theoretical, multiphase, Navier-Stokes model, (2) its implementation into a numerical code, (3) the verification of the code, and (4) the validation of such a model within the context of turbulent and underexpanded jet physics. Within the validation framework, we suggest focusing on the key physics that control the volcanic clouds, namely, momentum-driven supersonic jet and buoyancy-driven turbulent plume. For instance, we propose to compare numerical results against a set of simple and well-constrained analog experiments, which uniquely and unambiguously represent each of the key phenomenology.

  10. Polyelectrolyte scaling laws for microgel yielding near jamming.

    PubMed

    Bhattacharjee, Tapomoy; Kabb, Christopher P; O'Bryan, Christopher S; Urueña, Juan M; Sumerlin, Brent S; Sawyer, W Gregory; Angelini, Thomas E

    2018-02-28

    Micro-scale hydrogel particles, known as microgels, are used in industry to control the rheology of numerous different products, and are also used in experimental research to study the origins of jamming and glassy behavior in soft-sphere model systems. At the macro-scale, the rheological behaviour of densely packed microgels has been thoroughly characterized; at the particle-scale, careful investigations of jamming, yielding, and glassy-dynamics have been performed through experiment, theory, and simulation. However, at low packing fractions near jamming, the connection between microgel yielding phenomena and the physics of their constituent polymer chains has not been made. Here we investigate whether basic polymer physics scaling laws predict macroscopic yielding behaviours in packed microgels. We measure the yield stress and cross-over shear-rate in several different anionic microgel systems prepared at packing fractions just above the jamming transition, and show that our data can be predicted from classic polyelectrolyte physics scaling laws. We find that diffusive relaxations of microgel deformation during particle re-arrangements can predict the shear-rate at which microgels yield, and the elastic stress associated with these particle deformations predict the yield stress.

  11. Large-eddy Simulation of Stratocumulus-topped Atmospheric Boundary Layers with Dynamic Subgrid-scale Models

    NASA Technical Reports Server (NTRS)

    Senocak, Inane

    2003-01-01

    The objective of the present study is to evaluate the dynamic procedure in LES of stratocumulus topped atmospheric boundary layer and assess the relative importance of subgrid-scale modeling, cloud microphysics and radiation modeling on the predictions. The simulations will also be used to gain insight into the processes leading to cloud top entrainment instability and cloud breakup. In this report we document the governing equations, numerical schemes and physical models that are employed in the Goddard Cumulus Ensemble model (GCEM3D). We also present the subgrid-scale dynamic procedures that have been implemented in the GCEM3D code for the purpose of the present study.

  12. Constructing constitutive relationships for seismic and aseismic fault slip

    USGS Publications Warehouse

    Beeler, N.M.

    2009-01-01

    For the purpose of modeling natural fault slip, a useful result from an experimental fault mechanics study would be a physically-based constitutive relation that well characterizes all the relevant observations. This report describes an approach for constructing such equations. Where possible the construction intends to identify or, at least, attribute physical processes and contact scale physics to the observations such that the resulting relations can be extrapolated in conditions and scale between the laboratory and the Earth. The approach is developed as an alternative but is based on Ruina (1983) and is illustrated initially by constructing a couple of relations from that study. In addition, two example constitutive relationships are constructed; these describe laboratory observations not well-modeled by Ruina's equations: the unexpected shear-induced weakening of silica-rich rocks at high slip speed (Goldsby and Tullis, 2002) and fault strength in the brittle ductile transition zone (Shimamoto, 1986). The examples, provided as illustration, may also be useful for quantitative modeling.
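    The Ruina (1983) framework mentioned here is the rate-and-state family of friction laws; its steady state is a one-line formula and gives a feel for what such constitutive relations encode. Parameter values below are generic lab-scale numbers, not taken from this report:

```python
import math

def mu_steady(v, mu0=0.6, a=0.008, b=0.012, v0=1e-6):
    """Steady-state rate-and-state friction (Dieterich-Ruina form):
    mu_ss = mu0 + (a - b) * ln(V / V0).
    Generic lab-like values assumed: mu0 ~ 0.6, a and b ~ 0.01,
    reference slip speed V0 = 1 um/s."""
    return mu0 + (a - b) * math.log(v / v0)

# A ten-fold jump in slip speed lowers steady-state friction when a < b.
print(round(mu_steady(1e-6), 4), round(mu_steady(1e-5), 4))  # prints: 0.6 0.5908
```

    With a < b the fault is velocity weakening (friction drops as slip accelerates), the usual prerequisite for seismic stick-slip; a > b gives velocity strengthening and stable, aseismic creep. Constructions like those in the report extend this baseline to regimes, such as high-speed weakening, that it does not capture.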

  13. Stochastic downscaling of numerically simulated spatial rain and cloud fields using a transient multifractal approach

    NASA Astrophysics Data System (ADS)

    Nogueira, M.; Barros, A. P.; Miranda, P. M.

    2012-04-01

    Atmospheric fields can be extremely variable over wide ranges of spatial scales, with a scale ratio of 10^9-10^10 between the largest (planetary) and smallest (viscous dissipation) scales. Furthermore, atmospheric fields with strong variability over wide ranges of scale should most likely not be artificially split into large and small scales, as in reality there is no scale separation between resolved and unresolved motions. Usually the effects of the unresolved scales are modeled by a deterministic bulk formula representing an ensemble of incoherent subgrid processes acting on the resolved flow. This is a pragmatic approach to the problem, not a complete solution to it. These models are expected to underrepresent the small-scale spatial variability of both dynamical and scalar fields, due to implicit and explicit numerical diffusion as well as physically based subgrid-scale turbulent mixing, resulting in smoother and less intermittent fields than observed. Thus, a fundamental change in the way we formulate our models is required. Stochastic approaches that supply a possible realization of subgrid processes, potentially coupled to the resolved scales over the range of significant scale interactions, provide one alternative to address the problem. Stochastic multifractal models, based on the cascade phenomenology of the atmosphere and its governing equations in particular, are the focus of this research. Previous results have shown that rain and cloud fields resulting from both idealized and realistic numerical simulations display multifractal behavior in the resolved scales. This behavior is observed even in the absence of scaling in the initial conditions or terrain forcing, suggesting that multiscaling is a general property of the nonlinear solutions of the Navier-Stokes equations governing atmospheric dynamics. 
Our results also show that the corresponding multiscaling parameters for rain and cloud fields exhibit complex nonlinear behavior depending on large-scale parameters such as terrain forcing and the mean atmospheric conditions at each location, particularly mean wind speed and moist stability. A particularly robust feature is the transition of the multiscaling parameters between stable and unstable cases, which has a clear physical correspondence to the transition from a stratiform to an organized (banded) convective regime. Thus multifractal diagnostics of moist processes are fundamentally transient and should provide a physically robust basis for the downscaling and subgrid-scale parameterization of moist processes. Here, we investigate the possibility of using a simplified, computationally efficient multifractal downscaling methodology based on turbulent cascades to produce statistically consistent fields at resolutions higher than those resolved by the model. Specifically, we are interested in producing rainfall and cloud fields at the spatial resolutions necessary for effective flash-flood and earth-flow forecasting. The results are examined by comparing downscaled fields against observations, and tendency error budgets are used to diagnose the evolution of transient errors in the numerical model prediction which can be attributed to aliasing.
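The turbulent-cascade idea behind such downscaling schemes can be sketched with a minimal multiplicative cascade: each refinement level splits every cell and multiplies the halves by independent unit-mean log-normal weights, producing the intermittent, multiscaling fields characteristic of rain. This is an illustrative toy (the parameter `sigma` and the log-normal generator are assumptions for the sketch, not the authors' methodology).

```python
import numpy as np

def multiplicative_cascade(n_levels, sigma=0.5, seed=0):
    """Generate a 1D multiplicative (log-normal) cascade field.

    Starting from a uniform field, each level doubles the resolution and
    multiplies each new cell by an independent unit-mean log-normal
    weight, so small-scale variability is built up multiplicatively
    while the large-scale mean is preserved in expectation.
    """
    rng = np.random.default_rng(seed)
    field = np.ones(1)
    for _ in range(n_levels):
        # mean = -sigma^2/2 makes each log-normal weight have unit mean.
        weights = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma,
                                size=2 * field.size)
        field = np.repeat(field, 2) * weights
    return field

# Ten cascade steps turn one coarse cell into a 1024-point intermittent field.
field = multiplicative_cascade(10)
```

In a downscaling application, the coarse model rain amount would replace the initial uniform cell, and the cascade parameters would be fitted to the observed multiscaling behavior.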

  14. Extending the Community Multiscale Air Quality (CMAQ) Modeling System to Hemispheric Scales: Overview of Process Considerations and Initial Applications

    PubMed Central

    Mathur, Rohit; Xing, Jia; Gilliam, Robert; Sarwar, Golam; Hogrefe, Christian; Pleim, Jonathan; Pouliot, George; Roselle, Shawn; Spero, Tanya L.; Wong, David C.; Young, Jeffrey

    2018-01-01

    The Community Multiscale Air Quality (CMAQ) modeling system is extended to simulate ozone, particulate matter, and related precursor distributions throughout the Northern Hemisphere. Modelled processes were examined and enhanced to suitably represent the extended space and time scales for such applications. Hemispheric scale simulations with CMAQ and the Weather Research and Forecasting (WRF) model are performed for multiple years. Model capabilities for a range of applications including episodic long-range pollutant transport, long-term trends in air pollution across the Northern Hemisphere, and air pollution-climate interactions are evaluated through detailed comparison with available surface, aloft, and remotely sensed observations. The expansion of CMAQ to simulate the hemispheric scales provides a framework to examine interactions between atmospheric processes occurring at various spatial and temporal scales with physical, chemical, and dynamical consistency. PMID:29681922

  15. A perspective on bridging scales and design of models using low-dimensional manifolds and data-driven model inference

    PubMed Central

    Zenil, Hector; Kiani, Narsis A.; Ball, Gordon; Gomez-Cabrero, David

    2016-01-01

    Systems in nature capable of collective behaviour are nonlinear, operating across several scales. Yet our ability to account for their collective dynamics differs in physics, chemistry and biology. Here, we briefly review the similarities and differences between mathematical modelling of adaptive living systems versus physico-chemical systems. We find that physics-based chemistry modelling and computational neuroscience have a shared interest in developing techniques for model reductions aiming at the identification of a reduced subsystem or slow manifold, capturing the effective dynamics. By contrast, as relations and kinetics between biological molecules are less characterized, current quantitative analysis under the umbrella of bioinformatics focuses on signal extraction, correlation, regression and machine-learning analysis. We argue that model reduction analysis and the ensuing identification of manifolds bridges physics and biology. Furthermore, modelling living systems presents deep challenges as how to reconcile rich molecular data with inherent modelling uncertainties (formalism, variables selection and model parameters). We anticipate a new generative data-driven modelling paradigm constrained by identified governing principles extracted from low-dimensional manifold analysis. The rise of a new generation of models will ultimately connect biology to quantitative mechanistic descriptions, thereby setting the stage for investigating the character of the model language and principles driving living systems. This article is part of the themed issue ‘Multiscale modelling at the physics–chemistry–biology interface’. PMID:27698038

  16. An integrated approach coupling physically based models and probabilistic method to assess quantitatively landslide susceptibility at different scale: application to different geomorphological environments

    NASA Astrophysics Data System (ADS)

    Vandromme, Rosalie; Thiéry, Yannick; Sedan, Olivier; Bernardie, Séverine

    2016-04-01

    Landslide hazard assessment is the estimation of a target area where landslides of a particular type, volume, runout and intensity may occur within a given period. The first step in analyzing landslide hazard consists in assessing the spatial and temporal failure probability (when the information is available, i.e. susceptibility assessment). Two types of approach are generally recommended to achieve this goal: (i) qualitative approaches (i.e. inventory-based methods and knowledge-driven methods) and (ii) quantitative approaches (i.e. data-driven methods or deterministic physically based methods). Among quantitative approaches, deterministic physically based methods (PBM) are generally used at local and/or site-specific scales (1:5,000-1:25,000 and >1:5,000, respectively). The main advantage of these methods is the calculation of the probability of failure (safety factor) under specific environmental conditions; for some models it is also possible to integrate land-use and climatic change. The major drawbacks, by contrast, are the large amounts of reliable and detailed data required (especially material types, their thickness and the heterogeneity of geotechnical parameters over a large area) and the fact that only shallow landslides are taken into account. This is why they are often used at site-specific scales (>1:5,000). Thus, to take into account (i) material heterogeneity, (ii) the spatial variation of physical parameters and (iii) different landslide types, the French Geological Survey (BRGM) has developed a physically based model (PBM) implemented in a GIS environment. This PBM couples a global hydrological model (GARDENIA®), including a transient unsaturated/saturated hydrological component, with a physically based model computing the stability of slopes (ALICE®, Assessment of Landslides Induced by Climatic Events) based on the Morgenstern-Price method for any slip surface. The variability of mechanical parameters is handled by a Monte Carlo approach. 
The probability of obtaining a safety factor below 1 represents the probability of occurrence of a landslide for a given triggering event, and the dispersion of the distribution gives the uncertainty of the result. Finally, a map is created displaying a probability of occurrence for each computing cell of the studied area. To take land-use change into account, a complementary module integrating the effects of vegetation on soil properties has recently been developed. In recent years, the model has been applied at different scales in different geomorphological environments: (i) at regional scale (1:50,000-1:25,000) in the French West Indies and French Polynesian islands; (ii) at local scale (i.e. 1:10,000) for two complex mountainous areas; and (iii) at site-specific scale (1:2,000) for one landslide. For each study the 3D geotechnical model was adapted. These studies have made it possible: (i) to discuss the different factors included in the model, especially the initial 3D geotechnical models; (ii) to pinpoint the locations of probable failure under different hydrological scenarios; and (iii) to test the effects of climatic change and land use on slopes for two cases. In this way, future changes in temperature, precipitation and vegetation cover can be analyzed, making it possible to address the impacts of global change on landslides. Finally, the results show that it is possible to obtain reliable information about future slope failures at different scales of work and for different scenarios with an integrated approach. The final information on landslide susceptibility (i.e. probability of failure) can be integrated in landslide hazard assessment and could be an essential source of information for future land planning. 
As performed in the ANR project SAMCO (Society Adaptation for coping with Mountain risks in a global change COntext), this analysis constitutes the first step in the risk-assessment chain for different climate and economic development scenarios, to evaluate the resilience of mountainous areas.
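The Monte Carlo treatment of mechanical-parameter variability can be sketched in a few lines. For simplicity this sketch uses the infinite-slope factor of safety rather than ALICE's Morgenstern-Price method, and all distributions and parameter values are illustrative assumptions, not values from the study.

```python
import numpy as np

def failure_probability(slope_deg, depth, gamma=19e3, pore_pressure=5e3,
                        n=100_000, seed=0):
    """Monte Carlo estimate of P(FS < 1) for an infinite slope.

    Cohesion c (Pa) and friction angle phi (deg) are sampled from normal
    distributions to represent the heterogeneity of geotechnical
    parameters; the fraction of samples with FS < 1 approximates the
    probability of failure for the cell.
    """
    rng = np.random.default_rng(seed)
    c = rng.normal(8e3, 2e3, n)               # cohesion, Pa (illustrative)
    phi = np.radians(rng.normal(30, 3, n))    # friction angle (illustrative)
    beta = np.radians(slope_deg)
    normal_stress = gamma * depth * np.cos(beta) ** 2
    shear_stress = gamma * depth * np.sin(beta) * np.cos(beta)
    fs = (c + (normal_stress - pore_pressure) * np.tan(phi)) / shear_stress
    return np.mean(fs < 1.0)

# One map cell: a 35-degree slope with a 2 m thick potentially unstable layer.
p_fail = failure_probability(slope_deg=35, depth=2.0)
```

Repeating this per grid cell, with pore pressure supplied by the hydrological model for a given triggering scenario, yields the probability-of-occurrence map described above.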

  17. Physical activity and quality of life in older women with a history of depressive symptoms.

    PubMed

    Heesch, Kristiann C; van Gellecum, Yolanda R; Burton, Nicola W; van Uffelen, Jannique G Z; Brown, Wendy J

    2016-10-01

    Physical activity (PA) is positively associated with health-related quality of life (HRQL) in older adults. It is not evident whether this association applies to older adults with poor mental health. This study examined associations between PA and HRQL in older women with a history of depressive symptoms. Participants were 555 Australian women born in 1921-1926 who reported depressive symptoms in 1999 on a postal survey for the Australian Longitudinal Study on Women's Health. They completed additional surveys in 2002, 2005 and 2008 that assessed HRQL and weekly minutes walking, in moderate PA, and in vigorous PA. Random effects mixed models were used to examine concurrent and prospective associations between PA and each of 10 HRQL measures (eight SF-36 subscales; two composite scales). In concurrent models, higher levels of PA were associated with better HRQL (p<0.001). The strongest associations were found for the bodily pain, physical functioning, general health perceptions, social functioning and vitality measures. Associations were attenuated in prospective models, more so for mental HRQL-related scales than for physical HRQL-related scales. However, strong associations (>3 point differences) were evident for physical functioning, general health, vitality and social functioning. For women in their 70s-80s with a history of depressive symptoms, PA is positively associated with HRQL concurrently, and to a lesser extent prospectively. This study extends previous work by showing significant associations in older women with a history of depressive symptoms. Incorporating PA into depression management of older women may improve their HRQL. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Using Models at the Mesoscopic Scale in Teaching Physics: Two Experimental Interventions in Solid Friction and Fluid Statics

    ERIC Educational Resources Information Center

    Besson, Ugo; Viennot, Laurence

    2004-01-01

    This article examines the didactic suitability of introducing models at an intermediate (i.e. mesoscopic) scale in teaching certain subjects, at an early stage. The design and evaluation of two short sequences based on this rationale will be outlined: one bears on propulsion by solid friction, the other on fluid statics in the presence of gravity.…

  19. Modelling the large-scale redshift-space 3-point correlation function of galaxies

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.

    2017-08-01

    We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.

  20. Interfacial mixing in high-energy-density matter with a multiphysics kinetic model

    NASA Astrophysics Data System (ADS)

    Haack, Jeffrey R.; Hauck, Cory D.; Murillo, Michael S.

    2017-12-01

    We have extended a recently developed multispecies, multitemperature Bhatnagar-Gross-Krook model [Haack et al., J. Stat. Phys. 168, 822 (2017), 10.1007/s10955-017-1824-9], to include multiphysics capabilities that enable modeling of a wider range of physical conditions. In terms of geometry, we have extended from the spatially homogeneous setting to one spatial dimension. In terms of the physics, we have included an atomic ionization model, accurate collision physics across coupling regimes, self-consistent electric fields, and degeneracy in the electronic screening. We apply the model to a warm dense matter scenario in which the ablator-fuel interface of an inertial confinement fusion target is heated, but for larger length and time scales and for much higher temperatures than can be simulated using molecular dynamics. Relative to molecular dynamics, the kinetic model greatly extends the temperature regime and the spatiotemporal scales over which we are able to model. In our numerical results we observe hydrogen from the ablator material jetting into the fuel during the early stages of the implosion and compare the relative size of various diffusion components (Fickean diffusion, electrodiffusion, and barodiffusion) that drive this process. We also examine kinetic effects, such as anisotropic distributions and velocity separation, in order to determine when this problem can be described with a hydrodynamic model.

  1. Application of experiential learning model using simple physical kit to increase attitude toward physics student senior high school in fluid

    NASA Astrophysics Data System (ADS)

    Johari, A. H.; Muslim

    2018-05-01

    An experiential learning model using a simple physics kit has been implemented to examine improvements in senior high school students' attitudes toward physics in the topic of fluids. This study aims to describe the increase in attitudes toward physics among senior high school students. The research method was a quasi-experiment with a non-equivalent pretest-posttest control group design. Two tenth-grade classes were involved: an experimental class of 28 students and a control class of 26 students. The change in attitude toward physics was measured with an 18-item attitude scale. In the experimental class, 86.5% of students showed an increase (criterion: almost all students), compared with 53.75% in the control class (criterion: half of the students). These results indicate that the experiential learning model using a simple physics kit improves attitudes toward physics compared with experiential learning without the kit.

  2. The ABC model: a non-hydrostatic toy model for use in convective-scale data assimilation investigations

    NASA Astrophysics Data System (ADS)

    Petrie, Ruth Elizabeth; Bannister, Ross Noel; Cullen, Michael John Priestley

    2017-12-01

    In developing methods for convective-scale data assimilation (DA), it is necessary to consider the full range of motions governed by the compressible Navier-Stokes equations (including non-hydrostatic and ageostrophic flow). These equations describe motion on a wide range of timescales with non-linear coupling. For the purpose of developing new DA techniques that suit the convective-scale problem, it is helpful to use so-called toy models that are easy to run and contain the same types of motion as the full equation set. Such a model needs to permit hydrostatic and geostrophic balance at large scales but allow imbalance at small scales, and in particular, it needs to exhibit intermittent convection-like behaviour. Existing toy models are not always sufficient for investigating these issues. A simplified system of intermediate complexity derived from the Euler equations is presented, which supports dispersive gravity and acoustic modes. In this system, the separation of timescales can be greatly reduced by changing the physical parameters. Unlike in existing toy models, this allows the acoustic modes to be treated explicitly and hence inexpensively. In addition, the non-linear coupling induced by the equation of state is simplified. This means that the gravity and acoustic modes are less coupled than in conventional models. A vertical slice formulation is used which contains only dry dynamics. The model is shown to give physically reasonable results, and convective behaviour is generated by localised compressible effects. This model provides an affordable and flexible framework within which some of the complex issues of convective-scale DA can later be investigated. The model is called the ABC model after the three tunable parameters introduced: A (the pure gravity wave frequency), B (the modulation of the divergent term in the continuity equation), and C (defining the compressibility).

  3. Regionalization of meso-scale physically based nitrogen modeling outputs to the macro-scale by the use of regression trees

    NASA Astrophysics Data System (ADS)

    Künne, A.; Fink, M.; Kipka, H.; Krause, P.; Flügel, W.-A.

    2012-06-01

    In this paper, a method is presented to estimate excess nitrogen at large scales while considering single-field processes. The approach was implemented by using the physically based model J2000-S to simulate the nitrogen balance as well as the hydrological dynamics within meso-scale test catchments. The model input data, the parameterization, the results and a detailed system understanding were used to generate regression tree models with GUIDE (Loh, 2002). For each landscape type in the federal state of Thuringia, a regression tree was calibrated and validated using the model data and the excess-nitrogen results from the test catchments. Hydrological parameters such as precipitation and evapotranspiration were also used to predict excess nitrogen with the regression tree model; hence they too had to be calculated and regionalized for the state of Thuringia. Here the model J2000g was used to simulate the water balance on the macro scale. With the regression trees, excess nitrogen was regionalized for each landscape type of Thuringia. The approach allows calculation of the potential nitrogen input into the streams of the drainage area. The results show that the applied methodology was able to transfer the detailed model results of the meso-scale catchments to the entire state of Thuringia with low computing time and without losing the detailed knowledge from the nitrogen transport modeling. This was validated against modeling results from Fink (2004) in a catchment lying in the regionalization area; the regionalized and modeled excess nitrogen agree to within 94%. The study was conducted within the framework of a project in collaboration with the Thuringian Environmental Ministry, whose overall aim was to assess the effect of agro-environmental measures on load reduction in the water bodies of Thuringia, to fulfill the requirements of the European Water Framework Directive (Bäse et al., 2007; Fink, 2006; Fink et al., 2007).
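The core operation of a regression tree, whether built with GUIDE or CART, is finding the predictor threshold that best splits the response into two homogeneous groups. A minimal sketch of a single split, with entirely hypothetical precipitation and excess-nitrogen numbers (not data from the study):

```python
import numpy as np

def best_split(x, y):
    """Return the threshold on predictor x that minimizes the summed
    squared error of piecewise-constant (mean) predictions, i.e. one
    node of a regression tree. Applied recursively to each half, this
    grows the full tree."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_sse, best_thresh = np.inf, None
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + \
              ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_thresh = sse, (xs[i - 1] + xs[i]) / 2
    return best_thresh

# Hypothetical pattern: excess nitrogen jumps above a precipitation threshold.
precip = np.linspace(400, 1000, 50)        # mm/yr
excess_n = np.where(precip > 700, 40.0, 10.0)  # kg N/ha
threshold = best_split(precip, excess_n)   # recovers ~700 mm/yr
```

Regionalization then amounts to evaluating the fitted tree on the macro-scale predictor fields (here, the J2000g water-balance outputs) for each landscape type.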

  4. Multi-Scale Multi-Physics Modeling of Matrix Transport Properties in Fractured Shale Reservoirs

    NASA Astrophysics Data System (ADS)

    Mehmani, A.; Prodanovic, M.

    2014-12-01

    Understanding shale matrix flow behavior is imperative for successful reservoir development for hydrocarbon production and carbon storage. Without a predictive model, significant uncertainties will ensue in flowback from the formation, in the communication between fracture and matrix, and in proper fracturing practice. Informed by SEM images, we develop deterministic network models that couple pores from multiple scales and their respective fluid physics. The models are used to investigate sorption hysteresis as an affordable way of inferring the nanoscale pore structure at core scale. In addition, restricted diffusion as a function of pore shape, pore-throat size ratios and network connectivity is computed to make correct interpretation of 2D NMR maps possible. Our novel pore network models have the ability to match sorption hysteresis measurements without any tuning parameters. The results clarify a common misconception of linking type 3 nitrogen hysteresis curves only to shale pore shape and show promising sensitivity for nanopore structure inference at core scale. The results on restricted diffusion shed light on the importance of including shape factors in 2D NMR interpretations. A priori "weighting factors" as a function of pore-throat and throat-length ratio are presented, and the effect of network connectivity on diffusion is quantitatively assessed. We are currently working on verifying our models with experimental data gathered from the Eagle Ford formation.

  5. Laboratory Modelling of Volcano Plumbing Systems: a review

    NASA Astrophysics Data System (ADS)

    Galland, Olivier; Holohan, Eoghan P.; van Wyk de Vries, Benjamin; Burchardt, Steffi

    2015-04-01

    Earth scientists have, since the 19th century, tried to replicate or model geological processes in controlled laboratory experiments. In particular, laboratory modelling has been used to study the development of volcanic plumbing systems, which set the stage for volcanic eruptions. Volcanic plumbing systems involve complex processes that act at length scales of microns to thousands of kilometres and at time scales from milliseconds to billions of years, and laboratory models appear very suitable to address them. This contribution reviews laboratory models dedicated to studying the dynamics of volcano plumbing systems (Galland et al., accepted). The foundation of laboratory models is the choice of relevant model materials, both for rock and magma. We outline a broad range of suitable model materials used in the literature. These materials exhibit very diverse rheological behaviours, so their careful choice is a crucial first step in proper experiment design. The second step is model scaling, which successively calls upon (1) the principle of dimensional analysis and (2) the principle of similarity. Dimensional analysis aims to identify the dimensionless physical parameters that govern the underlying processes. The principle of similarity states that "a laboratory model is equivalent to its geological analogue if the dimensionless parameters identified in the dimensional analysis are identical, even if the values of the governing dimensional parameters differ greatly" (Barenblatt, 2003). The application of these two steps ensures a solid understanding and the geological relevance of the laboratory models. In addition, this procedure shows that laboratory models are not designed to exactly mimic a given geological system, but to understand underlying generic processes, either individually or in combination, and to identify or demonstrate the physical laws that govern these processes. 
From this perspective, we review the numerous applications of laboratory models to understand the distinct key features of volcanic plumbing systems: dykes, cone sheets, sills, laccoliths, caldera-related structures, ground deformation, magma/fault interactions, and explosive vents. Barenblatt, G.I., 2003. Scaling. Cambridge University Press, Cambridge. Galland, O., Holohan, E.P., van Wyk de Vries, B., Burchardt, S., Accepted. Laboratory modelling of volcanic plumbing systems: A review, in: Breitkreuz, C., Rocchi, S. (Eds.), Laccoliths, sills and dykes: Physical geology of shallow level magmatic systems. Springer.
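The similarity principle can be illustrated numerically with one of the standard dimensionless numbers for brittle crustal deformation, the ratio of gravitational stress to cohesion. All parameter values below are illustrative order-of-magnitude assumptions, not numbers from the review.

```python
def pi_strength(rho, g, h, cohesion):
    """Dimensionless ratio of gravitational stress (rho*g*h) to material
    cohesion. Similarity requires this number to be comparable in the
    laboratory model and in nature, even though the dimensional
    parameters differ by many orders of magnitude."""
    return rho * g * h / cohesion

# Nature: ~2 km of crust (rho ~ 2700 kg/m^3), cohesion ~ 10 MPa.
pi_nature = pi_strength(rho=2700, g=9.81, h=2000, cohesion=10e6)

# Laboratory: ~20 cm of weak granular material (e.g. fine silica flour,
# cohesion ~ 300 Pa) -- the kind of low-cohesion analogue this requires.
pi_model = pi_strength(rho=1500, g=9.81, h=0.2, cohesion=300)
```

Both ratios come out within the same order of magnitude, which is why properly scaled brittle experiments must use materials far weaker than natural rock rather than miniature blocks of rock itself.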

  6. Hybrid network modeling and the effect of image resolution on digitally-obtained petrophysical and two-phase flow properties

    NASA Astrophysics Data System (ADS)

    Aghaei, A.

    2017-12-01

    Digital imaging and modeling of rocks and subsequent simulation of physical phenomena in digitally-constructed rock models are becoming an integral part of core analysis workflows. One of the inherent limitations of image-based analysis, at any given scale, is image resolution. This limitation becomes more evident when the rock has multiple scales of porosity such as in carbonates and tight sandstones. Multi-scale imaging and constructions of hybrid models that encompass images acquired at multiple scales and resolutions are proposed as a solution to this problem. In this study, we investigate the effect of image resolution and unresolved porosity on petrophysical and two-phase flow properties calculated based on images. A helical X-ray micro-CT scanner with a high cone-angle is used to acquire digital rock images that are free of geometric distortion. To remove subjectivity from the analyses, a semi-automated image processing technique is used to process and segment the acquired data into multiple phases. Direct and pore network based models are used to simulate physical phenomena and obtain absolute permeability, formation factor and two-phase flow properties such as relative permeability and capillary pressure. The effect of image resolution on each property is investigated. Finally a hybrid network model incorporating images at multiple resolutions is built and used for simulations. The results from the hybrid model are compared against results from the model built at the highest resolution and those from laboratory tests.

  7. Rock.XML - Towards a library of rock physics models

    NASA Astrophysics Data System (ADS)

    Jensen, Erling Hugo; Hauge, Ragnar; Ulvmoen, Marit; Johansen, Tor Arne; Drottning, Åsmund

    2016-08-01

    Rock physics modelling provides tools for correlating the physical properties of rocks and their constituents with the geophysical observations we measure on a larger scale. Many different theoretical and empirical models exist to cover the range of different rock types. However, upon reviewing these, we see that they are all built around a few main concepts. Based on this observation, we propose a format for digitally storing the specifications of rock physics models, which we have named Rock.XML. It contains not only data about the various constituents, but also the theories and how they are used to combine these building blocks into a representative model for a particular rock. The format is based on the Extensible Markup Language (XML), making it flexible enough to handle complex models as well as scalable towards extension with new theories and models. This technology has great advantages for documenting and exchanging models in an unambiguous way between people and between software. Rock.XML can become a platform for creating a library of rock physics models, making them more accessible to everyone.

  8. Evaluation of the feasibility of scale modeling to quantify wind and terrain effects on low-angle sound propagation

    NASA Technical Reports Server (NTRS)

    Anderson, G. S.; Hayden, R. E.; Thompson, A. R.; Madden, R.

    1985-01-01

    The feasibility of acoustical scale modeling techniques for modeling wind effects on long range, low frequency outdoor sound propagation was evaluated. Upwind and downwind propagation was studied in 1/100 scale for flat ground and simple hills with both rigid and finite ground impedance over a full scale frequency range from 20 to 500 Hz. Results are presented as 1/3-octave frequency spectra of differences in propagation loss between the case studied and a free-field condition. Selected sets of these results were compared with validated analytical models for propagation loss, when such models were available. When they were not, results were compared with predictions from approximate models developed. Comparisons were encouraging in many cases considering the approximations involved in both the physical modeling and analysis methods. Of particular importance was the favorable comparison between theory and experiment for propagation over soft ground.
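The frequency mapping implied by the 1/100 scale follows from preserving the wavelength-to-geometry ratio: model frequencies are the full-scale frequencies multiplied by the scale factor. A trivial sketch of this standard scaling rule (not code from the study):

```python
def model_frequency(full_scale_hz, scale_factor=100):
    """Acoustical scale modeling: shrinking geometry by `scale_factor`
    raises the test frequency by the same factor, so diffraction and
    interference behave identically in model and full scale."""
    return full_scale_hz * scale_factor

# The study's 20-500 Hz full-scale band maps to 2-50 kHz in the 1/100 model.
band = [model_frequency(f) for f in (20, 500)]
```

This is also why ground impedance must be rescaled in the model: the materials have to present the right impedance at these much higher test frequencies.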

  9. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  10. Multi-scale Drivers of Variations in Atmospheric Evaporative Demand Based on Observations and Physically-based Modeling

    NASA Astrophysics Data System (ADS)

    Peng, L.; Sheffield, J.; Li, D.

    2015-12-01

    Evapotranspiration (ET) is a key link between the availability of water resources and climate change and climate variability. Variability of ET has important environmental and socioeconomic implications for managing hydrological hazards, food and energy production. Although there have been many observational and modeling studies of ET, how ET has varied and the drivers of the variations at different temporal scales remain elusive. Much of the uncertainty comes from the atmospheric evaporative demand (AED), which is the combined effect of radiative and aerodynamic controls. The inconsistencies among modeled AED estimates and the limited observational data may originate from multiple sources including the limited time span and uncertainties in the data. To fully investigate and untangle the intertwined drivers of AED, we present a spectrum analysis to identify key controls of AED across multiple temporal scales. We use long-term records of observed pan evaporation for 1961-2006 from 317 weather stations across China and physically-based model estimates of potential evapotranspiration (PET). The model estimates are based on surface meteorology and radiation derived from reanalysis, satellite retrievals and station data. Our analyses show that temperature plays a dominant role in regulating variability of AED at the inter-annual scale. At the monthly and seasonal scales, the primary control of AED shifts from radiation in humid regions to humidity in dry regions. Unlike many studies focusing on the spatial pattern of ET drivers based on a traditional supply and demand framework, this study underlines the importance of temporal scales when discussing controls of ET variations.
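The multi-scale attribution described here can be illustrated with a periodogram that assigns variance to frequency bands. The sketch below uses synthetic data; the band edges, the series, and the helper name are illustrative assumptions, not the study's method or results:

```python
import numpy as np

def band_variance_fraction(series, dt, f_lo, f_hi):
    """Fraction of total variance falling in [f_lo, f_hi), via the periodogram."""
    x = np.asarray(series, dtype=float)
    x -= x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return power[band].sum() / power[1:].sum()  # skip the DC bin

# Synthetic monthly series: strong annual cycle plus a weak 5-year oscillation.
t = np.arange(240)  # 20 years of monthly values
x = 2.0 * np.sin(2 * np.pi * t / 12) + 0.5 * np.sin(2 * np.pi * t / 60)
annual = band_variance_fraction(x, dt=1.0, f_lo=1/13, f_hi=1/11)
print(annual > 0.9)  # True: most of the variance sits at the annual scale
```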

  11. Multi-scale analysis of the effect of nano-filler particle diameter on the physical properties of CAD/CAM composite resin blocks.

    PubMed

    Yamaguchi, Satoshi; Inoue, Sayuri; Sakai, Takahiko; Abe, Tomohiro; Kitagawa, Haruaki; Imazato, Satoshi

    2017-05-01

    The objective of this study was to assess the effect of silica nano-filler particle diameters in a computer-aided design/manufacturing (CAD/CAM) composite resin (CR) block on physical properties at the multi-scale in silico. CAD/CAM CR blocks were modeled, consisting of silica nano-filler particles (20, 40, 60, 80, and 100 nm) and matrix (Bis-GMA/TEGDMA), with filler volume contents of 55.161%. Young's moduli and Poisson's ratios for the block at the macro-scale were calculated by homogenization analysis. Macro-scale CAD/CAM CR blocks (3 × 3 × 3 mm) were modeled and compressive strengths were defined when the fracture loads exceeded 6075 N. Maximum principal stress (MPS) values of the nano-scale models were compared by localization analysis. As the filler size decreased, Young's moduli and compressive strength increased, while Poisson's ratios and MPS decreased. All parameters were significantly correlated with the diameters of the filler particles (Pearson's correlation test, r = -0.949, 0.943, -0.951, 0.976, p < 0.05). The in silico multi-scale model established in this study demonstrates that the Young's moduli, Poisson's ratios, and compressive strengths of CAD/CAM CR blocks can be enhanced by loading silica nano-filler particles of smaller diameter. CAD/CAM CR blocks fabricated with smaller silica nano-filler particles thus have the potential for increased fracture resistance.
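The reported Pearson coefficients are easy to recompute. A sketch with the study's five diameters but hypothetical modulus values (the GPa numbers below are invented for illustration):

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

diameters = [20, 40, 60, 80, 100]        # nm, from the study design
youngs = [22.1, 21.8, 21.6, 21.3, 21.0]  # hypothetical GPa values
print(round(pearson_r(diameters, youngs), 3))  # close to -1 for a monotone decrease
```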

  12. Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture. Part 1: Non-reactive physical mass transfer across the wetted wall column: Original Research Article: Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Canhai

    A hierarchical model calibration and validation is proposed for quantifying the confidence level of mass transfer prediction using a computational fluid dynamics (CFD) model, where the solvent-based carbon dioxide (CO2) capture is simulated and simulation results are compared to the parallel bench-scale experimental data. Two unit problems with increasing levels of complexity are proposed to break down the complex physical/chemical processes of solvent-based CO2 capture into relatively simpler problems, separating the effects of physical transport and chemical reaction. This paper focuses on the calibration and validation of the first unit problem, i.e. the CO2 mass transfer across a falling monoethanolamine (MEA) film in the absence of chemical reaction. This problem is investigated both experimentally and numerically using nitrous oxide (N2O) as a surrogate for CO2. To capture the motion of the gas-liquid interface, a volume of fluid method is employed together with a one-fluid formulation to compute the mass transfer between the two phases. Bench-scale parallel experiments are designed and conducted to validate and calibrate the CFD models using a general Bayesian calibration. Two important transport parameters, i.e. Henry's constant and gas diffusivity, are calibrated to produce the posterior distributions, which will be used as the input for the second unit problem to address the chemical absorption of CO2 across the MEA falling film, where both mass transfer and chemical reaction are involved.
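The Bayesian calibration step can be caricatured with a one-parameter grid approximation, where the posterior is proportional to prior times likelihood. Everything below (the linear forward model, the numbers, the function name) is an illustrative assumption, not the paper's CFD setup:

```python
import numpy as np

def grid_posterior(grid, prior_pdf, forward, observed, sigma):
    """Bayes update on a 1-D parameter grid: posterior ~ prior x Gaussian likelihood."""
    pred = np.array([forward(h) for h in grid])
    likelihood = np.exp(-0.5 * ((observed - pred) / sigma) ** 2)
    post = prior_pdf * likelihood
    return post / post.sum()  # normalize to a discrete probability distribution

# Hypothetical setup: measured flux varies linearly with Henry's constant H.
H = np.linspace(0.5, 2.0, 301)
prior = np.ones_like(H)  # flat prior
posterior = grid_posterior(H, prior, lambda h: 3.0 * h, observed=3.6, sigma=0.3)
print(round(H[np.argmax(posterior)], 2))  # posterior peaks near H = 1.2
```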

  13. Modelling of runoff generation and soil moisture dynamics for hillslopes and micro-catchments

    NASA Astrophysics Data System (ADS)

    Bronstert, Axel; Plate, Erich J.

    1997-11-01

    The modelling of hillslope hydrology is important not only because all non-plain, i.e. hilly or mountainous, landscapes can be considered as composed of a mosaic of hillslopes. A hillslope model may also be used for both research purposes and for application-oriented, detailed, hillslope-scale hydrological studies in conjunction with related scientific disciplines such as geotechnics, geo-chemistry and environmental technology. Multi-process and multi-dimensional hydrological models have so far seen only limited application (particularly at the hillslope scale), and hardly any comprehensive model has been available for operational use. In this paper we introduce a model which considers most of the relevant hillslope hydrological processes. Some recent applications are described which demonstrate its ability to narrow the stated gap in hillslope hydrological modelling. The modelling system accounts for the hydrological processes of interception, evapotranspiration, infiltration, soil-moisture movement (where the flow processes can be modelled in three dimensions), surface runoff, subsurface stormflow and streamflow discharge. The relevant process interactions are also included. Special regard has been given to state-of-the-art knowledge concerning rapid soil-water flow processes during storm conditions (e.g. macropore infiltration, lateral subsurface stormflow, return flow) and to its transfer to and inclusion within an operational modelling scheme. The model is "physically based" in the sense that its parameters have a physical meaning and can be obtained or derived from field measurements. This somewhat weaker-than-usual definition of a physical basis implies that some of the sub-models (still) contain empirical components, that the effects of the high spatial and temporal variability found in nature cannot always be expressed within the various physical laws, i.e. 
that the laws are scale dependent, and that due to limitations of measurements and data processing, one can express only averaged and incomplete data conditions. Several applications demonstrate the reliable performance of the model for one-, two- and three-dimensional simulations. The described examples of application are part of a comprehensive erosion and agro-chemical transport study in a loessy agricultural catchment in southwestern Germany, and of a study on the sealing efficacy of capillary barriers in landfill covers.

  14. From coastal barriers to mountain belts - commonalities in fundamental geomorphic scaling laws

    NASA Astrophysics Data System (ADS)

    Lazarus, E.

    2016-12-01

    Overwash is a sediment-transport process essential to the form and resilience of coastal barrier landscapes. Driven by storm events, overwash leaves behind distinctive sedimentary features that, although intensively studied, have lacked unifying quantitative descriptions with which to compare their morphological attributes across documented examples or relate them to other morphodynamic phenomena. Geomorphic scaling laws quantify how measures of shape and size change with respect to one another - information that helps to constrain predictions of future change and reconstructions of past environmental conditions. Here, a physical model of erosional and depositional overwash morphology yields intrinsic, allometric scaling laws involving length, width, area, volume, and alongshore spacing. Corroborative comparisons with natural washover morphology indicate scale invariance spanning several orders of magnitude. Several observers of the physical model remarked that the overwashed barrier resembled a dissected linear mountain front with an alluvial apron - an intriguing reimagining of the intended analog. Indeed, that resemblance is reflected quantitatively in these new scaling relationships, which align with canonical scaling laws for terrestrial and marine drainage basins and alluvial fans on Earth and Mars. This finding suggests that disparate geomorphic systems sharing common allometric properties may be related dynamically, perhaps by an influence more fundamental than characteristic erosion and deposition processes. Such an influence could come from emergent behavior at the intersection of advection and diffusion. Geomorphic behaviors at advection-diffusion transitions (and vice versa), specifically, could be the key to disentangling mechanistic causality from acausality in physical landscape patterns.
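Scaling laws like these are typically extracted by a least-squares fit in log-log space. A minimal sketch on synthetic, exactly power-law data (the prefactor 0.5 and exponent 2 are illustrative, not results from the paper):

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of y = a * x**b in log-log space; returns (a, b)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic washover-like data: area scaling as length squared.
lengths = [1.0, 2.0, 4.0, 8.0]
areas = [0.5 * L ** 2 for L in lengths]
a, b = fit_power_law(lengths, areas)
print(round(a, 3), round(b, 3))  # 0.5 2.0
```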

  15. Naturalness of Electroweak Symmetry Breaking while Waiting for the LHC

    NASA Astrophysics Data System (ADS)

    Espinosa, J. R.

    2007-06-01

    After revisiting the hierarchy problem of the Standard Model and its implications for the scale of New Physics, I consider the fine-tuning problem of electroweak symmetry breaking in several scenarios beyond the Standard Model: SUSY, Little Higgs and "improved naturalness" models. The main conclusions are that: New Physics should appear within the reach of the LHC; some SUSY models can solve the hierarchy problem with acceptable residual tuning; Little Higgs models generically suffer from large tunings, often hidden; and, finally, that "improved naturalness" models do not generically improve the naturalness of the SM.

  16. Examining Chaotic Convection with Super-Parameterization Ensembles

    NASA Astrophysics Data System (ADS)

    Jones, Todd R.

    This study investigates a variety of features present in a new configuration of the Community Atmosphere Model (CAM) variant, SP-CAM 2.0. The new configuration (multiple-parameterization-CAM, MP-CAM) changes the manner in which the super-parameterization (SP) concept represents physical tendency feedbacks to the large-scale by using the mean of 10 independent two-dimensional cloud-permitting model (CPM) curtains in each global model column instead of the conventional single CPM curtain. The climates of the SP and MP configurations are examined to investigate any significant differences caused by the application of convective physical tendencies that are more deterministic in nature, paying particular attention to extreme precipitation events and large-scale weather systems, such as the Madden-Julian Oscillation (MJO). A number of small but significant changes in the mean state climate are uncovered, and it is found that the new formulation degrades MJO performance. Despite these deficiencies, the ensemble of possible realizations of convective states in the MP configuration allows for analysis of uncertainty in the small-scale solution, enabling examination of those weather regimes and physical mechanisms associated with strong, chaotic convection. Methods of quantifying precipitation predictability are explored, and use of the most reliable of these leads to the conclusion that poor precipitation predictability is most directly related to the proximity of the global climate model column state to atmospheric critical points. Secondarily, the predictability is tied to the availability of potential convective energy, the presence of mesoscale convective organization on the CPM grid, and the directive power of the large-scale flow.

  17. A Coupled GCM-Cloud Resolving Modeling System, and a Regional Scale Model to Study Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2006-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), and (3) a discussion of the Goddard WRF version (its developments and applications).

  18. Sakurai Prize: The Future of Higgs Physics

    NASA Astrophysics Data System (ADS)

    Dawson, Sally

    2017-01-01

    The discovery of the Higgs boson relied critically on precision calculations. The quantum contributions from the Higgs boson to the W and top quark masses suggested long before the Higgs discovery that a Standard Model Higgs boson should have a mass in the 100-200 GeV range. The experimental extraction of Higgs properties requires normalization to the predicted Higgs production and decay rates, for which higher order corrections are also essential. As Higgs physics becomes a mature subject, more and more precise calculations will be required. If there is new physics at high scales, it will contribute to the predictions and precision Higgs physics will be a window to beyond the Standard Model physics.

  19. A Method for Estimating Noise from Full-Scale Distributed Exhaust Nozzles

    NASA Technical Reports Server (NTRS)

    Kinzie, Kevin W.; Schein, David B.

    2004-01-01

    A method to estimate the full-scale noise suppression from a scale model distributed exhaust nozzle (DEN) is presented. For a conventional scale model exhaust nozzle, Strouhal number scaling using a scale factor related to the nozzle exit area is typically applied, shifting model scale frequency in proportion to the geometric scale factor. However, model scale DEN designs have two inherent length scales. One is associated with the mini-nozzles, whose size does not change in going from model scale to full scale. The other is associated with the overall nozzle exit area, which is much smaller than full size. Consequently, lower frequency energy that is generated by the coalesced jet plume should scale to lower frequency, but higher frequency energy generated by individual mini-jets does not shift frequency. In addition, jet-jet acoustic shielding by the array of mini-nozzles is a significant noise reduction effect that may change with DEN model size. A technique has been developed to scale laboratory model spectral data based on the premise that high and low frequency content must be treated differently during the scaling process. The model-scale distributed exhaust spectra are divided into low and high frequency regions that are then adjusted to full scale separately based on different physics-based scaling laws. The regions are then recombined to create an estimate of the full-scale acoustic spectra. These spectra can then be converted to perceived noise levels (PNL). The paper presents the details of this methodology and provides an example of the estimated noise suppression by a distributed exhaust nozzle compared to a round conic nozzle.
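The central bookkeeping of the method, treating low- and high-frequency content differently, can be sketched as follows. The split frequency, the scale factor, and the omission of any level corrections (distance, atmospheric absorption) are simplifying assumptions for illustration only:

```python
def scale_den_spectrum(freqs_model, spl_model, scale_factor, f_split):
    """Sketch of the two-regime idea: below f_split (coalesced-plume noise)
    frequencies shift down by the geometric scale factor when going to full
    scale; above it (mini-jet noise) frequencies are left unchanged."""
    scaled = []
    for f, spl in zip(freqs_model, spl_model):
        full_f = f / scale_factor if f < f_split else f
        scaled.append((full_f, spl))
    return sorted(scaled)

spec = scale_den_spectrum([100, 200, 5000, 8000], [80, 82, 75, 70],
                          scale_factor=10, f_split=1000)
print(spec)  # low-frequency bins move down to 10 and 20 Hz; mini-jet bins stay put
```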

  20. Regional Community Climate Simulations with variable resolution meshes in the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Zarzycki, C. M.; Gettelman, A.; Callaghan, P.

    2017-12-01

    Accurately predicting weather extremes such as precipitation (floods and droughts) and temperature (heat waves) requires high resolution to resolve mesoscale dynamics and topography at horizontal scales of 10-30 km. Simulating such resolutions globally for climate scales (years to decades) remains computationally impractical. Simulating only a small region of the planet is more tractable at these scales for climate applications. This work describes global simulations using variable-resolution static meshes with multiple dynamical cores that target the continental United States using developmental versions of the Community Earth System Model version 2 (CESM2). CESM2 is tested in idealized, aquaplanet and full physics configurations to evaluate variable mesh simulations against uniform high and uniform low resolution simulations at resolutions down to 15 km. Different physical parameterization suites are also evaluated to gauge their sensitivity to resolution. Idealized variable-resolution mesh cases compare well to high resolution tests. More recent versions of the atmospheric physics, including cloud schemes for CESM2, are more stable with respect to changes in horizontal resolution. Most of the sensitivity is due to the timestep and to interactions between deep convection and large-scale condensation, as expected from the closure methods. The resulting full physics model produces a comparable climate to the global low resolution mesh and similar high frequency statistics in the high resolution region. Some biases are reduced (orographic precipitation in the western United States), but biases do not necessarily go away at high resolution (e.g. summertime surface temperature). The simulations are able to reproduce uniform high resolution results, making them an effective tool for regional climate studies; these capabilities are available in CESM2.

  1. EDITORIAL: Fracture: from the atomic to the geophysical scale Fracture: from the atomic to the geophysical scale

    NASA Astrophysics Data System (ADS)

    Bouchaud, Elisabeth; Soukiassian, Patrick

    2009-11-01

    Although fracture is a very common experience in everyday life, it still harbours many unanswered questions. New avenues of investigation arise concerning the basic mechanisms leading to deformation and failure in heterogeneous materials, particularly in non-metals. The processes involved are even more complex when plasticity, thermal fluctuations or chemical interactions between the material and its environment introduce a specific time scale. Sub-critical failure, which may be reached at unexpectedly low loads, is particularly important for silicate glasses. Another source of complications originates from dynamic fracture, when loading rates become so high that the acoustic waves produced by the crack interact with the material heterogeneities, in turn producing new waves that modify the propagation. Recent progress in experimental techniques, allowing one to test and probe materials at sufficiently small length or time scales or in three dimensions, has led to a quantitative understanding of the physical processes involved. In parallel, simulations have also progressed, by extending the time and length scales they are able to reach, and thus attaining experimentally accessible conditions. However, one central question remains the inclusion of these basic mechanisms into a statistical description. This is not an easy task, mostly because of the strong stress gradients present at the tip of a crack, and because the averaging of fracture properties over a heterogeneous material, containing more or less brittle phases, requires rare event statistics. Substantial progress has been made in models and simulations based on accurate experiments. From these models, scaling laws have been derived, linking the behaviour at a micro- or even nano-scale to the macroscopic and even to geophysical scales. 
The reviews in this Cluster Issue of Journal of Physics D: Applied Physics cover several of these important topics, including the physical processes in fracture mechanisms, the sub-critical failure issue, the dynamical fracture propagation, and the scaling laws from the micro- to the geophysical scales. Achievements and progress are reported, and the many open questions are discussed, which should provide a sound basis for present and future prospects.

  2. On physical scales of dark matter halos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemp, Marcel, E-mail: mzemp@pku.edu.cn

    2014-09-10

    It is common practice to describe formal size and mass scales of dark matter halos as spherical overdensities with respect to an evolving density threshold. Here, we critically investigate the evolutionary effects of several such commonly used definitions and compare them to the halo evolution within fixed physical scales as well as to the evolution of other intrinsic physical properties of dark matter halos. It is shown that, in general, the traditional way of characterizing sizes and masses of halos dramatically overpredicts the degree of evolution in the last 10 Gyr, especially for low-mass halos. This pseudo-evolution leads to the illusion of growth even though there are no major changes within fixed physical scales. Such formal size definitions also serve as proxies for the virialized region of a halo in the literature. In general, those spherical overdensity scales do not coincide with the virialized region. A physically more precise nomenclature would be to simply characterize them by their very definition instead of calling such formal size and mass definitions 'virial'. In general, we find a discrepancy between the evolution of the underlying physical structure of dark matter halos seen in cosmological structure formation simulations and pseudo-evolving formal virial quantities. We question the importance of the role of formal virial quantities currently ubiquitously used in descriptions, models, and relations that involve properties of dark matter structures. Concepts and relations based on pseudo-evolving formal virial quantities do not properly reflect the actual evolution of dark matter halos and lead to an inaccurate picture of the physical evolution of our universe.
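For reference, a spherical-overdensity radius follows directly from its defining relation M = (4/3)πR³Δρ_crit. A sketch with illustrative numbers (the halo mass and the ρ_crit value are assumptions, not taken from the paper):

```python
import math

def overdensity_radius(mass_msun, delta, rho_crit_msun_per_mpc3):
    """Radius enclosing a mean density of delta * rho_crit:
    M = (4/3) pi R^3 delta rho_crit  =>  R = (3M / (4 pi delta rho_crit))^(1/3)."""
    return (3.0 * mass_msun /
            (4.0 * math.pi * delta * rho_crit_msun_per_mpc3)) ** (1.0 / 3.0)

# Milky-Way-like halo: 1e12 Msun at Delta = 200, rho_crit ~ 1.27e11 Msun/Mpc^3.
r200 = overdensity_radius(1e12, 200, 1.27e11)
print(round(r200, 3))  # roughly 0.21 Mpc
```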

  3. Multiple Scales in Fluid Dynamics and Meteorology: The DFG Priority Programme 1276 MetStröm

    NASA Astrophysics Data System (ADS)

    von Larcher, Th; Klein, R.

    2012-04-01

    Geophysical fluid motions are characterized by a very wide range of length and time scales, and by a rich collection of varying physical phenomena. The mathematical description of these motions reflects this multitude of scales and mechanisms in that it involves strong non-linearities and various scale-dependent singular limit regimes. Considerable progress has been made in recent years in the mathematical modelling and numerical simulation of such flows in detailed process studies, numerical weather forecasting, and climate research. One task of outstanding importance in this context has been, and will remain for the foreseeable future, the subgrid scale parameterization of the net effects of non-resolved processes that take place on spatio-temporal scales not resolvable even by the largest most recent supercomputers. Since the advent of numerical weather forecasting some 60 years ago, one simple but efficient means to achieve improved forecasting skills has been increased spatio-temporal resolution. This seems quite consistent with the concept of convergence of numerical methods in Applied Mathematics and Computational Fluid Dynamics (CFD) at first glance. Yet, the very notion of increased resolution in atmosphere-ocean science is very different from the one used in Applied Mathematics: For the mathematician, increased resolution provides the benefit of getting closer to the ideal of a converged solution of some given partial differential equations. On the other hand, the atmosphere-ocean scientist would naturally refine the computational grid and adjust the mathematical model, such that it better represents the relevant physical processes that occur at smaller scales. This conceptual contradiction remains largely irrelevant as long as geophysical flow models operate with fixed computational grids and time steps and with subgrid scale parameterizations being optimized accordingly. 
The picture changes fundamentally when modern techniques from CFD involving spatio-temporal grid adaptivity are invoked in order to further improve the net efficiency in exploiting the given computational resources. In the setting of geophysical flow simulation one must then employ subgrid scale parameterizations that dynamically adapt to the changing grid sizes and time steps, implement ways to judiciously control and steer the newly available flexibility of resolution, and invent novel ways of quantifying the remaining errors. The DFG priority programme MetStröm combines the expertise of Meteorology, Fluid Dynamics, and Applied Mathematics to develop model- as well as grid-adaptive numerical simulation concepts in multidisciplinary projects. The goal of this priority programme is to provide simulation models which combine scale-dependent (mathematical) descriptions of key physical processes with adaptive flow discretization schemes. Deterministic continuous approaches and discrete and/or stochastic closures and their possible interplay are taken into consideration. Research focuses on the theory and methodology of multiscale meteorological-fluid mechanics modelling. Accompanying reference experiments support model validation.

  4. Testing the Standard Model by precision measurement of the weak charges of quarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross Young; Roger Carlini; Anthony Thomas

    In a global analysis of the latest parity-violating electron scattering measurements on nuclear targets, we demonstrate a significant improvement in the experimental knowledge of the weak neutral-current lepton-quark interactions at low-energy. The precision of this new result, combined with earlier atomic parity-violation measurements, limits the magnitude of possible contributions from physics beyond the Standard Model - setting a model-independent, lower-bound on the scale of new physics at ~1 TeV.

  5. Physical models of polarization mode dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menyuk, C.R.; Wai, P.K.A.

    The effect of randomly varying birefringence on light propagation in optical fibers is studied theoretically in the parameter regime that will be used for long-distance communications. In this regime, the birefringence is large and varies very rapidly in comparison to the nonlinear and dispersive scale lengths. We determine the polarization mode dispersion, and we show that physically realistic models yield the same result for polarization mode dispersion as earlier heuristic models that were introduced by Poole. We also prove an ergodic theorem.

  6. How to Make Our Models More Physically-based

    NASA Astrophysics Data System (ADS)

    Savenije, H. H. G.

    2016-12-01

    Models that are generally called "physically-based" unfortunately only have a partial view of the physical processes at play in hydrology. Although the coupled partial differential equations in these models reflect the water balance equations and the flow descriptors at laboratory scale, they miss essential characteristics of what determines the functioning of catchments. The most important active agent in catchments is the ecosystem (and sometimes people). What these agents do is manipulate the substrate so that it supports the essential functions of survival and productivity: infiltration of water, retention of moisture, mobilization and retention of nutrients, and drainage. Ecosystems do this in the most efficient way, in agreement with the landscape, and in response to climatic drivers. In brief, our hydrological system is alive and has a strong capacity to adjust to prevailing and changing circumstances. Although most physically based models take Newtonian theory to heart, as best they can, what they generally miss is Darwinian thinking on how an ecosystem evolves and adjusts its environment to maintain crucial hydrological functions. If this active agent is not reflected in our models, then they miss essential physics. Through a Darwinian approach, we can determine the root zone storage capacity of ecosystems, as a crucial component of hydrological models, determining the partitioning of fluxes and the conservation of moisture to bridge periods of drought. Another crucial element of physical systems is the evolution of drainage patterns, both on and below the surface. On the surface, such patterns facilitate infiltration or surface drainage with minimal erosion; in the unsaturated zone, patterns facilitate efficient replenishment of moisture deficits and preferential drainage when there is excess moisture; in the groundwater, patterns facilitate the efficient and gradual drainage of groundwater, resulting in linear reservoir recession. 
Models that do not incorporate these patterns are not physical. The parameters in the equations may be adjusted to compensate for the lack of patterns, but this involves scale-dependent calibration. In contrast to what is widely believed, relatively simple conceptual models can accommodate these physical processes accurately and very efficiently.
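The linear-reservoir recession invoked at the end is compact enough to state in code: outflow proportional to storage, Q = kS, gives exponential storage decline S(t) = S0·exp(-kt). The storage and rate constant below are illustrative values:

```python
import math

def linear_reservoir(storage0, k, t):
    """Linear reservoir: Q = k * S implies dS/dt = -k * S,
    so storage recedes exponentially, S(t) = S0 * exp(-k * t)."""
    return storage0 * math.exp(-k * t)

# Hypothetical groundwater store: 100 mm draining with k = 0.05 per day.
print(round(linear_reservoir(100.0, 0.05, 30.0), 1))  # mm remaining after 30 days
```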

  7. Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zabaras, Nicolas J.

    2016-11-08

    Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties, and understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models, and surrogate low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.

  8. Linear discrete systems with memory: a generalization of the Langmuir model

    NASA Astrophysics Data System (ADS)

    Băleanu, Dumitru; Nigmatullin, Raoul R.

    2013-10-01

    In this manuscript we analyze a general solution of the linear nonlocal Langmuir model within time scale calculus. Several generalizations of the Langmuir model are presented together with their exact corresponding solutions. The physical meaning of the proposed models is investigated and their corresponding geometries are reported.

  9. Scaling and modeling of turbulent suspension flows

    NASA Technical Reports Server (NTRS)

    Chen, C. P.

    1989-01-01

    Scaling factors determining various aspects of particle-fluid interactions and the development of physical models to predict gas-solid turbulent suspension flow fields are discussed based on a two-fluid continuum formulation. The modes of particle-fluid interaction are discussed based on the length and time scale ratio, which depends on the properties of the particles and the characteristics of the flow turbulence. For particle sizes smaller than or comparable with the Kolmogorov length scale and concentrations low enough to neglect direct particle-particle interaction, scaling rules can be established in various parameter ranges. The various particle-fluid interactions give rise to additional mechanisms which affect the fluid mechanics of the conveying gas phase. These extra mechanisms are incorporated into a turbulence modeling method based on the scaling rules. A multiple-scale two-phase turbulence model is developed, which gives reasonable predictions for dilute suspension flow. Much work still needs to be done to account for polydisperse effects and the extension to dense suspension flows.
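
    The particle-to-fluid time-scale ratio this record refers to is commonly expressed as a Stokes number. A hedged sketch (the function and example values are illustrative, not taken from the paper):

```python
def stokes_number(rho_p, d_p, mu, tau_k):
    """Ratio of particle response time to the Kolmogorov time scale.

    tau_p = rho_p * d_p**2 / (18 * mu) is the Stokes relaxation time of a
    small sphere in a fluid of dynamic viscosity mu; St = tau_p / tau_k.
    St << 1: the particle follows even the smallest turbulent eddies.
    St >> 1: the particle responds only to the large-scale motions.
    """
    tau_p = rho_p * d_p ** 2 / (18.0 * mu)
    return tau_p / tau_k

# Illustrative case: a 50-micron glass bead (2500 kg/m^3) in air
# (mu = 1.8e-5 Pa s) with a Kolmogorov time scale of 1 ms.
st = stokes_number(2500.0, 50e-6, 1.8e-5, 1e-3)
```

    Because tau_p grows with the square of particle diameter, the same flow can sit in very different interaction regimes for different particle sizes, which is why the abstract stresses scale-ratio-dependent modeling.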

  10. Calibration of Noah soil hydraulic property parameters using surface soil moisture from SMOS and basin-wide in situ observations

    USDA-ARS?s Scientific Manuscript database

    Soil hydraulic properties can be retrieved from physical sampling of soil, via surveys, but this is time consuming and only as accurate as the scale of the sample. Remote sensing provides an opportunity to get pertinent soil properties at large scales, which is very useful for large scale modeling....

  11. New Physical Algorithms for Downscaling SMAP Soil Moisture

    NASA Astrophysics Data System (ADS)

    Sadeghi, M.; Ghafari, E.; Babaeian, E.; Davary, K.; Farid, A.; Jones, S. B.; Tuller, M.

    2017-12-01

    The NASA Soil Moisture Active Passive (SMAP) mission provides new means for estimation of surface soil moisture at the global scale. However, for many hydrological and agricultural applications the spatial SMAP resolution is too low. To address this scale issue we fused SMAP data with MODIS observations to generate soil moisture maps at 1-km spatial resolution. In the course of this study we have improved several existing empirical algorithms and introduced a new physical approach for downscaling SMAP data. The universal triangle/trapezoid model was applied to relate soil moisture to optical/thermal observations such as NDVI, land surface temperature and surface reflectance. These algorithms were evaluated with in situ data measured at 5-cm depth. Our results demonstrate that downscaling SMAP soil moisture data based on physical indicators of soil moisture derived from the MODIS satellite leads to higher accuracy than that achievable with empirical downscaling algorithms. Keywords: Soil moisture, microwave data, downscaling, MODIS, triangle/trapezoid model.
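
    As a rough illustration of the triangle/trapezoid idea, a pixel's wetness can be indexed by where its land surface temperature falls between the dry and wet edges of the optical/thermal feature space. The following 1-D simplification is an assumption for illustration only, not the authors' downscaling algorithm:

```python
def trapezoid_soil_moisture(lst, lst_dry, lst_wet):
    """Normalized soil-moisture proxy from the LST axis of a
    triangle/trapezoid feature space (a simplified 1-D form).

    lst_dry and lst_wet are the dry- and wet-edge land surface
    temperatures (in kelvin) for the pixel's vegetation-cover class.
    The returned value lies in [0, 1]: 0 at the dry edge, 1 at the
    wet edge, and can be used to redistribute a coarse retrieval
    across finer pixels.
    """
    theta = (lst_dry - lst) / (lst_dry - lst_wet)
    return min(1.0, max(0.0, theta))  # clamp to the physical range
```

    In the full model the edges themselves vary with vegetation cover (e.g. NDVI), turning this 1-D interpolation into the familiar two-dimensional triangle or trapezoid.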

  12. The biology and polymer physics underlying large-scale chromosome organization.

    PubMed

    Sazer, Shelley; Schiessel, Helmut

    2018-02-01

    Chromosome large-scale organization is a beautiful example of the interplay between physics and biology. DNA molecules are polymers and thus belong to the class of molecules for which physicists have developed models and formulated testable hypotheses to understand their arrangement and dynamic properties in solution, based on the principles of polymer physics. Biologists documented and discovered the biochemical basis for the structure, function and dynamic spatial organization of chromosomes in cells. The underlying principles of chromosome organization have recently been revealed in unprecedented detail using high-resolution chromosome capture technology that can simultaneously detect chromosome contact sites throughout the genome. These independent lines of investigation have now converged on a model in which DNA loops, generated by the loop extrusion mechanism, are the basic organizational and functional units of the chromosome. © 2017 The Authors. Traffic published by John Wiley & Sons Ltd.

  13. Maxwell Prize Talk: Scaling Laws for the Dynamical Plasma Phenomena

    NASA Astrophysics Data System (ADS)

    Ryutov, D. D. (Livermore, CA 94550, USA)

    2017-10-01

    The scaling and similarity technique is a powerful tool for developing and testing reduced models of complex phenomena, including plasma phenomena. The technique has been successfully used in identifying appropriate simplified models of transport in quasistationary plasmas. In this talk, the similarity and scaling arguments will be applied to highly dynamical systems, in which temporal evolution of the plasma leads to a significant change of plasma dimensions, shapes, densities, and other parameters with respect to the initial state. The scaling and similarity techniques for dynamical plasma systems will be presented as a set of case studies of problems from various domains of the plasma physics, beginning with collisionless plasmas, through intermediate collisionalities, to highly collisional plasmas describable by the single-fluid MHD. Basic concepts of the similarity theory will be introduced along the way. Among the results discussed are: self-similarity of Langmuir turbulence driven by a hot electron cloud expanding into a cold background plasma; generation of particle beams in disrupting pinches; interference between collisionless and collisional phenomena in the shock physics; similarity for liner-imploded plasmas; MHD similarities with an emphasis on the effect of small-scale (turbulent) structures on global dynamics. Relations between astrophysical phenomena and scaled laboratory experiments will be discussed.

  14. On the Subgrid-Scale Modeling of Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Squires, Kyle; Zeman, Otto

    1990-01-01

    A new sub-grid scale model is presented for the large-eddy simulation of compressible turbulence. In the proposed model, compressibility contributions have been incorporated in the sub-grid scale eddy viscosity which, in the incompressible limit, reduces to a form originally proposed by Smagorinsky (1963). The model has been tested against a simple extension of the traditional Smagorinsky eddy viscosity model using simulations of decaying, compressible homogeneous turbulence. Simulation results show that the proposed model provides greater dissipation of the compressive modes of the resolved-scale velocity field than does the Smagorinsky eddy viscosity model. For an initial r.m.s. turbulence Mach number of 1.0, simulations performed using the Smagorinsky model become physically unrealizable (i.e., negative energies) because of the inability of the model to sufficiently dissipate fluctuations due to resolved scale velocity dilations. The proposed model is able to provide the necessary dissipation of this energy and maintain the realizability of the flow. Following Zeman (1990), turbulent shocklets are considered to dissipate energy independent of the Kolmogorov energy cascade. A possible parameterization of dissipation by turbulent shocklets for Large-Eddy Simulation is also presented.
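
    For context, the incompressible-limit form that the proposed model reduces to is the classic Smagorinsky eddy viscosity, nu_t = (Cs * Delta)^2 |S|. A minimal sketch (the parameter values below are illustrative assumptions, not from the paper):

```python
def smagorinsky_viscosity(cs, delta, strain_rate_mag):
    """Smagorinsky (1963) sub-grid scale eddy viscosity.

    nu_t = (Cs * Delta)**2 * |S|, where Cs is the model constant,
    Delta the filter width, and |S| the magnitude of the resolved
    strain-rate tensor. The compressible extension discussed in the
    abstract adds dissipation of dilatational (compressive) modes on
    top of this incompressible-limit form.
    """
    return (cs * delta) ** 2 * strain_rate_mag

# Illustrative values: Cs = 0.17, filter width 1 cm, |S| = 100 1/s.
nu_t = smagorinsky_viscosity(0.17, 0.01, 100.0)
```

    The sub-grid stress is then modeled as proportional to nu_t times the resolved strain rate, so any deficiency of nu_t in dissipating compressive modes shows up directly as unphysical energy growth, which is the failure mode the abstract reports for the unmodified model.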

  15. Time scale variations of the CIV resonance lines in HD 24534

    NASA Astrophysics Data System (ADS)

    Tsatsi, A.

    2012-01-01

    Many lines in the spectra of hot emission stars (Be and Oe) present peculiar and very complex profiles. As a result, we cannot find a classical theoretical distribution to fit these profiles, and many physical parameters of the regions where these lines are created are therefore difficult to determine. In this paper, we adopt the Gauss-Rotation model (GR model), which proposes that these complex profiles consist of a number of independent Discrete or Satellite Absorption Components (DACs, SACs). The model is applied to the CIV (λλ 1548.187, 1550.772 Å) resonance lines in the spectra of HD 24534 (X Persei), taken by IUE at three different periods. From this analysis we can calculate the values of a group of physical parameters, such as the apparent rotational and radial velocities, the random velocities of the thermal motions of the ions, as well as the Full Width at Half Maximum (FWHM) and the absorbed energy of the independent regions of matter which produce the main and the satellite components of the studied spectral lines. Finally, we calculate the time scale variation of the above physical parameters.

  16. Computational Cosmology: From the Early Universe to the Large Scale Structure.

    PubMed

    Anninos, Peter

    2001-01-01

    In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations (and numerical methods) applied to specific issues in cosmology are reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusively, is on those calculations designed to test different models of cosmology against the observed Universe.

  17. Computational Cosmology: from the Early Universe to the Large Scale Structure.

    PubMed

    Anninos, Peter

    1998-01-01

    In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations addressing specific issues in cosmology are reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusively, is on those calculations designed to test different models of cosmology against the observed Universe.

  18. Some predictions of the attached eddy model for a high Reynolds number boundary layer.

    PubMed

    Nickels, T B; Marusic, I; Hafez, S; Hutchins, N; Chong, M S

    2007-03-15

    Many flows of practical interest occur at high Reynolds number, at which the flow in most of the boundary layer is turbulent, showing apparently random fluctuations in velocity across a wide range of scales. The range of scales over which these fluctuations occur increases with the Reynolds number and hence high Reynolds number flows are difficult to compute or predict. In this paper, we discuss the structure of these flows and describe a physical model, based on the attached eddy hypothesis, which makes predictions for the statistical properties of these flows and their variation with Reynolds number. The predictions are shown to compare well with the results from recent experiments in a new purpose-built high Reynolds number facility. The model is also shown to provide a clear physical explanation for the trends in the data. The limits of applicability of the model are also discussed.

  19. From Wake Steering to Flow Control

    DOE PAGES

    Fleming, Paul A.; Annoni, Jennifer; Churchfield, Matthew J.; ...

    2017-11-22

    In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies based on new control-oriented models with new physics, which target total flow control over wake redirection, may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.

  20. Hostility and quality of life among Hispanics/Latinos in the HCHS/SOL Sociocultural Ancillary Study.

    PubMed

    Moncrieft, Ashley E; Llabre, Maria M; Gallo, Linda C; Cai, Jianwen; Gonzalez, Franklyn; Gonzalez, Patricia; Ostrovsky, Natania W; Schneiderman, Neil; Penedo, Frank J

    2016-11-01

    The purpose of this study was to determine if hostility is associated with physical and mental health-related quality of life (QoL) in U.S. Hispanics/Latinos after accounting for depression and anxiety. Analyses included 5313 adults (62% women, 18-75 years) who completed the ancillary sociocultural assessment of the Hispanic Community Health Study/Study of Latinos. Participants completed the Center for Epidemiological Studies Depression Scale, Spielberger Trait Anxiety Scale, Spielberger Trait Anger Scale, Cook-Medley Hostility cynicism subscale and Short Form Health Survey. In a structural regression model, associations of hostility with mental and physical QoL were examined. In a model adjusting for age, sex, disease burden, income, education and years in the U.S., hostility was related to worse mental QoL, and was marginally associated with worse physical QoL. However, when adjusting for the influence of depression and anxiety, greater hostility was associated with better mental QoL, and was not associated with physical QoL. Results indicate observed associations between hostility and QoL are confounded by symptoms of anxiety and depression, and suggest hostility is independently associated with better mental QoL in this population. Findings also highlight the importance of differentiating shared and unique associations of specific emotions with health outcomes.

  1. Perceived neighborhood problems: multilevel analysis to evaluate psychometric properties in a Southern adult Brazilian population

    PubMed Central

    2013-01-01

    Background Physical attributes of the places in which people live, as well as their perceptions of them, may be important health determinants. The perception of the place in which people dwell may impact individual health and may be a more telling indicator for individual health than objective neighborhood characteristics. This paper aims to evaluate psychometric and ecometric properties of a scale on the perceptions of neighborhood problems in adults from Florianopolis, Southern Brazil. Methods Individual, census tract level (per capita monthly familiar income) and neighborhood problems perception (physical and social disorders) variables were investigated. Multilevel models (items nested within persons, persons nested within neighborhoods) were run to assess ecometric properties of variables assessing neighborhood problems. Results The response rate was 85.3% (1,720 adults). Participants were distributed in 63 census tracts. Two scales were identified using 16 items: Physical Problems and Social Disorder. The ecometric properties of the scales were satisfactory: 0.24 to 0.28 for the intra-class correlation and 0.94 to 0.96 for reliability. Higher values on the scales of problems in the physical and social domains were associated with younger age, greater length of time residing in the same neighborhood and lower census tract income level. Conclusions The findings support the usefulness of these scales to measure physical and social disorder problems in neighborhoods. PMID:24256619

  2. Mapping Soil Age at Continental Scales

    NASA Astrophysics Data System (ADS)

    Slessarev, E.; Feng, X.

    2017-12-01

    Soil age controls the balance between weathered and unweathered minerals in soil, and thus strongly influences many of the biological, geochemical, and hydrological functions of the critical zone. However, most quantitative models of soil development do not represent soil age. Instead, they rely on a steady-state assumption: physical erosion controls the residence time of unweathered minerals in soil, and thus fixes the chemical weathering rate. This assumption may hold true in mountainous landscapes, where physical erosion rates are high. However, the steady-state assumption may fail in low-relief landscapes, where physical erosion rates have been insufficient to remove unweathered minerals left by glaciation and dust deposition since the Last Glacial Maximum (LGM). To test the applicability of the steady-state assumption at continental scales, we developed an empirical predictor for physical erosion, and then simulated soil development since LGM with a numerical model. We calibrated the physical erosion predictor using a compilation of watershed-scale sediment yield data, and in-situ 10Be denudation measurements corrected for weathering by Zr/Ti mass-balance. Physical erosion rates can be predicted using a power-law function of local relief and peak ground acceleration, a proxy for tectonic activity. Coupling physical erosion rates with the numerical model reveals that extensive low-relief areas of North America may depart from steady-state because they were glaciated, or received high dust fluxes during LGM. These LGM legacy effects are reflected in topsoil Ca:Al and Quartz:Feldspar ratios derived from United States Geological Survey data, and in a global compilation of soil pH measurements. Our results quantitatively support the classic idea that soils in the mid-high latitudes of the Northern Hemisphere are "young", in the sense that they are undergoing transient response to LGM conditions. 
Where they occur, such departures from steady-state likely increase mineral weathering rates and the supply of rock-derived nutrients to ecosystems.
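
    The empirical erosion predictor described in this record can be sketched as a power law in local relief and peak ground acceleration. The exponents and prefactor below are placeholders for illustration, not the calibrated values from the study:

```python
def erosion_rate(relief, pga, k=1.0, a=1.0, b=1.0):
    """Power-law physical erosion predictor of the general form
    E = k * relief**a * pga**b, where relief is local relief and
    pga is peak ground acceleration (a proxy for tectonic activity).
    k, a and b are placeholder parameters, not the authors' fits.
    """
    return k * relief ** a * pga ** b
```

    In the steady-state test, a predictor of this form supplies the physical erosion rate that fixes mineral residence time; where predicted erosion is too slow to have removed post-LGM glacial and dust inputs, the steady-state assumption breaks down.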

  3. Physics Beyond the Standard Model: Exotic Leptons and Black Holes at Future Colliders

    NASA Astrophysics Data System (ADS)

    Harris, Christopher M.

    2005-02-01

    The Standard Model of particle physics has been remarkably successful in describing present experimental results. However, it is assumed to be only a low-energy effective theory which will break down at higher energy scales, theoretically motivated to be around 1 TeV. There are a variety of proposed models of new physics beyond the Standard Model, most notably supersymmetric and extra dimension models. New charged and neutral heavy leptons are a feature of a number of theories of new physics, including the 'intermediate scale' class of supersymmetric models. Using a time-of-flight technique to detect the charged leptons at the Large Hadron Collider, the discovery range (in the particular scenario studied in the first part of this thesis) is found to extend up to masses of 950 GeV. Extra dimension models, particularly those with large extra dimensions, allow the possible experimental production of black holes. The remainder of the thesis describes some theoretical results and computational tools necessary to model the production and decay of these miniature black holes at future particle colliders. The grey-body factors which describe the Hawking radiation emitted by higher-dimensional black holes are calculated numerically for the first time and then incorporated in a Monte Carlo black hole event generator; this can be used to model black hole production and decay at next-generation colliders. It is hoped that this generator will allow more detailed examination of black hole signatures and help to devise a method for extracting the number of extra dimensions present in nature.

  4. A boundary condition for layer to level ocean model interaction

    NASA Astrophysics Data System (ADS)

    Mask, A.; O'Brien, J.; Preller, R.

    2003-04-01

    A radiation boundary condition based on vertical normal modes is introduced to allow a physical transition between nested/coupled ocean models that are of differing vertical structure and/or differing physics. In this particular study, a fine resolution regional/coastal sigma-coordinate Naval Coastal Ocean Model (NCOM) has been successfully nested to a coarse resolution (in the horizontal and vertical) basin scale NCOM and a coarse resolution basin scale Navy Layered Ocean Model (NLOM). Both of these models were developed at the Naval Research Laboratory (NRL) at Stennis Space Center, Mississippi, USA. This new method, which decomposes the vertical structure of the models into barotropic and baroclinic modes, gives improved results in the coastal domain over Orlanski radiation boundary conditions for the test cases. The principal reason for the improvement is that each mode has the radiation boundary condition applied individually; therefore, the packet of information passing through the boundary is allowed to have multiple phase speeds instead of a single phase speed. Allowing multiple phase speeds reduces boundary reflections, thus improving results.

  5. WE-DE-202-00: Connecting Radiation Physics with Computational Biology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Radiation therapy for the treatment of cancer has been established as a highly precise and effective way to eradicate a localized region of diseased tissue. To achieve further significant gains in the therapeutic ratio, we need to move towards biologically optimized treatment planning. To achieve this goal, we need to understand how the radiation-type dependent patterns of induced energy depositions within the cell (physics) connect via molecular, cellular and tissue reactions to treatment outcome such as tumor control and undesirable effects on normal tissue. Several computational biology approaches have been developed connecting physics to biology. Monte Carlo simulations are the most accurate method to calculate physical dose distributions at the nanometer scale; however, simulations at the DNA scale are slow and repair processes are generally not simulated. Alternative models that rely on the random formation of individual DNA lesions within one or two turns of the DNA have been shown to reproduce the clusters of DNA lesions, including single strand breaks (SSBs) and double strand breaks (DSBs), without the need for detailed track structure simulations. Efficient computational simulations of initial DNA damage induction facilitate computational modeling of DNA repair and other molecular and cellular processes. Mechanistic, multiscale models provide a useful conceptual framework to test biological hypotheses and help connect fundamental information about track structure and dosimetry at the sub-cellular level to dose-response effects on larger scales. In this symposium we will learn about the current state of the art of computational approaches estimating radiation damage at the cellular and sub-cellular scale. How can understanding the physics interactions at the DNA level be used to predict biological outcome?
We will discuss if and how such calculations are relevant to advancing our understanding of radiation damage and its repair, or if the underlying biological processes are too complex for a mechanistic approach. Can computer simulations be used to guide future biological research? We will debate the feasibility of explaining biology from a physicist's perspective. Learning Objectives: (1) understand the potential applications and limitations of computational methods for dose-response modeling at the molecular, cellular and tissue levels; (2) learn about the mechanisms of action underlying the induction, repair and biological processing of damage to DNA and other constituents; (3) understand how effects and processes at one biological scale impact biological processes and outcomes at other scales. Funding: J. Schuemann, NCI/NIH grants; S. McMahon, European Commission FP7 (grant EC FP7 MC-IOF-623630).

  6. WE-DE-202-01: Connecting Nanoscale Physics to Initial DNA Damage Through Track Structure Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuemann, J.

    Radiation therapy for the treatment of cancer has been established as a highly precise and effective way to eradicate a localized region of diseased tissue. To achieve further significant gains in the therapeutic ratio, we need to move towards biologically optimized treatment planning. To achieve this goal, we need to understand how the radiation-type dependent patterns of induced energy depositions within the cell (physics) connect via molecular, cellular and tissue reactions to treatment outcome such as tumor control and undesirable effects on normal tissue. Several computational biology approaches have been developed connecting physics to biology. Monte Carlo simulations are the most accurate method to calculate physical dose distributions at the nanometer scale; however, simulations at the DNA scale are slow and repair processes are generally not simulated. Alternative models that rely on the random formation of individual DNA lesions within one or two turns of the DNA have been shown to reproduce the clusters of DNA lesions, including single strand breaks (SSBs) and double strand breaks (DSBs), without the need for detailed track structure simulations. Efficient computational simulations of initial DNA damage induction facilitate computational modeling of DNA repair and other molecular and cellular processes. Mechanistic, multiscale models provide a useful conceptual framework to test biological hypotheses and help connect fundamental information about track structure and dosimetry at the sub-cellular level to dose-response effects on larger scales. In this symposium we will learn about the current state of the art of computational approaches estimating radiation damage at the cellular and sub-cellular scale. How can understanding the physics interactions at the DNA level be used to predict biological outcome?
We will discuss if and how such calculations are relevant to advancing our understanding of radiation damage and its repair, or if the underlying biological processes are too complex for a mechanistic approach. Can computer simulations be used to guide future biological research? We will debate the feasibility of explaining biology from a physicist's perspective. Learning Objectives: (1) understand the potential applications and limitations of computational methods for dose-response modeling at the molecular, cellular and tissue levels; (2) learn about the mechanisms of action underlying the induction, repair and biological processing of damage to DNA and other constituents; (3) understand how effects and processes at one biological scale impact biological processes and outcomes at other scales. Funding: J. Schuemann, NCI/NIH grants; S. McMahon, European Commission FP7 (grant EC FP7 MC-IOF-623630).

  7. Physics of chewing in terrestrial mammals.

    PubMed

    Virot, Emmanuel; Ma, Grace; Clanet, Christophe; Jung, Sunghwan

    2017-03-07

    Previous studies on chewing frequency across animal species have focused on finding a single universal scaling law. Controversy between the different models has been aroused without elucidating the variations in chewing frequency. In the present study we show that vigorous chewing is limited by the maximum force of muscle, so that the upper chewing frequency scales as the -1/3 power of body mass for large animals and as a constant frequency for small animals. On the other hand, gentle chewing to mix food uniformly without excess of saliva describes the lower limit of chewing frequency, scaling approximately as the -1/6 power of body mass. These physical constraints frame the -1/4 power law classically inferred from allometry of animal metabolic rates. All of our experimental data stay within these physical boundaries over six orders of magnitude of body mass regardless of food types.
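
    The quoted scaling laws imply simple chewing-frequency ratios between animals of different body mass, independent of the (unstated) prefactors. A sketch for illustration only; the exponents come from the abstract, everything else is assumed:

```python
def frequency_ratio(mass_a, mass_b, exponent):
    """Predicted chewing-frequency ratio f(a) / f(b) for a power law
    f proportional to M**exponent. Per the abstract, use -1/3 for the
    vigorous (upper) limit and about -1/6 for the gentle (lower) limit;
    the classical allometric value is -1/4, framed by the two bounds.
    """
    return (mass_a / mass_b) ** exponent

# A 1000 kg animal vs a 1 kg animal under the upper-bound scaling:
ratio = frequency_ratio(1000.0, 1.0, -1.0 / 3.0)
```

    Note that the prefactors cancel in the ratio, so these comparisons need only the exponent, which is what the experimental data across six orders of magnitude of body mass constrain.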

  8. Physics of chewing in terrestrial mammals

    NASA Astrophysics Data System (ADS)

    Virot, Emmanuel; Ma, Grace; Clanet, Christophe; Jung, Sunghwan

    2017-03-01

    Previous studies on chewing frequency across animal species have focused on finding a single universal scaling law. Controversy between the different models has been aroused without elucidating the variations in chewing frequency. In the present study we show that vigorous chewing is limited by the maximum force of muscle, so that the upper chewing frequency scales as the -1/3 power of body mass for large animals and as a constant frequency for small animals. On the other hand, gentle chewing to mix food uniformly without excess of saliva describes the lower limit of chewing frequency, scaling approximately as the -1/6 power of body mass. These physical constraints frame the -1/4 power law classically inferred from allometry of animal metabolic rates. All of our experimental data stay within these physical boundaries over six orders of magnitude of body mass regardless of food types.

  9. Singularity-free next-to-leading order ΔS = 1 renormalization group evolution and ε′_K/ε_K in the Standard Model and beyond

    NASA Astrophysics Data System (ADS)

    Kitahara, Teppei; Nierste, Ulrich; Tremper, Paul

    2016-12-01

    The standard analytic solution of the renormalization group (RG) evolution for the ΔS = 1 Wilson coefficients involves several singularities, which complicate analytic solutions. In this paper we derive a singularity-free solution of the next-to-leading order (NLO) RG equations, which greatly facilitates the calculation of ε′_K, the measure of direct CP violation in K → ππ decays. Using our new RG evolution and the latest lattice results for the hadronic matrix elements, we calculate the ratio ε′_K/ε_K (with ε_K quantifying indirect CP violation) in the Standard Model (SM) at NLO to ε′_K/ε_K = (1.06 ± 5.07) × 10⁻⁴, which is 2.8σ below the experimental value. We also present the evolution matrix in the high-energy regime for calculations of new-physics contributions and derive easy-to-use approximate formulae. We find that the RG amplification of new-physics contributions to Wilson coefficients of the electroweak penguin operators is further enhanced by the NLO corrections: if the new contribution is generated at the scale of 1-10 TeV, the RG evolution between the new-physics scale and the electroweak scale enhances these coefficients by 50-100%. Our solution contains a term of order α_EM²/α_s², which is numerically unimportant in the SM case but should be included in studies of high-scale new physics.

  10. Analytical model of flame spread in full-scale room/corner tests (ISO9705)

    Treesearch

    Mark Dietenberger; Ondrej Grexa

    1999-01-01

    A physical, yet analytical, model of fire growth has predicted flame spread and rate of heat release (RHR) for an ISO9705 test scenario using bench-scale data from the cone calorimeter. The test scenario simulated was the propane ignition burner at the corner with a 100/300 kW program and the specimen lined on the walls only. Four phases of fire growth were simulated....

  11. On Efficient Multigrid Methods for Materials Processing Flows with Small Particles

    NASA Technical Reports Server (NTRS)

    Thomas, James (Technical Monitor); Diskin, Boris; Harik, Vasyl Michael

    2004-01-01

    Multiscale modeling of materials requires simulations of multiple levels of structural hierarchy. The computational efficiency of numerical methods becomes a critical factor for simulating large physical systems with highly disparate length scales. Multigrid methods are known for their superior efficiency in representing/resolving different levels of physical details. The efficiency is achieved by employing different discretizations interactively on different scales (grids). To assist optimization of manufacturing conditions for materials processing with numerous particles (e.g., dispersion of particles, controlling flow viscosity and clusters), a new multigrid algorithm has been developed for a case of multiscale modeling of flows with small particles that have various length scales. The optimal efficiency of the algorithm is crucial for accurate predictions of the effect of processing conditions (e.g., pressure and velocity gradients) on the local flow fields that control the formation of various microstructures or clusters.
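
    The efficiency claim — solving to discretization accuracy in O(N) work by combining fine-grid smoothing with coarse-grid corrections — can be illustrated on the simplest possible case. The sketch below is a generic 1-D Poisson V-cycle (weighted Jacobi smoothing, full-weighting restriction, linear interpolation), not the particle-flow algorithm developed in the paper:

```python
import numpy as np

# Generic 1-D Poisson V-cycle: -u'' = f on (0,1), u(0) = u(1) = 0,
# n interior points, h = 1/(n+1).  Illustrative only.

def residual(u, f, h):
    """r = f - A u with A = tridiag(-1, 2, -1) / h^2."""
    left = np.concatenate(([0.0], u[:-1]))
    right = np.concatenate((u[1:], [0.0]))
    return f - (2.0 * u - left - right) / h**2

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted Jacobi smoother (damps high-frequency error)."""
    for _ in range(sweeps):
        u = u + w * (h**2 / 2.0) * residual(u, f, h)
    return u

def restrict(r):
    """Full-weighting restriction: fine n = 2*nc + 1 -> coarse nc."""
    return 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(ec, n):
    """Linear interpolation of the coarse-grid correction."""
    e = np.zeros(n)
    e[1::2] = ec
    e[0] = 0.5 * ec[0]
    e[-1] = 0.5 * ec[-1]
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h, nu1=3, nu2=3):
    n = len(u)
    if n <= 3:  # coarsest grid: solve directly
        A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
        return np.linalg.solve(A, f)
    u = jacobi(u, f, h, nu1)                                  # pre-smooth
    ec = v_cycle(np.zeros((n - 1) // 2),
                 restrict(residual(u, f, h)), 2.0 * h, nu1, nu2)
    u = u + prolong(ec, n)                                    # correct
    return jacobi(u, f, h, nu2)                               # post-smooth

n, h = 63, 1.0 / 64.0
f = np.sin(np.pi * np.arange(1, n + 1) * h)
u = np.zeros(n)
for _ in range(8):
    u = v_cycle(u, f, h)
rel_res = np.linalg.norm(residual(u, f, h)) / np.linalg.norm(f)
```

    Eight V(3,3) cycles reduce the relative residual by many orders of magnitude at O(n) cost per cycle, which is the efficiency property the abstract refers to.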

  12. Quantum metabolism explains the allometric scaling of metabolic rates.

    PubMed

    Demetrius, Lloyd; Tuszynski, J A

    2010-03-06

    A general model explaining the origin of allometric laws of physiology is proposed based on coupled energy-transducing oscillator networks embedded in a physical d-dimensional space (d = 1, 2, 3). This approach integrates Mitchell's theory of chemi-osmosis with the Debye model of the thermal properties of solids. We derive a scaling rule that relates the energy generated by redox reactions in cells, the dimensionality of the physical space and the mean cycle time. Two major regimes are found corresponding to classical and quantum behaviour. The classical behaviour leads to allometric isometry while the quantum regime leads to scaling laws relating metabolic rate and body size that cover a broad range of exponents that depend on dimensionality and specific parameter values. The regimes are consistent with a range of behaviours encountered in micelles, plants and animals and provide a conceptual framework for a theory of the metabolic function of living systems.

  13. Inflatable Dark Matter.

    PubMed

    Davoudiasl, Hooman; Hooper, Dan; McDermott, Samuel D

    2016-01-22

    We describe a general scenario, dubbed "inflatable dark matter," in which the density of dark matter particles can be reduced through a short period of late-time inflation in the early Universe. The overproduction of dark matter that is predicted within many otherwise well-motivated models of new physics can be elegantly remedied within this context. Thermal relics that would otherwise be disfavored can easily be accommodated within this class of scenarios, including dark matter candidates that are very heavy or very light. Furthermore, the nonthermal abundance of grand unified theory or Planck scale axions can be brought to acceptable levels without invoking anthropic tuning of initial conditions. A period of late-time inflation could have occurred over a wide range of scales from ∼MeV to the weak scale or above, and could have been triggered by physics within a hidden sector, with small but not necessarily negligible couplings to the standard model.
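
    A rough feel for the mechanism: a frozen-out relic number density redshifts as a⁻³, so N e-folds of late-time inflation dilute it by e^(-3N). A minimal back-of-envelope sketch (ignoring model-dependent entropy production at reheating):

```python
import math

# Relic number densities redshift as a^-3, so N e-folds of late-time
# inflation suppress a pre-existing abundance by exp(-3N).  Entropy
# released at reheating adds further model-dependent dilution, ignored here.
def dilution_factor(n_efolds):
    return math.exp(-3.0 * n_efolds)

def efolds_needed(overproduction):
    """e-folds of late-time inflation needed to dilute an
    overabundance factor back to unity."""
    return math.log(overproduction) / 3.0
```

    For example, a relic overproduced by a factor of 10⁶ is brought back to its target abundance by only about 4.6 e-folds, which is why a short inflationary period suffices in this scenario.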

  14. Regional and climate forcing on forage fish and apex predators in the California Current: new insights from a fully coupled ecosystem model.

    NASA Astrophysics Data System (ADS)

    Fiechter, J.; Rose, K.; Curchitser, E. N.; Huckstadt, L. A.; Costa, D. P.; Hedstrom, K.

    2016-12-01

    A fully coupled ecosystem model is used to describe the impact of regional and climate variability on changes in abundance and distribution of forage fish and apex predators in the California Current Large Marine Ecosystem. The ecosystem model consists of a biogeochemical submodel (NEMURO) embedded in a regional ocean circulation submodel (ROMS), and both coupled with a multi-species individual-based submodel for two forage fish species (sardine and anchovy) and one apex predator (California sea lion). Sardine and anchovy are specifically included in the model as they exhibit significant interannual and decadal variability in population abundances, and are commonly found in the diet of California sea lions. Output from the model demonstrates how regional-scale (i.e., upwelling intensity) and basin-scale (i.e., PDO and ENSO signals) physical processes control species distributions and predator-prey interactions on interannual time scales. The results also illustrate how variability in environmental conditions leads to the formation of seasonal hotspots where prey and predator spatially overlap. While specifically focused on sardine, anchovy and sea lions, the modeling framework presented here can provide new insights into the physical and biological mechanisms controlling trophic interactions in the California Current, or other regions where similar end-to-end ecosystem models may be implemented.

  15. Application of Buckingham π theorem for scaling-up oriented fast modelling of Proton Exchange Membrane Fuel Cell impedance

    NASA Astrophysics Data System (ADS)

    Russo, Luigi; Sorrentino, Marco; Polverino, Pierpaolo; Pianese, Cesare

    2017-06-01

    This work focuses on the development of a fast PEMFC impedance model, built starting from both physical and geometrical variables. Buckingham's π theorem is proposed to define non-dimensional parameters that suitably describe the relationships linking the physical variables involved in the process under study to the fundamental dimensions. This approach is a useful solution for problems whose first-principles models are unknown, difficult to build or computationally unfeasible. The key contributions of the proposed similarity theory-based modelling approach are presented and discussed. The major advantage resides in its straightforward online applicability, thanks to a very low computational burden, while preserving a good level of accuracy. This makes the model suitable for several purposes, such as design, control, diagnostics, state of health monitoring and prognostics. Experimental data, collected in different operating conditions, have been analysed to demonstrate the capability of the model to reproduce PEMFC impedance at different loads and temperatures. This results in a reduction of the experimental effort for fuel cell system (FCS) lab characterization. Moreover, the possibility of using the model for scaling-up purposes, reproducing the full stack impedance from the single-cell one, is highlighted, thus supporting fuel cell design and development from lab scale to commercial system scale.
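
    The mechanics of Buckingham's π theorem are easy to demonstrate: write the variables' dimensional exponents as columns of a matrix; the number of independent dimensionless groups equals the number of variables minus the matrix rank, and a candidate group is dimensionless exactly when its exponent vector lies in the nullspace. The sketch below uses the classic drag-force example, since the abstract does not list the fuel-cell variables actually used:

```python
import numpy as np

# Dimensional matrix: one column per physical variable, one row per base
# dimension (M, L, T).  The PEMFC variables of the paper are not given
# in the abstract, so the classic drag-force problem stands in, purely
# to illustrate the mechanics of the theorem.
#                F   rho    v    L   mu
D = np.array([[  1,   1,    0,   0,   1],    # mass   M
              [  1,  -3,    1,   1,  -1],    # length L
              [ -2,   0,   -1,   0,  -1]])   # time   T

n_vars = D.shape[1]
rank = np.linalg.matrix_rank(D)
n_groups = n_vars - rank   # Buckingham pi: number of independent groups

# A monomial product of the variables is dimensionless exactly when its
# exponent vector lies in the nullspace of D:
pi_1 = np.array([1, -1, -2, -2,  0])  # F / (rho v^2 L^2): drag coefficient
pi_2 = np.array([0,  1,  1,  1, -1])  # rho v L / mu: Reynolds number
```

    Here `n_groups` comes out as 2 and both candidate vectors satisfy `D @ pi == 0`, recovering the familiar reduction of the drag problem to a drag coefficient as a function of Reynolds number; the paper applies the same reduction to the impedance variables.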

  16. Characterisation of physical environmental factors on an intertidal sandflat, Manukau Harbour, New Zealand

    USGS Publications Warehouse

    Bell, R.G.; Hume, T.M.; Dolphin, T.J.; Green, M.O.; Walters, R.A.

    1997-01-01

    Physical environmental factors, including sediment characteristics, inundation time, tidal currents and wind waves, likely to influence the structure of the benthic community at meso-scales (1-100 m) were characterised for a sandflat off Wiroa Island (Manukau Harbour, New Zealand). In a 500 x 250 m study site, sediment characteristics and bed topography were mostly homogeneous apart from patches of low-relief ridges and runnels. Field measurements and hydrodynamic modelling portray a complex picture of sediment or particulate transport on the intertidal flat, involving interactions between the larger scale tidal processes and the smaller scale wave dynamics (1-4 s; 1-15 m). Peak tidal currents in isolation are incapable of eroding bottom sediments, but in combination with near-bed orbital currents generated by only very small wind waves, sediment transport can be initiated. Work done on the bed integrated over an entire tidal cycle by prevailing wind waves is greatest on the elevated and flatter slopes of the study site, where waves shoal over a wider surf zone and water depths remain shallow enough for wave-orbital currents to disturb the bed. The study also provided physical descriptors quantifying static and hydrodynamic (tidal and wave) factors which were used in companion studies on ecological spatial modelling of bivalve distributions and micro-scale sediment reworking and transport.

  17. Towards the identification of new physics through quark flavour violating processes.

    PubMed

    Buras, Andrzej J; Girrbach, Jennifer

    2014-08-01

    We outline a systematic strategy that should help in this decade to identify new physics (NP) beyond the standard model (SM) by means of quark flavour violating processes, and thereby extend the picture of short distance physics down to scales as short as 10⁻²⁰ m and even shorter distance scales corresponding to energies of 100 TeV. Rather than using all of the possible flavour-violating observables that will be measured in the coming years at the LHC, SuperKEKB and in Kaon physics dedicated experiments at CERN, J-PARC and Fermilab, we concentrate on those observables that are theoretically clean and very sensitive to NP. Assuming that the data on the selected observables will be very precise, we stress the importance of correlations between these observables as well as of future precise calculations of non-perturbative parameters by means of lattice QCD simulations with dynamical fermions. Our strategy consists of twelve steps, which we will discuss in detail while illustrating the possible outcomes with the help of the SM, models with constrained minimal flavour violation (CMFV), MFV at large and models with tree-level flavour changing neutral currents mediated by neutral gauge bosons and scalars. We will also briefly summarize the status of a number of concrete models. We propose DNA charts that exhibit correlations between flavour observables in different NP scenarios. Models with new left-handed and/or right-handed currents and non-MFV interactions can be distinguished transparently in this manner. We emphasize the important role of the stringent CMFV relations between various observables as standard candles of flavour physics. The pattern of deviations from these relations may help in identifying the correct NP scenario. The success of this program will be very much facilitated through direct signals of NP at the LHC, even if the LHC will not be able to probe the physics at scales shorter than 4 × 10⁻²⁰ m.
We also emphasize the importance of lepton flavour violation, electric dipole moments, and (g − 2) of the electron and the muon in these studies.

  18. Developing Local Scale, High Resolution, Data to Interface with Numerical Storm Models

    NASA Astrophysics Data System (ADS)

    Witkop, R.; Becker, A.; Stempel, P.

    2017-12-01

    High resolution, physical storm models that can rapidly predict storm surge, inundation, rainfall, wind velocity and wave height at the intra-facility scale for any storm affecting Rhode Island have been developed by researchers at the University of Rhode Island's (URI's) Graduate School of Oceanography (GSO) (Ginis et al., 2017). At the same time, URI's Marine Affairs Department has developed methods that incorporate individual geographic points into GSO's models and enable the models to accurately incorporate local scale, high resolution data (Stempel et al., 2017). This combination allows URI's storm models to predict any storm's impacts on individual Rhode Island facilities in near real time. The research presented here determines how a coastal Rhode Island town's critical facility managers (FMs) perceive their assets as being vulnerable to quantifiable hurricane-related forces at the individual facility scale and explores methods to elicit this information from FMs in a format usable for incorporation into URI's storm models.

  19. The Material Point Method and Simulation of Wave Propagation in Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Bardenhagen, S. G.; Greening, D. R.; Roessig, K. M.

    2004-07-01

    The mechanical response of polycrystalline materials, particularly under shock loading, is of significant interest in a variety of munitions and industrial applications. Homogeneous continuum models have been developed to describe material response, including Equation of State, strength, and reactive burn models. These models provide good estimates of bulk material response. However, there is little connection to underlying physics and, consequently, they cannot be applied far from their calibrated regime with confidence. Both explosives and metals have important structure at the (energetic or single crystal) grain scale. The anisotropic properties of the individual grains and the presence of interfaces result in the localization of energy during deformation. In explosives energy localization can lead to initiation under weak shock loading, and in metals to material ejecta under strong shock loading. To develop accurate, quantitative and predictive models it is imperative to develop a sound physical understanding of the grain-scale material response. Numerical simulations are performed to gain insight into grain-scale material response. The Generalized Interpolation Material Point Method family of numerical algorithms, selected for its robust treatment of large deformation problems and its convenient framework for implementing material interface models, is reviewed. A three-dimensional simulation of wave propagation through a granular material indicates the scale and complexity of a representative grain-scale computation. Verification and validation calculations on model bimaterial systems indicate the minimum numerical algorithm complexity required for accurate simulation of wave propagation across material interfaces and demonstrate the importance of interfacial decohesion. Preliminary results are presented which predict energy localization at the grain boundary in a metallic bicrystal.

  20. Land surface hydrology parameterization for atmospheric general circulation models including subgrid scale spatial variability

    NASA Technical Reports Server (NTRS)

    Entekhabi, D.; Eagleson, P. S.

    1989-01-01

    Parameterizations are developed for the representation of subgrid hydrologic processes in atmospheric general circulation models. Reasonable a priori probability density functions of the spatial variability of soil moisture and of precipitation are introduced. These are used in conjunction with the deterministic equations describing basic soil moisture physics to derive expressions for the hydrologic processes that include subgrid scale variation in parameters. The major model sensitivities to soil type and to climatic forcing are explored.
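
    A one-line example of why subgrid variability matters for such parameterizations: for a threshold process like infiltration-excess runoff, applying the nonlinearity to the grid-mean forcing gives a different answer than averaging the process over an assumed PDF. The numbers and the exponential rainfall PDF below are hypothetical illustrations, not the paper's distributions:

```python
import math

# Infiltration-excess runoff R(P) = max(P - I, 0) with capacity I.
# Rainfall within the grid cell is assumed exponentially distributed
# around the grid-mean rate; all numbers are hypothetical.
mean_rain = 5.0   # grid-mean rainfall rate (mm/h)
capacity = 8.0    # infiltration capacity I (mm/h)

# Closed form of E[max(P - I, 0)] for P ~ Exponential(mean = mean_rain):
runoff_subgrid = mean_rain * math.exp(-capacity / mean_rain)

# The "lumped" estimate, applying the nonlinearity to the mean rainfall:
runoff_lumped = max(mean_rain - capacity, 0.0)
```

    The lumped estimate is exactly zero (the mean rainfall never exceeds the capacity), while the subgrid expectation is about 1 mm/h: the nonlinearity makes the grid-scale flux depend on the whole PDF, which is what these parameterizations are built to capture.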

  1. On unified modeling, theory, and method for solving multi-scale global optimization problems

    NASA Astrophysics Data System (ADS)

    Gao, David Yang

    2016-10-01

    A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.

  2. Hierarchy problem and BSM physics

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Gautam

    2017-10-01

    The `hierarchy problem' plagues the Standard Model of particle physics. The source of this problem is our inability to answer the following question: Why is the Higgs mass so much below the GUT or Planck scale? A brief description about how `supersymmetry' and `composite Higgs' address this problem is given here.

  3. Astronomy Demonstrations and Models.

    ERIC Educational Resources Information Center

    Eckroth, Charles A.

    Demonstrations in astronomy classes seem to be more necessary than in physics classes for three reasons. First, many of the events are very large scale and impossibly remote from human senses. Secondly, while physics courses use discussions of one- and two-dimensional motion, three-dimensional motion is the normal situation in astronomy; thus,…

  4. Precision measurement of the weak charge of the proton.

    PubMed

    2018-05-01

    Large experimental programmes in the fields of nuclear and particle physics search for evidence of physics beyond that explained by current theories. The observation of the Higgs boson completed the set of particles predicted by the standard model, which currently provides the best description of fundamental particles and forces. However, this theory's limitations include a failure to predict fundamental parameters, such as the mass of the Higgs boson, and the inability to account for dark matter and energy, gravity, and the matter-antimatter asymmetry in the Universe, among other phenomena. These limitations have inspired searches for physics beyond the standard model in the post-Higgs era through the direct production of additional particles at high-energy accelerators, which have so far been unsuccessful. Examples include searches for supersymmetric particles, which connect bosons (integer-spin particles) with fermions (half-integer-spin particles), and for leptoquarks, which mix the fundamental quarks with leptons. Alternatively, indirect searches using precise measurements of well predicted standard-model observables allow highly targeted alternative tests for physics beyond the standard model because they can reach mass and energy scales beyond those directly accessible by today's high-energy accelerators. Such an indirect search aims to determine the weak charge of the proton, which defines the strength of the proton's interaction with other particles via the well known neutral electroweak force. Because parity symmetry (invariance under the spatial inversion (x, y, z) → (-x, -y, -z)) is violated only in the weak interaction, it provides a tool with which to isolate the weak interaction and thus to measure the proton's weak charge [1].
Here we report the value 0.0719 ± 0.0045, where the uncertainty is one standard deviation, derived from our measured parity-violating asymmetry in the scattering of polarized electrons on protons, which is -226.5 ± 9.3 parts per billion (the uncertainty is one standard deviation). Our value for the proton's weak charge is in excellent agreement with the standard model [2] and sets multi-teraelectronvolt-scale constraints on any semi-leptonic parity-violating physics not described within the standard model. Our results show that precision parity-violating measurements enable searches for physics beyond the standard model that can compete with direct searches at high-energy accelerators and, together with astronomical observations, can provide fertile approaches to probing higher mass scales.

  5. Airframe Noise Prediction of a Full Aircraft in Model and Full Scale Using a Lattice Boltzmann Approach

    NASA Technical Reports Server (NTRS)

    Fares, Ehab; Duda, Benjamin; Khorrami, Mehdi R.

    2016-01-01

    Unsteady flow computations are presented for a Gulfstream aircraft model in landing configuration, i.e., flap deflected 39deg and main landing gear deployed. The simulations employ the lattice Boltzmann solver PowerFLOW(Trademark) to simultaneously capture the flow physics and acoustics in the near field. Sound propagation to the far field is obtained using a Ffowcs Williams and Hawkings acoustic analogy approach. Two geometry representations of the same aircraft are analyzed: an 18% scale, high-fidelity, semi-span model at wind tunnel Reynolds number and a full-scale, full-span model at half-flight Reynolds number. Previously published and newly generated model-scale results are presented; all full-scale data are disclosed here for the first time. Reynolds number and geometrical fidelity effects are carefully examined to discern aerodynamic and aeroacoustic trends with a special focus on the scaling of surface pressure fluctuations and farfield noise. An additional study of the effects of geometrical detail on farfield noise is also documented. The present investigation reveals that, overall, the model-scale and full-scale aeroacoustic results compare rather well. Nevertheless, the study also highlights that finer geometrical details that are typically not captured at model scales can have a non-negligible contribution to the farfield noise signature.

  6. Scale effect challenges in urban hydrology highlighted with a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire

    2018-01-01

    Hydrological models are extensively used in urban water management, development and evaluation of future scenarios and research activities. There is a growing interest in the development of fully distributed and grid-based models. However, some complex questions related to scale effects are not yet fully understood and still remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within distributed data input into urban hydrological models. Then an intensive multi-scale modelling work is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 to 5 m. Results clearly exhibit scale effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data. Patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, and model numerical instabilities as well as the computation time requirements. The main findings of this paper enable a replacement of traditional methods of model calibration by innovative methods of model resolution alteration based on the spatial data variability and scaling of flows in urban hydrology.
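
    The fractal (box-counting) analysis used in the first step can be sketched generically: cover a binary pattern with boxes of decreasing size and read the fractal dimension off the slope of log N(s) versus log(1/s). Below, this is applied to a Sierpinski carpet — a synthetic stand-in for the distributed geophysical data, chosen because its dimension log 8 / log 3 ≈ 1.893 is known exactly:

```python
import numpy as np

def sierpinski_carpet(level):
    """Binary image of the level-n Sierpinski carpet (size 3^n x 3^n):
    a pixel is removed if any base-3 digit pair of its coordinates is (1, 1)."""
    size = 3 ** level
    idx = np.arange(size)
    img = np.ones((size, size), dtype=bool)
    for k in range(level):
        digit = (idx // 3 ** k) % 3
        hole = (digit == 1)
        img &= ~(hole[:, None] & hole[None, :])
    return img

def box_count_dimension(img, exponents):
    """Box-counting dimension from box sizes s = 3^k, k in `exponents`."""
    sizes = [3 ** k for k in exponents]
    counts = []
    for s in sizes:
        n = img.shape[0] // s
        blocks = img.reshape(n, s, n, s).any(axis=(1, 3))
        counts.append(blocks.sum())  # N(s): occupied boxes of size s
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

carpet = sierpinski_carpet(5)
dim = box_count_dimension(carpet, exponents=[0, 1, 2, 3, 4])
```

    For real rasterized land-use or sewer-network data the same routine is run on the binary mask at each model resolution; scale dependence shows up as the counts departing from a single straight line, which is how the paper identifies its three ranges of scales.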

  7. Multipoint Green's functions in 1 + 1 dimensional integrable quantum field theories

    DOE PAGES

    Babujian, H. M.; Karowski, M.; Tsvelik, A. M.

    2017-02-14

    We calculate the multipoint Green functions in 1+1 dimensional integrable quantum field theories. We use the crossing formula for general models and calculate the 3- and 4-point functions, taking into account only the lowest nontrivial intermediate-state contributions. We then apply the general results to the examples of the scaling Z_2 Ising model, the sinh-Gordon model and the Z_3 scaling Potts model, and demonstrate these calculations explicitly. The results can be applied to physical phenomena such as Raman scattering.

  8. Scaling Limit for a Generalization of the Nelson Model and its Application to Nuclear Physics

    NASA Astrophysics Data System (ADS)

    Suzuki, Akito

    We study a mathematically rigorous derivation of a quantum mechanical Hamiltonian in a general framework. We derive such a Hamiltonian by taking a scaling limit for a generalization of the Nelson model, which is an abstract interaction model between particles and a Bose field with some internal degrees of freedom. Applying it to a model for the field of the nuclear force with isospins, we obtain a Schrödinger Hamiltonian with a matrix-valued potential, the one pion exchange potential, describing an effective interaction between nucleons.

  9. A Coupled fcGCM-GCE Modeling System: A 3D Cloud Resolving Model and a Regional Scale Model

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2005-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional scale model, WRF.
In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), (3) a discussion of the Goddard WRF version (its developments and applications), and (4) the characteristics of the four-dimensional cloud data sets (or cloud library) stored at Goddard.

  10. Examination of Studying Approaches of Students at School of Physical Education and Sports in Terms of Different Variables

    ERIC Educational Resources Information Center

    Dereceli, Cagatay

    2017-01-01

    This study aims to examine the studying approaches of students at the school of physical education and sports according to various variables. The data of the study, conducted with the general scanning model, were collected from 478 students in the 2016-2017 teaching year. The Studying Approaches Scale was used to collect data. Besides…

  11. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    NASA Astrophysics Data System (ADS)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. 
At the same time, the peak run-off increases more markedly, suggesting that precipitation intensities during the investigated landslide-triggering rainfall events were already close to or above the soil's infiltration capacity.
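
    The local one-at-a-time (OAT) sensitivity step can be sketched generically: perturb each parameter in turn around a baseline, hold the others fixed, and rank parameters by the normalized response. The toy model below is the textbook infinite-slope factor of safety, standing in for the paper's coupled hydrological-geomechanical model; all parameter values are hypothetical:

```python
import math

# Toy stand-in for the coupled slope-stability model: the textbook
# infinite-slope factor of safety.  All parameter values are hypothetical.
def factor_of_safety(cohesion, friction_angle_deg, unit_weight, depth,
                     slope_deg, pore_pressure):
    beta = math.radians(slope_deg)
    phi = math.radians(friction_angle_deg)
    normal_stress = unit_weight * depth * math.cos(beta) ** 2 - pore_pressure
    driving_stress = unit_weight * depth * math.sin(beta) * math.cos(beta)
    return (cohesion + normal_stress * math.tan(phi)) / driving_stress

baseline = dict(cohesion=5.0, friction_angle_deg=30.0, unit_weight=19.0,
                depth=1.5, slope_deg=35.0, pore_pressure=5.0)
fos0 = factor_of_safety(**baseline)

# Local one-at-a-time (OAT) analysis: perturb each parameter by +/-10%
# while holding the others at baseline, and normalize the response.
sensitivity = {}
for name, value in baseline.items():
    up = factor_of_safety(**{**baseline, name: 1.1 * value})
    down = factor_of_safety(**{**baseline, name: 0.9 * value})
    sensitivity[name] = ((up - down) / fos0) / 0.2

ranked = sorted(sensitivity, key=lambda k: abs(sensitivity[k]), reverse=True)
```

    Parameters at the top of `ranked` are the ones worth calibrating systematically; the study applies the same logic to identify hydraulic conductivity, specific storage, friction angle and cohesion as its calibration parameters.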

  12. Can compactifications solve the cosmological constant problem?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertzberg, Mark P.; Center for Theoretical Physics, Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139; Masoumi, Ali

    2016-06-30

    Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant Λ is much smaller than the Planck density and in fact accumulates at Λ=0. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain Λ that is small in Planck units in a toy model, but to explain why Λ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To this end, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.

  13. Micro-CT Pore Scale Study Of Flow In Porous Media: Effect Of Voxel Resolution

    NASA Astrophysics Data System (ADS)

    Shah, S.; Gray, F.; Crawshaw, J.; Boek, E.

    2014-12-01

    In the last few years, pore scale studies have become the key to understanding the complex fluid flow processes in the fields of groundwater remediation, hydrocarbon recovery and environmental issues related to carbon storage and capture. A pore scale study often comprises two key procedures: 3D pore scale imaging and numerical modelling techniques. The essence of a pore scale study is to test the physics implemented in a model of complicated fluid flow processes at one scale (microscopic) and then apply the model to solve the problems associated with water resources and oil recovery at other scales (macroscopic and field). However, the process of up-scaling from the pore scale to the macroscopic scale has encountered many challenges due to both pore scale imaging and modelling techniques. Owing to technical limitations in the imaging method, there is always a compromise between the spatial (voxel) resolution and the physical volume of the sample (field of view, FOV) to be scanned by the imaging methods, specifically X-ray micro-CT (XMT) in our case. In this study, a careful analysis was carried out to understand the effect of voxel size, using XMT to image the 3D pore space of a variety of porous media from sandstones to carbonates scanned at different voxel resolutions (4.5 μm, 6.2 μm, 8.3 μm and 10.2 μm) while keeping the scanned FOV constant for all the samples. We systematically segment the micro-CT images into three phases, the macro-pore phase, an intermediate phase (unresolved micro-pores + grains) and the grain phase, and then study the effect of voxel size on the structure of the macro-pore and intermediate phases and on the fluid flow properties using lattice-Boltzmann (LB) and pore network (PN) modelling methods. We have also applied a numerical coarsening algorithm (up-scaling method) to reduce the computational power and time required to accurately predict the flow properties using the LB and PN methods.
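    The three-phase segmentation described above (macro-pores, an unresolved intermediate phase, grains) can be illustrated as a simple two-threshold greyscale classification. This is only a sketch of the general idea; the array, function name, and threshold values below are hypothetical and are not taken from the study.

    ```python
    import numpy as np

    def segment_three_phase(image, t_pore, t_grain):
        """Classify voxels by greyscale value into macro-pore (0),
        intermediate/unresolved micro-porosity (1) and grain (2)."""
        labels = np.full(image.shape, 1, dtype=np.uint8)  # intermediate by default
        labels[np.asarray(image) < t_pore] = 0    # dark voxels: resolved macro-pores
        labels[np.asarray(image) >= t_grain] = 2  # bright voxels: solid grains
        return labels

    # Hypothetical 8-bit micro-CT sub-volume and thresholds
    rng = np.random.default_rng(0)
    volume = rng.integers(0, 256, size=(4, 4, 4))
    phases = segment_three_phase(volume, t_pore=80, t_grain=180)
    macro_porosity = np.mean(phases == 0)  # fraction of resolved pore voxels
    ```

    In practice the thresholds would be chosen per sample (e.g. from the greyscale histogram), and the sensitivity of `macro_porosity` to voxel size is exactly what the resolution comparison probes.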

  14. Bianchi type-II String Cosmological Model with Magnetic Field in Scale-Covariant Theory of Gravitation

    NASA Astrophysics Data System (ADS)

    Sharma, N. K.; Singh, J. K.

    2014-12-01

    The spatially homogeneous and totally anisotropic Bianchi type-II cosmological solutions of massive strings have been investigated in the presence of a magnetic field in the framework of the scale-covariant theory of gravitation formulated by Canuto et al. (Phys. Rev. Lett. 39, 429, 1977). With the help of the special law of variation for Hubble's parameter proposed by Berman (Nuovo Cimento 74, 182, 1983), a string cosmological model is obtained in this theory. We use the power law relation between the scalar field ϕ and the scale factor R to find the solutions. Some physical and kinematical properties of the model are also discussed.

  15. Decoupling the influence of biological and physical processes on the dissolved oxygen in the Chesapeake Bay

    NASA Astrophysics Data System (ADS)

    Du, Jiabi; Shen, Jian

    2015-01-01

    It is instructive and essential to decouple the effects of biological and physical processes on the dissolved oxygen condition, in order to understand their contributions to the interannual variability of hypoxia in Chesapeake Bay since the 1980s. A conceptual bottom DO budget model is applied, using the vertical exchange time scale (VET) to quantify the physical condition and the net oxygen consumption rate to quantify biological activities. By combining observed DO data and modeled VET values along the main stem of the Chesapeake Bay, the monthly net bottom DO consumption rate was estimated for 1985-2012. The DO budget model results show that the interannual variation of physical conditions accounts for 88.8% of the interannual variation of observed DO. The high similarity between the VET spatial pattern and the observed DO suggests that physical processes play a key role in regulating the DO condition. Model results also show that the long-term VET has increased slightly in summer, but no statistically significant trend is found. Correlations among southerly wind strength, the North Atlantic Oscillation index, and VET demonstrate that the physical condition in the Chesapeake Bay is highly controlled by large-scale climate variation. The relationship is most significant during the summer, when the southerly wind dominates throughout the Chesapeake Bay. The seasonal pattern of the averaged net bottom DO consumption rate (B'20) along the main stem coincides with that of the chlorophyll-a concentration. A significant correlation between nutrient loading and B'20 suggests that the biological processes in April-May are most sensitive to the nutrient loading.

  16. A Coupled GCM-Cloud Resolving Modeling System, and a Regional Scale Model to Study Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2007-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a superparameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional scale model, WRF. In this talk, I will present: (1) A brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) The Goddard MMF and the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), and (3) A discussion of the Goddard WRF version (its developments and applications).

  17. A Coupled GCM-Cloud Resolving Modeling System, and A Regional Scale Model to Study Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2006-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional scale model, WRF. In this talk, I will present: (1) A brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) The Goddard MMF and the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), and (3) A discussion of the Goddard WRF version (its developments and applications).

  18. Experiments with data assimilation in comprehensive air quality models: Impacts on model predictions and observation requirements (Invited)

    NASA Astrophysics Data System (ADS)

    Mathur, R.

    2009-12-01

    Emerging regional scale atmospheric simulation models must address the increasing complexity arising from new model applications that treat multi-pollutant interactions. Sophisticated air quality modeling systems are needed to develop effective abatement strategies that focus on simultaneously controlling multiple criteria pollutants, as well as for use in providing short-term air quality forecasts. In recent years the application of such models has continuously been extended to address atmospheric pollution phenomena from local to hemispheric spatial scales over time scales ranging from episodic to annual. The need to represent interactions between physical and chemical atmospheric processes occurring at these disparate spatial and temporal scales requires the use of observation data beyond traditional in-situ networks so that the model simulations can be reasonably constrained. Preliminary applications of assimilation of remote sensing and aloft observations within a comprehensive regional scale atmospheric chemistry-transport modeling system will be presented: (1) A methodology is developed to assimilate MODIS aerosol optical depths in the model to represent the impacts of long-range transport associated with the summer 2004 Alaskan fires on surface-level regional fine particulate matter (PM2.5) concentrations across the Eastern U.S. The episodic impact of this pollution transport event on PM2.5 concentrations over the eastern U.S. during mid-July 2004 is quantified through the complementary use of the model with remotely-sensed, aloft, and surface measurements; (2) Simple nudging experiments with limited aloft measurements are performed to identify uncertainties in model representations of physical processes and assess the potential use of such measurements in improving the predictive capability of atmospheric chemistry-transport models. 
    The results from these early applications will be discussed in the context of uncertainties in the model and in the remote sensing data, and of the needs for defining a future optimum observing strategy.

  19. Patterns of soil community structure differ by scale and ecosystem type along a large-scale precipitation gradient

    USDA-ARS?s Scientific Manuscript database

    Climate models predict increased variability in precipitation regimes, which will likely increase frequency/duration of drought. Reductions in soil moisture affect physical and chemical characteristics of the soil habitat and can influence soil organisms such as mites and nematodes. These organisms ...

  20. Connecting LHC signals with deep physics at the TeV scale and baryogenesis

    NASA Astrophysics Data System (ADS)

    Shu, Jing

    We address in this dissertation two primary questions aimed at deciphering collider signals at the Large Hadron Collider (LHC) to give a deep and concrete understanding of TeV scale physics and to interpret the origin of the baryon asymmetry in our universe. We are at a stage of exploring new physics at the terascale which is responsible for the electroweak symmetry breaking (EWSB) in the Standard Model (SM) of particle physics. The LHC, which begins its operation this year, will break into this new energy frontier and search for possible signals of new physics. Theorists have come up with many possible models beyond the SM to explain the origin of EWSB. However, how we will determine the underlying physics from LHC data is still an open question. In the first part of this dissertation, we consider several examples to connect the expected LHC signals to the underlying physics in a completely model independent way. We first explore the Randall-Sundrum (RS) scenario, and use the collider signals of the first Kaluza-Klein (KK) excitations of gluons to discriminate several commonly considered theories which attempt to render RS consistent with precision electroweak data. We then investigate top compositeness. We derive a bound for the energy scale of right handed top compositeness from top pair production at the Tevatron, and we find that the cross section to produce four tops will be amplified by 3 orders of magnitude. We next consider the possibilities that the gauge symmetry in the underlying theory is violated in the incomplete theory that we can reconstruct from the LHC observables. We derive a model independent bound on the scale of new physics from unitarity of the S-matrix if we observe a new massive vector boson with nonzero axial couplings to fermions at the LHC. Finally, we derive a generalized Landau-Yang theorem and apply it to the Z' decay into two Z bosons. 
    We show that there is a phase shift in the azimuthal angle distribution in the normalized differential cross section, and that the anomalous Z'-Z-Z coupling can be discriminated from the regular one at the 3σ level when both Z bosons decay leptonically at the LHC. The origin of the baryon asymmetry of the Universe (BAU) remains an important, unsolved problem for particle physics and cosmology, and is one of the motivations to search for possible new physics beyond the SM. In the second part of this dissertation, we attempt to account for the baryon number generation in our universe through some novel mechanisms. We first systematically investigate models of baryogenesis from a spontaneously Lorentz violating background (SLVB). We find that the sphaleron transitions will generate a nonzero B+L asymmetry in the presence of SLVB and we identify two scenarios of interest. We then consider the possibilities to generate a baryon asymmetry through an earlier time phase transition and address the question whether or not we can still test the baryogenesis mechanism at the LHC/ILC if the electroweak phase transition is not strongly first order. We find a general framework and realize this idea in the top flavor model. We show that the realistic baryon density can be achieved in the natural parameter space of the top flavor model.

  1. Impact of the time scale of model sensitivity response on coupled model parameter estimation

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu

    2017-11-01

    That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of the associated physics and the characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can differ, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with the observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observations. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.

  2. Integrating SMOS brightness temperatures with a new conceptual spatially distributed hydrological model for improving flood and drought predictions at large scale.

    NASA Astrophysics Data System (ADS)

    Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick

    2017-04-01

    Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding the vulnerability to extreme hydrological events and for providing early warnings. This can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements can be used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains, and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborne sensors enabling the large scale monitoring of water resources. Besides these advances, there is currently a tendency to refine and further complicate physically-based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational efforts increase significantly as a result. A novel thematic science question to be investigated is therefore whether a flexible conceptual model can match the performance of a complex physically-based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational effort, this model enables early warnings for large areas. 
    Forced with the public ERA-Interim dataset and coupled with the CMEM radiative transfer model, SUPERFLEX is capable of predicting runoff, soil moisture, and SMOS-like brightness temperature time series. Such a model is traditionally calibrated using only discharge measurements. In this study we designed a multi-objective calibration procedure based on both discharge measurements and SMOS-derived brightness temperature observations in order to evaluate the added value of remotely sensed soil moisture data in the calibration process. As a test case we set up the SUPERFLEX model for the large scale Murray-Darling catchment in Australia (~1 million km2). When compared to in situ soil moisture time series, model predictions show good agreement, with correlation coefficients exceeding 70 % and Root Mean Squared Errors below 1 %. When benchmarked against the physically based land surface model CLM, SUPERFLEX exhibits similar performance levels. By adapting the runoff routing function within the SUPERFLEX model, the predicted discharge achieves a Nash-Sutcliffe Efficiency exceeding 0.7 over both the calibration and the validation periods.
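    The discharge skill score quoted above follows a standard definition; as a minimal sketch, the Nash-Sutcliffe Efficiency can be computed as below. The discharge series here are hypothetical, not data from the study.

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
        1.0 is a perfect fit; values above ~0.7 are usually considered good."""
        observed = np.asarray(observed, dtype=float)
        simulated = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
            (observed - observed.mean()) ** 2
        )

    # Hypothetical observed and simulated discharge (m^3/s)
    obs = np.array([10.0, 12.0, 15.0, 11.0, 9.0])
    sim = np.array([10.5, 11.5, 14.0, 11.5, 9.5])
    nse = nash_sutcliffe(obs, sim)
    ```

    A multi-objective calibration like the one described would combine such a discharge score with a brightness-temperature misfit term in a single objective or a Pareto search.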

  3. Models of chromatin spatial organisation in the cell nucleus

    NASA Astrophysics Data System (ADS)

    Nicodemi, Mario

    2014-03-01

    In the cell nucleus chromosomes have a complex architecture serving vital functional purposes. Recent experiments have started unveiling the interaction map of DNA sites genome-wide, revealing different levels of organisation at different scales. The principles, though, which orchestrate such a complex 3D structure remain mysterious. I will give an overview of the scenario emerging from classical polymer physics models of the general aspects of chromatin spatial organisation. The available experimental data, which can be rationalised in a single framework, support a picture where chromatin is a complex mixture of differently folded regions, self-organised across spatial scales according to basic physical mechanisms. I will also discuss applications to specific DNA loci, e.g. the HoxB locus, where models informed with biological details, and tested against targeted experiments, can help identify the determinants of folding.

  4. Constraining the physical structure of the inner few 100 AU scales of deeply embedded low-mass protostars

    NASA Astrophysics Data System (ADS)

    Persson, M. V.; Harsono, D.; Tobin, J. J.; van Dishoeck, E. F.; Jørgensen, J. K.; Murillo, N.; Lai, S.-P.

    2016-05-01

    Context. The physical structure of deeply embedded low-mass protostars (Class 0) on scales of less than 300 AU is still poorly constrained. While molecular line observations demonstrate the presence of disks with Keplerian rotation toward a handful of sources, others show no hint of rotation. Determining the structure on small scales (a few 100 AU) is crucial for understanding the physical and chemical evolution from cores to disks. Aims: We determine the presence and characteristics of compact, disk-like structures in deeply embedded low-mass protostars. A related goal is investigating how the derived structure affects the determination of gas-phase molecular abundances on hot-core scales. Methods: Two models of the emission, a Gaussian disk intensity distribution and a parametrized power-law disk model, are fitted to subarcsecond resolution interferometric continuum observations of five Class 0 sources, including one source with a confirmed Keplerian disk. Prior to fitting the models to the de-projected real visibilities, the estimated envelope from an independent model and any companion sources are subtracted. For reference, a spherically symmetric single power-law envelope is fitted to the larger scale emission (~1000 AU) and investigated further for one of the sources on smaller scales. Results: The radii of the fitted disk-like structures range from ~90-170 AU, and the derived masses depend on the method. Using the Gaussian disk model results in masses of 54-556 × 10^-3 M⊙, and using the power-law disk model gives 9-140 × 10^-3 M⊙. While the disk radii agree with previous estimates, the masses are different for some of the sources studied. Assuming a typical temperature distribution (r^-0.5), the fractional amount of mass in the disk above 100 K varies from 7% to 30%. 
    Conclusions: A thin disk model can approximate the emission and physical structure in the inner few 100 AU scales of the studied deeply embedded low-mass protostars and paves the way for analysis of a larger sample with ALMA. Kinematic data are needed to determine the presence of any Keplerian disk. Using previous observations of p-H2^18O, we estimate the gas-phase water abundances relative to total warm H2 to be 6.2 × 10^-5 (IRAS 2A), 0.33 × 10^-5 (IRAS 4A-NW), 1.8 × 10^-7 (IRAS 4B), and < 2 × 10^-7 (IRAS 4A-SE), roughly an order of magnitude higher than previously inferred when both warm and cold H2 were used as reference. A spherically symmetric single power-law envelope model fails to simultaneously reproduce both the small- and large-scale emission. Based on observations carried out with the IRAM Plateau de Bure Interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). Continuum data for the sources are available through http://dx.doi.org/10.5281/zenodo.47642 and at CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/590/A33

  5. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    NASA Astrophysics Data System (ADS)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach are demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. 
This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and iii) process-level evaluation at climate time-scales. The advantages and disadvantages of each approach will be identified and discussed, and some thoughts about possible future developments will be given.

  6. The Social Physique Anxiety Scale: construct validity in adolescent females.

    PubMed

    McAuley, E; Burman, G

    1993-09-01

    Hart, Leary, and Rejeski have developed the Social Physique Anxiety Scale (SPA), a measure of the anxiety experienced in response to having one's physique evaluated by other people. The present study cross-validated the psychometric properties of this measure in a sample (N = 236) of adolescent competitive female gymnasts. Employing structural equation modeling, the proposed unidimensional factor structure of the SPA was supported, although some questions regarding the robustness of the fit are raised. Construct validity was demonstrated by significant inverse relationships between aspects of physical efficacy (perceived physical ability and physical self-presentation confidence) and degree of social physique anxiety. These findings are discussed in terms of possible alternative factor structures and integration of social anxiety and other psychosocial constructs to better understand physical activity behavior.

  7. Empirical Scaling Laws of Rocket Exhaust Cratering

    NASA Technical Reports Server (NTRS)

    Donahue, Carly M.; Metzger, Philip T.; Immer, Christopher D.

    2005-01-01

    When launching or landing a spacecraft on the regolith of a terrestrial surface, special attention needs to be paid to the rocket exhaust cratering effects. If the effects are not controlled, the rocket cratering could damage the spacecraft or other surrounding hardware. The cratering effects of a rocket landing on a planet's surface are not well understood, especially for the lunar case with the plume expanding in vacuum. As a result, the blast effects cannot be estimated sufficiently using analytical theories. It is necessary to develop physics-based simulation tools in order to calculate mission-essential parameters. In this work we test the scaling laws of the physics governing the growth rate of the crater depth. This will provide the physical insight necessary to begin the physics-based modeling.
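    Empirical scaling laws of this kind are typically extracted as power-law exponents from a log-log fit of crater depth versus time. The sketch below illustrates the fitting procedure only; the synthetic depth history and the assumed exponent are purely illustrative, not results from this work.

    ```python
    import numpy as np

    def power_law_exponent(t, depth):
        """Fit depth = a * t**b by linear regression in log-log space;
        return the exponent b and the prefactor a."""
        b, log_a = np.polyfit(np.log(t), np.log(depth), 1)  # slope, intercept
        return b, np.exp(log_a)

    # Synthetic crater-depth history following depth ~ t^0.5 (illustrative only)
    t = np.linspace(0.1, 10.0, 50)       # time since plume impingement
    depth = 2.0 * t ** 0.5               # crater depth
    b, a = power_law_exponent(t, depth)  # recovers b ≈ 0.5, a ≈ 2.0
    ```

    With measured depth histories at several thrust levels or gravities, comparing the fitted exponents is one way to test whether a proposed scaling law holds.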

  8. Emerging concepts for management of river ecosystems and challenges to applied integration of physical and biological sciences in the Pacific Northwest, USA

    USGS Publications Warehouse

    Rieman, Bruce; Dunham, Jason B.; Clayton, James

    2006-01-01

    Integration of biological and physical concepts is necessary to understand and conserve the ecological integrity of river systems. Past attempts at integration have often focused at relatively small scales and on mechanistic models that may not capture the complexity of natural systems, leaving substantial uncertainty about ecological responses to management actions. Two solutions have been proposed to guide management in the face of that uncertainty: the use of “natural variability” in key environmental patterns, processes, or disturbance as a reference; and the retention of some areas as essentially unmanaged reserves to conserve and represent as much biological diversity as possible. Both concepts are scale dependent because the dominant processes or patterns that might be referenced will change with scale. Context and linkages across scales may be as important in structuring biological systems as conditions within habitats used by individual organisms. Both ideas view the physical environment as a template for the expression, maintenance, and evolution of ecological diversity. To conserve or restore a diverse physical template it will be important to recognize the ecologically important differences in physical characteristics and processes among streams or watersheds that we might attempt to mimic in management or represent in conservation or restoration reserves.

  9. Multi-scale Modeling of Power Plant Plume Emissions and Comparisons with Observations

    NASA Astrophysics Data System (ADS)

    Costigan, K. R.; Lee, S.; Reisner, J.; Dubey, M. K.; Love, S. P.; Henderson, B. G.; Chylek, P.

    2011-12-01

    The Remote Sensing Verification Project (RSVP) test-bed located in the Four Corners region of Arizona, Utah, Colorado, and New Mexico offers a unique opportunity to develop new approaches for estimating emissions of CO2. Two major power plants located in this area produce very large signals of co-emitted CO2 and NO2 in this rural region. In addition to the Environmental Protection Agency (EPA) maintaining Continuous Emissions Monitoring Systems (CEMS) on each of the power plant stacks, the RSVP program has deployed an array of in-situ and remote sensing instruments, which provide both point and integrated measurements. To aid in the synthesis and interpretation of the measurements, a multi-scale atmospheric modeling approach is implemented, using two atmospheric numerical models: the Weather Research and Forecasting Model with chemistry (WRF-Chem; Grell et al., 2005) and the HIGRAD model (Reisner et al., 2003). The high fidelity HIGRAD model incorporates a multi-phase Lagrangian particle based approach to track individual chemical species of stack plumes at ultra-high resolution, using an adaptive mesh. It is particularly suited to model buoyancy effects and entrainment processes at the edges of the power plant plumes. WRF-Chem is a community model that has been applied to a number of air quality problems and offers several physical and chemical schemes that can be used to model the transport and chemical transformation of the anthropogenic plumes out of the local region. Multiple nested grids employed in this study allow the model to incorporate atmospheric variability ranging from synoptic scales to micro-scales (~200 m), while including locally developed flows influenced by the nearby complex terrain of the San Juan Mountains. The simulated local atmospheric dynamics are provided to force the HIGRAD model, which links mesoscale atmospheric variability to the small-scale simulation of the power plant plumes. 
We will discuss how these two models are applied and integrated for the study and we will include the incorporation of the real-time CEMS measurements for input into the models. We will compare the model simulations to the RSVP in-situ, column, and satellite measurements for selected periods. More information on the RSVP Fourier Transform Spectrometer (FTS) measurements can be found at https://tccon-wiki.caltech.edu/Sites/Four_Corners . Grell, G.A., S.E. Peckham, R. Schmitz, S.A. McKeen, G. Frost, W.C. Skamarock and B. Eder, 2005: Fully coupled online chemistry within the WRF model. Atmos. Environ., 39, 6957-6975. Reisner, J., A. Wyszogrodzki, V. Mousseau, and D. Knoll, 2003: An efficient physics-based preconditioner of the fully implicit solution of small-scale thermally driven atmospheric flows. J Comput. Physics., 189, 30-44.
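The one-way mesoscale-to-microscale forcing described above can be illustrated with a toy interpolation step. This is our own minimal sketch, not the actual WRF-Chem/HIGRAD interface; the grid spacings and wind profile are invented for illustration:

```python
import numpy as np

# Hypothetical 1-D illustration of one-way mesoscale-to-microscale coupling:
# winds on a coarse (mesoscale) grid are interpolated onto a fine grid to
# supply forcing, loosely analogous to driving HIGRAD with WRF-Chem output.
coarse_dx = 10_000.0                      # mesoscale spacing (m)
fine_dx = 200.0                           # microscale spacing (m), ~inner-grid scale
x_coarse = np.arange(0.0, 50_000.0 + coarse_dx, coarse_dx)
u_coarse = 5.0 + 2.0 * np.sin(2 * np.pi * x_coarse / 50_000.0)  # synthetic wind (m/s)

x_fine = np.arange(0.0, 50_000.0 + fine_dx, fine_dx)
u_fine = np.interp(x_fine, x_coarse, u_coarse)  # linear interpolation as forcing

# The fine-grid forcing reproduces the coarse values at shared grid points.
shared = np.isin(x_fine, x_coarse)
print(np.allclose(u_fine[shared], u_coarse))  # True
```

In practice the coupling is three-dimensional and time-dependent, but the same principle applies: the fine-scale model inherits its boundary and initial state from the mesoscale fields.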

  10. Psychosocial predictors of physical activity and health-related quality of life among adults with physical disabilities: an integrative framework.

    PubMed

    Kosma, Maria; Ellis, Rebecca; Cardinal, Bradley J; Bauer, Jeremy J; McCubbin, Jeffrey A

    2009-04-01

    People with disabilities report lower physical activity (PA) and health-related quality of life (HRQOL) levels than people without disabilities. Therefore, it is important to identify factors that motivate individuals with disabilities to be physically active and thus increase their HRQOL. The purpose of the study was to prospectively explore the effects of past theory of planned behavior (TPB) constructs on future (6-month) HRQOL (physical and mental health) through past stages of change (SOC) and future (6-month) PA among adults with physical disabilities. Two models were tested whereby the SOC and PA served as the mediators between the TPB constructs, physical health (PH-Model), and mental health (MH-Model). It was hypothesized that both models would fit the sample data. Participants were 141 adults with physical disabilities (mean age = 46.04, females = 70.9%). The online survey was completed at two different time periods. First, the TPB constructs and SOC were assessed using self-report standardized questionnaires. Six months later, participants completed standardized self-report scales about their PA and HRQOL levels. Using LISREL 8, two path analyses were conducted to examine the two study models (PH-Model and MH-Model). Based on the two path analyses, attitude had the highest effect on SOC followed by perceived behavioral control within both well-fit models. The PH-Model explained more variance in PA (26%) and physical health (55%) than the MH-Model. Health promoters should reinforce both positive intentions and behavioral experiences to increase PA and HRQOL among adults with physical disabilities.
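The mediation logic tested here (attitude influencing stage of change, which in turn influences physical activity) can be sketched with ordinary least squares on synthetic data. This is not the LISREL path analysis the authors used; the path coefficients and noise levels below are invented for illustration:

```python
import numpy as np

# Synthetic-data sketch of a simple mediation chain:
# attitude -> stage of change (SOC) -> physical activity (PA).
rng = np.random.default_rng(0)
n = 141                                   # matches the study's sample size
attitude = rng.normal(size=n)
soc = 0.6 * attitude + rng.normal(scale=0.5, size=n)   # path a (assumed 0.6)
pa = 0.5 * soc + rng.normal(scale=0.5, size=n)          # path b (assumed 0.5)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(attitude, soc)                  # estimated attitude -> SOC effect
b = slope(soc, pa)                        # estimated SOC -> PA effect
indirect = a * b                          # indirect (mediated) effect of attitude on PA
```

A full path analysis estimates all paths simultaneously and tests model fit, but the product-of-coefficients idea above is the core of the mediation claim.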

  11. Quality of Life and Symptoms in Long-term Survivors of Colorectal Cancer: Results from NSABP Protocol LTS-01

    PubMed Central

    Kunitake, Hiroko; Russell, Marcia M.; Zheng, Ping; Yothers, Greg; Land, Stephanie R.; Petersen, Laura; Fehrenbacher, Louis; Giguere, Jeffery K.; Wickerham, D. Lawrence; Ko, Clifford Y.; Ganz, Patricia A.

    2016-01-01

Purpose Little is known about health-related quality of life (HRQL) in long-term survivors (LTS) of colorectal cancer (CRC). Methods Long-term CRC survivors (≥ 5 years) treated in previous National Surgical Adjuvant Breast and Bowel Project trials were recruited from 60 sites. After obtaining consent, a telephone survey was administered, which included HRQL instruments to measure physical health (Instrumental Activities of Daily Living [IADL], SF-12 Physical Component Scale [PCS], SF-36 Vitality Scale), mental health (SF-12 Mental Component Scale [MCS], Life Orientation Test, and Impact of Cancer), and clinical symptoms (Fatigue Symptom Inventory [FSI], European Organisation for Research and Treatment of Cancer Colorectal Module [EORTC-CR38], and Brief Pain Inventory). A multivariable model identified predictors of overall quality of life (global health rating). Results Participants (N=708) had significantly higher HRQL compared with age group-matched non-cancer controls, with higher mean scores on SF-12 PCS (49.5 vs. 43.7, p < 0.05), MCS (55.6 vs. 52.1, p < 0.05), and SF-36 Vitality scale (67.1 vs. 59.9, p < 0.05). Multivariable modeling demonstrated that better overall physical and mental health (PCS and MCS), positive body image (EORTC-CR38 scale), and less fatigue (FSI) were strongly associated with overall quality of life as measured by the global health rating. Interestingly, ability to perform IADLs, experience of cancer, gastrointestinal complaints, and pain were not important predictors. Conclusions In long-term CRC survivors, overall physical and mental health were excellent compared with the general population. Other disease-related symptoms did not detract from good overall health. PMID:27562475

  12. Natural inflation with pseudo Nambu-Goldstone bosons

    NASA Technical Reports Server (NTRS)

    Freese, Katherine; Frieman, Joshua A.; Olinto, Angela V.

    1990-01-01

It is shown that a pseudo-Nambu-Goldstone boson with a suitable potential can naturally give rise to an epoch of inflation in the early universe. Mass scales that arise in particle physics models, in which a gauge group becomes strongly interacting at some scale, are shown to satisfy the conditions for successful inflation. The density fluctuation spectrum is not scale-invariant, with extra power on large length scales.
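The potential referred to here has the standard "natural inflation" form (reconstructed from the general literature on this class of models; the notation is ours):

```latex
V(\phi) = \Lambda^4 \left[ 1 + \cos\!\left( \frac{\phi}{f} \right) \right],
```

where $f$ is the scale of spontaneous symmetry breaking and $\Lambda$ the scale at which the gauge group becomes strongly interacting. Slow-roll inflation with an acceptable density-fluctuation amplitude requires $f$ near the Planck scale and $\Lambda$ near the GUT scale.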

  13. 3D Printing of Molecular Models

    ERIC Educational Resources Information Center

    Gardner, Adam; Olson, Arthur

    2016-01-01

    Physical molecular models have played a valuable role in our understanding of the invisible nano-scale world. We discuss 3D printing and its use in producing models of the molecules of life. Complex biomolecular models, produced from 3D printed parts, can demonstrate characteristics of molecular structure and function, such as viral self-assembly,…

  14. Lidar change detection using building models

    NASA Astrophysics Data System (ADS)

    Kim, Angela M.; Runyon, Scott C.; Jalobeanu, Andre; Esterline, Chelsea H.; Kruse, Fred A.

    2014-06-01

Terrestrial LiDAR scans of building models collected with a FARO Focus3D and a RIEGL VZ-400 were used to investigate point-to-point and model-to-model LiDAR change detection. LiDAR data were scaled, decimated, and georegistered to mimic real world airborne collects. Two physical building models were used to explore various aspects of the change detection process. The first model was a 1:250-scale representation of the Naval Postgraduate School campus in Monterey, CA, constructed from Lego blocks and scanned in a laboratory setting using both the FARO and RIEGL. The second model at 1:8-scale consisted of large cardboard boxes placed outdoors and scanned from rooftops of adjacent buildings using the RIEGL. A point-to-point change detection scheme was applied directly to the point-cloud datasets. In the model-to-model change detection scheme, changes were detected by comparing Digital Surface Models (DSMs). The use of physical models allowed analysis of effects of changes in scanner and scanning geometry, and performance of the change detection methods on different types of changes, including building collapse or subsidence, construction, and shifts in location. Results indicate that at low false-alarm rates, the point-to-point method slightly outperforms the model-to-model method. The point-to-point method is less sensitive to misregistration errors in the data. Best results are obtained when the baseline and change datasets are collected using the same LiDAR system and collection geometry.
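The two schemes can be contrasted on toy data. This is our own minimal sketch, not the authors' pipeline: a flat synthetic scene in which one block of the "after" collect is raised by 2 m, mimicking construction:

```python
import numpy as np

# Synthetic before/after point clouds over a 10 m x 10 m scene.
rng = np.random.default_rng(1)

def make_cloud(raised):
    x, y = rng.uniform(0.0, 10.0, size=(2, 800))
    z = rng.normal(0.0, 0.02, size=800)                  # flat ground + sensor noise
    if raised:
        z = z + np.where((x > 4.0) & (x < 6.0), 2.0, 0.0)  # raised block
    return np.column_stack([x, y, z])

before, after = make_cloud(False), make_cloud(True)

# Point-to-point: distance from each "after" point to its nearest "before" point.
d = np.sqrt(((after[:, None, :] - before[None, :, :]) ** 2).sum(-1)).min(axis=1)
p2p_changed = float((d > 0.5).mean())     # fraction of points flagged as changed

# Model-to-model: rasterize each cloud to a DSM (max z per cell) and difference.
def dsm(cloud, nbins=20):
    grid = np.zeros((nbins, nbins))
    ix = np.clip((cloud[:, 0] / 10.0 * nbins).astype(int), 0, nbins - 1)
    iy = np.clip((cloud[:, 1] / 10.0 * nbins).astype(int), 0, nbins - 1)
    np.maximum.at(grid, (ix, iy), cloud[:, 2])
    return grid

m2m_changed = float((np.abs(dsm(after) - dsm(before)) > 0.5).mean())
```

Both statistics flag the raised block; the DSM route trades per-point sensitivity for robustness to point density, which is the trade-off the study quantifies with real scans.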

  15. Aligning physical elements with persons' attitude: an approach using Rasch measurement theory

    NASA Astrophysics Data System (ADS)

    Camargo, F. R.; Henson, B.

    2013-09-01

Affective engineering uses mathematical models to convert information about persons' attitudes toward physical elements into an ergonomic design. However, many applications in the domain have not met measurement assumptions. This paper proposes a novel approach based on Rasch measurement theory to overcome this problem. The research demonstrates that if data fit the model, further variables can be added to the scale. An empirical study was designed to determine the range of compliance within which consumers could form an impression of a moisturizing cream when touching some product containers. Persons, variables, and stimulus objects were parameterised independently on a linear continuum. The results showed that a calibrated scale preserves comparability even when further variables are incorporated.
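The dichotomous Rasch model underlying this approach has a simple closed form. The sketch below uses invented person and item parameters, not the study's calibration, to show the "linear continuum" property: every item orders the persons identically, which is why a calibrated scale can absorb further variables without losing comparability:

```python
import numpy as np

# Dichotomous Rasch model: P(person n endorses item i)
#   = exp(theta_n - b_i) / (1 + exp(theta_n - b_i)),
# with persons (theta) and items (b) on the same logit continuum.
theta = np.array([-1.0, 0.0, 1.5])        # person attitude measures (logits), assumed
b = np.array([-0.5, 0.3, 1.0, 2.0])       # item/stimulus difficulties (logits), assumed

P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

# Invariance check: each item induces the same ordering of persons.
orders = [tuple(np.argsort(P[:, i])) for i in range(len(b))]
print(all(o == orders[0] for o in orders))  # True
```

Fitting real data additionally requires estimating theta and b jointly (e.g., by conditional or joint maximum likelihood) and testing item fit, which is where the "if data fit the model" condition enters.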

  16. Cosmological evolution of the Higgs boson's vacuum expectation value

    NASA Astrophysics Data System (ADS)

    Calmet, Xavier

    2017-11-01

We point out that the expansion of the universe leads to a cosmological time evolution of the vacuum expectation value of the Higgs boson. Within the standard model of particle physics, the cosmological time evolution of the Higgs vacuum expectation value leads to a cosmological time evolution of the masses of the fermions and of the electroweak gauge bosons, while the scale of Quantum Chromodynamics (QCD) remains constant. Precise measurements of the cosmological time evolution of μ = m_e/m_p, where m_e and m_p are, respectively, the electron and proton mass (the latter essentially determined by the QCD scale), therefore provide a test of the standard models of particle physics and of cosmology. This ratio can be measured using modern atomic clocks.
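The argument can be summarized in one line (our reconstruction, using the standard relation between the electron Yukawa coupling and the Higgs vacuum expectation value):

```latex
m_e(t) = \frac{y_e}{\sqrt{2}}\, v(t), \qquad m_p \simeq \mathrm{const}
\quad\Longrightarrow\quad
\frac{\dot{\mu}}{\mu} = \frac{\dot{m}_e}{m_e} - \frac{\dot{m}_p}{m_p} \simeq \frac{\dot{v}}{v},
```

so an atomic-clock bound on $\dot{\mu}/\mu$ translates directly into a bound on the cosmological drift of $v$.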

  17. An Illumination- and Temperature-Dependent Analytical Model for Copper Indium Gallium Diselenide (CIGS) Solar Cells

    DOE PAGES

    Sun, Xingshu; Silverman, Timothy; Garris, Rebekah; ...

    2016-07-18

In this study, we present a physics-based analytical model for copper indium gallium diselenide (CIGS) solar cells that describes the illumination- and temperature-dependent current-voltage (I-V) characteristics and accounts for the statistical shunt variation of each cell. The model is derived by solving the drift-diffusion transport equation so that its parameters are physical and, therefore, can be obtained from independent characterization experiments. The model is validated against CIGS I-V characteristics as a function of temperature and illumination intensity. This physics-based model can be integrated into a large-scale simulation framework to optimize the performance of solar modules, as well as predict the long-term output yields of photovoltaic farms under different environmental conditions.
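For intuition, a generic one-diode approximation already captures the illumination and temperature dependence of an I-V curve. The paper's model is derived from drift-diffusion transport and is more detailed; the functional form and every parameter value below are illustrative assumptions, not the authors':

```python
import numpy as np

q, kB = 1.602e-19, 1.381e-23              # electron charge (C), Boltzmann constant (J/K)

def cell_current(V, T=298.0, G=1000.0):
    """Current density (A/cm^2) at voltage V (V), temperature T (K), irradiance G (W/m^2)."""
    JL = 0.035 * (G / 1000.0)             # photocurrent scales with illumination
    Eg = 1.15 * q                         # nominal CIGS band gap (J), assumed
    # Saturation current grows with temperature (standard one-diode form).
    J0 = 1e-9 * (T / 298.0) ** 3 * np.exp((Eg / kB) * (1.0 / 298.0 - 1.0 / T))
    n, Rsh = 1.5, 500.0                   # ideality factor, shunt resistance (ohm cm^2)
    return JL - J0 * np.expm1(q * V / (n * kB * T)) - V / Rsh

V = np.linspace(0.0, 0.7, 200)
J = cell_current(V)
Voc = V[np.argmin(np.abs(J))]             # approximate open-circuit voltage
```

The shunt term `V / Rsh` is the piece the paper treats statistically, since shunt resistance varies strongly from cell to cell.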

  18. Modeling process-structure-property relationships for additive manufacturing

    NASA Astrophysics Data System (ADS)

    Yan, Wentao; Lin, Stephen; Kafka, Orion L.; Yu, Cheng; Liu, Zeliang; Lian, Yanping; Wolff, Sarah; Cao, Jian; Wagner, Gregory J.; Liu, Wing Kam

    2018-02-01

This paper presents our latest work on comprehensive modeling of process-structure-property relationships for additive manufacturing (AM) materials, including using data-mining techniques to close the cycle of design-predict-optimize. To illustrate the process-structure relationship, the multi-scale, multi-physics process modeling starts from the micro-scale, to establish a mechanistic heat source model, proceeds to meso-scale models of individual powder particle evolution, and finally to a macro-scale model that simulates the fabrication process of a complex product. To link structure and properties, a high-efficiency mechanistic model, self-consistent clustering analysis, is developed to capture a variety of material responses. The model incorporates factors such as voids, phase composition, inclusions, and grain structures, which are the differentiating features of AM metals. Furthermore, we propose data-mining as an effective solution for novel rapid design and optimization, which is motivated by the numerous influencing factors in the AM process. We believe this paper will provide a roadmap to advance AM fundamental understanding and guide the monitoring and advanced diagnostics of AM processing.

  19. The island coalescence problem: Scaling of reconnection in extended fluid models including higher-order moments

    DOE PAGES

    Ng, Jonathan; Huang, Yi -Min; Hakim, Ammar; ...

    2015-11-05

As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large-scale particle-in-cell simulations of island coalescence have shown that the time-averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two-fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.

  20. RF Models for Plasma-Surface Interactions in VSim

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Smithe, D. N.; Pankin, A. Y.; Roark, C. M.; Zhou, C. D.; Stoltz, P. H.; Kruger, S. E.

    2014-10-01

    An overview of ongoing enhancements to the Plasma Discharge (PD) module of Tech-X's VSim software tool is presented. A sub-grid kinetic sheath model, developed for the accurate computation of sheath potentials near metal and dielectric-coated walls, enables the physical effects of DC and RF sheath physics to be included in macroscopic-scale plasma simulations that need not explicitly resolve sheath scale lengths. Sheath potential evolution, together with particle behavior near the sheath, can thus be simulated in complex geometries. Generalizations of the model to include sputtering, secondary electron emission, and effects from multiple ion species and background magnetic fields are summarized; related numerical results are also presented. In addition, improved tools for plasma chemistry and IEDF/EEDF visualization and modeling are discussed, as well as our initial efforts toward the development of hybrid fluid/kinetic transition capabilities within VSim. Ultimately, we aim to establish VSimPD as a robust, efficient computational tool for modeling industrial plasma processes. Supported by US DoE SBIR-I/II Award DE-SC0009501.
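The size of the sheath potential drop that such a sub-grid model must reproduce can be estimated from textbook flux balance. This is the generic floating-wall estimate, not VSim's kinetic sheath model; it assumes a hydrogen plasma and Bohm ion flux:

```python
import numpy as np

# A floating wall charges negative until electron and ion fluxes balance,
# giving a potential drop of a few electron temperatures.
me = 9.109e-31                            # electron mass (kg)
mi = 1.673e-27                            # proton mass (kg)

def floating_drop_in_Te(mi, me):
    """Sheath potential drop e*dV/Te from flux balance: 0.5 * ln(mi / (2*pi*me))."""
    return 0.5 * np.log(mi / (2.0 * np.pi * me))

drop = floating_drop_in_Te(mi, me)        # ~2.8 Te for hydrogen
```

Heavier ion species raise this drop logarithmically, one reason multi-species generalizations of the sheath model matter.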

  1. Impacts of subgrid-scale orography parameterization on simulated atmospheric fields over Korea using a high-resolution atmospheric forecast model

    NASA Astrophysics Data System (ADS)

    Lim, Kyo-Sun Sunny; Lim, Jong-Myoung; Shin, Hyeyum Hailey; Hong, Jinkyu; Ji, Young-Yong; Lee, Wanno

    2018-06-01

A substantial over-prediction bias at low-to-moderate wind speeds in the Weather Research and Forecasting (WRF) model has been reported in previous studies. Low-level wind fields play an important role in the dispersion of air pollutants, including radionuclides, in a high-resolution WRF framework. By implementing two subgrid-scale orography parameterizations (Jimenez and Dudhia in J Appl Meteorol Climatol 51:300-316, 2012; Mass and Ovens in WRF model physics: problems, solutions and a new paradigm for progress. Preprints, 2010 WRF Users' Workshop, NCAR, Boulder, Colo. http://www.mmm.ucar.edu/wrf/users/workshops/WS2010/presentations/session%204/4-1_WRFworkshop2010Final.pdf, 2010), we compared the performance of the parameterizations and sought to enhance the forecast skill for low-level wind fields over the central western part of South Korea. Even though both subgrid-scale orography parameterizations significantly alleviated the positive bias in 10-m wind speed, the parameterization of Jimenez and Dudhia showed better forecast skill for wind speed under our modeling configuration. Implementation of the subgrid-scale orography parameterizations did not affect the forecast skill for other meteorological fields, including 10-m wind direction. Our study also highlights a discrepancy in the definition of "10-m" wind between model physics parameterizations and observations, which can cause overestimated winds in model simulations. The overestimation was larger in stable conditions than in unstable conditions, indicating that the weak diurnal cycle in the model could be attributed to this representation error.

  2. Simulating adsorption of U(VI) under transient groundwater flow and hydrochemistry: Physical versus chemical nonequilibrium model

    USGS Publications Warehouse

    Greskowiak, J.; Hay, M.B.; Prommer, H.; Liu, C.; Post, V.E.A.; Ma, R.; Davis, J.A.; Zheng, C.; Zachara, J.M.

    2011-01-01

    Coupled intragrain diffusional mass transfer and nonlinear surface complexation processes play an important role in the transport behavior of U(VI) in contaminated aquifers. Two alternative model approaches for simulating these coupled processes were analyzed and compared: (1) the physical nonequilibrium approach that explicitly accounts for aqueous speciation and instantaneous surface complexation reactions in the intragrain regions and approximates the diffusive mass exchange between the immobile intragrain pore water and the advective pore water as multirate first-order mass transfer and (2) the chemical nonequilibrium approach that approximates the diffusion-limited intragrain surface complexation reactions by a set of multiple first-order surface complexation reaction kinetics, thereby eliminating the explicit treatment of aqueous speciation in the intragrain pore water. A model comparison has been carried out for column and field scale scenarios, representing the highly transient hydrological and geochemical conditions in the U(VI)-contaminated aquifer at the Hanford 300A site, Washington, USA. It was found that the response of U(VI) mass transfer behavior to hydrogeochemically induced changes in U(VI) adsorption strength was more pronounced in the physical than in the chemical nonequilibrium model. The magnitude of the differences in model behavior depended particularly on the degree of disequilibrium between the advective and immobile phase U(VI) concentrations. While a clear difference in U(VI) transport behavior between the two models was noticeable for the column-scale scenarios, only minor differences were found for the Hanford 300A field scale scenarios, where the model-generated disequilibrium conditions were less pronounced as a result of frequent groundwater flow reversals. Copyright 2011 by the American Geophysical Union.
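The mass-transfer closure at the heart of approach (1) is a first-order relaxation of the immobile intragrain concentration toward the mobile (advective) concentration. The sketch below is a single-rate, chemistry-free toy version (the paper uses a multirate spectrum of such terms coupled to surface complexation); the rate constant and time step are invented:

```python
# Explicit-Euler integration of dC_im/dt = alpha * (C_m - C_im)
# after a step change in the mobile concentration at t = 0.
alpha = 0.2                               # mass-transfer rate (1/h), assumed
dt, nsteps = 0.1, 600                     # 60 h of simulation
c_mobile = 1.0                            # mobile concentration, held fixed here
c_im = 0.0                                # immobile concentration, initially zero

history = []
for _ in range(nsteps):
    c_im += dt * alpha * (c_mobile - c_im)
    history.append(c_im)

# After many time constants (1/alpha = 5 h) the phases equilibrate.
print(abs(history[-1] - c_mobile) < 1e-3)  # True
```

The degree of disequilibrium, the gap between `c_im` and `c_mobile` at any instant, is exactly the quantity the paper identifies as controlling how differently the physical and chemical nonequilibrium models behave under transient flow.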

  3. Modeling the spreading of large-scale wildland fires

    Treesearch

    Mohamed Drissi

    2015-01-01

    The objective of the present study is twofold. First, the last developments and validation results of a hybrid model designed to simulate fire patterns in heterogeneous landscapes are presented. The model combines the features of a stochastic small-world network model with those of a deterministic semi-physical model of the interaction between burning and non-burning...

  4. Inflated Uncertainty in Multimodel-Based Regional Climate Projections.

    PubMed

    Madsen, Marianne Sloth; Langen, Peter L; Boberg, Fredrik; Christensen, Jens Hesselbjerg

    2017-11-28

    Multimodel ensembles are widely analyzed to estimate the range of future regional climate change projections. For an ensemble of climate models, the result is often portrayed by showing maps of the geographical distribution of the multimodel mean results and associated uncertainties represented by model spread at the grid point scale. Here we use a set of CMIP5 models to show that presenting statistics this way results in an overestimation of the projected range leading to physically implausible patterns of change on global but also on regional scales. We point out that similar inconsistencies occur in impact analyses relying on multimodel information extracted using statistics at the regional scale, for example, when a subset of CMIP models is selected to represent regional model spread. Consequently, the risk of unwanted impacts may be overestimated at larger scales as climate change impacts will never be realized as the worst (or best) case everywhere.
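The inflation mechanism is easy to demonstrate on synthetic data. In this toy sketch (ours, not the paper's CMIP5 analysis), each "model" produces the same spatial pattern with a random phase shift; taking the per-grid-point envelope across models then builds a field that no single model ever produces:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 2 * np.pi, 100)        # 1-D stand-in for a spatial domain
# 20 synthetic "models": identical amplitude, random spatial phase.
models = np.array([np.sin(x + rng.uniform(0, 2 * np.pi)) for _ in range(20)])

envelope = models.max(axis=0)             # per-grid-point maximum across models
largest_single = models.mean(axis=1).max()  # best single model, area-averaged

# The envelope's area average far exceeds what any one model attains,
# because maxima from different models are stitched together.
print(envelope.mean() > largest_single)   # True
```

This is the statistical core of the paper's warning: grid-point-wise spread statistics combine worst cases that never co-occur in any physically consistent simulation.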

  5. A White Paper on keV sterile neutrino Dark Matter

    DOE PAGES

    Adhikari, R.

    2017-01-13

Here, we present a comprehensive review of keV-scale sterile neutrino Dark Matter, collecting views and insights from all disciplines involved - cosmology, astrophysics, nuclear, and particle physics - in each case viewed from both theoretical and experimental/observational perspectives. After reviewing the role of active neutrinos in particle physics, astrophysics, and cosmology, we focus on sterile neutrinos in the context of the Dark Matter puzzle. First, we review the physics motivation for sterile neutrino Dark Matter, based on challenges and tensions in purely cold Dark Matter scenarios. We then round out the discussion by critically summarizing all known constraints on sterile neutrino Dark Matter arising from astrophysical observations, laboratory experiments, and theoretical considerations. In this context, we provide a balanced discourse on the possibly positive signal from X-ray observations. Another focus of the paper concerns the construction of particle physics models, aiming to explain how sterile neutrinos of keV-scale masses could arise in concrete settings beyond the Standard Model of elementary particle physics. Our paper ends with an extensive review of current and future astrophysical and laboratory searches, highlighting new ideas and their experimental challenges, as well as future perspectives for the discovery of sterile neutrinos.

  7. Modeling erosion and sedimentation coupled with hydrological and overland flow processes at the watershed scale

    NASA Astrophysics Data System (ADS)

    Kim, Jongho; Ivanov, Valeriy Y.; Katopodes, Nikolaos D.

    2013-09-01

    A novel two-dimensional, physically based model of soil erosion and sediment transport coupled to models of hydrological and overland flow processes has been developed. The Hairsine-Rose formulation of erosion and deposition processes is used to account for size-selective sediment transport and differentiate bed material into original and deposited soil layers. The formulation is integrated within the framework of the hydrologic and hydrodynamic model tRIBS-OFM, Triangulated irregular network-based, Real-time Integrated Basin Simulator-Overland Flow Model. The integrated model explicitly couples the hydrodynamic formulation with the advection-dominated transport equations for sediment of multiple particle sizes. To solve the system of equations including both the Saint-Venant and the Hairsine-Rose equations, the finite volume method is employed based on Roe's approximate Riemann solver on an unstructured grid. The formulation yields space-time dynamics of flow, erosion, and sediment transport at fine scale. The integrated model has been successfully verified with analytical solutions and empirical data for two benchmark cases. Sensitivity tests to grid resolution and the number of used particle sizes have been carried out. The model has been validated at the catchment scale for the Lucky Hills watershed located in southeastern Arizona, USA, using 10 events for which catchment-scale streamflow and sediment yield data were available. Since the model is based on physical laws and explicitly uses multiple types of watershed information, satisfactory results were obtained. The spatial output has been analyzed and the driving role of topography in erosion processes has been discussed. It is expected that the integrated formulation of the model has the promise to reduce uncertainties associated with typical parameterizations of flow and erosion processes. A potential for more credible modeling of earth-surface processes is thus anticipated.

  8. MICHIGAN SOIL VAPOR EXTRACTION REMEDIATION (MISER) MODEL: A COMPUTER PROGRAM TO MODEL SOIL VAPOR EXTRACTION AND BIOVENTING OF ORGANIC MATERIALS IN UNSATURATED GEOLOGICAL MATERIAL

    EPA Science Inventory

    This report describes the formulation, numerical development, and use of a multiphase, multicomponent, biodegradation model designed to simulate physical, chemical, and biological interactions occurring primarily in field scale soil vapor extraction (SVE) and bioventing (B...

  9. Multiscale modeling and simulation of brain blood flow

    NASA Astrophysics Data System (ADS)

    Perdikaris, Paris; Grinberg, Leopold; Karniadakis, George Em

    2016-02-01

    The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.

  10. Closing in on the large-scale CMB power asymmetry

    NASA Astrophysics Data System (ADS)

    Contreras, D.; Hutchinson, J.; Moss, A.; Scott, D.; Zibin, J. P.

    2018-03-01

    Measurements of the cosmic microwave background (CMB) temperature anisotropies have revealed a dipolar asymmetry in power at the largest scales, in apparent contradiction with the statistical isotropy of standard cosmological models. The significance of the effect is not very high, and is dependent on a posteriori choices. Nevertheless, a number of models have been proposed that produce a scale-dependent asymmetry. We confront several such models for a physical, position-space modulation with CMB temperature observations. We find that, while some models that maintain the standard isotropic power spectrum are allowed, others, such as those with modulated tensor or uncorrelated isocurvature modes, can be ruled out on the basis of the overproduction of isotropic power. This remains the case even when an extra isocurvature mode fully anticorrelated with the adiabatic perturbations is added to suppress power on large scales.
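The modulation models being constrained generally take the phenomenological position-space form (our summary of the standard parameterization, not an equation quoted from the paper):

```latex
\frac{\delta T}{T}(\hat{n}) =
\left[ 1 + A(k)\, \hat{p} \cdot \hat{n} \right]
\left. \frac{\delta T}{T} \right|_{\mathrm{iso}}(\hat{n}),
```

where $\hat{p}$ is the preferred direction and the amplitude $A$, of order a few per cent on the largest scales, may depend on scale $k$. The "overproduction of isotropic power" argument arises because a physical mechanism that generates $A$ (e.g., modulated tensor or isocurvature modes) typically also contributes to the isotropic spectrum, which is tightly constrained.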

  11. Characteristics of atmospheric circulation patterns associated with extreme temperatures over North America in observations and climate models

    NASA Astrophysics Data System (ADS)

    Loikith, Paul C.

    Motivated by a desire to understand the physical mechanisms involved in future anthropogenic changes in extreme temperature events, the key atmospheric circulation patterns associated with extreme daily temperatures over North America in the current climate are identified. Several novel metrics are used to systematically identify and describe these patterns for the entire continent. The orientation, physical characteristics, and spatial scale of these circulation patterns vary based on latitude, season, and proximity to important geographic features (i.e., mountains, coastlines). The anomaly patterns associated with extreme cold events tend to be similar to, but opposite in sign of, those associated with extreme warm events, especially within the westerlies, and tend to scale with temperature in the same locations. The influence of the Pacific North American (PNA) pattern, the Northern Annular Mode (NAM), and the El Niño-Southern Oscillation (ENSO) on extreme temperature days and months shows that associations between extreme temperatures and the PNA and NAM are stronger than associations with ENSO. In general, the association with extremes tends to be stronger on monthly than daily time scales. Extreme temperatures are associated with the PNA and NAM in locations typically influenced by these circulation patterns; however many extremes still occur on days when the amplitude and polarity of these patterns do not favor their occurrence. In winter, synoptic-scale, transient weather disturbances are important drivers of extreme temperature days; however these smaller-scale events are often concurrent with amplified PNA or NAM patterns. Associations are weaker in summer when other physical mechanisms affecting the surface energy balance, such as anomalous soil moisture content, are associated with extreme temperatures. 
Analysis of historical runs from seventeen climate models from the CMIP5 database suggests that most models simulate realistic circulation patterns associated with extreme temperature days in most places. Model-simulated patterns tend to resemble observed patterns better in the winter than the summer and at 500 hPa than at the surface. There is substantial variability among the suite of models analyzed and most models simulate circulation patterns more realistically away from influential features such as large bodies of water and complex topography.

  12. Role of religious involvement and spirituality in functioning among African Americans with cancer: testing a mediational model

    PubMed Central

    Holt, Cheryl L.; Wang, Min Qi; Caplan, Lee; Schulz, Emily; Blake, Victor; Southward, Vivian L.

    2013-01-01

    The present study tested a mediational model of the role of religious involvement, spirituality, and physical/emotional functioning in a sample of African American men and women with cancer. Several mediators were proposed based on theory and previous research, including sense of meaning, positive and negative affect, and positive and negative religious coping. One hundred patients were recruited through oncologist offices, key community leaders and community organizations, and interviewed by telephone. Participants completed an established measure of religious involvement, the Functional Assessment of Chronic Illness Therapy-Spiritual Well-Being Scale (FACIT-SP-12 version 4), the Positive and Negative Affect Schedule (PANAS), the Meaning in Life Scale, the Brief RCOPE, and the SF-12, which assesses physical and emotional functioning. Positive affect completely mediated the relationship between religious behaviors and emotional functioning. Though several other constructs showed relationships with study variables, evidence of mediation was not supported. Mediational models were not significant for the physical functioning outcome, nor were there significant main effects of religious involvement or spirituality for this outcome. Implications for cancer survivorship interventions are discussed. PMID:21222026

  13. The Standard Model: how far can it go and how can we tell?

    PubMed

    Butterworth, J M

    2016-08-28

    The Standard Model of particle physics encapsulates our current best understanding of physics at the smallest distances and highest energies. It incorporates quantum electrodynamics (the quantized version of Maxwell's electromagnetism) and the weak and strong interactions, and has survived unmodified for decades, save for the inclusion of non-zero neutrino masses after the observation of neutrino oscillations in the late 1990s. It describes a vast array of data over a wide range of energy scales. I review a selection of these successes, including the remarkably successful prediction of a new scalar boson, a qualitatively new kind of object observed in 2012 at the Large Hadron Collider. New calculational techniques and experimental advances challenge the Standard Model across an ever-wider range of phenomena, now extending significantly above the electroweak symmetry breaking scale. I will outline some of the consequences of these new challenges, and briefly discuss what is still to be found. This article is part of the themed issue 'Unifying physics and technology in light of Maxwell's equations'. © 2016 The Author(s).

  14. Standard Model—axion—seesaw—Higgs portal inflation. Five problems of particle physics and cosmology solved in one stroke

    NASA Astrophysics Data System (ADS)

    Ballesteros, Guillermo; Redondo, Javier; Ringwald, Andreas; Tamarit, Carlos

    2017-08-01

    We present a minimal extension of the Standard Model (SM) providing a consistent picture of particle physics from the electroweak scale to the Planck scale and of cosmology from inflation until today. Three right-handed neutrinos Ni, a new color triplet Q and a complex SM-singlet scalar σ, whose vacuum expectation value vσ ~ 1011 GeV breaks lepton number and a Peccei-Quinn symmetry simultaneously, are added to the SM. At low energies, the model reduces to the SM, augmented by seesaw generated neutrino masses and mixing, plus the axion. The latter solves the strong CP problem and accounts for the cold dark matter in the Universe. The inflaton is a mixture of σ and the SM Higgs, and reheating of the Universe after inflation proceeds via the Higgs portal. Baryogenesis occurs via thermal leptogenesis. Thus, five fundamental problems of particle physics and cosmology are solved at one stroke in this unified Standard Model—axion—seesaw—Higgs portal inflation (SMASH) model. It can be probed decisively by upcoming cosmic microwave background and axion dark matter experiments.

  15. Time scale variation of MgII resonance lines of HD 41335 in UV region

    NASA Astrophysics Data System (ADS)

    Nikolaou, I.

    2012-01-01

    It is known that hot emission stars (Be and Oe) present peculiar and very complex spectral line profiles. Because of these complex profiles, it is difficult to fit a classical distribution to them, so many physical parameters of the regions where these lines are created cannot be determined. In this paper, we study the ultraviolet (UV) MgII (λλ 2795.523, 2802.698 Å) resonance lines of the star HD 41335 at three different periods. Considering that these profiles consist of a number of independent Discrete or Satellite Absorption Components (DACs, SACs), we use the Gauss-Rotation model (GR model). From this analysis we estimate the values of a group of physical parameters, such as the apparent rotational and radial velocities, the random velocities of the thermal motions of the ions, as well as the Full Width at Half Maximum (FWHM), the column density and the absorbed energy of the independent regions of matter which produce the main and satellite components of the studied spectral lines. Finally, we calculate the time-scale variations of these physical parameters.

  16. Physical modeling in geomorphology: are boundary conditions necessary?

    NASA Astrophysics Data System (ADS)

    Cantelli, A.

    2012-12-01

    In physical experimental design in geomorphology, boundary conditions are key elements that determine the quality of the results and therefore the development of the study. For years engineers have modeled structures, such as dams and bridges, with high precision and excellent results. Until the last decade, a great part of the physical experimental work in geomorphology was developed with an engineer-like approach, requiring an accurate scaling analysis to determine inflow parameters and initial geometrical conditions. During the last decade, however, the way we approach physical experiments has changed significantly. In particular, boundary conditions and initial conditions are considered unknown factors that need to be discovered during the experiment. This new philosophy leads to a more demanding data acquisition process, but it relaxes the obligation to know the appropriate input and initial conditions a priori and provides the flexibility to discover them. Here I present some practical examples of this experimental approach in deepwater geomorphology, some open questions about the scaling of turbidity currents, and a new large experimental facility built at the Universidade Federal do Rio Grande do Sul, Brazil.

  17. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

    Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is summarized. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.

  18. A short German Physical-Self-Concept Questionnaire for elementary school children (PSCQ-C): Factorial validity and measurement invariance across gender.

    PubMed

    Lohbeck, Annette; Tietjens, Maike; Bund, Andreas

    2017-09-01

    Research on children's physical self-concept (PSC) is increasingly recognised as an important field of psychology. However, there is a lack of instruments suitable for younger children at elementary school age. In the present study, a short German 21-item Physical Self-Concept Questionnaire for children (PSCQ-C) was tested, measuring seven specific facets of elementary school children's PSC (strength, endurance, speed, flexibility, coordination, physical appearance, global sport competence). A total of 770 elementary school children aged 8-12 years completed the PSCQ-C. Results showed good psychometric properties and high reliabilities of the seven scales. Confirmatory factor analysis revealed that the presumed 7-factor model fitted the data better than a global 1- or 2-factor model. Full measurement invariance across gender was also established. Correlations among the seven scales were mainly moderate. Gender differences were suggestive of developmental trends consistent with prior studies. These results support the PSCQ-C as a reliable instrument with sound psychometric properties for measuring seven specific facets of elementary school children's PSC.

  19. The necessity of feedback physics in setting the peak of the initial mass function

    NASA Astrophysics Data System (ADS)

    Guszejnov, Dávid; Krumholz, Mark R.; Hopkins, Philip F.

    2016-05-01

    A popular theory of star formation is gravito-turbulent fragmentation, in which self-gravitating structures are created by turbulence-driven density fluctuations. Simple theories of isothermal fragmentation successfully reproduce the core mass function (CMF) which has a very similar shape to the initial mass function (IMF) of stars. However, numerical simulations of isothermal turbulent fragmentation thus far have not succeeded in identifying a fragment mass scale that is independent of the simulation resolution. Moreover, the fluid equations for magnetized, self-gravitating, isothermal turbulence are scale-free, and do not predict any characteristic mass. In this paper we show that, although an isothermal self-gravitating flow does produce a CMF with a mass scale imposed by the initial conditions, this scale changes as the parent cloud evolves. In addition, the cores that form undergo further fragmentation and after sufficient time forget about their initial conditions, yielding a scale-free pure power-law distribution dN/dM ∝ M^-2 for the stellar IMF. We show that this problem can be alleviated by introducing additional physics that provides a termination scale for the cascade. Our candidate for such physics is a simple model for stellar radiation feedback. Radiative heating, powered by accretion on to forming stars, arrests the fragmentation cascade and imposes a characteristic mass scale that is nearly independent of the time-evolution or initial conditions in the star-forming cloud, and that agrees well with the peak of the observed IMF. In contrast, models that introduce a stiff equation of state for denser clouds but that do not explicitly include the effects of feedback do not yield an invariant IMF.
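    The scale-free character of that pure power-law outcome can be made concrete with a short illustrative sketch (not taken from the paper): for dN/dM ∝ M^-2, every logarithmic mass interval contains the same total mass, so no mass scale is singled out as a peak.

```python
import math

def mass_in_interval(m1, m2, c=1.0):
    """Total stellar mass in [m1, m2] for the pure power law
    dN/dM = c * M**-2: the integral of M * (c * M**-2) dM
    evaluates to c * ln(m2 / m1)."""
    return c * math.log(m2 / m1)

# Every logarithmic decade of mass contains the same total mass,
# so the distribution has no characteristic (peak) scale.
decades = [mass_in_interval(10.0 ** k, 10.0 ** (k + 1)) for k in range(-2, 3)]
```

    Any peak in the IMF must therefore come from physics, such as the radiative feedback invoked above, that breaks this scale freedom.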

  20. AGN Accretion Physics in the Time Domain: Survey Cadences, Stochastic Analysis, and Physical Interpretations

    NASA Astrophysics Data System (ADS)

    Moreno, Jackeline; Vogeley, Michael S.; Richards, Gordon; O'Brien, John T.; Kasliwal, Vishal

    2018-01-01

    We present rigorous testing of survey cadences (K2, SDSS, CRTS, and Pan-STARRS) for quasar variability science using a magnetohydrodynamics synthetic lightcurve and the canonical lightcurve from Kepler, Zw 229-15. We explain the state of the art with regard to physical interpretations of stochastic models (CARMA) applied to AGN variability. Quasar variability offers a time-domain approach to probing accretion physics at the SMBH scale. Evidence shows that the strongest amplitude changes in the brightness of AGN occur on long timescales, ranging from months to hundreds of days. These global behaviors can be constrained by survey data despite low sampling resolution. CARMA processes provide a flexible family of models used to interpolate between data points, predict future observations, and describe behaviors in a lightcurve. This is accomplished by decomposing a signal into rise and decay timescales, frequencies for cyclic behavior, and shock amplitudes. Characteristic timescales may point to length scales over which a physical process operates, such as turbulent eddies, warping, or hotspots due to local thermal instabilities. We present the distribution in CARMA parameter space of SDSS Stripe 82 quasars that pass our cadence tests, and also explain how the Damped Harmonic Oscillator model, CARMA(2,1), reduces to the Damped Random Walk, CARMA(1,0), in a specific region of the parameter space given the data.
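    The simplest such stochastic process can be simulated directly. The sketch below (illustrative only, not the authors' pipeline; the parameter values are hypothetical) generates a CARMA(1,0) damped-random-walk lightcurve using the exact discrete-time update of an Ornstein-Uhlenbeck process:

```python
import math
import random

def simulate_drw(n, dt, tau, sigma, mean=0.0, seed=0):
    """Simulate a damped random walk, i.e. CARMA(1,0) / an
    Ornstein-Uhlenbeck process, via its exact discrete-time update:
    tau is the decay timescale, sigma the asymptotic standard
    deviation of the fluctuations about `mean`."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)  # decay factor over one time step
    x = mean
    samples = []
    for _ in range(n):
        x = mean + a * (x - mean) + sigma * math.sqrt(1.0 - a * a) * rng.gauss(0.0, 1.0)
        samples.append(x)
    return samples

lightcurve = simulate_drw(n=20000, dt=1.0, tau=5.0, sigma=0.2, seed=1)
```

    The simulated series decorrelates on the timescale tau, which is the kind of characteristic timescale the CARMA fits described above attempt to recover from survey data.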

  1. Two Complementary Strategies for New Physics Searches at Lepton Colliders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooberman, Benjamin Henry

    In this thesis I present two complementary strategies for probing beyond-the-Standard-Model physics using data collected in e+e- collisions at lepton colliders. One strategy involves searching for effects at low energy mediated by new particles at the TeV mass scale, at which new physics is expected to manifest. Several new physics scenarios, including Supersymmetry and models with leptoquarks or compositeness, may lead to observable rates for charged lepton-flavor violating processes, which are forbidden in the Standard Model. I present a search for lepton-flavor violating decays of the Υ(3S) using data collected with the BABAR detector. This study establishes the 90% confidence level upper limits BF(Υ(3S) → eτ) < 5.0 × 10^-6 and BF(Υ(3S) → μτ) < 4.1 × 10^-6, which are used to place constraints on new physics contributing to lepton-flavor violation at the TeV mass scale. An alternative strategy is to increase the collision energy above the threshold for new particles and produce them directly. I discuss research and development efforts aimed at producing a vertex tracker that achieves the physics performance required of a high-energy lepton collider. A small-scale vertex tracker prototype is constructed using silicon sensors of 50 μm thickness and tested using charged particle beams. This tracker achieves the targeted impact parameter resolution of σ_IP = 5 ⊕ 10 GeV/p_T, as well as a longitudinal vertex resolution of (260 ± 10) μm, which is consistent with the requirements of a TeV-scale lepton collider. This detector research and development effort must be motivated and directed by simulation studies of physics processes. Investigation of a dark matter-motivated Supersymmetry scenario is presented, in which the dark matter is composed of Supersymmetric neutralinos. In this scenario, studies of the e+e- → H0A0 production process allow for precise measurements of the properties of the A0 Supersymmetric Higgs boson, which improve the achievable precision on the neutralino dark matter candidate relic density to 8%. Comparison between this quantity and the dark matter density determined from cosmological observations will further our understanding of dark matter by allowing us to determine if it is of Supersymmetric origin.

  2. Preparing for Exascale: Towards convection-permitting, global atmospheric simulations with the Model for Prediction Across Scales (MPAS)

    NASA Astrophysics Data System (ADS)

    Heinzeller, Dominikus; Duda, Michael G.; Kunstmann, Harald

    2017-04-01

    With strong financial and political support from national and international initiatives, exascale computing is projected for the end of this decade. Energy requirements and physical limitations imply the use of accelerators and scaling out to orders of magnitude more cores than today to achieve this milestone. In order to fully exploit the capabilities of these exascale computing systems, existing applications need to undergo significant development. The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric core, an ocean core, a land-ice core and a sea-ice core. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address shortcomings of global models on regular grids and of limited-area models nested in a forcing data set, with respect to parallel scalability, numerical accuracy and physical consistency. Here, we present work towards the application of the atmospheric core (MPAS-A) on current and future high performance computing systems for problems at extreme scale. In particular, we address the issue of massively parallel I/O by extending the model to support the highly scalable SIONlib library. Using global uniform meshes with a convection-permitting resolution of 2-3 km, we demonstrate the ability of MPAS-A to scale out to half a million cores while maintaining a high parallel efficiency. We also demonstrate the potential benefit of a hybrid parallelisation of the code (MPI/OpenMP) on the latest generation of Intel's Many Integrated Core Architecture, the Intel Xeon Phi Knights Landing.

  3. Dark matter self-interactions and small scale structure

    NASA Astrophysics Data System (ADS)

    Tulin, Sean; Yu, Hai-Bo

    2018-02-01

    We review theories of dark matter (DM) beyond the collisionless paradigm, known as self-interacting dark matter (SIDM), and their observable implications for astrophysical structure in the Universe. Self-interactions are motivated, in part, due to the potential to explain long-standing (and more recent) small scale structure observations that are in tension with collisionless cold DM (CDM) predictions. Simple particle physics models for SIDM can provide a universal explanation for these observations across a wide range of mass scales spanning dwarf galaxies, low and high surface brightness spiral galaxies, and clusters of galaxies. At the same time, SIDM leaves intact the success of ΛCDM cosmology on large scales. This report covers the following topics: (1) small scale structure issues, including the core-cusp problem, the diversity problem for rotation curves, the missing satellites problem, and the too-big-to-fail problem, as well as recent progress in hydrodynamical simulations of galaxy formation; (2) N-body simulations for SIDM, including implications for density profiles, halo shapes, substructure, and the interplay between baryons and self-interactions; (3) semi-analytic Jeans-based methods that provide a complementary approach for connecting particle models with observations; (4) merging systems, such as cluster mergers (e.g., the Bullet Cluster) and minor infalls, along with recent simulation results for mergers; (5) particle physics models, including light mediator models and composite DM models; and (6) complementary probes for SIDM, including indirect and direct detection experiments, particle collider searches, and cosmological observations. We provide a summary and critical look for all current constraints on DM self-interactions and an outline for future directions.

  4. Measuring everyday functional competence using the Rasch assessment of everyday activity limitations (REAL) item bank.

    PubMed

    Oude Voshaar, Martijn A H; Ten Klooster, Peter M; Vonkeman, Harald E; van de Laar, Mart A F J

    2017-11-01

    Traditional patient-reported physical function instruments often poorly differentiate patients with mild-to-moderate disability. We describe the development and psychometric evaluation of a generic item bank for measuring everyday activity limitations in outpatient populations. Seventy-two items generated from patient interviews and mapped to the International Classification of Functioning, Disability and Health (ICF) domestic life chapter were administered to 1128 adults representative of the Dutch population. The partial credit model was fitted to the item responses and evaluated with respect to its assumptions, model fit, and differential item functioning (DIF). Measurement performance of a computerized adaptive testing (CAT) algorithm was compared with the SF-36 physical functioning scale (PF-10). A final bank of 41 items was developed. All items demonstrated acceptable fit to the partial credit model and measurement invariance across age, sex, and educational level. Five- and ten-item CAT simulations were shown to have high measurement precision, which exceeded that of SF-36 physical functioning scale across the physical function continuum. Floor effects were absent for a 10-item empirical CAT simulation, and ceiling effects were low (13.5%) compared with SF-36 physical functioning (38.1%). CAT also discriminated better than SF-36 physical functioning between age groups, number of chronic conditions, and respondents with or without rheumatic conditions. The Rasch assessment of everyday activity limitations (REAL) item bank will hopefully prove a useful instrument for assessing everyday activity limitations. T-scores obtained using derived measures can be used to benchmark physical function outcomes against the general Dutch adult population.
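    The item-selection step of such a CAT can be sketched as follows (simplified here to a dichotomous Rasch model rather than the partial credit model actually fitted; the difficulty values used below are hypothetical): each step administers the remaining item with the largest Fisher information at the current ability estimate.

```python
import math

def rasch_p(theta, b):
    """Dichotomous Rasch model: probability that a respondent with
    ability theta endorses an item with difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item: p * (1 - p), maximal when
    the item difficulty matches the ability estimate."""
    p = rasch_p(theta, b)
    return p * (1.0 - p)

def next_item(theta, remaining):
    """CAT item selection: administer the remaining item that is most
    informative at the current ability estimate theta."""
    return max(remaining, key=lambda b: item_information(theta, b))
```

    Because each administered item is targeted at the running ability estimate, a 5- or 10-item CAT can match or exceed the precision of a fixed short form, which is the behavior reported for the REAL bank above.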

  5. Shift scheduling model considering workload and worker’s preference for security department

    NASA Astrophysics Data System (ADS)

    Herawati, A.; Yuniartha, D. R.; Purnama, I. L. I.; Dewi, LT

    2018-04-01

    A security department operates 24 hours a day and, as in the hotel industry, applies shift scheduling to organize its workers. This research develops a shift scheduling model that considers the workers' physical workload, measured with Borg's rating of perceived exertion (RPE) scale, and workers' preferences, to accommodate schedule flexibility. The model is formulated as an integer linear program and yields optimal solutions for simple problem instances. The resulting shift schedule distributes shifts equally among workers to balance physical workload and gives workers flexibility in arranging their working hours.
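    The balancing idea can be sketched as follows (a greedy heuristic for illustration, not the paper's integer linear program; the RPE values and preference handling are hypothetical):

```python
# Hypothetical Borg RPE (rating of perceived exertion) per shift type;
# these values are illustrative, not taken from the paper.
SHIFT_RPE = {"morning": 11, "evening": 13, "night": 15}

def schedule(workers, days, prefers):
    """Greedy workload balancing: each day, hand out shifts from the
    most to the least strenuous, giving each shift to the worker with
    the lowest accumulated RPE; ties favor the worker's preference."""
    load = {w: 0 for w in workers}
    plan = {}
    for day in range(days):
        free = list(workers)
        for shift, rpe in sorted(SHIFT_RPE.items(), key=lambda kv: -kv[1]):
            w = min(free, key=lambda cand: (load[cand], prefers.get(cand) != shift))
            plan[(day, shift)] = w
            load[w] += rpe
            free.remove(w)
    return plan, load
```

    In a small instance with three workers over six days, this rule equalizes every worker's accumulated RPE while still honoring a stated shift preference whenever loads are tied.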

  6. A Novel Multiscale Physics Based Progressive Failure Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Waas, Anthony M.; Bednarcyk, Brett A.; Collier, Craig S.; Yarrington, Phillip W.

    2008-01-01

    A variable fidelity, multiscale, physics based finite element procedure for predicting progressive damage and failure of laminated continuous fiber reinforced composites is introduced. At every integration point in a finite element model, progressive damage is accounted for at the lamina-level using thermodynamically based Schapery Theory. Separate failure criteria are applied at either the global-scale or the microscale in two different FEM models. A micromechanics model, the Generalized Method of Cells, is used to evaluate failure criteria at the micro-level. The stress-strain behavior and observed failure mechanisms are compared with experimental results for both models.

  7. An Introduction to Magnetospheric Physics by Means of Simple Models

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1981-01-01

    The large scale structure and behavior of the Earth's magnetosphere is discussed. The model is suitable for inclusion in courses on space physics, plasmas, astrophysics or the Earth's environment, as well as for self-study. Nine quantitative problems, dealing with properties of linear superpositions of a dipole and a constant field are presented. Topics covered include: open and closed models of the magnetosphere; field line motion; the role of magnetic merging (reconnection); magnetospheric convection; and the origin of the magnetopause, polar cusps, and high latitude lobes.

  8. Benford's law gives better scaling exponents in phase transitions of quantum XY models.

    PubMed

    Rane, Ameya Deepak; Mishra, Utkarsh; Biswas, Anindya; Sen De, Aditi; Sen, Ujjwal

    2014-08-01

    Benford's law is an empirical law predicting the distribution of the first significant digits of numbers obtained from natural phenomena and mathematical tables. It has been found to be applicable to numbers coming from a plethora of sources, ranging from seismographic and biological to financial and astronomical. We apply this law to analyze the data obtained from physical many-body systems described by the one-dimensional anisotropic quantum XY models in a transverse magnetic field. We detect the zero-temperature quantum phase transition and find that our method gives better finite-size scaling exponents for the critical point than many other known scaling exponents obtained from measurable quantities like magnetization, entanglement, and quantum discord. We extend our analysis to the same system at finite temperature and find that it also detects the finite-temperature phase transition in the model. Moreover, we compare the Benford distribution analysis with analyses based on the uniform and Poisson distributions. The approach is furthermore important in that it permits high-precision detection of these cooperative physical phenomena even from low-precision experimental data.
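    The first-digit analysis itself is straightforward. The following sketch (illustrative, not the authors' code) compares observed first-digit frequencies against the Benford prediction P(d) = log10(1 + 1/d):

```python
import math
from collections import Counter

def first_digit(x):
    """First significant digit of a nonzero number."""
    x = abs(x)
    while x < 1.0:
        x *= 10.0
    while x >= 10.0:
        x /= 10.0
    return int(x)

def benford_deviation(data):
    """Mean absolute deviation between the observed first-digit
    frequencies of `data` and Benford's prediction
    P(d) = log10(1 + 1/d), for d = 1..9."""
    counts = Counter(first_digit(x) for x in data if x != 0)
    n = sum(counts.values())
    return sum(abs(counts.get(d, 0) / n - math.log10(1.0 + 1.0 / d))
               for d in range(1, 10)) / 9.0

# Powers of 2 follow Benford's law closely; uniformly distributed
# integers do not.
geometric = [2 ** k for k in range(1, 1000)]
uniform = list(range(1, 1000))
```

    The contrast between a Benford-conforming and a non-conforming sample is analogous to the comparison the abstract draws against the uniform and Poisson distributions.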

  9. A Goddard Multi-Scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2010-01-01

    A multi-scale modeling system with unified physics has been developed at NASA Goddard Space Flight Center (GSFC). The system consists of an MMF, the coupled NASA Goddard finite-volume GCM (fvGCM) and Goddard Cumulus Ensemble model (GCE, a CRM); the state-of-the-art Weather Research and Forecasting model (WRF); and the stand-alone GCE. These models can share the same microphysical schemes, radiation (including explicitly calculated cloud optical properties), and surface models that have been developed, improved and tested for different environments. In this talk, I will present: (1) a brief review of the GCE model and its applications to the impact of aerosols on deep precipitation processes; (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs); and (3) a discussion of the Goddard WRF version (its developments and applications). We are also performing an inline tracer calculation to comprehend the physical processes (i.e., the boundary layer and each quadrant in the boundary layer) related to the development and structure of hurricanes and mesoscale convective systems.

  10. A Method for Combining Experimentation and Molecular Dynamics Simulation to Improve Cohesive Zone Models for Metallic Microstructures

    NASA Technical Reports Server (NTRS)

    Hochhalter, J. D.; Glaessgen, E. H.; Ingraffea, A. R.; Aquino, W. A.

    2009-01-01

    Fracture processes within a material begin at the nanometer length scale at which the formation, propagation, and interaction of fundamental damage mechanisms occur. Physics-based modeling of these atomic processes quickly becomes computationally intractable as the system size increases. Thus, a multiscale modeling method, based on the aggregation of fundamental damage processes occurring at the nanoscale within a cohesive zone model, is under development and will enable computationally feasible and physically meaningful microscale fracture simulation in polycrystalline metals. This method employs atomistic simulation to provide an optimization loop with an initial prediction of a cohesive zone model (CZM). This initial CZM is then applied at the crack front region within a finite element model. The optimization procedure iterates upon the CZM until the finite element model acceptably reproduces the near-crack-front displacement fields obtained from experimental observation. With this approach, a comparison can be made between the original CZM predicted by atomistic simulation and the converged CZM that is based on experimental observation. Comparison of the two CZMs gives insight into how atomistic simulation scales.

  11. Using Multi-scale Dynamic Rupture Models to Improve Ground Motion Estimates: ALCF-2 Early Science Program Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ely, Geoffrey P.

    2013-10-31

    This project uses dynamic rupture simulations to investigate high-frequency seismic energy generation. The relevant phenomena (frictional breakdown, shear heating, effective normal-stress fluctuations, material damage, etc.) controlling rupture are strongly interacting and span many orders of magnitude in spatial scale, requiring highresolution simulations that couple disparate physical processes (e.g., elastodynamics, thermal weakening, pore-fluid transport, and heat conduction). Compounding the computational challenge, we know that natural faults are not planar, but instead have roughness that can be approximated by power laws potentially leading to large, multiscale fluctuations in normal stress. The capacity to perform 3D rupture simulations that couple these processes willmore » provide guidance for constructing appropriate source models for high-frequency ground motion simulations. The improved rupture models from our multi-scale dynamic rupture simulations will be used to conduct physicsbased (3D waveform modeling-based) probabilistic seismic hazard analysis (PSHA) for California. These calculation will provide numerous important seismic hazard results, including a state-wide extended earthquake rupture forecast with rupture variations for all significant events, a synthetic seismogram catalog for thousands of scenario events and more than 5000 physics-based seismic hazard curves for California.« less

  12. Microbiological-enhanced mixing across scales during in-situ bioreduction of metals and radionuclides at Department of Energy Sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valocchi, Albert; Werth, Charles; Liu, Wen-Tso

    Bioreduction is being actively investigated as an effective strategy for subsurface remediation and long-term management of DOE sites contaminated by metals and radionuclides (i.e., U(VI)). These strategies require manipulation of the subsurface, usually through injection of chemicals (e.g., electron donor) which mix at varying scales with the contaminant to stimulate metal-reducing bacteria. There is evidence from DOE field experiments suggesting that mixing limitations of substrates at all scales may affect biological growth and activity for U(VI) reduction. Although current conceptual models hold that biomass growth and reduction activity is limited by physical mixing processes, a growing body of literature suggests that reaction could be enhanced by cell-to-cell interaction occurring over length scales extending tens to thousands of microns. Our project investigated two potential mechanisms of enhanced electron transfer. The first is the formation of single- or multiple-species biofilms that transport electrons via direct electrical connections, such as conductive pili (i.e., 'nanowires'), through the biofilm to where the electron acceptor is available. The second is through diffusion of electron carriers from syntrophic bacteria to dissimilatory metal-reducing bacteria (DMRB). The specific objectives of this work are (i) to quantify the extent and rate that electrons are transported between microorganisms in physical mixing zones between an electron donor and electron acceptor (e.g., U(VI)), (ii) to quantify the extent that biomass growth and reaction are enhanced by interspecies electron transport, and (iii) to integrate mixing across scales (e.g., the microscopic scale of electron transfer and the macroscopic scale of diffusion) in an integrated numerical model to quantify the effect of these mechanisms on overall U(VI) reduction rates.
    We tested these hypotheses with five tasks that integrate microbiological experiments, unique micro-fluidics experiments, flow cell experiments, and multi-scale numerical models. Continuous fed-batch reactors were used to derive kinetic parameters for DMRB, and to develop an enrichment culture for elucidation of syntrophic relationships in a complex microbial community. Pore and continuum scale experiments using microfluidic and bench-top flow cells were used to evaluate the impact of cell-to-cell and microbial interactions on reaction enhancement in mixing-limited bioactive zones, and the mechanisms of this interaction. Some of the microfluidic experiments were used to develop and test models that consider direct cell-to-cell interactions during metal reduction. Pore scale models were incorporated into a multi-scale hybrid modeling framework that combines pore scale modeling at the reaction interface with continuum scale modeling. New computational frameworks for combining continuum and pore-scale models were also developed.

  13. Density dependence, spatial scale and patterning in sessile biota.

    PubMed

    Gascoigne, Joanna C; Beadman, Helen A; Saurel, Camille; Kaiser, Michel J

    2005-09-01

    Sessile biota can compete with or facilitate each other, and the interaction of facilitation and competition at different spatial scales is key to developing spatial patchiness and patterning. We examined density and scale dependence in a patterned, soft sediment mussel bed. We followed mussel growth and density at two spatial scales separated by four orders of magnitude. In summer, competition was important at both scales. In winter, there was net facilitation at the small scale with no evidence of density dependence at the large scale. The mechanism for facilitation is probably density dependent protection from wave dislodgement. Intraspecific interactions in soft sediment mussel beds thus vary both temporally and spatially. Our data support the idea that pattern formation in ecological systems arises from competition at large scales and facilitation at smaller scales, so far only shown in vegetation systems. The data, and a simple, heuristic model, also suggest that facilitative interactions in sessile biota are mediated by physical stress, and that interactions change in strength and sign along a spatial or temporal gradient of physical stress.

  14. Computational modeling of fully-ionized, magnetized plasmas using the fluid approximation

    NASA Astrophysics Data System (ADS)

    Schnack, Dalton

    2005-10-01

    Strongly magnetized plasmas are rich in spatial and temporal scales, making a computational approach useful for studying these systems. The most accurate model of a magnetized plasma is based on a kinetic equation that describes the evolution of the distribution function for each species in six-dimensional phase space. However, the high dimensionality renders this approach impractical for computations for long time scales in relevant geometry. Fluid models, derived by taking velocity moments of the kinetic equation [1] and truncating (closing) the hierarchy at some level, are an approximation to the kinetic model. The reduced dimensionality allows a wider range of spatial and/or temporal scales to be explored. Several approximations have been used [2-5]. Successful computational modeling requires understanding the ordering and closure approximations, the fundamental waves supported by the equations, and the numerical properties of the discretization scheme. We review and discuss several ordering schemes, their normal modes, and several algorithms that can be applied to obtain a numerical solution. The implementation of kinetic parallel closures is also discussed [6].[1] S. Chapman and T.G. Cowling, ``The Mathematical Theory of Non-Uniform Gases'', Cambridge University Press, Cambridge, UK (1939).[2] R.D. Hazeltine and J.D. Meiss, ``Plasma Confinement'', Addison-Wesley Publishing Company, Redwood City, CA (1992).[3] L.E. Sugiyama and W. Park, Physics of Plasmas 7, 4644 (2000).[4] J.J. Ramos, Physics of Plasmas, 10, 3601 (2003).[5] P.J. Catto and A.N. Simakov, Physics of Plasmas, 11, 90 (2004).[6] E.D. Held et al., Phys. Plasmas 11, 2419 (2004)

  15. Acoustic scaling: A re-evaluation of the acoustic model of Manchester Studio 7

    NASA Astrophysics Data System (ADS)

    Walker, R.

    1984-12-01

    The reasons for the reconstruction and re-evaluation of the acoustic scale model of a large music studio are discussed. The design and construction of the model using mechanical and structural considerations rather than purely acoustic absorption criteria are described, and the results obtained are given. The results confirm that structural elements within the studio gave rise to unexpected and unwanted low-frequency acoustic absorption. The results also show that, at least for the relatively well understood mechanisms of sound energy absorption, physical modelling of the structural and internal components gives an acoustically accurate scale model, within the usual tolerances of acoustic design. The poor reliability of measurements of acoustic absorption coefficients is well illustrated. The conclusion is reached that such acoustic scale modelling is a valid and, for large-scale projects, financially justifiable technique for predicting fundamental acoustic effects. It is not appropriate for the prediction of fine details, because such small details are unlikely to be reproduced exactly at a different size without extensive measurements of the material's performance at both scales.

  16. Hydrological processes at the urban residential scale

    Treesearch

    Q. Xiao; E.G. McPherson; J.R. Simpson; S.L. Ustin

    2007-01-01

    In the face of increasing urbanization, there is growing interest in application of microscale hydrologic solutions to minimize storm runoff and conserve water at the source. In this study, a physically based numerical model was developed to understand hydrologic processes better at the urban residential scale and the interaction of these processes among different...

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, William D; Johansen, Hans; Evans, Katherine J

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  18. An analysis platform for multiscale hydrogeologic modeling with emphasis on hybrid multiscale methods.

    PubMed

    Scheibe, Timothy D; Murphy, Ellyn M; Chen, Xingyuan; Rice, Amy K; Carroll, Kenneth C; Palmer, Bruce J; Tartakovsky, Alexandre M; Battiato, Ilenia; Wood, Brian D

    2015-01-01

    One of the most significant challenges faced by hydrogeologic modelers is the disparity between the spatial and temporal scales at which fundamental flow, transport, and reaction processes can best be understood and quantified (e.g., microscopic to pore scales and seconds to days) and at which practical model predictions are needed (e.g., plume to aquifer scales and years to centuries). While the multiscale nature of hydrogeologic problems is widely recognized, technological limitations in computation and characterization restrict most practical modeling efforts to fairly coarse representations of heterogeneous properties and processes. For some modern problems, the necessary level of simplification is such that model parameters may lose physical meaning and model predictive ability is questionable for any conditions other than those to which the model was calibrated. Recently, there has been broad interest across a wide range of scientific and engineering disciplines in simulation approaches that more rigorously account for the multiscale nature of systems of interest. In this article, we review a number of such approaches and propose a classification scheme for defining different types of multiscale simulation methods and those classes of problems to which they are most applicable. Our classification scheme is presented in terms of a flowchart (Multiscale Analysis Platform), and defines several different motifs of multiscale simulation. Within each motif, the member methods are reviewed and example applications are discussed. We focus attention on hybrid multiscale methods, in which two or more models with different physics described at fundamentally different scales are directly coupled within a single simulation. Very recently these methods have begun to be applied to groundwater flow and transport simulations, and we discuss these applications in the context of our classification scheme. 
As computational and characterization capabilities continue to improve, we envision that hybrid multiscale modeling will become more common and also a viable alternative to conventional single-scale models in the near future. © 2014, National Ground Water Association.

  19. Towards integrated modelling of soil organic carbon cycling at landscape scale

    NASA Astrophysics Data System (ADS)

    Viaud, V.

    2009-04-01

    Soil organic carbon (SOC) is recognized as a key factor in the chemical, biological and physical quality of soil. Numerous models of soil organic matter turnover have been developed since the 1930s, most of them dedicated to plot-scale applications. More recently, they have been applied at national scales to establish the inventories of carbon stocks required by the Kyoto protocol. However, only a few studies consider the intermediate landscape scale, where the spatio-temporal pattern of land management practices, its interactions with the physical environment and its impacts on SOC dynamics can be investigated to provide guidelines for sustainable management of soils in agricultural areas. Modelling SOC cycling at this scale requires access to accurate, spatially explicit input data on soils (SOC content, bulk density, depth, texture) and land use (land cover, farm practices), and the combination of both data sets in a relevant integrated landscape representation. The purpose of this paper is to present a first approach to modelling SOC evolution in a small catchment. The impact of the way the landscape is represented on SOC stocks in the catchment was specifically addressed. This study was based on the field map, the soil survey, the crop rotations and the land management practices of an actual 10-km² agricultural catchment located in Brittany (France). The RothC model was used to simulate soil organic matter dynamics. A landscape representation in the form of a systematic regular grid, where driving properties vary continuously in space, was compared to a representation in which the landscape is subdivided into a set of homogeneous geographical units. This preliminary work enabled us to identify future needs for improving integrated soil-landscape modelling in agricultural areas.

  20. Progress report for a research program in theoretical high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feldman, D.; Fried, H.M.; Jevicki, A.

    This year's research has dealt with: superstrings in the early universe; invisible axion emissions from SN1987A; the quartic interaction in Witten's superstring field theory; W-boson associated multiplicity and the dual parton model; cosmic strings and galaxy formation; cosmic strings and baryogenesis; quark flavor mixing; p-p̄ scattering at TeV energies; random surfaces; ordered exponentials and differential equations; initial value and back-reaction problems in quantum field theory; string field theory and Weyl invariance; the renormalization group and string field theory; the evolution of scalar fields in an inflationary universe, with and without the effects of gravitational perturbations; cosmic string catalysis of skyrmion decay; inflation and cosmic strings from dynamical symmetry breaking; the physics of flavor mixing; string-inspired cosmology; strings at high energy densities and complex temperatures; the problem of non-locality in string theory; string statistical mechanics; large-scale structures with cosmic strings and neutrinos; the delta expansion for stochastic quantization; the high-energy neutrino flux from ordinary cosmic strings; a physical picture of loop bremsstrahlung; cylindrically-symmetric solutions of four-dimensional sigma models; large-scale structure with hot dark matter and cosmic strings; the unitarization of the odderon; string thermodynamics and conservation laws; the dependence of inflationary-universe models on initial conditions; the delta expansion and local gauge invariance; particle physics and galaxy formation; chaotic inflation with metric and matter perturbations; grand-unified theories, galaxy formation, and large-scale structure; neutrino clustering in cosmic-string-induced wakes; and infrared approximations to nonlinear differential equations. 17 refs.

  1. Numerical Investigation of Dual-Mode Scramjet Combustor with Large Upstream Interaction

    NASA Technical Reports Server (NTRS)

    Mohieldin, T. O.; Tiwari, S. N.; Reubush, David E. (Technical Monitor)

    2004-01-01

    A dual-mode scramjet combustor configuration with significant upstream interaction is investigated numerically, and the possibility of scaling the domain to accelerate convergence and reduce computational time is explored. The supersonic combustor configuration was selected to provide an understanding of key features of upstream interaction and to identify physical and numerical issues relating to modeling of dual-mode configurations. The numerical analysis was performed with vitiated air at a freestream Mach number of 2.5 using hydrogen as the sonic injectant. Results are presented for two-dimensional models and a three-dimensional jet-to-jet symmetric geometry. Comparisons are made with experimental results. Two-dimensional and three-dimensional results show a substantial oblique shock train reaching upstream of the fuel injectors. Flow characteristics slow numerical convergence, while the upstream interaction slowly increases with further iterations. As the flow field develops, the symmetry assumption breaks down. A large separation zone develops and extends further upstream of the step. This asymmetric flow structure is not seen in the experimental data. Results obtained using a sub-scale domain (both two-dimensional and three-dimensional) qualitatively recover the flow physics obtained from full-scale simulations. All results show that numerical modeling using a scaled geometry provides good agreement with full-scale numerical results and experimental results for this configuration. This study supports the argument that numerical scaling is useful in simulating dual-mode scramjet combustor flowfields and could provide an excellent convergence-acceleration technique for dual-mode simulations.

  2. a Study of Ultrasonic Wave Propagation Through Parallel Arrays of Immersed Tubes

    NASA Astrophysics Data System (ADS)

    Cocker, R. P.; Challis, R. E.

    1996-06-01

    Tubular array structures are a very common component in industrial heat-exchange plant, and the non-destructive testing of these arrays is essential. Acoustic methods using microphones or ultrasound are attractive but require a thorough understanding of the acoustic properties of tube arrays. This paper details the development and testing of a small-scale physical model of a tube array to verify the predictions of a theoretical model for acoustic propagation through tube arrays developed by Heckl, Mulholland, and Huang [1-5], as a basis for considering small-scale physical models in the development of non-destructive testing procedures for tube arrays. Their model predicts transmission spectra for plane waves incident on an array of tubes arranged in straight rows. Relative transmission is frequency dependent, with bands of high and low attenuation caused by resonances within individual tubes and between tubes in the array. As the number of rows in the array increases, the relative transmission spectrum becomes more complex, with increasingly well-defined bands of high and low attenuation. Diffraction of acoustic waves with wavelengths less than the tube spacing is predicted and appears as step reductions in the transmission spectrum at frequencies corresponding to integer multiples of the tube spacing. Experiments with the physical model confirm the principal features of the theoretical treatment.

  3. Length-scale dependent mechanical properties of Al-Cu eutectic alloy: Molecular dynamics based model and its experimental verification

    NASA Astrophysics Data System (ADS)

    Tiwary, C. S.; Chakraborty, S.; Mahapatra, D. R.; Chattopadhyay, K.

    2014-05-01

    This paper attempts to gain an understanding of the effect of lamellar length scale on the mechanical properties of a two-phase metal-intermetallic eutectic structure. We first develop a molecular dynamics model for the in-situ grown eutectic interface, followed by a model of deformation of the Al-Al2Cu lamellar eutectic. Leveraging the insights obtained from the simulation on the behaviour of dislocations at different length scales of the eutectic, we present and explain experimental results on Al-Al2Cu eutectics with various lamellar spacings. The physics behind the mechanism is further quantified with the help of an atomic-level energy model for different length scales as well as different strains. An atomic-level energy partitioning of the lamellae and the interface regions reveals that the energy of the lamella core accumulates mostly through dislocations, irrespective of the length scale, whereas the energy of the interface accumulates mostly through dislocations when the length scale is small; the trend is reversed when the length scale grows beyond a critical size of about 80 nm.

  4. Single-particle dynamics of the Anderson model: a local moment approach

    NASA Astrophysics Data System (ADS)

    Glossop, Matthew T.; Logan, David E.

    2002-07-01

    A non-perturbative local moment approach to single-particle dynamics of the general asymmetric Anderson impurity model is developed. The approach encompasses all energy scales and interaction strengths. It captures thereby strong coupling Kondo behaviour, including the resultant universal scaling behaviour of the single-particle spectrum; as well as the mixed valence and essentially perturbative empty orbital regimes. The underlying approach is physically transparent and innately simple, and as such is capable of practical extension to lattice-based models within the framework of dynamical mean-field theory.

  5. Multi-physics modelling approach for oscillatory microengines: application for a microStirling generator design

    NASA Astrophysics Data System (ADS)

    Formosa, F.; Fréchette, L. G.

    2015-12-01

    An electrical circuit equivalent (ECE) approach has been set up that allows elementary oscillatory microengine components to be modelled. They cover gas channel/chamber thermodynamics, viscous and thermal effects, the mechanical structure, and electromechanical transducers. The proposed tool has been validated on a centimetre-scale free-piston membrane Stirling engine [1]. We propose here new developments that take scaling effects into account to establish models suitable for any microengine. They are based on simplifications derived from comparing the hydraulic radius with the viscous and thermal penetration depths, respectively.

  6. Hostility and quality of life among Hispanics/Latinos in the HCHS/SOL Sociocultural Ancillary Study

    PubMed Central

    Moncrieft, Ashley E.; Llabre, Maria M.; Gallo, Linda C.; Cai, Jianwen; Gonzalez, Franklyn; Gonzalez, Patricia; Ostrovsky, Natania W.; Schneiderman, Neil; Penedo, Frank J.

    2016-01-01

    Objective The purpose of this study was to determine if hostility is associated with physical and mental health-related quality of life (QoL) in U.S. Hispanics/Latinos after accounting for depression and anxiety. Methods Analyses included 5,313 adults (62% women, 18–75 years) who completed the ancillary sociocultural assessment of the Hispanic Community Health Study/Study of Latinos. Participants completed the Center for Epidemiological Studies Depression Scale, Spielberger Trait Anxiety Scale, Spielberger Trait Anger Scale, Cook-Medley Hostility cynicism subscale, and Short Form Health Survey. In a structural regression model, associations of hostility with mental and physical QoL were examined. Results In a model adjusting for age, sex, disease burden, income, education and years in the U.S., hostility was related to worse mental QoL, and was marginally associated with worse physical QoL. However, when adjusting for the influence of depression and anxiety, greater hostility was associated with better mental QoL, and was not associated with physical QoL. Conclusions Results indicate observed associations between hostility and QoL are confounded by symptoms of anxiety and depression, and suggest hostility is independently associated with better mental QoL in this population. Findings also highlight the importance of differentiating shared and unique associations of specific emotions with health outcomes. PMID:27456582

  7. Physics of microstructures enhancement of thin film evaporation heat transfer in microchannels flow boiling

    PubMed Central

    Bigham, Sajjad; Fazeli, Abdolreza; Moghaddam, Saeed

    2017-01-01

    Performance enhancement of the two-phase flow boiling heat transfer process in microchannels through implementation of surface micro- and nanostructures has gained substantial interest in recent years. However, the reported results range widely from a decline to improvements in performance depending on the test conditions and fluid properties, without a consensus on the physical mechanisms responsible for the observed behavior. This gap in knowledge stems from a lack of understanding of the physics of surface structures' interactions with the microscale heat and mass transfer events involved in the microchannel flow boiling process. Here, using a novel measurement technique, the heat and mass transfer process is analyzed within surface structures with unprecedented detail. The local heat flux and dryout time scale are measured as the liquid wicks through surface structures and evaporates. The physics governing heat transfer enhancement on textured surfaces is explained by a deterministic model that involves three key parameters: the drying time scale of the liquid film wicking into the surface structures (τd), the heating length scale of the liquid film (δH) and the area fraction of the evaporating liquid film (Ar). It is shown that the model accurately predicts the optimum spacing between surface structures (i.e., pillars fabricated on the microchannel wall) in the boiling of two fluids, FC-72 and water, with fundamentally different wicking characteristics. PMID:28303952

  8. Mapping porosity of the deep critical zone in 3D using near-surface geophysics, rock physics modeling, and drilling

    NASA Astrophysics Data System (ADS)

    Flinchum, B. A.; Holbrook, W. S.; Grana, D.; Parsekian, A.; Carr, B.; Jiao, J.

    2017-12-01

    Porosity is generated by chemical, physical and biological processes that work to transform bedrock into soil. The resulting porosity structure can provide specifics about these processes and can improve understanding of groundwater storage in the deep critical zone. Near-surface geophysical methods, when combined with rock physics and drilling, can be used to map porosity over large spatial scales. In this study, we estimate porosity in three dimensions (3D) across a 58-ha granite catchment. Observations include seismic refraction, downhole nuclear magnetic resonance logs, downhole sonic logs, and samples of core acquired by push coring. We use a novel petrophysical approach integrating two rock physics models, a porous medium for the saprolite and a differential effective medium for the fractured rock, that drive a Bayesian inversion to calculate porosity from seismic velocities. The inverted geophysical porosities are within about 0.05 m³/m³ of laboratory-measured values. We extrapolate the porosity estimates below the seismic refraction lines to a 3D volume using ordinary kriging, mapping the distribution of porosity to depths of up to 80 m. This study provides a unique map of porosity at a scale never before seen in critical zone science. Estimating porosity at these large spatial scales opens the door to improving our understanding of the processes that shape the deep critical zone.
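The kriging step described above can be sketched compactly. Below is a minimal one-dimensional illustration of ordinary kriging with an exponential variogram; the study itself interpolates in 3D, and the sill and range values here are invented for the example, not taken from the paper:

```python
import numpy as np

def ordinary_kriging(x_obs, z_obs, x_new, sill=1.0, rng=50.0):
    """Ordinary kriging in 1D with an exponential variogram (illustrative only)."""
    def gamma(h):  # variogram: rises from 0 toward the sill over the range
        return sill * (1.0 - np.exp(-3.0 * h / rng))

    n = len(x_obs)
    # Kriging system: variogram among observations, plus a Lagrange
    # row/column enforcing that the weights sum to one (unbiasedness).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(np.abs(x_obs[:, None] - x_obs[None, :]))
    A[n, n] = 0.0
    preds = []
    for x0 in np.atleast_1d(x_new):
        b = np.ones(n + 1)
        b[:n] = gamma(np.abs(x_obs - x0))
        w = np.linalg.solve(A, b)
        preds.append(w[:n] @ z_obs)
    return np.array(preds)
```

Because there is no nugget term, the interpolator is exact at the observation points, which is one reason kriging is attractive for honoring measured borehole porosity values.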

  9. Scaling and criticality in a stochastic multi-agent model of a financial market

    NASA Astrophysics Data System (ADS)

    Lux, Thomas; Marchesi, Michele

    1999-02-01

    Financial prices have been found to exhibit some universal characteristics that resemble the scaling laws characterizing physical systems in which large numbers of units interact. This raises the question of whether scaling in finance emerges in a similar way - from the interactions of a large ensemble of market participants. However, such an explanation is in contradiction to the prevalent `efficient market hypothesis' in economics, which assumes that the movements of financial prices are an immediate and unbiased reflection of incoming news about future earning prospects. Within this hypothesis, scaling in price changes would simply reflect similar scaling in the `input' signals that influence them. Here we describe a multi-agent model of financial markets which supports the idea that scaling arises from mutual interactions of participants. Although the `news arrival process' in our model lacks both power-law scaling and any temporal dependence in volatility, we find that it generates such behaviour as a result of interactions between agents.

  10. An improved Cauchy number approach for predicting the drag and reconfiguration of flexible vegetation

    NASA Astrophysics Data System (ADS)

    Whittaker, Peter; Wilson, Catherine A. M. E.; Aberle, Jochen

    2015-09-01

    An improved model to describe the drag and reconfiguration of flexible riparian vegetation is proposed. The key improvement over previous models is the use of a refined 'vegetative' Cauchy number to explicitly determine the magnitude and rate of the vegetation's reconfiguration. After being derived from dimensional considerations, the model is applied to two experimental data sets. The first contains high-resolution drag force and physical property measurements for twenty-one foliated and defoliated full-scale trees, including specimens of Alnus glutinosa, Populus nigra and Salix alba. The second data set is independent and of a different scale, consisting of drag force and physical property measurements for natural and artificial branches of willow and poplar, under partially and fully submerged flow conditions. Good agreement between the measured and predicted drag forces is observed for both data sets, especially when compared to a more typical 'rigid' approximation, where the effects of reconfiguration are neglected.
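The abstract does not give the model's closed form, but the general shape of such Cauchy-number approaches can be sketched: rigid-body drag grows as U², and above a critical Cauchy number a reconfiguration factor with a negative Vogel-type exponent ψ bends the curve toward U^(2+ψ). Every parameter value and the exact form of the Cauchy number below are assumptions for illustration, not the authors' calibration:

```python
import numpy as np

def drag_force(U, rho=1000.0, Cd=1.0, A=0.1, l=1.0, EI=5.0,
               Ca_crit=1.0, psi=-1.0):
    """Hypothetical drag-with-reconfiguration sketch (not the paper's fit).

    Below Ca_crit the plant behaves rigidly (F ~ U^2); above it, the
    reconfiguration factor (Ca/Ca_crit)^(psi/2) gives F ~ U^(2+psi).
    """
    Ca = rho * Cd * A * l**2 * U**2 / EI      # assumed 'vegetative' Cauchy number
    F_rigid = 0.5 * rho * Cd * A * U**2       # classical rigid-body drag
    R = np.where(Ca > Ca_crit, (Ca / Ca_crit) ** (psi / 2.0), 1.0)
    return F_rigid * R
```

With psi = -1, doubling the velocity in the reconfiguring regime doubles (rather than quadruples) the force, which is the qualitative signature of flexible vegetation that such models aim to capture.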

  11. Cascade model for fluvial geomorphology

    NASA Technical Reports Server (NTRS)

    Newman, W. I.; Turcotte, D. L.

    1990-01-01

    Erosional landscapes are generally scale-invariant and fractal. Spectral studies provide quantitative confirmation of this statement. Linear theories of erosion will not generate scale-invariant topography. In order to explain the fractal behavior of landscapes, a modified Fourier series has been introduced that is the basis for a renormalization approach. A nonlinear dynamical model has been introduced for the decay of the modified Fourier series coefficients that yields a fractal spectrum. It is argued that a physical basis for this approach is that a fractal (or nearly fractal) distribution of storms (floods) continually renews erosional features on all scales.
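A quick way to see the connection between fractal topography and its spectrum is to synthesize a profile by Fourier filtering, shaping amplitudes so that spectral power falls off as k^(-β), and then recover β from the spectrum. This is a generic illustration of the spectral diagnostic, not the authors' modified-Fourier-series model:

```python
import numpy as np

def fractal_profile(n=4096, beta=2.0, seed=0):
    """Synthetic scale-invariant profile with power spectrum ~ k^(-beta)."""
    rand = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-beta / 2.0)   # power = amp^2 ~ k^(-beta)
    phase = rand.uniform(0.0, 2.0 * np.pi, len(k))
    phase[0] = phase[-1] = 0.0         # keep DC and Nyquist bins real
    return np.fft.irfft(amp * np.exp(1j * phase), n)

def spectral_slope(z):
    """Least-squares slope of log-power vs log-wavenumber; estimates -beta."""
    k = np.fft.rfftfreq(len(z))[1:]
    power = np.abs(np.fft.rfft(z))[1:] ** 2
    return np.polyfit(np.log(k), np.log(power), 1)[0]
```

A profile built with β = 2 returns a spectral slope near -2, which is how the spectral studies mentioned above diagnose scale invariance in real topography.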

  12. Physics with e⁺e⁻ Linear Colliders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barklow, Timothy L

    2003-05-05

    We describe the physics potential of e⁺e⁻ linear colliders in this report. These machines are planned to operate in the first phase at a center-of-mass energy of 500 GeV, before being scaled up to about 1 TeV. In the second phase of the operation, a final energy of about 2 TeV is expected. The machines will allow us to perform precision tests of the heavy particles in the Standard Model, the top quark and the electroweak bosons. They are ideal facilities for exploring the properties of Higgs particles, in particular in the intermediate mass range. New vector bosons and novel matter particles in extended gauge theories can be searched for and studied thoroughly. The machines provide unique opportunities for the discovery of particles in supersymmetric extensions of the Standard Model: the spectrum of Higgs particles, the supersymmetric partners of the electroweak gauge and Higgs bosons, and the matter particles. High-precision analyses of their properties and interactions will allow for extrapolations to energy scales close to the Planck scale where gravity becomes significant. In alternative scenarios, like compositeness models, novel matter particles and interactions can be discovered and investigated in the energy range above the existing colliders up to the TeV scale. Whatever scenario is realized in Nature, the discovery potential of e⁺e⁻ linear colliders and the high precision with which the properties of particles and their interactions can be analyzed define an exciting physics programme complementary to hadron machines.

  13. Statistical-physical model of the hydraulic conductivity

    NASA Astrophysics Data System (ADS)

    Usowicz, B.; Marczewski, W.; Usowicz, J. B.; Lukowski, M. I.

    2012-04-01

    The water content of the unsaturated subsurface soil layer is determined by processes of mass and energy exchange between the soil and the atmosphere, and between the particular members of the layered media. These are generally non-homogeneous on different scales, considering soil porosity, soil texture, the presence of vegetation elements in the root zone, the canopy above the surface, and the varying biomass density of plant clusters. That heterogeneity determines statistically effective values of particular physical properties. This work considers mainly those properties which determine the hydraulic conductivity of soil. This property is necessary for characterizing water transfer in the root zone and the access of nutrients to plants, but it also determines the water capacity at the field scale. The temporal variability of forcing conditions and the evolution of vegetation substantially affect the water capacity at large scales, driving the evolution of water conditions across the entire area between the extremes of flood and drought. The dynamics of this evolution are strongly determined by vegetation but are hard to predict. Hydrological models require input data on the hydraulic properties of the porous soil, which are provided in this paper by means of a statistical-physical model of hydraulic conductivity. The model was determined for soils typical of Euroregion Bug, Eastern Poland. It is calibrated against direct measurements at the field scale, and enables typical water-retention characteristics to be determined from retention curves that bind the hydraulic conductivity to the water-saturation state of the soil. The values of the hydraulic conductivity in two reference states are used to calibrate the model. One is close to full saturation, and the other is at low water content far from saturation, for a particular soil type. The effect of calibration depends on the assumed ranges of soil properties used to recognize the soil type. Among those properties, the key roles are played by the bulk density, the porosity, and its dependence on the specific surface area of the soil. The aim of this work is to provide such auxiliary data to SMOS, relating soil moisture to water capacity when retrieving SM from SMOS L1C data. * The work was financially supported in part by the ESA Programme for European Cooperating States (PECS), No. 98084 "SWEX-R, Soil Water and Energy Exchange/Research", AO3275.
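The abstract does not spell out the model's equations, so as a point of reference, a standard way to bind hydraulic conductivity to the water-saturation state via a retention curve is the van Genuchten-Mualem parameterization; the Ks and n values below are placeholders for illustration, not the calibrated Euroregion Bug values:

```python
import numpy as np

def k_unsat(Se, Ks=1e-5, n=2.0):
    """van Genuchten-Mualem conductivity [m/s] vs effective saturation Se in (0, 1].

    Ks is the saturated conductivity; n is the retention-curve shape
    parameter (m = 1 - 1/n). Both values here are illustrative placeholders.
    """
    m = 1.0 - 1.0 / n
    Se = np.clip(Se, 1e-12, 1.0)  # guard against log-type blowup at Se = 0
    return Ks * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2
```

Calibrating against two reference states, one near saturation and one far from it, as the paper describes, then amounts to pinning down Ks and the shape parameter from two (Se, K) pairs.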

  14. Lepton-flavored dark matter

    DOE PAGES

    Kile, Jennifer; Kobach, Andrew; Soni, Amarjit

    2015-04-08

    In this work, we address two paradoxes. The first is that the measured dark-matter relic density can be satisfied with new physics at O(100 GeV–1 TeV), while the null results from direct-detection experiments place lower bounds of O(10 TeV) on a new-physics scale. The second puzzle is that the severe suppression of lepton-flavor-violating processes involving electrons, e.g. μ → 3e, τ → eμμ, etc., implies that generic new-physics contributions to lepton interactions cannot exist below O(10–100 TeV), whereas the 3.6σ deviation of the muon g-2 from the standard model can be explained by a new physics scale […]. We suggest experimental tests for these ideas at colliders and for low-energy observables. (author)

  16. Development and testing of a physically based model of streambank erosion for coupling with a basin-scale hydrologic model SWAT

    USDA-ARS?s Scientific Manuscript database

    A comprehensive stream bank erosion model based on excess shear stress has been developed and incorporated in the hydrological model Soil and Water Assessment Tool (SWAT). It takes into account processes such as weathering, vegetative cover, and channel meanders to adjust critical and effective str...
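    The excess-shear-stress formulation that such streambank models are typically built on (the snippet is truncated, so the exact form used in the SWAT extension is an assumption here) relates the lateral erosion rate to how far the applied shear exceeds the soil's critical shear:

```python
def excess_shear_erosion_rate(tau, tau_c, k_d):
    """Erosion rate (m/s) from the excess shear stress model:
    e = k_d * (tau - tau_c) for tau > tau_c, zero otherwise.
    tau and tau_c are shear stresses in Pa; k_d is the soil erodibility."""
    return k_d * (tau - tau_c) if tau > tau_c else 0.0
```

    Processes such as weathering and vegetative cover enter models of this kind by adjusting tau_c and k_d rather than by changing the rate law itself.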

  17. Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation

    DTIC Science & Technology

    2016-08-04

    soil type. The modeling approach is based on (i) a seamless integration of multibody dynamics and discrete element method (DEM) solvers, and (ii...ensure that the vehicle follows a desired path. The soil is modeled as a Discrete Element Model (DEM) with a general cohesive material model that is

  18. Precision islands in the Ising and O(N ) models

    DOE PAGES

    Kos, Filip; Poland, David; Simmons-Duffin, David; ...

    2016-08-04

    We make precise determinations of the leading scaling dimensions and operator product expansion (OPE) coefficients in the 3d Ising, O(2), and O(3) models from the conformal bootstrap with mixed correlators. We improve on previous studies by scanning over possible relative values of the leading OPE coefficients, which incorporates the physical information that there is only a single operator at a given scaling dimension. The scaling dimensions and OPE coefficients obtained for the 3d Ising model, (Δ_σ, Δ_ϵ, λ_σσϵ, λ_ϵϵϵ) = (0.5181489(10), 1.412625(10), 1.0518537(41), 1.532435(19)), give the most precise determinations of these quantities to date.

  19. Inflation in the standard cosmological model

    NASA Astrophysics Data System (ADS)

    Uzan, Jean-Philippe

    2015-12-01

    The inflationary paradigm is now part of the standard cosmological model as a description of its primordial phase. While its original motivation was to solve the standard problems of the hot big bang model, it was soon understood that it offers a natural theory for the origin of the large-scale structure of the universe. Most models rely on a slow-rolling scalar field and enjoy very generic predictions. Besides, all the matter of the universe is produced by the decay of the inflaton field at the end of inflation during a phase of reheating. These predictions can be (and are) tested through their imprint on the large-scale structure and in particular the cosmic microwave background. Inflation stands as a window in physics where both general relativity and quantum field theory are at work and which can be studied observationally. It connects cosmology with high-energy physics. Today most models are constructed within extensions of the standard model, such as supersymmetry or string theory. Inflation also disrupts our vision of the universe, in particular with the ideas of chaotic inflation and eternal inflation that tend to promote the image of a very inhomogeneous universe with fractal structure on large scales. This idea is also at the heart of further speculations, such as the multiverse. This introduction summarizes the connections between inflation and the hot big bang model and details the basics of its dynamics and predictions.
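    The slow-roll regime mentioned above is conventionally quantified by the potential slow-roll parameters (these are standard textbook definitions, not taken from this record):

```latex
\epsilon_V = \frac{M_{\mathrm{Pl}}^2}{2}\left(\frac{V'}{V}\right)^2 ,
\qquad
\eta_V = M_{\mathrm{Pl}}^2\,\frac{V''}{V} ,
```

    Inflation proceeds while $\epsilon_V \ll 1$ and $|\eta_V| \ll 1$, and the generic predictions referred to in the abstract follow, e.g. a nearly scale-invariant scalar spectral index $n_s - 1 \simeq 2\eta_V - 6\epsilon_V$.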

  20. Numerical dissipation vs. subgrid-scale modelling for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos

    2017-05-01

    This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in its standard as well as its dynamic version. The main strength of being able to correctly calibrate numerical dissipation is the possibility of regularising the solution at the mesh scale. Thanks to this property, it is shown that the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found to be unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with a spectral viscosity prescribed on a physical basis.
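    For reference, the Smagorinsky model that the study compares against computes an eddy viscosity from the resolved strain rate. A minimal sketch of the constant-coefficient version (the coefficient and filter width below are illustrative inputs):

```python
import numpy as np

def smagorinsky_viscosity(grad_u, delta, c_s=0.17):
    """Constant-coefficient Smagorinsky eddy viscosity
    nu_t = (C_s * delta)**2 * |S|, where S is the symmetric part of the
    resolved velocity gradient and |S| = sqrt(2 * S_ij * S_ij)."""
    s = 0.5 * (grad_u + grad_u.T)          # resolved strain-rate tensor
    s_mag = np.sqrt(2.0 * np.sum(s * s))   # its magnitude
    return (c_s * delta) ** 2 * s_mag

# Pure shear du/dy = 1: |S| = 1, so nu_t = (C_s * delta)**2
grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
nu_t = smagorinsky_viscosity(grad_u, delta=0.1)
```

    The dynamic version replaces the fixed c_s with a coefficient computed from the resolved field during the simulation; the study's point is that a calibrated numerical viscosity can replace this explicit model entirely.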

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siranosian, Antranik Antonio; Schembri, Philip Edward; Luscher, Darby Jon

    The Los Alamos National Laboratory's Weapon Systems Engineering division's Advanced Engineering Analysis group employs material constitutive models of composites for use in simulations of components and assemblies of interest. Experimental characterization, modeling and prediction of the macro-scale (i.e. continuum) behaviors of these composite materials is generally difficult because they exhibit nonlinear behaviors on the meso- (e.g. micro-) and macro-scales. Furthermore, it can be difficult to measure and model the mechanical responses of the individual constituents and constituent interactions in the composites of interest. Current efforts to model such composite materials rely on semi-empirical models in which meso-scale properties are inferred from continuum level testing and modeling. The proposed approach involves removing the difficulties of interrogating and characterizing micro-scale behaviors by scaling-up the problem to work with macro-scale composites, with the intention of developing testing and modeling capabilities that will be applicable to the meso-scale. This approach assumes that the physical mechanisms governing the responses of the composites on the meso-scale are reproducible on the macro-scale. Working on the macro-scale simplifies the quantification of composite constituents and constituent interactions so that efforts can be focused on developing material models and the testing techniques needed for calibration and validation. Other benefits to working with macro-scale composites include the ability to engineer and manufacture (potentially using additive manufacturing techniques) composites that will support the application of advanced measurement techniques such as digital volume correlation and three-dimensional computed tomography imaging, which would aid in observing and quantifying complex behaviors that are exhibited in the macro-scale composites of interest.
Ultimately, the goal of this new approach is to develop a meso-scale composite modeling framework, applicable to many composite materials, and the corresponding macro-scale testing and test-data interrogation techniques to support model calibration.

  2. Item response modeling: a psychometric assessment of the children's fruit, vegetable, water, and physical activity self-efficacy scales among Chinese children.

    PubMed

    Wang, Jing-Jing; Chen, Tzu-An; Baranowski, Tom; Lau, Patrick W C

    2017-09-16

    This study aimed to evaluate the psychometric properties of four self-efficacy scales (i.e., self-efficacy for fruit (FSE), vegetable (VSE), and water (WSE) intakes, and physical activity (PASE)) and to investigate their differences in item functioning across sex, age, and body weight status groups using item response modeling (IRM) and differential item functioning (DIF). Four self-efficacy scales were administrated to 763 Hong Kong Chinese children (55.2% boys) aged 8-13 years. Classical test theory (CTT) was used to examine the reliability and factorial validity of scales. IRM was conducted and DIF analyses were performed to assess the characteristics of item parameter estimates on the basis of children's sex, age and body weight status. All self-efficacy scales demonstrated adequate to excellent internal consistency reliability (Cronbach's α: 0.79-0.91). One FSE misfit item and one PASE misfit item were detected. Small DIF were found for all the scale items across children's age groups. Items with medium to large DIF were detected in different sex and body weight status groups, which will require modification. A Wright map revealed that items covered the range of the distribution of participants' self-efficacy for each scale except VSE. Several self-efficacy scales' items functioned differently by children's sex and body weight status. Additional research is required to modify the four self-efficacy scales to minimize these moderating influences for application.
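    Item response modeling of the kind used here assigns each item a difficulty and each child a latent self-efficacy level; under the simplest one-parameter (Rasch) model (a generic sketch, not the exact model specification of the study), the endorsement probability is:

```python
import math

def rasch_probability(theta, b):
    """Rasch (one-parameter logistic) item response model: probability
    that a child with latent trait level theta endorses an item of
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# DIF asks whether b differs across groups (e.g. boys vs. girls) for
# children at the same theta; equal item parameters mean no DIF.
p_match = rasch_probability(0.0, 0.0)   # theta == b gives p = 0.5
```

    A DIF analysis fits the item parameters separately per group and flags items whose difficulties diverge by more than a small threshold, which is what "medium to large DIF" refers to above.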

  3. Large Eddy Simulation Study for Fluid Disintegration and Mixing

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Taskinoglu, Ezgi

    2011-01-01

    A new modeling approach is based on the concept of large eddy simulation (LES) within which the large scales are computed and the small scales are modeled. The new approach is expected to retain the fidelity of the physics while also being computationally efficient. Typically, only models for the small-scale fluxes of momentum, species, and enthalpy are used to reintroduce in the simulation the physics lost because the computation only resolves the large scales. These models are called subgrid-scale (SGS) models because they operate at a scale smaller than the LES grid. In a previous study of thermodynamically supercritical fluid disintegration and mixing, additional small-scale terms, one in the momentum and one in the energy conservation equations, were identified as requiring modeling. These additional terms were due to the tight coupling between dynamics and real-gas thermodynamics. It was inferred that without the additional term in the momentum equation, the high density-gradient magnitude regions, experimentally identified as a characteristic feature of these flows, would not be accurately predicted; these high density-gradient magnitude regions were experimentally shown to redistribute turbulence in the flow. It was also inferred that without the additional term in the energy equation, the heat flux magnitude could not be accurately predicted; the heat flux to the wall of combustion devices is a crucial quantity that determines the necessary wall material properties. The present work involves situations where only the term in the momentum equation is important.
Without this additional term in the momentum equation, neither the SGS-flux constant-coefficient Smagorinsky model nor the SGS-flux constant-coefficient Gradient model could reproduce in LES the pressure field or the high density-gradient magnitude regions; the SGS-flux constant-coefficient Scale-Similarity model was the most successful in this endeavor although not totally satisfactory. With a model for the additional term in the momentum equation, the predictions of the constant-coefficient Smagorinsky and constant-coefficient Scale-Similarity models were improved to a certain extent; however, most of the improvement was obtained for the Gradient model. The previously derived model and a newly developed model for the additional term in the momentum equation were both tested, with the new model proving even more successful than the previous model at reproducing the high density-gradient magnitude regions. Several dynamic SGS-flux models, in which the SGS-flux model coefficient is computed as part of the simulation, were tested in conjunction with the new model for this additional term in the momentum equation. The most successful dynamic model was a "mixed" model combining the Smagorinsky and Gradient models. This work is directly applicable to simulations of gas turbine engines (aeronautics) and rocket engines (astronautics).

  4. Upscaling of U (VI) desorption and transport from decimeter‐scale heterogeneity to plume‐scale modeling

    USGS Publications Warehouse

    Curtis, Gary P.; Kohler, Matthias; Kannappan, Ramakrishnan; Briggs, Martin A.; Day-Lewis, Frederick D.

    2015-01-01

    Scientifically defensible predictions of field-scale U(VI) transport in groundwater require an understanding of key processes at multiple scales. These scales range from smaller than the sediment grain scale (less than 10 μm) to as large as the field scale, which can extend over several kilometers. The key processes that need to be considered include both geochemical reactions in solution and at sediment surfaces as well as physical transport processes including advection, dispersion, and pore-scale diffusion. The research summarized in this report includes both experimental and modeling results in batch, column and tracer tests. The objectives of this research were to: (1) quantify the rates of U(VI) desorption from sediments acquired from a uranium-contaminated aquifer in batch experiments; (2) quantify rates of U(VI) desorption in column experiments with variable chemical conditions; and (3) quantify nonreactive tracer and U(VI) transport in field tests.
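    A minimal way to picture the rate-limited desorption quantified in these experiments is a first-order kinetic exchange between sorbed and aqueous U(VI). This is a generic sketch with an assumed linear Kd; the actual study used calibrated surface-complexation reactions:

```python
def kinetic_exchange_step(c_aq, c_sorb, alpha, k_d, dt):
    """One explicit-Euler step of first-order kinetic sorption exchange:
    dC_sorb/dt = -alpha * (C_sorb - k_d * C_aq). Mass lost by the sorbed
    phase appears in solution, so total mass is conserved."""
    rate = alpha * (c_sorb - k_d * c_aq)
    return c_aq + rate * dt, c_sorb - rate * dt
```

    Iterating this step relaxes the system toward C_sorb = Kd * C_aq, the equilibrium that the column experiments perturb by changing chemical conditions.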

  5. A moist aquaplanet variant of the Held–Suarez test for atmospheric model dynamical cores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thatcher, Diana R.; Jablonowski, Christiane

    A moist idealized test case (MITC) for atmospheric model dynamical cores is presented. The MITC is based on the Held–Suarez (HS) test that was developed for dry simulations on “a flat Earth” and replaces the full physical parameterization package with a Newtonian temperature relaxation and Rayleigh damping of the low-level winds. This new variant of the HS test includes moisture and thereby sheds light on the nonlinear dynamics–physics moisture feedbacks without the complexity of full-physics parameterization packages. In particular, it adds simplified moist processes to the HS forcing to model large-scale condensation, boundary-layer mixing, and the exchange of latent and sensible heat between the atmospheric surface and an ocean-covered planet. Using a variety of dynamical cores of the National Center for Atmospheric Research (NCAR)'s Community Atmosphere Model (CAM), this paper demonstrates that the inclusion of the moist idealized physics package leads to climatic states that closely resemble aquaplanet simulations with complex physical parameterizations. This establishes that the MITC approach generates reasonable atmospheric circulations and can be used for a broad range of scientific investigations. This paper provides examples of two application areas. First, the test case reveals the characteristics of the physics–dynamics coupling technique and reproduces coupling issues seen in full-physics simulations. In particular, it is shown that sudden adjustments of the prognostic fields due to moist physics tendencies can trigger undesirable large-scale gravity waves, which can be remedied by a more gradual application of the physical forcing. Second, the moist idealized test case can be used to intercompare dynamical cores. These examples demonstrate the versatility of the MITC approach and suggestions are made for further application areas. Furthermore, the new moist variant of the HS test can be considered a test case of intermediate complexity.
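    The Newtonian temperature relaxation retained from the HS test is simply a linear restoring term toward a prescribed equilibrium profile. A one-line sketch (the timescale and temperatures below are illustrative, not the HS values for a particular latitude and level):

```python
def newtonian_relaxation_step(t_air, t_eq, tau, dt):
    """Held-Suarez-style forcing dT/dt = -(T - T_eq)/tau, discretized
    with forward Euler; tau is the relaxation timescale in seconds."""
    return t_air - dt * (t_air - t_eq) / tau

# One 30-minute step relaxing 250 K air toward a 300 K equilibrium
# profile on a 40-day timescale.
t_new = newtonian_relaxation_step(250.0, 300.0, tau=40.0 * 86400.0, dt=1800.0)
```

    The MITC keeps this restoring structure but adds the moist tendencies on top of it; the paper's coupling point is that applying such tendencies too abruptly excites spurious gravity waves.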

  7. Multi-scale modeling of diffusion-controlled reactions in polymers: renormalisation of reactivity parameters.

    PubMed

    Everaers, Ralf; Rosa, Angelo

    2012-01-07

    The quantitative description of polymeric systems requires hierarchical modeling schemes, which bridge the gap between the atomic scale, relevant to chemical or biomolecular reactions, and the macromolecular scale, where the longest relaxation modes occur. Here, we use the formalism for diffusion-controlled reactions in polymers developed by Wilemski, Fixman, and Doi to discuss the renormalisation of the reactivity parameters in polymer models with varying spatial resolution. In particular, we show that the adjustments are independent of chain length. As a consequence, it is possible to match reactions times between descriptions with different resolution for relatively short reference chains and to use the coarse-grained model to make quantitative predictions for longer chains. We illustrate our results by a detailed discussion of the classical problem of chain cyclization in the Rouse model, which offers the simplest example of a multi-scale descriptions, if we consider differently discretized Rouse models for the same physical system. Moreover, we are able to explore different combinations of compact and non-compact diffusion in the local and large-scale dynamics by varying the embedding dimension.
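    The chain-length independence of the renormalised parameters can be made concrete with the Rouse model itself: coarse-graining λ beads into one (N → N/λ, b → b·√λ, ζ → λζ) leaves the longest relaxation time invariant. A sketch (prefactor conventions vary between texts):

```python
import math

def rouse_time(n_beads, b, zeta, kbt):
    """Longest Rouse relaxation time, tau_R = zeta * N**2 * b**2 /
    (3 * pi**2 * kB*T), for N beads of segment length b and friction zeta."""
    return zeta * n_beads**2 * b**2 / (3.0 * math.pi**2 * kbt)

# Coarse-grain by lambda = 4: fewer, larger, higher-friction beads
# describing the same physical chain give the same tau_R.
fine = rouse_time(400, 1.0, 1.0, 1.0)
coarse = rouse_time(100, 2.0, 4.0, 1.0)   # b * sqrt(4), zeta * 4
```

    Matching the cyclization (reaction) time between the two resolutions then fixes the renormalised reactivity parameter, which is the adjustment the paper shows to be chain-length independent.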

  8. Particle physics today, tomorrow and beyond

    NASA Astrophysics Data System (ADS)

    Ellis, John

    2018-01-01

    The most important discovery in particle physics in recent years was that of the Higgs boson, and much effort is continuing to measure its properties, which agree obstinately with the Standard Model, so far. However, there are many reasons to expect physics beyond the Standard Model, motivated by the stability of the electroweak vacuum, the existence of dark matter and the origin of the visible matter in the Universe, neutrino physics, the hierarchy of mass scales in physics, cosmological inflation and the need for a quantum theory for gravity. Most of these issues are being addressed by the experiments during Run 2 of the LHC, and supersymmetry could help resolve many of them. In addition to the prospects for the LHC, I also review briefly those for direct searches for dark matter and possible future colliders.

  9. Interwell Connectivity Evaluation Using Injection and Production Fluctuation Data

    NASA Astrophysics Data System (ADS)

    Shang, Barry Zhongqi

    The development of multiscale methods for computational simulation of biophysical systems represents a significant challenge. Effective computational models that bridge physical insights obtained from atomistic simulations and experimental findings are lacking. An accurate passing of information between these scales would enable: (1) an improved physical understanding of structure-function relationships, and (2) enhanced rational strategies for molecular engineering and materials design. Two approaches are described in this dissertation to facilitate these multiscale goals. In Part I, we develop a lattice kinetic Monte Carlo model to simulate cellulose decomposition by cellulase enzymes and to understand the effects of spatial confinement on enzyme kinetics. An enhanced mechanistic understanding of this reaction system could improve the design of cellulose bioconversion technologies for renewable and sustainable energy. Using our model, we simulate the reaction up to experimental conversion times of days, while simultaneously capturing the microscopic kinetic behaviors. Therefore, the influence of molecular-scale kinetics on the macroscopic conversion rate is made transparent. The inclusion of spatial constraints in the kinetic model represents a significant advance over classical mass-action models commonly used to describe this reaction system. We find that restrictions due to enzyme jamming and substrate heterogeneity at the molecular level play a dominant role in limiting cellulose conversion. We identify that the key rate limitations are the slow rates of enzyme complexation with glucan chains and the competition between enzyme processivity and jamming. We show that the kinetics of complexation, which involves extraction of a glucan chain end from the cellulose surface and threading through the enzyme active site, occurs slowly on the order of hours, while intrinsic hydrolytic bond cleavage occurs on the order of seconds.
We also elucidate the subtle trade-off between processivity and jamming. Highly processive enzymes cleave a large fraction of a glucan chain during each processive run but are prone to jamming at obstacles. Less processive enzymes avoid jamming but cleave only a small fraction of a chain. Optimizing this trade-off maximizes the cellulose conversion rate. We also elucidate the molecular-scale kinetic origins for synergy among cellulases in enzyme mixtures. In contrast to the currently accepted theory, we show that the ability of an endoglucanase to increase the concentration of chain ends for exoglucanases is insufficient for synergy to occur. Rather, endoglucanases must enhance the rate of complexation between exoglucanases and the newly created chain ends. This enhancement occurs when the endoglucanase is able to partially decrystallize the cellulose surface. We show generally that the driving forces for complexation and jamming, which govern the kinetics of pure exoglucanases, also control the degree of synergy in endo-exo mixtures. In Part II, we focus our attention on a different multiscale problem. This challenge is the development of coarse-grained models from atomistic models to access larger length- and time-scales in a simulation. This problem is difficult because it requires a delicate balance between maintaining (1) physical simplicity in the coarse-grained model and (2) physical consistency with the atomistic model. To achieve these goals, we develop a scheme to coarse-grain an atomistic fluid model into a fluctuating hydrodynamics (FHD) model. The FHD model describes the solvent as a field of fluctuating mass, momentum, and energy densities. The dynamics of the fluid are governed by continuum balance equations and fluctuation-dissipation relations based on the constitutive transport laws. 
The incorporation of both macroscopic transport and microscopic fluctuation phenomena could provide richer physical insight into the behaviors of biophysical systems driven by hydrodynamic fluctuations, such as hydrophobic assembly and crystal nucleation. We further extend our coarse-graining method by developing an interfacial FHD model using information obtained from simulations of an atomistic liquid-vapor interface. We illustrate that a phenomenological Ginzburg-Landau free energy employed in the FHD model can effectively represent the attractive molecular interactions of the atomistic model, which give rise to phase separation. For argon and water, we show that the interfacial FHD model can reproduce the compressibility, surface tension, and capillary wave spectrum of the atomistic model. Via this approach, simulations that explore the coupling between hydrodynamic fluctuations and phase equilibria with molecular-scale consistency are now possible. In both Parts I and II, the emerging theme is that the combination of bottom-up coarse graining and top-down phenomenology is essential for enabling a multiscale approach to remain physically consistent with molecular-scale interactions while simultaneously capturing the collective macroscopic behaviors. This hybrid strategy enables the resulting computational models to be both physically insightful and practically meaningful. (Abstract shortened by UMI.).
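    The lattice kinetic Monte Carlo engine underlying Part I advances the system one stochastic event at a time; the core selection step (a generic Gillespie-style sketch, not the authors' cellulose-specific event catalogue) is:

```python
import math
import random

def kmc_step(rates, rng):
    """One kinetic Monte Carlo step: choose event i with probability
    rates[i]/sum(rates) and draw an exponentially distributed waiting time."""
    total = sum(rates)
    dt = -math.log(rng.random()) / total        # exponential waiting time
    threshold = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if threshold < acc:
            return i, dt
    return len(rates) - 1, dt                   # guard against rounding

event, dt = kmc_step([2.0, 1.0, 0.5], random.Random(42))
```

    This is how hour-scale complexation events and second-scale hydrolysis events coexist in one simulation spanning days of physical time: slow processes simply carry small rates and are selected rarely.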

  10. The quest for a new modelling framework in mathematical biology. Comment on "On the interplay between mathematics and biology: Hallmarks towards a new systems biology" by N. Bellomo et al.

    NASA Astrophysics Data System (ADS)

    Eftimie, Raluca

    2015-03-01

    One of the main unsolved problems of modern physics is finding a "theory of everything" - a theory that can explain, with the help of mathematics, all physical aspects of the universe. While the laws of physics could explain some aspects of the biology of living systems (e.g., the phenomenological interpretation of the movement of cells and animals), there are other aspects specific to biology that cannot be captured by physics models. For example, it is generally accepted that the evolution of a cell-based system is influenced by the activation state of cells (e.g., only activated and functional immune cells can fight diseases); on the other hand, the evolution of an animal-based system can be influenced by the psychological state (e.g., distress) of animals. Therefore, the last 10-20 years have also seen a quest for a "theory of everything" approach extended to biology, with researchers trying to propose mathematical modelling frameworks that can explain various biological phenomena ranging from ecology to developmental biology and medicine [1,2,6]. The basic idea behind this approach can be found in a few reviews on ecology and cell biology [6,7,9-11], where researchers suggested that, due to the parallel between the micro-scale dynamics and the emerging macro-scale phenomena in both cell biology and ecology, many mathematical methods used for ecological processes could be adapted to cancer modelling [7,9] or to modelling in immunology [11]. However, this approach generally involved the use of different models to describe different biological aspects (e.g., models for cell and animal movement, models for competition between cells or animals, etc.).

  11. Curvaton scenario within the minimal supersymmetric standard model and predictions for non-Gaussianity.

    PubMed

    Mazumdar, Anupam; Nadathur, Seshadri

    2012-03-16

    We provide a model in which both the inflaton and the curvaton are obtained from within the minimal supersymmetric standard model, with known gauge and Yukawa interactions. Since now both the inflaton and curvaton fields are successfully embedded within the same sector, their decay products thermalize very quickly before the electroweak scale. This results in two important features of the model: first, there will be no residual isocurvature perturbations, and second, observable non-Gaussianities can be generated with the non-Gaussianity parameter f(NL)~O(5-1000) being determined solely by the combination of weak-scale physics and the standard model Yukawa interactions.

  12. Inverse Problems in Complex Models and Applications to Earth Sciences

    NASA Astrophysics Data System (ADS)

    Bosch, M. E.

    2015-12-01

    The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). Probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized, assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At the local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model.
At the regional scale, joint inversion of gravity and magnetic data is applied to estimate the lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At the planetary scale, the Earth's mantle temperature and elemental composition are inferred from seismic travel-time and geodetic data.
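The layered factorization described above (prior × conditionals × independent survey likelihoods) can be sketched numerically. The lithology classes, property values, and noise levels below are invented for illustration; this is not the author's formulation, only a minimal two-layer hierarchy evaluated pointwise:

```python
import math

def gaussian(x, mu, sigma):
    """Normal PDF, used both for conditionals and likelihoods."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Layer 1: prior over lithology (hypothetical classes and probabilities)
prior_lith = {"sand": 0.6, "shale": 0.4}
# Layer 2: property conditioned on lithology (mean, std of density, g/cc)
prop_given_lith = {"sand": (2.2, 0.1), "shale": (2.6, 0.1)}

def posterior(lith, m, surveys):
    """Unnormalized posterior: prior x conditional x independent likelihoods."""
    mu, sigma = prop_given_lith[lith]
    p = prior_lith[lith] * gaussian(m, mu, sigma)
    for d_obs, noise in surveys:        # layer 3: one factor per survey
        p *= gaussian(d_obs, m, noise)  # toy forward model: datum ~ property
    return p

surveys = [(2.25, 0.05), (2.18, 0.08)]  # two surveys, independent uncertainties
p_sand = posterior("sand", 2.2, surveys)
p_shale = posterior("shale", 2.2, surveys)
```

Because the survey uncertainties are assumed independent, their likelihoods simply multiply; adding a third survey is one more factor in the loop.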

  13. An Analysis Platform for Multiscale Hydrogeologic Modeling with Emphasis on Hybrid Multiscale Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheibe, Timothy D.; Murphy, Ellyn M.; Chen, Xingyuan

    2015-01-01

One of the most significant challenges facing hydrogeologic modelers is the disparity between the spatial and temporal scales at which fundamental flow, transport, and reaction processes can best be understood and quantified (e.g., microscopic to pore scales, seconds to days) and those at which practical model predictions are needed (e.g., plume to aquifer scales, years to centuries). While the multiscale nature of hydrogeologic problems is widely recognized, technological limitations in computation and characterization restrict most practical modeling efforts to fairly coarse representations of heterogeneous properties and processes. For some modern problems, the necessary level of simplification is such that model parameters may lose physical meaning and model predictive ability is questionable for any conditions other than those to which the model was calibrated. Recently, there has been broad interest across a wide range of scientific and engineering disciplines in simulation approaches that more rigorously account for the multiscale nature of systems of interest. In this paper, we review a number of such approaches and propose a classification scheme for defining different types of multiscale simulation methods and the classes of problems to which they are most applicable. Our classification scheme is presented in terms of a flow chart (Multiscale Analysis Platform or MAP), and defines several different motifs of multiscale simulation. Within each motif, the member methods are reviewed and example applications are discussed. We focus attention on hybrid multiscale methods, in which two or more models with different physics described at fundamentally different scales are directly coupled within a single simulation. Very recently these methods have begun to be applied to groundwater flow and transport simulations, and we discuss these applications in the context of our classification scheme.
As computational and characterization capabilities continue to improve, we envision that hybrid multiscale modeling will become more common and may become a viable alternative to conventional single-scale models in the near future.

  14. Re-design of a physically-based catchment scale agrochemical model for the simulation of parameter spaces and flexible transformation schemes

    NASA Astrophysics Data System (ADS)

    Stegen, Ronald; Gassmann, Matthias

    2017-04-01

The use of a broad variety of agrochemicals is essential to modern industrialized agriculture. During recent decades, awareness of the side effects of their use has grown, and with it the need to reproduce, understand, and predict the behaviour of these agrochemicals in the environment in order to optimize their use and minimize side effects. Modern modelling has made great progress in understanding and predicting the fate of these chemicals with numerical methods. While the behaviour of the applied chemicals is often investigated and modelled, most studies simulate only the parent chemicals, assuming the substance is completely eliminated. However, owing to a diversity of chemical, physical, and biological processes, the substances are instead transformed into new chemicals, which are themselves transformed until, at the end of the chain, the substance is completely mineralized. During this process, the fate of each transformation product is determined by its own environmental characteristics, and the pathways and products of transformation can differ greatly by substance and by environmental conditions, which may vary between compartments of the same site. Simulating transformation products introduces additional model uncertainties; thus, the calibration effort increases compared with simulating the transport and degradation of the primary substance alone. Simulating the necessary physical processes also requires substantial computation time. Consequently, few physically-based models offer the possibility to simulate transformation products at all, and those mostly at the field scale. The few models available for the catchment scale are not optimized for this task, i.e. they can simulate only a single parent compound and up to two transformation products. Thus, for simulations of large physico-chemical parameter spaces, the enormous calculation time of the underlying hydrological model diminishes the overall performance.
In this study, the structure of the ZIN-AGRITRA model is re-designed for the transport and transformation of an unlimited number of agrochemicals in the soil-water-plant system at the catchment scale. Besides maintaining a good hydrological standard, the focus is on flexible variation of transformation processes and on optimization for large numbers of different substances. The new design reduces the calculation time per tested substance, allowing faster exploration of parameter spaces. Additionally, the new concept allows for different transformation processes and products in different environmental compartments. A first test of the calculation-time improvements and flexible transformation pathways was performed in a Mediterranean meso-scale catchment, using the insecticide chlorpyrifos and two of its transformation products, which emerge from different transformation processes, as test substances.
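The transformation chains discussed here reduce, in the simplest case, to a linear first-order decay chain. The sketch below integrates such a chain (parent → TP1 → TP2) with explicit Euler; the rate constants and formation fractions are invented, not ZIN-AGRITRA parameters:

```python
def simulate_chain(k, frac, c0=1.0, dt=0.01, t_end=50.0):
    """Explicit-Euler integration of a linear decay chain.

    k    : first-order rate constants, one per substance [1/day]
    frac : fraction of each substance's loss that forms the next product
    """
    n = len(k)
    c = [c0] + [0.0] * (n - 1)   # all mass starts in the parent compound
    for _ in range(int(t_end / dt)):
        loss = [k[i] * c[i] * dt for i in range(n)]
        for i in range(n):
            c[i] -= loss[i]
            if i + 1 < n:
                c[i + 1] += frac[i] * loss[i]   # formation of next product
    return c

# Hypothetical chlorpyrifos-like chain: parent plus two transformation products
conc = simulate_chain(k=[0.2, 0.05, 0.1], frac=[0.7, 0.5, 0.0])
```

With distinct rate constants per substance, the parent is nearly gone after 50 days while the transformation products persist, which is exactly why each product needs its own environmental parameters.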

  15. Model Studies of the Portugues and Bucana Rivers Channelization, Puerto Rico. Hydraulic Model Investigation.

    DTIC Science & Technology

    1978-05-01

    Two 1:30-scale physical hydraulic models of the Portugues and Bucana Rivers were used to determine the adequacy of the original designs for the flood...transmit all expected flood releases from the proposed Portugues and Cerrillos Dams. Modifications to transitions at entrances to the high-velocity

  16. Critical branching neural networks.

    PubMed

    Kello, Christopher T

    2013-01-01

It is now well established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between accounts based on isolable causes and those based on pervasive causes. A spiking neural network model is presented that self-tunes to critical branching and, in doing so, simulates the observed scaling laws as pervasive to neural and behavioral activity. These scaling laws are related to neural and cognitive functions, in that critical branching is shown to yield spiking activity with maximal memory and encoding capacities when analyzed using reservoir computing techniques. The model is also shown to account for findings of pervasive 1/f scaling in speech and cued response behaviors that are difficult to explain by isolable causes. Issues and questions raised by the model and its results are discussed from the perspectives of physics, neuroscience, computer and information sciences, and the psychological and cognitive sciences.
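A toy illustration of self-tuning toward a branching ratio of 1 (the critical point) can be written in a few lines. This is not Kello's spiking network, only a hedged sketch of the homeostatic idea: estimate the branching ratio σ at each step and nudge the transmission probability so that σ approaches 1:

```python
import random

random.seed(1)

p = 0.3          # transmission probability per potential connection
fanout = 4       # potential descendants per active unit
lr = 0.01        # homeostatic learning rate
active = 10      # currently active units

for _ in range(5000):
    # Each of the active*fanout potential transmissions succeeds with prob. p
    children = sum(1 for _ in range(active * fanout) if random.random() < p)
    sigma = children / active            # branching-ratio estimate
    p += lr * (1.0 - sigma) / fanout     # nudge toward the critical point sigma=1
    p = min(max(p, 0.0), 1.0)
    active = max(1, min(children, 200))  # keep the population alive and bounded
```

At equilibrium the transmission probability settles near 1/fanout, so each active unit produces on average one descendant, which is the critical-branching condition.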

  17. Subgrid-scale models for large-eddy simulation of rotating turbulent flows

    NASA Astrophysics Data System (ADS)

    Silvis, Maurits; Trias, Xavier; Abkar, Mahdi; Bae, Hyunji Jane; Lozano-Duran, Adrian; Verstappen, Roel

    2016-11-01

This paper discusses subgrid models for large-eddy simulation of anisotropic flows on anisotropic grids. In particular, we look into ways to model not only the subgrid dissipation but also transport processes, since these are expected to play an important role in rotating turbulent flows. We therefore consider subgrid-scale models of the form τ = -2νt S + μt (SΩ - ΩS), where the eddy viscosity νt is given by the minimum-dissipation model, μt represents a transport coefficient, S is the symmetric part of the velocity gradient, and Ω the skew-symmetric part. To incorporate the effect of mesh anisotropy, the filter length is chosen such that it minimizes the difference between the turbulent stress in physical and computational space, where the physical space is covered by an anisotropic mesh and the computational space is isotropic. The resulting model is successfully tested for rotating homogeneous turbulence and rotating plane-channel flows. The research was largely carried out during the CTR Summer Program 2016. M.S. and R.V. acknowledge the financial support to attend this Summer Program.
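The modeled stress tensor has a simple closed form, so it can be evaluated directly from a resolved velocity gradient. In this sketch the coefficients νt and μt are placeholders (the minimum-dissipation closure for νt is not implemented):

```python
import numpy as np

# Arbitrary resolved velocity gradient tensor (d u_i / d x_j)
grad_u = np.array([[0.1, 0.3, 0.0],
                   [-0.2, 0.0, 0.1],
                   [0.0, 0.4, -0.1]])
S = 0.5 * (grad_u + grad_u.T)      # strain-rate tensor (symmetric part)
Omega = 0.5 * (grad_u - grad_u.T)  # rotation-rate tensor (skew-symmetric part)

nu_t, mu_t = 0.05, 0.02            # placeholder eddy-viscosity / transport coeffs
tau = -2.0 * nu_t * S + mu_t * (S @ Omega - Omega @ S)
```

The commutator SΩ − ΩS is symmetric and traceless, so this term redistributes energy among resolved scales rather than adding net subgrid dissipation, which is what makes it suitable for modeling transport.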

  18. Efficient implicit LES method for the simulation of turbulent cavitating flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Schmidt, Steffen J.; Hickel, Stefan

    2016-07-01

We present a numerical method for efficient large-eddy simulation of compressible liquid flows with cavitation based on an implicit subgrid-scale model. Phase change and subgrid-scale interface structures are modeled by a homogeneous mixture model that assumes local thermodynamic equilibrium. Unlike previous approaches, emphasis is placed on operating on a small stencil (at most four cells). The truncation error of the discretization is designed to function as a physically consistent subgrid-scale model for turbulence. We formulate a sensor functional that detects shock waves or pseudo-phase boundaries within the homogeneous mixture model for localizing numerical dissipation. In smooth regions of the flow field, a formally non-dissipative central discretization scheme is used in combination with a regularization term to model the effect of unresolved subgrid scales. The new method is validated by computing standard single- and two-phase test cases. Comparison of results for a turbulent cavitating mixing layer obtained with the new method demonstrates its suitability for the target applications.

  19. An item response theory analysis of Harter's Self-Perception Profile for children or why strong clinical scales should be distrusted.

    PubMed

    Egberink, Iris J L; Meijer, Rob R

    2011-06-01

The authors investigated the psychometric properties of the subscales of the Self-Perception Profile for Children with item response theory (IRT) models using a sample of 611 children. Results from a nonparametric Mokken analysis and a parametric IRT approach for boys (n = 268) and girls (n = 343) were compared. The authors found that most subscales formed only weak scales and that measurement precision was relatively low, present only for latent trait values indicating low self-perception. The subscales Physical Appearance and Global Self-Worth together formed one strong scale. Children seem to interpret Global Self-Worth items as if they measure Physical Appearance. Furthermore, the authors found that strong Mokken scales (such as Global Self-Worth) consisted mostly of items that repeat the same item content. They conclude that researchers should be very careful in interpreting the total scores on the different Self-Perception Profile for Children scales. Finally, implications for further research are discussed.

  20. A unified gas-kinetic scheme for continuum and rarefied flows IV: Full Boltzmann and model equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chang, E-mail: cliuaa@ust.hk; Xu, Kun, E-mail: makxu@ust.hk; Sun, Quanhua, E-mail: qsun@imech.ac.cn

Fluid dynamic equations are valid at their respective modeling scales, such as the particle mean-free-path scale of the Boltzmann equation and the hydrodynamic scale of the Navier–Stokes (NS) equations. As the modeling scale varies, there should theoretically be a continuous spectrum of fluid dynamic equations. Even though the Boltzmann equation is claimed to be valid at all scales, many Boltzmann solvers, including the direct simulation Monte Carlo method, require cell resolution on the order of the particle mean free path; therefore, they are still single-scale methods. In order to study multiscale flow evolution efficiently, the dynamics in the computational fluid has to change with the scale. A direct modeling of flow physics with a changeable scale may become an appropriate approach. The unified gas-kinetic scheme (UGKS) is a direct modeling method at the mesh-size scale, and its underlying flow physics depends on the resolution of the cell size relative to the particle mean free path. The cell size of UGKS is not limited by the particle mean free path. With the variation of the ratio between the numerical cell size and the local particle mean free path, the UGKS recovers the flow dynamics from particle transport and collision at the kinetic scale to wave propagation at the hydrodynamic scale. The previous UGKS is mostly constructed from the evolution solution of kinetic model equations. Even though the UGKS is very accurate and effective in the low-transition and continuum flow regimes, with the time step being much larger than the particle mean free time, there is still room to develop a more accurate flow solver in the regime where the time step is comparable with the local particle mean free time. At such a scale, there is a dynamic difference between the full Boltzmann collision term and the model equations.
This work concerns the further development of the UGKS with the implementation of the full Boltzmann collision term in the region where it is needed. The central ingredient of the UGKS is the coupled treatment of particle transport and collision in the flux evaluation across a cell interface, where a continuous flow dynamics from the kinetic to the hydrodynamic scale is modeled. The newly developed UGKS has the asymptotic preserving (AP) property of recovering the NS solutions in the continuum flow regime and the full Boltzmann solution in the rarefied regime. In the mostly unexplored transition regime, the UGKS itself provides a valuable tool for the study of non-equilibrium flows. The mathematical properties of the scheme, such as stability, accuracy, and asymptotic preservation, are analyzed in this paper as well.

  1. Understanding electrical conduction in lithium ion batteries through multi-scale modeling

    NASA Astrophysics Data System (ADS)

    Pan, Jie

Silicon (Si) has been considered a promising negative electrode material for lithium ion batteries (LIBs) because of its high theoretical capacity, low discharge voltage, and low cost. However, the utilization of Si electrodes has been hampered by problems such as slow ionic transport, large stress/strain generation, and an unstable solid electrolyte interphase (SEI). These problems severely affect the performance and cycle life of Si electrodes. In general, ionic conduction determines the rate performance of the electrode, while electron leakage through the SEI causes electrolyte decomposition and, thus, capacity loss. The goal of this thesis research is to design Si electrodes with high current efficiency and durability through a fundamental understanding of the ionic and electronic conduction in Si and its SEI. Multi-scale physical and chemical processes occur in the electrode during charging and discharging. This thesis therefore focuses on multi-scale modeling, including the development of new methods, to help understand these coupled physical and chemical processes. For example, we developed a new method based on ab initio molecular dynamics to study the effects of stress/strain on Li ion transport in amorphous lithiated Si electrodes. This method not only quantitatively shows the effect of stress on ionic transport in amorphous materials, but also uncovers the underlying atomistic mechanisms. However, the origin of ionic conduction in the inorganic components of the SEI is different from that in the amorphous Si electrode. To tackle this problem, we developed a model that separates the problem into two scales: 1) the atomistic scale: defect physics and transport in individual SEI components with consideration of the environment, e.g., LiF in equilibrium with the Si electrode; 2) the mesoscopic scale: defect distribution near the heterogeneous interface based on a space charge model.
In addition, to help design better artificial SEI, we further demonstrated a theoretical design of multicomponent SEIs by utilizing the synergetic effect found in the natural SEI. We show that the electrical conduction can be optimized by varying the grain size and volume fraction of two phases in the artificial multicomponent SEI.

  2. Unbiased estimates of galaxy scaling relations from photometric redshift surveys

    NASA Astrophysics Data System (ADS)

    Rossi, Graziano; Sheth, Ravi K.

    2008-06-01

    Many physical properties of galaxies correlate with one another, and these correlations are often used to constrain galaxy formation models. Such correlations include the colour-magnitude relation, the luminosity-size relation, the fundamental plane, etc. However, the transformation from observable (e.g. angular size, apparent brightness) to physical quantity (physical size, luminosity) is often distance dependent. Noise in the distance estimate will lead to biased estimates of these correlations, thus compromising the ability of photometric redshift surveys to constrain galaxy formation models. We describe two methods which can remove this bias. One is a generalization of the Vmax method, and the other is a maximum-likelihood approach. We illustrate their effectiveness by studying the size-luminosity relation in a mock catalogue, although both methods can be applied to other scaling relations as well. We show that if one simply uses photometric redshifts one obtains a biased relation; our methods correct for this bias and recover the true relation.
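The classical Vmax idea that the paper generalizes weights each galaxy by the inverse of the volume within which it would remain above the flux limit. A schematic Euclidean version (illustrative units, ignoring K-corrections and photometric-redshift scatter, which are the paper's actual concern):

```python
import math

def vmax_weight(luminosity, flux_limit, area_frac=1.0):
    """1/Vmax weight in Euclidean space (illustrative units)."""
    # flux = L / (4 pi d^2)  =>  maximum distance at which the source is seen
    d_max = math.sqrt(luminosity / (4.0 * math.pi * flux_limit))
    v_max = area_frac * (4.0 / 3.0) * math.pi * d_max ** 3
    return 1.0 / v_max

# Faint sources are detectable in a smaller volume, so they get larger weights
w_faint = vmax_weight(luminosity=1.0, flux_limit=0.01)
w_bright = vmax_weight(luminosity=100.0, flux_limit=0.01)
```

In Euclidean space the weight scales as L^(-3/2), so a source 100 times fainter carries a weight 1000 times larger; this up-weighting is what corrects the flux-limited sample for the unseen faint population.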

  3. Harmonic generation beyond the Strong-Field Approximation: the physics behind the short-wave-infrared scaling laws.

    PubMed

    Pérez-Hernández, J A; Roso, L; Plaja, L

    2009-06-08

The physics of laser-matter interactions beyond the perturbative limit configures the field of extreme non-linear optics. Although most experiments have been done in the near infrared (lambda

  4. Coarse-grained component concurrency in Earth system modeling: parallelizing atmospheric radiative transfer in the GFDL AM3 model using the Flexible Modeling System coupling framework

    NASA Astrophysics Data System (ADS)

    Balaji, V.; Benson, Rusty; Wyman, Bruce; Held, Isaac

    2016-10-01

    Climate models represent a large variety of processes on a variety of timescales and space scales, a canonical example of multi-physics multi-scale modeling. Current hardware trends, such as Graphical Processing Units (GPUs) and Many Integrated Core (MIC) chips, are based on, at best, marginal increases in clock speed, coupled with vast increases in concurrency, particularly at the fine grain. Multi-physics codes face particular challenges in achieving fine-grained concurrency, as different physics and dynamics components have different computational profiles, and universal solutions are hard to come by. We propose here one approach for multi-physics codes. These codes are typically structured as components interacting via software frameworks. The component structure of a typical Earth system model consists of a hierarchical and recursive tree of components, each representing a different climate process or dynamical system. This recursive structure generally encompasses a modest level of concurrency at the highest level (e.g., atmosphere and ocean on different processor sets) with serial organization underneath. We propose to extend concurrency much further by running more and more lower- and higher-level components in parallel with each other. Each component can further be parallelized on the fine grain, potentially offering a major increase in the scalability of Earth system models. We present here first results from this approach, called coarse-grained component concurrency, or CCC. Within the Geophysical Fluid Dynamics Laboratory (GFDL) Flexible Modeling System (FMS), the atmospheric radiative transfer component has been configured to run in parallel with a composite component consisting of every other atmospheric component, including the atmospheric dynamics and all other atmospheric physics components. We will explore the algorithmic challenges involved in such an approach, and present results from such simulations. 
Plans to achieve even greater levels of coarse-grained concurrency by extending this approach within other components, such as the ocean, will be discussed.

  5. Physics-based simulations of the impacts forest management practices have on hydrologic response

    Treesearch

    Adrianne Carr; Keith Loague

    2012-01-01

    The impacts of logging on near-surface hydrologic response at the catchment and watershed scales were examined quantitatively using numerical simulation. The simulations were conducted with the Integrated Hydrology Model (InHM) for the North Fork of Caspar Creek Experimental Watershed, located near Fort Bragg, California. InHM is a comprehensive physics-based...

  6. Sport Participation Motivation in Young Adolescent Girls: The Role of Friendship Quality and Self-Concept

    ERIC Educational Resources Information Center

    McDonough, Meghan H.; Crocker, Peter R. E.

    2005-01-01

    This study examined the factor structure of the Sport Friendship Quality Scale (SFQS; Weiss & Smith, 1999) and compared two models in which (a) self-worth mediated the relationship between physical self/friendship quality and sport commitment and (b) friendship quality and physical self-perceptions directly predicted self-worth and sport…

  7. Predicting viscous-range velocity gradient dynamics in large-eddy simulations of turbulence

    NASA Astrophysics Data System (ADS)

    Johnson, Perry; Meneveau, Charles

    2017-11-01

    The details of small-scale turbulence are not directly accessible in large-eddy simulations (LES), posing a modeling challenge because many important micro-physical processes depend strongly on the dynamics of turbulence in the viscous range. Here, we introduce a method for coupling existing stochastic models for the Lagrangian evolution of the velocity gradient tensor with LES to simulate unresolved dynamics. The proposed approach is implemented in LES of turbulent channel flow and detailed comparisons with DNS are carried out. An application to modeling the fate of deformable, small (sub-Kolmogorov) droplets at negligible Stokes number and low volume fraction with one-way coupling is carried out. These results illustrate the ability of the proposed model to predict the influence of small scale turbulence on droplet micro-physics in the context of LES. This research was made possible by a graduate Fellowship from the National Science Foundation and by a Grant from The Gulf of Mexico Research Initiative.

  8. Simulations of Tornadoes, Tropical Cyclones, MJOs, and QBOs, using GFDL's multi-scale global climate modeling system

    NASA Astrophysics Data System (ADS)

    Lin, Shian-Jiann; Harris, Lucas; Chen, Jan-Huey; Zhao, Ming

    2014-05-01

A multi-scale High-Resolution Atmosphere Model (HiRAM) is being developed at NOAA/Geophysical Fluid Dynamics Laboratory. The model's dynamical framework is the non-hydrostatic extension of the vertically Lagrangian finite-volume dynamical core (Lin 2004, Monthly Wea. Rev.) constructed on a stretchable (via Schmidt transformation) cubed-sphere grid. Physical parametrizations originally designed for IPCC-type climate predictions are being modified and made more "scale-aware", in an effort to make the model suitable for multi-scale weather-climate applications, with horizontal resolution ranging from 1 km (near the target high-resolution region) to as coarse as 400 km (near the antipodal point). One of the main goals of this development is to enable the simulation of high-impact weather phenomena (such as tornadoes, thunderstorms, and category-5 hurricanes) within an IPCC-class climate modeling system, previously thought impossible. We will present preliminary results covering a very wide spectrum of temporal-spatial scales, ranging from the simulation of tornado genesis (hours), Madden-Julian Oscillations (intra-seasonal), and tropical cyclones (seasonal), to Quasi-Biennial Oscillations (intra-decadal), using the same global multi-scale modeling system.

  9. Coarse-graining to the meso and continuum scales with molecular-dynamics-like models

    NASA Astrophysics Data System (ADS)

    Plimpton, Steve

    Many engineering-scale problems that industry or the national labs try to address with particle-based simulations occur at length and time scales well beyond the most optimistic hopes of traditional coarse-graining methods for molecular dynamics (MD), which typically start at the atomic scale and build upward. However classical MD can be viewed as an engine for simulating particles at literally any length or time scale, depending on the models used for individual particles and their interactions. To illustrate I'll highlight several coarse-grained (CG) materials models, some of which are likely familiar to molecular-scale modelers, but others probably not. These include models for water droplet freezing on surfaces, dissipative particle dynamics (DPD) models of explosives where particles have internal state, CG models of nano or colloidal particles in solution, models for aspherical particles, Peridynamics models for fracture, and models of granular materials at the scale of industrial processing. All of these can be implemented as MD-style models for either soft or hard materials; in fact they are all part of our LAMMPS MD package, added either by our group or contributed by collaborators. Unlike most all-atom MD simulations, CG simulations at these scales often involve highly non-uniform particle densities. So I'll also discuss a load-balancing method we've implemented for these kinds of models, which can improve parallel efficiencies. From the physics point-of-view, these models may be viewed as non-traditional or ad hoc. But because they are MD-style simulations, there's an opportunity for physicists to add statistical mechanics rigor to individual models. Or, in keeping with a theme of this session, to devise methods that more accurately bridge models from one scale to the next.

  10. Mathematical and Numerical Techniques in Energy and Environmental Modeling

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Ewing, R. E.

    Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor or pharmaceutical design to large-scale applications such as global weather models to astrophysics. In particular, simulation of environmental effects of air pollution is extensive. Here we address the need for using similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We will discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms

  11. Economies of scale: The physics basis

    NASA Astrophysics Data System (ADS)

    Bejan, A.; Almerbati, A.; Lorente, S.

    2017-01-01

Why is size so important? Why are "economies of scale" a universal feature of all flow systems, animate, inanimate, and human-made? The empirical evidence is clear: bigger systems are more efficient carriers (per unit) than smaller ones. This natural tendency is observed across the board, from animal design to technology, logistics, and economics. In this paper, we rely on physics (thermodynamics) to determine the relation between efficiency and size. The objective here is to predict a universal natural phenomenon, not to model a particular type of device: to demonstrate, on the basis of physics, that the efficiencies of diverse power plants should increase with size. The analysis is performed in two ways. First is the tradeoff between the "external" irreversibilities due to the temperature differences that exist above and below the temperature range occupied by the circuit executed by the working fluid. Second is the allocation of the fluid-flow irreversibility between the hot and cold portions of the fluid-flow circuit. The implications of this report for economics and design science (scaling up, scaling down) and the necessity of multi-scale design with hierarchy are discussed.

  12. Testing the standard model by precision measurement of the weak charges of quarks.

    PubMed

    Young, R D; Carlini, R D; Thomas, A W; Roche, J

    2007-09-21

    In a global analysis of the latest parity-violating electron scattering measurements on nuclear targets, we demonstrate a significant improvement in the experimental knowledge of the weak neutral-current lepton-quark interactions at low energy. The precision of this new result, combined with earlier atomic parity-violation measurements, places tight constraints on the size of possible contributions from physics beyond the standard model. Consequently, this result improves the lower-bound on the scale of relevant new physics to approximately 1 TeV.

  13. Full-color digitized holography for large-scale holographic 3D imaging of physical and nonphysical objects.

    PubMed

    Matsushima, Kyoji; Sonobe, Noriaki

    2018-01-01

    Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.
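Per wavelength, four-step phase-shifting recovers the complex object field from four interferograms. The sketch below demonstrates the textbook ¼[(I₀ − Iπ) + i(Iπ/₂ − I₃π/₂)] combination on a synthetic field with a unit-amplitude reference; it illustrates the capture principle only, not the authors' full synthetic-aperture pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
obj = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))  # object field
ref = 1.0                                                             # unit-amplitude reference

def frame(phase):
    """Interference pattern with the reference beam shifted by `phase`."""
    return np.abs(obj + ref * np.exp(1j * phase)) ** 2

i0, i90, i180, i270 = (frame(p) for p in (0.0, np.pi / 2, np.pi, 1.5 * np.pi))
recovered = 0.25 * ((i0 - i180) + 1j * (i90 - i270))  # equals the object field
```

The four phase shifts cancel the zero-order and twin-image terms exactly, leaving the complex object field; repeating the capture at three wavelengths gives the full-color field incorporated into the CGH.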

  14. REVIEW ARTICLE: How will physics be involved in silicon microelectronics

    NASA Astrophysics Data System (ADS)

    Kamarinos, Georges; Felix, Pierre

    1996-03-01

By the year 2000 electronics will probably be the basis of the largest industry in the world. Silicon microelectronics will continue to keep a dominant place, covering 99% of the `semiconductor market'. The aim of this review article is to indicate for the next decade the domains in which research work in `physics' is needed for a technological advance towards increasing speed, complexity and density of silicon ultra large scale integration (ULSI) integrated circuits (ICs). By `physics' we mean here not only condensed matter physics but also basic physical chemistry and thermodynamics. The review begins with a brief and general introduction in which we elucidate the current state of the art and the trends in silicon microelectronics. Afterwards we examine the involvement of physics in silicon microelectronics in two main sections. The first section concerns the processes of fabrication of ICs: lithography, oxidation, diffusion, chemical and physical vapour deposition, rapid thermal processing, etching, interconnections, ultra-clean processing and microcontamination. The second section concerns the electrical operation of ULSI devices. It defines the integration scales and points out the importance of the intermediate scale of integration, which is the scale of the next generation of ICs. The emergence of cryomicroelectronics is also reviewed, and an extended paragraph is dedicated to the problem of reliability and ageing of devices and ICs: hot carrier degradation, interdevice coupling and noise are considered. Our analysis shows that the next generation of silicon ICs needs mainly: (i) `scientific' fabrication and (ii) microscopic modelling and simulation of the electrical characteristics of the scaled-down devices. To attain the above objectives a return to the `first principles' of physics as well as a recourse to nonlinear and non-equilibrium thermodynamics are mandatory.
In the references we list numerous review papers and references of specialized colloquia proceedings so that a more detailed survey of the subject is possible for the reader.

  15. The role of physical habitat and sampling effort on estimates of benthic macroinvertebrate taxonomic richness at basin and site scales.

    PubMed

    Silva, Déborah R O; Ligeiro, Raphael; Hughes, Robert M; Callisto, Marcos

    2016-06-01

    Taxonomic richness is one of the most important measures of biological diversity in ecological studies, including those with stream macroinvertebrates. However, it is impractical to measure the true richness of any site directly by sampling. Our objective was to evaluate the effect of sampling effort on estimates of macroinvertebrate family and Ephemeroptera, Plecoptera, and Trichoptera (EPT) genera richness at two scales: basin and stream site. In addition, we tried to determine which environmental factors at the site scale most influenced the amount of sampling effort needed. We sampled 39 sites in the Cerrado biome (neotropical savanna). In each site, we obtained 11 equidistant samples of the benthic assemblage and multiple physical habitat measurements. The observed basin-scale richness achieved a consistent estimation from Chao 1, Jack 1, and Jack 2 richness estimators. However, at the site scale, there was a constant increase in the observed number of taxa with increased number of samples. Models that best explained the slope of site-scale sampling curves (representing the necessity of greater sampling effort) included metrics that describe habitat heterogeneity, habitat structure, anthropogenic disturbance, and water quality, for both macroinvertebrate family and EPT genera richness. Our results demonstrate the importance of considering basin- and site-scale sampling effort in ecological surveys and that taxa accumulation curves and richness estimators are good tools for assessing sampling efficiency. The physical habitat explained a significant amount of the sampling effort needed. Therefore, future studies should explore the possible implications of physical habitat characteristics when developing sampling objectives, study designs, and calculating the needed sampling effort.
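The richness estimators named in this abstract (Chao 1 and the jackknife estimators) are standard closed-form corrections to observed richness. As a rough illustration (not the authors' code), here is a minimal sketch of the Chao 1 estimator computed from per-taxon abundance counts:

```python
def chao1(counts):
    """Chao 1 lower-bound estimate of true richness from abundance data.

    counts: per-taxon abundances pooled over the samples at a site.
    S_obs = number of observed taxa; F1 = singletons; F2 = doubletons.
    """
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    # bias-corrected form when no doubletons are present
    return s_obs + f1 * (f1 - 1) / 2.0

# toy community: 3 singletons, 1 doubleton, 4 common taxa
print(chao1([1, 1, 1, 2, 5, 8, 12, 20]))  # 8 + 3^2/(2*1) = 12.5
```

The first-order jackknife is analogous but uses incidence data: Jack 1 = S_obs + Q1·(m − 1)/m, where Q1 is the number of taxa found in exactly one of the m samples.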

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perdikaris, Paris, E-mail: parisp@mit.edu; Grinberg, Leopold, E-mail: leopoldgrinberg@us.ibm.com; Karniadakis, George Em, E-mail: george-karniadakis@brown.edu

The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.

  17. Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks

    NASA Astrophysics Data System (ADS)

    Karpatne, A.; Kumar, V.

    2017-12-01

Generative adversarial networks (GANs), which have been highly successful in a number of applications involving large volumes of labeled and unlabeled data such as computer vision, offer huge potential for modeling the dynamics of physical processes that have traditionally been studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g., GANs) for studying physical processes currently suffering from large knowledge gaps, e.g. ground-water flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology where the number of data samples is small (relative to Internet-scale applications such as image recognition, where machine learning methods have found great success), and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science that we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.
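The "theory-guided data science" idea of constraining learning with physical knowledge is often realized by adding a physics-consistency penalty to the training loss. The function below is a hypothetical, framework-free sketch of that one ingredient (it is not the authors' GAN formulation; the names and the weight `lam` are illustrative assumptions):

```python
def physics_guided_loss(y_pred, y_obs, violations, lam=0.5):
    """Data-fit term plus a penalty for physically inconsistent output.

    violations[i] > 0 measures how strongly prediction i breaks a known
    physical constraint (e.g. a monotonicity or conservation relation);
    physically consistent predictions contribute zero penalty.
    """
    n = len(y_pred)
    mse = sum((p - o) ** 2 for p, o in zip(y_pred, y_obs)) / n
    phys = sum(max(0.0, v) for v in violations) / n
    return mse + lam * phys

# predictions that fit the data but violate a constraint are penalized
print(physics_guided_loss([1.0, 2.0], [1.0, 2.0], [0.0, 0.4]))  # 0.1
```

The same penalty can be attached to a generator's objective so that samples violating known physics are discouraged even where labeled data are scarce.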

  18. Structural Stability Monitoring of a Physical Model Test on an Underground Cavern Group during Deep Excavations Using FBG Sensors.

    PubMed

    Li, Yong; Wang, Hanpeng; Zhu, Weishen; Li, Shucai; Liu, Jian

    2015-08-31

Fiber Bragg Grating (FBG) sensors are widely recognized as structural stability monitoring devices for all kinds of geo-materials, either embedded into or bonded onto structural entities. A physical model in geotechnical engineering, which can accurately simulate construction processes and their effects on the stability of underground caverns provided the similarity principles are satisfied, is an actual physical entity. Using a physical model test of the underground caverns at the Shuangjiangkou Hydropower Station, we determined how FBG sensors can measure the small displacements of key monitoring points in a large-scale physical model during excavation. When building the test specimen, embedding the FBG sensors through a small opening sealed with quick-setting silicone proved the most successful approach. The experimental results show that the FBG sensor has higher measuring accuracy than conventional sensors such as electrical resistance strain gages and extensometers, and they agree well with the numerical simulation results. In conclusion, FBG sensors can effectively measure the small displacements of monitoring points throughout a physical model test. The results reveal the deformation and failure characteristics of the surrounding rock mass and provide guidance for in situ engineering construction.

  19. Structural Stability Monitoring of a Physical Model Test on an Underground Cavern Group during Deep Excavations Using FBG Sensors

    PubMed Central

    Li, Yong; Wang, Hanpeng; Zhu, Weishen; Li, Shucai; Liu, Jian

    2015-01-01

Fiber Bragg Grating (FBG) sensors are widely recognized as structural stability monitoring devices for all kinds of geo-materials, either embedded into or bonded onto structural entities. A physical model in geotechnical engineering, which can accurately simulate construction processes and their effects on the stability of underground caverns provided the similarity principles are satisfied, is an actual physical entity. Using a physical model test of the underground caverns at the Shuangjiangkou Hydropower Station, we determined how FBG sensors can measure the small displacements of key monitoring points in a large-scale physical model during excavation. When building the test specimen, embedding the FBG sensors through a small opening sealed with quick-setting silicone proved the most successful approach. The experimental results show that the FBG sensor has higher measuring accuracy than conventional sensors such as electrical resistance strain gages and extensometers, and they agree well with the numerical simulation results. In conclusion, FBG sensors can effectively measure the small displacements of monitoring points throughout a physical model test. The results reveal the deformation and failure characteristics of the surrounding rock mass and provide guidance for in situ engineering construction. PMID:26404287

  20. Inflatable Dark Matter

    DOE PAGES

    Davoudiasl, Hooman; Hooper, Dan; McDermott, Samuel D.

    2016-01-22

We describe a general scenario, dubbed “Inflatable Dark Matter”, in which the density of dark matter particles can be reduced through a short period of late-time inflation in the early universe. The overproduction of dark matter that is predicted within many otherwise well-motivated models of new physics can be elegantly remedied within this context, without the need to tune underlying parameters or to appeal to anthropic considerations. Thermal relics that would otherwise be disfavored can easily be accommodated within this class of scenarios, including dark matter candidates that are very heavy or very light. Furthermore, the non-thermal abundance of GUT or Planck scale axions can be brought to acceptable levels, without invoking anthropic tuning of initial conditions. Additionally, a period of late-time inflation could have occurred over a wide range of scales from ~ MeV to the weak scale or above, and could have been triggered by physics within a hidden sector, with small but not necessarily negligible couplings to the Standard Model.

  1. Higgsploding universe

    NASA Astrophysics Data System (ADS)

    Khoze, Valentin V.; Spannowsky, Michael

    2017-10-01

Higgsplosion is a dynamical mechanism that introduces an exponential suppression of quantum fluctuations beyond the Higgsplosion energy scale E* and further guarantees perturbative unitarity in multi-Higgs production processes. By calculating the Higgsplosion scale for spin 0, 1/2, 1 and 2 particles at leading order, we argue that Higgsplosion regulates all n-point functions, thereby embedding the standard model of particle physics and its extensions into an asymptotically safe theory. There are no Landau poles and the Higgs self-coupling stays positive. Asymptotic safety is of particular interest for theories of particle physics that include quantum gravity. We argue that in a Higgsploding theory one cannot probe shorter and shorter length scales by increasing the energy of the collision beyond the Higgsplosion energy, and that there is a minimal length set by r* ~ 1/E* that can be probed. We further show that Higgsplosion is consistent and not in conflict with models of inflation and the existence of axions. There is also a possibility of testing Higgsplosion experimentally at future high energy experiments.

  2. Incorporating physically-based microstructures in materials modeling: Bridging phase field and crystal plasticity frameworks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Hojun; Abdeljawad, Fadi; Owen, Steven J.

Here, the mechanical properties of materials systems are highly influenced by various features at the microstructural level. The ability to capture these heterogeneities and incorporate them into continuum-scale frameworks of the deformation behavior is considered a key step in the development of complex non-local models of failure. In this study, we present a modeling framework that incorporates physically-based realizations of polycrystalline aggregates from a phase field (PF) model into a crystal plasticity finite element (CP-FE) framework. Simulated annealing via the PF model yields ensembles of materials microstructures with various grain sizes and shapes. With the aid of a novel FE meshing technique, FE discretizations of these microstructures are generated, where several key features, such as conformity to interfaces, and triple junction angles, are preserved. The discretizations are then used in the CP-FE framework to simulate the mechanical response of polycrystalline α-iron. It is shown that the conformal discretization across interfaces reduces artificial stress localization commonly observed in non-conformal FE discretizations. The work presented herein is a first step towards incorporating physically-based microstructures in lieu of the overly simplified representations that are commonly used. In broader terms, the proposed framework provides future avenues to explore bridging models of materials processes, e.g. additive manufacturing and microstructure evolution of multi-phase multi-component systems, into continuum-scale frameworks of the mechanical properties.

  3. The clustering of baryonic matter. I: a halo-model approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fedeli, C., E-mail: cosimo.fedeli@oabo.inaf.it

    2014-04-01

In this paper I generalize the halo model for the clustering of dark matter in order to produce the power spectra of the two main baryonic matter components in the Universe: stars and hot gas. As a natural extension, this can be also used to describe the clustering of all mass. According to the design of the halo model, the large-scale power spectra of the various matter components are physically connected with the distribution of each component within bound structures and thus, ultimately, with the complete set of physical processes that drive the formation of galaxies and galaxy clusters. Besides being practical for cosmological and parametric studies, the semi-analytic model presented here has also other advantages. Most importantly, it allows one to understand on physical ground what is the relative contribution of each matter component to the total clustering of mass as a function of scale, and thus it opens an interesting new window to infer the distribution of baryons through high precision cosmic shear measurements. This is particularly relevant for future wide-field photometric surveys such as Euclid. In this work the concept of the model and its uncertainties are illustrated in detail, while in a companion paper we use a set of numerical hydrodynamic simulations to show a practical application and to investigate where the model itself needs to be improved.

  4. Incorporating physically-based microstructures in materials modeling: Bridging phase field and crystal plasticity frameworks

    DOE PAGES

    Lim, Hojun; Abdeljawad, Fadi; Owen, Steven J.; ...

    2016-04-25

Here, the mechanical properties of materials systems are highly influenced by various features at the microstructural level. The ability to capture these heterogeneities and incorporate them into continuum-scale frameworks of the deformation behavior is considered a key step in the development of complex non-local models of failure. In this study, we present a modeling framework that incorporates physically-based realizations of polycrystalline aggregates from a phase field (PF) model into a crystal plasticity finite element (CP-FE) framework. Simulated annealing via the PF model yields ensembles of materials microstructures with various grain sizes and shapes. With the aid of a novel FE meshing technique, FE discretizations of these microstructures are generated, where several key features, such as conformity to interfaces, and triple junction angles, are preserved. The discretizations are then used in the CP-FE framework to simulate the mechanical response of polycrystalline α-iron. It is shown that the conformal discretization across interfaces reduces artificial stress localization commonly observed in non-conformal FE discretizations. The work presented herein is a first step towards incorporating physically-based microstructures in lieu of the overly simplified representations that are commonly used. In broader terms, the proposed framework provides future avenues to explore bridging models of materials processes, e.g. additive manufacturing and microstructure evolution of multi-phase multi-component systems, into continuum-scale frameworks of the mechanical properties.

  5. Variable classification in the LSST era: exploring a model for quasi-periodic light curves

    NASA Astrophysics Data System (ADS)

    Zinn, J. C.; Kochanek, C. S.; Kozłowski, S.; Udalski, A.; Szymański, M. K.; Soszyński, I.; Wyrzykowski, Ł.; Ulaczyk, K.; Poleski, R.; Pietrukowicz, P.; Skowron, J.; Mróz, P.; Pawlak, M.

    2017-06-01

The Large Synoptic Survey Telescope (LSST) is expected to yield ~10^7 light curves over the course of its mission, which will require a concerted effort in automated classification. Stochastic processes provide one means of quantitatively describing variability with the potential advantage over simple light-curve statistics that the parameters may be physically meaningful. Here, we survey a large sample of periodic, quasi-periodic and stochastic Optical Gravitational Lensing Experiment-III variables using the damped random walk (DRW; CARMA(1,0)) and quasi-periodic oscillation (QPO; CARMA(2,1)) stochastic process models. The QPO model is described by an amplitude, a period and a coherence time-scale, while the DRW has only an amplitude and a time-scale. We find that the periodic and quasi-periodic stellar variables are generally better described by a QPO than a DRW, while quasars are better described by the DRW model. There are ambiguities in interpreting the QPO coherence time due to non-sinusoidal light-curve shapes, signal-to-noise ratio, error mischaracterizations and cadence. Higher order implementations of the QPO model that better capture light-curve shapes are necessary for the coherence time to have its implied physical meaning. Independent of physical meaning, the extra parameter of the QPO model successfully distinguishes most of the classes of periodic and quasi-periodic variables we consider.
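The DRW (CARMA(1,0)) model referenced here is mathematically an Ornstein-Uhlenbeck process, which can be sampled exactly on a regular time grid. As an illustrative sketch (not the survey's fitting pipeline; parameter values are arbitrary), simulating a DRW light curve with amplitude `sigma` and damping time-scale `tau`:

```python
import math
import random

def simulate_drw(n, dt, tau, sigma, seed=0):
    """Exact-discretization sample path of a damped random walk
    (Ornstein-Uhlenbeck process) with relaxation time tau and
    asymptotic standard deviation sigma."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)                  # autoregressive coefficient
    s = sigma * math.sqrt(1.0 - a * a)       # innovation std for step dt
    x = [rng.gauss(0.0, sigma)]              # x0 from stationary distribution
    for _ in range(n - 1):
        x.append(a * x[-1] + rng.gauss(0.0, s))
    return x

lc = simulate_drw(n=1000, dt=1.0, tau=50.0, sigma=0.2)
```

The QPO (CARMA(2,1)) model adds an oscillatory component, which is why it needs the extra period parameter the abstract describes.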

  6. Wavefield complexity and stealth structures: Resolution constraints by wave physics

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Leng, K.

    2017-12-01

    Imaging the Earth's interior relies on understanding how waveforms encode information from heterogeneous multi-scale structure. This relation is given by elastodynamics, but forward modeling in the context of tomography primarily serves to deliver synthetic waveforms and gradients for the inversion procedure. While this is entirely appropriate, it depreciates a wealth of complementary inference that can be obtained from the complexity of the wavefield. Here, we are concerned with the imprint of realistic multi-scale Earth structure on the wavefield, and the question on the inherent physical resolution limit of structures encoded in seismograms. We identify parameter and scattering regimes where structures remain invisible as a function of seismic wavelength, structural multi-scale geometry, scattering strength, and propagation path. Ultimately, this will aid in interpreting tomographic images by acknowledging the scope of "forgotten" structures, and shall offer guidance for optimising the selection of seismic data for tomography. To do so, we use our novel 3D modeling method AxiSEM3D which tackles global wave propagation in visco-elastic, anisotropic 3D structures with undulating boundaries at unprecedented resolution and efficiency by exploiting the inherent azimuthal smoothness of wavefields via a coupled Fourier expansion-spectral-element approach. The method links computational cost to wavefield complexity and thereby lends itself well to exploring the relation between waveforms and structures. We will show various examples of multi-scale heterogeneities which appear or disappear in the waveform, and argue that the nature of the structural power spectrum plays a central role in this. We introduce the concept of wavefield learning to examine the true wavefield complexity for a complexity-dependent modeling framework and discriminate which scattering structures can be retrieved by surface measurements. 
This leads to the question of physical invisibility and the tomographic resolution limit, and offers insight as to why tomographic images still show stark differences for smaller-scale heterogeneities despite progress in modeling and data resolution. Finally, we give an outlook on how we expand this modeling framework towards an inversion procedure guided by wavefield complexity.

  7. Strategy for long-term 3D cloud-resolving simulations over the ARM SGP site and preliminary results

    NASA Astrophysics Data System (ADS)

    Lin, W.; Liu, Y.; Song, H.; Endo, S.

    2011-12-01

Parametric representations of cloud/precipitation processes must continue to be adopted in climate simulations with increasingly high spatial resolution or with emerging adaptive mesh frameworks, and it is becoming ever more critical that such parameterizations be scale-aware. Continuous cloud measurements at DOE's ARM sites have provided a strong observational basis for novel cloud parameterization research at various scales. Despite significant progress in our observational ability, there are important cloud-scale physical and dynamical quantities that are either not currently observable or insufficiently sampled. To complement the long-term ARM measurements, we have explored an optimal strategy to carry out long-term 3-D cloud-resolving simulations over the ARM SGP site using the Weather Research and Forecasting (WRF) model with multi-domain nesting. The factors considered to have important influences on the simulated cloud fields include domain size, spatial resolution, model top, forcing data set, model physics and the growth of model errors. Hydrometeor advection, which may play a significant role in hydrological processes within the observational domain but is often lacking, and the limitations due to the constraint of domain-wide uniform forcing in conventional cloud-system-resolving model simulations, are at least partly accounted for in our approach. Conventional and probabilistic verification approaches are first employed for selected cases to optimize the model's capability of faithfully reproducing the observed means and statistical distributions of cloud-scale quantities. This then forms the basis of our setup for long-term cloud-resolving simulations over the ARM SGP site. The model results will facilitate parameterization research, as well as understanding and dissecting parameterization deficiencies in climate models.

  8. Simplifying and upscaling water resources systems models that combine natural and engineered components

    NASA Astrophysics Data System (ADS)

    McIntyre, N.; Keir, G.

    2014-12-01

    Water supply systems typically encompass components of both natural systems (e.g. catchment runoff, aquifer interception) and engineered systems (e.g. process equipment, water storages and transfers). Many physical processes of varying spatial and temporal scales are contained within these hybrid systems models. The need to aggregate and simplify system components has been recognised for reasons of parsimony and comprehensibility; and the use of probabilistic methods for modelling water-related risks also prompts the need to seek computationally efficient up-scaled conceptualisations. How to manage the up-scaling errors in such hybrid systems models has not been well-explored, compared to research in the hydrological process domain. Particular challenges include the non-linearity introduced by decision thresholds and non-linear relations between water use, water quality, and discharge strategies. Using a case study of a mining region, we explore the nature of up-scaling errors in water use, water quality and discharge, and we illustrate an approach to identification of a scale-adjusted model including an error model. Ways forward for efficient modelling of such complex, hybrid systems are discussed, including interactions with human, energy and carbon systems models.

  9. Multiscale Cloud System Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell W.

    2009-01-01

    The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.

  10. A Stratified Acoustic Model Accounting for Phase Shifts for Underwater Acoustic Networks

    PubMed Central

    Wang, Ping; Zhang, Lin; Li, Victor O. K.

    2013-01-01

    Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated. PMID:23669708

  11. A stratified acoustic model accounting for phase shifts for underwater acoustic networks.

    PubMed

    Wang, Ping; Zhang, Lin; Li, Victor O K

    2013-05-13

    Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated.
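For context on the empirical-versus-physics-based trade-off these two records describe, the simplest empirical channel models combine geometric spreading with Thorp's frequency-dependent seawater absorption. The sketch below shows that baseline (it is not the proposed SAM model; the spreading factor `k` is an assumed parameter):

```python
import math

def thorp_absorption_db_per_km(f_khz):
    """Thorp's empirical seawater absorption coefficient (dB/km, f in kHz),
    a standard ingredient of simple underwater channel models."""
    f2 = f_khz * f_khz
    return (0.11 * f2 / (1.0 + f2)
            + 44.0 * f2 / (4100.0 + f2)
            + 2.75e-4 * f2
            + 0.003)

def transmission_loss_db(r_m, f_khz, k=1.5):
    """Spreading loss (k = 1 cylindrical, 2 spherical, 1.5 'practical')
    plus absorption over range r_m in meters."""
    spreading = k * 10.0 * math.log10(r_m)
    absorption = thorp_absorption_db_per_km(f_khz) * r_m / 1000.0
    return spreading + absorption

print(round(transmission_loss_db(1000.0, 10.0), 1))  # ~46.2 dB at 1 km, 10 kHz
```

Models of this family are cheap but, as the abstract notes, ignore boundary reflection losses, multi-path, and ray bending, which is exactly the gap SAM aims to fill at modest computational cost.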

  12. DOES A SCALING LAW EXIST BETWEEN SOLAR ENERGETIC PARTICLE EVENTS AND SOLAR FLARES?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahler, S. W., E-mail: AFRL.RVB.PA@kirtland.af.mil

    2013-05-20

Among many other natural processes, the size distributions of solar X-ray flares and solar energetic particle (SEP) events are scale-invariant power laws. The measured distributions of SEP events prove to be distinctly flatter, i.e., have smaller power-law slopes, than those of the flares. This has led to speculation that the two distributions are related through a scaling law, first suggested by Hudson, which implies a direct nonlinear physical connection between the processes producing the flares and those producing the SEP events. We present four arguments against this interpretation. First, a true scaling must relate SEP events to all flare X-ray events, and not to a small subset of the X-ray event population. We also show that the assumed scaling law is not mathematically valid and that although the flare X-ray and SEP event data are correlated, they are highly scattered and not necessarily related through an assumed scaling of the two phenomena. An interpretation of SEP events within the context of a recent model of fractal-diffusive self-organized criticality by Aschwanden provides a physical basis for why the SEP distributions should be flatter than those of solar flares. These arguments provide evidence against a close physical connection of flares with SEP production.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hicks, E. P.; Rosner, R., E-mail: eph2001@columbia.edu

In this paper, we provide support for the Rayleigh-Taylor-(RT)-based subgrid model used in full-star simulations of deflagrations in Type Ia supernovae explosions. We use the results of a parameter study of two-dimensional direct numerical simulations of an RT unstable model flame to distinguish between the two main types of subgrid models (RT or turbulence dominated) in the flamelet regime. First, we give scalings for the turbulent flame speed, the Reynolds number, the viscous scale, and the size of the burning region as the non-dimensional gravity (G) is varied. The flame speed is well predicted by an RT-based flame speed model. Next, the above scalings are used to calculate the Karlovitz number (Ka) and to discuss appropriate combustion regimes. No transition to thin reaction zones is seen at Ka = 1, although such a transition is expected by turbulence-dominated subgrid models. Finally, we confirm a basic physical premise of the RT subgrid model, namely, that the flame is fractal, and thus self-similar. By modeling the turbulent flame speed, we demonstrate that it is affected more by large-scale RT stretching than by small-scale turbulent wrinkling. In this way, the RT instability controls the flame directly from the large scales. Overall, these results support the RT subgrid model.

  14. Power Law Patch Scaling and Lack of Characteristic Wavelength Suggest "Scale-Free" Processes Drive Pattern Formation in the Florida Everglades

    NASA Astrophysics Data System (ADS)

    Kaplan, D. A.; Casey, S. T.; Cohen, M. J.; Acharya, S.; Jawitz, J. W.

    2016-12-01

    A century of hydrologic modification has altered the physical and biological drivers of landscape processes in the Everglades (Florida, USA). Restoring the ridge-slough patterned landscape, a dominant feature of the historical system, is a priority, but requires an understanding of pattern genesis and degradation mechanisms. Physical experiments to evaluate alternative pattern formation mechanisms are limited by the long time scales of peat accumulation and loss, necessitating model-based comparisons, where support for a particular mechanism is based on model replication of extant patterning and trajectories of degradation. However, multiple mechanisms yield patch elongation in the direction of historical flow (a central feature of ridge-slough patterning), limiting the utility of that characteristic for discriminating among alternatives. Using data from vegetation maps, we investigated the statistical features of ridge-slough spatial patterning (ridge density, patch perimeter, elongation, patch-size distributions, and spatial periodicity) to establish more rigorous criteria for evaluating model performance and to inform controls on pattern variation across the contemporary system. Two independent analyses (2-D periodograms and patch-size distributions) provide strong evidence against regular patterning, with the landscape exhibiting neither a characteristic wavelength nor a characteristic patch size, both of which are expected under conditions that produce regular patterns. Rather, landscape properties suggest robust scale-free patterning, indicating genesis from the coupled effects of local facilitation and a global negative feedback operating uniformly at the landscape scale. This finding challenges the widespread invocation of scale-dependent negative feedbacks for explaining ridge-slough pattern origins. These results help discriminate among genesis mechanisms and provide an improved statistical description of the landscape that can be used to compare model outputs and to assess the success of future restoration projects.
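
    A radially averaged 2-D periodogram of the kind used above can be sketched as follows; a regularly patterned field shows a spectral peak at a nonzero wavenumber, whereas a scale-free landscape does not (the striped test field here is purely synthetic, not Everglades data):

```python
import numpy as np

def radial_power_spectrum(field):
    """Radially averaged 2-D periodogram: mean spectral power in
    annular wavenumber bins around the zero-frequency center."""
    f = field - field.mean()
    power = np.abs(np.fft.fftshift(np.fft.fft2(f))) ** 2
    ny, nx = field.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.maximum(np.bincount(r.ravel()), 1)
    return sums / counts  # index = radial wavenumber bin

# A striped (regular) test pattern peaks at its stripe wavenumber:
# 8 full periods across a 64-pixel domain -> peak at bin 8.
n = 64
xx = np.arange(n)
stripes = np.tile(np.sin(2 * np.pi * 8 * xx / n), (n, 1))
spec = radial_power_spectrum(stripes)
peak_k = int(np.argmax(spec[1:])) + 1  # skip the zero-frequency bin
```

    Applied to a binary ridge/slough map, a flat or monotonically decaying spectrum (no interior peak) is the signature of the missing characteristic wavelength reported above.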

  15. Using simulated 3D surface fuelbeds and terrestrial laser scan data to develop inputs to fire behavior models

    Treesearch

    Eric Rowell; E. Louise Loudermilk; Carl Seielstad; Joseph O'Brien

    2016-01-01

    Understanding fine-scale variability in understory fuels is increasingly important as physics-based fire behavior models drive needs for higher-resolution data. Describing fuelbeds in three dimensions is critical in determining vertical and horizontal distributions of fuel elements and their mass, especially in frequently burned pine ecosystems where fine-scale...

  16. An order statistics approach to the halo model for galaxies

    NASA Astrophysics Data System (ADS)

    Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.

    2017-04-01

    We use the halo model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the 'central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the lognormal distribution around this mean and the tight relation between the central and satellite mass scales. However, in stark contrast to observations of galaxy clustering, this model predicts no luminosity dependence of large-scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically underpredicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the halo model for galaxies with more physically motivated galaxy formation models.
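
    The core of such an order-statistics model can be sketched in a few lines: draw each group's luminosities i.i.d. from a universal p(L), call the brightest member the central, and read off the magnitude gap. The power-law form of p(L) and its parameters below are illustrative assumptions, not the paper's fitted luminosity function:

```python
import numpy as np

def group_luminosities(n_gal, rng, alpha=3.5, lmin=1.0):
    """Draw n_gal luminosities i.i.d. from a universal power-law p(L),
    sorted brightest first: the first entry is the 'central', the rest
    are 'satellites' (order-statistics hypothesis)."""
    u = rng.random(n_gal)
    lum = lmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))
    lum.sort()
    return lum[::-1]

rng = np.random.default_rng(1)

# Mean central luminosity rises monotonically with group richness,
# mimicking the central-luminosity/halo-mass relation of the model.
mean_central = {}
for n_gal in (2, 5, 20):
    centrals = [group_luminosities(n_gal, rng)[0] for _ in range(2000)]
    mean_central[n_gal] = float(np.mean(centrals))

# Magnitude gap between brightest and second-brightest group member,
# a by-product of the same draw.
lums = group_luminosities(5, rng)
gap = 2.5 * np.log10(lums[0] / lums[1])
```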

  17. A generic framework for individual-based modelling and physical-biological interaction

    PubMed Central

    2018-01-01

    The increased availability of high-resolution ocean data globally has enabled more detailed analyses of physical-biological interactions and their consequences for the ecosystem. We present IBMlib, a versatile, portable and computationally efficient framework for conducting Lagrangian simulations in the marine environment. The purpose of the framework is to handle complex individual-level biological models of organisms, combined with realistic 3D oceanographic models of physics and biogeochemistry that describe the environment of the organisms, without assumptions about spatial or temporal scales. The open-source framework features a minimal, robust interface to facilitate the coupling between individual-level biological models and oceanographic models, and we provide application examples including forward/backward simulations, habitat connectivity calculations, assessment of ocean conditions, comparison of physical circulation models, model ensemble runs and, recently, posterior Eulerian simulations using the IBMlib framework. We present the design ideas behind the longevity of the code, our implementation experiences, and code performance benchmarking. The framework may contribute substantially to progress in representing, understanding, predicting and eventually managing marine ecosystems. PMID:29351280
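
    The Lagrangian core of such a framework reduces to integrating particle positions through a velocity field supplied by an ocean model. The toy below advects particles with forward Euler through a steady, analytic solid-body rotation; it is only a sketch of the idea under these simplifying assumptions, not IBMlib's actual interface:

```python
import numpy as np

def advect(positions, velocity, dt, n_steps):
    """Forward-Euler Lagrangian advection of particles through a steady
    2-D velocity field given as a callable v(x) -> dx/dt, where x has
    shape (n_particles, 2). Real frameworks interpolate gridded 3-D
    model velocities in space and time instead."""
    x = np.array(positions, dtype=float)
    for _ in range(n_steps):
        x = x + dt * velocity(x)
    return x

def rotation(x):
    """Solid-body rotation about the origin at unit angular speed."""
    return np.stack([-x[:, 1], x[:, 0]], axis=1)

start = np.array([[1.0, 0.0]])
# After ~pi time units the particle has swept roughly half a turn,
# ending near (-1, 0) (forward Euler drifts slightly outward).
end = advect(start, rotation, dt=0.001, n_steps=int(np.pi / 0.001))
```

    Backward simulations, as mentioned in the abstract, amount to integrating the same trajectories with the sign of the velocity (or of dt) reversed.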

  18. Modeling Time-Dependent Behavior of Concrete Affected by Alkali Silica Reaction in Variable Environmental Conditions.

    PubMed

    Alnaggar, Mohammed; Di Luzio, Giovanni; Cusatis, Gianluca

    2017-04-28

    Alkali Silica Reaction (ASR) is known to be a serious problem for concrete worldwide, especially in high humidity and high temperature regions. ASR is a slow process that develops over years to decades and it is influenced by changes in environmental and loading conditions of the structure. The problem becomes even more complicated if one recognizes that other phenomena like creep and shrinkage are coupled with ASR. This results in synergistic mechanisms that cannot be easily understood without a comprehensive computational model. In this paper, coupling between creep, shrinkage and ASR is modeled within the Lattice Discrete Particle Model (LDPM) framework. In order to achieve this, a multi-physics formulation is used to compute the evolution of temperature, humidity, cement hydration, and ASR in both space and time, which is then used within physics-based formulations of cracking, creep and shrinkage. The overall model is calibrated and validated on the basis of experimental data available in the literature. Results show that even during free expansions (zero macroscopic stress), a significant degree of coupling exists because ASR induced expansions are relaxed by meso-scale creep driven by self-equilibrated stresses at the meso-scale. This explains and highlights the importance of considering ASR and other time-dependent aging and deterioration phenomena at an appropriate length scale in coupled modeling approaches.
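
    The kind of coupling described can be caricatured with two scalar ODEs: an imposed ASR strain that grows toward an asymptote, and a restraint stress that new expansion builds up while creep simultaneously relaxes it. This is an illustrative toy with invented time constants and moduli, not the LDPM formulation of the paper:

```python
import numpy as np

def asr_with_creep(t_end, dt, tau_asr=100.0, eps_inf=0.01,
                   E=30e3, tau_creep=200.0):
    """Toy 1-D coupling: ASR strain relaxes toward eps_inf with time
    constant tau_asr; the elastic stress it induces against a rigid
    restraint (modulus E, MPa) decays by creep with time constant
    tau_creep. Returns an array of (eps_asr, sigma) per step."""
    n = int(t_end / dt)
    eps_asr, sigma = 0.0, 0.0
    hist = []
    for _ in range(n):
        d_eps = (eps_inf - eps_asr) / tau_asr * dt
        eps_asr += d_eps
        # stress increment from new ASR strain, minus creep relaxation
        sigma += E * d_eps - sigma / tau_creep * dt
        hist.append((eps_asr, sigma))
    return np.array(hist)

hist = asr_with_creep(t_end=2000.0, dt=0.5)
peak_stress = hist[:, 1].max()
final_stress = hist[-1, 1]
# At late times creep has relaxed most of the ASR-induced stress,
# echoing the coupling effect reported even for free expansion.
```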

  19. Uranium plume persistence impacted by hydrologic and geochemical heterogeneity in the groundwater and river water interaction zone of Hanford site

    NASA Astrophysics Data System (ADS)

    Chen, X.; Zachara, J. M.; Vermeul, V. R.; Freshley, M.; Hammond, G. E.

    2015-12-01

    The behavior of a persistent uranium plume in an extended groundwater-river water (GW-SW) interaction zone at the DOE Hanford site is dominantly controlled by river stage fluctuations in the adjacent Columbia River. The plume behavior is further complicated by substantial heterogeneity in physical and geochemical properties of the host aquifer sediments. Multi-scale field and laboratory experiments and reactive transport modeling were integrated to understand the complex plume behavior influenced by highly variable hydrologic and geochemical conditions in time and space. In this presentation we (1) describe multiple data sets from field-scale uranium adsorption and desorption experiments performed at our experimental well-field, (2) develop a reactive transport model that incorporates hydrologic and geochemical heterogeneities characterized from multi-scale and multi-type datasets and a surface complexation reaction network based on laboratory studies, and (3) compare the modeling and observation results to provide insights on how to refine the conceptual model and reduce prediction uncertainties. The experimental results revealed significant spatial variability in uranium adsorption/desorption behavior, while modeling demonstrated that ambient hydrologic and geochemical conditions and heterogeneities in sediment physical and chemical properties both contributed to complex plume behavior and its persistence. Our analysis provides important insights into the characterization, understanding, modeling, and remediation of groundwater contaminant plumes influenced by surface water and groundwater interactions.
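
    For intuition about why sorption heterogeneity retards and prolongs such a plume, a linear-isotherm retardation factor can be computed per sediment facies. The study itself uses a surface-complexation reaction network rather than constant Kd values, and the facies names and Kd values below are invented for illustration:

```python
def retardation(kd, bulk_density=1.6, porosity=0.3):
    """Linear-isotherm retardation factor R = 1 + rho_b * Kd / theta,
    with bulk density rho_b in kg/L, Kd in L/kg, porosity theta.
    A solute front moves at v/R relative to the groundwater."""
    return 1.0 + bulk_density * kd / porosity

# Hypothetical spatially variable sorption: one Kd (L/kg) per facies.
kd_by_facies = {"gravel": 0.1, "sand": 0.5, "mud": 5.0}
r_by_facies = {k: retardation(v) for k, v in kd_by_facies.items()}

# Relative front velocity v/R: strongly sorbing mud lenses hold
# uranium back, sustaining the plume as stage fluctuations rework it.
v_rel = {k: 1.0 / r for k, r in r_by_facies.items()}
```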

  20. Modeling Time-Dependent Behavior of Concrete Affected by Alkali Silica Reaction in Variable Environmental Conditions

    PubMed Central

    Alnaggar, Mohammed; Di Luzio, Giovanni; Cusatis, Gianluca

    2017-01-01

    Alkali Silica Reaction (ASR) is known to be a serious problem for concrete worldwide, especially in high humidity and high temperature regions. ASR is a slow process that develops over years to decades and it is influenced by changes in environmental and loading conditions of the structure. The problem becomes even more complicated if one recognizes that other phenomena like creep and shrinkage are coupled with ASR. This results in synergistic mechanisms that cannot be easily understood without a comprehensive computational model. In this paper, coupling between creep, shrinkage and ASR is modeled within the Lattice Discrete Particle Model (LDPM) framework. In order to achieve this, a multi-physics formulation is used to compute the evolution of temperature, humidity, cement hydration, and ASR in both space and time, which is then used within physics-based formulations of cracking, creep and shrinkage. The overall model is calibrated and validated on the basis of experimental data available in the literature. Results show that even during free expansions (zero macroscopic stress), a significant degree of coupling exists because ASR induced expansions are relaxed by meso-scale creep driven by self-equilibrated stresses at the meso-scale. This explains and highlights the importance of considering ASR and other time-dependent aging and deterioration phenomena at an appropriate length scale in coupled modeling approaches. PMID:28772829
