Assignment of boundary conditions in embedded ground water flow models
Leake, S.A.
1998-01-01
Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
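A minimal sketch of the head-interpolation step described above, assuming a uniform block-centered grid; the cell-center coordinates and head values are hypothetical and only illustrate bilinear interpolation of the four surrounding cell-center heads onto a perimeter point of the embedded model.

```python
import numpy as np

def bilinear_head(xp, yp, xc, yc, head):
    """Bilinearly interpolate head at (xp, yp) from a block-centered grid.

    xc, yc : 1-D arrays of cell-center coordinates (ascending)
    head   : 2-D array of heads, head[j, i] at (xc[i], yc[j])
    """
    i = np.clip(np.searchsorted(xc, xp) - 1, 0, len(xc) - 2)
    j = np.clip(np.searchsorted(yc, yp) - 1, 0, len(yc) - 2)
    tx = (xp - xc[i]) / (xc[i + 1] - xc[i])
    ty = (yp - yc[j]) / (yc[j + 1] - yc[j])
    return ((1 - tx) * (1 - ty) * head[j, i]
            + tx * (1 - ty) * head[j, i + 1]
            + (1 - tx) * ty * head[j + 1, i]
            + tx * ty * head[j + 1, i + 1])

# Example: heads from a coarse regional model evaluated at a local-model perimeter point
xc = np.array([500., 1500., 2500.])        # cell-center x (m), hypothetical
yc = np.array([500., 1500., 2500.])        # cell-center y (m), hypothetical
head = np.array([[10.0, 9.5, 9.0],
                 [9.8, 9.3, 8.8],
                 [9.6, 9.1, 8.6]])         # heads (m), hypothetical
print(bilinear_head(1200., 800., xc, yc, head))
```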
LES Modeling of Lateral Dispersion in the Ocean on Scales of 10 m to 10 km
2015-10-20
ocean on scales of 0.1-10 km that can be implemented in larger-scale ocean models. These parameterizations will incorporate the effects of local...
Farrell, Patrick; Sun, Jacob; Gao, Meg; Sun, Hong; Pattara, Ben; Zeiser, Arno; D'Amore, Tony
2012-08-17
A simple approach to the development of an aerobic scaled-down fermentation model is presented to obtain more consistent process performance during the scale-up of recombinant protein manufacture. Using a constant volumetric oxygen mass transfer coefficient (k(L)a) for the criterion of a scale-down process, the scaled-down model can be "tuned" to match the k(L)a of any larger-scale target by varying the impeller rotational speed. This approach is demonstrated for a protein vaccine candidate expressed in recombinant Escherichia coli, where process performance is shown to be consistent among 2-L, 20-L, and 200-L scales. An empirical correlation for k(L)a has also been employed to extrapolate to larger manufacturing scales. Copyright © 2012 Elsevier Ltd. All rights reserved.
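A minimal sketch of the "tuning" step described above. The abstract states only that an empirical k(L)a correlation was used, so the van't Riet-type power law, the gassed-power estimate, and all constants below are illustrative assumptions; the sketch solves for the impeller speed N at which the scaled-down vessel matches a larger-scale target k(L)a.

```python
import numpy as np

# Illustrative constants only (not from the paper); a van't Riet-type form is assumed:
# k_La = a * (P/V)^alpha * v_s^beta
a, alpha, beta = 0.02, 0.6, 0.4

def kla(p_per_v, v_s):
    """Volumetric oxygen transfer coefficient (1/s) from the assumed power-law correlation."""
    return a * p_per_v**alpha * v_s**beta

def power_per_volume(N, D, V, Np=5.0, rho=1000.0):
    """Impeller power draw P = Np * rho * N^3 * D^5 (W), per liquid volume V (m^3)."""
    return Np * rho * N**3 * D**5 / V

def speed_matching_target(kla_target, D, V, v_s):
    """Scan impeller speed N (1/s) and return the value whose k_La best matches the target."""
    N = np.linspace(0.5, 30.0, 10000)
    return N[np.argmin(np.abs(kla(power_per_volume(N, D, V), v_s) - kla_target))]

# Hypothetical 2-L scale-down vessel (5 cm impeller) matching a 200-L target k_La of 0.05 1/s
print(speed_matching_target(kla_target=0.05, D=0.05, V=0.002, v_s=0.005))
```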
COMPARING AND LINKING PLUMES ACROSS MODELING APPROACHES
River plumes carry many pollutants, including microorganisms, into lakes and the coastal ocean. The physical scales of many stream and river plumes often lie between the scales for mixing zone plume models, such as the EPA Visual Plumes model, and larger-sized grid scales for re...
The physics behind the larger scale organization of DNA in eukaryotes.
Emanuel, Marc; Radja, Nima Hamedani; Henriksson, Andreas; Schiessel, Helmut
2009-07-01
In this paper, we discuss in detail the organization of chromatin during a cell cycle at several levels. We show that current experimental data on large-scale chromatin organization have not yet reached the level of precision to allow for detailed modeling. We speculate in some detail about the possible physics underlying the larger scale chromatin organization.
A Study on Multi-Scale Background Error Covariances in 3D-Var Data Assimilation
NASA Astrophysics Data System (ADS)
Zhang, Xubin; Tan, Zhe-Min
2017-04-01
The construction of background error covariances is a key component of three-dimensional variational data assimilation. In numerical weather prediction, background errors occur at different scales and interact with one another. However, the influence of these errors and their interactions cannot be represented in the background error covariance statistics when estimated by the leading methods. It is therefore necessary to construct background error covariances influenced by multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. The information of errors whose scales are larger and smaller than the given ones is then introduced, respectively, using different nesting techniques, to estimate the corresponding covariances. The comparison of the three background error covariance statistics influenced by information of errors at different scales reveals that the background error variances increase, particularly at large scales and higher levels, when the information of larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances decrease at medium scales at the higher levels, while they show slight improvement at lower levels in the nested domain, especially at medium and small scales, when the information of smaller-scale errors is introduced by nesting a higher-resolution model. In addition, the introduction of information of larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) with the information of larger- (smaller-) scale errors included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained in the above work are used in a data assimilation and model forecast system, and analysis-forecast cycles for a period of 1 month are conducted. Through the comparison of both analyses and forecasts from this system, it is found that the trends for variation in analysis increments with information of different scale errors introduced are consistent with those for variation in variances and correlations of background errors. In particular, introduction of smaller-scale errors leads to larger amplitude of analysis increments for winds at medium scales at the height of both the high- and low-level jets. Analysis increments for both temperature and humidity are also greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts for winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity increase significantly at large scales at lower levels to moisten the southern analyses. This humidification helps correct the dry bias there and eventually improves the forecast skill for humidity. Moreover, inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales due to the amplification (diminution) of intensity and area in precipitation forecasts, but it tends to overestimate (underestimate) light (heavy) precipitation.
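A minimal sketch of the NMC (lagged-forecast) estimate of background error covariances that the study starts from, under the common assumption that differences between forecasts of different lead times valid at the same time serve as proxies for background errors; the field shapes, lead times, and any rescaling factor are hypothetical, not taken from the paper.

```python
import numpy as np

def nmc_covariance(fcst_long, fcst_short):
    """Estimate background error covariance with the NMC method.

    fcst_long, fcst_short : arrays of shape (n_samples, n_grid) holding, e.g.,
    48 h and 24 h forecasts valid at the same times; their differences are used
    as surrogates for background errors.
    """
    diff = fcst_long - fcst_short                    # (n_samples, n_grid)
    diff = diff - diff.mean(axis=0, keepdims=True)   # remove the sample mean
    # Sample covariance of the forecast-difference ensemble (often rescaled by an
    # empirical factor before use in 3D-Var; the rescaling is omitted here).
    return diff.T @ diff / (diff.shape[0] - 1)

# Hypothetical example with random fields standing in for forecast differences
rng = np.random.default_rng(0)
f48 = rng.normal(size=(200, 50))
f24 = rng.normal(size=(200, 50))
B = nmc_covariance(f48, f24)
print(B.shape)   # (50, 50)
```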
An increase in aerosol burden due to the land-sea warming contrast
NASA Astrophysics Data System (ADS)
Hassan, T.; Allen, R.; Randles, C. A.
2017-12-01
Climate models simulate an increase in most aerosol species in response to warming, particularly over the tropics and Northern Hemisphere midlatitudes. This increase in aerosol burden is related to a decrease in wet removal, primarily due to reduced large-scale precipitation. Here, we show that the increase in aerosol burden, and the decrease in large-scale precipitation, is related to a robust climate change phenomenon—the land/sea warming contrast. Idealized simulations with two state-of-the-art climate models, the National Center for Atmospheric Research Community Atmosphere Model version 5 (NCAR CAM5) and the Geophysical Fluid Dynamics Laboratory Atmospheric Model 3 (GFDL AM3), show that muting the land-sea warming contrast negates the increase in aerosol burden under warming. This is related to smaller decreases in near-surface relative humidity over land and, in turn, smaller decreases in large-scale precipitation over land—especially in the NH midlatitudes. Furthermore, additional idealized simulations with an enhanced land/sea warming contrast lead to the opposite result—larger decreases in relative humidity over land, larger decreases in large-scale precipitation, and larger increases in aerosol burden. Our results, which relate the increase in aerosol burden to the robust climate projection of enhanced land warming, add confidence that a warmer world will be associated with a larger aerosol burden.
NASA Technical Reports Server (NTRS)
Mudrick, S.
1985-01-01
The validity of quasi-geostrophic (QG) dynamics was tested against primitive equation (PE) dynamics for modeling the effect of cyclone waves on the larger-scale flow. The formation of frontal cyclones and the dynamics of occluded frontogenesis were studied. Surface friction runs with the PE model and the wavelength of maximum instability are described. A fine-resolution PE simulation of a polar low is also described.
NASA Astrophysics Data System (ADS)
Schreiner, Anne; Saur, Joachim
2017-02-01
In hydrodynamic turbulence, it is well established that the length of the dissipation scale depends on the energy cascade rate, i.e., the larger the energy input rate per unit mass, the more the turbulent fluctuations need to be driven to increasingly smaller scales to dissipate the larger energy flux. Observations of magnetic spectral energy densities indicate that this intuitive picture is not valid in solar wind turbulence. Dissipation seems to set in at the same length scale for different solar wind conditions independently of the energy flux. To investigate this difference in more detail, we present an analytic dissipation model for solar wind turbulence at electron scales, which we compare with observed spectral densities. Our model combines the energy transport from large to small scales and collisionless damping, which removes energy from the magnetic fluctuations in the kinetic regime. We assume wave-particle interactions of kinetic Alfvén waves (KAWs) to be the main damping process. Wave frequencies and damping rates of KAWs are obtained from the hot plasma dispersion relation. Our model assumes a critically balanced turbulence, where larger energy cascade rates excite larger parallel wavenumbers for a certain perpendicular wavenumber. If the dissipation is additionally wave driven such that the dissipation rate is proportional to the parallel wavenumber—as with KAWs—then an increase of the energy cascade rate is counterbalanced by an increased dissipation rate for the same perpendicular wavenumber, leading to a dissipation length independent of the energy cascade rate.
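A back-of-the-envelope version of the counterbalancing argument, written under standard critical-balance assumptions (a sketch, not necessarily the exact formulation used in the paper):

```latex
% Critical balance: nonlinear and linear (wave) times match at each perpendicular scale
\tau_{\mathrm{nl}}^{-1} \sim k_{\perp}\,\delta v_{k_{\perp}}, \qquad
\omega \sim k_{\parallel} v_{A}, \qquad
\varepsilon \sim \delta v_{k_{\perp}}^{2}/\tau_{\mathrm{nl}}
\;\Longrightarrow\;
\delta v_{k_{\perp}} \sim \left(\varepsilon/k_{\perp}\right)^{1/3}, \qquad
k_{\parallel} \sim \frac{k_{\perp}\,\delta v_{k_{\perp}}}{v_{A}}
             \sim \frac{k_{\perp}^{2/3}\,\varepsilon^{1/3}}{v_{A}} .
% If the damping is wave driven, gamma proportional to k_parallel, then at fixed k_perp
\gamma\,\tau_{\mathrm{nl}} \;\propto\;
\frac{k_{\perp}^{2/3}\,\varepsilon^{1/3}/v_{A}}{k_{\perp}^{2/3}\,\varepsilon^{1/3}}
\;=\; \frac{1}{v_{A}}
```

The damping-to-cascade ratio is independent of the cascade rate, so a larger energy flux is offset by proportionally stronger damping at the same perpendicular wavenumber, consistent with a dissipation length that does not move with the energy flux.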
Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.
Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin
2010-05-12
Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias in models that are run at larger scales and neglect subgrid-scale variability. In the present study, we investigate whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 x 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.
Multi-scale Slip Inversion Based on Simultaneous Spatial and Temporal Domain Wavelet Transform
NASA Astrophysics Data System (ADS)
Liu, W.; Yao, H.; Yang, H. Y.
2017-12-01
Finite fault inversion is a widely used method to study earthquake rupture processes. Previous studies have proposed different methods to implement finite fault inversion, including time-domain, frequency-domain, and wavelet-domain methods. Many studies have found that different frequency bands show different characteristics of the seismic rupture (e.g., Wang and Mori, 2011; Yao et al., 2011, 2013; Uchide et al., 2013; Yin et al., 2017). Generally, lower frequency waveforms correspond to larger-scale rupture characteristics, while higher frequency data are representative of smaller-scale ones. Therefore, multi-scale analysis can help us understand the earthquake rupture process thoroughly from larger to smaller scales. By using the wavelet transform, wavelet-domain methods can analyze both the time and frequency information of signals at different scales. Traditional wavelet-domain methods (e.g., Ji et al., 2002) implement finite fault inversion with both lower and higher frequency signals together to recover larger-scale and smaller-scale characteristics of the rupture process simultaneously. Here we propose an alternative strategy with a two-step procedure, i.e., first constraining the larger-scale characteristics with lower frequency signals, and then resolving the smaller-scale ones with higher frequency signals. We have designed synthetic tests to verify our strategy and compare it with the traditional one. We have also applied our strategy to study the 2015 Gorkha, Nepal earthquake using tele-seismic waveforms. Both the traditional method and our two-step strategy only analyze the data at different temporal scales (i.e., different frequency bands), whereas the spatial distribution of model parameters also shows multi-scale characteristics. A more sophisticated strategy is to transform the slip model into different spatial scales, and then analyze the smooth slip distribution (larger scales) with lower frequency data first and the more detailed slip distribution (smaller scales) with higher frequency data subsequently. We are now implementing the slip inversion using both spatial- and temporal-domain wavelets. This multi-scale analysis can help us better understand frequency-dependent rupture characteristics of large earthquakes.
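A highly simplified sketch of the two-step idea for a linearized finite fault problem d = G m, assuming the Green's function matrix G and data d are already available; the band-splitting with Butterworth filters, the damping, and all dimensions are hypothetical stand-ins for the wavelet-domain machinery described above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, fmin, fmax, axis=-1):
    """Zero-phase Butterworth band-pass along the time axis."""
    b, a = butter(4, [fmin / (0.5 * fs), fmax / (0.5 * fs)], btype="band")
    return filtfilt(b, a, x, axis=axis)

def damped_lsq(G, d, damping):
    """Damped least squares: minimize ||G m - d||^2 + damping * ||m||^2."""
    n = G.shape[1]
    A = np.vstack([G, np.sqrt(damping) * np.eye(n)])
    return np.linalg.lstsq(A, np.concatenate([d, np.zeros(n)]), rcond=None)[0]

def two_step_inversion(G, d, fs, f_split=0.05, f_max=0.5, damping=1.0):
    """Two-step band-limited slip inversion for a linearized problem d = G m.

    G : (n_records, n_times, n_slip_params) Green's functions
    d : (n_records, n_times) observed waveforms
    Step 1 fits the low-frequency band to constrain the smooth (large-scale) slip;
    step 2 fits the high-frequency residual to add a small-scale correction.
    """
    nrec, nt, npar = G.shape
    G_lo = bandpass(G, fs, 1e-3, f_split, axis=1).reshape(nrec * nt, npar)
    d_lo = bandpass(d, fs, 1e-3, f_split, axis=1).ravel()
    m_smooth = damped_lsq(G_lo, d_lo, damping)

    resid = d - (G.reshape(nrec * nt, npar) @ m_smooth).reshape(nrec, nt)
    G_hi = bandpass(G, fs, f_split, f_max, axis=1).reshape(nrec * nt, npar)
    d_hi = bandpass(resid, fs, f_split, f_max, axis=1).ravel()
    m_detail = damped_lsq(G_hi, d_hi, damping)
    return m_smooth, m_smooth + m_detail
```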
Taraphdar, S.; Mukhopadhyay, P.; Leung, L. Ruby; ...
2016-12-05
The prediction skill of tropical synoptic scale transients (SSTR) such as monsoon lows and depressions during the boreal summer of 2007–2009 is assessed using high-resolution ECMWF and NCEP TIGGE forecast data. By analyzing 246 forecasts for lead times up to 10 days, it is found that the models have good skill in forecasting the planetary scale means but the skill for SSTR remains poor, with the latter showing no skill beyond 2 days for the global tropics and the Indian region. Consistent forecast skills among precipitation, velocity potential, and vorticity provide evidence that convection is the primary process responsible for precipitation. The poor skill for SSTR can be attributed to the larger random error in the models as they fail to predict the locations and timings of SSTR. Strong correlation between the random error and synoptic precipitation suggests that the former starts to develop from regions of convection. As the NCEP model has larger biases of synoptic scale precipitation, it has a tendency to generate more random error that ultimately reduces the prediction skill of synoptic systems in that model. Finally, the larger biases in NCEP may be attributed to the model moist physics and/or coarser horizontal resolution compared to ECMWF.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taraphdar, S.; Mukhopadhyay, P.; Leung, L. Ruby
The prediction skill of tropical synoptic scale transients (SSTR) such as monsoon lows and depressions during the boreal summer of 2007–2009 is assessed using high-resolution ECMWF and NCEP TIGGE forecast data. By analyzing 246 forecasts for lead times up to 10 days, it is found that the models have good skill in forecasting the planetary scale means but the skill for SSTR remains poor, with the latter showing no skill beyond 2 days for the global tropics and the Indian region. Consistent forecast skills among precipitation, velocity potential, and vorticity provide evidence that convection is the primary process responsible for precipitation. The poor skill for SSTR can be attributed to the larger random error in the models as they fail to predict the locations and timings of SSTR. Strong correlation between the random error and synoptic precipitation suggests that the former starts to develop from regions of convection. As the NCEP model has larger biases of synoptic scale precipitation, it has a tendency to generate more random error that ultimately reduces the prediction skill of synoptic systems in that model. Finally, the larger biases in NCEP may be attributed to the model moist physics and/or coarser horizontal resolution compared to ECMWF.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreiner, Anne; Saur, Joachim, E-mail: schreiner@geo.uni-koeln.de
In hydrodynamic turbulence, it is well established that the length of the dissipation scale depends on the energy cascade rate, i.e., the larger the energy input rate per unit mass, the more the turbulent fluctuations need to be driven to increasingly smaller scales to dissipate the larger energy flux. Observations of magnetic spectral energy densities indicate that this intuitive picture is not valid in solar wind turbulence. Dissipation seems to set in at the same length scale for different solar wind conditions independently of the energy flux. To investigate this difference in more detail, we present an analytic dissipation model for solar wind turbulence at electron scales, which we compare with observed spectral densities. Our model combines the energy transport from large to small scales and collisionless damping, which removes energy from the magnetic fluctuations in the kinetic regime. We assume wave–particle interactions of kinetic Alfvén waves (KAWs) to be the main damping process. Wave frequencies and damping rates of KAWs are obtained from the hot plasma dispersion relation. Our model assumes a critically balanced turbulence, where larger energy cascade rates excite larger parallel wavenumbers for a certain perpendicular wavenumber. If the dissipation is additionally wave driven such that the dissipation rate is proportional to the parallel wavenumber—as with KAWs—then an increase of the energy cascade rate is counterbalanced by an increased dissipation rate for the same perpendicular wavenumber, leading to a dissipation length independent of the energy cascade rate.
Organelle Size Scaling of the Budding Yeast Vacuole by Relative Growth and Inheritance.
Chan, Yee-Hung M; Reyes, Lorena; Sohail, Saba M; Tran, Nancy K; Marshall, Wallace F
2016-05-09
It has long been noted that larger animals have larger organs compared to smaller animals of the same species, a phenomenon termed scaling [1]. Julian Huxley proposed an appealingly simple model of "relative growth"-in which an organ and the whole body grow with their own intrinsic rates [2]-that was invoked to explain scaling in organs from fiddler crab claws to human brains. Because organ size is regulated by complex, unpredictable pathways [3], it remains unclear whether scaling requires feedback mechanisms to regulate organ growth in response to organ or body size. The molecular pathways governing organelle biogenesis are simpler than organogenesis, and therefore organelle size scaling in the cell provides a more tractable case for testing Huxley's model. We ask the question: is it possible for organelle size scaling to arise if organelle growth is independent of organelle or cell size? Using the yeast vacuole as a model, we tested whether mutants defective in vacuole inheritance, vac8Δ and vac17Δ, tune vacuole biogenesis in response to perturbations in vacuole size. In vac8Δ/vac17Δ, vacuole scaling increases with the replicative age of the cell. Furthermore, vac8Δ/vac17Δ cells continued generating vacuole at roughly constant rates even when they had significantly larger vacuoles compared to wild-type. With support from computational modeling, these results suggest there is no feedback between vacuole biogenesis rates and vacuole or cell size. Rather, size scaling is determined by the relative growth rates of the vacuole and the cell, thus representing a cellular version of Huxley's model. Copyright © 2016 Elsevier Ltd. All rights reserved.
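Huxley's relative-growth picture invoked here can be written as a pair of independent exponential growth laws whose ratio of rates sets the scaling exponent (a standard derivation, not specific to this paper):

```latex
\frac{1}{V_{\mathrm{org}}}\frac{dV_{\mathrm{org}}}{dt} = k_{\mathrm{org}}, \qquad
\frac{1}{V_{\mathrm{cell}}}\frac{dV_{\mathrm{cell}}}{dt} = k_{\mathrm{cell}}
\;\;\Longrightarrow\;\;
V_{\mathrm{org}} \propto V_{\mathrm{cell}}^{\,k_{\mathrm{org}}/k_{\mathrm{cell}}}
```

A constant organelle-to-cell size scaling therefore emerges without any feedback between the two growth rates, which is the sense in which the vacuole data above represent a cellular version of Huxley's model.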
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.
Review of forest landscape models: types, methods, development and applications
Weimin Xi; Robert N. Coulson; Andrew G. Birt; Zong-Bo Shang; John D. Waldron; Charles W. Lafon; David M. Cairns; Maria D. Tchakerian; Kier D. Klepzig
2009-01-01
Forest landscape models simulate forest change through time using spatially referenced data across a broad spatial scale (i.e. landscape scale) generally larger than a single forest stand. Spatial interactions between forest stands are a key component of such models. These models can incorporate other spatio-temporal processes such as...
Transfer of movement sequences: bigger is better.
Dean, Noah J; Kovacs, Attila J; Shea, Charles H
2008-02-01
Experiment 1 was conducted to determine if proportional transfer from "small to large" scale movements is as effective as transferring from "large to small." We hypothesize that the learning of larger scale movements will require the participant to learn to manage the generation, storage, and dissipation of forces better than when practicing smaller scale movements. Thus, we predict an advantage for transfer of larger scale movements to smaller scale movements relative to transfer from smaller to larger scale movements. Experiment 2 was conducted to determine if adding a load to a smaller scale movement would enhance later transfer to a larger scale movement sequence. It was hypothesized that the added load would require the participants to consider the dynamics of the movement to a greater extent than without the load. The results replicated earlier findings of effective transfer from large to small movements, but consistent with our hypothesis, transfer was less effective from small to large (Experiment 1). However, when a load was added during acquisition, transfer from small to large was enhanced even though the load was removed during the transfer test. These results are consistent with the notion that the transfer asymmetry noted in Experiment 1 was due to factors related to movement dynamics that were enhanced during practice of the larger scale movement sequence, but not during the practice of the smaller scale movement sequence. The finding that the movement structure is unaffected by transfer direction but the movement dynamics are influenced by transfer direction is consistent with hierarchical models of sequence production.
Measuring the topology of large-scale structure in the universe
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III
1988-01-01
An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.
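For reference, the random-phase (Gaussian) benchmark against which the observed genus curves are compared has the standard analytic form, with ν the density threshold in units of standard deviations from the mean and the amplitude A set by the power spectrum and smoothing length:

```latex
g(\nu) \;=\; A\,\bigl(1-\nu^{2}\bigr)\,e^{-\nu^{2}/2}
```

This curve is symmetric about ν = 0 for a Gaussian (random-phase) field; departures from this shape are what quantify the meatball or bubble shifts described above.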
Measuring the topology of large-scale structure in the universe
NASA Astrophysics Data System (ADS)
Gott, J. Richard, III
1988-11-01
An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.
Factors affecting economies of scale in combined sewer systems.
Maurer, Max; Wolfram, Martin; Anja, Herlyn
2010-01-01
A generic model is introduced that represents the combined sewer infrastructure of a settlement quantitatively. A catchment area module first calculates the length and size distribution of the required sewer pipes on the basis of rain patterns, housing densities and area size. These results are fed into the sewer-cost module in order to estimate the combined sewer costs of the entire catchment area. A detailed analysis of the relevant input parameters for Swiss settlements is used to identify the influence of size on costs. The simulation results confirm that an economy of scale exists for combined sewer systems. This is the result of two main opposing cost factors: (i) increased construction costs for larger sewer systems due to larger pipes and increased rain runoff in larger settlements, and (ii) lower costs due to higher population and building densities in larger towns. In Switzerland, the more or less organically grown settlement structures and limited land availability emphasise the second factor to show an apparent economy of scale. This modelling approach proved to be a powerful tool for understanding the underlying factors affecting the cost structure for water infrastructures.
Avalanches and scaling collapse in the large-N Kuramoto model
NASA Astrophysics Data System (ADS)
Coleman, J. Patrick; Dahmen, Karin A.; Weaver, Richard L.
2018-04-01
We study avalanches in the Kuramoto model, defined as excursions of the order parameter due to ephemeral episodes of synchronization. We present scaling collapses of the avalanche sizes, durations, heights, and temporal profiles, extracting scaling exponents, exponent relations, and scaling functions that are shown to be consistent with the scaling behavior of the power spectrum, a quantity independent of our particular definition of an avalanche. A comprehensive scaling picture of the noise in the subcritical finite-N Kuramoto model is developed, linking this undriven system to a larger class of driven avalanching systems.
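The scaling collapses referred to above typically take the standard crackling-noise forms (written here generically; the specific exponent values and cutoff scalings are those extracted in the paper):

```latex
P(S) \sim S^{-\tau} f_{S}\!\bigl(S/S_{c}\bigr), \qquad
P(T) \sim T^{-\alpha} f_{T}\!\bigl(T/T_{c}\bigr), \qquad
\langle S \rangle(T) \sim T^{1/(\sigma\nu z)}, \qquad
\bigl\langle V(t\,|\,T)\bigr\rangle = T^{\,1/(\sigma\nu z)-1}\, g\!\bigl(t/T\bigr)
```

Plotting, for example, S^{τ} P(S) against S/S_c, or T^{1-1/(σνz)} ⟨V(t|T)⟩ against t/T, collapses data from different avalanche sizes and durations onto single scaling functions, and consistency of the exponents with the power spectrum provides the definition-independent check mentioned above.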
Knoblauch, Andreas; Palm, Günther
2002-09-01
We present further simulation results of the model of two reciprocally connected visual areas proposed in the first paper [Knoblauch and Palm (2002) Biol Cybern 87:151-167]. One area corresponds to the orientation-selective subsystem of the primary visual cortex, the other is modeled as an associative memory representing stimulus objects according to Hebbian learning. We examine the scene-segmentation capability of our model on larger time and space scales, and relate it to experimental findings. Scene segmentation is achieved by attention switching on a time-scale longer than the gamma range. We find that the time-scale can vary depending on habituation parameters in the range of tens to hundreds of milliseconds. The switching process can be related to findings concerning attention and biased competition, and we reproduce experimental poststimulus time histograms (PSTHs) of single neurons under different stimulus and attentional conditions. In a larger variant the model exhibits traveling waves of activity on both slow and fast time-scales, with properties similar to those found in experiments. An apparent weakness of our standard model is the tendency to produce anti-phase correlations for fast activity from the two areas. Increasing the inter-areal delays in our model produces alternations of in-phase and anti-phase oscillations. The experimentally observed in-phase correlations can most naturally be obtained by the involvement of both fast and slow inter-areal connections; e.g., by two axon populations corresponding to fast-conducting myelinated and slow-conducting unmyelinated axons.
MODFLOW-LGR: Practical application to a large regional dataset
NASA Astrophysics Data System (ADS)
Barnes, D.; Coulibaly, K. M.
2011-12-01
In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.
Multi-scale hydrometeorological observation and modelling for flash flood understanding
NASA Astrophysics Data System (ADS)
Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.
2014-09-01
This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2), where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2), where the river routing and flooding processes become important. These observations are part of the HyMeX (HYdrological cycle in the Mediterranean EXperiment) enhanced observation period (EOP), which will last 4 years (2012-2015). In terms of hydrological modelling, the objective is to set up regional-scale models, while addressing small and generally ungauged catchments, which represent the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set-up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes on various scales.
Multi-scale hydrometeorological observation and modelling for flash-flood understanding
NASA Astrophysics Data System (ADS)
Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.
2014-02-01
This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2) where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2) where the river routing and flooding processes become important. These observations are part of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) Enhanced Observation Period (EOP) and lasts four years (2012-2015). In terms of hydrological modelling the objective is to set up models at the regional scale, while addressing small and generally ungauged catchments, which is the scale of interest for flooding risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses, in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes at various scales.
NASA Astrophysics Data System (ADS)
Ray, Nadja; Rupp, Andreas; Prechtel, Alexander
2017-09-01
Upscaling transport in porous media including both biomass development and simultaneous structural changes in the solid matrix is extremely challenging. This is because both affect the medium's porosity as well as mass transport parameters and flow paths. We address this challenge by means of a multiscale model. At the pore scale, the local discontinuous Galerkin (LDG) method is used to solve differential equations describing particularly the bacteria's and the nutrient's development. Likewise, a sticky agent tightening together solid or bio cells is considered. This is combined with a cellular automaton method (CAM) capturing structural changes of the underlying computational domain stemming from biomass development and solid restructuring. Findings from standard homogenization theory are applied to determine the medium's characteristic time- and space-dependent properties. Investigating these results enhances our understanding of the strong interplay between a medium's functional properties and its geometric structure. Finally, integrating such properties as model parameters into models defined on a larger scale enables reflecting the impact of pore scale processes on the larger scale.
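The "findings from standard homogenization theory" used to extract effective, time- and space-dependent properties are, in the classical periodic setting, the cell-problem formulas, written here for a scalar diffusion coefficient as an illustration (the paper's actual operators and coupled system may differ):

```latex
-\nabla_{y}\!\cdot\!\bigl(D(y)\,(\nabla_{y}\chi_{j} + e_{j})\bigr) = 0 \;\text{ in the pore space } Y^{*}, \qquad
D^{\mathrm{eff}}_{ij} \;=\; \frac{1}{|Y|}\int_{Y^{*}} D(y)\,\bigl(\delta_{ij} + \partial_{y_i}\chi_{j}\bigr)\, dy
```

Here χ_j is periodic on the unit cell Y with no-flux conditions on the solid-fluid interface; as the cellular automaton updates the solid and biomass geometry, the pore space Y* and hence the effective tensor D^eff change in time, which is how the pore-scale processes feed into the larger-scale model parameters.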
Enrichment scale determines herbivore control of primary producers.
Gil, Michael A; Jiao, Jing; Osenberg, Craig W
2016-03-01
Anthropogenic nutrient enrichment stimulates primary production and threatens natural communities worldwide. Herbivores may counteract deleterious effects of enrichment by increasing their consumption of primary producers. However, field tests of herbivore control are often done by adding nutrients at small (e.g., sub-meter) scales, while enrichment in real systems often occurs at much larger scales (e.g., kilometers). Therefore, experimental results may be driven by processes that are not relevant at larger scales. Using a mathematical model, we show that herbivores can control primary producer biomass in experiments by concentrating their foraging in small enriched plots; however, at larger, realistic scales, the same mechanism may not lead to herbivore control of primary producers. Instead, other demographic mechanisms are required, but these are not examined in most field studies (and may not operate in many systems). This mismatch between experiments and natural processes suggests that many ecosystems may be less resilient to degradation via enrichment than previously believed.
Ong, Jason C.; Hedeker, Donald; Wyatt, James K.; Manber, Rachel
2016-01-01
Study Objectives: The purpose of this study was to introduce a novel statistical technique called the location-scale mixed model that can be used to analyze the mean level and intra-individual variability (IIV) using longitudinal sleep data. Methods: We applied the location-scale mixed model to examine changes from baseline in sleep efficiency on data collected from 54 participants with chronic insomnia who were randomized to an 8-week Mindfulness-Based Stress Reduction (MBSR; n = 19), an 8-week Mindfulness-Based Therapy for Insomnia (MBTI; n = 19), or an 8-week self-monitoring control (SM; n = 16). Sleep efficiency was derived from daily sleep diaries collected at baseline (days 1–7), early treatment (days 8–21), late treatment (days 22–63), and post week (days 64–70). The behavioral components (sleep restriction, stimulus control) were delivered during late treatment in MBTI. Results: For MBSR and MBTI, the pre-to-post change in mean levels of sleep efficiency were significantly larger than the change in mean levels for the SM control, but the change in IIV was not significantly different. During early and late treatment, MBSR showed a larger increase in mean levels of sleep efficiency and a larger decrease in IIV relative to the SM control. At late treatment, MBTI had a larger increase in the mean level of sleep efficiency compared to SM, but the IIV was not significantly different. Conclusions: The location-scale mixed model provides a two-dimensional analysis on the mean and IIV using longitudinal sleep diary data with the potential to reveal insights into treatment mechanisms and outcomes. Citation: Ong JC, Hedeker D, Wyatt JK, Manber R. Examining the variability of sleep patterns during treatment for chronic insomnia: application of a location-scale mixed model. J Clin Sleep Med 2016;12(6):797–804. PMID:26951414
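In its standard form (due to Hedeker and colleagues), the location-scale mixed model extends an ordinary mixed model by letting the within-subject variance depend on covariates and on a random subject "scale" effect; written generically for diary day j of subject i (the notation is generic, not taken from the paper):

```latex
y_{ij} = \mathbf{x}_{ij}'\boldsymbol{\beta} + \upsilon_{i} + \varepsilon_{ij}, \qquad
\upsilon_{i} \sim \mathcal{N}\!\bigl(0, \sigma_{\upsilon}^{2}\bigr), \qquad
\varepsilon_{ij} \sim \mathcal{N}\!\bigl(0, \sigma_{\varepsilon_{ij}}^{2}\bigr), \qquad
\log \sigma_{\varepsilon_{ij}}^{2} = \mathbf{w}_{ij}'\boldsymbol{\tau} + \omega_{i}
```

The fixed effects β describe treatment effects on the mean (location), τ describes treatment effects on intra-individual variability (scale), and ω_i allows subjects to differ in how variable they are beyond what the covariates explain, which is the two-dimensional analysis referred to above.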
NASA Astrophysics Data System (ADS)
Cassani, Mary Kay Kuhr
The objective of this study was to evaluate the effect of two pedagogical models used in general education science on non-majors' science teaching self-efficacy. Science teaching self-efficacy can be influenced by inquiry and cooperative learning, through cognitive mechanisms described by Bandura (1997). The Student Centered Activities for Large Enrollment Undergraduate Programs (SCALE-UP) model of inquiry and cooperative learning incorporates cooperative learning and inquiry-guided learning in large enrollment combined lecture-laboratory classes (Oliver-Hoyo & Beichner, 2004). SCALE-UP was adopted by a small but rapidly growing public university in the southeastern United States in three undergraduate, general education science courses for non-science majors in the Fall 2006 and Spring 2007 semesters. Students in these courses were compared with students in three other general education science courses for non-science majors taught with the standard teaching model at the host university. The standard model combines lecture and laboratory in the same course, with smaller enrollments and utilizes cooperative learning. Science teaching self-efficacy was measured using the Science Teaching Efficacy Belief Instrument - B (STEBI-B; Bleicher, 2004). A science teaching self-efficacy score was computed from the Personal Science Teaching Efficacy (PTSE) factor of the instrument. Using non-parametric statistics, no significant difference was found between teaching models, between genders, within models, among instructors, or among courses. The number of previous science courses was significantly correlated with PTSE score. Student responses to open-ended questions indicated that students felt the larger enrollment in the SCALE-UP room reduced individual teacher attention but that the large round SCALE-UP tables promoted group interaction. Students responded positively to cooperative and hands-on activities, and would encourage inclusion of more such activities in all of the courses. The large enrollment SCALE-UP model as implemented at the host university did not increase science teaching self-efficacy of non-science majors, as hypothesized. This was likely due to limited modification of standard cooperative activities according to the inquiry-guided SCALE-UP model. It was also found that larger SCALE-UP enrollments did not decrease science teaching self-efficacy when standard cooperative activities were used in the larger class.
Current challenges in quantifying preferential flow through the vadose zone
NASA Astrophysics Data System (ADS)
Koestel, John; Larsbo, Mats; Jarvis, Nick
2017-04-01
In this presentation, we give an overview of current challenges in quantifying preferential flow through the vadose zone. A review of the literature suggests that current generation models do not fully reflect the present state of process understanding and empirical knowledge of preferential flow. We believe that the development of improved models will be stimulated by the increasingly widespread application of novel imaging technologies as well as future advances in computational power and numerical techniques. One of the main challenges in this respect is to bridge the large gap between the scales at which preferential flow occurs (pore to Darcy scales) and the scale of interest for management (fields, catchments, regions). Studies at the pore scale are being supported by the development of 3-D non-invasive imaging and numerical simulation techniques. These studies are leading to a better understanding of how macropore network topology and initial/boundary conditions control key state variables like matric potential and thus the strength of preferential flow. Extrapolation of this knowledge to larger scales would require support from theoretical frameworks such as key concepts from percolation and network theory, since we lack measurement technologies to quantify macropore networks at these large scales. Linked hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data enable investigation of the larger-scale heterogeneities that can generate preferential flow patterns at pedon, hillslope and field scales. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help in parameterizing models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).
Thogmartin, W.E.; Knutson, M.G.
2007-01-01
Much of what is known about avian species-habitat relations has been derived from studies of birds at local scales. It is entirely unclear whether the relations observed at these scales translate to the larger landscape in a predictable linear fashion. We derived habitat models and mapped predicted abundances for three forest bird species of eastern North America using bird counts, environmental variables, and hierarchical models applied at three spatial scales. Our purpose was to understand habitat associations at multiple spatial scales and create predictive abundance maps for purposes of conservation planning at a landscape scale given the constraint that the variables used in this exercise were derived from local-level studies. Our models indicated a substantial influence of landscape context for all species, many of which were counter to reported associations at finer spatial extents. We found land cover composition provided the greatest contribution to the relative explained variance in counts for all three species; spatial structure was second in importance. No single spatial scale dominated any model, indicating that these species are responding to factors at multiple spatial scales. For purposes of conservation planning, areas of predicted high abundance should be investigated to evaluate the conservation potential of the landscape in their general vicinity. In addition, the models and spatial patterns of abundance among species suggest locations where conservation actions may benefit more than one species. © 2006 Springer Science+Business Media B.V.
Ong, Jason C; Hedeker, Donald; Wyatt, James K; Manber, Rachel
2016-06-15
The purpose of this study was to introduce a novel statistical technique called the location-scale mixed model that can be used to analyze the mean level and intra-individual variability (IIV) using longitudinal sleep data. We applied the location-scale mixed model to examine changes from baseline in sleep efficiency on data collected from 54 participants with chronic insomnia who were randomized to an 8-week Mindfulness-Based Stress Reduction (MBSR; n = 19), an 8-week Mindfulness-Based Therapy for Insomnia (MBTI; n = 19), or an 8-week self-monitoring control (SM; n = 16). Sleep efficiency was derived from daily sleep diaries collected at baseline (days 1-7), early treatment (days 8-21), late treatment (days 22-63), and post week (days 64-70). The behavioral components (sleep restriction, stimulus control) were delivered during late treatment in MBTI. For MBSR and MBTI, the pre-to-post change in mean levels of sleep efficiency were significantly larger than the change in mean levels for the SM control, but the change in IIV was not significantly different. During early and late treatment, MBSR showed a larger increase in mean levels of sleep efficiency and a larger decrease in IIV relative to the SM control. At late treatment, MBTI had a larger increase in the mean level of sleep efficiency compared to SM, but the IIV was not significantly different. The location-scale mixed model provides a two-dimensional analysis on the mean and IIV using longitudinal sleep diary data with the potential to reveal insights into treatment mechanisms and outcomes. © 2016 American Academy of Sleep Medicine.
Preferential flow from pore to landscape scales
NASA Astrophysics Data System (ADS)
Koestel, J. K.; Jarvis, N.; Larsbo, M.
2017-12-01
In this presentation, we give a brief personal overview of some recent progress in quantifying preferential flow in the vadose zone, based on our own work and those of other researchers. One key challenge is to bridge the gap between the scales at which preferential flow occurs (i.e. pore to Darcy scales) and the scales of interest for management (i.e. fields, catchments, regions). We present results of recent studies that exemplify the potential of 3-D non-invasive imaging techniques to visualize and quantify flow processes at the pore scale. These studies should lead to a better understanding of how the topology of macropore networks control key state variables like matric potential and thus the strength of preferential flow under variable initial and boundary conditions. Extrapolation of this process knowledge to larger scales will remain difficult, since measurement technologies to quantify macropore networks at these larger scales are lacking. Recent work suggests that the application of key concepts from percolation theory could be useful in this context. Investigation of the larger Darcy-scale heterogeneities that generate preferential flow patterns at the soil profile, hillslope and field scales has been facilitated by hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help to parameterize models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).
Fractionally Integrated Flux model and Scaling Laws in Weather and Climate
NASA Astrophysics Data System (ADS)
Schertzer, Daniel; Lovejoy, Shaun
2013-04-01
The Fractionally Integrated Flux model (FIF) has been extensively used to model intermittent observables, like the velocity field, by defining them with the help of a fractional integration of a conservative (i.e. strictly scale invariant) flux, such as the turbulent energy flux. It indeed corresponds to a well-defined modelling that yields the observed scaling laws. Generalised Scale Invariance (GSI) enables FIF to deal with anisotropic fractional integrations and has been rather successful in defining and modelling a unique regime of scaling anisotropic turbulence up to planetary scales. This turbulence has an effective dimension of 23/9=2.55... instead of the classically hypothesised 2D and 3D turbulent regimes, respectively for large and small spatial scales. It therefore theoretically eliminates an implausible "dimension transition" between these two regimes and the resulting requirement of a turbulent energy "mesoscale gap", whose empirical evidence has been brought more and more into question. More recently, GSI-FIF was used to analyse climate, therefore at much larger time scales. Indeed, the 23/9-dimensional regime necessarily breaks up at the outer spatial scales. The corresponding transition range, which can be called "macroweather", seems to have many interesting properties; e.g., it rather corresponds to a fractional differentiation in time with a roughly flat frequency spectrum. Furthermore, this transition yields the possibility of having, at much larger time scales, scaling space-time climate fluctuations with a much stronger scaling anisotropy between time and space. Lovejoy, S. and D. Schertzer (2013). The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge Press (in press). Schertzer, D. et al. (1997). Fractals 5(3): 427-471. Schertzer, D. and S. Lovejoy (2011). International Journal of Bifurcation and Chaos 21(12): 3417-3456.
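The scaling laws the FIF model is built to reproduce can be summarized as follows (standard Schertzer-Lovejoy notation, given here for orientation rather than reproduced from the abstract): fluctuations of an observable at scale Δx are driven by a conservative multifractal flux φ whose moments scale with the scale ratio λ,

```latex
\Delta v(\Delta x) \;\approx\; \varphi_{\Delta x}\,\Delta x^{H}, \qquad
\bigl\langle \varphi_{\lambda}^{\,q} \bigr\rangle \;\propto\; \lambda^{K(q)}, \qquad
\bigl\langle |\Delta v(\Delta x)|^{q} \bigr\rangle \;\propto\; \Delta x^{\,\zeta(q)}, \qquad
\zeta(q) = qH - K(q)
```

Here H is the order of the fractional integration and K(q) the moment-scaling function of the flux; Generalised Scale Invariance replaces the isotropic Δx by an anisotropic scale function, so the same relations hold with different exponents along the horizontal and the vertical.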
Application of SIR-C SAR to Hydrology
NASA Technical Reports Server (NTRS)
Engman, Edwin T.; ONeill, Peggy; Wood, Eric; Pauwels, Valentine; Hsu, Ann; Jackson, Tom; Shi, J. C.; Prietzsch, Corinna
1996-01-01
The progress, results and future plans regarding the following objectives are presented: (1) Determine and compare soil moisture patterns within one or more humid watersheds using SAR data, ground-based measurements, and hydrologic modeling; (2) Use radar data to characterize the hydrologic regime within a catchment and to identify the runoff producing characteristics of humid zone watersheds; and (3) Use radar data as the basis for scaling up from small scale, near-point process models to larger scale water balance models necessary to define and quantify the land phase of GCMs (Global Circulation Models).
Inflation at the electroweak scale
NASA Technical Reports Server (NTRS)
Knox, Lloyd; Turner, Michael S.
1993-01-01
We present a model for slow-rollover inflation where the vacuum energy that drives inflation is of the order of G_F^{-2}; unlike most models, the conversion of vacuum energy to radiation ('reheating') is moderately efficient. The scalar field responsible for inflation is a standard-model singlet, develops a vacuum expectation value of 4 × 10^6 GeV, has a mass of about 1 GeV, and can play a role in electroweak phenomena. We also discuss models where the energy scale of inflation is somewhat larger, but still well below the unification scale.
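A quick check of the scales involved, using the measured Fermi constant (the numbers are mine, not quoted from the abstract): a vacuum energy density of order G_F^{-2} corresponds to an energy scale

```latex
M \;=\; \bigl(G_F^{-2}\bigr)^{1/4} \;=\; G_F^{-1/2}
\;\approx\; \bigl(1.166\times 10^{-5}\,\mathrm{GeV}^{-2}\bigr)^{-1/2}
\;\approx\; 2.9\times 10^{2}\ \mathrm{GeV}
```

i.e. roughly the electroweak scale, many orders of magnitude below the ~10^16 GeV unification scale at which inflation is usually placed.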
Spectral characteristics of background error covariance and multiscale data assimilation
Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; ...
2016-05-17
The spatial resolutions of numerical atmospheric and oceanic circulation models have steadily increased over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.
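A minimal sketch of how a background error correlation length scale of the kind quoted above can be estimated from an ensemble of error samples, using the e-folding distance of the horizontal autocorrelation; the field sizes, grid spacing, and estimator are hypothetical and not those of the study.

```python
import numpy as np

def correlation_length(samples, dx_km):
    """e-folding length (km) of the zonal autocorrelation of 2-D error samples.

    samples : array (n_samples, ny, nx) of background error proxies
    dx_km   : horizontal grid spacing in km
    """
    samples = samples - samples.mean(axis=0, keepdims=True)
    nx = samples.shape[-1]
    corr = []
    for lag in range(nx // 2):
        a, b = samples[..., : nx - lag], samples[..., lag:]
        corr.append(np.mean(a * b) / np.sqrt(np.mean(a * a) * np.mean(b * b)))
    corr = np.array(corr)
    below = np.where(corr < np.exp(-1.0))[0]
    return (below[0] if below.size else nx // 2) * dx_km

# Hypothetical smooth random fields standing in for forecast-difference samples
rng = np.random.default_rng(1)
raw = rng.normal(size=(50, 64, 64))
k = np.fft.fftfreq(64)
filt = np.exp(-200.0 * (k[:, None] ** 2 + k[None, :] ** 2))
smooth = np.real(np.fft.ifft2(np.fft.fft2(raw) * filt))
print(correlation_length(smooth, dx_km=2.0))
```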
NASA Astrophysics Data System (ADS)
Kerkweg, Astrid; Hofmann, Christiane; Jöckel, Patrick; Mertens, Mariano; Pante, Gregor
2018-03-01
As part of the Modular Earth Submodel System (MESSy), the Multi-Model-Driver (MMD v1.0) was developed to couple online the regional Consortium for Small-scale Modeling (COSMO) model into a driving model, which can be either the regional COSMO model or the global European Centre Hamburg general circulation model (ECHAM) (see Part 2 of the model documentation). The coupled system is called MECO(n), i.e., MESSy-fied ECHAM and COSMO models nested n times. In this article, which is part of the model documentation of the MECO(n) system, the second generation of MMD is introduced. MMD comprises the message-passing infrastructure required for the parallel execution (multiple programme multiple data, MPMD) of different models and for the communication of the individual model instances, i.e. between the driving and the driven models. Initially, the MMD library was developed for a one-way coupling between the global chemistry-climate ECHAM/MESSy atmospheric chemistry (EMAC) model and an arbitrary number of (optionally cascaded) instances of the regional chemistry-climate model COSMO/MESSy. Thus, MMD (v1.0) provided only functions for unidirectional data transfer, i.e. from the larger-scale to the smaller-scale models. Soon, extended applications requiring data transfer from the small-scale model back to the larger-scale model became of interest. For instance, the original fields of the larger-scale model can be compared directly to the upscaled small-scale fields to analyse the improvements gained through the small-scale calculations, after the results are upscaled. Moreover, the fields originating from the two different models might be fed into the same diagnostic tool, e.g. the online calculation of the radiative forcing calculated consistently with the same radiation scheme. Last but not least, enabling two-way data transfer between two models is the first important step towards a fully dynamical and chemical two-way coupling of the various model instances. In MMD (v1.0), interpolation between the base model grids is performed via the COSMO preprocessing tool INT2LM, which was implemented into the MMD submodel for online interpolation, specifically for mapping onto the rotated COSMO grid. A more flexible algorithm is required for the backward mapping. Thus, MMD (v2.0) uses the new MESSy submodel GRID for the generalised definition of arbitrary grids and for the transformation of data between them. In this article, we explain the basics of the MMD expansion and the newly developed generic MESSy submodel GRID (v1.0) and show some examples of the abovementioned applications.
Unterberger, Michael J; Holzapfel, Gerhard A
2014-11-01
The protein actin is part of the cytoskeleton and therefore responsible for the mechanical properties of cells. From the single molecule up to the final structure, actin creates a hierarchical structure of several levels that exhibits remarkable behavior. The hierarchy spans several length scales and, given limitations in computational power, calls for different mechanical modeling approaches at the different scales. On the molecular level, we may consider each atom in molecular dynamics simulations. Actin forms filaments by combining the molecules into a double helix. In a model, we replace molecular subdomains using coarse-graining methods, allowing the investigation of larger systems. These models on the nanoscale inform continuum mechanical models of large filaments, which are based on worm-like chain models for polymers. Assemblies of actin filaments are connected by cross-linker proteins. Models with discrete filaments, so-called Mikado models, allow us to investigate the dependence of network properties on the parameters of the constituents. Microstructurally motivated continuum models of the networks provide insights into larger systems containing cross-linked actin networks. Modeling of such systems helps to gain insight into processes at such small scales. On the other hand, the models call for verification and hence trigger the improvement of established experiments and the development of new methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bossavit, A.
The authors show how to pass from the local Bean's model, assumed to be valid as a behavior law for a homogeneous superconductor, to a model of similar form, valid on a larger space scale. The process, which can be iterated to higher and higher space scales, consists in solving for the fields e and j over a "periodicity cell" with periodic boundary conditions.
Dynamics of a neural system with a multiscale architecture
Breakspear, Michael; Stam, Cornelis J
2005-01-01
The architecture of the brain is characterized by a modular organization repeated across a hierarchy of spatial scales—neurons, minicolumns, cortical columns, functional brain regions, and so on. It is important to consider that the processes governing neural dynamics at any given scale are not only determined by the behaviour of other neural structures at that scale, but also by the emergent behaviour of smaller scales, and the constraining influence of activity at larger scales. In this paper, we introduce a theoretical framework for neural systems in which the dynamics are nested within a multiscale architecture. In essence, the dynamics at each scale are determined by a coupled ensemble of nonlinear oscillators, which embody the principal scale-specific neurobiological processes. The dynamics at larger scales are 'slaved' to the emergent behaviour of smaller scales through a coupling function that depends on a multiscale wavelet decomposition. The approach is first explicated mathematically. Numerical examples are then given to illustrate phenomena such as between-scale bifurcations, and how synchronization in small-scale structures influences the dynamics in larger structures in an intuitive manner that cannot be captured by existing modelling approaches. A framework for relating the dynamical behaviour of the system to measured observables is presented and further extensions to capture wave phenomena and mode coupling are suggested. PMID:16087448
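As a schematic illustration of the nesting idea described above, the Python sketch below couples a large-scale ensemble of phase oscillators to the mean field that emerges from a small-scale ensemble. It is only a toy stand-in: the Kuramoto-type coupling, the parameter values, and the use of a simple mean field instead of the paper's wavelet-based coupling function are assumptions made for illustration.

```python
import numpy as np

def simulate_two_scale_oscillators(n_small=64, n_large=8, steps=5000, dt=0.01,
                                   k_within=1.0, k_up=0.5, seed=0):
    """Toy two-level system: a large-scale ensemble of phase oscillators is
    driven ('slaved') by the mean field emerging from a small-scale ensemble."""
    rng = np.random.default_rng(seed)
    theta_small = rng.uniform(0.0, 2.0 * np.pi, n_small)   # small-scale phases
    theta_large = rng.uniform(0.0, 2.0 * np.pi, n_large)   # large-scale phases
    w_small = rng.normal(1.0, 0.1, n_small)                # natural frequencies
    w_large = rng.normal(0.2, 0.02, n_large)
    order_history = []
    for _ in range(steps):
        # Kuramoto-type coupling within the small-scale ensemble
        mean_field = np.mean(np.exp(1j * theta_small))
        r, phi = np.abs(mean_field), np.angle(mean_field)
        theta_small += dt * (w_small + k_within * r * np.sin(phi - theta_small))
        # the large scale is driven by the emergent small-scale mean field
        theta_large += dt * (w_large + k_up * r * np.sin(phi - theta_large))
        order_history.append(r)
    return np.array(order_history), theta_large

sync_history, large_phases = simulate_two_scale_oscillators()
```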
USDA-ARS?s Scientific Manuscript database
Agriculture covers 40% of Earth’s ice-free land area and has broad impacts on global biogeochemical cycles. While some agricultural management changes are small in scale or impact, others have the potential to shift biogeochemical cycles at landscape and larger scales if widely adopted. Understandin...
Multiscale Approach to Small River Plumes off California
NASA Astrophysics Data System (ADS)
Basdurak, N. B.; Largier, J. L.; Nidzieko, N.
2012-12-01
While larger scale plumes have received significant attention, the dynamics of plumes associated with small rivers typical of California are little studied. Since small streams are not dominated by a momentum flux, their plumes are more susceptible to conditions in the coastal ocean such as wind and waves. In order to correctly model water transport at smaller scales, there is a need to capture larger scale processes. To do this, one-way nested grids with varying grid resolution (1 km and 10 m for the parent and the child grid respectively) were constructed. CENCOOS (Central and Northern California Ocean Observing System) model results were used as boundary conditions to the parent grid. Semi-idealized model results for Santa Rosa Creek, California are presented from an implementation of the Regional Ocean Modeling System (ROMS v3.0), a three-dimensional, free-surface, terrain-following numerical model. In these preliminary results, the interaction between tides, winds, and buoyancy forcing in plume dynamics is explored for scenarios including different strengths of freshwater flow with different modes (steady and pulsed). Seasonal changes in transport dynamics and dispersion patterns are analyzed.
NASA Astrophysics Data System (ADS)
Fei, T.; Skidmore, A.; Liu, Y.
2012-07-01
The thermal environment is especially important to ectotherms because many physiological functions, such as thermoregulation, depend on body temperature. So-called behavioural thermoregulation makes use of the heterogeneity of the thermal properties within an individual's habitat to sustain the animal's physiological processes. This function links the spatial utilization and distribution of an individual ectotherm with the thermal properties of its habitat (thermal habitat). In this study we modelled the relationship between the two with a spatially explicit model that simulates the movements of a lizard in a controlled environment. The model incorporates a lizard's transient body temperature into a cellular automaton algorithm as a way to link physiological knowledge of the animal with the spatial utilization of its microhabitat. On a larger spatial scale, the 'thermal roughness' of the habitat was defined and used to predict the habitat occupancy of the target species. The results showed that habitat occupancy can be modelled with the cellular-automaton-based algorithm at the smaller scale and with the thermal roughness index at the larger scale.
Modeling elasticity in crystal growth.
Elder, K R; Katakowski, Mark; Haataja, Mikko; Grant, Martin
2002-06-17
A new model of crystal growth is presented that describes the phenomena on atomic length and diffusive time scales. The former incorporates elastic and plastic deformation in a natural manner, and the latter enables access to time scales much larger than conventional atomic methods. The model is shown to be consistent with the predictions of Read and Shockley for grain boundary energy, and Matthews and Blakeslee for misfit dislocations in epitaxial growth.
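For orientation, the free energy and conserved dynamics usually written for phase-field-crystal descriptions of this kind are sketched below in LaTeX; the coefficients and normalisation are generic textbook choices and are not guaranteed to match those of the cited paper.

```latex
% Generic phase-field-crystal free energy and conserved (diffusive) dynamics
% (coefficients are illustrative, not necessarily those of the cited work):
\begin{align}
  \mathcal{F}[\psi] &= \int \mathrm{d}\mathbf{r}\,
    \left[ \frac{\psi}{2}\left(-\epsilon + \bigl(q_0^{2} + \nabla^{2}\bigr)^{2}\right)\psi
           + \frac{\psi^{4}}{4} \right],\\
  \frac{\partial \psi}{\partial t} &= \nabla^{2}\,\frac{\delta \mathcal{F}}{\delta \psi}.
\end{align}
% The periodic minima of \mathcal{F} encode lattice symmetry, elasticity and
% dislocations, while the conserved dynamics gives access to diffusive time scales.
```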
NASA Astrophysics Data System (ADS)
Tartakovsky, G. D.; Tartakovsky, A. M.; Scheibe, T. D.; Fang, Y.; Mahadevan, R.; Lovley, D. R.
2013-09-01
Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacteria Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. These results also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).
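The Monod-type formulation that the genome-scale results are compared against can be written compactly. The dual-Monod sketch below (Python) is a generic version with placeholder parameter names and values; it is not the specific rate law or coefficients fitted in the study.

```python
def dual_monod_rate(acetate, fe3, mu_max=1e-5, k_acetate=1e-4, k_fe3=1e-3):
    """Generic dual-Monod rate law: growth is co-limited by the electron donor
    (acetate) and the electron acceptor (Fe(III)); all values are placeholders."""
    return mu_max * (acetate / (k_acetate + acetate)) * (fe3 / (k_fe3 + fe3))

# Under excess nutrients both saturation terms approach 1 and the rate tends to
# mu_max, the regime in which the fitted Monod model matched the genome-scale one.
rate_excess = dual_monod_rate(acetate=1e-1, fe3=1e-1)
rate_limited = dual_monod_rate(acetate=1e-5, fe3=1e-1)   # donor-limited regime
```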
NASA Astrophysics Data System (ADS)
Scheibe, T. D.; Tartakovsky, G.; Tartakovsky, A. M.; Fang, Y.; Mahadevan, R.; Lovley, D. R.
2012-12-01
Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacteria Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. These results also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, Guzel D.; Tartakovsky, Alexandre M.; Scheibe, Timothy D.
2013-09-07
Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacteria Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. These results also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).
Capel, P.D.; Zhang, H.
2000-01-01
In assessing the occurrence, behavior, and effects of agricultural chemicals in surface water, the scales of study (i.e., watershed, county, state, and regional areas) are usually much larger than the scale of agricultural fields, where much of the understanding of processes has been developed. Field-scale areas are characterized by relatively homogeneous conditions. The combination of process-based simulation models and geographic information system technology can be used to help extend our understanding of field processes to water-quality concerns at larger scales. To demonstrate this, the model "Groundwater Loading Effects of Agricultural Management Systems" was used to estimate the potential loss of two pesticides (atrazine and permethrin) in runoff to surface water in Fillmore County in southeastern Minnesota. The county was divided into field-scale areas on the basis of a 100 m by 100 m grid, and the influences of soil type and surface topography on the potential losses of the two pesticides in runoff were evaluated for each individual grid cell. The results could be used as guidance for agricultural management and regulatory decisions, for planning environmental monitoring programs, and as an educational tool for the public.
Concepts and models of coupled systems
NASA Astrophysics Data System (ADS)
Ertsen, Maurits
2017-04-01
In this paper, I will focus especially on the question of the position of human agency, social networks and complex co-evolutionary interactions in socio-hydrological models. The long-term perspective of complex systems modeling typically focuses on regional or global spatial scales and century/millennium time scales. It is still a challenge to relate correlations in outcomes defined at those longer and larger scales to the causalities at the shorter and smaller scales. How do we move today to the next 1000 years in the same way that our ancestors moved from their today to our present, in the small steps that produce reality? Please note, I am not arguing that long-term work is uninteresting or the like. I simply pose the question of how to deal with the problem that we employ, with hindsight, relations that matter to us but not necessarily to the agents that produced the relations we think we have observed. I would like to push the socio-hydrological community a little into rethinking how to deal with complexity, with the aim of bringing together the timescales of humans and complexity. I will provide one or two examples of how larger-scale and longer-term observations on water flows and environmental loads can be broken down into smaller-scale and shorter-term production processes of these same loads.
Theoretical prediction and impact of fundamental electric dipole moments
Ellis, Sebastian A. R.; Kane, Gordon L.
2016-01-13
The predicted Standard Model (SM) electric dipole moments (EDMs) of electrons and quarks are tiny, providing an important window to observe new physics. Theories beyond the SM typically allow relatively large EDMs. The EDMs depend on the relative phases of terms in the effective Lagrangian of the extended theory, which are generally unknown. Underlying theories, such as string/M-theories compactified to four dimensions, could predict the phases and thus EDMs in the resulting supersymmetric (SUSY) theory. Earlier one of us, with collaborators, made such a prediction and found, unexpectedly, that the phases were predicted to be zero at tree level in the theory at the unification or string scale ~O(10¹⁶ GeV). Electroweak (EW) scale EDMs still arise via running from the high scale, and depend only on the SM Yukawa couplings that also give the CKM phase. Here we extend the earlier work by studying the dependence of the low-scale EDMs on the constrained but not fully known fundamental Yukawa couplings. The dominant contribution is from two-loop diagrams and is not sensitive to the choice of Yukawa texture. The electron EDM should not be found to be larger than about 5 × 10⁻³⁰ e cm, and the neutron EDM should not be larger than about 5 × 10⁻²⁹ e cm. These values are quite a bit smaller than the reported predictions from Split SUSY and typical effective theories, but much larger than the Standard Model prediction. Also, since models with random phases typically give much larger EDMs, it is a significant testable prediction of compactified M-theory that the EDMs should not be above these upper limits. The actual EDMs can be below the limits, so once they are measured they could provide new insight into the fundamental Yukawa couplings of leptons and quarks. As a result, we comment also on the role of strong CP violation. EDMs probe fundamental physics near the Planck scale.
Theoretical prediction and impact of fundamental electric dipole moments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellis, Sebastian A. R.; Kane, Gordon L.
The predicted Standard Model (SM) electric dipole moments (EDMs) of electrons and quarks are tiny, providing an important window to observe new physics. Theories beyond the SM typically allow relatively large EDMs. The EDMs depend on the relative phases of terms in the effective Lagrangian of the extended theory, which are generally unknown. Underlying theories, such as string/M-theories compactified to four dimensions, could predict the phases and thus EDMs in the resulting supersymmetric (SUSY) theory. Earlier one of us, with collaborators, made such a prediction and found, unexpectedly, that the phases were predicted to be zero at tree level in the theory at the unification or string scale ~O(10¹⁶ GeV). Electroweak (EW) scale EDMs still arise via running from the high scale, and depend only on the SM Yukawa couplings that also give the CKM phase. Here we extend the earlier work by studying the dependence of the low-scale EDMs on the constrained but not fully known fundamental Yukawa couplings. The dominant contribution is from two-loop diagrams and is not sensitive to the choice of Yukawa texture. The electron EDM should not be found to be larger than about 5 × 10⁻³⁰ e cm, and the neutron EDM should not be larger than about 5 × 10⁻²⁹ e cm. These values are quite a bit smaller than the reported predictions from Split SUSY and typical effective theories, but much larger than the Standard Model prediction. Also, since models with random phases typically give much larger EDMs, it is a significant testable prediction of compactified M-theory that the EDMs should not be above these upper limits. The actual EDMs can be below the limits, so once they are measured they could provide new insight into the fundamental Yukawa couplings of leptons and quarks. As a result, we comment also on the role of strong CP violation. EDMs probe fundamental physics near the Planck scale.
Effects of spatial variability and scale on areal -average evapotranspiration
NASA Technical Reports Server (NTRS)
Famiglietti, J. S.; Wood, Eric F.
1993-01-01
This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.
Acoustic characteristics of 1/20-scale model helicopter rotors
NASA Technical Reports Server (NTRS)
Shenoy, Rajarama K.; Kohlhepp, Fred W.; Leighton, Kenneth P.
1986-01-01
A wind tunnel test to study the effects of geometric scale on acoustics and to investigate the applicability of very small scale models for the study of acoustic characteristics of helicopter rotors was conducted in the United Technologies Research Center Acoustic Research Tunnel. The results show that the Reynolds number effects significantly alter the Blade-Vortex-Interaction (BVI) Noise characteristics by enhancing the lower frequency content and suppressing the higher frequency content. In the time domain this is observed as an inverted thickness noise impulse rather than the typical positive-negative impulse of BVI noise. At higher advance ratio conditions, in the absence of BVI, the 1/20 scale model acoustic trends with Mach number follow those of larger scale models. However, the 1/20 scale model acoustic trends appear to indicate stall at higher thrust and advance ratio conditions.
A Multi-Scale Perspective of the Effects of Forest Fragmentation on Birds in Eastern Forests
Frank R. Thompson; Therese M. Donovan; Richard M. DeGraff; John Faaborg; Scott K. Robinson
2002-01-01
We propose a model that considers forest fragmentation within a spatial hierarchy that includes regional or biogeographic effects, landscape-level fragmentation effects, and local habitat effects. We hypothesize that effects operate "top down" in that larger scale effects provide constraints or context for smaller scale effects. Bird species' abundance...
NASA Astrophysics Data System (ADS)
Visser, Philip W.; Kooi, Henk; Stuyfzand, Pieter J.
2015-05-01
Results are presented of a comprehensive thermal impact study on an aquifer thermal energy storage (ATES) system in Bilthoven, the Netherlands. The study involved monitoring of the thermal impact and modeling of the three-dimensional temperature evolution of the storage aquifer and the over- and underlying units. Special attention was paid to non-uniformity of the background temperature, which varies laterally and vertically in the aquifer. Two models were applied with different levels of detail regarding initial conditions and heterogeneity of hydraulic and thermal properties: a fine-scale heterogeneity model, which represented the lateral and vertical temperature distribution more realistically, and a simplified model, which represented the aquifer system with only a limited number of homogeneous layers. Fine-scale heterogeneity was shown to be important for accurately modeling the ATES-impacted vertical temperature distribution, the maximum and minimum temperatures in the storage aquifer, and the spatial extent of the thermal plumes. The fine-scale heterogeneity model resulted in larger thermally impacted areas and larger temperature anomalies than the simplified model. The models showed that scattered and scarce monitoring data of ATES-induced temperatures can be interpreted in a useful way by groundwater and heat transport modeling, resulting in a realistic assessment of the thermal impact.
Coarse-Grained Models for Protein-Cell Membrane Interactions
Bradley, Ryan; Radhakrishnan, Ravi
2015-01-01
The physiological properties of biological soft matter are the product of collective interactions, which span many time and length scales. Recent computational modeling efforts have helped illuminate experiments that characterize the ways in which proteins modulate membrane physics. Linking these models across time and length scales in a multiscale model explains how atomistic information propagates to larger scales. This paper reviews continuum modeling and coarse-grained molecular dynamics methods, which connect atomistic simulations and single-molecule experiments with the observed microscopic or mesoscale properties of soft-matter systems essential to our understanding of cells, particularly those involved in sculpting and remodeling cell membranes. PMID:26613047
Scale dependence of entrainment-mixing mechanisms in cumulus clouds
Lu, Chunsong; Liu, Yangang; Niu, Shengjie; ...
2014-12-17
This work empirically examines the dependence of entrainment-mixing mechanisms on the averaging scale in cumulus clouds using in situ aircraft observations during the Routine Atmospheric Radiation Measurement Aerial Facility Clouds with Low Optical Water Depths Optical Radiative Observations (RACORO) field campaign. A new measure of homogeneous mixing degree is defined that can encompass all types of mixing mechanisms. Analysis of the dependence of the homogeneous mixing degree on the averaging scale shows that, on average, the homogeneous mixing degree decreases with increasing averaging scale, suggesting that apparent mixing mechanisms gradually shift from homogeneous mixing toward extreme inhomogeneous mixing with increasing scale. The scale dependence can be well quantified by an exponential function, providing a first attempt at developing a scale-dependent parameterization for the entrainment-mixing mechanism. The influences of three factors on the scale dependence are further examined: droplet-free filament properties (size and fraction), microphysical properties (mean volume radius and liquid water content of cloud droplet size distributions adjacent to droplet-free filaments), and relative humidity of entrained dry air. It is found that the rate at which the homogeneous mixing degree decreases with increasing averaging scale becomes larger with larger droplet-free filament size and fraction, larger mean volume radius and liquid water content, or higher relative humidity. The results underscore the necessity and possibility of considering averaging scale in the representation of entrainment-mixing processes in atmospheric models.
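The exponential scale dependence mentioned above invites a simple fitting exercise. The Python sketch below assumes a single-parameter form ψ(L) = exp(-L/L0) for the homogeneous mixing degree and fits it to synthetic (scale, degree) pairs; both this parameterisation and the numbers are illustrative assumptions, not the campaign data or the exact expression used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def mixing_degree(scale, l0):
    """Assumed single-parameter exponential decrease of the homogeneous mixing
    degree with averaging scale (the paper's exact form may differ)."""
    return np.exp(-scale / l0)

# synthetic (averaging scale in metres, mixing degree) pairs for illustration only
scales = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
psi_obs = np.array([0.92, 0.80, 0.55, 0.25, 0.05])

(l0_fit,), _ = curve_fit(mixing_degree, scales, psi_obs, p0=[200.0])
print(f"fitted transition scale L0 ~ {l0_fit:.0f} m")
```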
NASA Astrophysics Data System (ADS)
Brooks, P. D.; Barnard, H. R.; Biederman, J. A.; Borkhuu, B.; Edburg, S. L.; Ewers, B. E.; Gochis, D. J.; Gutmann, E. D.; Harpold, A. A.; Hicke, J. A.; Pendall, E.; Reed, D. E.; Somor, A. J.; Troch, P. A.
2011-12-01
Widespread tree mortality caused by insect infestations and drought has impacted millions of hectares across western North America in recent years. Although previous work on post-disturbance responses (e.g. experimental manipulations, fire, and logging) provides insight into how water and biogeochemical cycles may respond to insect infestations and drought, we find that the unique nature of these drivers of tree mortality complicates extrapolation to larger scales. Building from previous work on forest disturbance, we present a conceptual model of how temporal changes in forest structure impact the individual components of energy balance, hydrologic partitioning, and biogeochemical cycling and the interactions among them. We evaluate and refine this model using integrated observations and process modeling on multiple scales including plot, stand, flux tower footprint, hillslope, and catchment to identify scaling relationships and emergent patterns in hydrological and biogeochemical responses. Our initial results suggest that changes in forest structure at point or plot scales largely have predictable effects on energy, water, and biogeochemical cycles that are well captured by land surface, hydrological, and biogeochemical models. However, observations from flux towers and nested catchments suggest that both the hydrological and biogeochemical effects observed at tree and plot scales may be attenuated or exacerbated at larger scales. Compensatory processes are associated with attenuation (e.g. as transpiration decreases, evaporation and sublimation increase), whereas both attenuation and exacerbation may result from nonlinear scaling behavior across transitions in topography and ecosystem structure that affect the redistribution of energy, water, and solutes. Consequently, the effects of widespread tree mortality on ecosystem services of water supply and carbon sequestration will likely depend on how spatial patterns in mortality severity across the landscape affect large-scale hydrological partitioning.
Crotty, Patrick; García-Bellido, Juan; Lesgourgues, Julien; Riazuelo, Alain
2003-10-24
We obtain very stringent bounds on the possible cold dark matter, baryon, and neutrino isocurvature contributions to the primordial fluctuations in the Universe, using recent cosmic microwave background and large scale structure data. Neglecting the possible effects of spatial curvature, tensor perturbations, and reionization, we perform a Bayesian likelihood analysis with nine free parameters, and find that the amplitude of the isocurvature component cannot be larger than about 31% for the cold dark matter mode, 91% for the baryon mode, 76% for the neutrino density mode, and 60% for the neutrino velocity mode, at 2σ, for uncorrelated models. For correlated adiabatic and isocurvature components, the fraction could be slightly larger. However, the cross-correlation coefficient is strongly constrained, and maximally correlated/anticorrelated models are disfavored. This puts strong bounds on the curvaton model.
Povey, Jane F; O'Malley, Christopher J; Root, Tracy; Martin, Elaine B; Montague, Gary A; Feary, Marc; Trim, Carol; Lang, Dietmar A; Alldread, Richard; Racher, Andrew J; Smales, C Mark
2014-08-20
Despite many advances in the generation of high producing recombinant mammalian cell lines over the last few decades, cell line selection and development is often slowed by the inability to predict a cell line's phenotypic characteristics (e.g. growth or recombinant protein productivity) at larger scale (large volume bioreactors) using data from early cell line construction at small culture scale. Here we describe the development of an intact cell MALDI-ToF mass spectrometry fingerprinting method for mammalian cells early in the cell line construction process whereby the resulting mass spectrometry data are used to predict the phenotype of mammalian cell lines at larger culture scale using a Partial Least Squares Discriminant Analysis (PLS-DA) model. Using MALDI-ToF mass spectrometry, a library of mass spectrometry fingerprints was generated for individual cell lines at the 96 deep well plate stage of cell line development. The growth and productivity of these cell lines were evaluated in a 10L bioreactor model of Lonza's large-scale (up to 20,000L) fed-batch cell culture processes. Using the mass spectrometry information at the 96 deep well plate stage and phenotype information at the 10L bioreactor scale a PLS-DA model was developed to predict the productivity of unknown cell lines at the 10L scale based upon their MALDI-ToF fingerprint at the 96 deep well plate scale. This approach provides the basis for the very early prediction of cell lines' performance in cGMP manufacturing-scale bioreactors and the foundation for methods and models for predicting other mammalian cell phenotypes from rapid, intact-cell mass spectrometry based measurements. Copyright © 2014 Elsevier B.V. All rights reserved.
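The prediction step described above can be illustrated with a generic PLS-DA workflow. The Python sketch below uses scikit-learn's PLSRegression on class labels as a stand-in for the paper's PLS-DA model; the synthetic fingerprint matrix, the number of components and the 0.5 decision threshold are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
# synthetic stand-ins: rows = cell lines, columns = binned MALDI-ToF intensities
X_small_scale = rng.normal(size=(40, 200))
y_high_producer = rng.integers(0, 2, size=40)      # phenotype class at 10L scale

# PLS-DA is commonly implemented as PLS regression on the class labels
pls = PLSRegression(n_components=3)
pls.fit(X_small_scale, y_high_producer.astype(float))

# classify new fingerprints by thresholding the continuous PLS output
X_new = rng.normal(size=(5, 200))
predicted_class = (pls.predict(X_new).ravel() > 0.5).astype(int)
```

In practice the class threshold and the number of latent components would be chosen by cross-validation against the 10L-scale phenotype data rather than fixed as here.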
The Importance of Precise Digital Elevation Models (DEM) in Modelling Floods
NASA Astrophysics Data System (ADS)
Demir, Gokben; Akyurek, Zuhal
2016-04-01
Digital Elevation Models (DEMs) are important topographic inputs for the accurate modelling of floodplain hydrodynamics. Floodplains have a key role as natural retarding pools which attenuate flood waves and suppress flood peaks. GPS, LIDAR and bathymetric surveys are well-known surveying methods to acquire topographic data. Obtaining topographic data through surveying is not only time consuming and expensive but also sometimes impossible for remote areas. This study aims to present the importance of accurate representation of topography for flood modelling. Flood modelling was carried out for Samsun-Terme in the Black Sea region of Turkey. One DEM was obtained from point observations retrieved from 1/5000-scale orthophotos and 1/1000-scale point elevation data from field surveys at cross-sections. The river banks were corrected by using the orthophotos and elevation values. This DEM is referred to as the scaled DEM. The other DEM was obtained from bathymetric surveys: 296,538 points and the left/right bank slopes were used to construct a DEM with 1 m spatial resolution, referred to as the base DEM. The two DEMs were compared using 27 cross-sections; the maximum difference at the thalweg of the river bed is 2 m and the minimum difference is 20 cm. The channel conveyance capacity in the base DEM is larger than in the scaled DEM, and the floodplain is modelled in detail in the base DEM. MIKE21 with a flexible grid was used for two-dimensional shallow-water flow modelling. The models built on the two DEMs were calibrated for a flood event (July 9, 2012), with roughness as the calibration parameter. From comparison of the input hydrograph upstream and the output hydrograph downstream of the river, the attenuation is obtained as 91% and 84% for the base DEM and the scaled DEM, respectively. The time lag in the hydrographs does not differ between the two DEMs and is obtained as 3 hours. Maximum flood extents differ between the two DEMs, with a larger flooded area simulated from the scaled DEM. The main difference is observed for the braided and meandering parts of the river. For the meandering part of the river, an additional 1.82 × 10⁶ m³ of water (5% of the total volume) is calculated as the flooded volume simulated using the scaled DEM. For the braided stream part, 0.187 × 10⁶ m³ more water is simulated as the flooded volume by the scaled DEM. The flood extent around the braided part of the river is 27.6 ha larger in the simulated flood map obtained from the scaled DEM compared to the one obtained from the base DEM. Around the meandering part of the river, the scaled DEM gives 59.8 ha more flooded area. The importance of correct topography of the braided and meandering parts of the river in flood modelling, and the uncertainty it brings to modelling, are discussed in detail.
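The attenuation and lag figures quoted above are simple hydrograph summaries. The Python sketch below shows one plausible way to compute them (peak-discharge reduction and peak-to-peak lag) from simulated upstream and downstream discharge series; the triangular hydrographs and this particular definition of attenuation are assumptions for illustration, not the study's MIKE21 output or post-processing.

```python
import numpy as np

def peak_attenuation_and_lag(q_in, q_out, dt_hours=1.0):
    """Peak attenuation (fractional reduction of peak discharge) and time lag
    between upstream (q_in) and downstream (q_out) hydrographs."""
    attenuation = 1.0 - q_out.max() / q_in.max()
    lag_hours = (np.argmax(q_out) - np.argmax(q_in)) * dt_hours
    return attenuation, lag_hours

# purely illustrative triangular hydrographs sampled hourly over two days
t = np.arange(0, 48, 1.0)
q_upstream = np.maximum(0.0, 100.0 - 8.0 * np.abs(t - 12))
q_downstream = np.maximum(0.0, 12.0 - 1.0 * np.abs(t - 15))
print(peak_attenuation_and_lag(q_upstream, q_downstream))
```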
Intercomparison of Multiscale Modeling Approaches in Simulating Subsurface Flow and Transport
NASA Astrophysics Data System (ADS)
Yang, X.; Mehmani, Y.; Barajas-Solano, D. A.; Song, H. S.; Balhoff, M.; Tartakovsky, A. M.; Scheibe, T. D.
2016-12-01
Hybrid multiscale simulations that couple models across scales are critical to advance predictions of the larger system behavior using understanding of fundamental processes. In the current study, three hybrid multiscale methods are intercompared: the multiscale loose-coupling method, the multiscale finite volume (MsFV) method and the multiscale mortar method. The loose-coupling method enables a parallel workflow structure based on the Swift scripting environment that manages the complex process of executing coupled micro- and macro-scale models without being intrusive to the at-scale simulators. The MsFV method applies microscale and macroscale models over overlapping subdomains of the modeling domain and enforces continuity of concentration and transport fluxes between models via restriction and prolongation operators. The mortar method is a non-overlapping domain decomposition approach capable of coupling all permutations of pore- and continuum-scale models with each other. In doing so, Lagrange multipliers are used at interfaces shared between the subdomains so as to establish continuity of species/fluid mass flux. Subdomain computations can be performed either concurrently or non-concurrently depending on the algorithm used. All the above methods have been proven to be accurate and efficient in studying flow and transport in porous media. However, there have been no field-scale applications or benchmarking among the various hybrid multiscale approaches. To address this challenge, we apply all three hybrid multiscale methods to simulate water flow and transport in a conceptualized 2D modeling domain of the hyporheic zone, where strong interactions between groundwater and surface water exist across multiple scales. In all three multiscale methods, fine-scale simulations are applied to a thin layer of riverbed alluvial sediments while the macroscopic simulations are used for the larger subsurface aquifer domain. Different numerical coupling methods are then applied between scales and inter-compared. Comparisons are drawn in terms of velocity distributions, solute transport behavior, algorithm-induced numerical error and computing cost. The intercomparison work provides support for confidence in a variety of hybrid multiscale methods and motivates further development and applications.
NASA Astrophysics Data System (ADS)
Toohey, R.; Boll, J.; Brooks, E.; Jones, J.
2009-12-01
Surface runoff and percolation to ground water are two hydrological processes of concern to the Atlantic slope of Costa Rica because of their impacts on flooding and drinking water contamination. As per legislation, the Costa Rican Government funds land use management from the farm to the regional scale to improve or conserve hydrological ecosystem services. In this study, we examined how land use (e.g., forest, coffee, sugar cane, and pasture) affects hydrological response at the point, plot (1 m2), and the field scale (1-6ha) to empirically conceptualize the dominant hydrological processes in each land use. Using our field data, we upscaled these conceptual processes into a physically-based distributed hydrological model at the field, watershed (130 km2), and regional (1500 km2) scales. At the point and plot scales, the presence of macropores and large roots promoted greater vertical percolation and subsurface connectivity in the forest and coffee field sites. The lack of macropores and large roots, plus the addition of management artifacts (e.g., surface compaction and a plough layer), altered the dominant hydrological processes by increasing lateral flow and surface runoff in the pasture and sugar cane field sites. Macropores and topography were major influences on runoff generation at the field scale. Also at the field scale, antecedent moisture conditions suggest a threshold behavior as a temporal control on surface runoff generation. However, in this tropical climate with very intense rainstorms, annual surface runoff was less than 10% of annual precipitation at the field scale. Significant differences in soil and hydrological characteristics observed at the point and plot scales appear to have less significance when upscaled to the field scale. At the point and plot scales, percolation acted as the dominant hydrological process in this tropical environment. However, at the field scale for sugar cane and pasture sites, saturation-excess runoff increased as irrigation intensity and duration (e.g., quantity) increased. Upscaling our conceptual models to the watershed and regional scales, historical data (1970-2004) was used to investigate whether dominant hydrological processes changed over time due to land use change. Preliminary investigations reveal much higher runoff coefficients (<30%) at the larger watershed scales. The increase in importance of runoff at the larger geographic scales suggests an emerging process and process non-linearity between the smaller and larger scales. Upscaling is an important and useful concept when investigating catchment response using the tools of field work and/or physically distributed hydrological modeling.
Scaling properties of European research units
Jamtveit, Bjørn; Jettestuen, Espen; Mathiesen, Joachim
2009-01-01
A quantitative characterization of the scale-dependent features of research units may provide important insight into how such units are organized and how they grow. The relative importance of top-down versus bottom-up controls on their growth may be revealed by their scaling properties. Here we show that the number of support staff in Scandinavian research units, ranging in size from 20 to 7,800 staff members, is related to the number of academic staff by a power law. The scaling exponent of ≈1.30 is broadly consistent with a simple hierarchical model of the university organization. Similar scaling behavior between small and large research units with a wide range of ambitions and strategies argues against top-down control of the growth. Top-down effects, and externally imposed effects from changing political environments, can be observed as fluctuations around the main trend. The observed scaling law implies that cost-benefit arguments for merging research institutions into larger and larger units may have limited validity unless the productivity per academic staff and/or the quality of the products are considerably higher in larger institutions. Despite the hierarchical structure of most large-scale research units in Europe, the network structures represented by the academic component of such units are strongly antihierarchical and suboptimal for efficient communication within individual units. PMID:19625626
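The power-law relation reported above is straightforward to estimate from (academic staff, support staff) pairs. The Python sketch below fits support = a · academic^b by ordinary least squares in log-log space; the data points are made-up placeholders, not the Scandinavian dataset analysed in the paper.

```python
import numpy as np

# illustrative (academic staff, support staff) pairs; the study's actual data
# come from Scandinavian research units spanning 20 to 7,800 staff members
academic = np.array([15, 60, 250, 900, 3000])
support = np.array([6, 35, 190, 850, 3400])

# a power law  support = a * academic**b  is linear in log-log space
b, log_a = np.polyfit(np.log(academic), np.log(support), 1)
print(f"scaling exponent b ~ {b:.2f}  (the paper reports ~1.30)")
```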
Numerical evaluation of the scale problem on the wind flow of a windbreak
Liu, Benli; Qu, Jianjun; Zhang, Weimin; Tan, Lihai; Gao, Yanhong
2014-01-01
The airflow field around wind fences with different porosities, which are important in determining the efficiency of fences as a windbreak, is typically studied via scaled wind tunnel experiments and numerical simulations. However, the scale problem in wind tunnels or numerical models is rarely researched. In this study, we perform a numerical comparison between a scaled wind-fence experimental model and an actual-sized fence via computational fluid dynamics simulations. The results show that although the general field pattern can be captured in a reduced-scale wind tunnel or numerical model, several flow characteristics near obstacles are not proportional to the size of the model and thus cannot be extrapolated directly. For example, the small vortex behind a low-porosity fence with a scale of 1:50 is approximately 4 times larger than that behind a full-scale fence. PMID:25311174
NASA Astrophysics Data System (ADS)
Murphy, J.; Lammers, R. B.; Proussevitch, A. A.; Ozik, J.; Altaweel, M.; Collier, N. T.; Alessa, L.; Kliskey, A. D.
2014-12-01
The global hydrological cycle intersects with human decision making at multiple scales, from dams and irrigation works to the taps in individuals' homes. Residential water consumers are commonly encouraged to conserve; these messages are heard against a background of individual values and conceptions about water quality, uses, and availability. The degree to which these values impact the larger-hydrological dynamics, the way that changes in those values have impacts on the hydrological cycle through time, and the feedbacks by which water availability and quality in turn shape those values, are not well explored. To investigate this domain we employ a global-scale water balance model (WBM) coupled with a social-science-grounded agent-based model (ABM). The integration of a hydrological model with an agent-based model allows us to explore driving factors in the dynamics in coupled human-natural systems. From the perspective of the physical hydrologist, the ABM offers a richer means of incorporating the human decisions that drive the hydrological system; from the view of the social scientist, a physically-based hydrological model allows the decisions of the agents to play out against constraints faithful to the real world. We apply the interconnected models to a study of Tucson, Arizona, USA, and its role in the larger Colorado River system. Our core concept is Technology-Induced Environmental Distancing (TIED), which posits that layers of technology can insulate consumers from direct knowledge of a resource. In Tucson, multiple infrastructure and institutional layers have arguably increased the conceptual distance between individuals and their water supply, offering a test case of the TIED framework. Our coupled simulation allows us to show how the larger system transforms a resource with high temporal and spatial variability into a consumer constant, and the effects of this transformation on the regional system. We use this to explore how pricing, messaging, and social dynamics impact demand, how changes in demand affect the regional water system, and under what system challenges the values of the individuals are likely to change. This study is a preamble to modeling multiple regionally connected cities and larger systems with impacts on hydrology at the continental and global scales.
Radiative transfer calculations of the diffuse ionized gas in disc galaxies with cosmic ray feedback
NASA Astrophysics Data System (ADS)
Vandenbroucke, Bert; Wood, Kenneth; Girichidis, Philipp; Hill, Alex S.; Peters, Thomas
2018-05-01
The large vertical scale heights of the diffuse ionized gas (DIG) in disc galaxies are challenging to model, as hydrodynamical models including only thermal feedback seem to be unable to support gas at these heights. In this paper, we use a three-dimensional Monte Carlo radiation transfer code to post-process disc simulations of the Simulating the Life-Cycle of Molecular Clouds project that include feedback by cosmic rays. We show that the more extended discs in simulations including cosmic ray feedback naturally lead to larger scale heights for the DIG which are more in line with observed scale heights. We also show that including a fiducial cosmic ray heating term in our model can help to increase the temperature as a function of disc scale height, but fails to reproduce observed DIG nitrogen and sulphur forbidden line intensities. We show that, to reproduce these line emissions, we require a heating mechanism that affects gas over a larger density range than is achieved by cosmic ray heating, which can be achieved by fine tuning the total luminosity of ionizing sources to get an appropriate ionizing spectrum as a function of scale height. This result sheds a new light on the relation between forbidden line emissions and temperature profiles for realistic DIG gas distributions.
The scaling of maximum and basal metabolic rates of mammals and birds
NASA Astrophysics Data System (ADS)
Barbosa, Lauro A.; Garcia, Guilherme J. M.; da Silva, Jafferson K. L.
2006-01-01
Allometric scaling is one of the most pervasive laws in biology. Its origin, however, is still a matter of dispute. Recent studies have established that maximum metabolic rate scales with an exponent larger than that found for basal metabolism. This unpredicted result sets a challenge that can decide which of the concurrent hypotheses is the correct theory. Here, we show that both scaling laws can be deduced from a single network model. Besides the 3/4-law for basal metabolism, the model predicts that maximum metabolic rate scales as M, maximum heart rate as M, and muscular capillary density as M, in agreement with data.
Impacts of insect disturbance on the structure, composition, and functioning of oak-pine forests
NASA Astrophysics Data System (ADS)
Medvigy, D.; Schafer, K. V.; Clark, K. L.
2011-12-01
Episodic disturbance is an essential feature of terrestrial ecosystems, and strongly modulates their structure, composition, and functioning. However, dynamic global vegetation models that are commonly used to make ecosystem and terrestrial carbon budget predictions rarely have an explicit representation of disturbance. One reason why disturbance is seldom included is that disturbance tends to operate on spatial scales that are much smaller than typical model resolutions. In response to this problem, the Ecosystem Demography model 2 (ED2) was developed as a way of tracking the fine-scale heterogeneity arising from disturbances. In this study, we used ED2 to simulate an oak-pine forest that experiences episodic defoliation by gypsy moth (Lymantria dispar L.). The model was carefully calibrated against site-level data, and then used to simulate changes in ecosystem composition, structure, and functioning on century time scales. Compared to simulations that include gypsy moth defoliation, we show that simulations that ignore defoliation events lead to much larger ecosystem carbon stores and a larger fraction of deciduous trees relative to evergreen trees. Furthermore, we find that it is essential to preserve the fine-scale nature of the disturbance. Attempts to "smooth out" the defoliation event over an entire grid cell led to large biases in ecosystem structure and functioning.
Micro- and meso-scale pore structure in mortar in relation to aggregate content
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Yun, E-mail: yun.gao@ugent.be; De Schutter, Geert; Ye, Guang
2013-10-15
Mortar is often viewed as a three-phase composite consisting of aggregate, bulk paste, and an interfacial transition zone (ITZ). However, this description is inconsistent with experimental findings because of the basic assumption that larger pores are only present within the ITZ. In this paper, we use backscattered electron (BSE) imaging to investigate the micro- and meso-scale structure of mortar with varying aggregate content. The results indicate that larger pores are present not only within the ITZ but also within areas far from aggregates. This phenomenon is discussed in detail based on a series of analytical calculations, such as the effective water-binder ratio and the inter-aggregate spacing. We developed a modified computer model that includes a two-phase structure for bulk paste. This model interprets previous mercury intrusion porosimetry data very well. Highlights: based on BSE, we examine the HCSS model; we develop the HCSS-DBLB model; we use the modified model to interpret the MIP data.
Multi-scale landslide hazard assessment: Advances in global and regional methodologies
NASA Astrophysics Data System (ADS)
Kirschbaum, Dalia; Peters-Lidard, Christa; Adler, Robert; Hong, Yang
2010-05-01
The increasing availability of remotely sensed surface data and precipitation provides a unique opportunity to explore how smaller-scale landslide susceptibility and hazard assessment methodologies may be applicable at larger spatial scales. This research first considers an emerging satellite-based global algorithm framework, which evaluates how landslide susceptibility and satellite-derived rainfall estimates can forecast potential landslide conditions. An analysis of this algorithm using a newly developed global landslide inventory catalog suggests that forecasting errors are geographically variable due to improper weighting of surface observables, resolution of the current susceptibility map, and limitations in the availability of landslide inventory data. These methodological and data limitation issues can be more thoroughly assessed at the regional level, where available higher resolution landslide inventories can be applied to empirically derive relationships between surface variables and landslide occurrence. The regional empirical model shows improvement over the global framework in advancing near real-time landslide forecasting efforts; however, there are many uncertainties and assumptions surrounding such a methodology that decrease the functionality and utility of this system. This research seeks to improve upon this initial concept by exploring the potential opportunities and methodological structure needed to advance larger-scale landslide hazard forecasting and make it more of an operational reality. Sensitivity analysis of the surface and rainfall parameters in the preliminary algorithm indicates that surface data resolution and the interdependency of variables must be more appropriately quantified at local and regional scales. Additionally, integrating available surface parameters must be approached in a more theoretical, physically-based manner to better represent the physical processes underlying slope instability and landslide initiation. Several rainfall infiltration and hydrological flow models have been developed to model slope instability at small spatial scales. This research investigates the potential of applying a more quantitative hydrological model to larger spatial scales, utilizing satellite and surface data inputs that are obtainable over different geographic regions. Due to the significant role that data and methodological uncertainties play in the effectiveness of landslide hazard assessment outputs, the methodology and data inputs are considered within an ensemble uncertainty framework in order to better resolve the contribution and limitations of model inputs and to more effectively communicate the model skill for improved landslide hazard assessment.
Modeling habitat for Marbled Murrelets on the Siuslaw National Forest, Oregon, using lidar data
Hagar, Joan C.; Aragon, Ramiro; Haggerty, Patricia; Hollenbeck, Jeff P.
2018-03-28
Habitat models using lidar-derived variables that quantify fine-scale variation in vegetation structure can improve the accuracy of occupancy estimates for canopy-dwelling species over models that use variables derived from other remote sensing techniques. However, the ability of models developed at such a fine spatial scale to maintain accuracy at regional or larger spatial scales has not been tested. We tested the transferability of a lidar-based habitat model for the threatened Marbled Murrelet (Brachyramphus marmoratus) between two management districts within a larger regional conservation zone in coastal western Oregon. We compared the performance of the transferred model against models developed with data from the application location. The transferred model had good discrimination (AUC = 0.73) at the application location, and model performance was further improved by fitting the original model with coefficients from the application location dataset (AUC = 0.79). However, the model selection procedure indicated that neither of these transferred models was considered competitive with a model trained on local data. The new model trained on data from the application location resulted in the selection of a slightly different set of lidar metrics from the original model, but both transferred and locally trained models consistently indicated positive relationships between the probability of occupancy and lidar measures of canopy structural complexity. We conclude that while the locally trained model had superior performance for local application, the transferred model could reasonably be applied to the entire conservation zone.
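As a rough illustration of the transfer test described above (not the authors' workflow), the sketch below fits a logistic occupancy model in one hypothetical district and scores its discrimination with AUC in another; the two lidar predictors, coefficients, and sample sizes are invented, and a real assessment would use held-out data and the study's actual metrics.

```python
# Sketch: fit an occupancy model in one district, score its AUC in another.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_district(n, coef):
    """Synthetic lidar metrics (e.g., canopy height, rumple) and occupancy labels."""
    X = rng.normal(size=(n, 2))
    p = 1.0 / (1.0 + np.exp(-(X @ coef - 0.5)))
    return X, rng.binomial(1, p)

X_train, y_train = make_district(400, np.array([1.2, 0.8]))   # original district
X_apply, y_apply = make_district(400, np.array([1.0, 0.9]))   # application district

model = LogisticRegression().fit(X_train, y_train)
auc_transferred = roc_auc_score(y_apply, model.predict_proba(X_apply)[:, 1])

local = LogisticRegression().fit(X_apply, y_apply)            # locally trained (in-sample AUC, for brevity)
auc_local = roc_auc_score(y_apply, local.predict_proba(X_apply)[:, 1])
print(f"transferred AUC = {auc_transferred:.2f}, locally trained AUC = {auc_local:.2f}")
```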
Numerical Simulations of Vortical Mode Stirring: Effects of Large Scale Shear and Strain
2015-09-30
Numerical Simulations of Vortical Mode Stirring: Effects of Large-Scale Shear and Strain. M.-Pascale Lelong, NorthWest Research Associates ... can be implemented in larger-scale ocean models. These parameterizations will incorporate the effects of local ambient conditions, including latitude ... presented in a talk at the Nonlinear Effects in Internal Waves Conference.
SRNL PARTICIPATION IN THE MULTI-SCALE ENSEMBLE EXERCISES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, R
2007-10-29
Consequence assessment during emergency response often requires atmospheric transport and dispersion modeling to guide decision making. A statistical analysis of the ensemble of results from several models is a useful way of estimating the uncertainty for a given forecast. ENSEMBLE is a European Union program that utilizes an internet-based system to ingest transport results from numerous modeling agencies. A recent set of exercises required output on three distinct spatial and temporal scales. The Savannah River National Laboratory (SRNL) uses a regional prognostic model nested within a larger-scale synoptic model to generate the meteorological conditions which are in turn used in a Lagrangian particle dispersion model. A discussion of SRNL participation in these exercises is given, with particular emphasis on requirements for provision of results in a timely manner with regard to the various spatial scales.
Role of natural analogs in performance assessment of nuclear waste repositories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sagar, B.; Wittmeyer, G.W.
1995-09-01
Mathematical models of the flow of water and transport of radionuclides in porous media will be used to assess the ability of deep geologic repositories to safely contain nuclear waste. These models must, in some sense, be validated to ensure that they adequately describe the physical processes occurring within the repository and its geologic setting. Inasmuch as the spatial and temporal scales over which these models must be applied in performance assessment are very large, validation of these models against laboratory and small-scale field experiments may be considered inadequate. Natural analogs may provide validation data that are representative of physico-chemical processes that occur over spatial and temporal scales as large or larger than those relevant to repository design. The authors discuss the manner in which natural analog data may be used to increase confidence in performance assessment models and conclude that, while these data may be suitable for testing the basic laws governing flow and transport, there is insufficient control of boundary and initial conditions and forcing functions to permit quantitative validation of complex, spatially distributed flow and transport models. The authors also express their opinion that, for collecting adequate data from natural analogs, resources will have to be devoted to them that are much larger than are devoted to them at present.
Aleatory Uncertainty and Scale Effects in Computational Damage Models for Failure and Fragmentation
2014-09-01
larger specimens, small specimens have, on average, higher strengths. Equivalently, because curves for small specimens fall below those of larger...the material strength associated with each realization parameter R in Equation (7), and strength distribution curves associated with multiple...effects in brittle media [58], which applies micromorphological dimensional analysis to obtain a universal curve which closely fits rate-dependent
Sparse and Large-Scale Learning Models and Algorithms for Mining Heterogeneous Big Data
ERIC Educational Resources Information Center
Cai, Xiao
2013-01-01
With the development of PCs, the internet, and mobile devices, we are facing a data explosion era. On one hand, more and more features can be collected to describe the data, making the size of the data descriptor larger and larger. On the other hand, the amount of data itself explodes, and data can be collected from multiple sources. When the data…
NASA Astrophysics Data System (ADS)
Warner, J. C.; Armstrong, B. N.; He, R.; Zambon, J. B.; Olabarrieta, M.; Voulgaris, G.; Kumar, N.; Haas, K. A.
2012-12-01
Understanding processes responsible for coastal change is important for managing both our natural and economic coastal resources. Coastal processes respond to both local-scale and larger regional-scale forcings. Understanding these processes can lead to significant insight into how the coastal zone evolves. Storms are one of the primary driving forces causing coastal change through a coupling of wave- and wind-driven flows. Here we utilize a numerical modeling approach to investigate the dynamics of coastal storm impacts. We use the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) Modeling System, which utilizes the Model Coupling Toolkit to exchange prognostic variables between the ocean model ROMS, the atmosphere model WRF, the wave model SWAN, and the Community Sediment Transport Modeling System (CSTMS) sediment routines. The models exchange fields of sea-surface temperature, ocean currents, water levels, bathymetry, wave heights, lengths, periods, bottom orbital velocities, atmospheric surface heat and momentum fluxes, atmospheric pressure, precipitation, and evaporation. Data fields are exchanged using regridded, flux-conservative sparse matrix interpolation weights computed with the SCRIP spherical coordinate remapping interpolation package. We describe the modeling components and the model field exchange methods. As part of the system, the wave and ocean models run with cascading, refined spatial grids to provide increased resolution, scaling down to resolve nearshore wave-driven flows simulated by the vortex force formulation, all within selected regions of a larger, coarser-scale coastal modeling system. The ocean and wave models are driven by the atmospheric component, which is affected by wave-dependent ocean-surface roughness and sea-surface temperature, which modify the heat and momentum fluxes at the ocean-atmosphere interface. We describe the application of the modeling system to several regions of multi-scale complexity to identify the significance of larger-scale forcing cascading down to smaller scales and to investigate the interactions of the coupled system with an increasing degree of model-model interaction. Three examples include the impact of Hurricane Ivan in 2004 in the Gulf of Mexico, Hurricane Ida in 2009, which evolved into a tropical storm on the US East coast, and the passage of strong cold fronts across the US southeast. Results identify that hurricane intensity is extremely sensitive to sea-surface temperature, with a reduction in intensity when the atmosphere is coupled to the ocean model due to rapid cooling of the ocean from the surface through the mixed layer. Coupling of the ocean to the atmosphere also results in decreased boundary layer stress, and coupling of the waves to the atmosphere results in increased sea-surface stress. Wave results are sensitive to both ocean and atmospheric coupling due to wave-current interactions with the ocean and wave growth from the atmospheric wind stress. Sediment resuspension at the regional scale during the hurricane is controlled by shelf width and wave propagation during hurricane approach. Results from simulations of the passage of cold fronts suggest that synoptic meteorological systems can strongly impact surf zone and inner shelf response, and therefore act as a strong driver for long-term littoral sediment transport. We will also present some of the challenges faced in developing the modeling system.
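The field exchange step rests on applying precomputed remapping weights as a sparse matrix-vector product. The sketch below shows that operation in generic form; the weight triplets and the tiny source and destination grids are invented for illustration and are not SCRIP output.

```python
# Minimal sketch of applying conservative remapping weights of the kind SCRIP
# produces (triplets: destination index, source index, weight) to move a field
# from a source grid to a destination grid.
import numpy as np
from scipy.sparse import coo_matrix

n_src, n_dst = 6, 3
# Each destination cell receives contributions from two source cells (weights sum to 1).
dst_idx = np.array([0, 0, 1, 1, 2, 2])
src_idx = np.array([0, 1, 2, 3, 4, 5])
weights = np.array([0.6, 0.4, 0.5, 0.5, 0.7, 0.3])

remap = coo_matrix((weights, (dst_idx, src_idx)), shape=(n_dst, n_src)).tocsr()

sst_src = np.array([15.0, 15.5, 16.2, 16.0, 17.1, 17.3])   # field on the source grid
sst_dst = remap @ sst_src                                    # regridded field
print(sst_dst)
```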
Multiscale turbulence models based on convected fluid microstructure
NASA Astrophysics Data System (ADS)
Holm, Darryl D.; Tronci, Cesare
2012-11-01
The Euler-Poincaré approach to complex fluids is used to derive multiscale equations for computationally modeling Euler flows as a basis for modeling turbulence. The model is based on a kinematic sweeping ansatz (KSA) which assumes that the mean fluid flow serves as a Lagrangian frame of motion for the fluctuation dynamics. Thus, we regard the motion of a fluid parcel on the computationally resolvable length scales as a moving Lagrange coordinate for the fluctuating (zero-mean) motion of fluid parcels at the unresolved scales. Even in the simplest two-scale version on which we concentrate here, the contributions of the fluctuating motion under the KSA to the mean motion yield a system of equations that extends known results and appears to be suitable for modeling nonlinear backscatter (energy transfer from smaller to larger scales) in turbulence using multiscale methods.
Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution
NASA Astrophysics Data System (ADS)
Rajulapati, C. R.; Mujumdar, P. P.
2017-12-01
Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
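A minimal, non-Bayesian sketch of the ingredients discussed above: fit a GPD to excesses over a quantile-based threshold and apply an assumed power-law scaling u_d = u_D (d/D)^eta to carry the threshold and scale parameter to shorter durations. The rainfall series and the exponent eta are synthetic placeholders, not the Berlin or Bangalore data.

```python
# Sketch: GPD fit to peaks over threshold plus an assumed duration scaling.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
daily = rng.gamma(shape=0.6, scale=8.0, size=5000)       # synthetic daily rainfall (mm)
u_daily = np.quantile(daily[daily > 0], 0.95)            # threshold from a non-zero quantile
excess = daily[daily > u_daily] - u_daily

shape, _, scale = genpareto.fit(excess, floc=0)           # fit GPD to threshold excesses
print(f"GPD shape = {shape:.2f}, scale = {scale:.2f}, threshold = {u_daily:.1f} mm")

eta = 0.7                                                 # assumed scaling exponent
for hours in (12, 6, 3):
    factor = (hours / 24.0) ** eta
    print(f"{hours:2d} h: threshold ~ {u_daily*factor:.1f} mm, scale ~ {scale*factor:.1f} mm")
```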
Scale-up of ecological experiments: Density variation in the mobile bivalve Macomona liliana
Schneider, David C.; Walters, R.; Thrush, S.; Dayton, P.
1997-01-01
At present the problem of scaling up from controlled experiments (necessarily at a small spatial scale) to questions of regional or global importance is perhaps the most pressing issue in ecology. Most of the proposed techniques recommend iterative cycling between theory and experiment. We present a graphical technique that facilitates this cycling by allowing the scope of experiments, surveys, and natural history observations to be compared to the scope of models and theory. We apply the scope analysis to the problem of understanding the population dynamics of a bivalve exposed to environmental stress at the scale of a harbour. Previous lab and field experiments were found not to be 1:1 scale models of harbour-wide processes. Scope analysis allowed small scale experiments to be linked to larger scale surveys and to a spatially explicit model of population dynamics.
A Sub-filter Scale Noise Equation for Hybrid LES Simulations
NASA Technical Reports Server (NTRS)
Goldstein, Marvin E.
2006-01-01
Hybrid LES/subscale modeling approaches have an important advantage over the current noise prediction methods in that they only involve modeling of the relatively universal subscale motion and not the configuration-dependent larger-scale turbulence. Previous hybrid approaches use approximate statistical techniques or extrapolation methods to obtain the requisite information about the sub-filter scale motion. An alternative approach would be to adopt the modeling techniques used in the current noise prediction methods and determine the unknown stresses from experimental data. The present paper derives an equation for predicting the subscale sound from information that can be obtained with currently available experimental procedures. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid techniques.
Large-angle cosmic microwave background anisotropies in an open universe
NASA Technical Reports Server (NTRS)
Kamionkowski, Marc; Spergel, David N.
1994-01-01
If the universe is open, scales larger than the curvature scale may be probed by observation of large-angle fluctuations in the cosmic microwave background (CMB). We consider primordial adiabatic perturbations and discuss power spectra that are power laws in volume, wavelength, and eigenvalue of the Laplace operator. Such spectra may have arisen if, for example, the universe underwent a period of 'frustrated' inflation. The resulting large-angle anisotropies of the CMB are computed. The amplitude generally increases as Omega is decreased but decreases as h is increased. Interestingly enough, for all three Ansätze, anisotropies on angular scales larger than the curvature scale are suppressed relative to the anisotropies on scales smaller than the curvature scale, but cosmic variance makes discrimination between various models difficult. Models with 0.2 approximately less than Omega h approximately less than 0.3 appear compatible with CMB fluctuations detected by the Cosmic Background Explorer (COBE) satellite and the Tenerife experiment and with the amplitude and spectrum of fluctuations of galaxy counts in the APM, CfA, and 1.2 Jy IRAS surveys. COBE normalization for these models yields sigma_8 approximately = 0.5 - 0.7. Models with smaller values of Omega h when normalized to COBE require bias factors in excess of 2 to be compatible with the observed galaxy counts on the 8/h Mpc scale. Requiring that the age of the universe exceed 10 Gyr implies that Omega is approximately greater than 0.25. From the last-scattering term in the Sachs-Wolfe formula, large-angle anisotropies come primarily from the decay of potential fluctuations at z approximately less than 1/Omega. Thus, if the universe is open, COBE has been detecting temperature fluctuations produced at moderate redshift rather than at z approximately 1300.
Zhang, Yimei; Li, Shuai; Wang, Fei; Chen, Zhuang; Chen, Jie; Wang, Liqun
2018-09-01
Toxicity of heavy metals from industrialization poses a critical concern, and analysis of sources associated with potential human health risks is of unique significance. Assessing the human health risk of pollution sources (factored health risk) concurrently for the whole region and its sub-regions can provide more instructive information to protect specific potential victims. In this research, we establish a new expression model of human health risk based on quantitative analysis of source contributions at different spatial scales. The larger-scale grids and their spatial codes are used to initially identify the level of pollution risk, the type of pollution source, and the sensitive population at high risk. The smaller-scale grids and their spatial codes are used to identify the contribution of various sources of pollution to each sub-region (larger grid) and to assess the health risks posed by each source for each sub-region. The results of the case study show that, for children (a sensitive population, taking schools and residential areas as the major regions of activity), the major pollution sources are the abandoned lead-acid battery plant (ALP), traffic emissions, and agricultural activity. The new models and results of this research provide effective spatial information and a useful model for quantifying the hazards of source categories to human health at complex industrial systems in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.
A stochastic two-scale model for pressure-driven flow between rough surfaces
Larsson, Roland; Lundström, Staffan; Wall, Peter; Almqvist, Andreas
2016-01-01
Seal surface topography typically consists of global-scale geometric features as well as local-scale roughness details and homogenization-based approaches are, therefore, readily applied. These provide for resolving the global scale (large domain) with a relatively coarse mesh, while resolving the local scale (small domain) in high detail. As the total flow decreases, however, the flow pattern becomes tortuous and this requires a larger local-scale domain to obtain a converged solution. Therefore, a classical homogenization-based approach might not be feasible for simulation of very small flows. In order to study small flows, a model allowing feasibly-sized local domains, for really small flow rates, is developed. Realization was made possible by coupling the two scales with a stochastic element. Results from numerical experiments, show that the present model is in better agreement with the direct deterministic one than the conventional homogenization type of model, both quantitatively in terms of flow rate and qualitatively in reflecting the flow pattern. PMID:27436975
Measuring the Power Spectrum with Peculiar Velocities
NASA Astrophysics Data System (ADS)
Macaulay, Edward; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.
2012-01-01
The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large scale excess in the matter power spectrum, and can appear to be in some tension with the LCDM model. We use a composite catalogue of 4,537 peculiar velocity measurements with a characteristic depth of 33 h-1 Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results from Macaulay et al. (2011), studying minimum variance moments of the velocity field, as calculated by Watkins, Feldman & Hudson (2009) and Feldman, Watkins & Hudson (2010). We find good agreement with the LCDM model on scales of k > 0.01 h Mpc-1. We find an excess of power on scales of k < 0.01 h Mpc-1, although with a 1 sigma uncertainty which includes the LCDM model. We find that the uncertainty in the excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and nonlinear clustering in simulated peculiar velocity catalogues, and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.
Power spectrum estimation from peculiar velocity catalogues
NASA Astrophysics Data System (ADS)
Macaulay, E.; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.
2012-09-01
The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large-scale excess in the matter power spectrum and can appear to be in some tension with the Λ cold dark matter (ΛCDM) model. We use a composite catalogue of 4537 peculiar velocity measurements with a characteristic depth of 33 h-1 Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results by Macaulay et al., studying minimum variance moments of the velocity field, as calculated by Feldman, Watkins & Hudson. We find good agreement with the ΛCDM model on scales of k > 0.01 h Mpc-1. We find an excess of power on scales of k < 0.01 h Mpc-1 with a 1σ uncertainty which includes the ΛCDM model. We find that the uncertainty in excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and non-linear clustering in simulated peculiar velocity catalogues and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.
Scaling laws and technology development strategies for biorefineries and bioenergy plants.
Jack, Michael W
2009-12-01
The economies of scale of larger biorefineries or bioenergy plants compete with the diseconomies of scale of transporting geographically distributed biomass to a central location. This results in an optimum plant size that depends on the scaling parameters of the two contributions. This is a fundamental aspect of biorefineries and bioenergy plants and has important consequences for technology development as "bigger is better" is not necessarily true. In this paper we explore the consequences of these scaling effects via a simplified model of biomass transportation and plant costs. Analysis of this model suggests that there is a need for much more sophisticated technology development strategies to exploit the consequences of these scaling effects. We suggest three potential strategies in terms of the scaling parameters of the system.
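The trade-off can be made concrete with a toy unit-cost model: a capital term that falls with capacity (exponent n < 1) plus a transport term that rises with capacity. All coefficients and exponents below are invented for illustration and do not come from the paper.

```python
# Toy illustration of economies vs diseconomies of scale setting an optimum plant size.
import numpy as np

def unit_cost(P, a=100.0, n=0.6, b=2.0, m=0.5):
    """Total cost per unit of product at plant capacity P (arbitrary units)."""
    plant = a * P**(n - 1.0)      # capital/operating cost per unit, falls as P grows
    transport = b * P**m          # transport cost per unit, rises with the collection area
    return plant + transport

P = np.logspace(0, 4, 400)
cost = unit_cost(P)
P_opt = P[np.argmin(cost)]
print(f"optimum capacity ~ {P_opt:.0f} (minimum unit cost {cost.min():.1f})")
```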
Attention in recent years has focused on the trans-boundary transport of ozone and fine particulate matter between the United States and Mexico and Canada and across state boundaries in the United States. In a similar manner, but on a larger spatial scale, the export of pollutant...
Stoichiometric vs hydroclimatic controls on soil biogeochemical processes
NASA Astrophysics Data System (ADS)
Manzoni, Stefano; Porporato, Amilcare
2010-05-01
Soil nutrient cycles are controlled by both stoichiometric constraints (e.g., carbon to nutrient ratios) and hydroclimatic conditions (e.g., soil moisture and temperature). Both controls tend to act in a nonlinear manner and give rise to complex dynamics in soil biogeochemistry at different space-time scales. We first review the theoretical basis of soil biogeochemical models, looking for the general principles underlying these models across space-time scales and scientific disciplines. By comparing more than 250 models, we show that similar kinetic and stoichiometric laws, formulated to mechanistically represent the complex biochemical constraints to decomposition, are common to most models, providing a basis for their classification. Moreover, a historic analysis reveals that the complexity (e.g., phase space dimension, model architecture) and the degree and number of nonlinearities generally increased with date, while they decreased with increasing spatial and temporal scale of interest. Soil biogeochemical dynamics may be suitably conceptualized using a number of compartments (e.g., decomposers, organic substrates, inorganic ions) interacting with each other at rates that depend (nonlinearly) on climatic drivers. As a consequence, hydroclimatic-induced fluctuations at the daily scale propagate through the various soil compartments, leading to cascading effects ranging from short-term fluctuations in the smaller pools to long-lasting changes in the larger ones. Such cascading effects are known to occur in dryland ecosystems, and are increasingly being recognized to control the long-term carbon and nutrient balances in more mesic ecosystems. We also show that separating biochemical from climatic impacts on organic matter decomposition results in universal curves describing data of plant residue decomposition and nutrient mineralization across the globe. Future extensions to larger spatial scales and managed ecosystems are also briefly outlined. It is critical that future modeling efforts carefully account for the scale-dependence of their mathematical formulations, especially when applied to a wide range of scales.
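As a minimal illustration of the compartment structure described above (not any of the ~250 reviewed models), the sketch below integrates a two-pool decomposer-substrate system whose decomposition rate is modulated by an assumed soil-moisture fluctuation, showing how daily-scale hydroclimatic forcing propagates into the pools.

```python
# Minimal two-compartment decomposer-substrate sketch with a moisture-modulated rate.
import numpy as np
from scipy.integrate import solve_ivp

def moisture_factor(t):
    """Assumed daily-scale moisture fluctuation between 0.2 and 1.0."""
    return 0.6 + 0.4 * np.sin(2 * np.pi * t / 10.0)

def rhs(t, y, k=0.02, e=0.4, d=0.05, inp=1.0):
    Cs, Cb = y                                  # substrate and decomposer biomass carbon
    dec = k * moisture_factor(t) * Cs * Cb      # nonlinear, climate-modulated decomposition
    return [inp - dec, e * dec - d * Cb]        # substrate balance; decomposer growth/mortality

sol = solve_ivp(rhs, (0.0, 365.0), [100.0, 5.0], max_step=1.0)
print(f"final substrate C = {sol.y[0, -1]:.1f}, decomposer C = {sol.y[1, -1]:.1f}")
```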
Power-law expansion of the Universe from the bosonic Lorentzian type IIB matrix model
NASA Astrophysics Data System (ADS)
Ito, Yuta; Nishimura, Jun; Tsuchiya, Asato
2015-11-01
Recent studies on the Lorentzian version of the type IIB matrix model show that a (3+1)D expanding universe emerges dynamically from the (9+1)D space-time predicted by superstring theory. Here we study a bosonic matrix model obtained by omitting the fermionic matrices. With the adopted simplification and the usage of a large-scale parallel computer, we are able to perform Monte Carlo calculations with matrix size up to N = 512, which is twenty times larger than that used previously for the studies of the original model. When the matrix size is larger than some critical value N_c ≃ 110, we find that a (3+1)D expanding universe emerges dynamically with a clear large-N scaling property. Furthermore, the observed increase of the spatial extent with time t at sufficiently late times is consistent with a power-law behavior t^(1/2), which is reminiscent of the expanding behavior of the Friedmann-Robertson-Walker universe in the radiation dominated era. We discuss possible implications of this result for the original supersymmetric model including fermionic matrices.
NASA Astrophysics Data System (ADS)
Baroni, Gabriele; Zink, Matthias; Kumar, Rohini; Samaniego, Luis; Attinger, Sabine
2017-04-01
The advances in computer science and the availability of new detailed data sets have led to a growing number of distributed hydrological models applied at finer and finer grid resolutions for larger and larger catchment areas. It has been argued, however, that this trend does not necessarily guarantee a better understanding of the hydrological processes and may not even be necessary for specific modelling applications. In the present study, this topic is further discussed in relation to soil spatial heterogeneity and its effect on simulated hydrological states and fluxes. To this end, three methods are developed and used for the characterization of the soil heterogeneity at different spatial scales. The methods are applied to the soil map of the upper Neckar catchment (Germany) as an example. The different soil realizations are assessed regarding their impact on simulated states and fluxes using the distributed hydrological model mHM. The results are analysed by aggregating the model outputs at different spatial scales based on the Representative Elementary Scale (RES) concept proposed by Refsgaard et al. (2016). The analysis is further extended in the present study by aggregating the model output also at different temporal scales. The results show that small-scale soil variabilities are not relevant when integrated hydrological responses are considered, e.g., simulated streamflow or average soil moisture over sub-catchments. On the contrary, these small-scale soil variabilities strongly affect locally simulated states and fluxes, i.e., soil moisture and evapotranspiration simulated at the grid resolution. A clear trade-off is also detected by aggregating the model output by spatial and temporal scales. Although the scale at which the soil variabilities are (or are not) relevant is not universal, the RES concept provides a simple and effective framework to quantify the predictive capability of distributed models and to identify the need for further model improvements, e.g., finer resolution input. For this reason, the integration in this analysis of all the relevant input factors (e.g., precipitation, vegetation, geology) could provide strong support for the definition of the right scale for each specific model application. In this context, however, the main challenge for a proper model assessment will be the correct characterization of the spatio-temporal variability of each input factor. Refsgaard, J.C., Højberg, A.L., He, X., Hansen, A.L., Rasmussen, S.H., Stisen, S., 2016. Where are the limits of model predictive capabilities?: Representative Elementary Scale - RES. Hydrol. Process. doi:10.1002/hyp.11029
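The aggregation step behind the RES-type analysis can be sketched as block-averaging the output of several soil realizations to coarser scales and tracking how the inter-realization spread shrinks. The fields below are random placeholders, not mHM output.

```python
# Sketch: spread among realizations after block-averaging to coarser spatial scales.
import numpy as np

rng = np.random.default_rng(3)
n_real, n = 10, 64
fields = rng.normal(loc=2.0, scale=0.5, size=(n_real, n, n))   # one output field per soil realization

def block_average(f, s):
    """Average an (n, n) field over non-overlapping s x s blocks (n divisible by s)."""
    m = f.shape[0] // s
    return f[:m * s, :m * s].reshape(m, s, m, s).mean(axis=(1, 3))

for s in (1, 4, 16, 64):
    coarse = np.array([block_average(f, s) for f in fields])
    spread = coarse.std(axis=0).mean()          # mean inter-realization spread at this scale
    print(f"aggregation scale {s:2d} cells: spread = {spread:.3f}")
```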
Personality Facets and RIASEC Interests: An Integrated Model
ERIC Educational Resources Information Center
Armstrong, Patrick Ian; Anthoney, Sarah Fetter
2009-01-01
Research examining links between personality and interests has typically focused on links between measures of the five-factor model and Holland's RIASEC types. However, the five-factor model of personality can be divided into a larger set of narrow-domain personality scales measuring facets of the "big five" traits. Research in a number of fields…
Power Scaling and Seasonal Evolution of Floe Areas in the Arctic East Siberian Sea
NASA Astrophysics Data System (ADS)
Barton, C. C.; Geise, G. R.; Tebbens, S. F.
2016-12-01
The size distribution of floes, its evolution during the Arctic summer season, and a model of fragmentation that generates a power-law scaling distribution of fragment sizes are the subjects of this paper. This topic is of relevance to marine vessels that encounter floes, to the calculation of sea ice albedo, to the determination of Arctic heat exchange, which is strongly influenced by ice concentrations and the amount of open water between floes, and to photosynthetic marine organisms which are dependent upon sunlight penetrating the spaces between floes. Floes are 2-3 m thick and initially range in area from one to millions of square meters. The cumulative number versus floe area distribution of seasonal sea floes from six satellite images of the Arctic Ocean during the summer breakup and melting is well fit by two scale-invariant power-law scaling regimes for floe areas ranging from 30 m^2 to 28,400,000 m^2. Scaling exponents, B, for larger floe areas range from -0.6 to -1.0 with an average of -0.8. Scaling exponents, B, for smaller floe areas range from -0.3 to -0.6 with an average of -0.5. The inflection point between the two scaling regimes ranges from 283 x 10^2 m^2 to 4850 x 10^2 m^2 and generally moves from larger to smaller floe areas through the summer melting season. We observe that the two scaling regimes and the inflection between them are established during the initial breakup of sea ice solely by the process of fracture. The distributions of floe size regimes retain their scaling exponents as the floe pack evolves from larger to smaller floe areas from the initial breakup through the summer season, due to grinding, crushing, fracture, and melting. The scaling exponents for the floe area distribution are in the same range as those reported in previous studies of Arctic floes and as the single scaling exponents found for crushed and ground geologic materials including streambed gravel, lunar debris, and artificially crushed quartz. A probabilistic fragmentation model that produces a power-law distribution of particle sizes has been developed and will be presented.
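A sketch of the scaling analysis described above: build the cumulative number-versus-area distribution and fit separate power-law exponents above and below an assumed inflection area. The floe areas and the break point are synthetic, not the satellite-derived values.

```python
# Sketch: two-regime power-law fit of a cumulative number-versus-area distribution.
import numpy as np

rng = np.random.default_rng(4)
areas = (rng.pareto(0.8, 5000) + 1.0) * 30.0        # synthetic floe areas (m^2) with a power-law tail

A = np.sort(areas)
N_gt = np.arange(A.size, 0, -1)                     # cumulative number of floes with area >= A

A_break = 3.0e4                                     # assumed inflection point (m^2)
small, large = A < A_break, A >= A_break
B_small = np.polyfit(np.log10(A[small]), np.log10(N_gt[small]), 1)[0]
B_large = np.polyfit(np.log10(A[large]), np.log10(N_gt[large]), 1)[0]
print(f"scaling exponent B: {B_small:.2f} (small floes), {B_large:.2f} (large floes)")
```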
Conceptual design and analysis of a dynamic scale model of the Space Station Freedom
NASA Technical Reports Server (NTRS)
Davis, D. A.; Gronet, M. J.; Tan, M. K.; Thorne, J.
1994-01-01
This report documents the conceptual design study performed to evaluate design options for a subscale dynamic test model which could be used to investigate the expected on-orbit structural dynamic characteristics of the Space Station Freedom early build configurations. The baseline option was a 'near-replica' model of the SSF SC-7 pre-integrated truss configuration. The approach used to develop conceptual design options involved three sets of studies: evaluation of the full-scale design and analysis databases, conducting scale factor trade studies, and performing design sensitivity studies. The scale factor trade study was conducted to develop a fundamental understanding of the key scaling parameters that drive design, performance and cost of a SSF dynamic scale model. Four scale model options were estimated: 1/4, 1/5, 1/7, and 1/10 scale. Prototype hardware was fabricated to assess producibility issues. Based on the results of the study, a 1/4-scale size is recommended based on the increased model fidelity associated with a larger scale factor. A design sensitivity study was performed to identify critical hardware component properties that drive dynamic performance. A total of 118 component properties were identified which require high-fidelity replication. Lower fidelity dynamic similarity scaling can be used for non-critical components.
Inflated Uncertainty in Multimodel-Based Regional Climate Projections.
Madsen, Marianne Sloth; Langen, Peter L; Boberg, Fredrik; Christensen, Jens Hesselbjerg
2017-11-28
Multimodel ensembles are widely analyzed to estimate the range of future regional climate change projections. For an ensemble of climate models, the result is often portrayed by showing maps of the geographical distribution of the multimodel mean results and associated uncertainties represented by model spread at the grid point scale. Here we use a set of CMIP5 models to show that presenting statistics this way results in an overestimation of the projected range leading to physically implausible patterns of change on global but also on regional scales. We point out that similar inconsistencies occur in impact analyses relying on multimodel information extracted using statistics at the regional scale, for example, when a subset of CMIP models is selected to represent regional model spread. Consequently, the risk of unwanted impacts may be overestimated at larger scales as climate change impacts will never be realized as the worst (or best) case everywhere.
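The statistic at issue can be illustrated with a toy ensemble: the spread computed independently at each grid point is larger than the spread of spatially aggregated values, one reason grid-point statistics can overstate the plausible range at larger scales. The fields below are random stand-ins for CMIP5 output.

```python
# Toy comparison of grid-point multimodel spread vs spread of regional means.
import numpy as np

rng = np.random.default_rng(5)
n_models, ny, nx = 20, 30, 40
proj = 2.0 + rng.normal(scale=1.0, size=(n_models, ny, nx))    # e.g., projected warming (K) per model

gridpoint_spread = proj.std(axis=0)             # spread computed independently at each grid point
regional_spread = proj.mean(axis=(1, 2)).std()  # spread of area-mean values across models

print(f"mean grid-point spread = {gridpoint_spread.mean():.2f} K")
print(f"spread of regional means = {regional_spread:.2f} K")
```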
Mesoscale Models of Fluid Dynamics
NASA Astrophysics Data System (ADS)
Boghosian, Bruce M.; Hadjiconstantinou, Nicolas G.
During the last half century, enormous progress has been made in the field of computational materials modeling, to the extent that in many cases computational approaches are used in a predictive fashion. Despite this progress, modeling of general hydrodynamic behavior remains a challenging task. One of the main challenges stems from the fact that hydrodynamics manifests itself over a very wide range of length and time scales. On one end of the spectrum, one finds the fluid's "internal" scale characteristic of its molecular structure (in the absence of quantum effects, which we omit in this chapter). On the other end, the "outer" scale is set by the characteristic sizes of the problem's domain. The resulting scale separation or lack thereof, as well as the existence of intermediate scales, are key to determining the optimal approach. Successful treatments require a judicious choice of the level of description, which is a delicate balancing act between the conflicting requirements of fidelity and manageable computational cost: a coarse description typically requires models for underlying processes occurring at smaller length and time scales; on the other hand, a fine-scale model will incur a significantly larger computational cost.
A mechanical model of bacteriophage DNA ejection
NASA Astrophysics Data System (ADS)
Arun, Rahul; Ghosal, Sandip
2017-08-01
Single molecule experiments on bacteriophages show an exponential scaling for the dependence of mobility on the length of DNA within the capsid. It has been suggested that this could be due to the 'capstan mechanism' - the exponential amplification of friction forces that result when a rope is wound around a cylinder, as in a ship's capstan. Here we describe a desktop experiment that illustrates the effect. Though our model phage is a million times larger, it exhibits the same scaling observed in single molecule experiments.
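For reference, the capstan relation invoked above is T_load = T_hold * exp(mu * theta): tension (and hence friction) is amplified exponentially in the wrap angle theta. The short sketch below evaluates this amplification for an assumed friction coefficient; the numbers are illustrative only.

```python
# Worked example of the capstan equation: exponential friction amplification with wrap angle.
import numpy as np

mu = 0.3                                   # assumed friction coefficient
for turns in (0.5, 1.0, 2.0, 4.0):
    theta = 2.0 * np.pi * turns            # wrap angle in radians
    amplification = np.exp(mu * theta)     # T_load / T_hold
    print(f"{turns:3.1f} turns: friction amplified by a factor of {amplification:.1f}")
```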
NASA Astrophysics Data System (ADS)
Patel, Ravi A.; Perko, Janez; Jacques, Diederik
2017-04-01
Often, especially in disciplines related to natural porous media, such as vadose zone or aquifer hydrology or contaminant transport, the relevant spatial and temporal scales at which we need to provide information are larger than the scales at which the processes actually occur. The usual techniques for dealing with these problems assume the existence of a representative elementary volume (REV). However, in order to understand the behavior on larger scales it is important to downscale the problem onto the relevant scale of the processes. Due to limitations of resources (time, memory), the downscaling can only be carried to a certain lower scale. At this lower scale several scales may still co-exist: the scale which can be explicitly described and a scale which needs to be conceptualized by effective properties. Models which are supposed to provide effective properties at relevant scales should therefore be flexible enough to represent complex pore structure by explicit geometry on the one hand, and processes defined differently (e.g., by effective properties) which emerge at a lower scale on the other. In this work we present a state-of-the-art lattice Boltzmann simulation tool applicable to the advection-diffusion equation coupled to geochemical processes. The lattice Boltzmann transport solver can be coupled with an external geochemical solver, which allows a wide range of geochemical reaction networks to be accounted for through thermodynamic databases. Extension to multiphase systems is ongoing. We provide several examples related to the calculation of effective diffusion properties, permeability, and effective reaction rates at the continuum scale based on the pore-scale geometry.
Bridging Empirical and Physical Approaches for Landslide Monitoring and Early Warning
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia; Peters-Lidard, Christa; Adler, Robert; Kumar, Sujay; Harrison, Ken
2011-01-01
Rainfall-triggered landslides typically occur and are evaluated at local scales, using slope-stability models to calculate coincident changes in driving and resisting forces at the hillslope level in order to anticipate slope failures. Over larger areas, detailed high resolution landslide modeling is often infeasible due to difficulties in quantifying the complex interaction between rainfall infiltration and surface materials as well as the dearth of available in situ soil and rainfall estimates and accurate landslide validation data. This presentation will discuss how satellite precipitation and surface information can be applied within a landslide hazard assessment framework to improve landslide monitoring and early warning by considering two disparate approaches to landslide hazard assessment: an empirical landslide forecasting algorithm and a physical slope-stability model. The goal of this research is to advance near real-time landslide hazard assessment and early warning at larger spatial scales. This is done by employing high resolution surface and precipitation information within a probabilistic framework to provide more physically-based grounding to empirical landslide triggering thresholds. The empirical landslide forecasting tool, running in near real-time at http://trmm.nasa.gov, considers potential landslide activity at the global scale and relies on Tropical Rainfall Measuring Mission (TRMM) precipitation data and surface products to provide a near real-time picture of where landslides may be triggered. The physical approach considers how rainfall infiltration on a hillslope affects the in situ hydro-mechanical processes that may lead to slope failure. Evaluation of these empirical and physical approaches is performed within the Land Information System (LIS), a high performance land surface model processing and data assimilation system developed within the Hydrological Sciences Branch at NASA's Goddard Space Flight Center. LIS provides the capabilities to quantify uncertainty from model inputs and calculate probabilistic estimates for slope failures. Results indicate that remote sensing data can provide many of the spatiotemporal requirements for accurate landslide monitoring and early warning; however, higher resolution precipitation inputs will help to better identify small-scale precipitation forcings that contribute to significant landslide triggering. Future missions, such as the Global Precipitation Measurement (GPM) mission, will provide more frequent and extensive estimates of precipitation at the global scale, which will serve as key inputs to significantly advance the accuracy of landslide hazard assessment, particularly over larger spatial scales.
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; van Leeuwen, P. J.
2017-12-01
Model Uncertainty Quantification remains one of the central challenges of effective Data Assimilation (DA) in complex partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned, conditioned on the previous model state, during the minimization process, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
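The test bed mentioned above, the two-scale Lorenz '96 system, couples each slow variable X_k to J fast variables Y_{j,k}. The sketch below integrates the standard formulation with common textbook parameter values; it is not the authors' specific configuration or their estimation method.

```python
# Sketch: integrating the standard two-scale Lorenz '96 system.
import numpy as np
from scipy.integrate import solve_ivp

K, J = 8, 32                   # number of slow and fast variables
F, h, c, b = 20.0, 1.0, 10.0, 10.0

def l96_two_scale(t, state):
    X, Y = state[:K], state[K:]
    coupling = (h * c / b) * Y.reshape(K, J).sum(axis=1)
    dX = np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2)) - X + F - coupling
    dY = (c * b * np.roll(Y, -1) * (np.roll(Y, 1) - np.roll(Y, -2))
          - c * Y + (h * c / b) * np.repeat(X, J))
    return np.concatenate([dX, dY])

rng = np.random.default_rng(6)
state0 = np.concatenate([F + 0.01 * rng.normal(size=K), 0.01 * rng.normal(size=K * J)])
sol = solve_ivp(l96_two_scale, (0.0, 2.0), state0, max_step=0.001)
print("final slow variables:", np.round(sol.y[:K, -1], 2))
```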
Cometary atmospheres: Modeling the spatial distribution of observed neutral radicals
NASA Technical Reports Server (NTRS)
Combi, Michael R.
1986-01-01
Progress during the second year of a program of research on the modeling of the spatial distributions of cometary radicals is discussed herein in several major areas. New scale length laws for cometary C2 and CN were determined which show that the previously reported apparent drop of the C2/CN ratio at large heliocentric distances does not exist and that there is no systematic variation. Monte Carlo particle trajectory model (MCPTM) analysis of sunward and anti-sunward brightness profiles of cometary C2 was completed. This analysis implies a lifetime of 31,000 seconds for the C2 parent and an ejection speed for C2 of approximately 0.5 km/sec upon dissociation from the parent. A systematic reanalysis of published C3 and OH data was begun. Preliminary results find a heliocentric distance dependence for C3 scale lengths with a much larger variation than for C2 and CN. Scale lengths for OH are generally somewhat larger than currently accepted values. The MCPTM was updated to include the coma temperature. Finally, the collaborative effort with the University of Arizona programs has yielded some preliminary CCD images of Comet P/Halley.
Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations
NASA Astrophysics Data System (ADS)
Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara
2018-05-01
Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.
Van Strien, Jan W; Isbell, Lynne A
2017-04-07
Studies of event-related potentials in humans have established larger early posterior negativity (EPN) in response to pictures depicting snakes than to pictures depicting other creatures. Ethological research has recently shown that macaques and wild vervet monkeys respond strongly to partially exposed snake models and scale patterns on the snake skin. Here, we examined whether snake skin patterns and partially exposed snakes elicit a larger EPN in humans. In Task 1, we employed pictures with close-ups of snake skins, lizard skins, and bird plumage. In task 2, we employed pictures of partially exposed snakes, lizards, and birds. Participants watched a random rapid serial visual presentation of these pictures. The EPN was scored as the mean activity (225-300 ms after picture onset) at occipital and parieto-occipital electrodes. Consistent with previous studies, and with the Snake Detection Theory, the EPN was significantly larger for snake skin pictures than for lizard skin and bird plumage pictures, and for lizard skin pictures than for bird plumage pictures. Likewise, the EPN was larger for partially exposed snakes than for partially exposed lizards and birds. The results suggest that the EPN snake effect is partly driven by snake skin scale patterns which are otherwise rare in nature.
3D molecular models of whole HIV-1 virions generated with cellPACK
Goodsell, David S.; Autin, Ludovic; Forli, Stefano; Sanner, Michel F.; Olson, Arthur J.
2014-01-01
As knowledge of individual biological processes grows, it becomes increasingly useful to frame new findings within their larger biological contexts in order to generate new systems-scale hypotheses. This report highlights two major iterations of a whole virus model of HIV-1, generated with the cellPACK software. cellPACK integrates structural and systems biology data with packing algorithms to assemble comprehensive 3D models of cell-scale structures in molecular detail. This report describes the biological data, modeling parameters and cellPACK methods used to specify and construct editable models for HIV-1. Anticipating that cellPACK interfaces under development will enable researchers from diverse backgrounds to critique and improve the biological models, we discuss how cellPACK can be used as a framework to unify different types of data across all scales of biology. PMID:25253262
Environmental stochasticity controls soil erosion variability
Kim, Jongho; Ivanov, Valeriy Y.; Fatichi, Simone
2016-01-01
Understanding soil erosion by water is essential for a range of research areas but the predictive skill of prognostic models has been repeatedly questioned because of scale limitations of empirical data and the high variability of soil loss across space and time scales. Improved understanding of the underlying processes and their interactions are needed to infer scaling properties of soil loss and better inform predictive methods. This study uses data from multiple environments to highlight temporal-scale dependency of soil loss: erosion variability decreases at larger scales but the reduction rate varies with environment. The reduction of variability of the geomorphic response is attributed to a ‘compensation effect’: temporal alternation of events that exhibit either source-limited or transport-limited regimes. The rate of reduction is related to environment stochasticity and a novel index is derived to reflect the level of variability of intra- and inter-event hydrometeorologic conditions. A higher stochasticity index implies a larger reduction of soil loss variability (enhanced predictability at the aggregated temporal scales) with respect to the mean hydrologic forcing, offering a promising indicator for estimating the degree of uncertainty of erosion assessments. PMID:26925542
Inflationary magnetogenesis without the strong coupling problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira, Ricardo J.Z.; Jain, Rajeev Kumar; Sloth, Martin S., E-mail: ferreira@cp3.dias.sdu.dk, E-mail: jain@cp3.dias.sdu.dk, E-mail: sloth@cp3.dias.sdu.dk
2013-10-01
The simplest gauge invariant models of inflationary magnetogenesis are known to suffer from the problems of either large backreaction or strong coupling, which make it difficult to self-consistently achieve cosmic magnetic fields from inflation with a field strength larger than 10^(−32) G today on the Mpc scale. Such a strength is insufficient to act as seed for the galactic dynamo effect, which requires a magnetic field larger than 10^(−20) G. In this paper we analyze simple extensions of the minimal model, which avoid both the strong coupling and backreaction problems, in order to generate sufficiently large magnetic fields on the Mpc scale today. First we study the possibility that the coupling function which breaks the conformal invariance of electromagnetism is non-monotonic with sharp features. Subsequently, we consider the effect of lowering the energy scale of inflation jointly with a scenario of prolonged reheating where the universe is dominated by a stiff fluid for a short period after inflation. In the latter case, a systematic study shows upper bounds for the magnetic field strength today on the Mpc scale of 10^(−13) G for low scale inflation and 10^(−25) G for high scale inflation, thus improving on the previous result by 7-19 orders of magnitude. These results are consistent with the strong coupling and backreaction constraints.
Tropospheric transport differences between models using the same large-scale meteorological fields
NASA Astrophysics Data System (ADS)
Orbe, Clara; Waugh, Darryn W.; Yang, Huang; Lamarque, Jean-Francois; Tilmes, Simone; Kinnison, Douglas E.
2017-01-01
The transport of chemicals is a major uncertainty in the modeling of tropospheric composition. A common approach is to transport gases using the winds from meteorological analyses, either using them directly in a chemical transport model or by constraining the flow in a general circulation model. Here we compare the transport of idealized tracers in several different models that use the same meteorological fields taken from Modern-Era Retrospective analysis for Research and Applications (MERRA). We show that, even though the models use the same meteorological fields, there are substantial differences in their global-scale tropospheric transport related to large differences in parameterized convection between the simulations. Furthermore, we find that the transport differences between simulations constrained with the same large-scale flow are larger than differences between free-running simulations, which have differing large-scale flow but much more similar convective mass fluxes. Our results indicate that more attention needs to be paid to convective parameterizations in order to understand large-scale tropospheric transport in models, particularly in simulations constrained with analyzed winds.
Small-Scale Drop-Size Variability: Empirical Models for Drop-Size-Dependent Clustering in Clouds
NASA Technical Reports Server (NTRS)
Marshak, Alexander; Knyazikhin, Yuri; Larsen, Michael L.; Wiscombe, Warren J.
2005-01-01
By analyzing aircraft measurements of individual drop sizes in clouds, it has been shown in a companion paper that the probability of finding a drop of radius r at a linear scale l decreases as l^D(r), where 0 ≤ D(r) ≤ 1. This paper shows striking examples of the spatial distribution of large cloud drops using models that simulate the observed power laws. In contrast to currently used models that assume homogeneity and a Poisson distribution of cloud drops, these models illustrate strong drop clustering, especially with larger drops. The degree of clustering is determined by the observed exponents D(r). The strong clustering of large drops arises naturally from the observed power-law statistics. This clustering has vital consequences for rain physics, including how fast rain can form. For radiative transfer theory, clustering of large drops enhances their impact on the cloud optical path. The clustering phenomenon also helps explain why remotely sensed cloud drop size is generally larger than that measured in situ.
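Stated compactly, the scaling reported above can be written as follows; the proportionality is a paraphrase of the abstract rather than an exact equation from the paper.

```latex
p(r, l) \;\propto\; l^{\,D(r)}, \qquad 0 \le D(r) \le 1
% D(r) = 1 recovers the homogeneous (Poisson-like) case;
% D(r) < 1 implies clustering, increasingly so for the larger drops.
```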
ED(MF)n: Humidity-Convection Feedbacks in a Mass Flux Scheme Based on Resolved Size Densities
NASA Astrophysics Data System (ADS)
Neggers, R.
2014-12-01
Cumulus cloud populations remain at least partially unresolved in present-day numerical simulations of global weather and climate, and accordingly their impact on the larger-scale flow has to be represented through parameterization. Various methods have been developed over the years, ranging in complexity from the early bulk models relying on a single plume to more recent approaches that attempt to reconstruct the underlying probability density functions, such as statistical schemes and multiple-plume approaches. Most of these "classic" methods capture key aspects of cumulus cloud populations, and have been successfully implemented in operational weather and climate models. However, the ever finer discretizations of operational circulation models, driven by advances in the computational efficiency of supercomputers, are creating new problems for existing sub-grid schemes. Ideally, a sub-grid scheme should automatically adapt its impact on the resolved scales to the dimension of the grid-box within which it is supposed to act. It can be argued that this is only possible when i) the scheme is aware of the range of scales of the processes it represents, and ii) it can distinguish between contributions as a function of size. How to conceptually represent this knowledge of scale in existing parameterization schemes remains an open question that is actively researched. This study considers a relatively new class of models for sub-grid transport in which ideas from the field of population dynamics are merged with the concept of multi-plume modelling. More precisely, a multiple mass flux framework for moist convective transport is formulated in which the ensemble of plumes is created in "size-space". It is argued that thus resolving the underlying size-densities creates opportunities for introducing scale-awareness and scale-adaptivity in the scheme. The behavior of an implementation of this framework in the Eddy Diffusivity Mass Flux (EDMF) model, named ED(MF)n, is examined for a standard case of subtropical marine shallow cumulus. We ask whether a system of multiple independently resolved plumes is able to automatically create the vertical profile of bulk (mass) flux at which the sub-grid scale transport balances the imposed larger-scale forcings in the cloud layer.
Energy-Water-Land-Climate Nexus: Modeling Impacts from the Asset to Regional Scale
NASA Astrophysics Data System (ADS)
Tidwell, V. C.; Bennett, K. E.; Middleton, R. S.; Behery, S.; Macknick, J.; Corning-Padilla, A.; Brinkman, G.; Meng, M.
2016-12-01
A critical challenge for the energy-water-land nexus is understanding and modeling the connection between the natural system—including changes in climate, land use/cover, and streamflow—and the engineered system including water for energy, agriculture, and society. Equally important is understanding the linkage across scales; that is, how impacts at the asset level aggregate to influence behavior at the local to regional scale. Toward this need, a case study was conducted featuring multi-sector and multi-scale modeling centered on the San Juan River basin (a watershed that accounts for one-tenth of the Colorado River drainage area). Simulations were driven by statistically downscaled climate data from three global climate models (emission scenario RCP 8.5) and planned growth in regional water demand. The Variable Infiltration Capacity (VIC) hydrologic model was fitted with a custom vegetation mortality sub-model and used to estimate tributary inflows to the San Juan River and estimate reservoir evaporation. San Juan River operations, including releases from Navajo Reservoir, were subsequently modeled using RiverWare to estimate impacts on water deliveries out to the year 2100. Major water demands included two large coal-fired power plants, a local electric utility, river-side irrigation, the Navajo Indian Irrigation Project and instream flows managed for endangered aquatic species. Also tracked were basin exports, including water (downstream flows to the Colorado River and interbasin transfers to the Rio Grande) and interstate electric power transmission. Implications for the larger western electric grid were assessed using PLEXOS, a sub-hourly dispatch, electric production-cost model. Results highlight asset-level interactions at the energy-water-land nexus driven by climate and population dynamics; specifically, growing vulnerabilities to shorted water deliveries. Analyses also explored linkages across geographic scales from the San Juan to the larger Colorado River and Rio Grande basins as well as the western power grid.
Mesoscale mechanics of twisting carbon nanotube yarns.
Mirzaeifar, Reza; Qin, Zhao; Buehler, Markus J
2015-03-12
Fabricating continuous macroscopic carbon nanotube (CNT) yarns with mechanical properties close to individual CNTs remains a major challenge. Spinning CNT fibers and ribbons for enhancing the weak interactions between the nanotubes is a simple and efficient method for fabricating high-strength and tough continuous yarns. Here we investigate the mesoscale mechanics of twisting CNT yarns using full atomistic and coarse-grained molecular dynamics simulations, considering concurrent mechanisms at multiple length scales. To investigate the mechanical response of such a complex structure without losing insight into the molecular mechanism, we applied a multiscale strategy. The full atomistic results are used for training a coarse-grained model for studying larger systems consisting of several CNTs. The mesoscopic model parameters are updated as a function of the twist angle, based on the full atomistic results, in order to incorporate the atomistic-scale deformation mechanisms in larger-scale simulations. By bridging across two length scales, our model is capable of accurately predicting the mechanical behavior of twisted yarns while the atomistic-level deformations in individual nanotubes are integrated into the model by updating the parameters. Our results, focused on a bundle of close-packed nanotubes, provide novel mechanistic insights into the spinning of CNTs. Our simulations reveal how twisting a bundle of CNTs improves the shear interaction between the nanotubes up to a certain level due to increasing the interaction surface. Furthermore, twisting the bundle weakens the intertube interactions due to excessive deformation in the cross sections of individual CNTs in the bundle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakob, Christian
This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.
The unusual suspect: Land use is a key predictor of biodiversity patterns in the Iberian Peninsula
NASA Astrophysics Data System (ADS)
Martins, Inês Santos; Proença, Vânia; Pereira, Henrique Miguel
2014-11-01
Although land use change is a key driver of biodiversity change, related variables such as habitat area and habitat heterogeneity are seldom considered in modeling approaches at larger extents. To address this knowledge gap we tested the contribution of land use related variables to models describing richness patterns of amphibians, reptiles and passerines in the Iberian Peninsula. We analyzed the relationship between species richness and habitat heterogeneity at two spatial resolutions (i.e., 10 km × 10 km and 50 km × 50 km). Using both ordinary least squares and simultaneous autoregressive models, we assessed the relative importance of land use variables, climate variables and topographic variables. We also compared the species-area relationship with a multi-habitat model, the countryside species-area relationship, to assess the role of the area of different types of habitats on species diversity across scales. The association between habitat heterogeneity and species richness varied with the taxa and spatial resolution. A positive relationship was detected for all taxa at a grain size of 10 km × 10 km, but only passerines responded at a grain size of 50 km × 50 km. Species richness patterns were well described by abiotic predictors, but habitat predictors also explained a considerable portion of the variation. Moreover, species richness patterns were better described by a multi-habitat species-area model, incorporating land use variables, than by the classic power model, which only includes area as the single explanatory variable. Our results suggest that the role of land use in shaping species richness patterns goes beyond the local scale and persists at larger spatial scales. These findings call for integrating land use variables into models designed to assess species richness responses to large-scale environmental changes.
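For orientation, the two functional forms being compared are, in their textbook versions, the classic power-law species-area relationship and a multi-habitat (countryside) species-area relationship in which habitat areas are weighted by taxon-specific affinities; the exact parameterization used in the study may differ, so the expressions below are only an illustrative sketch.

```latex
% Classic power-law SAR: total area A is the single predictor
S = c\,A^{z}
% Countryside (multi-habitat) SAR: areas A_j of habitat types j weighted by
% affinities h_j of the taxon for each habitat (illustrative form)
S = c\left(\sum_{j} h_{j}\,A_{j}\right)^{z}
```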
High-scale axions without isocurvature from inflationary dynamics
Kearney, John; Orlofsky, Nicholas; Pierce, Aaron
2016-05-31
Observable primordial tensor modes in the cosmic microwave background (CMB) would point to a high scale of inflation H_I. If the scale of Peccei-Quinn (PQ) breaking f_a is greater than H_I/2π, CMB constraints on isocurvature naively rule out QCD axion dark matter. This assumes the potential of the axion is unmodified during inflation. We revisit models where inflationary dynamics modify the axion potential and discuss how isocurvature bounds can be relaxed. We find that models that rely solely on a larger PQ-breaking scale during inflation f_I require either late-time dilution of the axion abundance or highly super-Planckian f_I that somehow does not dominate the inflationary energy density. Models that have enhanced explicit breaking of the PQ symmetry during inflation may allow f_a close to the Planck scale. Lastly, avoiding disruption of inflationary dynamics provides important limits on the parameter space.
Enabling large-scale viscoelastic calculations via neural network acceleration
NASA Astrophysics Data System (ADS)
Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.
2017-12-01
One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity are the computational costs of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
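As an illustration of the surrogate-modeling idea described above (not the authors' network architecture, training data, or viscoelastic code), the sketch below fits a small feed-forward regressor to input-output pairs generated by a placeholder function standing in for the expensive forward model; the name expensive_forward_model and the parameter ranges are hypothetical.

```python
# Minimal surrogate-modeling sketch: replace an expensive forward model with an ANN.
# Assumes scikit-learn; `expensive_forward_model` stands in for a viscoelastic solver.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def expensive_forward_model(params):
    # Placeholder for a viscoelastic calculation: maps model parameters
    # (e.g., a relaxation time and a loading) to a predicted displacement.
    tau, load = params
    return load * (1.0 - np.exp(-1.0 / tau))

# Sample the parameter space and evaluate the expensive model once, offline.
X = rng.uniform(low=[0.1, 0.0], high=[10.0, 1.0], size=(5000, 2))
y = np.array([expensive_forward_model(p) for p in X])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a small fully connected network as the fast surrogate.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

print("held-out R^2:", surrogate.score(X_test, y_test))
```

Once trained, the surrogate is evaluated in place of the solver inside long earthquake-cycle loops, which is where the reported speed-up would come from.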
Frequency analysis of stress relaxation dynamics in model asphalts
NASA Astrophysics Data System (ADS)
Masoori, Mohammad; Greenfield, Michael L.
2014-09-01
Asphalt is an amorphous or semi-crystalline material whose mechanical performance relies on viscoelastic responses to applied strain or stress. Chemical composition and its effect on the viscoelastic properties of model asphalts have been investigated here by computing the complex modulus from molecular dynamics simulation results for two different model asphalts whose compositions each resemble the Strategic Highway Research Program AAA-1 asphalt in different ways. For a model system that contains smaller molecules, simulation results for storage and loss modulus at 443 K reach both the low and high frequency scaling limits of the Maxwell model. Results for a model system composed of larger molecules (molecular weights 300-900 g/mol) with longer branches show a quantitatively higher complex modulus that decreases significantly as temperature increases over 400-533 K. Simulation results for its loss modulus approach the low frequency scaling limit of the Maxwell model at only the highest temperature simulated. A Black plot or van Gurp-Palmen plot of complex modulus vs. phase angle for the system of larger molecules suggests some overlap among results at different temperatures at lower frequencies, with an interdependence consistent with the empirical Christensen-Anderson-Marasteanu model. Both model asphalts are thermorheologically complex at very high frequencies, where they show a loss peak that appears to be independent of temperature and density.
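For reference, the single-mode Maxwell model invoked above has the standard storage and loss moduli below (G is the relaxation modulus and τ the relaxation time); their limits give the low- and high-frequency scalings the simulations are compared against.

```latex
G'(\omega)  = G\,\frac{\omega^{2}\tau^{2}}{1+\omega^{2}\tau^{2}}, \qquad
G''(\omega) = G\,\frac{\omega\tau}{1+\omega^{2}\tau^{2}}
% Low-frequency limits:  G' \sim \omega^{2}, \quad G'' \sim \omega
% High-frequency limits: G' \to G, \quad G'' \sim \omega^{-1}
```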
Large-scale motions in the universe: Using clusters of galaxies as tracers
NASA Technical Reports Server (NTRS)
Gramann, Mirt; Bahcall, Neta A.; Cen, Renyue; Gott, J. Richard
1995-01-01
Can clusters of galaxies be used to trace the large-scale peculiar velocity field of the universe? We answer this question by using large-scale cosmological simulations to compare the motions of rich clusters of galaxies with the motion of the underlying matter distribution. Three models are investigated: Omega = 1 and Omega = 0.3 cold dark matter (CDM), and Omega = 0.3 primeval baryonic isocurvature (PBI) models, all normalized to the Cosmic Background Explorer (COBE) background fluctuations. We compare the cluster and mass distribution of peculiar velocities, bulk motions, velocity dispersions, and Mach numbers as a function of scale for R ≥ 50/h Mpc. We also present the large-scale velocity and potential maps of clusters and of the matter. We find that clusters of galaxies trace well the large-scale velocity field and can serve as an efficient tool to constrain cosmological models. The recently reported bulk motion of clusters, 689 +/- 178 km/s on an approximately 150/h Mpc scale (Lauer & Postman 1994), is larger than expected in any of the models studied (≤ 190 +/- 78 km/s).
Multi-Scale Modeling and the Eddy-Diffusivity/Mass-Flux (EDMF) Parameterization
NASA Astrophysics Data System (ADS)
Teixeira, J.
2015-12-01
Turbulence and convection play a fundamental role in many key weather and climate science topics. Unfortunately, current atmospheric models cannot explicitly resolve most turbulent and convective flow. Because of this fact, turbulence and convection in the atmosphere have to be parameterized - i.e. equations describing the dynamical evolution of the statistical properties of turbulent and convective motions have to be devised. Recently a variety of different models have been developed that attempt to simulate the atmosphere using variable resolution. A key problem however is that parameterizations are in general not explicitly aware of the resolution - the scale-awareness problem. In this context, we will present and discuss a specific approach, the Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, that not only is in itself a multi-scale parameterization but is also particularly well suited to deal with the scale-awareness problems that plague current variable-resolution models. It does so by representing small-scale turbulence using a classic Eddy-Diffusivity (ED) method, and the larger-scale (boundary layer and tropospheric-scale) eddies as a variety of plumes using the Mass-Flux (MF) concept.
Yazdandoost, Fatemeh; Mirzaeifar, Reza; Qin, Zhao; Buehler, Markus J
2017-05-04
While individual carbon nanotubes (CNTs) are among the strongest fibers known, even the strongest fabricated macroscale CNT yarns and fibers are still significantly weaker than individual nanotubes. The loss in mechanical properties is mainly because the deformation mechanism of CNT fibers is highly governed by the weak shear strength corresponding to sliding of nanotubes on each other. Adding polymer coating to the bundles, and twisting the CNT yarns to enhance the intertube interactions, are both efficient methods to improve the mechanical properties of macroscale yarns. Here, we perform molecular dynamics (MD) simulations to unravel the unknown deformation mechanism in the intertube polymer chains and also the local deformations of the CNTs at the atomistic scale. Our results show that lateral pressure can have both beneficial and adverse effects on the shear strength of polymer-coated CNTs, depending on the local deformations at the atomistic scale. In this paper we also introduce a bottom-up bridging strategy between a full atomistic model and a coarse-grained (CG) model. Our trained CG model is capable of incorporating the atomistic-scale local deformations of each CNT into the larger-scale collective behavior of bundles, which enables the model to accurately predict the effect of lateral pressure on larger CNT bundles and yarns. The developed multiscale CG model is implemented to study the effect of lateral pressure on the shear strength of straight polymer-coated CNT yarns, and also the effect of twisting on the pull-out force of bundles in spun CNT yarns.
Purves, Murray; Parkes, David
2016-05-01
Three atmospheric dispersion models--DIFFAL, HPAC, and HotSpot--of differing complexities have been validated against the witness plate deposition dataset taken during the Full-Scale Radiological Dispersal Device (FSRDD) Field Trials. The small-scale nature of these trials in comparison to many other historical radiological dispersion trials provides a unique opportunity to evaluate the near-field performance of the models considered. This paper performs validation of these models using two graphical methods of comparison: deposition contour plots and hotline profile graphs. All of the models tested are assessed to perform well, especially considering that previous model developments and validations have been focused on larger-scale scenarios. Of the models, HPAC generally produced the most accurate results, especially at locations within ∼100 m of GZ. Features present within the observed data, such as hot spots, were not well modeled by any of the codes considered. Additionally, it was found that an increase in the complexity of the meteorological data input to the models did not necessarily lead to an improvement in model accuracy; this is potentially due to the small-scale nature of the trials.
NASA Astrophysics Data System (ADS)
Altmoos, Michael; Henle, Klaus
2010-11-01
Habitat models for animal species are important tools in conservation planning. We assessed the need to consider several scales in a case study for three amphibian and two grasshopper species in the post-mining landscapes near Leipzig (Germany). The two species groups were selected because habitat analyses for grasshoppers are usually conducted on one scale only whereas amphibians are thought to depend on more than one spatial scale. First, we analysed how the preference to single habitat variables changed across nested scales. Most environmental variables were only significant for a habitat model on one or two scales, with the smallest scale being particularly important. On larger scales, other variables became significant, which cannot be recognized on lower scales. Similar preferences across scales occurred in only 13 out of 79 cases and in 3 out of 79 cases the preference and avoidance for the same variable were even reversed among scales. Second, we developed habitat models by using a logistic regression on every scale and for all combinations of scales and analysed how the quality of habitat models changed with the scales considered. To achieve a sufficient accuracy of the habitat models with a minimum number of variables, at least two scales were required for all species except for Bufo viridis, for which a single scale, the microscale, was sufficient. Only for the European tree frog ( Hyla arborea), at least three scales were required. The results indicate that the quality of habitat models increases with the number of surveyed variables and with the number of scales, but costs increase too. Searching for simplifications in multi-scaled habitat models, we suggest that 2 or 3 scales should be a suitable trade-off, when attempting to define a suitable microscale.
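The scale-combination analysis described above can be sketched as a simple model-selection loop: fit a presence/absence model for every combination of scales and compare cross-validated predictive quality. The sketch below is only illustrative and assumes hypothetical input names (the file survey_plots.csv, the response column present, and predictor columns such as cover_10m), not the variables or software used in the study.

```python
# Hedged sketch: fit habitat models for each combination of spatial scales and
# compare their cross-validated predictive quality (here, ROC AUC).
from itertools import combinations

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("survey_plots.csv")          # one row per survey plot (hypothetical file)
scales = {
    "micro": ["cover_10m", "moisture_10m"],     # hypothetical predictors per scale
    "meso":  ["cover_100m", "edge_100m"],
    "macro": ["cover_1km", "water_dist_1km"],
}

results = {}
for k in range(1, len(scales) + 1):
    for combo in combinations(scales, k):
        cols = [c for s in combo for c in scales[s]]
        model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        auc = cross_val_score(model, data[cols], data["present"],
                              cv=5, scoring="roc_auc").mean()
        results[combo] = auc

# Rank scale combinations by predictive quality; the study's trade-off question is
# whether two or three scales already capture most of the attainable accuracy.
for combo, auc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(combo, round(auc, 3))
```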
Disruption of sheet-like structures in Alfvénic turbulence by magnetic reconnection
NASA Astrophysics Data System (ADS)
Mallet, A.; Schekochihin, A. A.; Chandran, B. D. G.
2017-07-01
We propose a mechanism whereby the intense, sheet-like structures naturally formed by dynamically aligning Alfvénic turbulence are destroyed by magnetic reconnection at a scale $\hat{\lambda}_D$, larger than the dissipation scale predicted by models of intermittent, dynamically aligning turbulence. The reconnection process proceeds in several stages: first, a linear tearing mode with N magnetic islands grows and saturates, and then the X-points between these islands collapse into secondary current sheets, which then reconnect until the original structure is destroyed. This effectively imposes an upper limit on the anisotropy of the structures within the perpendicular plane, which means that at scale $\hat{\lambda}_D$ the turbulent dynamics change: at scales larger than $\hat{\lambda}_D$, the turbulence exhibits scale-dependent dynamic alignment and a spectral index approximately equal to -3/2, while at scales smaller than $\hat{\lambda}_D$, the turbulent structures undergo a succession of disruptions due to reconnection, limiting dynamic alignment, steepening the effective spectral index and changing the final dissipation scale. The scaling of $\hat{\lambda}_D$ with the Lundquist (magnetic Reynolds) number $S_{L_\perp}$ depends on the order of the statistics being considered, and on the specific model of intermittency; the transition between the two regimes in the energy spectrum is predicted at approximately $\hat{\lambda}_D \sim S_{L_\perp}^{-0.6}$. The spectral index below $\hat{\lambda}_D$ is bounded between -5/3 and -2.3. The final dissipation scale is at $\hat{\lambda}_{\eta,\infty} \sim S_{L_\perp}^{-3/4}$, the same as the Kolmogorov scale arising in theories of turbulence that do not involve scale-dependent dynamic alignment.
NASA Astrophysics Data System (ADS)
Fritts, Dave; Wang, Ling; Balsley, Ben; Lawrence, Dale
2013-04-01
A number of sources contribute to intermittent small-scale turbulence in the stable boundary layer (SBL). These include Kelvin-Helmholtz instability (KHI), gravity wave (GW) breaking, and fluid intrusions, among others. Indeed, such sources arise naturally in response to even very simple "multi-scale" superpositions of larger-scale GWs and smaller-scale GWs, mean flows, or fine structure (FS) throughout the atmosphere and the oceans. We describe here results of two direct numerical simulations (DNS) of these GW-FS interactions performed at high resolution and high Reynolds number that allow exploration of these turbulence sources and the character and effects of the turbulence that arises in these flows. Results include episodic turbulence generation, a broad range of turbulence scales and intensities, PDFs of dissipation fields exhibiting quasi-log-normal and more complex behavior, local turbulent mixing, and "sheet and layer" structures in potential temperature that closely resemble high-resolution measurements. Importantly, such multi-scale dynamics differ from their larger-scale, quasi-monochromatic gravity wave or quasi-horizontally homogeneous shear flow instabilities in significant ways. The ability to quantify such multi-scale dynamics with new, very high-resolution measurements is also advancing rapidly. New in-situ sensors on small, unmanned aerial vehicles (UAVs), balloons, or tethered systems are enabling definition of SBL (and deeper) environments and turbulence structure and dissipation fields with high spatial and temporal resolution and precision. These new measurement and modeling capabilities promise significant advances in understanding small-scale instability and turbulence dynamics, in quantifying their roles in mixing, transport, and evolution of the SBL environment, and in contributing to improved parameterizations of these dynamics in mesoscale, numerical weather prediction, climate, and general circulation models. We expect such measurement and modeling capabilities to also aid in the design of new and more comprehensive future SBL measurement programs.
NASA Astrophysics Data System (ADS)
Tewes, Walter; Buller, Oleg; Heuer, Andreas; Thiele, Uwe; Gurevich, Svetlana V.
2017-03-01
We employ kinetic Monte Carlo (KMC) simulations and a thin-film continuum model to comparatively study the transversal (i.e., Plateau-Rayleigh) instability of ridges formed by molecules on pre-patterned substrates. It is demonstrated that the evolution of the occurring instability qualitatively agrees between the two models for a single ridge as well as for two weakly interacting ridges. In particular, it is shown for both models that the instability occurs on well defined length and time scales which are, for the KMC model, significantly larger than the intrinsic scales of thermodynamic fluctuations. This is further evidenced by the similarity of dispersion relations characterizing the linear instability modes.
Computational modeling of electrostatic charge and fields produced by hypervelocity impact
Crawford, David A.
2015-05-19
Following prior experimental evidence of electrostatic charge separation, electric and magnetic fields produced by hypervelocity impact, we have developed a model of electrostatic charge separation based on plasma sheath theory and implemented it into the CTH shock physics code. Preliminary assessment of the model shows good qualitative and quantitative agreement between the model and prior experiments at least in the hypervelocity regime for the porous carbonate material tested. The model agrees with the scaling analysis of experimental data performed in the prior work, suggesting that electric charge separation and the resulting electric and magnetic fields can be a substantial effect at larger scales, higher impact velocities, or both.
NASA Astrophysics Data System (ADS)
Kofman, W.; Herique, A.; Ciarletti, V.; Lasue, J.; Levasseur-Regourd, AC.; Zine, S.; Plettemeier, D.
2017-09-01
The structure of the nucleus is one of the major unknowns in cometary science. The scientific objectives of the Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) aboard ESA's spacecraft Rosetta are to perform an interior characterization of the comet 67P/Churyumov-Gerasimenko nucleus. This is done by means of a bistatic sounding between the lander Philae, lying on the comet's surface, and the orbiter Rosetta. Current interpretation of the CONSERT signals is consistent with a highly porous, carbon-rich primitive body. Internal inhomogeneities are not detected at the wavelength scale and are either smaller, or present a low dielectric contrast. Given the high bulk porosity of 75% inside the sounded part of the nucleus, a likely interior model would be obtained by a mixture, at this 3-m size scale, of voids (vacuum) and blobs of material made of ices and dust with porosity larger than 60%. The absence of any pulse spreading due to scattering allows us to exclude heterogeneity with higher contrast (0.25) and larger size (3 m), but smaller than a few wavelengths, since larger scales would be responsible for multipath propagation. CONSERT is the first successful radar probe to study the sub-surface of a small body.
Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge
NASA Astrophysics Data System (ADS)
Park, Heon-Joon; Lee, Changyeol
2017-04-01
Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted numerous times. Among the controlling factors, gravitational acceleration (g) on the scale models was treated as a constant (Earth's gravity) in most analogue model studies, and only a few model studies considered larger gravitational acceleration by using a centrifuge (an apparatus generating large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down and accelerated deformation driven by density differences, such as salt diapirism, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST) allows a large surface area of the scale models, up to 70 by 70 cm, under the maximum capacity of 240 g-tons. Using the centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of back-arc basins. Acknowledgement This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant number 2014R1A6A3A04056405).
Scaling of hydrologic and erosion parameters derived from rainfall simulation
NASA Astrophysics Data System (ADS)
Sheridan, Gary; Lane, Patrick; Noske, Philip; Sherwin, Christopher
2010-05-01
Rainfall simulation experiments conducted at the temporal scale of minutes and the spatial scale of meters are often used to derive parameters for erosion and water quality models that operate at much larger temporal and spatial scales. While such parameterization is convenient, there has been little effort to validate this approach via nested experiments across these scales. In this paper we first review the literature relevant to some of these long acknowledged issues. We then present rainfall simulation and erosion plot data from a range of sources, including mining, roading, and forestry, to explore the issues associated with the scaling of parameters such as infiltration properties and erodibility coefficients.
The void spectrum in two-dimensional numerical simulations of gravitational clustering
NASA Technical Reports Server (NTRS)
Kauffmann, Guinevere; Melott, Adrian L.
1992-01-01
An algorithm for deriving a spectrum of void sizes from two-dimensional high-resolution numerical simulations of gravitational clustering is tested, and it is verified that it produces the correct results where those results can be anticipated. The method is used to study the growth of voids as clustering proceeds. It is found that the most stable indicator of the characteristic void 'size' in the simulations is the mean fractional area covered by voids of diameter d, in a density field smoothed at its correlation length. Very accurate scaling behavior is found in power-law numerical models as they evolve. Eventually, this scaling breaks down as the nonlinearity reaches larger scales. It is shown that this breakdown is a manifestation of the undesirable effect of boundary conditions on simulations, even with the very large dynamic range possible here. A simple criterion is suggested for deciding when simulations with modest large-scale power may systematically underestimate the frequency of larger voids.
NASA Astrophysics Data System (ADS)
Vicente, Renato; de Toledo, Charles M.; Leite, Vitor B. P.; Caticha, Nestor
2006-02-01
We investigate the Heston model with stochastic volatility and exponential tails as a model for the typical price fluctuations of the Brazilian São Paulo Stock Exchange Index (IBOVESPA). Raw prices are first corrected for inflation and a period spanning 15 years characterized by memoryless returns is chosen for the analysis. Model parameters are estimated by observing volatility scaling and correlation properties. We show that the Heston model with at least two time scales for the volatility mean reverting dynamics satisfactorily describes price fluctuations ranging from time scales larger than 20 min to 160 days. At time scales shorter than 20 min we observe autocorrelated returns and power law tails incompatible with the Heston model. Despite major regulatory changes, hyperinflation and currency crises experienced by the Brazilian market in the period studied, the general success of the description provided may be regarded as an evidence for a general underlying dynamics of price fluctuations at intermediate mesoeconomic time scales well approximated by the Heston model. We also notice that the connection between the Heston model and Ehrenfest urn models could be exploited for bringing new insights into the microeconomic market mechanics.
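For context, the standard single-time-scale Heston dynamics for the price S_t and variance v_t are given below; the analysis above extends the variance mean reversion to at least two time scales, which is not shown in this minimal form.

```latex
dS_t = \mu\,S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{S}, \qquad
dv_t = \kappa\,(\theta - v_t)\,dt + \sigma\sqrt{v_t}\,dW_t^{v}, \qquad
dW_t^{S}\,dW_t^{v} = \rho\,dt
% \kappa sets the mean-reversion rate of the variance, \theta its long-run
% level, and \sigma the volatility of volatility.
```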
Community assembly of the ferns of Florida.
Sessa, Emily B; Chambers, Sally M; Li, Daijiang; Trotta, Lauren; Endara, Lorena; Burleigh, J Gordon; Baiser, Benjamin
2018-03-01
Many ecological and evolutionary processes shape the assembly of organisms into local communities from a regional pool of species. We analyzed phylogenetic and functional diversity to understand community assembly of the ferns of Florida at two spatial scales. We built a phylogeny for 125 of the 141 species of ferns in Florida using five chloroplast markers. We calculated mean pairwise dissimilarity (MPD) and mean nearest taxon distance (MNTD) from phylogenetic distances and functional trait data for both spatial scales and compared the results to null models to assess significance. Our results for over vs. underdispersion in functional and phylogenetic diversity differed depending on spatial scale and metric considered. At the county scale, MPD revealed evidence for phylogenetic overdispersion, while MNTD revealed phylogenetic and functional underdispersion, and at the conservation area scale, MPD revealed phylogenetic and functional underdispersion while MNTD revealed evidence only of functional underdispersion. Our results are consistent with environmental filtering playing a larger role at the smaller, conservation area scale. The smaller spatial units are likely composed of fewer local habitat types that are selecting for closely related species, with the larger-scale units more likely to be composed of multiple habitat types that bring together a larger pool of species from across the phylogeny. Several aspects of fern biology, including their unique physiology and water relations and the importance of the independent gametophyte stage of the life cycle, make ferns highly sensitive to local, microhabitat conditions. © 2018 The Authors. American Journal of Botany is published by Wiley Periodicals, Inc. on behalf of the Botanical Society of America.
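The two dispersion metrics used above are straightforward to compute from a pairwise distance matrix; the sketch below is a generic illustration (toy random distances, not the Florida fern data), with significance in practice assessed against a null model that redraws species from the regional pool.

```python
# Hedged sketch of the two dispersion metrics, computed from a pairwise
# distance matrix (phylogenetic or functional) for one community.
import numpy as np

def mpd(dist, present):
    """Mean pairwise distance among the species present in a community."""
    idx = np.flatnonzero(present)
    sub = dist[np.ix_(idx, idx)]
    iu = np.triu_indices(len(idx), k=1)
    return sub[iu].mean()

def mntd(dist, present):
    """Mean nearest-taxon distance among the species present."""
    idx = np.flatnonzero(present)
    sub = dist[np.ix_(idx, idx)].astype(float)
    np.fill_diagonal(sub, np.inf)            # ignore self-distances
    return sub.min(axis=1).mean()

# Toy example: 5 species in the regional pool, 3 present in the community.
rng = np.random.default_rng(1)
d = rng.uniform(1, 10, size=(5, 5))
d = (d + d.T) / 2
np.fill_diagonal(d, 0)
community = np.array([1, 0, 1, 1, 0])
print(mpd(d, community), mntd(d, community))
# Over- vs. underdispersion is then judged by comparing these observed values
# with the distribution obtained from many randomized communities (null model).
```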
Modeling the QBO and SAO Driven by Gravity Waves
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Mengel, J. G.; Chan, K. L.; Porter, H. S.
1999-01-01
Hines' Doppler spread parameterization (DSP) for small scale gravity waves (GW) is applied in a global scale numerical spectral model (NSM) to describe the semi-annual and quasi-biennial oscillations (SAO and QBO) as well as the long term interannual variations that are driven by wave mean flow interactions. This model has been successful in simulating the salient features observed near the equator at altitudes above 20 km, including the QBO extension into the upper mesosphere inferred from UARS measurements. The model has now been extended to describe also the mean zonal and meridional circulations of the upper troposphere and lower stratosphere that affect the equatorial QBO and its global scale extension. This is accomplished in part through tuning of the GW parameterization, and preliminary results lead to the following conclusions: (1) To reproduce the upwelling at equatorial latitudes associated with the Brewer/Dobson circulation that in part is modulated in the model by the vertical component of the Coriolis force, the eddy diffusivity in the lower stratosphere had to be enhanced and the related GW spectrum modified to bring it in closer agreement with the form recommended for the DSP. (2) To compensate for the required increase in the diffusivity, the observed QBO requires a larger GW source that is closer to the middle of the range recommended for the DSP. (3) Through global scale momentum redistribution, the above developments are conducive to extending the QBO and SAO oscillations to higher latitudes. Multi-year interannual oscillations are generated through wave filtering by the solar driven annual oscillation in the zonal circulation. (4) In a 3D version of the model, wave momentum is absorbed and dissipated by tides and planetary waves. Thus, a somewhat larger GW source is required to generate realistic amplitudes for the QBO and SAO.
Strategic Expansion Models in Academic Radiology.
Natesan, Rajni; Yang, Wei T; Tannir, Habib; Parikh, Jay
2016-03-01
In response to economic pressures, academic institutions in the United States and their radiology practices, are expanding into the community to build a larger network, thereby driving growth and achieving economies of scale. These economies of scale are being achieved variously via brick-and-mortar construction, community practice acquisition, and partnership-based network expansion. We describe and compare these three expansion models within a 4-part framework of: (1) upfront investment; (2) profitability impact; (3) brand impact; and (4) risk of execution. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Flood, Johnna; Minkler, Meredith; Lavery, Susana Hennessey; Estrada, Jessica; Falbe, Jennifer
2015-01-01
As resources for health promotion become more constricted, it is increasingly important to collaborate across sectors, including the private sector. Although many excellent models for cross-sector collaboration have shown promise in the health field, collective impact (CI), an emerging model for creating larger scale change, has yet to receive…
Brett G. Dickson; Thomas D. Sisk; Steven E. Sesnie; Richard T. Reynolds; Steven S. Rosenstock; Christina D. Vojta; Michael F. Ingraldi; Jill M. Rundall
2014-01-01
Conservation planners and land managers are often confronted with scale-associated challenges when assessing the relationship between land management objectives and species conservation. Conservation of individual species typically involves site-level analyses of habitat, whereas land management focuses on larger spatial extents. New models are needed to more...
ERIC Educational Resources Information Center
Fox, Danielle Polizzi; Gottfredson, Denise C.; Kumpfer, Karol K.; Beatty, Penny D.
2004-01-01
This article discusses the challenges faced when a popular model program, the Strengthening Families Program, which in the past has been implemented on a smaller scale in single organizations, moves to a larger, multiorganization endeavor. On the basis of 42 interviews conducted with program staff, the results highlight two main themes that…
Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...
2015-01-20
Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy’s Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
A general model for the scaling of offspring size and adult size.
Falster, Daniel S; Moles, Angela T; Westoby, Mark
2008-09-01
Understanding evolutionary coordination among different life-history traits is a key challenge for ecology and evolution. Here we develop a general quantitative model predicting how offspring size should scale with adult size by combining a simple model for life-history evolution with a frequency-dependent survivorship model. The key innovation is that larger offspring are afforded three different advantages during ontogeny: higher survivorship per time, a shortened juvenile phase, and advantage during size-competitive growth. In this model, it turns out that size-asymmetric advantage during competition is the factor driving evolution toward larger offspring sizes. For simplified and limiting cases, the model is shown to produce the same predictions as the previously existing theory on which it is founded. The explicit treatment of different survival advantages has biologically important new effects, mainly through an interaction between total maternal investment in reproduction and the duration of competitive growth. This goes on to explain alternative allometries between log offspring size and log adult size, as observed in mammals (slope = 0.95) and plants (slope = 0.54). Further, it suggests how these differences relate quantitatively to specific biological processes during recruitment. In these ways, the model generalizes across previous theory and provides explanations for some differences between major taxa.
Zhu, Tong; Moussa, Ehab M; Witting, Madeleine; Zhou, Deliang; Sinha, Kushal; Hirth, Mario; Gastens, Martin; Shang, Sherwin; Nere, Nandkishor; Somashekar, Shubha Chetan; Alexeenko, Alina; Jameel, Feroz
2018-07-01
Scale-up and technology transfer of lyophilization processes remains a challenge that requires thorough characterization of the laboratory and larger scale lyophilizers. In this study, computational fluid dynamics (CFD) was employed to develop computer-based models of both laboratory and manufacturing scale lyophilizers in order to understand the differences in equipment performance arising from distinct designs. CFD coupled with steady state heat and mass transfer modeling of the vial were then utilized to study and predict independent variables such as shelf temperature and chamber pressure, and response variables such as product resistance, product temperature and primary drying time for a given formulation. The models were then verified experimentally for the different lyophilizers. Additionally, the models were applied to create and evaluate a design space for a lyophilized product in order to provide justification for the flexibility to operate within a certain range of process parameters without the need for validation. Published by Elsevier B.V.
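The steady-state vial-scale heat and mass transfer balance referred to above is commonly written in the textbook form below, where dm/dt is the sublimation rate, P_ice(T_p) the vapor pressure of ice at the product temperature, P_c the chamber pressure, R_p the product resistance, K_v the vial heat transfer coefficient, A_v the vial area, and ΔH_s the heat of sublimation; the exact parameterization used in the study may differ.

```latex
\frac{dm}{dt} = \frac{P_{\mathrm{ice}}(T_{p}) - P_{c}}{R_{p}}, \qquad
Q = A_{v}\,K_{v}\,(T_{\mathrm{shelf}} - T_{p}), \qquad
Q = \Delta H_{s}\,\frac{dm}{dt}
% Coupling the heat and mass balances yields the product temperature and the
% primary drying time for a given shelf temperature and chamber pressure.
```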
How model and input uncertainty impact maize yield simulations in West Africa
NASA Astrophysics Data System (ADS)
Waha, Katharina; Huth, Neil; Carberry, Peter; Wang, Enli
2015-02-01
Crop models are common tools for simulating crop yields and crop production in studies on food security and global change. Various uncertainties however exist, not only in the model design and model parameters, but also, and maybe even more importantly, in soil, climate and management input data. We analyze the performance of the point-scale crop model APSIM and the global-scale crop model LPJmL with different climate and soil conditions under different agricultural management in the low-input maize-growing areas of Burkina Faso, West Africa. We test the models’ response to different levels of input information, from little to detailed information on soil, climate (1961-2000) and agricultural management, and compare the models’ ability to represent the observed spatial (between locations) and temporal variability (between years) in crop yields. We found that the resolution of soil, climate and management information influences the simulated crop yields in both models. However, the differences between the two models are larger than those between input datasets, and differences between simulations with different climate and management information are larger than between simulations with different soil information. The observed spatial variability can be represented well by both models even with little information on soils and management, but APSIM simulates a higher variation between single locations than LPJmL. The agreement of simulated and observed temporal variability is lower due to non-climatic factors, e.g., investment in agricultural research and development between 1987 and 1991 in Burkina Faso, which resulted in a doubling of maize yields. The findings of our study highlight the importance of scale and model choice and show that the most detailed input data does not necessarily improve model performance.
Greater sage-grouse population trends across Wyoming
Edmunds, David; Aldridge, Cameron L.; O'Donnell, Michael; Monroe, Adrian
2018-01-01
The scale at which analyses are performed can have an effect on model results, and often one scale does not accurately describe the ecological phenomena of interest (e.g., population trends) for wide-ranging species; yet, most ecological studies are performed at a single, arbitrary scale. To best determine local and regional trends for greater sage-grouse (Centrocercus urophasianus) in Wyoming, USA, we modeled density-independent and -dependent population growth across multiple spatial scales relevant to management and conservation (Core Areas [habitat encompassing approximately 83% of the sage-grouse population on ∼24% of surface area in Wyoming], local Working Groups [7 regional areas for which groups of local experts are tasked with implementing Wyoming's statewide sage-grouse conservation plan at the local level], Core Area status (Core Area vs. Non-Core Area) by Working Groups, and Core Areas by Working Groups). Our goal was to determine the influence of fine-scale population trends (Core Areas) on larger-scale populations (Working Group Areas). We modeled the natural log of change in population size (peak male lek counts) over time to calculate the finite rate of population growth (λ) for each population of interest from 1993 to 2015. We found that in general, when Core Area status (Core Area vs. Non-Core Area) was investigated by Working Group Area, the 2 populations trended similarly and agreed with the overall trend of the Working Group Area. However, at the finer scale where Core Areas were analyzed separately, Core Areas within the same Working Group Area often trended differently, and a few large Core Areas could influence the overall Working Group Area trend and mask trends occurring in smaller Core Areas. Relatively close fine-scale populations of sage-grouse can trend differently, indicating that large-scale trends may not accurately depict what is occurring across the landscape (e.g., local effects of gas and oil fields may be masked by increases in larger populations).
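In its simplest density-independent form, the growth-rate calculation described above reduces to the standard relation between log counts and the finite rate of increase; the notation below is illustrative rather than the study's exact model.

```latex
\ln N_{t+1} - \ln N_{t} = r_{t}, \qquad
\lambda = e^{\bar{r}}
% Regressing \ln N_t against time gives \bar{r} as the slope;
% \lambda > 1 indicates a growing population and \lambda < 1 a declining one.
```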
Livi, Kenneth J T; Villalobos, Mario; Leary, Rowan; Varela, Maria; Barnard, Jon; Villacís-García, Milton; Zanella, Rodolfo; Goodridge, Anna; Midgley, Paul
2017-09-12
Two synthetic goethites of varying crystal size distributions were analyzed by BET, conventional TEM, cryo-TEM, atomic resolution STEM and HRTEM, and electron tomography in order to determine the effects of crystal size, shape, and atomic-scale surface roughness on their adsorption capacities. The two samples were determined by BET to have very different site densities based on Cr VI adsorption experiments. Model specific surface areas generated from TEM observations showed that, based on size and shape, there should be little difference in their adsorption capacities. Electron tomography revealed that both samples crystallized with an asymmetric {101} tablet habit. STEM and HRTEM images showed a significant increase in atomic-scale surface roughness of the larger goethite. This difference in roughness was quantified based on measurements of the relative abundances of crystal faces {101} and {210} for the two goethites, and a reactive surface site density was calculated for each goethite. Singly coordinated sites on face {210} are 2.5 times more dense than on face {101}, and the larger goethite showed an average total of 36% {210} as compared to 14% for the smaller goethite. This difference explains the considerably larger adsorption capacity of the larger goethite vs. the smaller sample and points toward the necessity of knowing the atomic-scale surface structure in predicting mineral adsorption processes.
A Pseudo-Vertical Equilibrium Model for Slow Gravity Drainage Dynamics
NASA Astrophysics Data System (ADS)
Becker, Beatrix; Guo, Bo; Bandilla, Karl; Celia, Michael A.; Flemisch, Bernd; Helmig, Rainer
2017-12-01
Vertical equilibrium (VE) models are computationally efficient and have been widely used for modeling fluid migration in the subsurface. However, they rely on the assumption of instant gravity segregation of the two fluid phases which may not be valid especially for systems that have very slow drainage at low wetting phase saturations. In these cases, the time scale for the wetting phase to reach vertical equilibrium can be several orders of magnitude larger than the time scale of interest, rendering conventional VE models unsuitable. Here we present a pseudo-VE model that relaxes the assumption of instant segregation of the two fluid phases by applying a pseudo-residual saturation inside the plume of the injected fluid that declines over time due to slow vertical drainage. This pseudo-VE model is cast in a multiscale framework for vertically integrated models with the vertical drainage solved as a fine-scale problem. Two types of fine-scale models are developed for the vertical drainage, which lead to two pseudo-VE models. Comparisons with a conventional VE model and a full multidimensional model show that the pseudo-VE models have much wider applicability than the conventional VE model while maintaining the computational benefit of the conventional VE model.
Impact of small-scale structures on estuarine circulation
NASA Astrophysics Data System (ADS)
Liu, Zhuo; Zhang, Yinglong J.; Wang, Harry V.; Huang, Hai; Wang, Zhengui; Ye, Fei; Sisson, Mac
2018-05-01
We present a novel and challenging application of a 3D estuary-shelf model to the study of the collective impact of many small-scale structures (bridge pilings of 1 m × 2 m in size) on larger-scale circulation in a tributary (James River) of Chesapeake Bay. We first demonstrate that the model is capable of effectively transitioning grid resolution from 400 m down to 1 m near the pilings without introducing undue numerical artifacts. We then show that despite their small sizes and collectively small area as compared to the total channel cross-sectional area, the pilings exert a noticeable impact on the large-scale circulation, and also create a rich structure of vortices and wakes around the pilings. As a result, the water quality and local sedimentation patterns near the bridge piling area are likely to be affected as well. However, when evaluated over the entire waterbody of the project area, the near-field effects are weighted by their areal percentage, which is small compared to that of the larger unaffected area, and therefore the impact on the lower James River as a whole is relatively insignificant. The study highlights the importance of the use of high resolution in assessing the near-field impact of structures.
NASA Astrophysics Data System (ADS)
Wagener, Thorsten
2017-04-01
We increasingly build and apply hydrologic models that simulate systems beyond the catchment scale. Such models run at regional, national or even continental scales. They therefore offer opportunities for new scientific insights, for example by enabling comparative hydrology or connectivity studies, and for water management, where we might better understand changes to water resources from larger scale activities like agriculture or from hazards such as droughts. However, these models also require us to rethink how we build and evaluate them given that some of the unsolved problems from the catchment scale have not gone away. So what role should such models play in scientific advancement in hydrology? What problems do we still have to resolve before they can fulfill their role? What opportunities for solving these problems are there, but have not yet been utilized? I will provide some thoughts on these issues in the context of the IAHS Panta Rhei initiative and the scientific challenges it has set out for hydrology (Montanari et al., 2013, Hydrological Sciences Journal; McMillan et al., 2016, Hydrological Sciences Journal).
Saturation of the turbulent dynamo.
Schober, J; Schleicher, D R G; Federrath, C; Bovino, S; Klessen, R S
2015-08-01
The origin of strong magnetic fields in the Universe can be explained by amplifying weak seed fields via turbulent motions on small spatial scales and subsequently transporting the magnetic energy to larger scales. This process is known as the turbulent dynamo and depends on the properties of turbulence, i.e., on the hydrodynamical Reynolds number and the compressibility of the gas, and on the magnetic diffusivity. While we know the growth rate of the magnetic energy in the linear regime, the saturation level, i.e., the ratio of magnetic energy to turbulent kinetic energy that can be reached, is not known from analytical calculations. In this paper we present a scale-dependent saturation model based on an effective turbulent resistivity which is determined by the turnover time scale of turbulent eddies and the magnetic energy density. The magnetic resistivity increases compared to the Spitzer value and the effective scale on which the magnetic energy spectrum is at its maximum moves to larger spatial scales. This process ends when the peak reaches a characteristic wave number k☆ which is determined by the critical magnetic Reynolds number. The saturation level of the dynamo also depends on the type of turbulence and differs for the limits of large and small magnetic Prandtl numbers Pm. With our model we find saturation levels between 43.8% and 1.3% for Pm≫1 and between 2.43% and 0.135% for Pm≪1, where the higher values refer to incompressible turbulence and the lower ones to highly compressible turbulence.
NASA Astrophysics Data System (ADS)
Sidle, R. C.
2013-12-01
Hydrologic, pedologic, and geomorphic processes are strongly interrelated and affected by scale. These interactions exert important controls on runoff generation, preferential flow, contaminant transport, surface erosion, and mass wasting. Measurement of hydraulic conductivity (K) and infiltration capacity at small scales generally underestimates these values for application at larger field, hillslope, or catchment scales. Both vertical and slope-parallel saturated flow and related contaminant transport are often influenced by interconnected networks of preferential flow paths, which are not captured in K measurements derived from soil cores. Using such K values in models may underestimate water and contaminant fluxes and runoff peaks. As shown in small-scale runoff plot studies, infiltration rates are typically lower than integrated infiltration across a hillslope or in headwater catchments. The resultant greater infiltration-excess overland flow in small plots compared to larger landscapes is attributed to the lack of preferential flow continuity; plot border effects; greater homogeneity of rainfall inputs, topography and soil physical properties; and magnified effects of hydrophobicity in small plots. At the hillslope scale, isolated areas with high infiltration capacity can greatly reduce surface runoff and surface erosion. These hydropedologic and hydrogeomorphic processes are also relevant to both occurrence and timing of landslides. The focus of many landslide studies has typically been either on small-scale vadose zone processes and how these affect soil mechanical properties or on larger-scale, more descriptive geomorphic studies. One of the issues in translating laboratory-based investigations on geotechnical behavior of soils to field scales where landslides occur is the characterization of large-scale hydrological processes and flow paths that occur in heterogeneous and anisotropic porous media. These processes are not only affected by the spatial distribution of soil physical properties and bioturbations, but also by geomorphic attributes. Interactions among preferential flow paths can induce rapid pore water pressure response within soil mantles and trigger landslides during storm peaks. Alternatively, in poorly developed and unstructured soils, infiltration occurs mainly through the soil matrix and a lag time exists between the rainfall peak and development of pore water pressures at depth. Deep, slow-moving mass failures are also strongly controlled by secondary porosity within the regolith, with the timing of activation linked to recharge dynamics. As such, understanding both small and larger scale processes is needed to estimate geomorphic impacts, as well as streamflow generation and contaminant migration.
Scaling Laws of Discrete-Fracture-Network Models
NASA Astrophysics Data System (ADS)
Philippe, D.; Olivier, B.; Caroline, D.; Jean-Raynald, D.
2006-12-01
The statistical description of fracture networks through scale still remains a concern for geologists, considering the complexity of fracture networks. A challenging task of the last 20 years of studies has been to find a solid and rectifiable rationale for the trivial observation that fractures exist everywhere and at all sizes. The emergence of fractal models and power-law distributions quantifies this fact, and postulates in some ways that small-scale fractures are genetically linked to their larger-scale relatives. But the validation of these scaling concepts still remains an issue considering the unreachable amount of information that would be necessary with regard to the complexity of natural fracture networks. Beyond the theoretical interest, a scaling law is a basic and necessary ingredient of Discrete-Fracture-Network (DFN) models that are used for many environmental and industrial applications (groundwater resources, mining industry, assessment of the safety of deep waste disposal sites, etc.). Indeed, such a function is necessary to assemble scattered data, taken at different scales, into a unified scaling model, and to interpolate fracture densities between observations. In this study, we discuss some important issues related to scaling laws of DFN: we first describe a complete theoretical and mathematical framework that accounts for both the fracture-size distribution and the fracture clustering through scales (fractal dimension); we review the scaling laws that have been obtained, and we discuss the ability of fracture datasets to really constrain the parameters of the DFN model; and finally we discuss the limits of scaling models.
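As a small illustration of the power-law size ingredient discussed above, the sketch below draws fracture lengths from a truncated power-law density n(l) ~ l^-a by inverse-CDF sampling; the exponent, cutoffs, and function name are assumptions for demonstration, and a full DFN generator would also have to impose the fractal clustering of fracture centers.

```python
import numpy as np

def sample_fracture_lengths(n, a, l_min, l_max, rng=None):
    """Draw fracture lengths from a truncated power-law density n(l) ~ l**(-a),
    a > 1, via inverse-CDF sampling. Exponent and cutoffs are illustrative;
    real DFN models constrain them from multi-scale outcrop and borehole data."""
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    lo, hi = l_min ** (1.0 - a), l_max ** (1.0 - a)
    return (lo + u * (hi - lo)) ** (1.0 / (1.0 - a))

lengths = sample_fracture_lengths(10_000, a=2.7, l_min=1.0, l_max=1_000.0, rng=0)
print(lengths.min(), np.median(lengths), lengths.max())
```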
Pore-Scale Modeling of Pore Structure Effects on P-Wave Scattering Attenuation in Dry Rocks
Li, Tianyang; Qiu, Hao; Wang, Feifei
2015-01-01
Underground rocks usually have complex pore systems with a variety of pore types and a wide range of pore sizes. The effects of pore structure on elastic wave attenuation cannot be neglected. We investigated the pore structure effects on P-wave scattering attenuation in dry rocks by pore-scale modeling based on the wave theory and the similarity principle. Our modeling results indicate that pore size, pore shape (such as aspect ratio), and pore density are important factors influencing P-wave scattering attenuation in porous rocks, and can explain the variation of scattering attenuation at the same porosity. From the perspective of scattering attenuation, porous rocks can safely be treated under the long-wavelength assumption when the ratio of wavelength to pore size is larger than 15. Under the long-wavelength condition, the scattering attenuation coefficient increases as a power function of pore density and increases exponentially with aspect ratio. For a certain porosity, rocks with smaller aspect ratio and/or larger pore size have stronger scattering attenuation. When the pore aspect ratio is larger than 0.5, the variation of scattering attenuation at the same porosity is dominantly caused by pore size and is almost independent of the pore aspect ratio. These results lay a foundation for pore structure inversion from elastic wave responses in porous rocks. PMID:25961729
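The long-wavelength criterion quoted above (wavelength-to-pore-size ratio larger than about 15) is straightforward to check for a given measurement; the velocity, frequency, and pore size in the sketch below are illustrative values, not data from the study.

```python
def long_wavelength_ok(velocity_m_s, frequency_hz, pore_size_m, ratio_threshold=15.0):
    """Check the long-wavelength condition reported in the abstract:
    wavelength / pore size > ~15. Inputs here are user-chosen examples."""
    ratio = (velocity_m_s / frequency_hz) / pore_size_m
    return ratio, ratio > ratio_threshold

ratio, ok = long_wavelength_ok(velocity_m_s=4000.0, frequency_hz=1.0e5, pore_size_m=1.0e-4)
print(f"wavelength/pore size = {ratio:.0f}, long-wavelength regime: {ok}")
```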
High Performance Computing for Modeling Wind Farms and Their Impact
NASA Astrophysics Data System (ADS)
Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.
2016-12-01
As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulating everything from continental flows down to the flow over a wind turbine blade, including the blade boundary layer, spanning fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the lower scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development as well as improvement in wind plant performance and enhancements to the transmission infrastructure will also be discussed.
Roadmap for Scaling and Multifractals in Geosciences: still a long way to go ?
NASA Astrophysics Data System (ADS)
Schertzer, Daniel; Lovejoy, Shaun
2010-05-01
The interest in scale symmetries (scaling) in Geosciences has never lessened since the first pioneering EGS session on chaos and fractals 22 years ago. The corresponding NP activities have been steadily increasing, covering a wider and wider diversity of geophysical phenomena and range of space-time scales. Whereas interest was initially largely focused on atmospheric turbulence, rain and clouds at small scales, it has quickly broadened to much larger scales and to much wider scale ranges, to include ocean sciences, solid earth and space physics. Indeed, the scale problem being ubiquitous in Geosciences, it is indispensable to share the efforts and the resulting knowledge as much as possible. There have been numerous achievements which have followed from the exploration of larger and larger datasets with finer and finer resolutions, and from both modelling and theoretical discussions, particularly on formalisms for intermittency, anisotropy and scale symmetry, and on multiple scaling (multifractals) vs. simple scaling. We are now way beyond the early pioneering but tentative attempts using crude estimates of unique scaling exponents to bring some credence to the fact that scale symmetries are key to most nonlinear geoscience problems. Nowadays, we need to better demonstrate that scaling brings effective solutions to geosciences and therefore to society. A large part of the answer corresponds to our capacity to create much more universal and flexible tools to multifractally analyse, in straightforward and reliable manners, complex and complicated systems such as the climate. Preliminary steps in this direction are already quite encouraging: they show that such approaches explain both the difficulty of classical techniques to find trends in climate scenarios (particularly for extremes) and resolve them with the help of scaling estimators. The question of the reliability and accuracy of these methods is not trivial. After discussing these important, but rather short term issues, we will point out more general questions, which can be put together into the following provocative question: how to convert the classical time-evolving deterministic PDEs into dynamical multifractal systems? We will argue that this corresponds to an already active field of research, which includes: multifractals as generic solutions of nonlinear PDEs (exact results for the 1D Burgers equation and a few other caricatures of the Navier-Stokes equations, prospects for 3D Burgers equations), cascade structures of numerical weather models, links between multifractal processes and random dynamical systems, and the challenging debate on the most relevant stochastic multifractal formalism, whereas there is already a rather general consensus about the deterministic one.
Does Nudging Squelch the Extremes in Regional Climate Modeling?
An important question in regional climate downscaling is whether to constrain (nudge) the interior of the limited-area domain toward the larger-scale driving fields. Prior research has demonstrated that interior nudging can increase the skill of regional climate predictions origin...
Evaluating local crop residue biomass supply: Economic and environmental impacts
USDA-ARS's Scientific Manuscript database
The increasing interest in energy production from biomass requires a better understanding of potential local production and environmental impacts. This information is needed by local producers, biomass industry, and other stakeholders, and for larger scale analyses. This study models biomass product...
Laboratory generated M -6 earthquakes
McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.
2014-01-01
We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.
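To put the reported magnitudes in physical units, one can invert the standard moment-magnitude relation M_w = (2/3) log10(M0) - 6.07, with M0 in N·m; this is the general seismological convention (the constant is quoted as 6.03-6.07 depending on the formulation), not a formula taken from the paper, and the magnitudes below are simply the range mentioned above.

```python
def seismic_moment(m_w):
    """Seismic moment (N*m) from moment magnitude via the standard relation
    M_w = (2/3) * log10(M0) - 6.07 (M0 in N*m)."""
    return 10.0 ** (1.5 * (m_w + 6.07))

# Magnitudes spanning the laboratory events discussed in the abstract.
for m_w in (-7.0, -6.0, -5.5):
    print(f"M_w = {m_w:+.1f}  ->  M0 ~ {seismic_moment(m_w):.2g} N*m")
```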
What spatial scales are believable for climate model projections of sea surface temperature?
NASA Astrophysics Data System (ADS)
Kwiatkowski, Lester; Halloran, Paul R.; Mumby, Peter J.; Stephenson, David B.
2014-09-01
Earth system models (ESMs) provide high resolution simulations of variables such as sea surface temperature (SST) that are often used in off-line biological impact models. Coral reef modellers have used such model outputs extensively to project both regional and global changes to coral growth and bleaching frequency. We assess model skill at capturing sub-regional climatologies and patterns of historical warming. This study uses an established wavelet-based spatial comparison technique to assess the skill of the Coupled Model Intercomparison Project Phase 5 models to capture spatial SST patterns in coral regions. We show that models typically have medium to high skill at capturing climatological spatial patterns of SSTs within key coral regions, with model skill typically improving at larger spatial scales (≥4°). However, models have much lower skill at modelling historical warming patterns and are shown to often perform no better than chance at regional scales (e.g., Southeast Asia) and worse than chance at finer scales (<8°). Our findings suggest that output from current generation ESMs is not yet suitable for making sub-regional projections of change in coral bleaching frequency and other marine processes linked to SST warming.
Reiter, Michael A; Saintil, Max; Yang, Ziming; Pokrajac, Dragoljub
2009-08-01
Conceptual modeling is a useful tool for identifying pathways between drivers, stressors, Valued Ecosystem Components (VECs), and services that are central to understanding how an ecosystem operates. The St. Jones River watershed, DE is a complex ecosystem, and because management decisions must include ecological, social, political, and economic considerations, a conceptual model is a good tool for accommodating the full range of inputs. In 2002, a Four-Component, Level 1 conceptual model was formed for the key habitats of the St. Jones River watershed, but since the habitat level of resolution is too fine for some important watershed-scale issues we developed a functional watershed-scale model using the existing narrowed habitat-scale models. The narrowed habitat-scale conceptual models and associated matrices developed by Reiter et al. (2006) were combined with data from the 2002 land use/land cover (LULC) GIS-based maps of Kent County in Delaware to assemble a diagrammatic and numerical watershed-scale conceptual model incorporating the calculated weight of each habitat within the watershed. The numerical component of the assembled watershed model was subsequently subjected to the same Monte Carlo narrowing methodology used for the habitat versions to refine the diagrammatic component of the watershed-scale model. The narrowed numerical representation of the model was used to generate forecasts for changes in the parameters "Agriculture" and "Forest", showing that land use changes in these habitats propagated through the results of the model by the weighting factor. Also, the narrowed watershed-scale conceptual model identified some key parameters upon which to focus research attention and management decisions at the watershed scale. The forecast and simulation results seemed to indicate that the watershed-scale conceptual model does lead to different conclusions than the habitat-scale conceptual models for some issues at the larger watershed scale.
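A toy version of the area-weighted assembly step described above: habitat-scale interaction matrices are combined into a watershed-scale matrix using each habitat's areal fraction. The matrices, habitat names, and weights below are made up for illustration; the study derived its weights from the 2002 LULC maps of Kent County.

```python
import numpy as np

# Hypothetical habitat-scale interaction matrices (rows/columns could be model
# parameters such as drivers and VECs) combined by areal weight.
habitat_matrices = {
    "forest":      np.array([[0.0, 0.6], [0.2, 0.0]]),
    "agriculture": np.array([[0.0, 0.3], [0.5, 0.0]]),
    "wetland":     np.array([[0.0, 0.8], [0.4, 0.0]]),
}
area_fraction = {"forest": 0.25, "agriculture": 0.55, "wetland": 0.20}  # sums to 1

watershed_matrix = sum(area_fraction[h] * m for h, m in habitat_matrices.items())
print(watershed_matrix)
```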
Turbulent kinetics of a large wind farm and their impact in the neutral boundary layer
Na, Ji Sung; Koo, Eunmo; Munoz-Esparza, Domingo; ...
2015-12-28
High-resolution large-eddy simulation of the flow over a large wind farm (64 wind turbines) is performed using the HIGRAD/FIRETEC-WindBlade model, which is a high-performance computing wind turbine–atmosphere interaction model that uses the Lagrangian actuator line method to represent rotating turbine blades. These high-resolution large-eddy simulation results are used to parameterize the thrust and power coefficients that contain information about turbine interference effects within the wind farm. Those coefficients are then incorporated into the WRF (Weather Research and Forecasting) model in order to evaluate interference effects in larger-scale models. In the high-resolution WindBlade wind farm simulation, insufficient distance between turbines creates the interference between turbines, including significant vertical variations in momentum and turbulent intensity. The characteristics of the wake are further investigated by analyzing the distribution of the vorticity and turbulent intensity. Quadrant analysis in the turbine and post-turbine areas reveals that the ejection motion induced by the presence of the wind turbines is dominant compared to that in the other quadrants, indicating that the sweep motion is increased at the location where strong wake recovery occurs. Regional-scale WRF simulations reveal that although the turbulent mixing induced by the wind farm is partly diffused to the upper region, there is no significant change in the boundary layer depth. The velocity deficit does not appear to be very sensitive to the local distribution of turbine coefficients. However, differences of about 5% on parameterized turbulent kinetic energy were found depending on the turbine coefficient distribution. Furthermore, turbine coefficients that consider interference in the wind farm should be used in wind farm parameterization for larger-scale models to better describe sub-grid scale turbulent processes.
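The thrust and power coefficients mentioned above enter larger-scale wind-farm parameterizations through the usual actuator-disk relations T = 0.5*rho*A*C_T*U^2 and P = 0.5*rho*A*C_P*U^3. The sketch below evaluates these relations with illustrative coefficient values; in the study the coefficients are fitted from the WindBlade LES and vary with turbine position inside the farm.

```python
import math

def rotor_thrust_and_power(u_hub, rotor_diameter, c_t, c_p, rho=1.225):
    """Actuator-disk thrust (N) and power (W) from hub-height wind speed.
    C_T and C_P here are illustrative, not the LES-derived values."""
    area = math.pi * (rotor_diameter / 2.0) ** 2
    thrust = 0.5 * rho * area * c_t * u_hub ** 2
    power = 0.5 * rho * area * c_p * u_hub ** 3
    return thrust, power

t, p = rotor_thrust_and_power(u_hub=8.0, rotor_diameter=100.0, c_t=0.75, c_p=0.45)
print(f"thrust ~ {t / 1e3:.0f} kN, power ~ {p / 1e6:.2f} MW")
```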
NASA Astrophysics Data System (ADS)
Wagener, T.
2017-12-01
Current societal problems and questions demand that we increasingly build hydrologic models for regional or even continental scale assessment of global change impacts. Such models offer new opportunities for scientific advancement, for example by enabling comparative hydrology or connectivity studies, and for improved support of water management decision, since we might better understand regional impacts on water resources from large scale phenomena such as droughts. On the other hand, we are faced with epistemic uncertainties when we move up in scale. The term epistemic uncertainty describes those uncertainties that are not well determined by historical observations. This lack of determination can be because the future is not like the past (e.g. due to climate change), because the historical data is unreliable (e.g. because it is imperfectly recorded from proxies or missing), or because it is scarce (either because measurements are not available at the right scale or there is no observation network available at all). In this talk I will explore: (1) how we might build a bridge between what we have learned about catchment scale processes and hydrologic model development and evaluation at larger scales. (2) How we can understand the impact of epistemic uncertainty in large scale hydrologic models. And (3) how we might utilize large scale hydrologic predictions to understand climate change impacts, e.g. on infectious disease risk.
Coarse-grained molecular dynamics simulations for giant protein-DNA complexes
NASA Astrophysics Data System (ADS)
Takada, Shoji
Biomolecules are highly hierarchic and intrinsically flexible. Thus, computational modeling calls for multi-scale methodologies. We have been developing a coarse-grained biomolecular model where on-average 10-20 atoms are grouped into one coarse-grained (CG) particle. Interactions among CG particles are tuned based on atomistic interactions and the fluctuation matching algorithm. CG molecular dynamics methods enable us to simulate much longer time scale motions of much larger molecular systems than fully atomistic models. After broad sampling of structures with CG models, we can easily reconstruct atomistic models, from which one can continue conventional molecular dynamics simulations if desired. Here, we describe our CG modeling methodology for protein-DNA complexes, together with various biological applications, such as the DNA duplication initiation complex, model chromatins, and transcription factor dynamics on chromatin-like environment.
This program is part of a larger program called ECOHAB: Florida that includes this study as well as physical oceanography, circulation patterns, and shelf scale modeling for predicting the occurrence and transport of Karenia brevis (=Gymnodinium breve) red tides. The physical par...
The baryonic self similarity of dark matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alard, C., E-mail: alard@iap.fr
2014-06-20
Cosmological simulations indicate that dark matter halos have specific self-similar properties. However, the halo similarity is affected by the baryonic feedback. By using momentum-driven winds as a model to represent the baryon feedback, an equilibrium condition is derived which directly implies the emergence of a new type of similarity. The new self-similar solution has constant acceleration at a reference radius for both dark matter and baryons. This model receives strong support from the observations of galaxies. The new self-similar properties imply that the total acceleration at larger distances is scale-free, the transition between the dark-matter- and baryon-dominated regimes occurs at a constant acceleration, and the maximum amplitude of the velocity curve at larger distances is proportional to M^{1/4}. These results demonstrate that this self-similar model is consistent with the basics of modified Newtonian dynamics (MOND) phenomenology. In agreement with the observations, the coincidence between the self-similar model and MOND breaks at the scale of clusters of galaxies. Some numerical experiments show that the behavior of the density near the origin is closely approximated by an Einasto profile.
Similarity Rules for Scaling Solar Sail Systems
NASA Technical Reports Server (NTRS)
Canfield, Stephen L.; Peddieson, John; Garbe, Gregory
2010-01-01
Future science missions will require solar sails on the order of 200 square meters (or larger). However, ground demonstrations and flight demonstrations must be conducted at significantly smaller sizes, due to limitations of ground-based facilities and cost and availability of flight opportunities. For this reason, the ability to understand the process of scalability, as it applies to solar sail system models and test data, is crucial to the advancement of this technology. This paper will approach the problem of scaling in solar sail models by developing a set of scaling laws or similarity criteria that will provide constraints in the sail design process. These scaling laws establish functional relationships between design parameters of a prototype and model sail that are created at different geometric sizes. This work is applied to a specific solar sail configuration and results in three (four) similarity criteria for static (dynamic) sail models. Further, it is demonstrated that even in the context of unique sail material requirements and gravitational load of earth-bound experiments, it is possible to develop appropriate scaled sail experiments. In the longer term, these scaling laws can be used in the design of scaled experimental tests for solar sails and in analyzing the results from such tests.
Local dynamics and spatiotemporal chaos. The Kuramoto-Sivashinsky equation: A case study
NASA Astrophysics Data System (ADS)
Wittenberg, Ralf Werner
The nature of spatiotemporal chaos in extended continuous systems is not yet well-understood. In this thesis, a model partial differential equation, the Kuramoto-Sivashinsky (KS) equation u_t + u_xxxx + u_xx + u u_x = 0 on a large one-dimensional periodic domain, is studied analytically, numerically, and through modeling to obtain a more detailed understanding of the observed spatiotemporally complex dynamics. In particular, with the aid of a wavelet decomposition, the relevant dynamical interactions are shown to be localized in space and scale. Motivated by these results, and by the idea that the attractor on a large domain may be understood via attractors on smaller domains, a spatially localized low-dimensional model for a minimal chaotic box is proposed. A (de)stabilized extension of the KS equation has recently attracted increased interest; for this situation, dissipativity and analyticity are proven, and an explicit shock-like solution is constructed which sheds light on the difficulties in obtaining optimal bounds for the KS equation. For the usual KS equation, the spatiotemporally chaotic state is carefully characterized in real, Fourier and wavelet space. The wavelet decomposition provides good scale separation which isolates the three characteristic regions of the dynamics: large scales of slow Gaussian fluctuations, active scales containing localized interactions of coherent structures, and small scales. Space localization is shown through a comparison of various correlation lengths and a numerical experiment in which different modes are uncoupled to estimate a dynamic interaction length. A detailed picture of the contributions of different scales to the spatiotemporally complex dynamics is obtained via a Galerkin projection of the KS equation onto the wavelet basis, and an extensive series of numerical experiments in which different combinations of wavelet levels are eliminated or forced. These results, and a formalism to derive an effective equation for periodized subsystems externally forced from a larger system, motivate various models for spatially localized forced systems. There is convincing evidence that short periodized systems, internally forced at the largest scales, form a minimal model for the observed extensively chaotic dynamics in larger domains.
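For readers unfamiliar with the KS equation, a minimal pseudo-spectral time-stepper (semi-implicit Euler, no dealiasing) on a periodic domain is sketched below; it is only meant to produce the familiar spatiotemporally chaotic state, not the wavelet-based Galerkin machinery developed in the thesis. Domain size, resolution, and time step are illustrative choices.

```python
import numpy as np

# Minimal pseudo-spectral sketch of u_t + u_xxxx + u_xx + u*u_x = 0 on a
# periodic domain of length L (semi-implicit Euler, no dealiasing).
L, N, dt, steps = 100.0, 256, 0.01, 20_000
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
lin = k**2 - k**4                        # Fourier symbol of -(d2/dx2) - (d4/dx4)

u = 0.1 * np.cos(2.0 * np.pi * x / L) * (1.0 + np.sin(2.0 * np.pi * x / L))
u_hat = np.fft.fft(u)
for _ in range(steps):
    u_phys = np.real(np.fft.ifft(u_hat))
    nonlin = -0.5j * k * np.fft.fft(u_phys ** 2)   # Fourier transform of -(u^2/2)_x
    u_hat = (u_hat + dt * nonlin) / (1.0 - dt * lin)
u = np.real(np.fft.ifft(u_hat))
print(u.min(), u.max())                  # O(1) chaotic field after t = 200
```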
The dynamical landscape of marine phytoplankton diversity
Lévy, Marina; Jahn, Oliver; Dutkiewicz, Stephanie; Follows, Michael J.; d'Ovidio, Francesco
2015-01-01
Observations suggest that the landscape of marine phytoplankton assemblage might be strongly heterogeneous at the dynamical mesoscale and submesoscale (10–100 km, days to months), with potential consequences in terms of global diversity and carbon export. But these variations are not well documented as synoptic taxonomic data are difficult to acquire. Here, we examine how phytoplankton assemblage and diversity vary between mesoscale eddies and submesoscale fronts. We use a multi-phytoplankton numerical model embedded in a mesoscale flow representative of the North Atlantic. Our model results suggest that the mesoscale flow dynamically distorts the niches predefined by environmental contrasts at the basin scale and that the phytoplankton diversity landscape varies over temporal and spatial scales that are one order of magnitude smaller than those of the basin-scale environmental conditions. We find that any assemblage and any level of diversity can occur in eddies and fronts. However, on a statistical level, the results suggest a tendency for larger diversity and more fast-growing types at fronts, where nutrient supplies are larger and where populations of adjacent water masses are constantly brought into contact; and lower diversity in the core of eddies, where water masses are kept isolated long enough to enable competitive exclusion. PMID:26400196
Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models
Cowley, Benjamin R.; Doiron, Brent; Kohn, Adam
2016-01-01
Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction—shared dimensionality and percent shared variance—with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling and help relate these outputs to the underlying network structure. PMID:27926936
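A compact sketch of the two quantities tracked above, using scikit-learn's FactorAnalysis on synthetic spike counts: shared dimensionality is taken here as the number of eigenvalues of the shared covariance L L^T needed to reach 95% of the shared variance, and percent shared variance is trace(L L^T) relative to total variance. The synthetic data, factor count, and the 95% criterion are assumptions for illustration; the estimators used in the paper (e.g., cross-validated model selection) differ in detail.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Toy spike-count matrix (trials x neurons); real analyses use recorded data.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 3))                       # 3 shared latent signals
loadings = rng.normal(scale=0.8, size=(3, 40))
counts = latent @ loadings + rng.normal(scale=1.0, size=(500, 40))

fa = FactorAnalysis(n_components=10).fit(counts)
shared_cov = fa.components_.T @ fa.components_           # L L^T
eigvals = np.linalg.eigvalsh(shared_cov)[::-1]
shared_dim = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95) + 1)
pct_shared = shared_cov.trace() / (shared_cov.trace() + fa.noise_variance_.sum())
print(f"shared dimensionality ~ {shared_dim}, percent shared variance ~ {100 * pct_shared:.0f}%")
```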
A Life-Cycle Model of Human Social Groups Produces a U-Shaped Distribution in Group Size
Salali, Gul Deniz; Whitehouse, Harvey; Hochberg, Michael E.
2015-01-01
One of the central puzzles in the study of sociocultural evolution is how and why transitions from small-scale human groups to large-scale, hierarchically more complex ones occurred. Here we develop a spatially explicit agent-based model as a first step towards understanding the ecological dynamics of small and large-scale human groups. By analogy with the interactions between single-celled and multicellular organisms, we build a theory of group lifecycles as an emergent property of single cell demographic and expansion behaviours. We find that once the transition from small-scale to large-scale groups occurs, a few large-scale groups continue expanding while small-scale groups gradually become scarcer, and large-scale groups become larger in size and fewer in number over time. Demographic and expansion behaviours of groups are largely influenced by the distribution and availability of resources. Our results conform to a pattern of human political change in which religions and nation states come to be represented by a few large units and many smaller ones. Future enhancements of the model should include decision-making rules and probabilities of fragmentation for large-scale societies. We suggest that the synthesis of population ecology and social evolution will generate increasingly plausible models of human group dynamics. PMID:26381745
Softened gravity and the extension of the standard model up to infinite energy
NASA Astrophysics Data System (ADS)
Giudice, Gian F.; Isidori, Gino; Salvio, Alberto; Strumia, Alessandro
2015-02-01
Attempts to solve naturalness by having the weak scale as the only breaking of classical scale invariance have to deal with two severe difficulties: gravity and the absence of Landau poles. We show that solutions to the first problem require premature modifications of gravity at scales no larger than 10^11 GeV, while the second problem calls for many new particles at the weak scale. To build models that fulfill these properties, we classify 4-dimensional Quantum Field Theories that satisfy Total Asymptotic Freedom (TAF): the theory holds up to infinite energy, where all coupling constants flow to zero. We develop a technique to identify such theories and determine their low-energy predictions. Since the Standard Model turns out to be asymptotically free only under the unphysical conditions g_1 = 0, M_t = 186 GeV, M_τ = 0, M_h = 163 GeV, we explore some of its weak-scale extensions that satisfy the requirements for TAF.
Le Mouël, Jean-Louis; Allègre, Claude J.; Narteau, Clément
1997-01-01
A scaling law approach is used to simulate the dynamo process of the Earth's core. The model is made of embedded turbulent domains of increasing dimensions, up to the largest, whose size is comparable with the size of the core, pervaded by large-scale magnetic fields. Left-handed or right-handed cyclones appear at the lowest scale, the scale of the elementary domains of the hierarchical model, and disappear. These elementary domains then behave like electromotor generators with opposite polarities depending on whether they contain a left-handed or a right-handed cyclone. To transfer the behavior of the elementary domains to larger ones, a dynamic renormalization approach is used. A simple rule is adopted to determine whether a domain of scale l is a generator—and what its polarity is—as a function of the state of the scale-(l − 1) domains it is made of. This mechanism is used as the main ingredient of a kinematic dynamo model, which displays polarity intervals, excursions, and reversals of the geomagnetic field. PMID:11038547
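The hierarchical transfer rule can be illustrated with a toy renormalization step: elementary domains carry a random polarity (or are inactive), and each larger domain takes the sign of the majority of its sub-domains. The majority rule, branching factor, and probabilities below are stand-ins for illustration, not the specific rule used in the paper.

```python
import numpy as np

def renormalize_polarity(levels=4, branching=8, p_cyclone=0.6, rng=None):
    """Toy hierarchical aggregation: elementary domains are generators of
    polarity +1/-1 (or inactive, 0); each larger domain takes the sign of the
    sum over its sub-domains (a simple majority-like rule)."""
    rng = np.random.default_rng(rng)
    n_elem = branching ** levels
    state = rng.choice([-1, 0, 1], size=n_elem,
                       p=[p_cyclone / 2, 1 - p_cyclone, p_cyclone / 2])
    for _ in range(levels):
        state = np.sign(state.reshape(-1, branching).sum(axis=1)).astype(int)
    return int(state[0])          # polarity of the largest (core-scale) domain

polarities = [renormalize_polarity(rng=seed) for seed in range(20)]
print(polarities)                 # sign changes mimic excursions and reversals
```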
1983-08-01
for larger size ships. The lessons learned related to the behaviour of the propulsion of this ship as well as those related to scaling methodologies...were addressed. The key phenomenon that affects the scaling is the fracturing behaviour of model ice and how it scales to natural ice. The key...users' point of view, based on the Kigoriak experience. Essentially, attention is drawn to two areas: 1. The behaviour of ice around the propulsion which
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makedonska, Nataliia; Kwicklis, Edward Michael; Birdsell, Kay Hanson
This progress report for fiscal year 2015 (FY15) describes the development of discrete fracture network (DFN) models for Pahute Mesa. DFN models will be used to upscale parameters for simulations of subsurface flow and transport in fractured media in Pahute Mesa. The research focuses on modeling of groundwater flow and contaminant transport using DFNs generated according to fracture characteristics observed in the Topopah Spring Aquifer (TSA) and the Lava Flow Aquifer (LFA). This work will improve the representation of radionuclide transport processes in large-scale, regulatory-focused models with a view to reducing pessimistic bounding approximations and providing more realistic contaminant boundary calculations that can be used to describe the future extent of contaminated groundwater. Our goal is to refine a modeling approach that can translate parameters to larger-scale models that account for local-scale flow and transport processes, which tend to attenuate migration.
Understanding Group/Party Affiliation Using Social Networks and Agent-Based Modeling
NASA Technical Reports Server (NTRS)
Campbell, Kenyth
2012-01-01
The dynamics of group affiliation and group dispersion are most often studied so that political candidates can better understand the most efficient way to conduct their campaigns. While political campaigning in the United States is a very hot topic that most politicians analyze and study, the concept of group/party affiliation presents its own area of study that produces very interesting results. One tool for examining party affiliation on a large scale is agent-based modeling (ABM), a paradigm in the modeling and simulation (M&S) field perfectly suited for aggregating individual behaviors to observe large swaths of a population. For this study, agent-based modeling was used to look at a community of agents and determine what factors can affect the group/party affiliation patterns that are present. In the agent-based model used for this experiment, many factors were present, but two main factors were used to determine the results. The results of this study show that it is possible to use agent-based modeling to explore group/party affiliation and construct a model that can mimic real world events. More importantly, the model in the study allows the results found in a smaller community to be translated into larger experiments to determine if the results will remain present on a much larger scale.
NASA Astrophysics Data System (ADS)
Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao
2018-04-01
In this paper, a statistical forecast model using the time-scale decomposition method is established for seasonal prediction of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). This method decomposes the rainfall over the MLYRV into three time-scale components, namely, the interannual component with periods shorter than 8 years, the interdecadal component with periods of 8 to 30 years, and the interdecadal component with periods longer than 30 years. Then, predictors are selected for the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR, respectively. The results show that this forecast model can capture the interannual and interdecadal variation of FPR. The hindcast of FPR during the 14 years from 2001 to 2014 shows that the FPR can be predicted successfully in 11 out of the 14 years. This forecast model performs better than a model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.
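A minimal sketch of the time-scale split described above, assuming a simple running-mean decomposition into the three bands (the paper's filtering scheme may differ); each band would then be regressed on its own predictors, and the sum of the three regressions gives the predicted flood-period rainfall. The synthetic series and window lengths are illustrative.

```python
import numpy as np

def decompose_three_bands(y, short_win=9, long_win=31):
    """Split a yearly series into <8 yr, 8-30 yr and >30 yr components using
    running means (a simple stand-in for the paper's decomposition)."""
    def running_mean(v, w):
        pad = np.pad(v, w // 2, mode="edge")
        return np.convolve(pad, np.ones(w) / w, mode="valid")
    long_term = running_mean(y, long_win)
    mid_plus_long = running_mean(y, short_win)
    return y - mid_plus_long, mid_plus_long - long_term, long_term

rng = np.random.default_rng(1)
years = np.arange(1951, 2015)
rain = 500 + 30 * np.sin(2 * np.pi * years / 40) + 20 * rng.normal(size=years.size)
band_lt8, band_8_30, band_gt30 = decompose_three_bands(rain)
print(band_lt8.std(), band_8_30.std(), band_gt30.std())
```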
Understanding the k^-5/3 to k^-2.4 spectral break in aircraft wind data
NASA Astrophysics Data System (ADS)
Pinel, J.; Lovejoy, S.; Schertzer, D. J.; Tuck, A.
2010-12-01
A fundamental issue in atmospheric dynamics is to understand how the statistics of fluctuations of various fields vary with their space-time scale. The classical - and still “standard” - model dates back to Kraichnan and Charney’s work on 2-D and geostrophic (quasi 2-D) turbulence at the end of the 1960’s and early 1970’s. It postulates an isotropic 2-D turbulent regime at large scales and an isotropic 3-D regime at small scales, separated by a “dimensional transition” (once called a “mesoscale gap”) near the pressure scale height of ≈10 km. By the early 1980’s a quite different model emerged, the 23/9-D scaling model, in which the dynamics were postulated to be dominated (over wide scale ranges) by a strongly anisotropic scale-invariant cascade mechanism with structures becoming flatter and flatter at larger and larger scales in a scaling manner: the isotropy assumptions were discarded but the scaling and cascade assumptions retained. Today, thanks to the revolution in geodata and atmospheric models - both in quality and quantity - the 23/9-D model can explain the observed horizontal cascade structures in remotely sensed radiances, in meteorological “reanalyses”, in meteorological models, in high-resolution dropsonde vertical analyses, in lidar vertical sections, etc. All of these analyses directly contradict the standard model, which predicts drastic “dimensional transitions” for scalar quantities. Indeed, until recently the only unexplained feature was a scale break in aircraft spectra of the (vector) horizontal wind somewhere between about 40 and 200 km. However - contrary to repeated claims - and thanks to a reanalysis of the historical papers - the transition that had been observed since the 1980’s was not between k^-5/3 and k^-3 but rather between k^-5/3 and k^-2.4. By 2009, the standard model was thus hanging by a thread. This was cut when careful analysis of scientific aircraft data allowed the 23/9-D model to explain the large-scale k^-2.4 regime as an artefact of the aircraft following a sloping trajectory: at large enough scales, the spectrum is simply dominated by vertical rather than horizontal fluctuations, which have the required k^-2.4 form. Since aircraft frequently follow gently sloping isobars, this neatly removes the last obstacle to wide-range anisotropic scaling models, finally opening the door to an urgently needed consensus on the statistical structure of the atmosphere. However, objections remain: at large enough scales, do isobaric and isoheight spectra really have different exponents? In this presentation we study this issue in more detail than before by analyzing data measured by commercial aircraft through the Tropospheric Airborne Meteorological Data Reporting (TAMDAR) system over CONUS during 2009. The TAMDAR system allows us to calculate the statistical properties of the wind field on constant pressure and constant altitude levels. Various statistical exponents were calculated (velocity increments in terms of horizontal and vertical displacement, pressure, and time), and we show here what we learned and how this analysis can help resolve this question.
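The basic diagnostic behind the k^-5/3 versus k^-2.4 discussion is a log-log fit of the wind power spectrum over a chosen wavenumber band. The sketch below builds a synthetic scaling series with a known exponent and recovers it; with real TAMDAR legs one would instead compare slopes on isobaric versus constant-altitude flight segments. Resolution, band limits, and the synthetic spectrum are assumptions for illustration.

```python
import numpy as np

def spectral_slope(u, dx, k_range):
    """Estimate beta in E(k) ~ k**(-beta) from a log-log fit over k_range."""
    n = u.size
    k = np.fft.rfftfreq(n, d=dx)[1:]
    power = np.abs(np.fft.rfft(u - u.mean()))[1:] ** 2
    sel = (k >= k_range[0]) & (k <= k_range[1])
    slope, _ = np.polyfit(np.log(k[sel]), np.log(power[sel]), 1)
    return -slope

# Synthetic series with a k^-5/3 spectrum and random phases.
rng = np.random.default_rng(2)
n, dx = 4096, 1.0
kk = np.fft.rfftfreq(n, d=dx)
kk[0] = np.inf                                   # zero out the mean component
amplitudes = np.sqrt(kk ** (-5.0 / 3.0)) * np.exp(2j * np.pi * rng.random(kk.size))
u = np.fft.irfft(amplitudes, n=n)
print(f"fitted beta ~ {spectral_slope(u, dx, (0.01, 0.2)):.2f}")
```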
Assessing the utility of FIB-SEM images for shale digital rock physics
NASA Astrophysics Data System (ADS)
Kelly, Shaina; El-Sobky, Hesham; Torres-Verdín, Carlos; Balhoff, Matthew T.
2016-09-01
Shales and other unconventional or low permeability (tight) reservoirs house vast quantities of hydrocarbons, often demonstrate considerable water uptake, and are potential repositories for fluid sequestration. The pore-scale topology and fluid transport mechanisms within these nanoporous sedimentary rocks remain to be fully understood. Image-informed pore-scale models are useful tools for studying porous media: a debated question in shale pore-scale petrophysics is whether there is a representative elementary volume (REV) for shale models. Furthermore, if an REV exists, how does it differ among petrophysical properties? We obtain three-dimensional (3D) models of the topology of microscale shale volumes from image analysis of focused ion beam-scanning electron microscope (FIB-SEM) image stacks and investigate the utility of these models as a potential REV for shale. The scope of data used in this work includes multiple local groups of neighboring FIB-SEM images of different microscale sizes, corresponding core-scale (milli- and centimeters) laboratory data, and, for comparison, series of two-dimensional (2D) cross sections from broad ion beam SEM images (BIB-SEM), which capture a larger microscale field of view than the FIB-SEM images; this array of data is larger than that of the majority of investigations with FIB-SEM-derived microscale models of shale. Properties such as porosity, organic matter content, and pore connectivity are extracted from each model. Assessments of permeability with single phase, pressure-driven flow simulations are performed in the connected pore space of the models using the lattice-Boltzmann method. Calculated petrophysical properties are compared to those of neighboring FIB-SEM images and to core-scale measurements of the sample associated with the FIB-SEM sites. Results indicate that FIB-SEM images below ∼5000 μm³ volume (the largest volume analyzed) are not a suitable REV for shale permeability and pore-scale networks; i.e., field of view is sacrificed in favor of detailed, but often unconnected, nanopore morphology. Further, we find that it is necessary to acquire several local FIB-SEM or BIB-SEM images and correlate their extracted geometric properties to improve the likelihood of achieving representative values of porosity and organic matter volume. Our work indicates that FIB-SEM images of microscale volumes of shale are a qualitative tool for petrophysical and transport analysis. Finally, we offer alternatives for quantitative pore-scale assessments of shale.
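A first step in deciding whether a segmented FIB-SEM subvolume could serve as an REV is extracting porosity and the connected-pore fraction; the sketch below does this for a toy boolean volume with scipy's connected-component labeling. The random volume stands in for a thresholded image stack, and a real workflow would follow with flow simulation on the connected cluster.

```python
import numpy as np
from scipy import ndimage

def porosity_and_connectivity(pore_mask):
    """Porosity and the share of pore voxels in the largest connected cluster
    for a segmented (boolean) 3D volume (6-connectivity)."""
    porosity = pore_mask.mean()
    labels, n = ndimage.label(pore_mask)
    if n == 0:
        return porosity, 0.0
    sizes = np.bincount(labels.ravel())[1:]
    return porosity, sizes.max() / sizes.sum()

# Toy segmented volume; in practice this comes from thresholded FIB-SEM slices.
rng = np.random.default_rng(3)
volume = rng.random((64, 64, 64)) < 0.15
phi, conn = porosity_and_connectivity(volume)
print(f"porosity = {phi:.3f}, largest connected pore cluster = {100 * conn:.0f}% of pore space")
```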
Nuijens, Louise; Medeiros, Brian; Sandu, Irina; ...
2015-11-06
We present patterns of covariability between low-level cloudiness and the trade-wind boundary layer structure using long-term measurements at a site representative of dynamical regimes with moderate subsidence or weak ascent. We compare these with ECMWF’s Integrated Forecast System and 10 CMIP5 models. By using single-time step output at a single location, we find that models can produce a fairly realistic trade-wind layer structure in long-term means, but with unrealistic variability at shorter time scales. The unrealistic variability in modeled cloudiness near the lifting condensation level (LCL) is due to stronger than observed relationships with mixed-layer relative humidity (RH) and temperature stratification at the mixed-layer top. Those relationships are weak in observations, or even of opposite sign, which can be explained by a negative feedback of convection on cloudiness. Cloudiness near cumulus tops at the trade-wind inversion instead varies more pronouncedly in observations on monthly time scales, whereby larger cloudiness relates to larger surface winds and stronger trade-wind inversions. However, these parameters appear to be a prerequisite, rather than strong controlling factors on cloudiness, because they do not explain submonthly variations in cloudiness. Models underestimate the strength of these relationships and diverge in particular in their responses to large-scale vertical motion. No model stands out by reproducing the observed behavior in all respects. As a result, these findings suggest that climate models do not realistically represent the physical processes that underlie the coupling between trade-wind clouds and their environments in present-day climate, which is relevant for how we interpret modeled cloud feedbacks.
NASA Astrophysics Data System (ADS)
Marras, S.; Suckale, J.; Eguzkitza, B.; Houzeaux, G.; Vázquez, M.; Lesage, A. C.
2016-12-01
The propagation of tsunamis in the open ocean has been studied in detail with many excellent numerical approaches available to researchers. Our understanding of the processes that govern the onshore propagation of tsunamis is less advanced. Yet, the reach of tsunamis on land is an important predictor of the damage associated with a given event, highlighting the need to investigate the factors that govern tsunami propagation onshore. In this study, we specifically focus on understanding the effect of bottom roughness at a variety of scales. The term roughness is to be understood broadly, as it represents scales ranging from small features like rocks, to vegetation, up to the size of larger structures and topography. In this poster, we link applied mathematics, computational fluid dynamics, and tsunami physics to analyze the small scales features of coastal hydrodynamics and the effect of roughness on the motion of tsunamis as they run up a sloping beach and propagate inland. We solve the three-dimensional Navier-Stokes equations of incompressible flows with free surface, which is tracked by a level set function in combination with an accurate re-distancing scheme. We discretize the equations via linear finite elements for space approximation and fully implicit time integration. Stabilization is achieved via the variational multiscale method whereas the subgrid scales for our large eddy simulations are modeled using a dynamically adaptive Smagorinsky eddy viscosity. As the geometrical characteristics of roughness in this study vary greatly across different scales, we implement a scale-dependent representation of the roughness elements. We model the smallest sub-grid scale roughness features by the use of a properly defined law of the wall. Furthermore, we utilize a Manning formula to compute the shear stress at the boundary. As the geometrical scales become larger, we resolve the geometry explicitly and compute the effective volume drag introduced by large scale immersed bodies. This study is a necessary step to verify and validate our model before proceeding further into the simulation of sediment transport in turbulent free surface flows. The simulation of such problems requires a space and time-dependent viscosity to model the effect of solid bodies transported by the incoming flow on onshore tsunami propagation.
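One concrete piece of the sub-grid roughness treatment mentioned above is the Manning-based bed shear stress. A common form, assumed here since the study's exact closure is not spelled out in the abstract, is tau_b = rho * g * n^2 * U^2 / h^(1/3), obtained by combining Manning's equation with tau_b = rho * g * h * S; the flow depth, speed, and roughness coefficient below are illustrative onshore values.

```python
def manning_bed_shear(u, depth, n_manning, rho=1000.0, g=9.81):
    """Bed shear stress (Pa) from a Manning-type roughness closure:
    tau_b = rho * g * n^2 * u^2 / depth**(1/3)."""
    return rho * g * n_manning ** 2 * u ** 2 / depth ** (1.0 / 3.0)

# Illustrative onshore-flow values (assumed, not from the study).
print(f"tau_b ~ {manning_bed_shear(u=3.0, depth=0.5, n_manning=0.03):.0f} Pa")
```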
A New Model of Size-graded Soil Veneer on the Lunar Surface
NASA Technical Reports Server (NTRS)
Basu, Abhijit; McKay, David S.
2005-01-01
Introduction. We propose a new model of the distribution of submillimeter-sized lunar soil grains on the lunar surface. We propose that in the uppermost millimeter or two of the lunar surface, soil grains are size-graded, with the finest nanoscale dust on top and larger micron-scale particles below. This standard state is perturbed by ejecta deposition of larger grains at the lunar surface, which have a dusty coating but may lack substrates of intermediate sizes. The distribution of solar wind elements (SWE), agglutinates, and vapor-deposited nanophase Fe⁰ in size fractions of lunar soils, and the IR spectra of size fractions of lunar soils, are compatible with this model. A direct test of this model requires bringing back glue-impregnated tubes of lunar soil samples to be dissected and examined on Earth.
Rebaudo, François; Faye, Emile; Dangles, Olivier
2016-01-01
A large body of literature has recently recognized the role of microclimates in controlling the physiology and ecology of species, yet the relevance of fine-scale climatic data for modeling species performance and distribution remains a matter of debate. Using a 6-year monitoring of three potato moth species, major crop pests in the tropical Andes, we asked whether the spatiotemporal resolution of temperature data affects the predictions of models of moth performance and distribution. For this, we used three different climatic data sets: (i) the WorldClim dataset (global dataset), (ii) air temperature recorded using data loggers (weather station dataset), and (iii) air crop canopy temperature (microclimate dataset). We developed a statistical procedure to calibrate all datasets to monthly and yearly variation in temperatures, while keeping both spatial and temporal variances (monthly air temperature at 1 km² for the WorldClim dataset, hourly air temperature for the weather station dataset, and minute-by-minute air temperature over 250 m radius disks for the microclimate dataset). Then, we computed pest performances based on these three datasets. Results for temperature ranging from 9 to 11°C revealed discrepancies in the simulation outputs in both survival and development rates depending on the spatiotemporal resolution of the temperature dataset. Temperature and simulated pest performances were then combined into multiple linear regression models to compare predicted vs. field data. We used an additional set of study sites to test whether the results of our model could be extrapolated to larger scales. Results showed that the model implemented with microclimatic data best predicted observed pest abundances for our study sites, but was less accurate than the global dataset model when applied at larger scales. Our simulations therefore stress the importance of considering different temperature datasets, depending on the issue to be solved, in order to accurately predict species abundances. In conclusion, keeping in mind that the mismatch between the size of organisms and the scale at which climate data are collected and modeled remains a key issue, temperature dataset selection should be balanced against the desired output spatiotemporal scale to better predict pest dynamics and develop efficient pest management strategies.
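One reason dataset resolution matters is that development and survival respond nonlinearly to temperature, so averaging temperature before applying a rate model gives a different answer than averaging the rates themselves. The sketch below illustrates this with a Brière-1 development rate curve; the parameter values and the synthetic hourly series are assumptions for illustration, not the rates fitted to the potato moth species in the study.

```python
import numpy as np

def briere1(T, a=2.8e-5, T0=7.0, Tmax=32.0):
    """Brière-1 development rate curve; a, T0, Tmax are illustrative values."""
    T = np.asarray(T, dtype=float)
    return np.where((T > T0) & (T < Tmax),
                    a * T * (T - T0) * np.sqrt(np.clip(Tmax - T, 0.0, None)),
                    0.0)

# Synthetic hourly "microclimate-like" temperatures vs. their monthly mean
hourly = 10.0 + 6.0 * np.sin(np.linspace(0, 2 * np.pi, 24 * 30))
monthly_mean = hourly.mean()

rate_from_fine_data = briere1(hourly).mean()      # average of rates
rate_from_coarse_data = briere1(monthly_mean)     # rate of the average

print(rate_from_fine_data, rate_from_coarse_data) # the two generally differ
```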
A study on the dependence of nuclear viscosity on temperature
NASA Astrophysics Data System (ADS)
Vardaci, E.; Di Nitto, A.; Nadtochy, P. N.; La Rana, G.; Cinausero, M.; Prete, G.; Gelli, N.; Ashaduzzaman, M.; Davide, F.; Pulcini, A.; Quero, D.; Kozulin, E. M.; Knyazheva, G. N.; Itkis, I. M.
2018-05-01
Nuclear viscosity is an irreplaceable ingredient of collective dynamical models of nuclear fission. It drives the exchange of energy between the collective variables and the thermal bath of single-particle degrees of freedom. Its dependence on shape and temperature is a matter of controversy. By using systems of intermediate fissility, we have demonstrated in a recent study that the viscosity parameter is larger for compact shapes and decreases for larger deformations of the fissioning system, at variance with the conclusions of the statistical model modified to include viscosity and time scales empirically. In this contribution we propose an experimental scenario to highlight the possible dependence of the viscosity on the temperature.
Tabulated pressure measurements on an executive-type jet transport model with a supercritical wing
NASA Technical Reports Server (NTRS)
Bartlett, D. W.
1975-01-01
A 1/9-scale model of an existing executive-type jet transport refitted with a supercritical wing was tested in the 8-foot transonic pressure tunnel. The supercritical wing had the same sweep as the original airplane wing but had maximum thickness-to-chord ratios 33 percent larger at the mean geometric chord and almost 50 percent larger at the wing-fuselage juncture. Wing pressure distributions and fuselage pressure distributions in the vicinity of the left nacelle were measured at Mach numbers from 0.25 to 0.90 at angles of attack that generally varied from -2 deg to 10 deg. Results are presented in tabular form without analysis.
NASA Astrophysics Data System (ADS)
Shirakata, Hikari; Kawaguchi, Toshihiro; Okamoto, Takashi; Ishiyama, Tomoaki
2017-09-01
We present the galactic stellar age - velocity dispersion relation obtained from a semi-analytic model of galaxy formation. We divide galaxies into two populations: those with over-massive and those with under-massive black holes (BHs) relative to the best-fitting BH mass - velocity dispersion relation. We find that galaxies with larger velocity dispersion have older stellar ages. We also find that galaxies with over-massive BHs have older stellar ages. These results are consistent with the observational results of Martin-Navarro et al. (2016). We also test the model with weak AGN feedback and find that galaxies with larger velocity dispersion have a younger stellar age.
Local-Scale Simulations of Nucleate Boiling on Micrometer Featured Surfaces: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitaraman, Hariswaran; Moreno, Gilberto; Narumanchi, Sreekant V
2017-08-03
A high-fidelity computational fluid dynamics (CFD)-based model for bubble nucleation of the refrigerant HFE7100 on micrometer-featured surfaces is presented in this work. The single-fluid incompressible Navier-Stokes equations, along with energy transport and natural convection effects, are solved on a grid that resolves the surface features. An a priori cavity detection method is employed to convert raw profilometer data of a surface into well-defined cavities. The cavity information and surface morphology are represented in the CFD model by geometric mesh deformations. Surface morphology is observed to initiate buoyancy-driven convection in the liquid phase, which in turn results in faster nucleation of cavities. Simulations pertaining to a generic rough surface show a trend where smaller cavities nucleate at higher wall superheat. This local-scale model will serve as a self-consistent connection to larger device-scale continuum models where local feature representation is not possible.
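The "a priori cavity detection" step can be pictured as thresholding and labeling a profilometer height map. The sketch below is a generic, assumed illustration of that idea using SciPy's connected-component labeling; the depth threshold, the simple mean-plane reference, and the function names are illustrative and are not taken from the authors' method.

```python
import numpy as np
from scipy import ndimage

def detect_cavities(height_map, depth_threshold=0.5e-6):
    """Label candidate cavities in a profilometer height map.

    A cell belongs to a cavity if it lies more than `depth_threshold`
    below the mean plane; connected cells are grouped into one cavity.
    """
    reference = height_map.mean()                  # crude mean-plane reference
    mask = height_map < (reference - depth_threshold)
    labels, n_cavities = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n_cavities + 1))
    return labels, sizes

# Synthetic 100x100 rough surface (heights in meters)
rng = np.random.default_rng(0)
surface = 1e-6 * rng.standard_normal((100, 100))
labels, sizes = detect_cavities(surface)
print(len(sizes), "candidate cavities")
```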
NASA Astrophysics Data System (ADS)
Wohlmuth, Johannes; Andersen, Jørgen Vitting
2006-05-01
We use agent-based models to study the competition among investors who use trading strategies with different amounts of information and different time scales. We find that mixing agents that trade on the same time scale but with different amounts of information has a stabilizing impact on the large and extreme fluctuations of the market. Traders with the most information are found to be more likely to arbitrage traders who use less information in their decision making. On the other hand, introducing investors who act on two different time scales has a destabilizing effect on the large and extreme price movements, increasing the volatility of the market. Closeness in the time scale used in decision making is found to facilitate the creation of local trends. The larger the overlap in commonly shared information, the more the traders in a mixed system with different time scales are found to profit from the presence of traders acting at another time scale than themselves.
NASA Astrophysics Data System (ADS)
Das, Debottam; Ghosh, Kirtiman; Mitra, Manimala; Mondal, Subhadeep
2018-01-01
We consider an extension of the standard model (SM) augmented by two neutral singlet fermions per generation and a leptoquark. In order to generate the light neutrino masses and mixing, we incorporate the inverse seesaw mechanism. The right-handed (RH) neutrino production in this model is significantly larger than in the conventional inverse seesaw scenario. We analyze the different collider signatures of this model and find that final states with three or more leptons, multiple jets, and at least one b-tagged and/or τ-tagged jet can probe a larger RH neutrino mass scale. We also propose a same-sign dilepton signal region with multiple jets and missing energy that can be used to distinguish the present scenario from the usual inverse seesaw extended SM.
Gravel, Dominique; Beaudet, Marilou; Messier, Christian
2008-10-01
Understanding coexistence of highly shade-tolerant tree species is a longstanding challenge for forest ecologists. A conceptual model for the coexistence of sugar maple (Acer saccharum) and American beech (Fagus grandifolia) has been proposed, based on a low-light survival/high-light growth trade-off that interacts with soil fertility and small-scale spatiotemporal variation in the environment. In this study, we first tested whether the spatial distribution of seedlings and saplings can be predicted by the spatiotemporal variability of light availability and soil fertility, and second, how the process of environmental filtering changes with regeneration size. We evaluated the support for this hypothesis relative to that for a neutral model, i.e., for seed rain density predicted from the distribution of adult trees. To do so, we performed intensive sampling over 86 quadrats (5 x 5 m) in a 0.24-ha plot in a mature maple-beech community in Quebec, Canada. Maple and beech abundance, soil characteristics, light availability, and growth history (used as a proxy for spatiotemporal variation in light availability) were finely measured to model variation in sapling composition across different size classes. Results indicate that the variables selected to model species distribution do change with size, but not as predicted by the conceptual model. Our results show that variability in the environment is not sufficient to differentiate these species' distributions in space. Although the species differ in their spatial distribution in the small size classes, they tend to correlate in the larger size class in which recruitment occurs. Overall, the results do not support a model of coexistence based on small-scale variations in the environment. We propose that, at the scale of a local stand, the lack of fit of the model could result from the high similarity of the species in the range of environmental conditions encountered, and we suggest that coexistence would be stable only at larger spatial scales at which variability in the environment is greater.
Modeling the dust cycle from sand dunes to haboobs
NASA Astrophysics Data System (ADS)
Kallos, George; Patlakas, Platon; Bartsotas, Nikolaos; Spyrou, Christos; Qahtani, Jumaan Al; Alexiou, Ioannis; Bar, Ayman M.
2017-04-01
The dust cycle is a rather complicated mechanism depending on various factors. The most important factors affecting dust production are soil characteristics (soil composition, physical and chemical properties, water content, temperature, etc.). The best-known production mechanism at small scales is saltation-bombardment. This mechanism is able to accurately predict uptake of dust particles up to about 10 μm. Larger dust particles are heavier and fall relatively fast due to the gravitational influence. The other controlling factors of dust uptake and transport are wind speed (which must exceed a threshold) and turbulence. Weather conditions affecting dust production, transport, and deposition span multiple scales, ranging from small surface inhomogeneities to mesoscale and large-scale systems. While the typical dust transport mechanism is related to wind conditions near the surface, larger-scale systems play an important role in dust production. Such systems are associated with mesoscale phenomena typical of the specific regions. Usually they are associated with deep convection and strong downdrafts and are known as haboobs. Density currents form at the surface with strong winds and turbulence. Density currents can be considered dust sources in themselves due to their high dust productivity. In this presentation we will discuss characteristics of the dust production mechanisms at multiple scales over the Arabian Peninsula by utilizing the RAMS/ICLAMS multiscale model. A series of simulations at small scale have been performed, and mitigation actions will be explored.
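The wind-speed threshold mentioned above is usually expressed as a threshold friction velocity for saltation onset. The sketch below uses a Bagnold-type expression as a generic illustration; the coefficient, particle density, and diameters are assumed values, and this is not the specific emission scheme used in RAMS/ICLAMS.

```python
import numpy as np

def threshold_friction_velocity(d, rho_p=2650.0, rho_a=1.2, A=0.1, g=9.81):
    """Bagnold-type threshold friction velocity for saltation onset.

    d     : particle diameter [m]
    rho_p : particle density [kg/m^3]
    rho_a : air density [kg/m^3]
    A     : empirical coefficient (~0.1 for grains larger than ~100 um)
    Returns u*_t [m/s]; saltation (and hence dust emission by bombardment)
    is possible only where the surface friction velocity exceeds this value.
    """
    return A * np.sqrt((rho_p - rho_a) * g * d / rho_a)

for d in (100e-6, 200e-6, 500e-6):
    print(f"d = {d*1e6:.0f} um -> u*_t = {threshold_friction_velocity(d):.2f} m/s")
```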
A Lagrangian effective field theory
Vlah, Zvonimir; White, Martin; Aviles, Alejandro
2015-09-02
We have continued the development of Lagrangian, cosmological perturbation theory for the low-order correlators of the matter density field. We provide a new route to understanding how the effective field theory (EFT) of large-scale structure can be formulated in the Lagrangian framework and a new resummation scheme, comparing our results to earlier work and to a series of high-resolution N-body simulations in both Fourier and configuration space. The `new' terms arising from EFT serve to tame the dependence of perturbation theory on small-scale physics and improve agreement with simulations (though with an additional free parameter). We find that all of our models fare well on scales larger than about two to three times the non-linear scale, but fail as the non-linear scale is approached. This is slightly less reach than has been seen previously. At low redshift the Lagrangian model fares as well as EFT in its Eulerian formulation, but at higher z the Eulerian EFT fits the data to smaller scales than resummed, Lagrangian EFT. Furthermore, all the perturbative models fare better than linear theory.
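For orientation, the "additional free parameter" referred to above is typically a speed-of-sound-like counterterm coefficient. A schematic form of the leading EFT correction in the Eulerian formulation is shown below; the notation and sign convention are generic textbook usage rather than this paper's, and in the Lagrangian formulation the counterterm enters through the displacement field instead of directly as written here.

```latex
P_{\rm EFT}(k) \;\simeq\; P_{\rm lin}(k) \;+\; P_{\rm 1\text{-}loop}(k)
\;-\; 2\,c_{s}^{2}\,k^{2}\,P_{\rm lin}(k)
```

Here $c_s^2$ is fitted to simulations on quasi-linear scales and absorbs the dependence on unmodeled small-scale physics.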
NASA Astrophysics Data System (ADS)
Tai, Y.; Watanabe, T.; Nagata, K.
2018-03-01
A mixing volume model (MVM) originally proposed for molecular diffusion in incompressible flows is extended to model molecular diffusion and thermal conduction in compressible turbulence. The model, established for implementation in Lagrangian simulations, is based on the interactions among spatially distributed notional particles within a finite volume. The MVM is tested against direct numerical simulations of compressible planar jets with jet Mach numbers ranging from 0.6 to 2.6. The MVM predicts molecular diffusion and thermal conduction well for a wide range of mixing-volume sizes and numbers of mixing particles. In the transitional region of the jet, where the scalar field exhibits a sharp jump at the edge of the shear layer, a smaller mixing volume is required for an accurate prediction of the mean effects of molecular diffusion. The mixing time scale in the model is defined as the time scale of diffusive effects at the length scale of the mixing volume. The mixing time scale is well correlated between passive scalar and temperature. Probability density functions of the mixing time scale are similar for molecular diffusion and thermal conduction when the mixing volume is larger than a dissipative scale, because at small scales the mixing time scale is easily affected by the different distributions of intermittent small-scale structures for passive scalar and temperature. The MVM with an assumption of equal mixing time scales for molecular diffusion and thermal conduction is useful for modeling thermal conduction when modeling the dissipation rate of temperature fluctuations is difficult.
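As a rough picture of what a particle mixing closure does, the sketch below relaxes the scalar values carried by notional particles in one mixing volume toward their in-volume mean over a mixing time scale. This is a generic IEM-style update written for illustration; it is not the exact MVM interaction rule, and the time scale and values are assumed.

```python
import numpy as np

def mix_particles(phi, tau_m, dt):
    """Relax particle scalar values toward their in-volume mean (IEM-style).

    phi   : scalar values carried by notional particles in one mixing volume
    tau_m : mixing time scale assigned to that volume
    dt    : time step
    This is a generic particle-mixing closure, not the exact MVM interaction rule.
    """
    phi = np.asarray(phi, dtype=float)
    return phi - (dt / tau_m) * (phi - phi.mean())

particles = np.array([0.0, 0.2, 0.9, 1.0])   # e.g. mixture-fraction samples
print(mix_particles(particles, tau_m=1.0e-3, dt=1.0e-4))
```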
Some effects of horizontal discretization on linear baroclinic and symmetric instabilities
NASA Astrophysics Data System (ADS)
Barham, William; Bachman, Scott; Grooms, Ian
2018-05-01
The effects of horizontal discretization on linear baroclinic and symmetric instabilities are investigated by analyzing the behavior of the hydrostatic Eady problem in ocean models on the B and C grids. On the C grid a spurious baroclinic instability appears at small wavelengths. This instability does not disappear as the grid scale decreases; instead, it simply moves to smaller horizontal scales. The peak growth rate of the spurious instability is independent of the grid scale as the latter decreases. It is equal to c f/√Ri, where Ri is the balanced Richardson number, f is the Coriolis parameter, and c is a nondimensional constant that depends on the Richardson number. As the Richardson number increases, c increases towards an upper bound of approximately 1/2; for large Richardson numbers the spurious instability is faster than the Eady instability. To suppress the spurious instability it is recommended to use fourth-order centered tracer advection along with biharmonic viscosity and diffusion with coefficients (Δx)^4 f/(32√Ri) or larger, where Δx is the grid scale. On the B grid, the growth rates of baroclinic and symmetric instabilities are too small, and converge upwards towards the correct values as the grid scale decreases; no spurious instabilities are observed. In B grid models at eddy-permitting resolution, the reduced growth rate of baroclinic instability may contribute to partially-resolved eddies being too weak. On the C grid the growth rate of symmetric instability is better (larger) than on the B grid, and converges upwards towards the correct value as the grid scale decreases.
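The two formulas quoted in the abstract are easy to evaluate directly. The short sketch below does so for illustrative parameter values; the Coriolis parameter, Richardson number, and grid spacing are assumptions chosen only to show the magnitudes involved.

```python
import numpy as np

def spurious_growth_rate(f, Ri, c=0.5):
    """Peak growth rate of the spurious C-grid instability, sigma = c*f/sqrt(Ri).

    c depends on Ri and approaches ~1/2 from below at large Ri; the upper
    bound is used here as an estimate.
    """
    return c * f / np.sqrt(Ri)

def recommended_biharmonic_coeff(dx, f, Ri):
    """Minimum biharmonic coefficient suggested in the abstract:
    A4 = (dx)^4 * f / (32 * sqrt(Ri))."""
    return dx**4 * f / (32.0 * np.sqrt(Ri))

f = 1.0e-4          # Coriolis parameter [1/s]
Ri = 100.0          # balanced Richardson number
dx = 1.0e3          # grid spacing [m]
print(spurious_growth_rate(f, Ri))               # ~5e-6 1/s
print(recommended_biharmonic_coeff(dx, f, Ri))   # ~3.1e5 m^4/s
```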
Collisional and dynamical history of Gaspra
NASA Technical Reports Server (NTRS)
Greenberg, R.; Nolan, M. C.; Bottke, W. F., Jr.; Kolvoord, R. A.
1993-01-01
Interpretation of the impact record on Gaspra requires understanding of the effects of collisions on a target body of Gaspra's size and shape, recognition of impact features that may have different morphologies from craters on larger planets, and models of the geological processes that erase and modify impact features. Crater counts on the 140 sq km of Gaspra imaged at highest resolution by the Galileo spacecraft show a steep size-frequency distribution (cumulative power-law index near -3.5) from the smallest resolvable size (150 m diameter) up through the largest feature of familiar crater-like morphology (a 1.5 km diameter crater). In addition, there appear to be as many as eight roughly circular concavities with diameters greater than 3 km visible on the asteroid. If we restricted our crater counts to features with traditionally recognized crater morphologies, these concavities would not be included. However, if we define craters to include any concave structures that may represent local or regional damage at an impact site, then the larger features on Gaspra are candidates for consideration. Acceptance of the multi-km features as craters has been cautious for several reasons. First, scaling laws (the physically plausible algorithms for extrapolating from experimental data) indicate that Gaspra could not have sustained such large-crater-forming impacts without being disrupted; second, aside from concavity, the larger structures have no other features (e.g. rims) that can be identified with known impact craters; and third, extrapolation of the power-law size distribution for smaller craters predicts no craters larger than 3 km over the entire surface. On the other hand, recent hydrocode modeling of impacts shows that for a given impact (albeit into a sphere), the crater size is much larger than given by scaling laws. Gaspra-size bodies can sustain formation of up to 8-km craters without disruption. Besides allowing larger impact craters, this result doubles the lifetime since the last catastrophic fragmentation event up to one billion years. Events that create multi-km craters also globally damage the material structure, such that regolith is produced, whether or not Gaspra 'initially' had a regolith, contrary to other models in which initial regolith is required in order to allow current regolith. Because the globally destructive shock wave precedes basin formation, crater size is closer to the large size extrapolated from gravity-scaling rather than from the strength-scaling that had earlier been assumed for such small bodies. This mechanism may also help explain the existence of Stickney on Phobos. Moreover, rejection of the large concavities as craters based on unfamiliar morphology would be premature, because (aside from Stickney) we have no other data on such large impact structures on such a small, irregular body. The eight candidate concavities cover an area greater than that counted for smaller craters, because they are most apparent where small craters cannot be seen: on low-resolution images and at the limb on high-resolution images. We estimate that there are at least two with diameter greater than 4 km per 140 sq km, which would have to be accounted for in any model that claims these are impact craters.
Unidirectional flow over asymmetric and symmetric ripples
NASA Astrophysics Data System (ADS)
Wiberg, Patricia L.; Nelson, Jonathan M.
1992-08-01
An LDV-equipped flume has yielded detailed measurements of velocity and turbulence over fixed sets of two-dimensional symmetric and asymmetric ripples. The measured velocities over the ripples are compared with the Nelson and Smith (1989) results for flow over larger-scale dunes; the new results are larger in the outer region of the flow, and the velocity profiles exhibit no sharp inflection at the top of the lowest wake. A model for flow over bedforms, which has yielded excellent agreement with dune measurements, is presently modified to better represent the observed flow over ripples.
Non-Invasive Methods to Characterize Soil-Plant Interactions at Different Scales
NASA Astrophysics Data System (ADS)
Javaux, M.; Kemna, A.; Muench, M.; Oberdoerster, C.; Pohlmeier, A.; Vanderborght, J.; Vereecken, H.
2006-05-01
Root water uptake is a dynamic and non-linear process, which interacts with the natural variability of the soil and with boundary conditions to generate heterogeneous spatial distributions of soil water. Soil-root fluxes are spatially variable due to heterogeneous gradients and hydraulic connections between soil and roots. While a 1-D effective representation of root water uptake has been successfully applied to predict transpiration and average water content profiles, finer spatial characterization of the water distribution may be needed when dealing with solute transport. Indeed, root water uptake affects the water velocity field, which has an effect on solute velocity and dispersion. Although this variability originates from small-scale processes, these may still play an important role at larger scales. Therefore, in addition to investigating the variability of the soil hydraulic properties, experimental and numerical tools for characterizing root water uptake (and its effects on soil water distribution) from the pore to the field scale are needed to properly predict solute transport. Obviously, the non-invasive and modeling techniques that are helpful for achieving this objective will evolve with the scale of interest. At the pore scale, soil structure and root-soil interface phenomena have to be investigated to understand the interactions between soil and roots. Magnetic resonance imaging may help to monitor water gradients and water content changes around roots, while spectral induced polarization techniques may be used to characterize the structure of the pore space. At the column scale, the complete root architecture of small plants and the water content depletion around roots can be imaged by magnetic resonance. At that scale, models should explicitly take into account the three-dimensional gradient dependency of root water uptake to be able to predict solute transport. At larger scales, however, simplified models, which implicitly take into account the heterogeneous root water uptake along roots, should be preferred given the complexity of the system. At such scales, electrical resistance tomography or ground-penetrating radar can be used to map water content changes and derive effective parameters for predicting solute transport.
Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld; Peters, N.E.; Freer, J.E.
2009-01-01
The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences, conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
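The parallel linear reservoir idea can be made concrete in a few lines of code: each landscape type is a linear store, and the outlet hydrograph is their sum, which no longer looks linear in a dQ/dt versus Q plot. The sketch below is a generic illustration with assumed recession constants and routing fractions, not the calibrated Panola model.

```python
import numpy as np

def parallel_linear_reservoirs(recharge, k, frac, dt=1.0):
    """Streamflow from parallel linear reservoirs (one per landscape type).

    recharge : recharge per time step [mm]
    k        : recession constants of each reservoir [1/time step]
    frac     : fraction of recharge routed to each reservoir (sums to 1)
    Each reservoir obeys dS/dt = frac*R - k*S and discharges Q = k*S;
    the total streamflow is their sum, which gives a nonlinear dQ/dt-Q
    relation at the outlet even though each reservoir is linear.
    """
    k, frac = np.asarray(k, dtype=float), np.asarray(frac, dtype=float)
    S = np.zeros_like(k)
    Q = np.empty(len(recharge))
    for t, R in enumerate(recharge):
        S = S + dt * (frac * R - k * S)   # explicit Euler update
        Q[t] = np.sum(k * S)
    return Q

R = np.zeros(100)
R[0] = 20.0                                # a single 20 mm recharge pulse
print(parallel_linear_reservoirs(R, k=[0.5, 0.05], frac=[0.3, 0.7])[:5])
```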
Turbulence- and particle-resolved modeling of self-formed channels
NASA Astrophysics Data System (ADS)
Schmeeckle, M. W.
2016-12-01
A numerical model is presented that combines a large eddy simulation (LES) of turbulent water motion and a discrete element method (DEM) simulation of all sediment particles forming a small alluvial river. All simulations are begun with a relatively narrow and deep channel, and a constant body force is applied to the fluid. At a very small applied force, at the critical shear stress for sediment motion, the channel becomes wider and shallower. Transport on the banks becomes very small, with larger transport at the center of the channel. However, even the very small bank transport resulted in continued net downslope motion and channel widening; bedload diffusion from higher-transport areas of the channel is not sufficient to counteract downslope transport. This simulation will be extended over much longer times to determine whether an equilibrium straight channel with transport is possible without varying the water discharge. Simulations at slightly higher fluid forcing result in the development of alternate bars. Particle size segregation occurs in all simulations at multiple scales. At the smallest scale, turbulent structures induce small-scale depressions; larger particles preferentially move to the lower elevations of the depressions. Sloping beds at banks and bars also increase size segregation. However, bar translation mixes segregated sediments. Granular modeling of river channels appears to be a fruitful method for testing and developing continuum ideas of channel pattern formation and size segregation.
Contrasting model complexity under a changing climate in a headwaters catchment.
NASA Astrophysics Data System (ADS)
Foster, L.; Williams, K. H.; Maxwell, R. M.
2017-12-01
Alpine, snowmelt-dominated catchments are the source of water for more than one-sixth of the world's population. These catchments are topographically complex, leading to steep weather gradients and nonlinear relationships between water and energy fluxes. Recent evidence suggests that alpine systems are more sensitive to climate warming, but these regions are vastly simplified in climate models and operational water management tools due to computational limitations. Simultaneously, point-scale observations are often extrapolated to larger regions where feedbacks can both exacerbate or mitigate locally observed changes. It is critical to determine whether projected climate impacts are robust to different methodologies, including model complexity. Using high-performance computing and an integrated model of a representative headwater catchment, we determined the hydrologic response to 30 projected climate changes in precipitation, temperature, and vegetation for the Rocky Mountains. Simulations were run at 100-m and 1-km resolution, and with and without lateral subsurface flow, in order to vary model complexity. We found that model complexity alters nonlinear relationships between water and energy fluxes. Higher-resolution models predicted larger changes per degree of temperature increase than lower-resolution models, suggesting that reductions in snowpack, surface water, and groundwater due to warming may be underestimated in simple models. Increases in temperature were found to have a larger impact on water fluxes and stores than changes in precipitation, corroborating previous research showing that mountain systems are significantly more sensitive to temperature changes than to precipitation changes and that increases in winter precipitation are unlikely to compensate for increased evapotranspiration in a higher-energy environment. These numerical experiments help to (1) bracket the range of uncertainty in the published literature on climate change impacts on headwater hydrology; (2) characterize the role of precipitation and temperature changes on water supply for snowmelt-dominated downstream basins; and (3) identify which climate impacts depend on the scale of simulation.
Multiscale modeling of porous ceramics using movable cellular automaton method
NASA Astrophysics Data System (ADS)
Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.
2017-10-01
The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, which is a particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to get the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behavior of the model sample at the macroscale.
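Passing lower-scale results up as a Weibull distribution amounts to sampling per-automaton properties from that fitted law at the next level. The sketch below shows only this sampling step; the Weibull modulus, characteristic strength, and ensemble size are assumed illustrative values, not the fitted zirconia parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_effective_strength(m, sigma0, n_automata):
    """Draw per-automaton strengths for the next scale level from a Weibull law.

    m       : Weibull modulus fitted to the lower-scale sample ensemble
    sigma0  : Weibull scale parameter (characteristic strength) from that fit [MPa]
    Returns an array of strengths assigned to the coarser-scale automata.
    """
    return sigma0 * rng.weibull(m, size=n_automata)

# Illustrative values for a porous ceramic at the smallest-pore level
strengths = sample_effective_strength(m=8.0, sigma0=250.0, n_automata=1000)
print(strengths.mean(), strengths.std())
```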
A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation
NASA Astrophysics Data System (ADS)
Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.
2016-12-01
Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate- or market-induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches, focusing our efforts on food, energy, and water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national, and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At its center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across temporal and spatial scales and across a wide variety of models, including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at the national scale. First, the MME is based on aggregating individual optimization problems for the relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution, accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.
V2.2_i6 L2AS Detailed Release Description November 27, 2002
Atmospheric Science Data Center
2013-03-14
... Increase the valid range of BHR and DHR from 1.0 to 1.05. This affects the scaling factors which are used to unscale the ... for heterogeneous surfaces to give a larger residual if (rho_misr - rho_model) becomes negative. In the land surface retrieval, ...
2011-02-01
capabilities for airbags, sensors, and seatbelts have tailored the code for applications in the automotive industry. Currently the code contains...larger intervals. In certain contact scenarios where contacting parts are moving relative to each other in a rapid fashion, such as airbag deployment
The integrated landscape assessment project
Miles A. Hemstrom; Janine Salwasser; Joshua Halofsky; Jimmy Kagan; Cyndi Comfort
2012-01-01
The Integrated Landscape Assessment Project (ILAP) is a three-year effort that produces information, models, data, and tools to help land managers, policymakers, and others examine mid- to broad-scale (e.g., watersheds to states and larger areas) prioritization of land management actions, perform landscape assessments, and estimate potential effects of management...
NASA Astrophysics Data System (ADS)
Tan, Zhenkun; Ke, Xizheng
2017-10-01
The variance of the angle-of-arrival fluctuation of partially coherent Gaussian-Schell model (GSM) beams propagating along a slant path has been investigated under the modified Hill turbulence model, based on the extended Huygens-Fresnel principle and the atmospheric refractive-index structure constant model proposed by the International Telecommunication Union Radiocommunication Sector (ITU-R). An analytical expression for this variance has been obtained. Firstly, the effects of optical wavelength, the inner and outer scales of the turbulence, and turbulence intensity on the variance of the angle-of-arrival fluctuation have been analyzed by comparing the partially coherent GSM beam with the fully coherent Gaussian beam. Secondly, the variance of the angle-of-arrival fluctuation of the partially coherent GSM beam has been compared between the von Karman spectrum and the modified Hill spectrum. Finally, the effects of beam waist radius and partial coherence length on the angle-of-arrival variance of the collimated (focused) beam have been analyzed under the modified Hill turbulence model. The results show that the inner-scale effect on the variance of the angle-of-arrival fluctuation is larger than the outer-scale effect. The variance of the angle-of-arrival fluctuation under the modified Hill spectrum is larger than that under the von Karman spectrum. The influence of the waist radius on the angle-of-arrival variance is smaller for the collimated beam than for the focused beam. This study provides a necessary theoretical basis for experiments on partially coherent GSM beam propagation through atmospheric turbulence.
Evaluation of constant-Weber-number scaling for icing tests
NASA Technical Reports Server (NTRS)
Anderson, David N.
1996-01-01
Previous studies showed that, for conditions simulating an aircraft encountering supercooled water droplets, the droplets may splash before freezing. Other surface effects dependent on the water surface tension may also influence the ice accretion process. Consequently, the Weber number appears to be important in accurately scaling ice accretion. A scaling method which uses a constant-Weber-number approach has been described previously; this study provides an evaluation of that scaling method. Tests are reported on cylinders of 2.5- to 15-cm diameter and NACA 0012 airfoils with chords of 18 to 53 cm in the NASA Lewis Icing Research Tunnel (IRT). The larger models were used to establish reference ice shapes, the scaling method was applied to determine appropriate scaled test conditions using the smaller models, and the ice shapes were compared. Icing conditions included warm glaze, horn glaze, and mixed. The smallest size scaling attempted was 1/3, and the scale and reference ice shapes for both cylinders and airfoils indicated that the constant-Weber-number scaling method was effective for the conditions tested.
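As a reminder of the quantity being held constant, the Weber number compares inertial to surface-tension forces, We = ρV²L/σ. The sketch below evaluates it for two nominal conditions; the density and length conventions (water density with droplet size) and the specific numbers are assumptions for illustration, since scaling formulations differ in which characteristic length and density they use.

```python
def weber_number(rho, velocity, length, sigma):
    """Generic Weber number We = rho * V^2 * L / sigma.

    In icing scaling studies the characteristic length and density convention
    varies (e.g. droplet median volume diameter with water density, or model
    size with air density); the choice below is only an illustration.
    """
    return rho * velocity**2 * length / sigma

RHO_WATER = 1000.0      # kg/m^3
SIGMA_WATER = 0.076     # N/m, near 0 deg C
mvd = 20e-6             # 20-micron droplets (assumed)
print(weber_number(RHO_WATER, 67.0, mvd, SIGMA_WATER))       # reference case
print(weber_number(RHO_WATER, 116.0, mvd / 3, SIGMA_WATER))  # 1/3-size case, same We
```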
NASA Astrophysics Data System (ADS)
Itai, K.
1987-02-01
Two models which describe one-dimensional hopping motion of a heavy particle interacting with phonons are discussed. Model A corresponds to hopping in 1D metals or to the polaron problem. In model B the momentum dependence of the particle-phonon coupling is proportional to k^(-1/2). The scaling equations show that only in model B does localization occur for a coupling larger than a critical value. In the localization region this model shows a close analogy to the Caldeira-Leggett model for macroscopic quantum tunneling.
Coarse graining flow of spin foam intertwiners
NASA Astrophysics Data System (ADS)
Dittrich, Bianca; Schnetter, Erik; Seth, Cameron J.; Steinhaus, Sebastian
2016-12-01
Simplicity constraints play a crucial role in the construction of spin foam models, yet their effective behavior on larger scales is scarcely explored. In this article we introduce intertwiner and spin net models for the quantum group SU(2)_k × SU(2)_k, which implement the simplicity constraints analogous to four-dimensional Euclidean spin foam models, namely the Barrett-Crane (BC) and the Engle-Pereira-Rovelli-Livine/Freidel-Krasnov (EPRL/FK) models. These models are numerically coarse grained via tensor network renormalization, allowing us to trace the flow of simplicity constraints to larger scales. In order to perform these simulations we have substantially adapted tensor network algorithms, which we discuss in detail as they can be of use in other contexts. The BC and the EPRL/FK model behave very differently under coarse graining: While the unique BC intertwiner model is a fixed point and therefore constitutes a two-dimensional topological phase, BC spin net models flow away from the initial simplicity constraints and converge to several different topological phases. Most of these phases correspond to decoupling spin foam vertices; however, we also find a new phase in which this is not the case, and in which a nontrivial version of the simplicity constraints holds. The coarse graining flow of the BC spin net models furthermore indicates that the transitions between these phases are not of second order. The EPRL/FK model by contrast reveals a far more intricate and complex dynamics. We observe an immediate flow away from the original simplicity constraints; however, with the truncation employed here, the models generically do not converge to a fixed point. The results show that the imposition of simplicity constraints can indeed lead to interesting and also very complex dynamics. Thus we need to further develop coarse graining tools to efficiently study the large-scale behavior of spin foam models, in particular for the EPRL/FK model.
Anisotropic modulus stabilisation: strings at LHC scales with micron-sized extra dimensions
NASA Astrophysics Data System (ADS)
Cicoli, M.; Burgess, C. P.; Quevedo, F.
2011-10-01
We construct flux-stabilised Type IIB string compactifications whose extra dimensions have very different sizes, and use these to describe several types of vacua with a TeV string scale. Because we can access regimes where two dimensions are hierarchically larger than the other four, we find examples where two dimensions are micron-sized while the other four are at the weak scale in addition to more standard examples with all six extra dimensions equally large. Besides providing ultraviolet completeness, the phenomenology of these models is richer than vanilla large-dimensional models in several generic ways: (i) they are supersymmetric, with supersymmetry broken at sub-eV scales in the bulk but only nonlinearly realised in the Standard Model sector, leading to no MSSM superpartners for ordinary particles and many more bulk missing-energy channels, as in supersymmetric large extra dimensions (SLED); (ii) small cycles in the more complicated extra-dimensional geometry allow some KK states to reside at TeV scales even if all six extra dimensions are nominally much larger; (iii) a rich spectrum of string and KK states at TeV scales; and (iv) an equally rich spectrum of very light moduli exist having unusually small (but technically natural) masses, with potentially interesting implications for cosmology and astrophysics that nonetheless evade new-force constraints. The hierarchy problem is solved in these models because the extra-dimensional volume is naturally stabilised at exponentially large values: the extra dimensions are Calabi-Yau geometries with a 4D K3- or T^4-fibration over a 2D base, with moduli stabilised within the well-established LARGE-Volume scenario. The new technical step is the use of poly-instanton corrections to the superpotential (which, unlike for simpler models, are likely to be present on K3- or T^4-fibered Calabi-Yau compactifications) to obtain a large hierarchy between the sizes of different dimensions. For several scenarios we identify the low-energy spectrum and briefly discuss some of their astrophysical, cosmological and phenomenological implications.
NASA Astrophysics Data System (ADS)
Cuzzi, Jeffrey N.; Weston, B.; Shariff, K.
2013-10-01
Primitive bodies with 10s-100s of km diameter (or even larger) may form directly from small nebula constituents, bypassing the step-by-step "incremental growth" that faces a variety of barriers at cm, m, and even 1-10 km sizes. In the scenario of Cuzzi et al (Icarus 2010 and LPSC 2012; see also Chambers Icarus 2010) the immediate precursors of 10-100 km diameter asteroid formation are dense clumps of chondrule- (mm-) sized objects. These predictions utilize a so-called cascade model, which is popular in turbulence studies. One of its usual assumptions is that certain statistical properties of the process (the so-called multiplier pdfs p(m)) are scale-independent within a cascade of energy from large eddy scales to smaller scales. In similar analyses, Pan et al (2011 ApJ) found discrepancies with results of Cuzzi and coworkers; one possibility was that p(m) for particle concentration is not scale-independent. To assess the situation we have analyzed recent 3D direct numerical simulations of particles in turbulence covering a much wider range of scales than analyzed by either Cuzzi and coworkers or by Pan and coworkers (see Bec et al 2010, J. Fluid Mech. 646, 527). We calculated p(m) at scales ranging from 45η to 1024η, where η is the Kolmogorov scale, both for particles with a range of stopping times spanning the optimum value and for energy dissipation in the fluid. For comparison, the p(m) for dissipation have been observed to be scale-independent in atmospheric flows (at much larger Reynolds number) for scales of at least 30-3000η. We found that, in the numerical simulations, the multiplier distributions for both particle concentration and fluid dissipation are as expected at scales of tens of η, but both become narrower and less intermittent at larger scales. This is consistent with observations of atmospheric flows showing scale independence to >3000η if scale-free behavior is established only after some number (~10) of large-scale bifurcations (at scales perhaps 10x smaller than the largest scales in the flow), with the distributions becoming scale-free at smaller scales. Predictions of primitive body initial mass functions can now be redone using a slightly modified cascade.
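To make the cascade idea concrete, the sketch below builds a simple 1-D binary multiplicative cascade in which a random multiplier drawn from a fixed pdf p(m) splits each cell's measure at every level; scale independence means the same pdf applies at every level. The beta form of p(m), its parameter, and the number of levels are assumptions for illustration, not the pdfs measured in the simulations discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)

def multiplicative_cascade(levels, beta_param=3.0):
    """1-D binary multiplicative cascade with beta-distributed multipliers.

    At each level every cell splits in two; a multiplier m ~ Beta(b, b)
    sends a fraction m of the cell's measure to one child and 1-m to the
    other. A scale-independent p(m) means the same b at every level.
    Returns the measure (e.g. particle concentration) on 2**levels cells.
    """
    measure = np.array([1.0])
    for _ in range(levels):
        m = rng.beta(beta_param, beta_param, size=measure.size)
        measure = np.column_stack((m * measure, (1.0 - m) * measure)).ravel()
    return measure

field = multiplicative_cascade(12)
print(field.max() / field.mean())   # intermittency: rare, strongly enhanced cells
```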
Dynamic Smagorinsky model on anisotropic grids
NASA Technical Reports Server (NTRS)
Scotti, A.; Meneveau, C.; Fatica, M.
1996-01-01
Large Eddy Simulation (LES) of complex-geometry flows often involves highly anisotropic meshes. To examine the performance of the dynamic Smagorinsky model on such grids in a controlled fashion, simulations of forced isotropic turbulence are performed using highly anisotropic discretizations. The resulting model coefficients are compared with a theoretical prediction (Scotti et al., 1993). Two extreme cases are considered: pancake-like grids, for which two directions are poorly resolved compared to the third, and pencil-like grids, where one direction is poorly resolved compared to the other two. For pancake-like grids the dynamic model yields the results expected from the theory (increasing coefficient with increasing aspect ratio), whereas for pencil-like grids the dynamic model does not agree with the theoretical prediction (with detrimental effects only on the smallest resolved scales). A possible explanation of the departure is attempted, and it is shown that the problem may be circumvented by using an isotropic test filter at larger scales. Overall, all models considered give good large-scale results, confirming the general robustness of the dynamic and eddy-viscosity models. But in all cases, the predictions were poor for scales smaller than that of the worst-resolved direction.
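For reference, the quantity being modeled is the Smagorinsky eddy viscosity, nu_t = (C_s Δ)²|S|, and the choice of the filter width Δ on an anisotropic cell is exactly what the theoretical prediction addresses. The sketch below uses the common geometric-mean definition of Δ and a fixed coefficient as a simplified illustration; Scotti et al. (1993) derive an additional anisotropy correction to Δ, and the dynamic procedure computes the coefficient on the fly rather than fixing it as done here.

```python
import numpy as np

C_S = 0.16  # static Smagorinsky coefficient (illustrative value)

def smagorinsky_viscosity(strain_rate, dx, dy, dz, cs=C_S):
    """Eddy viscosity nu_t = (cs * Delta)^2 * |S| on an anisotropic cell.

    strain_rate : |S| = sqrt(2 S_ij S_ij) at the cell [1/s]
    Delta is taken as the geometric mean of the cell sizes; an anisotropy
    correction factor (Scotti et al., 1993) is omitted here for brevity.
    """
    delta = (dx * dy * dz) ** (1.0 / 3.0)
    return (cs * delta) ** 2 * strain_rate

# Pencil-like cell: one direction poorly resolved
print(smagorinsky_viscosity(strain_rate=5.0, dx=0.01, dy=0.01, dz=0.16))
```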
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Lai, Canhai; Marcy, Peter William
2017-05-01
A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
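A capture-with-confidence statement like the one above can be read as a quantile condition on the simulation ensemble: a flow rate qualifies if the lower 5% quantile of its ensemble of predicted efficiencies still exceeds 90%. The sketch below shows that screening step on synthetic numbers; the flow rates, ensemble size, and efficiency distributions are invented for illustration and are not the study's results.

```python
import numpy as np

def flow_rates_meeting_target(flow_rates, efficiency_samples,
                              target=0.90, confidence=0.95):
    """Flow rates whose ensemble efficiency meets `target` with `confidence`.

    efficiency_samples : array of shape (n_flow_rates, n_ensemble_members)
                         of capture efficiencies from the upscaled model run
                         with parameters drawn from the calibrated posteriors.
    A flow rate qualifies if the (1 - confidence) quantile of its ensemble
    is at least the target efficiency.
    """
    lower_bound = np.quantile(efficiency_samples, 1.0 - confidence, axis=1)
    return np.asarray(flow_rates)[lower_bound >= target]

# Synthetic illustration: 3 candidate flow rates, 200 ensemble members each
rng = np.random.default_rng(7)
flows = [1.0, 1.5, 2.0]                      # arbitrary units
eff = np.clip(rng.normal([0.95, 0.92, 0.88], 0.02, size=(200, 3)).T, 0, 1)
print(flow_rates_meeting_target(flows, eff))
```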
Mallaney, Mary; Wang, Szu-Han; Sreedhara, Alavattam
2014-01-01
During a small-scale cell culture process producing a monoclonal antibody, a larger than expected difference was observed in the charge variants profile of the harvested cell culture fluid (HCCF) between the 2 L and larger scales (e.g., 400 L and 12 kL). Small-scale studies performed at the 2 L scale consistently showed an increase in acidic species when compared with the material made at larger scale. Since the 2 L bioreactors were made of clear transparent glass while the larger-scale reactors are made of stainless steel, the effect of ambient laboratory light on the cell culture process in 2 L bioreactors, as well as on handling of the HCCF, was carefully evaluated. Photoreactions in the 2 L glass bioreactors, including a light-mediated increase in acidic variants in HCCF and formulation buffers, were identified and carefully analyzed. While the acidic variants comprised a mixture of sialylated, reduced disulfide, crosslinked (nonreducible), glycated, and deamidated forms, an increase in the nonreducible forms, deamidation, and Met oxidation was predominantly observed under light stress. The monoclonal antibody produced in glass bioreactors that were protected from light behaved similarly to the one produced at the larger scale. Our data clearly indicate that care should be taken when glass bioreactors are used in cell culture studies during monoclonal antibody production. © 2014 American Institute of Chemical Engineers.
Heterogeneity and scaling land-atmospheric water and energy fluxes in climate systems
NASA Technical Reports Server (NTRS)
Wood, Eric F.
1993-01-01
The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many of the climatology research experiments. The acquisition of high-resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate how microscale land-atmosphere interactions relate to macroscale models. One essential research question is how to account for the small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, three modeling experiments were performed and are reviewed in the paper. The first is concerned with the aggregation of parameters and inputs for a terrestrial water and energy balance model. The second experiment analyzed the scaling behavior of hydrologic responses during rain events and between rain events. The third experiment compared the hydrologic responses from distributed models with a lumped model that uses spatially constant inputs and parameters. The results show that the patterns of small-scale variations can be represented statistically if the scale is larger than a representative elementary area scale, which appears to be about 2-3 times the correlation length of the process. For natural catchments this appears to be about 1-2 sq km. The results concerning distributed versus lumped representations are more complicated. When the processes are nonlinear, lumping results in biases; otherwise, a one-dimensional model based on 'equivalent' parameters provides quite good results. Further research is needed to fully understand these conditions.
NASA Astrophysics Data System (ADS)
Tseng, Yu-Heng; Meneveau, Charles; Parlange, Marc B.
2004-11-01
Large Eddy Simulations (LES) of atmospheric boundary-layer air movement in urban environments are especially challenging due to complex ground topography. Typically in such applications, fairly coarse grids must be used, where the subgrid-scale (SGS) model is expected to play a crucial role. A LES code using pseudo-spectral discretization in horizontal planes and second-order differencing in the vertical is implemented in conjunction with the immersed boundary method to incorporate complex ground topography, with the classic equilibrium log-law boundary condition in the near-wall region, and with several versions of the eddy-viscosity model: (1) the constant-coefficient Smagorinsky model, (2) the dynamic, scale-invariant Lagrangian model, and (3) the dynamic, scale-dependent Lagrangian model. Other planar-averaged type dynamic models are not suitable because spatial averaging is not possible without directions of statistical homogeneity. These SGS models are tested in LES of flow around a square cylinder and of flow over surface-mounted cubes. Effects on the mean flow are documented and found not to be major. Dynamic Lagrangian models give a physically more realistic SGS viscosity field, and in general, the scale-dependent Lagrangian model produces a larger Smagorinsky coefficient than the scale-invariant one, leading to reduced resolved rms velocities, especially in the boundary layers near the bluff bodies.
Seamount subduction underneath an accretionary wedge: modelling mass wasting and wedge collapse
NASA Astrophysics Data System (ADS)
Mannu, Utsav; Ueda, Kosuke; Willett, Sean; Gerya, Taras; Strasser, Michael
2017-04-01
Seamounts (h > 1 km) and knolls (h = 500-1000 m) cover about one-fifth of the total ocean floor area. These topographical highs of the ocean floor eventually get subducted. Subduction of these topographical features leads to severe deformation of the overriding plate and can cause extensive tectonic erosion and mass wasting of the frontal prism, which can ultimately cause a forearc wedge collapse. Large submarine landslides and the corresponding wedge collapse have previously been reported, for instance, in the northern part of the Hikurangi margin, where the landslide is known as the giant Ruatoria debris avalanche, and have also been frequently reported in several seismic sections along the Costa Rica margin. The size-frequency relation of landslides suggests that the average size of submarine landslides in margins with rough subducting plates tends to be larger. However, this observation has not yet been tested or explained by physical models. In numerical subduction models, landslides take place, if at all, on a much larger timescale (on the order of 10^4-10^5 years, depending on the time steps of the model) than in nature. On the other hand, numerical models simulating mass-wasting events such as avalanches and submarine landslides typically model single events in a much smaller spatio-temporal domain and do not consider long-term occurrence patterns of freely forming landslides. In this contribution, we present a multi-scale nested numerical approach to emulate short-term landslides within long-term progressive subduction. The numerical approach dynamically produces instantaneous submarine landslides and the resulting debris flow in the spatially and temporally refined inner model. The resulting changes in topography (e.g. due to submarine landslides) are then applied back to an outer, larger-scale model instance that addresses wedge evolution. We use this approach to study the evolution of the accretionary wedge during seamount subduction.
A Multi-Scale, Integrated Approach to Representing Watershed Systems
NASA Astrophysics Data System (ADS)
Ivanov, Valeriy; Kim, Jongho; Fatichi, Simone; Katopodes, Nikolaos
2014-05-01
Understanding and predicting process dynamics across a range of scales are fundamental challenges for basic hydrologic research and practical applications. This is particularly true when larger-spatial-scale processes, such as surface-subsurface flow and precipitation, need to be translated to the fine space-time scale dynamics of processes, such as channel hydraulics and sediment transport, that are often of primary interest. Inferring characteristics of fine-scale processes from uncertain coarse-scale climate projection information poses additional challenges. We have developed an integrated model, tRIBS+VEGGIE-FEaST, that simulates hydrological processes, flow dynamics, erosion, and sediment transport. The model is designed to take advantage of the current wealth of data representing watershed topography, vegetation, soil, and land use, and to explore the hydrological effects of physical factors and their feedback mechanisms over a range of scales. We illustrate how the modeling system connects the partitioning of precipitation into hydrologic runoff to the dynamics of flow, erosion, and sedimentation, and how the soil substrate condition can affect the latter processes, resulting in a non-unique response. We further illustrate an approach to using downscaled climate change information with a process-based model to infer the moments of hydrologic variables in future climate conditions and explore the impact of climate information uncertainty.
Thermal shallow water models of geostrophic turbulence in Jovian atmospheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warneford, Emma S., E-mail: emma.warneford@maths.ox.ac.uk; Dellar, Paul J., E-mail: dellar@maths.ox.ac.uk
2014-01-15
Conventional shallow water theory successfully reproduces many key features of the Jovian atmosphere: a mixture of coherent vortices and stable, large-scale, zonal jets whose amplitude decreases with distance from the equator. However, both freely decaying and forced-dissipative simulations of the shallow water equations in Jovian parameter regimes invariably yield retrograde equatorial jets, while Jupiter itself has a strong prograde equatorial jet. Simulations by Scott and Polvani [“Equatorial superrotation in shallow atmospheres,” Geophys. Res. Lett. 35, L24202 (2008)] have produced prograde equatorial jets through the addition of a model for radiative relaxation in the shallow water height equation. However, their model does not conserve mass or momentum in the active layer, and produces mid-latitude jets much weaker than the equatorial jet. We present the thermal shallow water equations as an alternative model for Jovian atmospheres. These equations permit horizontal variations in the thermodynamic properties of the fluid within the active layer. We incorporate a radiative relaxation term in the separate temperature equation, leaving the mass and momentum conservation equations untouched. Simulations of this model in the Jovian regime yield a strong prograde equatorial jet, and larger amplitude mid-latitude jets than the Scott and Polvani model. For both models, the slope of the non-zonal energy spectra is consistent with the classic Kolmogorov scaling, and the slope of the zonal energy spectra is consistent with the much steeper spectrum observed for Jupiter. We also perform simulations of the thermal shallow water equations for Neptunian parameter values, with a radiative relaxation time scale calculated for the same 25 mbar pressure level we used for Jupiter. These Neptunian simulations reproduce the broad, retrograde equatorial jet and prograde mid-latitude jets seen in observations. The much longer radiative time scale for the colder planet Neptune explains the transition from a prograde to a retrograde equatorial jet, while the broader jets are due to the deformation radius being a larger fraction of the planetary radius.
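For orientation, the class of equations described above can be written schematically as follows. This is a hedged sketch assuming the standard Ripa-type thermal shallow water system, with the radiative relaxation term added only to the temperature (reduced-gravity) equation as the abstract states; the authors' exact formulation and notation may differ.

```latex
\begin{aligned}
\partial_t h + \nabla\!\cdot\!(h\mathbf{u}) &= 0, \\
\partial_t \mathbf{u} + (\mathbf{u}\!\cdot\!\nabla)\mathbf{u} + f\,\hat{\mathbf{z}}\times\mathbf{u}
  &= -\,\Theta\,\nabla h \;-\; \tfrac{1}{2}\,h\,\nabla\Theta, \\
\partial_t \Theta + \mathbf{u}\!\cdot\!\nabla\Theta &= -\,\frac{\Theta - \Theta_{\mathrm{eq}}}{\tau_{\mathrm{rad}}},
\end{aligned}
```

where h is the active-layer thickness, u the horizontal velocity, and Θ a spatially varying temperature-like reduced gravity; mass and momentum conservation are untouched by the relaxation term.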
Polarization and Compressibility of Oblique Kinetic Alfven Waves
NASA Technical Reports Server (NTRS)
Hunana, Peter; Goldstein, M. L.; Passot, T.; Sulem, P. L.; Laveder, D.; Zank, G. P.
2012-01-01
Even though the solar wind, as a collisionless plasma, is properly described by the kinetic Maxwell-Vlasov description, it can be argued that much of our understanding of solar wind observational data comes from an interpretation and numerical modeling which is based on a fluid description of magnetohydrodynamics. In recent years, there has been significant interest in better understanding the importance of kinetic effects, i.e. the differences between the kinetic and usual fluid descriptions. Here we concentrate on physical properties of oblique kinetic Alfvén waves (KAWs), which are often recognized as one of the key ingredients in the solar wind turbulence cascade. We use three different fluid models with various degrees of complexity and calculate the polarization and magnetic compressibility of oblique KAWs (propagation angle θ = 88°), which we compare to solutions derived from linear kinetic theory. We explore a wide range of possible proton plasma beta, β = [0.1, 10.0], and a wide range of length scales, k r_L = [0.001, 10.0]. It is shown that the classical isotropic two-fluid model is very compressible in comparison with kinetic theory and that the largest discrepancy occurs at scales larger than the proton gyroscale. We also show that the two-fluid model contains a large error in the polarization of the electric field, even at scales k r_L ≪ 1. Furthermore, to understand these discrepancies between the two-fluid model and the kinetic theory, we employ two versions of the Landau fluid model that incorporate linear low-frequency kinetic effects such as Landau damping and finite Larmor radius (FLR) corrections into the fluid description. It is shown that Landau damping significantly reduces the magnetic compressibility and that FLR corrections (i.e. nongyrotropic contributions) are required to correctly capture the polarization. We also show that, in addition to Landau damping, FLR corrections are necessary to accurately describe the damping rate of KAWs. We conclude that kinetic effects are important even at scales which are significantly larger than the proton gyroscale, k r_L ≪ 1.
NASA Technical Reports Server (NTRS)
Starr, D. OC.; Cox, S. K.
1985-01-01
A simplified cirrus cloud model is presented which may be used to investigate the role of various physical processes in the life cycle of a cirrus cloud. The model is a two-dimensional, time-dependent, Eulerian numerical model where the focus is on cloud-scale processes. Parametrizations are developed to account for phase changes of water, radiative processes, and the effects of microphysical structure on the vertical flux of ice water. The results of a simulation of a thin cirrostratus cloud are given. The results of numerical experiments performed with the model are described in order to demonstrate the important role of cloud-scale processes in determining the cloud properties maintained in response to larger scale forcing. The effects of microphysical composition and radiative processes are considered, as well as their interaction with thermodynamic and dynamic processes within the cloud. It is shown that cirrus clouds operate in an entirely different manner than liquid phase stratiform clouds.
Overview of the Meso-NH model version 5.4 and its applications
NASA Astrophysics Data System (ADS)
Lac, Christine; Chaboureau, Jean-Pierre; Masson, Valéry; Pinty, Jean-Pierre; Tulet, Pierre; Escobar, Juan; Leriche, Maud; Barthe, Christelle; Aouizerats, Benjamin; Augros, Clotilde; Aumond, Pierre; Auguste, Franck; Bechtold, Peter; Berthet, Sarah; Bielli, Soline; Bosseur, Frédéric; Caumont, Olivier; Cohard, Jean-Martial; Colin, Jeanne; Couvreux, Fleur; Cuxart, Joan; Delautier, Gaëlle; Dauhut, Thibaut; Ducrocq, Véronique; Filippi, Jean-Baptiste; Gazen, Didier; Geoffroy, Olivier; Gheusi, François; Honnert, Rachel; Lafore, Jean-Philippe; Lebeaupin Brossier, Cindy; Libois, Quentin; Lunet, Thibaut; Mari, Céline; Maric, Tomislav; Mascart, Patrick; Mogé, Maxime; Molinié, Gilles; Nuissier, Olivier; Pantillon, Florian; Peyrillé, Philippe; Pergaud, Julien; Perraud, Emilie; Pianezze, Joris; Redelsperger, Jean-Luc; Ricard, Didier; Richard, Evelyne; Riette, Sébastien; Rodier, Quentin; Schoetter, Robert; Seyfried, Léo; Stein, Joël; Suhre, Karsten; Taufour, Marie; Thouron, Odile; Turner, Sandra; Verrelle, Antoine; Vié, Benoît; Visentin, Florian; Vionnet, Vincent; Wautelet, Philippe
2018-05-01
This paper presents the Meso-NH model version 5.4. Meso-NH is a non-hydrostatic atmospheric research model that is applied to a broad range of resolutions, from synoptic to turbulent scales, and is designed for studies of physics and chemistry. It is a limited-area model employing advanced numerical techniques, including monotonic advection schemes for scalar transport and fourth-order centered or odd-order WENO advection schemes for momentum. The model includes state-of-the-art physics parameterization schemes that are important to represent convective-scale phenomena and turbulent eddies, as well as flows at larger scales. In addition, Meso-NH has been expanded to provide capabilities for a range of Earth system prediction applications such as chemistry and aerosols, electricity and lightning, hydrology, wildland fires, volcanic eruptions, and cyclones with ocean coupling. Here, we present the main innovations to the dynamics and physics of the code since the pioneering paper of Lafore et al. (1998) and provide an overview of recent applications and couplings.
Modeling annual mallard production in the prairie-parkland region
Miller, M.W.
2000-01-01
Biologists have proposed several environmental factors that might influence production of mallards (Anas platyrhynchos) nesting in the prairie-parkland region of the United States and Canada. These factors include precipitation, cold spring temperatures, wetland abundance, and upland breeding habitat. I used long-term historical data sets of climate, wetland numbers, agricultural land use, and size of breeding mallard populations in multiple regression analyses to model annual indices of mallard production. Models were constructed at 2 scales: a continental scale that encompassed most of the mid-continental breeding range of mallards and a stratum-level scale that included 23 portions of that same breeding range. The production index at the continental scale was the estimated age ratio of mid-continental mallards in early fall; at the stratum scale my production index was the estimated number of broods of all duck species within an aerial survey stratum. Size of breeding mallard populations in May, and pond numbers in May and July, best modeled production at the continental scale. Variables that best modeled production at the stratum scale differed by region. Crop variables tended to appear more in models for western Canadian strata; pond variables predominated in models for United States strata; and spring temperature and pond variables dominated models for eastern Canadian strata. An index of cold spring temperatures appeared in 4 of 6 models for aspen parkland strata, and in only 1 of 11 models for strata dominated by prairie. Stratum-level models suggest that regional factors influencing mallard production are not evident at a larger scale. Testing these potential factors in a manipulative fashion would improve our understanding of mallard population dynamics, improving our ability to manage the mid-continental mallard population.
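A minimal sketch of the kind of regression described for the continental scale is given below. The numbers are invented placeholders; only the structure (an annual production index regressed on May breeding-population size and May and July pond counts) follows the abstract, and a real analysis would use the long-term historical series together with formal model selection.

```python
import numpy as np

# Hypothetical continental-scale records (one row per year): May breeding
# population, May ponds, July ponds, and the fall age-ratio production index.
breeding_may = np.array([7.2, 6.8, 8.1, 9.0, 7.5])
ponds_may    = np.array([3.1, 2.4, 4.0, 4.8, 3.3])
ponds_july   = np.array([1.9, 1.5, 2.6, 3.1, 2.0])
age_ratio    = np.array([0.95, 0.80, 1.10, 1.25, 0.98])

# Ordinary least squares with an intercept: the three predictors are those
# named for the continental-scale model in the abstract (values are made up).
X = np.column_stack([np.ones_like(age_ratio), breeding_may, ponds_may, ponds_july])
coef, *_ = np.linalg.lstsq(X, age_ratio, rcond=None)
print("intercept and coefficients:", np.round(coef, 3))
```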
On the Representation of Subgrid Microtopography Effects in Process-based Hydrologic Models
NASA Astrophysics Data System (ADS)
Jan, A.; Painter, S. L.; Coon, E. T.
2017-12-01
Increased availability of high-resolution digital elevation data is enabling process-based hydrologic modeling on finer and finer scales. However, spatial variability in surface elevation (microtopography) exists below the scale of a typical hyper-resolution grid cell and has the potential to play a significant role in water retention, runoff, and surface/subsurface interactions. Though the concept of microtopographic features (depressions, obstructions) and the associated implications for flow and discharge are well established, representing those effects in watershed-scale integrated surface/subsurface hydrology models remains a challenge. Using the complex and coupled hydrologic environment of the Arctic polygonal tundra as an example, we study the effects of submeter topography and present a subgrid model parameterized by small-scale spatial heterogeneities for use in hyper-resolution models with polygons at a scale of 15-20 meters forming the surface cells. The subgrid model alters the flow and storage terms in the diffusion wave equation for surface flow. We compare our results against sub-meter scale simulations (which act as a benchmark) and against hyper-resolution models without the subgrid representation. The initiation of runoff in the fine-scale simulations is delayed and the recession curve is slowed relative to simulated runoff using the hyper-resolution model with no subgrid representation. Our subgrid modeling approach improves the representation of runoff and water retention relative to models that ignore subgrid topography. We evaluate different strategies for parameterizing the subgrid model and present a classification-based method for moving efficiently to larger landscapes. This work was supported by the Interoperable Design of Extreme-scale Application Software (IDEAS) project and the Next-Generation Ecosystem Experiments-Arctic (NGEE Arctic) project. NGEE-Arctic is supported by the Office of Biological and Environmental Research in the DOE Office of Science.
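As context for the "flow and storage terms" mentioned above, the surface-flow component of such models is commonly a diffusion-wave approximation; the following is a hedged schematic form with Manning friction, not the specific equation set of this model. In such a setting, a subgrid parameterization would replace the ponded depth h in the storage and conveyance terms with effective functions of the cell-averaged depth derived from the sub-meter topography.

```latex
\frac{\partial h}{\partial t} + \nabla\cdot\mathbf{q} = S,
\qquad
\mathbf{q} \;=\; -\,\frac{h^{5/3}}{n\,\sqrt{\lvert \nabla (h+z) \rvert}}\,\nabla (h+z),
```

where h is ponded depth, z bed elevation, n Manning's roughness, and S the source/sink term (e.g., rainfall minus infiltration).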
NASA Astrophysics Data System (ADS)
McNamara, J. P.; Semenova, O.; Restrepo, P. J.
2011-12-01
Highly instrumented research watersheds provide excellent opportunities for investigating hydrologic processes. A danger, however, is that the processes observed at a particular research watershed are too specific to that watershed and not representative even of the larger watershed that contains it. Thus, models developed from such partial observations may not be suitable for general hydrologic use. Demonstrating the upscaling of hydrologic processes from research watersheds to larger watersheds is therefore essential to validate concepts and test model structure. The Hydrograph model has been developed as a general-purpose, process-based, distributed hydrologic system. In its applications and further development we evaluate the scaling of model concepts and parameters across a wide range of hydrologic landscapes. All models, either lumped or distributed, are based on a discretization concept. It is common practice to discretize watersheds into so-called hydrologic units or hydrologic landscapes assumed to have homogeneous hydrologic functioning. If a model structure is fixed, differences in hydrologic functioning (differences in hydrologic landscapes) should be reflected in a specific set of model parameters. Research watersheds make it possible to combine processes in reasonable detail into typical hydrologic concepts such as the hydrologic units, hydrologic forms, and runoff formation complexes of the Hydrograph model. Here, by upscaling we mean not the upscaling of a single process but the upscaling of such unified hydrologic functioning. The simulation of runoff processes for the Dry Creek research watershed, Idaho, USA (27 km2) was undertaken using the Hydrograph model. The information on the watershed was provided by Boise State University and included a GIS database of watershed characteristics and a detailed hydrometeorological observational dataset. The model provided good simulation results in terms of runoff and the variable states of soil and snow over the simulation period 2000-2009. The parameters of the model were hand-adjusted based on reasoned judgment, observational data, and the available understanding of the underlying processes. For the first run, some processes, such as the impact of riparian vegetation on runoff and streamflow/groundwater interaction, were handled in a conceptual way. It was shown that the Hydrograph model, which requires only a modest amount of parameter calibration, may also serve as a quality control for observations. Based on the obtained parameter values and the process understanding gained at the research watershed, the model was applied to larger watersheds located in a similar environment: the Boise River at South Fork (1660 km2) and at Twin Springs (2155 km2). The evaluation of the results of this upscaling will be presented.
NASA Astrophysics Data System (ADS)
Zeng, F.; Collatz, G. J.; Ivanoff, A.
2013-12-01
We assessed the performance of the Carnegie-Ames-Stanford Approach - Global Fire Emissions Database (CASA-GFED3) terrestrial carbon cycle model in simulating seasonal cycle and interannual variability (IAV) of global and regional carbon fluxes and uncertainties associated with model parameterization. Key model parameters were identified from sensitivity analyses and their uncertainties were propagated through model processes using the Monte Carlo approach to estimate the uncertainties in carbon fluxes and pool sizes. Three independent flux data sets, the global gross primary productivity (GPP) upscaled from eddy covariance flux measurements by Jung et al. (2011), the net ecosystem exchange (NEE) estimated by CarbonTracker, and the eddy covariance flux observations, were used to evaluate modeled fluxes and the uncertainties. Modeled fluxes agree well with both Jung's GPP and CarbonTracker NEE in the amplitude and phase of seasonal cycle, except in the case of GPP in tropical regions where Jung et al. (2011) showed larger fluxes and seasonal amplitude. Modeled GPP IAV is positively correlated (p < 0.1) with Jung's GPP IAV except in the tropics and temperate South America. The correlations between modeled NEE IAV and CarbonTracker NEE IAV are weak at regional to continental scales but stronger when fluxes are aggregated to >40°N latitude. At regional to continental scales flux uncertainties were larger than the IAV in the fluxes for both Jung's GPP and CarbonTracker NEE. Comparisons with eddy covariance flux observations are focused on sites within regions and years of recorded large-scale climate anomalies. We also evaluated modeled biomass using other independent continental biomass estimates and found good agreement. From the comparisons we identify the strengths and weaknesses of the model to capture the seasonal cycle and IAV of carbon fluxes and highlight ways to improve model performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denbleyker, Alan; Liu, Yuzhi; Meurice, Y.
We consider the sign problem for classical spin models at complex β = 1/g_0^2 on L×L lattices. We show that the tensor renormalization group (TRG) method allows reliable calculations for larger Im β than the reweighting Monte Carlo method. For the Ising model with complex β we compare our results with the exact Onsager-Kaufman solution at finite volume. The Fisher zeros can be determined precisely with the TRG method. We check the convergence of the TRG method for the O(2) model on L×L lattices when the number of states D_s increases. We show that the finite-size scaling of the calculated Fisher zeros agrees very well with the Kosterlitz-Thouless transition assumption, and we predict the locations for larger volumes. The locations of these zeros agree with reweighting Monte Carlo calculations for small volume. The application of the method to the O(2) model with a chemical potential is briefly discussed.
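To make the notion of Fisher zeros concrete, the sketch below evaluates the partition function of a tiny 2D Ising lattice at complex β by brute-force enumeration and scans Im β for near-zeros of |Z|. This is only an illustrative stand-in for the small-volume checks mentioned above: it is not a TRG implementation, and the lattice size and scan range are arbitrary assumptions.

```python
import itertools
import numpy as np

def ising_partition_function(L, beta):
    """Exact Z(beta) for an L x L periodic Ising lattice (brute force).

    beta may be complex; feasible only for very small L (here L <= 4).
    """
    n = L * L
    Z = 0.0 + 0.0j
    for bits in itertools.product((-1, 1), repeat=n):
        s = np.array(bits).reshape(L, L)
        # nearest-neighbour energy with periodic boundaries
        E = -(np.sum(s * np.roll(s, 1, axis=0)) + np.sum(s * np.roll(s, 1, axis=1)))
        Z += np.exp(-beta * E)
    return Z

# Scan the complex-beta plane near the critical coupling and look for
# near-zeros of Z: these are the finite-volume Fisher zeros.
L = 4
re_beta = 0.4407  # close to the infinite-volume critical coupling ln(1+sqrt(2))/2
for im_beta in np.linspace(0.05, 0.40, 8):
    beta = re_beta + 1j * im_beta
    print(f"Im(beta) = {im_beta:.3f}   |Z| = {abs(ising_partition_function(L, beta)):.3e}")
```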
Modeling nutrient in-stream processes at the watershed scale using Nutrient Spiralling metrics
NASA Astrophysics Data System (ADS)
Marcé, R.; Armengol, J.
2009-01-01
One of the fundamental problems of using large-scale biogeochemical models is the uncertainty involved in aggregating the components of fine-scale deterministic models in watershed applications, and in extrapolating the results of field-scale measurements to larger spatial scales. Although spatial or temporal lumping may reduce the problem, information obtained during fine-scale research may not apply to lumped categories. Thus, the use of knowledge gained through fine-scale studies to predict coarse-scale phenomena is not straightforward. In this study, we used the nutrient uptake metrics defined in the Nutrient Spiralling concept to formulate the equations governing total phosphorus in-stream fate in a watershed-scale biogeochemical model. The rationale of this approach relies on the fact that the working unit for the nutrient in-stream processes of most watershed-scale models is the reach, the same unit used in field research based on the Nutrient Spiralling concept. Automatic calibration of the model using data from the study watershed confirmed that the Nutrient Spiralling formulation is a convenient simplification of the biogeochemical transformations involved in total phosphorus in-stream fate. Following calibration, the model was used as a heuristic tool in two ways. First, we compared the Nutrient Spiralling metrics obtained during calibration with results obtained during field-based research in the study watershed. The simulated and measured metrics were similar, suggesting that information collected at the reach scale during research based on the Nutrient Spiralling concept can be directly incorporated into models, without the problems associated with upscaling results from fine-scale studies. Second, we used results from our model to examine some patterns observed in several reports on Nutrient Spiralling metrics measured in impaired streams. Although these two exercises involve circular reasoning and, consequently, cannot validate any hypothesis, this is a powerful example of how models can work as heuristic tools to compare hypotheses and stimulate research in ecology.
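The way Nutrient Spiralling metrics typically enter a reach-scale formulation can be sketched as follows; this is a generic first-order uptake illustration with made-up numbers, not the calibrated equations of the model described above.

```python
import numpy as np

def reach_outflow_concentration(c_in, reach_length, sw):
    """First-order in-stream uptake: C(x) = C_in * exp(-x / Sw).

    Sw is the nutrient uptake (spiralling) length of the reach [m].
    """
    return c_in * np.exp(-reach_length / sw)

def uptake_velocity(discharge, width, sw):
    """Uptake velocity v_f = Q / (w * Sw), a common spiralling metric."""
    return discharge / (width * sw)

# Hypothetical reach: 2 km long, Sw = 5 km for total phosphorus
c_out = reach_outflow_concentration(c_in=0.10, reach_length=2000.0, sw=5000.0)  # mg/L
vf = uptake_velocity(discharge=0.5, width=4.0, sw=5000.0)                       # m3/s, m, m
print(f"outlet TP concentration: {c_out:.4f} mg/L, uptake velocity: {vf:.2e} m/s")
```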
A Novel BA Complex Network Model on Color Template Matching
Han, Risheng; Yue, Guangxue; Ding, Hui
2014-01-01
A novel BA complex network model of color space is proposed based on two fundamental rules of BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing evolving process of template's color distribution. And then the template's BA complex network model can be used to select important color pixels which have much larger effects than other color pixels in matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD based matching and SAD based matching. Experiments show the performance of color template matching results can be improved based on the proposed algorithm. To the best of our knowledge, this is the first study about how to model the color space of images using a proper complex network model and apply the complex network model to template matching. PMID:25243235
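The two BA rules named above (growth and preferential attachment) can be sketched in a few lines. The mapping of network nodes to quantized template colors is only gestured at here with hypothetical names, since the paper's specific construction of the color-space network is not reproduced.

```python
import random

def barabasi_albert(n_nodes, m):
    """Grow a BA scale-free network: each new node attaches to m existing
    nodes chosen with probability proportional to their current degree."""
    # start from a small fully connected seed of m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    targets = [v for e in edges for v in e]          # degree-weighted sampling pool
    for new in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))       # preferential attachment
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])                 # update degree weights
    return edges

# Hypothetical use: nodes are quantized colors of a template image; the
# highest-degree nodes mark the "important" colors used for matching.
random.seed(1)
edges = barabasi_albert(n_nodes=500, m=2)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
hubs = sorted(degree, key=degree.get, reverse=True)[:5]
print("highest-degree nodes (candidate 'important' colors):", hubs)
```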
A novel BA complex network model on color template matching.
Han, Risheng; Shen, Shigen; Yue, Guangxue; Ding, Hui
2014-01-01
A novel BA complex network model of color space is proposed based on two fundamental rules of BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing evolving process of template's color distribution. And then the template's BA complex network model can be used to select important color pixels which have much larger effects than other color pixels in matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD based matching and SAD based matching. Experiments show the performance of color template matching results can be improved based on the proposed algorithm. To the best of our knowledge, this is the first study about how to model the color space of images using a proper complex network model and apply the complex network model to template matching.
Power spectrum, correlation function, and tests for luminosity bias in the CfA redshift survey
NASA Astrophysics Data System (ADS)
Park, Changbom; Vogeley, Michael S.; Geller, Margaret J.; Huchra, John P.
1994-08-01
We describe and apply a method for directly computing the power spectrum of the galaxy distribution in the extension of the Center for Astrophysics Redshift Survey. Tests show that our technique accurately reproduces the true power spectrum for k > 0.03 h Mpc^-1. The dense sampling and large spatial coverage of this survey allow accurate measurement of the redshift-space power spectrum on scales from 5 to approximately 200 h^-1 Mpc. The power spectrum has slope n ≈ -2.1 on small scales (λ ≤ 25 h^-1 Mpc) and n ≈ -1.1 on scales 30 < λ < 120 h^-1 Mpc. On larger scales the power spectrum flattens somewhat, but we do not detect a turnover. Comparison with N-body simulations of cosmological models shows that an unbiased, open-universe CDM model (Ω h = 0.2) and a nonzero cosmological constant (ΛCDM) model (Ω h = 0.24, λ_0 = 0.6, b = 1.3) match the CfA power spectrum over the wavelength range we explore. The standard biased CDM model (Ω h = 0.5, b = 1.5) fails (99% significance level) because it has insufficient power on scales λ > 30 h^-1 Mpc. Biased CDM with a normalization that matches the Cosmic Microwave Background (CMB) anisotropy (Ω h = 0.5, b = 1.4, σ_8(mass) = 1) has too much power on small scales to match the observed galaxy power spectrum. This model with b = 1 matches both the Cosmic Background Explorer (COBE) anisotropy and the small-scale power spectrum but has insufficient power on scales λ ≈ 100 h^-1 Mpc. We derive a formula for the effect of small-scale peculiar velocities on the power spectrum and combine this formula with the linear-regime amplification described by Kaiser to compute an estimate of the real-space power spectrum. Two tests reveal luminosity bias in the galaxy distribution: First, the amplitude of the power spectrum is approximately 40% larger for the brightest 50% of galaxies in volume-limited samples with M_lim > M*. This bias in the power spectrum is independent of scale, consistent with the peaks-bias paradigm for galaxy formation. Second, the distribution of local density around galaxies shows that regions of moderate and high density contain both very bright (M < M* = -19.2 + 5 log h) and fainter galaxies, but that voids preferentially harbor fainter galaxies (≈ 2σ significance level).
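The basic mechanics of estimating such a power spectrum can be sketched as follows: grid the galaxy positions, Fourier transform the density contrast, and spherically average the squared modes. This is a generic illustration (nearest-grid-point assignment, no survey window, selection function, or shot-noise correction), not the method of the paper; the test uses an unclustered Poisson catalogue whose P(k) should be flat at the shot-noise level V/N.

```python
import numpy as np

def power_spectrum(positions, box_size, n_grid=64):
    """Isotropic P(k) estimate for a point set in a periodic cubic box."""
    # density contrast delta on a cubic grid (nearest-grid-point assignment)
    hist, _ = np.histogramdd(positions % box_size,
                             bins=(n_grid,) * 3,
                             range=[(0, box_size)] * 3)
    delta = hist / hist.mean() - 1.0
    delta_k = np.fft.rfftn(delta)
    pk3d = np.abs(delta_k) ** 2 * (box_size ** 3 / n_grid ** 6)

    # spherical average over shells in |k|
    k1d = 2 * np.pi * np.fft.fftfreq(n_grid, d=box_size / n_grid)
    kz = 2 * np.pi * np.fft.rfftfreq(n_grid, d=box_size / n_grid)
    kx, ky, kzz = np.meshgrid(k1d, k1d, kz, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kzz**2)
    bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), 20)
    which = np.digitize(kmag, bins)
    pk = [pk3d[which == i].mean() for i in range(1, len(bins))]
    return 0.5 * (bins[:-1] + bins[1:]), np.array(pk)

# Poisson (unclustered) catalogue: expect P(k) ~ V / N = 200^3 / 5000
rng = np.random.default_rng(0)
k, pk = power_spectrum(rng.uniform(0, 200.0, size=(5000, 3)), box_size=200.0)
print(k[:3], pk[:3])
```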
Impacts of uncertainties in European gridded precipitation observations on regional climate analysis
Gobiet, Andreas
2016-01-01
Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio-temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan-European data sets and a set that combines eight very high-resolution station-based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post-processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small-scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate-mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments. PMID:28111497
Prein, Andreas F; Gobiet, Andreas
2017-01-01
Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio-temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan-European data sets and a set that combines eight very high-resolution station-based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post-processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small-scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate-mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments.
Do swimming animals mix the ocean?
NASA Astrophysics Data System (ADS)
Dabiri, John
2013-11-01
Perhaps. The oceans are teeming with billions of swimming organisms, from bacteria to blue whales. Current research efforts in biological oceanography typically focus on the impact of the marine environment on the organisms within. We ask the opposite question: can organisms in the ocean, especially those that migrate vertically every day and regionally every year, change the physical structure of the water column? The answer has potentially important implications for ecological models at local scale and climate modeling at global scales. This talk will introduce the still-controversial prospect of biogenic ocean mixing, beginning with evidence from measurements in the field. More recent laboratory-scale experiments, in which we create controlled vertical migrations of plankton aggregations using laser signaling, provide initial clues toward a mechanism to achieve efficient mixing at scales larger than the individual organisms. These results are compared and contrasted with theoretical models, and they highlight promising avenues for future research in this area. Funding from the Office of Naval Research and the National Science Foundation is gratefully acknowledged.
NASA Astrophysics Data System (ADS)
Martinez, Luis; Meneveau, Charles
2014-11-01
Large Eddy Simulations (LES) of the flow past a single wind turbine with uniform inflow have been performed. A goal of the simulations is to compare two turbulence subgrid-scale models and their effects in predicting the initial breakdown, transition and evolution of the wake behind the turbine. Prior works have often observed negligible sensitivities to subgrid-scale models. The flow is modeled using an in-house LES with pseudo-spectral discretization in horizontal planes and centered finite differencing in the vertical direction. Turbines are represented using the actuator line model. We compare the standard constant-coefficient Smagorinsky subgrid-scale model with the Lagrangian Scale Dependent Dynamic model (LSDM). The LSDM model predicts faster transition to turbulence in the wake, whereas the standard Smagorinsky model predicts significantly delayed transition. The specified Smagorinsky coefficient is larger than the dynamic one on average, increasing diffusion and thus delaying transition. A second goal is to compare the resulting near-blade properties, such as local aerodynamic forces, from the LES with Blade Element Momentum Theory. Results will also be compared with those of the SOWFA package, the wind energy CFD framework from NREL. This work is supported by NSF (IGERT and IIA-1243482), uses XSEDE computational resources, and has benefitted from interactions with Dr. M. Churchfield of NREL.
Designing Assessments of Microworld Training for Combat Service Support Staff
2003-01-01
training for distribution management skills as a part of a larger project that entailed making changes to the current structure, content, and methods...of CSS training. Microworld models are small-scale simulations of organizations and operations. They are useful for training distribution management processes...pilot studies using a microworld model for U.S. Army Reserve (USAR) soldiers in Distribution Management Centers. The degree to which trainees learned
NASA Technical Reports Server (NTRS)
Carlson, T. N. (Principal Investigator)
1981-01-01
Efforts were made (1) to bring the image processing and boundary layer model operation into a completely interactive mode and (2) to test a method for determining the surface energy budget and surface moisture availability and thermal inertia on a scale appreciably larger than that of the city. A region a few hundred kilometers on a side centered over southern Indiana was examined.
Analysis of the Relationship Between Climate and NDVI Variability at Global Scales
NASA Technical Reports Server (NTRS)
Zeng, Fan-Wei; Collatz, G. James; Pinzon, Jorge; Ivanoff, Alvaro
2011-01-01
Interannual variability in modeled (CASA) carbon flux is in part caused by interannual variability in the Normalized Difference Vegetation Index (NDVI)-derived Fraction of Photosynthetically Active Radiation (FPAR). This study confirms a mechanism producing variability in modeled NPP: NDVI (FPAR) interannual variability is strongly driven by climate, and this climate-driven variability in NDVI (FPAR) can lead to much larger fluctuations in NPP than those obtained when NPP is computed from an FPAR climatology.
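For context, FPAR enters CASA-type models through a light-use-efficiency relation of the schematic form below (a standard textbook statement of the approach, not the exact configuration used in this study), which is why climate-driven FPAR variability propagates directly into NPP:

```latex
\mathrm{NPP}(x,t) \;=\; \mathrm{FPAR}(x,t)\,\mathrm{PAR}(x,t)\;\varepsilon_{\max}\;T_{\varepsilon}(x,t)\,W_{\varepsilon}(x,t),
```

where PAR is incident photosynthetically active radiation, ε_max a maximum light-use efficiency, and T_ε and W_ε temperature and moisture stress scalars.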
Does temperature nudging overwhelm aerosol radiative ...
For over two decades, data assimilation (popularly known as nudging) methods have been used for improving regional weather and climate simulations by reducing model biases in meteorological parameters and processes. Similar practice is also popular in many regional integrated meteorology-air quality models that include aerosol direct and indirect effects. However in such multi-modeling systems, temperature changes due to nudging can compete with temperature changes induced by radiatively active & hygroscopic short-lived tracers leading to interesting dilemmas: From weather and climate prediction’s (retrospective or future) point of view when nudging is continuously applied, is there any real added benefit of using such complex and computationally expensive regional integrated modeling systems? What are the relative sizes of these two competing forces? To address these intriguing questions, we convert temperature changes due to nudging into radiative fluxes (referred to as the pseudo radiative forcing, PRF) at the surface and troposphere, and compare the net PRF with the reported aerosol radiative forcing. Results indicate that the PRF at surface dominates PRF at top of the atmosphere (i.e., the net). Also, the net PRF is about 2-4 times larger than estimated aerosol radiative forcing at regional scales while it is significantly larger at local scales. These results also show large surface forcing errors at many polluted urban sites. Thus, operational c
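The conversion step described above (expressing nudging increments as an equivalent radiative flux) can be written schematically as a column integral of the nudging heating rate. This is a hedged sketch of one plausible definition; the paper's exact construction of the surface and top-of-atmosphere PRF terms may differ:

```latex
\mathrm{PRF} \;\approx\; \int_{p_{\mathrm{top}}}^{p_{\mathrm{sfc}}} c_p \left(\frac{\partial T}{\partial t}\right)_{\!\mathrm{nudge}} \frac{dp}{g},
```

where (∂T/∂t)_nudge is the temperature tendency contributed by the nudging terms, c_p the specific heat of air, and g the gravitational acceleration.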
Hunt, Geoffrey; Moloney, Molly; Fazio, Adam
2012-01-01
Qualitative research is often conceptualized as inherently small-scale research, primarily conducted by a lone researcher enmeshed in extensive and long-term fieldwork or involving in-depth interviews with a small sample of 20 to 30 participants. In the study of illicit drugs, traditionally this has often been in the form of ethnographies of drug-using subcultures. Such small-scale projects have produced important interpretive scholarship that focuses on the culture and meaning of drug use in situated, embodied contexts. Larger-scale projects are often assumed to be solely the domain of quantitative researchers, using formalistic survey methods and descriptive or explanatory models. In this paper, however, we will discuss qualitative research done on a comparatively larger scale—with in-depth qualitative interviews with hundreds of young drug users. Although this work incorporates some quantitative elements into the design, data collection, and analysis, the qualitative dimension and approach has nevertheless remained central. Larger-scale qualitative research shares some of the challenges and promises of smaller-scale qualitative work including understanding drug consumption from an emic perspective, locating hard-to-reach populations, developing rapport with respondents, generating thick descriptions and a rich analysis, and examining the wider socio-cultural context as a central feature. However, there are additional challenges specific to the scale of qualitative research, which include data management, data overload and problems of handling large-scale data sets, time constraints in coding and analyzing data, and personnel issues including training, organizing and mentoring large research teams. Yet large samples can prove to be essential for enabling researchers to conduct comparative research, whether that be cross-national research within a wider European perspective undertaken by different teams or cross-cultural research looking at internal divisions and differences within diverse communities and cultures. PMID:22308079
The need to consider temporal variability when modelling exchange at the sediment-water interface
Rosenberry, Donald O.
2011-01-01
Most conceptual or numerical models of flows and processes at the sediment-water interface assume steady-state conditions and do not consider temporal variability. The steady-state assumption is required because temporal variability, if quantified at all, is usually determined on a seasonal or inter-annual scale. In order to design models that can incorporate finer-scale temporal resolution we first need to measure variability at a finer scale. Automated seepage meters that can measure flow across the sediment-water interface with temporal resolution of seconds to minutes were used in a variety of settings to characterize seepage response to rainfall, wind, and evapotranspiration. Results indicate that instantaneous seepage fluxes can be much larger than values commonly reported in the literature, although seepage does not always respond to hydrological processes. Additional study is needed to understand the reasons for the wide range and types of responses to these hydrologic and atmospheric events.
A Preliminary Model Study of the Large-Scale Seasonal Cycle in Bottom Pressure Over the Global Ocean
NASA Technical Reports Server (NTRS)
Ponte, Rui M.
1998-01-01
Output from the primitive equation model of Semtner and Chervin is used to examine the seasonal cycle in bottom pressure (Pb) over the global ocean. Effects of the volume-conserving formulation of the model on the calculation of Pb are considered. The estimated seasonal, large-scale Pb signals have amplitudes ranging from less than 1 cm over most of the deep ocean to several centimeters over shallow, boundary regions. Variability generally increases toward the western sides of the basins, and is also larger in some Southern Ocean regions. An oscillation between subtropical and higher latitudes in the North Pacific is clear. Comparison with barotropic simulations indicates that, on basin scales, seasonal Pb variability is related to barotropic dynamics and the seasonal cycle in Ekman pumping, and results from a small net residual in the mass divergence from the balance between Ekman and Sverdrup flows.
NASA Astrophysics Data System (ADS)
Gorokhovski, Mikhael; Zamansky, Rémi
2018-03-01
Consistent with observations from recent experiments and DNS, we focus on the effects of strong velocity increments at small spatial scales for the simulation of the drag force on particles in high Reynolds number flows. In this paper, we decompose the instantaneous particle acceleration into its systematic and residual parts. The first part is given by the steady-drag force obtained from the large-scale energy-containing motions, explicitly resolved by the simulation, while the second denotes the random contribution due to small unresolved turbulent scales. This is in contrast with standard drag models in which the turbulent microstructures advected by the large-scale eddies are deemed to be filtered by the particle inertia. In our paper, the residual term is introduced as the particle acceleration conditionally averaged on the instantaneous dissipation rate along the particle path. The latter is modeled from a log-normal stochastic process with locally defined parameters obtained from the resolved field. The residual term is supplemented by an orientation model which is given by a random walk on the unit sphere. We propose specific models for particles with diameters smaller and larger than the Kolmogorov scale. In the case of the small particles, the model is assessed by comparison with direct numerical simulation (DNS). Results showed that by introducing this modeling, the particle acceleration statistics from DNS are predicted fairly well, in contrast with the standard LES approach. For particles bigger than the Kolmogorov scale, we propose a fluctuating particle response time, based on an eddy viscosity estimated at the particle scale. This model gives stretched tails of the particle acceleration distribution and a dependence of its variance that is consistent with experiments.
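One time step of this kind of decomposition is sketched below. Only the structure follows the abstract (systematic drag plus a residual whose amplitude is tied to a log-normal dissipation rate along the path and whose direction follows a random walk on the unit sphere); the Ornstein-Uhlenbeck parameters, the Kolmogorov scaling of the residual magnitude, and all numerical values are illustrative assumptions rather than the authors' closures.

```python
import numpy as np

rng = np.random.default_rng(2)

def step_log_dissipation(chi, dt, t_corr, sigma2):
    """Ornstein-Uhlenbeck step for chi = ln(eps / <eps>), giving a
    log-normal dissipation rate along the particle path (sketch)."""
    mu = -0.5 * sigma2  # keeps the mean of eps equal to <eps>
    chi += (mu - chi) * dt / t_corr + np.sqrt(2 * sigma2 * dt / t_corr) * rng.standard_normal()
    return chi

def random_walk_on_sphere(e, dt, t_orient):
    """Small random rotation of the unit orientation vector e."""
    e = e + np.sqrt(dt / t_orient) * np.cross(e, rng.standard_normal(3))
    return e / np.linalg.norm(e)

# One step of the decomposed particle acceleration (all values hypothetical):
u_seen, v_p = np.array([1.0, 0.0, 0.0]), np.array([0.8, 0.1, 0.0])
tau_p, nu, eps_mean = 0.05, 1.5e-5, 0.1
dt, chi, e = 1e-3, 0.0, np.array([0.0, 0.0, 1.0])

a_systematic = (u_seen - v_p) / tau_p                 # resolved steady drag
chi = step_log_dissipation(chi, dt, t_corr=0.1, sigma2=1.0)
eps = eps_mean * np.exp(chi)
e = random_walk_on_sphere(e, dt, t_orient=0.1)
a_residual = (eps ** 0.75) * nu ** (-0.25) * e        # Kolmogorov-scaled magnitude (assumption)
print(a_systematic + a_residual)
```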
Theoretical considerations on maximum running speeds for large and small animals.
Fuentes, Mauricio A
2016-02-07
Mechanical equations for fast running speeds are presented and analyzed. One of the equations and its associated model predict that animals tend to experience larger mechanical stresses in their limbs (muscles, tendons and bones) as a result of larger stride lengths, suggesting a structural restriction entailing the existence of an absolute maximum possible stride length. The consequence for big animals is that an increasingly larger body mass implies decreasing maximal speeds, given that the stride frequency generally decreases for increasingly larger animals. Another restriction, acting on small animals, is discussed only in preliminary terms, but it seems safe to assume from previous studies that for a given range of body masses of small animals, those which are bigger are faster. The difference between speed scaling trends for large and small animals implies the existence of a range of intermediate body masses corresponding to the fastest animals. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Generalized Hybrid Multiscale Modeling Approach for Flow and Reactive Transport in Porous Media
NASA Astrophysics Data System (ADS)
Yang, X.; Meng, X.; Tang, Y. H.; Guo, Z.; Karniadakis, G. E.
2017-12-01
Using emerging understanding of biological and environmental processes at fundamental scales to advance predictions of the larger system behavior requires the development of multiscale approaches, and there is strong interest in coupling models at different scales together in a hybrid multiscale simulation framework. A limited number of hybrid multiscale simulation methods have been developed for subsurface applications, mostly using application-specific approaches for model coupling. The proposed generalized hybrid multiscale approach is designed with minimal intrusiveness to the at-scale simulators (pre-selected) and provides a set of lightweight C++ scripts to manage a complex multiscale workflow utilizing a concurrent coupling approach. The workflow includes at-scale simulators (using the lattice-Boltzmann method, LBM, at the pore and Darcy scale, respectively), scripts for boundary treatment (coupling and kriging), and a multiscale universal interface (MUI) for data exchange. The current study aims to apply the generalized hybrid multiscale modeling approach to couple pore- and Darcy-scale models for flow and mixing-controlled reaction with precipitation/dissolution in heterogeneous porous media. The model domain is packed heterogeneously, so that the mixing front geometry is more complex and not known a priori. To address those challenges, the generalized hybrid multiscale modeling approach is further developed to 1) adaptively define the locations of pore-scale subdomains, 2) provide a suite of physical boundary coupling schemes, and 3) consider the dynamic change of the pore structures due to mineral precipitation/dissolution. The results are validated and evaluated by comparison with single-scale simulations in terms of velocities, reactive concentrations, and computing cost.
Multi-Scale Modeling of a Graphite-Epoxy-Nanotube System
NASA Technical Reports Server (NTRS)
Frankland, S. J. V.; Riddick, J. C.; Gates, T. S.
2005-01-01
A multi-scale method is utilized to determine some of the constitutive properties of a three component graphite-epoxy-nanotube system. This system is of interest because carbon nanotubes have been proposed as stiffening and toughening agents in the interlaminar regions of carbon fiber/epoxy laminates. The multi-scale method uses molecular dynamics simulation and equivalent-continuum modeling to compute three of the elastic constants of the graphite-epoxy-nanotube system: C11, C22, and C33. The 1-direction is along the nanotube axis, and the graphene sheets lie in the 1-2 plane. It was found that the C11 is only 4% larger than the C22. The nanotube therefore does have a small, but positive effect on the constitutive properties in the interlaminar region.
2016-01-01
The problem of multi-scale modelling of damage development in a SiC ceramic fibre-reinforced SiC matrix ceramic composite tube is addressed, with the objective of demonstrating the ability of the finite-element microstructure meshfree (FEMME) model to introduce important aspects of the microstructure into a larger scale model of the component. These are particularly the location, orientation and geometry of significant porosity and the load-carrying capability and quasi-brittle failure behaviour of the fibre tows. The FEMME model uses finite-element and cellular automata layers, connected by a meshfree layer, to efficiently couple the damage in the microstructure with the strain field at the component level. Comparison is made with experimental observations of damage development in an axially loaded composite tube, studied by X-ray computed tomography and digital volume correlation. Recommendations are made for further development of the model to achieve greater fidelity to the microstructure. This article is part of the themed issue ‘Multiscale modelling of the structural integrity of composite materials’. PMID:27242308
NASA Astrophysics Data System (ADS)
Graven, H. D.; Gruber, N.
2011-12-01
The 14C-free fossil carbon added to atmospheric CO2 by combustion dilutes the atmospheric 14C/C ratio (Δ14C), potentially providing a means to verify fossil CO2 emissions calculated using economic inventories. However, sources of 14C from nuclear power generation and spent fuel reprocessing can counteract this dilution and may bias 14C/C-based estimates of fossil fuel-derived CO2 if these nuclear influences are not correctly accounted for. Previous studies have examined nuclear influences on local scales, but the potential for continental-scale influences on Δ14C has not yet been explored. We estimate annual 14C emissions from each nuclear site in the world and conduct an Eulerian transport modeling study to investigate the continental-scale, steady-state gradients of Δ14C caused by nuclear activities and fossil fuel combustion. Over large regions of Europe, North America and East Asia, nuclear enrichment may offset at least 20% of the fossil fuel dilution in Δ14C, corresponding to potential biases of more than -0.25 ppm in the CO2 attributed to fossil fuel emissions, larger than the bias from plant and soil respiration in some areas. Model grid cells including high 14C-release reactors or fuel reprocessing sites showed much larger nuclear enrichment, despite the coarse model resolution of 1.8°×1.8°. The recent growth of nuclear 14C emissions increased the potential nuclear bias over 1985-2005, suggesting that changing nuclear activities may complicate the use of Δ14C observations to identify trends in fossil fuel emissions. The magnitude of the potential nuclear bias is largely independent of the choice of reference station in the context of continental-scale Eulerian transport and inversion studies, but could potentially be reduced by an appropriate choice of reference station in the context of local-scale assessments.
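The bias mechanism described above can be made concrete with the standard Δ14C mass balance used to attribute a fossil-fuel CO2 component. The sketch below uses the common two-source form with Δ14C of fossil carbon set to -1000 permil; the observation, background, and enrichment values are invented for illustration only.

```python
def fossilfuel_co2(co2_obs, d14c_obs, d14c_bg):
    """Infer the fossil-fuel CO2 contribution from a Delta14C observation.

    Standard two-source mass balance with Delta14C_ff = -1000 permil
    (fossil carbon is 14C-free). Nuclear 14C emissions raise d14c_obs and
    therefore bias this estimate low.
    """
    d14c_ff = -1000.0
    return co2_obs * (d14c_obs - d14c_bg) / (d14c_ff - d14c_bg)

# Hypothetical numbers: a 10 permil fossil-fuel dilution, of which 2 permil
# is masked by nuclear enrichment.
true_ff   = fossilfuel_co2(co2_obs=400.0, d14c_obs=40.0, d14c_bg=50.0)
biased_ff = fossilfuel_co2(co2_obs=400.0, d14c_obs=42.0, d14c_bg=50.0)
print(f"without nuclear term: {true_ff:.2f} ppm, with 2 permil enrichment: {biased_ff:.2f} ppm")
```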
Postinflationary Higgs relaxation and the origin of matter-antimatter asymmetry.
Kusenko, Alexander; Pearce, Lauren; Yang, Louis
2015-02-13
The recent measurement of the Higgs boson mass implies a relatively slow rise of the standard model Higgs potential at large scales, and a possible second minimum at even larger scales. Consequently, the Higgs field may develop a large vacuum expectation value during inflation. The relaxation of the Higgs field from its large postinflationary value to the minimum of the effective potential represents an important stage in the evolution of the Universe. During this epoch, the time-dependent Higgs condensate can create an effective chemical potential for the lepton number, leading to a generation of the lepton asymmetry in the presence of some large right-handed Majorana neutrino masses. The electroweak sphalerons redistribute this asymmetry between leptons and baryons. This Higgs relaxation leptogenesis can explain the observed matter-antimatter asymmetry of the Universe even if the standard model is valid up to the scale of inflation, and any new physics is suppressed by that high scale.
NASA Astrophysics Data System (ADS)
Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.
2017-12-01
Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km2 in the US, Canada and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is under development. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.
Seinfeld, John H; Bretherton, Christopher; Carslaw, Kenneth S; Coe, Hugh; DeMott, Paul J; Dunlea, Edward J; Feingold, Graham; Ghan, Steven; Guenther, Alex B; Kahn, Ralph; Kraucunas, Ian; Kreidenweis, Sonia M; Molina, Mario J; Nenes, Athanasios; Penner, Joyce E; Prather, Kimberly A; Ramanathan, V; Ramaswamy, Venkatachalam; Rasch, Philip J; Ravishankara, A R; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert
2016-05-24
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
NASA Technical Reports Server (NTRS)
Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; Coe, Hugh; DeMott, Paul J.; Dunlea, Edward J.; Feingold, Graham; Ghan, Steven; Guenther, Alex B.; Kahn, Ralph;
2016-01-01
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; ...
2016-05-24
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth’s clouds is the most uncertain component of the overall global radiative forcing from pre-industrial time. General Circulation Models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. Lastly, we suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; Coe, Hugh; DeMott, Paul J.; Dunlea, Edward J.; Feingold, Graham; Ghan, Steven; Guenther, Alex B.; Kraucunas, Ian; Molina, Mario J.; Nenes, Athanasios; Penner, Joyce E.; Prather, Kimberly A.; Ramanathan, V.; Ramaswamy, Venkatachalam; Rasch, Philip J.; Ravishankara, A. R.; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert
2016-01-01
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth’s clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol−cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol−cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol−cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty. PMID:27222566
Design of scaled down structural models
NASA Technical Reports Server (NTRS)
Simitses, George J.
1994-01-01
In the aircraft industry, full scale and large component testing is a very necessary, time consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled down models in testing and use of the model test results in order to predict the behavior of the larger system, referred to herein as prototype. This viewgraph presentation provides justifications and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled down model and its prototype. Thus, scaled down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. Establishment of similarity conditions, based on the direct use of the governing equations, is discussed and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of vibrational response of the same rectangular plates. Extensions and future tasks are also described.
Design of scaled down structural models
NASA Astrophysics Data System (ADS)
Simitses, George J.
1994-07-01
In the aircraft industry, full scale and large component testing is a very necessary, time consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled down models in testing and use of the model test results in order to predict the behavior of the larger system, referred to herein as prototype. This viewgraph presentation provides justifications and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled down model and its prototype. Thus, scaled down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. Establishment of similarity conditions, based on the direct use of the governing equations, is discussed and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of vibrational response of the same rectangular plates. Extensions and future tasks are also described.
Effective model hierarchies for dynamic and static classical density functional theories
NASA Astrophysics Data System (ADS)
Majaniemi, S.; Provatas, N.; Nonomura, M.
2010-09-01
The origin and methodology of deriving effective model hierarchies are presented with applications to solidification of crystalline solids. In particular, it is discussed how the form of the equations of motion and the effective parameters on larger scales can be obtained from the more microscopic models. It will be shown that tying together the dynamic structure of the projection operator formalism with static classical density functional theories can lead to incomplete (mass) transport properties even though the linearized hydrodynamics on large scales is correctly reproduced. To facilitate a more natural way of binding together the dynamics of the macrovariables and classical density functional theory, a dynamic generalization of density functional theory based on the nonequilibrium generating functional is suggested.
Integrating social, economic, and ecological values across large landscapes
Jessica E. Halofsky; Megan K. Creutzburg; Miles A. Hemstrom
2014-01-01
The Integrated Landscape Assessment Project (ILAP) was a multiyear effort to produce information, maps, and models to help land managers, policymakers, and others conduct mid- to broad-scale (e.g., watersheds to states and larger areas) prioritization of land management actions, perform landscape assessments, and estimate cumulative effects of management actions for...
Spatial application of WEPS for estimating wind erosion in the Pacific Northwest
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on croplands and was originally designed to run field scale simulations. This research is an extension of the WEPS model to run on multiple fields (grids) covering a larger region. We modified the WEPS source code to allow it...
USDA-ARS?s Scientific Manuscript database
Background: Spray irrigation for land-applying livestock manure is increasing in the United States as farms become larger and economies of scale make manure irrigation affordable. However, human health risks from exposure to zoonotic pathogens aerosolized during manure irrigation are not well-unders...
Climate change, ecosystem impacts, and management for Pacific salmon
D.E. Schindler; X. Augerot; E. Fleishman; N.J. Mantua; B. Riddell; M. Ruckelshaus; J. Seeb; M. Webster
2008-01-01
As climate change intensifies, there is increasing interest in developing models that reduce uncertainties in projections of global climate and refine these projections to finer spatial scales. Forecasts of climate impacts on ecosystems are far more challenging and their uncertainties even larger because of a limited understanding of physical controls on biological...
Colin M. Beier; Trista M. Patterson; F. Stuart Chapin III
2008-01-01
Managed ecosystems experience vulnerabilities when ecological resilience declines and key flows of ecosystem services become depleted or lost. Drivers of vulnerability often include local management actions in conjunction with other external, larger scale factors. To translate these concepts to management applications, we developed a conceptual model of feedbacks...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grigoriev, Igor
The JGI Fungal Genomics Program aims to scale up sequencing and analysis of fungal genomes to explore the diversity of fungi important for energy and the environment, and to promote functional studies on a system level. Combining new sequencing technologies and comparative genomics tools, JGI is now leading the world in fungal genome sequencing and analysis. Over 120 sequenced fungal genomes with analytical tools are available via MycoCosm (www.jgi.doe.gov/fungi), a web-portal for fungal biologists. Our model of interacting with user communities, unique among other sequencing centers, helps organize these communities, improves genome annotation and analysis work, and facilitates new larger-scale genomic projects. This resulted in 20 high-profile papers published in 2011 alone and contributing to the Genomics Encyclopedia of Fungi, which targets fungi related to plant health (symbionts, pathogens, and biocontrol agents) and biorefinery processes (cellulose degradation, sugar fermentation, industrial hosts). Our next grand challenges include larger scale exploration of fungal diversity (1000 fungal genomes), developing molecular tools for DOE-relevant model organisms, and analysis of complex systems and metagenomes.
Impact of spectral nudging on the downscaling of tropical cyclones in regional climate simulations
NASA Astrophysics Data System (ADS)
Choi, Suk-Jin; Lee, Dong-Kyou
2016-06-01
This study investigated simulations of three months of seasonal tropical cyclone (TC) activity over the western North Pacific using the Advanced Research WRF Model. In the control experiment (CTL), the TC frequency was considerably overestimated. Additionally, the tracks of some TCs tended to have larger radii of curvature and were shifted eastward. The large-scale environments of westerly monsoon flows and subtropical Pacific highs were unrealistically simulated. The overestimated frequency of TC formation was attributed to a strengthened westerly wind field in the southern quadrants of the TC center. Comparison with the experiment using the spectral nudging method showed that the strengthened wind speed was mainly modulated by large-scale flow on scales greater than approximately 1000 km in the model domain. The spurious formation and undesirable tracks of TCs in the CTL were considerably improved by reproducing realistic large-scale atmospheric monsoon circulation, with substantial adjustment between the large-scale flow in the model domain and the large-scale boundary forcing modified by the spectral nudging method. The realistic monsoon circulation played a vital role in simulating realistic TCs. These results indicate that, when downscaling from large-scale fields for regional climate simulations, the scale interaction between model-generated regional features and the forcing large-scale fields should be considered, and spectral nudging is a suitable method for such downscaling.
Why Be a Shrub? A Basic Model and Hypotheses for the Adaptive Values of a Common Growth Form
Götmark, Frank; Götmark, Elin; Jensen, Anna M.
2016-01-01
Shrubs are multi-stemmed short woody plants, more widespread than trees, important in many ecosystems, neglected in ecology compared to herbs and trees, but currently in focus due to their global expansion. We present a novel model based on scaling relationships and four hypotheses to explain the adaptive significance of shrubs, including a review of the literature with a test of one hypothesis. Our model describes advantages for a small shrub compared to a small tree with the same above-ground woody volume, based on larger cross-sectional stem area, larger area of photosynthetic tissue in bark and stem, larger vascular cambium area, larger epidermis (bark) area, and larger area for sprouting, and faster production of twigs and canopy. These components form our Hypothesis 1 that predicts higher growth rate for a small shrub than a small tree. This prediction was supported by available relevant empirical studies (14 publications). Further, a shrub will produce seeds faster than a tree (Hypothesis 2), multiple stems in shrubs insure future survival and growth if one or more stems die (Hypothesis 3), and three structural traits of short shrub stems improve survival compared to tall tree stems (Hypothesis 4)—all hypotheses have some empirical support. Multi-stemmed trees may be distinguished from shrubs by more upright stems, reducing bending moment. Improved understanding of shrubs can clarify their recent expansion on savannas, grasslands, and alpine heaths. More experiments and other empirical studies, followed by more elaborate models, are needed to understand why the shrub growth form is successful in many habitats. PMID:27507981
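The geometric core of Hypothesis 1 can be illustrated with a toy calculation (a hedged sketch; the stem counts, heights, woody volume and the cylindrical-stem simplification below are illustrative assumptions, not values from the study):

import numpy as np

def stem_geometry(total_volume, n_stems, height):
    """Cross-sectional and lateral (bark) area for n equal cylindrical stems
    that together hold a fixed above-ground woody volume."""
    r = np.sqrt(total_volume / (n_stems * np.pi * height))
    cross_section = n_stems * np.pi * r**2          # equals total_volume / height
    bark_area = n_stems * 2.0 * np.pi * r * height
    return cross_section, bark_area

V = 0.05  # m^3 of above-ground woody volume (illustrative)
tree_cs,  tree_bark  = stem_geometry(V, n_stems=1, height=8.0)   # one tall stem
shrub_cs, shrub_bark = stem_geometry(V, n_stems=6, height=2.0)   # short, multi-stemmed

print(f"tree : cross-section {tree_cs*1e4:.0f} cm^2, bark {tree_bark:.2f} m^2")
print(f"shrub: cross-section {shrub_cs*1e4:.0f} cm^2, bark {shrub_bark:.2f} m^2")
# The shorter, multi-stemmed form has larger cross-sectional, cambium and bark
# areas for the same woody volume, which is the geometric basis of Hypothesis 1.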
NASA Astrophysics Data System (ADS)
Sandvig Mariegaard, Jesper; Huiban, Méven Robin; Tornfeldt Sørensen, Jacob; Andersson, Henrik
2017-04-01
Determining the optimal domain size and associated position of open boundaries in local high-resolution downscaling ocean models is often difficult. As an important input data set for downscaling ocean modelling, the European Copernicus Marine Environment Monitoring Service (CMEMS) provides baroclinic initial and boundary conditions for local ocean models. Tidal dynamics are often neglected in large-scale CMEMS services, but tides are generally crucial for coastal ocean dynamics. To address this need, tides can be superposed via Flather (1976) boundary conditions and the combined flow downscaled on an unstructured mesh. The surge component is likewise only partially represented in selected CMEMS products; it must either be modelled inside the domain or, if the domain becomes too small for the downscaling model to capture the effect, be modelled independently and superposed. The tide and surge components can generally be improved by assimilating water levels from tide gauges and altimetry. An intrinsic part of the problem is to find the limitations of local-scale data assimilation and the requirement for consistency between the larger-scale ocean models and the local-scale assimilation methodologies. This contribution investigates the impact of domain size and associated positions of open boundaries with and without data assimilation of water level. We have used the baroclinic ocean model MIKE 3 FM and its newly re-factored built-in data assimilation package. We consider boundary conditions of salinity, temperature, water level and depth-varying currents from the global CMEMS 1/4-degree resolution model for 2011, when in situ ADCP velocity data are available for validation. We apply data assimilation of in situ tide gauge water levels and along-track altimetry surface elevation data from selected satellites. The MIKE 3 FM data assimilation module, which uses the ensemble Kalman filter, has recently been parallelized with MPI, allowing much larger applications to run on HPC systems. The success of the downscaling is to a large degree determined by the ability to realistically describe and dynamically model the errors on the open boundaries. Three different sizes of downscaling model domains in the northern North Sea have been examined, and two different strategies for modelling the uncertainties on the open Flather boundaries are investigated. The combined downscaling and local data assimilation skill is assessed, and the impact on recommended domain size is compared to pure downscaling.
Counterintuitive effects of substrate roughness on PDCs
NASA Astrophysics Data System (ADS)
Andrews, B. J.; Manga, M.
2012-12-01
We model dilute pyroclastic density currents (PDCs) using scaled, warm, particle-laden density currents in a 6 m long, 0.6 m wide, 1.8 m tall air-filled tank. In this set of experiments, we run currents over substrates with characteristic roughness scales, hr, ranging over ~3 orders of magnitude: smooth, 250 μm sandpaper, and 0.1-, 1-, 2-, 5-, and 10-cm hemispheres. As substrate roughness increases, runout distance increases until a critical roughness height, hrc, is reached; further increases in roughness height decrease runout. The critical roughness height appears to be 0.25-0.5 htb, the thickness of the turbulent lower layer of the density currents. The dependence of runout on hr is most likely the result of increases in substrate roughness decreasing the average current velocity and converting that energy into increased turbulence intensity. Small values of hr thus result in increased runout because sedimentation is inhibited by the increased turbulence intensity. At larger values of hr, current behavior is controlled by much larger decreases in average current velocity, even though sedimentation decreases. Scaling our experiments up to the size of real volcanic eruptions suggests that landscapes must have characteristic roughness hr>10 m to reduce the runout of natural PDCs; smaller roughness scales can increase runout. Comparison of relevant bulk parameters (Reynolds number, densimetric and thermal Richardson numbers, excess buoyant thermal energy density) and turbulent parameters (Stokes and settling numbers) between our experiments and natural dilute PDCs indicates that we are accurately modeling at least the large-scale behaviors and dynamics of dilute PDCs.
Downscaling modelling system for multi-scale air quality forecasting
NASA Astrophysics Data System (ADS)
Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.
2010-09-01
Urban modelling for real meteorological situations, in general, considers only a small part of the urban area in a micro-meteorological model, and urban heterogeneities outside the modelling domain affect micro-scale processes. Therefore, it is important to build a chain of models of different scales, with nesting of higher-resolution models into larger-scale, lower-resolution models. Usually, the up-scaled city- or meso-scale models consider parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider a detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. First, it is the Numerical Weather Prediction (HIgh Resolution Limited Area Model) model combined with the Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered. They are chosen depending on the selected scales and resolutions. For the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, on a building effects parameterisation. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. a k-ε linear eddy-viscosity model, a k-ε non-linear eddy-viscosity model and a Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with a corresponding mass-conserving interpolation. For the boundaries, a Dirichlet-type condition is chosen to provide the values based on interpolation from the coarse to the fine grid. When the roughness approach is changed to the obstacle-resolved one in the nested model, the interpolation procedure will increase the computational time (due to additional iterations) for meteorological/chemical fields inside the urban sub-layer. In such situations, as a possible alternative, the perturbation approach can be applied. Here, the fields of the main meteorological variables and chemical species are considered as a sum of two components: background (large-scale) values, described by the coarse-resolution model, and perturbation (micro-scale) features, obtained from the nested fine-resolution model.
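The perturbation approach mentioned at the end can be sketched as a simple field decomposition (a minimal illustration; the grids, variable names and the synthetic perturbation are assumptions, not the actual model code):

import numpy as np

# Coarse (large-scale) field on a 10 km grid and a nested fine (micro-scale) grid.
x_coarse = np.linspace(0.0, 50e3, 6)            # m
x_fine   = np.linspace(0.0, 50e3, 251)

u_coarse = 5.0 + 0.5 * np.sin(2 * np.pi * x_coarse / 50e3)   # background wind, m/s

# Background component: interpolate the coarse solution onto the fine grid
# (a Dirichlet-type condition would use the same interpolated values on the
#  open boundaries of the nested domain).
u_background = np.interp(x_fine, x_coarse, u_coarse)

# Perturbation component: micro-scale departures produced by the nested,
# obstacle-resolving model (here just a synthetic stand-in).
rng = np.random.default_rng(0)
u_perturbation = 0.3 * rng.standard_normal(x_fine.size)

# Total micro-scale field = large-scale background + micro-scale perturbation.
u_total = u_background + u_perturbation
print(u_total[:5])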
NASA Astrophysics Data System (ADS)
Korres, W.; Reichenau, T. G.; Schneider, K.
2013-08-01
Soil moisture is a key variable in hydrology, meteorology and agriculture. Soil moisture, and surface soil moisture in particular, is highly variable in space and time. Its spatial and temporal patterns in agricultural landscapes are affected by multiple natural (precipitation, soil, topography, etc.) and agro-economic (soil management, fertilization, etc.) factors, making it difficult to identify unequivocal cause and effect relationships between soil moisture and its driving variables. The goal of this study is to characterize and analyze the spatial and temporal patterns of surface soil moisture (top 20 cm) in an intensively used agricultural landscape (1100 km2 northern part of the Rur catchment, Western Germany) and to determine the dominant factors and underlying processes controlling these patterns. A second goal is to analyze the scaling behavior of surface soil moisture patterns in order to investigate how spatial scale affects spatial patterns. To achieve these goals, a dynamically coupled, process-based and spatially distributed ecohydrological model was used to analyze the key processes as well as their interactions and feedbacks. The model was validated for two growing seasons for the three main crops in the investigation area: Winter wheat, sugar beet, and maize. This yielded RMSE values for surface soil moisture between 1.8 and 7.8 vol.% and average RMSE values for all three crops of 0.27 kg m-2 for total aboveground biomass and 0.93 for green LAI. Large deviations of measured and modeled soil moisture can be explained by a change of the infiltration properties towards the end of the growing season, especially in maize fields. The validated model was used to generate daily surface soil moisture maps, serving as a basis for an autocorrelation analysis of spatial patterns and scale. Outside of the growing season, surface soil moisture patterns at all spatial scales depend mainly upon soil properties. Within the main growing season, larger scale patterns that are induced by soil properties are superimposed by the small scale land use pattern and the resulting small scale variability of evapotranspiration. However, this influence decreases at larger spatial scales. Most precipitation events cause temporarily higher surface soil moisture autocorrelation lengths at all spatial scales for a short time even beyond the autocorrelation lengths induced by soil properties. The relation of daily spatial variance to the spatial scale of the analysis fits a power law scaling function, with negative values of the scaling exponent, indicating a decrease in spatial variability with increasing spatial resolution. High evapotranspiration rates cause an increase in the small scale soil moisture variability, thus leading to large negative values of the scaling exponent. Utilizing a multiple regression analysis, we found that 53% of the variance of the scaling exponent can be explained by a combination of an independent LAI parameter and the antecedent precipitation.
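The power-law scaling of spatial variance with the scale of analysis can be estimated by a straight-line fit in log-log space (a sketch with synthetic numbers; the variances below are illustrative, not values from the study):

import numpy as np

# Spatial scale of the analysis (m) and the corresponding spatial variance of
# surface soil moisture (vol.%^2) -- synthetic example values.
scale    = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
variance = np.array([9.0, 6.5, 4.6, 3.3, 2.4])

# Fit variance = a * scale^b  <=>  log(variance) = log(a) + b * log(scale)
b, log_a = np.polyfit(np.log(scale), np.log(variance), 1)
print(f"scaling exponent b = {b:.2f}")   # b < 0, the power-law behaviour reported above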
When and at what scale will trends in European mean and heavy precipitation emerge?
NASA Astrophysics Data System (ADS)
Maraun, Douglas
2013-04-01
A multi-model ensemble of regional climate projections for Europe is employed to investigate how the time of emergence (TOE) for seasonal sums and maxima of daily precipitation depends on spatial scale. The TOE is redefined for emergence from internal variability only; the spread of the TOE due to imperfect climate model formulation is used as a measure of uncertainty in the TOE itself. Thereby the TOE becomes a fundamentally limiting time scale and translates into a minimum spatial scale on which robust conclusions can be drawn about precipitation trends. This also gives minimum temporal and spatial scales for adaptation planning. In northern Europe, positive winter trends in mean and heavy precipitation emerge already within the next decades, as do summer trends in mean precipitation in southwestern and southeastern Europe. Yet across wide areas, especially for heavy summer precipitation, the local trend emerges only late in the 21st century or later. For precipitation averaged to larger scales, the trend in general emerges earlier. Douglas Maraun, When and at what scale will trends in European mean and heavy precipitation emerge? Env. Res. Lett., in press, 2013.
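The time-of-emergence idea can be reduced to a signal-to-noise criterion (a minimal sketch; the trend and variability numbers are illustrative, not values from the ensemble):

import numpy as np

def time_of_emergence(years, trend_per_year, sigma_internal, threshold=2.0):
    """First year in which the forced trend signal exceeds `threshold` times
    the standard deviation of internal variability."""
    signal = trend_per_year * (years - years[0])
    emerged = np.abs(signal) > threshold * sigma_internal
    return int(years[emerged][0]) if emerged.any() else None

years = np.arange(2000, 2101)
# Local scale: +1.5 % per decade trend, 3 % internal variability (illustrative).
print(time_of_emergence(years, trend_per_year=0.15, sigma_internal=3.0))   # ~2041
# Averaging to a larger spatial scale reduces internal variability, so the
# same trend emerges decades earlier.
print(time_of_emergence(years, trend_per_year=0.15, sigma_internal=1.5))   # ~2021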
Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method
NASA Astrophysics Data System (ADS)
Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.
2017-10-01
The paper presents a model for simulating the mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, which is a novel particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random unique positions in space. As a result, we get the average values of Young’s modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to get the properties from the lowest scale up to the macroscale step by step. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behaviour of the model sample at the macroscale.
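The hierarchical upscaling step can be sketched schematically (a hedged illustration only; the Weibull parameters, sample counts, the moment-based shape estimate and the use of the level-1 mean as the level-2 scale are all simplifying assumptions, not the authors' procedure):

import numpy as np

rng = np.random.default_rng(42)

def level_statistics(shape, scale_gpa, n_samples=50):
    """Simulate n_samples representative volumes at one scale level and return
    the average property plus a refitted Weibull shape parameter."""
    samples = scale_gpa * rng.weibull(shape, n_samples)
    mean = samples.mean()
    cv = samples.std() / mean
    shape_est = cv ** -1.086       # common moment approximation k ~ CV^-1.086
    return mean, shape_est

# Level 1: smallest pores resolved explicitly.
E1_mean, k1 = level_statistics(shape=8.0, scale_gpa=150.0)
# Level 2: only the larger pores resolved; the smaller pores enter through the
# effective property statistics obtained at level 1.
E2_mean, k2 = level_statistics(shape=k1, scale_gpa=E1_mean)
print(E1_mean, k1, E2_mean, k2)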
Landscape associations of birds during migratory stopover
NASA Astrophysics Data System (ADS)
Diehl, Robert Howard
The challenge for migratory bird conservation is habitat preservation that sustains breeding, migration, and non-breeding biological processes. In choosing an appropriately scaled conservation arena for habitat preservation, a conservative and thorough examination of stopover habitat use patterns by migrants works back from the larger scales at which such relationships may occur. Because the use of stopover habitats by migrating birds occurs at spatial scales larger than traditional field techniques can easily accommodate, I quantify these relationships using the United States system of weather surveillance radars (popularly known as NEXRAD). To provide perspective on use of this system for biologists, I first describe the technical challenges as well as some of the biological potential of these radars for ornithological research. Using data from these radars, I then examined the influence of Lake Michigan and the distribution of woodland habitat on migrant concentrations in northeastern Illinois habitats during stopover. Lake Michigan exerted less influence on migrant abundance and density than the distribution and availability of habitat for stopover. There was evidence of post-migratory movement resulting in habitats within suburban landscapes experiencing higher migrant abundance but lower migrant density than habitats within nearby urban and agricultural landscapes. Finally, in the context of hierarchy theory, I examined the influence of landscape ecological and behavioral processes on bird density during migratory stopover. Migrant abundance did not vary across landscapes that differed considerably in the amount of habitat available for stopover. As a result, smaller, more isolated patches held higher densities of birds. Spatial models of migrant habitat selection based on migrant proximity to a patch explained nearly as much variance in the number of migrants occupying patches (R2 = 0.88) as selection models based on migrant interception of patches during flight (R2 = 0.90). Because migrant densities in specific patches were the consequence of biological processes operating at larger spatial scales, sound conservation strategies for migrating landbirds should consider the landscape context of stopover habitats that are potential targets for preservation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Font-Ribera, Andreu; Miralda-Escudé, Jordi; Arnau, Eduard
2012-11-01
We present the first measurement of the large-scale cross-correlation of Lyα forest absorption and Damped Lyman α systems (DLA), using the 9th Data Release of the Baryon Oscillation Spectroscopic Survey (BOSS). The cross-correlation is clearly detected on scales up to 40 h^-1 Mpc and is well fitted by the linear theory prediction of the standard Cold Dark Matter model of structure formation with the expected redshift distortions, confirming its origin in the gravitational evolution of structure. The amplitude of the DLA-Lyα cross-correlation depends on only one free parameter, the bias factor of the DLA systems, once the Lyα forest bias factors are known from independent Lyα forest correlation measurements. We measure the DLA bias factor to be b_D = (2.17±0.20)β_F^0.22, where the Lyα forest redshift distortion parameter β_F is expected to be above unity. This bias factor implies a typical host halo mass for DLAs that is much larger than expected in present DLA models, and is reproduced if the DLA cross section scales with halo mass as M_h^α, with α = 1.1±0.1 for β_F = 1. Matching the observed DLA bias factor and rate of incidence requires that atomic gas remains extended in massive halos over larger areas than predicted in present simulations of galaxy formation, with typical DLA proper sizes larger than 20 kpc in host halos of masses ∼ 10^12 M_☉. We infer that typical galaxies at z ≅ 2 to 3 are surrounded by systems of atomic clouds that are much more extended than the luminous parts of galaxies and contain ∼ 10% of the baryons in the host halo.
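The published bias relation can be evaluated directly for plausible values of the redshift-distortion parameter (a small numeric check; the β_F values tried below are illustrative assumptions):

for beta_F in (1.0, 1.2, 1.5):
    b_D = 2.17 * beta_F ** 0.22     # measured relation b_D = 2.17 * beta_F^0.22
    print(f"beta_F = {beta_F:.1f}  ->  b_D ~ {b_D:.2f}")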
NASA Astrophysics Data System (ADS)
Moon, C.; Mitchell, S. A.; Callor, N.; Dewers, T. A.; Heath, J. E.; Yoon, H.; Conner, G. R.
2017-12-01
Traditional subsurface continuum multiphysics models include useful yet limiting geometrical assumptions: penny- or disc-shaped cracks, spherical or elliptical pores, bundles of capillary tubes, cubic-law fracture permeability, etc. Each physics (flow, transport, mechanics) uses constitutive models with an increasing number of fit parameters that pertain to the microporous structure of the rock but bear no inter-physics relationships or self-consistency. Recent advances in digital rock physics and pore-scale modeling link complex physics to detailed pore-level geometries, but measures for upscaling are somewhat unsatisfactory and come at a high computational cost. Continuum mechanics relies on a separation between small-scale pore fluctuations and larger-scale heterogeneity (and perhaps anisotropy), but this can break down (particularly for shales). Algebraic topology offers powerful mathematical tools for describing the local-to-global structure of shapes. Persistent homology, in particular, analyzes the dynamics of topological features and summarizes them into numeric values. It offers a roadmap both to "fingerprint" topologies of pore structure and multiscale connectedness and to link pore structure to physical behavior, thus potentially providing a means to relate the dependence of constitutive behaviors on pore structure in a self-consistent way. We present a persistent homology (PH) analysis framework for 3D image sets, including a focused ion beam-scanning electron microscopy data set of the Selma Chalk. We extract structural characteristics of sampling volumes via persistent homology and fit a statistical model using the summarized values to estimate porosity, permeability, and connectivity; lattice Boltzmann methods for single-phase flow modeling are used to obtain the relationships. These PH methods allow for prediction of geophysical properties based on the geometry and connectivity in a computationally efficient way. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
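As a much simpler stand-in for the full persistent-homology pipeline, basic porosity and pore-connectivity summaries can already be extracted from a segmented 3D image (a sketch on a synthetic binary volume; the array, fill fraction and connectivity choice are assumptions, and a real PH analysis would track topological features across a filtration rather than at a single threshold):

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
# Synthetic segmented volume: True = pore voxel, False = solid.
pores = rng.random((64, 64, 64)) < 0.15

porosity = pores.mean()

# 26-connectivity labelling of the pore space.
labels, n_components = ndimage.label(pores, structure=np.ones((3, 3, 3)))
sizes = np.bincount(labels.ravel())[1:]          # voxels per pore cluster

print(f"porosity             : {porosity:.3f}")
print(f"pore clusters        : {n_components}")
print(f"largest cluster frac : {sizes.max() / pores.sum():.3f}")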
Horizontal and vertical integration of physicians: a tale of two tails.
Burns, Lawton Robert; Goldsmith, Jeff C; Sen, Aditi
2013-01-01
Researchers recommend a reorganization of the medical profession into larger groups with a multispecialty mix. We analyze whether there is evidence for the superiority of these models and whether this organizational transformation is underway. We summarize the evidence on scale and scope economies in physician group practice, and then review the trends in physician group size and specialty mix to conduct survivorship tests of the most efficient models. The distribution of physician groups exhibits two interesting tails. In the lower tail, a large percentage of physicians continue to practice in small, physician-owned practices. In the upper tail, there is a small but rapidly growing percentage of large groups that have been organized primarily by non-physician owners. While our analysis includes no original data, it does collate all known surveys of physician practice characteristics and group practice formation to provide a consistent picture of physician organization. Our review suggests that scale and scope economies in physician practice are limited. This may explain why most physicians have retained their small practices. Larger, multispecialty groups have been primarily organized by non-physician owners in vertically integrated arrangements. There is little evidence supporting the efficiencies of such models and some concern that they may pose anticompetitive threats. This is the first comprehensive review of the scale and scope economies of physician practice in nearly two decades. The research results do not appear to have changed much; nor has much changed in physician practice organization.
Material point method modeling in oil and gas reservoirs
Vanderheyden, William Brian; Zhang, Duan
2016-06-28
A computer system and method for simulating the behavior of an oil and gas reservoir, including changes in the margins of frangible solids. A system of equations, including state equations such as momentum and conservation laws such as mass conservation and volume fraction continuity, is defined and discretized for at least two phases in a modeled volume, one of which corresponds to frangible material. A material point model technique is used to numerically solve the system of discretized equations, deriving the fluid flow at each of a plurality of mesh nodes in the modeled volume and the velocity at each of a plurality of particles representing the frangible material in the modeled volume. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger volume-scale simulations.
Determination of real-time predictors of the wind turbine wake meandering
NASA Astrophysics Data System (ADS)
Muller, Yann-Aël; Aubrun, Sandrine; Masson, Christian
2015-03-01
The present work proposes an experimental methodology to characterize the unsteady properties of a wind turbine wake, called meandering, and particularly its ability to follow the large-scale motions induced by large turbulent eddies contained in the approach flow. The measurements were made in an atmospheric boundary layer wind tunnel. The wind turbine model is based on the actuator disc concept. One part of the work has been dedicated to the development of a methodology for horizontal wake tracking by means of a transverse hot-wire rake, whose dynamic response is adequate for spectral analysis. Spectral coherence analysis shows that the horizontal position of the wake correlates well with the upstream transverse velocity, especially for wavelengths larger than three times the diameter of the disc, but less so for smaller scales. Therefore, it is concluded that the wake is actually a rather passive tracer of the large surrounding turbulent structures. The influence of the rotor size and downstream distance on the wake meandering is studied. The fluctuations of the lateral force and the yawing torque affecting the wind turbine model are also measured and correlated with the wake meandering. Two approach flow configurations are then tested: an undisturbed incoming flow (modelled atmospheric boundary layer) and a disturbed incoming flow, with a wind turbine model located upstream. Results showed that the meandering process is amplified by the presence of the upstream wake. It is shown that the coherence between the lateral force fluctuations and the horizontal wake position is significant up to length scales larger than twice the wind turbine model diameter. This leads to the conclusion that the lateral force is a better candidate than the upstream transverse velocity to predict in real time the meandering process, for either undisturbed (wake-free) or disturbed incoming atmospheric flows.
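The spectral-coherence diagnostic relating the upstream transverse velocity to the wake position can be reproduced on synthetic signals (a sketch; the sampling rate, cutoff frequency and signal model are assumptions, not the experimental values):

import numpy as np
from scipy import signal

fs = 1000.0                                    # Hz, assumed hot-wire sampling rate
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(0)
lowpass = signal.butter(4, 2.0, fs=fs, output="sos")

# Upstream transverse velocity: broadband turbulence (synthetic stand-in).
v_upstream = rng.standard_normal(t.size)

# Wake position responds only to the low-frequency (large-eddy) part of v,
# mimicking the wake acting as a passive tracer of the large scales.
wake_low = signal.sosfiltfilt(lowpass, v_upstream)
wake_pos = wake_low / wake_low.std() + 0.2 * rng.standard_normal(t.size)

f, coh = signal.coherence(v_upstream, wake_pos, fs=fs, nperseg=8192)
print(f"coherence below 2 Hz : {coh[(f > 0) & (f < 2.0)].mean():.2f}")   # high
print(f"coherence above 20 Hz: {coh[f > 20.0].mean():.2f}")              # low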
[Reliability and validity of depression scales of Chinese version: a systematic review].
Sun, X Y; Li, Y X; Yu, C Q; Li, L M
2017-01-10
Objective: To systematically review the reliability and validity of Chinese-version depression scales in adults in China and to evaluate the psychometric properties of depression scales for different groups. Methods: Eligible studies published before 6 May 2016 were retrieved from the following databases: CNKI, Wanfang, PubMed and Embase. The HSROC model for diagnostic test accuracy (DTA) meta-analysis was used to calculate the pooled sensitivity and specificity of the PHQ-9. Results: A total of 44 papers evaluating the performance of depression scales were included. Results showed that the reliability and validity of the common depression scales were acceptable, including the Beck Depression Inventory (BDI), the Hamilton Depression Scale (HAMD), the Center for Epidemiologic Studies Depression Scale (CES-D), the Patient Health Questionnaire (PHQ) and the Geriatric Depression Scale (GDS). The Cronbach's coefficients of most tools were larger than 0.8, while the test-retest reliability and split-half reliability were larger than 0.7, indicating good internal consistency and stability. The criterion validity, convergent validity, discriminant validity and screening validity were acceptable, though different cut-off points were recommended by different studies. The pooled sensitivity of the 11 studies evaluating the PHQ-9 was 0.88 (95% CI: 0.85-0.91), while the pooled specificity was 0.89 (95% CI: 0.82-0.94), which demonstrated the applicability of the PHQ-9 in screening for depression. Conclusion: The reliability and validity of different Chinese-version depression scales are acceptable. The characteristics of different tools and of the study population should be taken into consideration when choosing a specific scale.
Pullan, S P; Whelan, M J; Rettino, J; Filby, K; Eyre, S; Holman, I P
2016-09-01
This paper describes the development and application of IMPT (Integrated Model for Pesticide Transport), a parameter-efficient tool for predicting diffuse-source pesticide concentrations in surface waters used for drinking water supply. The model was applied to a small UK headwater catchment with high-frequency (8 h) pesticide monitoring data and to five larger catchments (479-1653 km2) with sampling approximately every 14 days. Model performance was good for predictions of both flow (Nash-Sutcliffe efficiency generally >0.59 and PBIAS <10%) and pesticide concentrations, although the low sampling frequency in the larger catchments is likely to mask the true episodic nature of exposure. The computational efficiency of the model, along with the fact that most of its parameters can be derived from existing national soil property data, means that it can be used to rapidly predict pesticide exposure in multiple surface water resources to support operational and strategic risk assessments. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
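The two flow performance scores quoted above can be written in a few lines (a sketch following the usual Nash-Sutcliffe and percent-bias definitions; note that the sign convention for PBIAS varies between studies, and the flows below are illustrative):

import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias of simulated relative to observed totals."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

obs = [1.2, 0.9, 2.4, 3.1, 1.8]      # e.g. daily flows, m3/s (illustrative)
sim = [1.1, 1.0, 2.1, 3.4, 1.7]
print(f"NSE = {nse(obs, sim):.2f}, PBIAS = {pbias(obs, sim):.1f}%")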
Controlling sign problems in spin models using tensor renormalization
NASA Astrophysics Data System (ADS)
Denbleyker, Alan; Liu, Yuzhi; Meurice, Y.; Qin, M. P.; Xiang, T.; Xie, Z. Y.; Yu, J. F.; Zou, Haiyuan
2014-01-01
We consider the sign problem for classical spin models at complex β = 1/g_0^2 on L×L lattices. We show that the tensor renormalization group method allows reliable calculations for larger Im β than the reweighting Monte Carlo method. For the Ising model with complex β we compare our results with the exact Onsager-Kaufman solution at finite volume. The Fisher zeros can be determined precisely with the tensor renormalization group method. We check the convergence of the tensor renormalization group method for the O(2) model on L×L lattices when the number of states Ds increases. We show that the finite-size scaling of the calculated Fisher zeros agrees very well with the Kosterlitz-Thouless transition assumption and predict the locations for larger volume. The location of these zeros agrees with the reweighting Monte Carlo calculation for small volume. The application of the method to the O(2) model with a chemical potential is briefly discussed.
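The limitation of reweighting that motivates the tensor approach can be seen in a tiny numeric experiment: the reweighting factor is an average of oscillating phases whose magnitude shrinks below the Monte Carlo noise as Im β grows (a sketch with a toy Gaussian energy distribution; this is not the TRG algorithm itself):

import numpy as np

rng = np.random.default_rng(3)
E = rng.normal(loc=0.0, scale=200.0, size=100_000)   # toy action/energy samples

for im_beta in (0.001, 0.01, 0.05):
    phases = np.exp(-1j * im_beta * E)               # reweighting phase factor
    mean = phases.mean()
    err = phases.std() / np.sqrt(E.size)
    print(f"Im(beta)={im_beta:<6} |<phase>|={abs(mean):.3e}  noise~{err:.1e}")
# As Im(beta) grows the true phase average collapses toward zero while the
# statistical noise stays roughly fixed, so the signal is lost -- the
# phase/sign problem that the tensor renormalization group calculation avoids.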
Meta-analysis on Macropore Flow Velocity in Soils
NASA Astrophysics Data System (ADS)
Liu, D.; Gao, M.; Li, H. Y.; Chen, X.; Leung, L. R.
2017-12-01
Macropore flow is ubiquitous in soils and an important hydrologic process that is not well explained using traditional hydrologic theories. Macropore flow velocity (MFV) is an important parameter used to describe macropore flow and quantify its effects on runoff generation and solute transport. However, the dominant factors controlling MFV are still poorly understood and the typical ranges of MFV measured in the field are not clearly defined. To address these issues, we conducted a meta-analysis based on a database created from 246 experiments on MFV collected from 76 journal articles. For a fair comparison, a conceptually unified definition of MFV is introduced to convert the MFV measured with different approaches and at various scales, including soil core, field, trench or hillslope scales. The potential controlling factors of MFV considered include scale, travel distance, hydrologic conditions, site factors, macropore morphologies, soil texture, and land use. The results show that MFV is about 2-3 orders of magnitude larger than the corresponding values of saturated hydraulic conductivity. MFV is much larger at the trench and hillslope scales than at the field profile and soil core scales and shows a significant positive correlation with travel distance. Generally, higher irrigation intensity tends to trigger faster MFV, especially at the field profile scale, where MFV and irrigation intensity have a significant positive correlation. At the trench and hillslope scales, the presence of large macropores (diameter > 10 mm) is a key factor determining MFV. The geometric mean of MFV for sites with large macropores was found to be about 8 times larger than that for sites without large macropores. For sites with large macropores, MFV increases with macropore diameter. However, no noticeable difference in MFV has been observed among different soil textures and land uses. Comparing the existing equations used to describe MFV, the Poiseuille equation significantly overestimated the observed values, while the Manning-type equations generated reasonable values. The insights from this study will shed light on future field campaigns and modeling of macropore flow.
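The contrast between the two velocity equations can be made concrete for a single idealized macropore (a sketch; the pore diameter, hydraulic gradient, roughness coefficient and fluid properties are illustrative assumptions):

import numpy as np

d   = 0.005        # macropore diameter, m (5 mm, illustrative)
i   = 1.0          # hydraulic gradient (unit gradient assumed)
rho = 1000.0       # water density, kg/m3
mu  = 1.0e-3       # dynamic viscosity, Pa s
g   = 9.81         # m/s2
n   = 0.05         # Manning roughness coefficient (assumed)

# Hagen-Poiseuille mean velocity for laminar flow in a full circular tube.
v_poiseuille = rho * g * i * d**2 / (32.0 * mu)

# Manning-type estimate with hydraulic radius R = d/4 for a full circular pipe.
R = d / 4.0
v_manning = (1.0 / n) * R ** (2.0 / 3.0) * i ** 0.5

print(f"Poiseuille: {v_poiseuille:.2f} m/s   Manning: {v_manning:.3f} m/s")
# The laminar Poiseuille estimate is far larger, consistent with the finding
# that it overestimates observed macropore flow velocities.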
Modeling dynamics of western juniper under climate change in a semiarid ecosystem
NASA Astrophysics Data System (ADS)
Shrestha, R.; Glenn, N. F.; Flores, A. N.
2013-12-01
Modeling future vegetation dynamics in response to climate change and disturbances such as fire relies heavily on model parameterization. Fine-scale field-based measurements can provide the necessary parameters for constraining models at a larger scale, but the time- and labor-intensive nature of field-based data collection leads to sparse sampling and significant spatial uncertainties in retrieved parameters. In this study we quantify the fine-scale carbon dynamics and uncertainty of juniper woodland in the Reynolds Creek Experimental Watershed (RCEW) in southern Idaho, which is a proposed critical zone observatory (CZO) site for soil carbon processes. We leverage field-measured vegetation data along with airborne lidar and time-series Landsat imagery to initialize a state-and-transition model (VDDT) and a process-based fire model (FlamMap) to examine the vegetation dynamics in response to stochastic fire events and climate change. We utilize recently developed and novel techniques to measure biomass and canopy characteristics of western juniper at the individual tree scale using terrestrial and airborne laser scanning techniques in RCEW. These fine-scale data are upscaled across the watershed for the VDDT and FlamMap models. The results will immediately improve our understanding of fine-scale dynamics and carbon stocks and fluxes of woody vegetation in a semi-arid ecosystem. Moreover, quantification of uncertainty will also provide a basis for generating ensembles of spatially explicit alternative scenarios to guide future land management decisions in the region.
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in analyzing aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. © The Author(s) 2016.
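The attenuation effect reported above (ignoring covariate measurement error pulls the regression coefficient toward zero) can be reproduced with a quick simulation (a generic sketch of classical measurement error in a plain linear setting, not the authors' spatial multiscale model; sample sizes and variances are illustrative):

import numpy as np

rng = np.random.default_rng(7)
n = 5000
x_true = rng.normal(size=n)                        # true finer-level predictor
x_obs  = x_true + rng.normal(scale=0.8, size=n)    # aggregated / error-prone version
y      = 2.0 * x_true + rng.normal(scale=1.0, size=n)

naive = np.polyfit(x_obs, y, 1)[0]                 # ignores measurement error
# Classical correction: divide by the reliability ratio var(x)/var(x_obs).
# (In practice the error variance is unknown and must be modelled, which is
#  what a measurement error model does; x_true is used here only to illustrate.)
reliability = x_true.var() / x_obs.var()
corrected = naive / reliability

print(f"true slope 2.0 | naive {naive:.2f} | corrected {corrected:.2f}")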
Reynolds number trend of hierarchies and scale interactions in turbulent boundary layers.
Baars, W J; Hutchins, N; Marusic, I
2017-03-13
Small-scale velocity fluctuations in turbulent boundary layers are often coupled with the larger-scale motions. Studying the nature and extent of this scale interaction allows for a statistically representative description of the small scales over a time scale of the larger, coherent scales. In this study, we consider temporal data from hot-wire anemometry at Reynolds numbers ranging from Re_τ ≈ 2800 to 22 800, in order to reveal how the scale interaction varies with Reynolds number. Large-scale conditional views of the representative amplitude and frequency of the small-scale turbulence, relative to the large-scale features, complement the existing consensus on large-scale modulation of the small-scale dynamics in the near-wall region. Modulation is a type of scale interaction, where the amplitude of the small-scale fluctuations is continuously proportional to the near-wall footprint of the large-scale velocity fluctuations. Aside from this amplitude modulation phenomenon, we reveal the influence of the large-scale motions on the characteristic frequency of the small scales, known as frequency modulation. From the wall-normal trends in the conditional averages of the small-scale properties, it is revealed how the near-wall modulation transitions to an intermittent-type scale arrangement in the log-region. On average, the amplitude of the small-scale velocity fluctuations only deviates from its mean value in a confined temporal domain, the duration of which is fixed in terms of the local Taylor time scale. These concentrated temporal regions are centred on the internal shear layers of the large-scale uniform momentum zones, which exhibit regions of positive and negative streamwise velocity fluctuations. With an increasing scale separation at high Reynolds numbers, this interaction pattern encompasses the features found in studies on internal shear layers and concentrated vorticity fluctuations in high-Reynolds-number wall turbulence. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
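The amplitude-modulation diagnostic can be sketched on a synthetic velocity signal: low-pass filter for the large scales, Hilbert envelope of the small-scale residual, then a correlation between the two (the sampling rate, cutoff frequency and signal model below are assumptions, not the experimental values):

import numpy as np
from scipy import signal

fs = 2000.0                                    # Hz, assumed sampling rate
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(5)
lowpass = signal.butter(4, 10.0, fs=fs, output="sos")

# Synthetic signal: a large-scale fluctuation that modulates the amplitude of
# a small-scale (300 Hz) component.
large = signal.sosfiltfilt(lowpass, rng.standard_normal(t.size))
large /= large.std()
u = large + (1.0 + 0.3 * large) * np.sin(2 * np.pi * 300.0 * t)

u_large = signal.sosfiltfilt(lowpass, u)                   # large-scale part
u_small = u - u_large                                      # small-scale residual
envelope = np.abs(signal.hilbert(u_small))                 # small-scale amplitude
env_large = signal.sosfiltfilt(lowpass, envelope)          # its large-scale trend

R_AM = np.corrcoef(u_large, env_large)[0, 1]
print(f"amplitude-modulation correlation ~ {R_AM:.2f}")    # positive => modulation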
Dark Energy Domination In The Virgocentric Flow
NASA Astrophysics Data System (ADS)
Byrd, Gene; Chernin, A. D.; Karachentsev, I. D.; Teerikorpi, P.; Valtonen, M.; Dolgachev, V. P.; Domozhilova, L. M.
2011-04-01
Dark energy (DE) was first observationally detected at large, Gpc-scale distances. If it is a vacuum energy formulated as Einstein's cosmological constant, Λ, DE should also have dynamical effects at much smaller scales. Previously, we found its effects on much smaller Mpc scales in our Local Group (LG) as well as in other nearby groups. We used new HST observations of member 3D distances from the group centers and Doppler shifts. We find that each group's gravity dominates a bound central system of galaxies, but DE antigravity results in a radial recession of the outer members that increases with distance from the group center. Here we focus on the much larger (but still cosmologically local) Virgo Cluster and the systems around it, using new observations of velocities and distances. We propose an analytic model whose key parameter is the zero-gravity radius (ZGR) from the cluster center, where gravity and DE antigravity balance. DE brings regularity to the Virgocentric flow. Beyond Virgo's 10 Mpc ZGR, the flow curves to approach a linear global Hubble law at larger distances. The Virgo cluster and its outer flow are similar to the Local Group and its local outflow with a scaling factor of about 10; the ZGR for Virgo is 10 times larger than that of the LG. The similarity of the two systems on scales of 1 to 30 Mpc suggests that a quasi-stationary bound central component and an expanding outflow apply to a wide range of groups and clusters, due to the small-scale action of DE as well as gravity. Chernin et al. 2009, Astronomy and Astrophysics, 507, 1271; http://arxiv.org/abs/1006.0066; http://arxiv.org/abs/1006.0555
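One common formulation of the zero-gravity radius balances Newtonian attraction against the vacuum antigravity, giving R_ZG = (3M / 8πρ_v)^(1/3); the quick calculation below uses round-number masses for Virgo and the Local Group as illustrative assumptions, not values from the study:

import numpy as np

M_SUN = 1.989e30                       # kg
MPC   = 3.086e22                       # m
RHO_V = 6.4e-27                        # kg/m3, vacuum (dark energy) density for
                                       # Omega_Lambda ~ 0.7, H0 ~ 70 km/s/Mpc

def zero_gravity_radius(mass_msun):
    """R_ZG = (3 M / (8 pi rho_v))**(1/3): radius where DE antigravity balances gravity."""
    M = mass_msun * M_SUN
    return (3.0 * M / (8.0 * np.pi * RHO_V)) ** (1.0 / 3.0) / MPC   # in Mpc

print(f"Virgo (~1e15 Msun)      : {zero_gravity_radius(1e15):.1f} Mpc")
print(f"Local Group (~2e12 Msun): {zero_gravity_radius(2e12):.1f} Mpc")
# Roughly 10 Mpc versus ~1 Mpc, consistent with the order-of-ten scaling noted above.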
Barnard, P.L.; Erikson, L.H.; Kvitek, R.G.
2011-01-01
New multibeam echosounder and processing technologies yield sub-meter-scale bathymetric resolution, revealing striking details of bedform morphology that are shaped by complex boundary-layer flow dynamics at a range of spatial and temporal scales. An inertially aided post processed kinematic (IAPPK) technique generates a smoothed best estimate trajectory (SBET) solution to tie the vessel motion-related effects of each sounding directly to the ellipsoid, significantly reducing artifacts commonly found in multibeam data, increasing point density, and sharpening seafloor features. The new technique was applied to a large bedform field in 20-30 m water depths in central San Francisco Bay, California (USA), revealing bedforms that suggest boundary-layer flow deflection by the crests where 12-m-wavelength, 0.2-m-amplitude bedforms are superimposed on 60-m-wavelength, 1-m-amplitude bedforms, with crests that often were strongly oblique (approaching 90°) to the larger features on the lee side, and near-parallel on the stoss side. During one survey in April 2008, superimposed bedform crests were continuous between the crests of the larger features, indicating that flow detachment in the lee of the larger bedforms is not always a dominant process. Assessment of bedform crest peakedness, asymmetry, and small-scale bedform evolution between surveys indicates the impact of different flow regimes on the entire bedform field. This paper presents unique fine-scale imagery of compound and superimposed bedforms, which is used to (1) assess the physical forcing and evolution of a bedform field in San Francisco Bay, and (2) in conjunction with numerical modeling, gain a better fundamental understanding of boundary-layer flow dynamics that result in the observed superimposed bedform orientation. © 2011 Springer-Verlag (outside the USA).
Barnard, Patrick L.; Erikson, Li H.; Rubin, David M.; Kvitek, Rikk G.
2011-01-01
New multibeam echosounder and processing technologies yield sub-meter-scale bathymetric resolution, revealing striking details of bedform morphology that are shaped by complex boundary-layer flow dynamics at a range of spatial and temporal scales. An inertially aided post processed kinematic (IAPPK) technique generates a smoothed best estimate trajectory (SBET) solution to tie the vessel motion-related effects of each sounding directly to the ellipsoid, significantly reducing artifacts commonly found in multibeam data, increasing point density, and sharpening seafloor features. The new technique was applied to a large bedform field in 20–30 m water depths in central San Francisco Bay, California (USA), revealing bedforms that suggest boundary-layer flow deflection by the crests where 12-m-wavelength, 0.2-m-amplitude bedforms are superimposed on 60-m-wavelength, 1-m-amplitude bedforms, with crests that often were strongly oblique (approaching 90°) to the larger features on the lee side, and near-parallel on the stoss side. During one survey in April 2008, superimposed bedform crests were continuous between the crests of the larger features, indicating that flow detachment in the lee of the larger bedforms is not always a dominant process. Assessment of bedform crest peakedness, asymmetry, and small-scale bedform evolution between surveys indicates the impact of different flow regimes on the entire bedform field. This paper presents unique fine-scale imagery of compound and superimposed bedforms, which is used to (1) assess the physical forcing and evolution of a bedform field in San Francisco Bay, and (2) in conjunction with numerical modeling, gain a better fundamental understanding of boundary-layer flow dynamics that result in the observed superimposed bedform orientation.
NASA Technical Reports Server (NTRS)
Roberts, J. Brent; Clayson, C. A.
2012-01-01
The residual forcing necessary to close the mixed layer temperature budget (MLTB) on seasonal time scales is largest in regions of strongest surface heat flux forcing. Identifying the dominant source of error (surface heat flux error, mixed layer depth estimation, or ocean dynamical forcing) remains a challenge in the eastern tropical oceans, where ocean processes are very active. Improved sub-surface observations are necessary to better constrain errors. 1. Mixed layer depth evolution is critical to the seasonal evolution of mixed layer temperatures. It determines the inertia of the mixed layer and scales the sensitivity of the MLTB to errors in surface heat flux and ocean dynamical forcing. This role produces timing impacts for errors in SST prediction. 2. Errors in the MLTB are larger than the historical 10 W m-2 target accuracy. In some regions, a less stringent accuracy can be tolerated if the goal is to resolve the seasonal SST cycle.
Studies on scaling of flow noise received at the stagnation point of an axisymmetric body
NASA Astrophysics Data System (ADS)
Arakeri, V. H.; Satyanarayana, S. G.; Mani, K.; Sharma, S. D.
1991-05-01
A description of the studies related to the problem of scaling of flow noise received at the stagnation point of axisymmetric bodies is provided. The source of flow noise under consideration is the transitional/turbulent regions of the boundary layer flow on the axisymmetric body. Lauchle has recently shown that the noise measured in the laminar region (including the stagnation point) corresponds closely to the noise measured in the transition region, provided that the acoustic losses due to diffraction are accounted for. The present study includes experimental measurement of flow noise at the stagnation point of three differently shaped axisymmetric headforms. One of the body shapes chosen is that used by Lauchle in similar studies. This was done to establish the effect of body size on flow noise. The results of the experimental investigations clearly show that the flow noise received at the stagnation point is a strong function of free stream velocity, a moderately strong function of body scale, but a weak function of boundary layer thickness. In addition, there is evidence that when the body scale changes, the flow noise amplitude scales but no frequency shift occurs. A scaling procedure is proposed based on the present observations along with those of Lauchle. At a given frequency, the amplitude of noise level obtained under model testing conditions is first scaled to account for differences in the velocity and size corresponding to the prototype conditions; then a correction to this is applied to account for losses due to diffraction, which are estimated on the basis of the geometric theory of diffraction (GTD) with the source being located at the predicted position of turbulent transition. Use of the proposed scaling law to extrapolate presently obtained noise levels to two other conditions involving larger-scale bodies shows good agreement with actually measured levels, in particular at higher frequencies. Since model scale results have been used successfully to predict levels on larger-sized bodies tested in a totally different environment, the present data along with the proposed scaling procedure can be used to predict the expected flow noise levels at prototype scales during preliminary design studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Sheng; Covino, Timothy P.; Sivapalan, Murugesu
In this paper, we use a dynamic network flow model, coupled with a transient storage zone biogeochemical model, to simulate dissolved nutrient removal processes at the channel network scale. We have explored several scenarios combining rainfall variability with the biological and geomorphic characteristics of the catchment, to understand the dominant controls on removal and delivery of dissolved nutrients (e.g., nitrate). These model-based theoretical analyses suggested that while nutrient removal efficiency is lower during flood events than during baseflow periods, flood events contribute significantly to bulk nutrient removal, whereas bulk removal during baseflow periods is less. This is due to the fact that nutrient supply is larger during flood events; this trend is even stronger in large rivers. However, the efficiency of removal during both periods decreases in larger rivers, due to (i) increasing flow velocities and thus decreasing residence time, and (ii) increasing flow depth and thus decreasing nutrient uptake rates. In addition, nutrient removal processes can be divided into two parts: in the main channel and in the hyporheic transient storage zone. When assessing their relative contributions, the size of the transient storage zone is a dominant control, followed by uptake rates in the main channel and in the transient storage zone. Increasing size of the transient storage zone with downstream distance affects the relative contributions to nutrient removal of the water column and the transient storage zone, which also impacts the way nutrient removal rates scale with increasing size of rivers. Intra-annual hydrologic variability has a significant impact on removal rates at all scales: the more variable the streamflow is, compared to mean discharge, the less nutrient is removed in the channel network. A scale-independent first order uptake coefficient, ke, estimated from model simulations, is highly dependent on the relative size of the transient storage zone and how it changes in the downstream direction, as well as the nature of hydrologic variability.
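As a minimal illustration of the first-order uptake framing used above, the sketch below computes the fraction of nutrient removed in a single reach from first-order uptake coefficients and a residence time, splitting the removal between the main channel and a transient storage zone. The simple exponential form and all coefficient values are illustrative assumptions, not parameters of the network model.

```python
import numpy as np

def reach_removal_fraction(k_channel, k_storage, residence_time):
    """First-order removal over one reach: C_out = C_in * exp(-(kc + ks) * tau).

    k_channel, k_storage : first-order uptake coefficients [1/s] (illustrative)
    residence_time       : water residence time in the reach [s]
    """
    k_total = k_channel + k_storage
    removed = 1.0 - np.exp(-k_total * residence_time)
    # Attribute the removed mass to each compartment in proportion to its rate.
    share_channel = k_channel / k_total
    share_storage = k_storage / k_total
    return removed, removed * share_channel, removed * share_storage

# A fast reach (short residence time) removes a smaller fraction than a slow
# one, the qualitative trend described above for larger rivers.
print(reach_removal_fraction(k_channel=1e-5, k_storage=2e-5, residence_time=3600))
print(reach_removal_fraction(k_channel=1e-5, k_storage=2e-5, residence_time=36000))
```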
NASA Astrophysics Data System (ADS)
Frantziskonis, George N.; Gur, Sourav
2017-06-01
Thermally induced phase transformation in NiTi shape memory alloys (SMAs) shows strong size and shape effects, collectively termed length scale effects, at the nano to micrometer scales, and these have important implications for the design and use of devices and structures at such scales. This paper, based on a recently developed multiscale model that utilizes molecular dynamics (MD) simulations at small scales and MD-verified phase field (PhF) simulations at larger scales, reports results on specific length scale effects, i.e. length scale effects in martensite phase fraction (MPF) evolution, in transformation temperatures (martensite and austenite start and finish), and in the thermally cyclic transformation between the austenitic and martensitic phases. The multiscale study identifies saturation points for length scale effects and studies, for the first time, the length scale effect on the kinetics (i.e. developed internal strains) in the B19′ phase during phase transformation. The major part of the work addresses small scale single crystals in specific orientations. However, the multiscale method is used in a unique and novel way to indirectly study length scale and grain size effects on evolution kinetics in polycrystalline NiTi, and to compare the simulation results to experiments. The interplay of the grain size and the length scale effect on the thermally induced MPF evolution is also shown in the present study. Finally, the multiscale coupling results are employed to improve phenomenological material models for NiTi SMA.
NASA Technical Reports Server (NTRS)
Erickson, Gary E.; Inenaga, Andrew S.
1994-01-01
Laser vapor screen (LVS) flow visualization systems that are fiber-optic based were developed and installed for aerodynamic research in the Langley 8-Foot Transonic Pressure Tunnel and the Langley 7- by 10-Foot High Speed Tunnel. Fiber optics are used to deliver the laser beam through the plenum shell that surrounds the test section of each facility and to the light-sheet-generating optics positioned in the ceiling window of the test section. Water is injected into the wind tunnel diffuser section to increase the relative humidity and promote condensation of the water vapor in the flow field about the model. The condensed water vapor is then illuminated with an intense sheet of laser light to reveal features of the flow field. The plenum shells are optically sealed; therefore, video-based systems are used to observe and document the flow field. Operational experience shows that the fiber-optic-based systems provide safe, reliable, and high-quality off-surface flow visualization in smaller and larger scale subsonic and transonic wind tunnels. The design, the installation, and the application of the Langley Research Center (LaRC) LVS flow visualization systems in larger scale wind tunnels are highlighted. The efficiency of the fiber optic LVS systems and their insensitivity to wind tunnel vibration, the tunnel operating temperature and pressure variations, and the airborne contaminants are discussed.
Multi-scale image segmentation and numerical modeling in carbonate rocks
NASA Astrophysics Data System (ADS)
Alves, G. C.; Vanorio, T.
2016-12-01
Numerical methods based on computational simulations can be an important tool in estimating physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield conflicting results with respect to physical laboratory measurements. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by a larger grain/micrite ratio, results show that SEM-scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be better suited for numerical simulations.
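As a simple illustration of the segmentation step described above, the sketch below thresholds a grayscale SEM image into pore and grain phases with Otsu's method and reports an image porosity; treating darker pixels as pores is an assumption of this example, and the study itself compared three different segmentation techniques.

```python
import numpy as np
from skimage import io, filters

def porosity_from_image(path):
    """Segment a grayscale SEM image into pore/grain and return the pore fraction."""
    img = io.imread(path, as_gray=True)        # grayscale float image
    threshold = filters.threshold_otsu(img)    # global Otsu threshold
    pores = img < threshold                    # assumption: pores are darker than grains
    return pores.mean()                        # pore fraction = image porosity

# Hypothetical usage:
# phi = porosity_from_image("carbonate_sem_tile.png")
# print(f"image porosity = {phi:.3f}")
```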
Optimal configurations of spatial scale for grid cell firing under noise and uncertainty
Towse, Benjamin W.; Barry, Caswell; Bush, Daniel; Burgess, Neil
2014-01-01
We examined the accuracy with which the location of an agent moving within an environment could be decoded from the simulated firing of systems of grid cells. Grid cells were modelled with Poisson spiking dynamics and organized into multiple ‘modules’ of cells, with firing patterns of similar spatial scale within modules and a wide range of spatial scales across modules. The number of grid cells per module, the spatial scaling factor between modules and the size of the environment were varied. Errors in decoded location can take two forms: small errors of precision and larger errors resulting from ambiguity in decoding periodic firing patterns. With enough cells per module (e.g. eight modules of 100 cells each) grid systems are highly robust to ambiguity errors, even over ranges much larger than the largest grid scale (e.g. over a 500 m range when the maximum grid scale is 264 cm). Results did not depend strongly on the precise organization of scales across modules (geometric, co-prime or random). However, independent spatial noise across modules, which would occur if modules receive independent spatial inputs and might increase with spatial uncertainty, dramatically degrades the performance of the grid system. This effect of spatial uncertainty can be mitigated by uniform expansion of grid scales. Thus, in the realistic regimes simulated here, the optimal overall scale for a grid system represents a trade-off between minimizing spatial uncertainty (requiring large scales) and maximizing precision (requiring small scales). Within this view, the temporary expansion of grid scales observed in novel environments may be an optimal response to increased spatial uncertainty induced by the unfamiliarity of the available spatial cues. PMID:24366144
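A minimal sketch of the decoding problem described above: given Poisson spike counts from several modules of grid cells with known tuning curves, the most likely position maximizes the summed Poisson log-likelihood over candidate locations. The one-dimensional periodic tuning curves, module scales, peak rates, and decoding window below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate positions along a 1D track (cm) and module scales (cm).
positions = np.linspace(0.0, 500.0, 2001)
scales = np.array([30.0, 42.0, 59.0, 83.0])    # geometric-like progression (illustrative)
cells_per_module = 20
dt = 0.5                                        # decoding window [s]

def rate_maps(x):
    """Periodic 1D 'grid' tuning curves: one row per cell, peak rate 10 Hz."""
    maps = []
    for s in scales:
        phases = np.linspace(0.0, s, cells_per_module, endpoint=False)
        for p in phases:
            maps.append(5.0 * (1.0 + np.cos(2 * np.pi * (x - p) / s)))
    return np.array(maps)                       # shape: (n_cells, len(x))

lam = rate_maps(positions)                      # expected rates at each candidate position
true_idx = 1200                                 # index of the true position
counts = rng.poisson(lam[:, true_idx] * dt)     # observed Poisson spike counts

# Poisson log-likelihood of each candidate position given the counts.
loglik = counts @ np.log(lam * dt + 1e-12) - (lam * dt).sum(axis=0)
decoded = positions[np.argmax(loglik)]
print(f"true = {positions[true_idx]:.1f} cm, decoded = {decoded:.1f} cm")
```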
Implementation of a boundary element method to solve for the near field effects of an array of WECs
NASA Astrophysics Data System (ADS)
Oskamp, J. A.; Ozkan-Haller, H. T.
2010-12-01
When Wave Energy Converters (WECs) are installed, they affect the shoreline wave climate by removing some of the wave energy which would have reached the shore. Before large WEC projects are launched, it is important to understand the potential coastal impacts of these installations. The high cost associated with ocean scale testing invites the use of hydrodynamic models to play a major role in estimating these effects. In this study, a wave structure interaction program (WAMIT) is used to model an array of WECs. The program predicts the wave field throughout the array using a boundary element method to solve the potential flow fluid problem, taking into account the incident waves, the power dissipated, and the way each WEC moves and interacts with the others. This model is appropriate for a small domain near the WEC array in order to resolve the details in the interactions, but not extending to the coastline (where the far-field effects must be assessed). To propagate these effects to the coastline, the waves leaving this small domain will be used as boundary conditions for a larger model domain which will assess the shoreline effects caused by the array. The immediate work is concerned with setting up the WAMIT model for a small array of point absorbers. A 1:33 scale lab test is planned and will provide data to validate the WAMIT model on this small domain before it is nested with the larger domain to estimate shoreline effects.
NASA Astrophysics Data System (ADS)
Di Luzio, Luca; Mescia, Federico; Nardi, Enrico
2017-01-01
A major goal of axion searches is to reach inside the parameter space region of realistic axion models. Currently, the boundaries of this region depend on somewhat arbitrary criteria, and it would be desirable to specify them in terms of precise phenomenological requirements. We consider hadronic axion models and classify the representations RQ of the new heavy quarks Q. By requiring that (i) the Q's are sufficiently short lived to avoid issues with long-lived strongly interacting relics, and (ii) no Landau poles are induced below the Planck scale, 15 cases are selected which define a phenomenologically preferred axion window bounded by a maximum (minimum) value of the axion-photon coupling about 2 times (4 times) larger than is commonly assumed. Allowing for more than one RQ, larger couplings, as well as complete axion-photon decoupling, become possible.
Multiscale simulations of the early stages of the growth of graphene on copper
NASA Astrophysics Data System (ADS)
Gaillard, P.; Chanier, T.; Henrard, L.; Moskovkin, P.; Lucas, S.
2015-07-01
We have performed multiscale simulations of the growth of graphene on defect-free copper (111) in order to model the nucleation and growth of graphene flakes during chemical vapour deposition and potentially guide future experimental work. Basic activation energies for atomic surface diffusion were determined by ab initio calculations. Larger scale growth was obtained within a kinetic Monte Carlo (KMC) approach with parameters based on the ab initio results. The KMC approach counts the first and second neighbours to determine the probability of surface diffusion. We report qualitative results on the size and shape of the graphene islands as a function of deposition flux. The dominance of graphene zigzag edges at low deposition flux, also observed experimentally, is explained by their larger dynamical stability, which the present model fully reproduces.
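The neighbour-counting rule mentioned above can be made concrete with an Arrhenius-type hop rate in which each occupied first and second neighbour adds to the diffusion barrier; a single kinetic Monte Carlo step built on that rule is sketched below. The attempt frequency and energy increments are illustrative placeholders, not the ab initio values used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

KB = 8.617e-5                  # Boltzmann constant [eV/K]
NU0 = 1.0e13                   # attempt frequency [1/s] (typical order of magnitude)
E0, E1, E2 = 0.4, 0.2, 0.05    # base barrier and per-neighbour increments [eV] (assumed)

def hop_rate(n1, n2, temperature):
    """Arrhenius rate for a surface hop with n1 first and n2 second neighbours."""
    barrier = E0 + n1 * E1 + n2 * E2
    return NU0 * np.exp(-barrier / (KB * temperature))

def kmc_step(neighbour_counts, temperature):
    """Pick one event with probability proportional to its rate and return
    (event index, exponentially distributed time increment)."""
    rates = np.array([hop_rate(n1, n2, temperature) for n1, n2 in neighbour_counts])
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)
    dt = -np.log(rng.random()) / total
    return event, dt

# Three candidate hops with different local coordination at a CVD-like 1300 K.
print(kmc_step([(0, 1), (2, 0), (3, 2)], temperature=1300.0))
```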
Do Performance-Safety Tradeoffs Cause Hypometric Metabolic Scaling in Animals?
Harrison, Jon F
2017-09-01
Hypometric scaling of aerobic metabolism in animals has been widely attributed to constraints on oxygen (O2) supply in larger animals, but recent findings demonstrate that O2 supply balances with need regardless of size. Larger animals also do not exhibit evidence of compensation for O2 supply limitation. Because declining metabolic rates (MRs) are tightly linked to fitness, this provides significant evidence against the hypothesis that constraints on supply drive hypometric scaling. As an alternative, ATP demand might decline in larger animals because of performance-safety tradeoffs. Larger animals, which typically reproduce later, exhibit risk-reducing strategies that lower MR. Conversely, smaller animals are more strongly selected for growth and costly neurolocomotory performance, elevating metabolism. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Donoghue, John F.
2017-08-01
In the description of general covariance, the vierbein and the Lorentz connection can be treated as independent fundamental fields. With the usual gauge Lagrangian, the Lorentz connection is characterized by an asymptotically free running coupling. When running from high energy, the coupling gets large at a scale which can be called the Planck mass. If the Lorentz connection is confined at that scale, the low energy theory can have the Einstein Lagrangian induced at low energy through dimensional transmutation. However, in general there will be new divergences in such a theory and the Lagrangian basis should be expanded. I construct a conformally invariant model with a larger basis size which potentially may have the same property.
Airframe Noise Sub-Component Definition and Model
NASA Technical Reports Server (NTRS)
Golub, Robert A. (Technical Monitor); Sen, Rahul; Hardy, Bruce; Yamamoto, Kingo; Guo, Yue-Ping; Miller, Gregory
2004-01-01
Both in-house and jointly with NASA under the Advanced Subsonic Transport (AST) program, Boeing Commercial Aircraft Company (BCA) had begun work on systematically identifying specific components of noise responsible for total airframe noise generation and applying the knowledge gained towards the creation of a model for airframe noise prediction. This report documents the continued collection of data from model-scale and full-scale airframe noise measurements to complement the earlier existing databases, the development of the subcomponent models, and the generation of a new empirical prediction code. The airframe subcomponent data include measurements from aircraft ranging in size from a Boeing 737 to aircraft larger than a Boeing 747. These results provide the continuity to evaluate the technology developed under the AST program consistent with the guidelines set forth in NASA CR-198298.
Breaking CMB degeneracy in dark energy through LSS
NASA Astrophysics Data System (ADS)
Lee, Seokcheon
2016-03-01
The cosmic microwave background (CMB) and large-scale structure (LSS) are complementary probes in the investigation of the early and late time Universe. After the current accomplishment of high-accuracy CMB measurements, accompanying precision cosmology from LSS data is emphasized. We investigate the dynamical dark energy (DE) models which can produce the same CMB angular power spectra as the ΛCDM model with sub-percent level accuracy. If one adopts the dynamical DE models using the so-called Chevallier-Polarski-Linder (CPL) parametrization, ω ≡ ω0 + ωa(1-a), then one obtains models (ω0, ωa) = (-0.8, -0.767), (-0.9, -0.375), (-1.1, 0.355), (-1.2, 0.688), named M8, M9, M11, and M12, respectively. The differences in the growth rate f, which is related to redshift-space distortions (RSD), between these DE models and the ΛCDM model are only about 0.2% at z = 0. The difference of f between M8 (M9, M11, M12) and the ΛCDM model becomes maximum at z ≈ 0.25 with -2.4 (-1.2, 1.2, 2.5)%. This is a scale-independent quantity. One can investigate the one-loop correction of the matter power spectrum of each model using standard perturbation theory in order to probe the scale-dependent quantity in the quasi-linear regime (i.e. k ≤ 0.4 h^-1 Mpc). The differences in the matter power spectra including the one-loop correction between M8 (M9, M11, M12) and the ΛCDM model at the k = 0.4 h^-1 Mpc scale are 1.8 (0.9, 1.2, 3.0)% at z = 0, 3.0 (1.6, 1.9, 4.2)% at z = 0.5, and 3.2 (1.7, 2.0, 4.5)% at z = 1.0. The larger the departure of ω0 from -1, the larger the difference in the power spectrum. Thus, one should use both the RSD and the quasi-linear observables in order to discriminate a viable DE model among the slew of models which are degenerate in the CMB. We also obtain the lower limit ω0 > -1.5 from the CMB acoustic peaks, which will provide a useful limitation on phantom models.
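For context, the CPL parametrization quoted above gives a closed-form dark-energy density history, so the background expansion of each model can be evaluated directly; a minimal sketch for a flat universe (with an assumed matter density, not a value from the paper) is shown below.

```python
import numpy as np

def E_of_z(z, w0, wa, omega_m=0.3):
    """Dimensionless Hubble rate H(z)/H0 for a flat universe with CPL dark energy.

    Uses the exact CPL density history:
        rho_DE(a)/rho_DE,0 = a**(-3*(1 + w0 + wa)) * exp(-3*wa*(1 - a))
    omega_m = 0.3 is an illustrative choice, not a value from the paper.
    """
    a = 1.0 / (1.0 + z)
    de_density = a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
    return np.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m) * de_density)

z = np.array([0.0, 0.25, 0.5, 1.0])
for name, (w0, wa) in {"M8": (-0.8, -0.767), "LCDM": (-1.0, 0.0)}.items():
    print(name, np.round(E_of_z(z, w0, wa), 4))
```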
Scales and scaling in turbulent ocean sciences; physics-biology coupling
NASA Astrophysics Data System (ADS)
Schmitt, Francois
2015-04-01
Geophysical fields possess huge fluctuations over many spatial and temporal scales. In the ocean, this property at smaller scales is closely linked to marine turbulence. The velocity field varies from large scales to the Kolmogorov scale (mm) and scalar fields from large scales to the Batchelor scale, which is often much smaller. As a consequence, it is not always simple to determine at which scale a process should be considered. The scale question is hence fundamental in marine sciences, especially when dealing with physics-biology coupling. For example, marine dynamical models typically have a grid size of a hundred meters or more, which is more than 10^5 times larger than the smallest turbulence scales (Kolmogorov scale). Such a scale is fine for the dynamics of a whale (around 100 m), but for a fish larva (1 cm) or a copepod (1 mm) a description at smaller scales is needed, due to the nonlinear nature of turbulence. The same holds for biogeochemical fields such as passive and active tracers (oxygen, fluorescence, nutrients, pH, turbidity, temperature, salinity...). In this framework, we will discuss the scale problem in turbulence modeling in the ocean, and the relation of the Kolmogorov and Batchelor scales of turbulence in the ocean to the size of marine animals. We will also consider scaling laws for organism-particle Reynolds numbers (from whales to bacteria), and possible scaling laws for organisms' accelerations.
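For reference, the two dissipation scales contrasted above are set by the kinematic viscosity ν, the turbulent kinetic energy dissipation rate ε, and the scalar diffusivity D (Schmidt number Sc = ν/D); a sketch of the standard definitions is:

```latex
\eta = \left(\frac{\nu^{3}}{\varepsilon}\right)^{1/4},
\qquad
\lambda_{B} = \left(\frac{\nu D^{2}}{\varepsilon}\right)^{1/4} = \frac{\eta}{\sqrt{Sc}},
```

so for scalars with Sc much greater than 1, such as salinity, the Batchelor scale lies well below the Kolmogorov scale, as noted above.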
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-07
... to use this information to develop a new administrative fee allocation formula for the HCV program... program effectively without the benefit of economies of scale that apply to larger programs. The results... costs can be explained by PHA, participant, and market characteristics. The results of the model will be...
Imprint of thawing scalar fields on the large scale galaxy overdensity
NASA Astrophysics Data System (ADS)
Dinda, Bikash R.; Sen, Anjan A.
2018-04-01
We investigate the observed galaxy power spectrum for the thawing class of scalar field models taking into account various general relativistic corrections that occur on very large scales. We consider the full general relativistic perturbation equations for the matter as well as the dark energy fluid. We form a single autonomous system of equations containing both the background and the perturbed equations of motion, which we subsequently solve for different scalar field potentials. First we study the percentage deviation from the ΛCDM model for different cosmological parameters as well as in the observed galaxy power spectra on different scales in scalar field models for various choices of scalar field potentials. Interestingly, the difference in background expansion results in an enhancement of power relative to ΛCDM on small scales, whereas the inclusion of general relativistic (GR) corrections results in a suppression of power relative to ΛCDM on large scales. This can be useful to distinguish scalar field models from ΛCDM with future optical/radio surveys. We also compare the observed galaxy power spectra for tracking and thawing types of scalar field using some particular choices for the scalar field potentials. We show that thawing and tracking models can have large differences in observed galaxy power spectra on large scales and for smaller redshifts due to different GR effects. But on smaller scales and for larger redshifts, the difference is small and is mainly due to the difference in background expansion.
NASA Astrophysics Data System (ADS)
Hutnak, M.; Fisher, A. T.; Stauffer, P.; Gable, C. W.
2005-12-01
We use two-dimensional, finite-element models of coupled heat and fluid flow to investigate local and large-scale heat and fluid transport around and between basement outcrops on a young ridge flank. System geometries and properties are based on observations and measurements on the 3.4-3.6 Ma eastern flank of the Juan de Fuca Ridge. A small area of basement exposure (Baby Bare outcrop) experiences focused hydrothermal discharge, whereas a much larger feature (Grizzly Bare outcrop) 50 km to the south is a site of hydrothermal recharge. Observations of seafloor heat flow, subseafloor pressures, and basement fluid geochemistry at and near these outcrops constrain acceptable model results. Single-outcrop simulations suggest that local convection alone (represented by a high Nusselt number proxy) cannot explain the near-outcrop heat flow patterns; rapid through-flow is required. Venting of at least 5 L/s through the smaller outcrop, a volumetric flow rate consistent with earlier estimates based on plume and outcrop measurements, is needed to match seafloor heat flow patterns. Heat flow patterns are more variable and complex near the larger, recharging outcrop. Simulations that include 5-20 L/s of recharge through this feature can replicate first-order trends in the data, but small-scale variations are likely to result from heterogeneous flow paths and vigorous, local convection. Two-outcrop simulations started with a warm hydrostatic initial condition, based on a conductive model, result in rapid fluid flow from the smaller outcrop to the larger outcrop, inconsistent with observations. Flow can be sustained in the opposite (correct) direction if it is initially forced, which generates a hydrothermal siphon between the two features. Free flow simulations maintain rapid circulation at rates consistent with observations (specific discharge of m/yr to tens of m/yr), provided basement permeability is on the order of 10^-10 m^2 or greater. Lateral flow rates scale inversely with the thickness of the permeable basement layer. The differential pressure needed to drive this circulation, created by the siphon, is on the order of tens to hundreds of kPa, with greater differential pressure needed when basement permeability is lower.
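The quoted circulation rates, driving pressures, and permeabilities are mutually consistent at the order-of-magnitude level via Darcy's law; the sketch below checks this with assumed values for the pressure difference, flow-path length, and fluid viscosity (illustrative choices, not outputs of the simulations).

```python
SECONDS_PER_YEAR = 3.15e7

def darcy_specific_discharge(k, dP, L, mu=1.0e-3):
    """Specific discharge q = (k / mu) * (dP / L) in m/s.

    k  : permeability [m^2]
    dP : driving pressure difference [Pa]
    L  : lateral flow-path length [m]
    mu : dynamic viscosity [Pa s] (cold seawater; warm basement fluid is lower)
    """
    return (k / mu) * (dP / L)

# Assumed: k = 1e-10 m^2, dP = 100 kPa, L = 50 km (outcrop-to-outcrop distance).
q = darcy_specific_discharge(k=1e-10, dP=1.0e5, L=5.0e4)
print(f"{q:.2e} m/s  ~  {q * SECONDS_PER_YEAR:.1f} m/yr")  # of order m/yr to tens of m/yr
```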
Noise characteristics of upper surface blown configurations. Experimental program and results
NASA Technical Reports Server (NTRS)
Brown, W. H.; Searle, N.; Blakney, D. F.; Pennock, A. P.; Gibson, J. S.
1977-01-01
An experimental data base was developed from the model upper surface blowing (USB) propulsive lift system hardware. While the emphasis was on far field noise data, a considerable amount of relevant flow field data were also obtained. The data were derived from experiments in four different facilities resulting in: (1) small scale static flow field data; (2) small scale static noise data; (3) small scale simulated forward speed noise and load data; and (4) limited larger-scale static noise flow field and load data. All of the small scale tests used the same USB flap parts. Operational and geometrical variables covered in the test program included jet velocity, nozzle shape, nozzle area, nozzle impingement angle, nozzle vertical and horizontal location, flap length, flap deflection angle, and flap radius of curvature.
Log-Normal Turbulence Dissipation in Global Ocean Models
NASA Astrophysics Data System (ADS)
Pearson, Brodie; Fox-Kemper, Baylor
2018-03-01
Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality, robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.
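A simple way to check the claimed log-normality in model output is to test whether log ε is close to Gaussian, i.e. whether its skewness and excess kurtosis are near zero; a minimal sketch of that diagnostic, with synthetic data standing in for gridded model dissipation, is shown below.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic stand-in for gridded kinetic-energy dissipation [W/kg]:
# exactly log-normal here, so both diagnostics should be near zero.
eps = np.exp(rng.normal(loc=-20.0, scale=2.0, size=100_000))

log_eps = np.log(eps)
print("skewness of log eps :", stats.skew(log_eps))
print("excess kurtosis     :", stats.kurtosis(log_eps))  # Fisher definition: 0 for Gaussian
print("fraction of total dissipation from the top 1% of cells:",
      np.sort(eps)[-len(eps) // 100:].sum() / eps.sum())
```

The last line illustrates the point made above: for a log-normal field with a broad spread, a small fraction of locations carries a dominant share of the integrated dissipation.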
The brightness temperature of Venus and the absolute flux-density scale at 608 MHz.
NASA Technical Reports Server (NTRS)
Muhleman, D. O.; Berge, G. L.; Orton, G. S.
1973-01-01
The disk temperature of Venus was measured at 608 MHz near the inferior conjunction of 1972, and a value of 498 ± 33 K was obtained using a nominal CKL flux-density scale. The result is consistent with earlier measurements, but has a much smaller uncertainty. Our theoretical model prediction is larger by a factor of 1.21 ± 0.09. This discrepancy has been noticed previously for frequencies below 1400 MHz, but was generally disregarded because of the large observational uncertainties. No way could be found to change the model to produce agreement without causing a conflict with well-established properties of Venus. Thus it is suggested that the flux-density scale may require an upward revision, at least near this frequency, in excess of what has previously been considered likely.
Connectivity-driven white matter scaling and folding in primate cerebral cortex
Herculano-Houzel, Suzana; Mota, Bruno; Kaas, Jon H.
2010-01-01
Larger brains have an increasingly folded cerebral cortex whose white matter scales up faster than the gray matter. Here we analyze the cellular composition of the subcortical white matter in 11 primate species, including humans, and one Scandentia, and show that the mass of the white matter scales linearly across species with its number of nonneuronal cells, which is expected to be proportional to the total length of myelinated axons in the white matter. This result implies that the average axonal cross-section area in the white matter, a, does not scale significantly with the number of neurons in the gray matter, N. The surface area of the white matter increases with N^0.87, not N^1.0. Because this surface can be defined as the product of N, a, and the fraction n of cortical neurons connected through the white matter, we deduce that connectivity decreases in larger cerebral cortices as a slowly diminishing fraction of neurons, which varies with N^-0.16, sends myelinated axons into the white matter. Decreased connectivity is compatible with previous suggestions that neurons in the cerebral cortex are connected as a small-world network and should slow down the increase in global conduction delay in cortices with larger numbers of neurons. Further, a simple model shows that connectivity and cortical folding are directly related across species. We offer a white matter-based mechanism to account for increased cortical folding across species, which we propose to be driven by connectivity-related tension in the white matter, pulling down on the gray matter. PMID:20956290
Fermi rules out the IC/CMB model for the Large-Scale Jet X-ray emission of 3C 273
NASA Astrophysics Data System (ADS)
Georganopoulos, Markos; Meyer, E. T.
2014-01-01
The process responsible for the Chandra-detected X-ray emission from the large-scale jets of powerful quasars is not yet clear. The two main models are inverse Compton scattering off cosmic microwave background (IC/CMB) photons and synchrotron emission from a population of electrons separate from those producing the radio-IR emission. These two models imply radically different conditions in the large-scale jet in terms of jet speed and maximum energy of the particle acceleration mechanism, with important implications for the impact of the jet on the larger-scale environment. Georganopoulos et al. (2006) proposed a diagnostic based on a fundamental difference between these two models: the production of synchrotron X-rays requires multi-TeV electrons, while the IC/CMB model requires a cutoff in the electron energy distribution below TeV energies. This has significant implications for the gamma-ray emission predicted by these two models. Here we present new Fermi observations that put an upper limit on the gamma-ray flux from the large-scale jet of 3C 273 that clearly violates the flux expected from the IC/CMB X-ray interpretation found by extrapolation of the UV to X-ray spectrum of knot A, thus ruling out the IC/CMB interpretation entirely for this source. Further, the Fermi upper limit constrains the Doppler beaming factor to δ < 5.
Long-term forecasting of internet backbone traffic.
Papagiannaki, Konstantina; Taft, Nina; Zhang, Zhi-Li; Diot, Christophe
2005-09-01
We introduce a methodology to predict when and where link additions/upgrades have to take place in an Internet protocol (IP) backbone network. Using simple network management protocol (SNMP) statistics, collected continuously since 1999, we compute aggregate demand between any two adjacent points of presence (PoPs) and look at its evolution at time scales larger than 1 h. We show that IP backbone traffic exhibits visible long term trends, strong periodicities, and variability at multiple time scales. Our methodology relies on the wavelet multiresolution analysis (MRA) and linear time series models. Using wavelet MRA, we smooth the collected measurements until we identify the overall long-term trend. The fluctuations around the obtained trend are further analyzed at multiple time scales. We show that the largest amount of variability in the original signal is due to its fluctuations at the 12-h time scale. We model inter-PoP aggregate demand as a multiple linear regression model, consisting of the two identified components. We show that this model accounts for 98% of the total energy in the original signal, while explaining 90% of its variance. Weekly approximations of those components can be accurately modeled with low-order autoregressive integrated moving average (ARIMA) models. We show that forecasting the long term trend and the fluctuations of the traffic at the 12-h time scale yields accurate estimates for at least 6 months in the future.
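A highly simplified stand-in for the decomposition described above: extract the long-term trend with a long moving average (in place of the wavelet multiresolution approximation) and isolate the 12-hour component by least-squares regression on sinusoids at that period. The sampling interval, window length, and the substitution of a moving average for the wavelet MRA are assumptions of this sketch, not the authors' method.

```python
import numpy as np

def decompose_demand(y, samples_per_day=24, trend_days=28):
    """Split hourly aggregate demand into a long-term trend and a 12-h periodic part."""
    # Long-term trend: centred moving average over several weeks
    # (a crude stand-in for the wavelet multiresolution approximation).
    win = trend_days * samples_per_day
    kernel = np.ones(win) / win
    trend = np.convolve(y, kernel, mode="same")

    # 12-hour component: least-squares fit of sin/cos terms at the 12-h period.
    t = np.arange(len(y))
    period = samples_per_day / 2.0
    X = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    coef, *_ = np.linalg.lstsq(X, y - trend, rcond=None)
    periodic = X @ coef
    return trend, periodic, y - trend - periodic   # trend, 12-h cycle, residual

# Synthetic two-month hourly demand: slow growth + 12-h cycle + noise.
t = np.arange(24 * 60)
y = 100 + 0.05 * t + 10 * np.sin(2 * np.pi * t / 12) + np.random.randn(t.size)
trend, periodic, resid = decompose_demand(y)
print(resid.var() / y.var())   # fraction of variance left unexplained
```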
Attempting to bridge the gap between laboratory and seismic estimates of fracture energy
McGarr, A.; Fletcher, Joe B.; Beeler, N.M.
2004-01-01
To investigate the behavior of the fracture energy associated with expanding the rupture zone of an earthquake, we have used the results of a large-scale, biaxial stick-slip friction experiment to set the parameters of an equivalent dynamic rupture model. This model is determined by matching the fault slip, the static stress drop and the apparent stress. After confirming that the fracture energy associated with this model earthquake is in reasonable agreement with corresponding laboratory values, we can use it to determine fracture energies for earthquakes as functions of stress drop, rupture velocity and fault slip. If we take account of the state of stress at seismogenic depths, the model extrapolation to larger fault slips yields fracture energies that agree with independent estimates by others based on dynamic rupture models for large earthquakes. For fixed stress drop and rupture speed, the fracture energy scales linearly with fault slip.
Gomez-Velez, Jesus D.; Harvey, Judson
2014-01-01
Hyporheic exchange has been hypothesized to have basin-scale consequences; however, predictions throughout river networks are limited by available geomorphic and hydrogeologic data and by models that can analyze and aggregate hyporheic exchange flows across large spatial scales. We developed a parsimonious but physically based model of hyporheic flow for application in large river basins: Networks with EXchange and Subsurface Storage (NEXSS). We applied NEXSS across a broad range of geomorphic diversity in river reaches and synthetic river networks. NEXSS demonstrates that vertical exchange beneath submerged bed forms rather than lateral exchange through meanders dominates hyporheic fluxes and turnover rates along river corridors. Per kilometer, low-order streams have a biogeochemical potential at least 2 orders of magnitude larger than higher-order streams. However, when biogeochemical potential is examined per average length of each stream order, low- and high-order streams were often found to be comparable. As a result, the hyporheic zone's intrinsic potential for biogeochemical transformations is comparable across different stream orders, but the greater river miles and larger total streambed area of lower order streams result in the highest cumulative impact from low-order streams. Lateral exchange through meander banks may be important in some cases but generally only in large rivers.
NASA Astrophysics Data System (ADS)
Gomez-Velez, Jesus D.; Harvey, Judson W.
2014-09-01
Hyporheic exchange has been hypothesized to have basin-scale consequences; however, predictions throughout river networks are limited by available geomorphic and hydrogeologic data and by models that can analyze and aggregate hyporheic exchange flows across large spatial scales. We developed a parsimonious but physically based model of hyporheic flow for application in large river basins: Networks with EXchange and Subsurface Storage (NEXSS). We applied NEXSS across a broad range of geomorphic diversity in river reaches and synthetic river networks. NEXSS demonstrates that vertical exchange beneath submerged bed forms rather than lateral exchange through meanders dominates hyporheic fluxes and turnover rates along river corridors. Per kilometer, low-order streams have a biogeochemical potential at least 2 orders of magnitude larger than higher-order streams. However, when biogeochemical potential is examined per average length of each stream order, low- and high-order streams were often found to be comparable. As a result, the hyporheic zone's intrinsic potential for biogeochemical transformations is comparable across different stream orders, but the greater river miles and larger total streambed area of lower order streams result in the highest cumulative impact from low-order streams. Lateral exchange through meander banks may be important in some cases but generally only in large rivers.
Nearly scale invariant spectrum of gravitational radiation from global phase transitions.
Jones-Smith, Katherine; Krauss, Lawrence M; Mathur, Harsh
2008-04-04
Using a large N sigma model approximation we explicitly calculate the power spectrum of gravitational waves arising from a global phase transition in the early Universe and we confirm that it is scale invariant, implying an observation of such a spectrum may not be a unique feature of inflation. Moreover, the predicted amplitude can be over 3 orders of magnitude larger than the naive dimensional estimate, implying that even a transition that occurs after inflation may dominate in cosmic microwave background polarization or other gravity wave signals.
Hey hey hey hey, it was the DNA
NASA Astrophysics Data System (ADS)
Williams, Martin A. K.
2016-05-01
The investigation of the emergence of spatial patterning in the density profiles of the individual elements of multicomponent systems was perhaps first popularised in a biophysical context by Turing’s work on embryogenesis in 1952. How molecular-scale properties transpire to produce patterns at larger scales continues to fascinate today. Now a model DNA-nanotube system, whose assemblies have been reported recently by Glaser et al (2016 New J. Phys. 18 055001), promises to reveal insights by allowing the mechanical properties of the underlying macromolecular entities to be controlled independently of their chemical nature.
Stochastic Convection Parameterizations: The Eddy-Diffusivity/Mass-Flux (EDMF) Approach (Invited)
NASA Astrophysics Data System (ADS)
Teixeira, J.
2013-12-01
In this presentation it is argued that moist convection parameterizations need to be stochastic in order to be realistic - even in deterministic atmospheric prediction systems. A new unified convection and boundary layer parameterization (EDMF) that optimally combines the Eddy-Diffusivity (ED) approach for smaller-scale boundary layer mixing with the Mass-Flux (MF) approach for larger-scale plumes is discussed. It is argued that for realistic simulations stochastic methods have to be employed in this new unified EDMF. Positive results from the implementation of the EDMF approach in atmospheric models are presented.
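The ED and MF contributions combined in the EDMF approach are conventionally written as a single expression for the subgrid vertical flux of a quantity φ, with K an eddy diffusivity, M a convective mass flux, φ_u the updraft value, and overbars denoting grid-mean quantities; a sketch of that standard decomposition is:

```latex
\overline{w'\varphi'} \;=\; -\,K\,\frac{\partial \overline{\varphi}}{\partial z} \;+\; M\,\bigl(\varphi_{u} - \overline{\varphi}\bigr).
```

One natural entry point for the stochastic methods argued for above is in the sampled updraft properties that determine M and φ_u.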
The cosmic microwave background radiation
NASA Technical Reports Server (NTRS)
Silk, Joseph
1992-01-01
A review of the implications of the spectrum and anisotropy of the cosmic microwave background for cosmology. Thermalization and processes generating spectral distortions are discussed. Anisotropy predictions are described and compared with observational constraints. If the evidence for large-scale power in the galaxy distribution in excess of that predicted by the cold dark matter model is vindicated, and the observed structure originated via gravitational instabilities of primordial density fluctuations, the predicted amplitude of microwave background anisotropies on angular scales of a degree and larger must be at least several parts in 10^6.
Evolution of energy-containing turbulent eddies in the solar wind
NASA Technical Reports Server (NTRS)
Matthaeus, William H.; Oughton, Sean; Pontius, Duane H., Jr.; Zhou, YE
1994-01-01
Previous theoretical treatments of fluid-scale turbulence in the solar wind have concentrated on describing the state and dynamical evolution of fluctuations in the inertial range, which are characterized by power law energy spectra. In the present paper a model for the evolution of somewhat larger, more energetic magnetohydrodynamic (MHD) fluctuations is developed by analogy with classical hydrodynamic turbulence in the quasi-equilibrium range. The model is constructed by assembling and extending existing phenomenologies of homogeneous MHD turbulence, as well as simple two-length-scale models for transport of MHD turbulence in a weakly inhomogeneous medium. A set of equations is presented for the evolution of the turbulence, including the transport and nonlinear evolution of magnetic and kinetic energy, cross helicity, and their correlation scales. Two versions of the model are derived, depending on whether the fluctuations are distributed isotropically in three dimensions or restricted to the two-dimensional plane perpendicular to the mean magnetic field. This model includes a number of potentially important physical effects that have been neglected in previous discussions of transport of solar wind turbulence.
A new model for extinction and recolonization in two dimensions: quantifying phylogeography.
Barton, Nicholas H; Kelleher, Jerome; Etheridge, Alison M
2010-09-01
Classical models of gene flow fail in three ways: they cannot explain large-scale patterns; they predict much more genetic diversity than is observed; and they assume that loosely linked genetic loci evolve independently. We propose a new model that deals with these problems. Extinction events kill some fraction of individuals in a region. These are replaced by offspring from a small number of parents, drawn from the preexisting population. This model of evolution forwards in time corresponds to a backwards model, in which ancestral lineages jump to a new location if they are hit by an event, and may coalesce with other lineages that are hit by the same event. We derive an expression for the identity in allelic state, and show that, over scales much larger than the largest event, this converges to the classical value derived by Wright and Malécot. However, rare events that cover large areas cause low genetic diversity, large-scale patterns, and correlations in ancestry between unlinked loci. © 2010 The Author(s). Journal compilation © 2010 The Society for the Study of Evolution.
Interactions between finite amplitude small and medium-scale waves in the MLT region.
NASA Astrophysics Data System (ADS)
Heale, C. J.; Snively, J. B.
2016-12-01
Small-scale gravity waves can propagate high into the thermosphere and deposit significant momentum and energy into the background flow [e.g., Yamada et al., 2001; Fritts et al., 2014]. However, their propagation, dissipation, and spectral evolution can be significantly altered by other waves and dynamics, and the nature of these complex interactions is not yet well understood. While many ray-tracing and time-dependent modeling studies have been performed to investigate interactions between waves of varying scales [e.g., Eckermann and Marks, 1996; Sartelet, 2003; Liu et al., 2008; Vanderhoff et al., 2008; Senf and Achatz, 2011; Heale et al., 2015], the majority of these have considered waves of larger (tidal) scales, or have simplified one of the waves to be an imposed "background" and discount (or limit) the nonlinear feedback mechanisms between the two waves. In reality, both waves will influence each other, especially at finite amplitudes when nonlinear effects become important or dominant. We present a study of fully nonlinear interactions between small-scale (tens of km, 10-min period) and medium-scale wave packets at finite amplitudes, which includes feedback between the two waves and the ambient atmosphere. Because the time-dependence of the larger-scale wave has been identified as an important factor in reducing reflection [Heale et al., 2015] and critical level effects [Sartelet, 2003; Senf and Achatz, 2011], we choose medium-scale waves of different periods, and thus vertical scales, to investigate how this influences the propagation, filtering, and momentum and energy deposition of the small-scale waves, and in turn how these impacts affect the medium-scale waves. We also consider the observable features of these interactions in the mesosphere and lower thermosphere.
Ultrafast studies of shock induced chemistry-scaling down the size by turning up the heat
NASA Astrophysics Data System (ADS)
McGrane, Shawn
2015-06-01
We will discuss recent progress in measuring time dependent shock induced chemistry on picosecond time scales. Data on the shock induced chemistry of liquids observed through picosecond interferometric and spectroscopic measurements will be reconciled with shock induced chemistry observed on orders of magnitude larger time and length scales from plate impact experiments reported in the literature. While some materials exhibit chemistry consistent with simple thermal models, other materials, like nitromethane, seem to have more complex behavior. More detailed measurements of chemistry and temperature across a broad range of shock conditions, and therefore time and length scales, will be needed to achieve a real understanding of shock induced chemistry, and we will discuss efforts and opportunities in this direction.
Understanding Prairie Fen Hydrology - a Hierarchical Multi-Scale Groundwater Modeling Approach
NASA Astrophysics Data System (ADS)
Sampath, P.; Liao, H.; Abbas, H.; Ma, L.; Li, S.
2012-12-01
Prairie fens provide critical habitat to more than 50 rare species and significantly contribute to the biodiversity of the upper Great Lakes region. The sustainability of these globally unique ecosystems, however, requires that they be fed by a steady supply of pristine, calcareous groundwater. Understanding the hydrology that supports the existence of such fens is essential in preserving these valuable habitats. This research uses process-based multi-scale groundwater modeling for this purpose. Two fen-sites, MacCready Fen and Ives Road Fen, in Southern Michigan were systematically studied. A hierarchy of nested steady-state models was built for each fen-site to capture the system's dynamics at spatial scales ranging from the regional groundwater-shed to the local fens. The models utilize high-resolution Digital Elevation Models (DEM), National Hydrologic Datasets (NHD), a recently-assembled water-well database, and results from a state-wide groundwater mapping project to represent the complex hydro-geological and stress framework. The modeling system simulates both shallow glacial and deep bedrock aquifers as well as the interaction between surface water and groundwater. Aquifer heterogeneities were explicitly simulated with multi-scale transition probability geo-statistics. A two-way hydraulic head feedback mechanism was set up between the nested models, such that the parent models provided boundary conditions to the child models, and in turn the child models provided local information to the parent models. A hierarchical mass budget analysis was performed to estimate the seepage fluxes at the surface water/groundwater interfaces and to assess the relative importance of the processes at multiple scales that contribute water to the fens. The models were calibrated using observed base-flows at stream gauging stations and/or static water levels at wells. Three-dimensional particle tracking was used to predict the sources of water to the fens. We observed from the multi-scale simulations that the water system that supports the fens is a much larger, more connected, and more complex one than expected. The water in the fen can be traced back to a network of sources, including lakes and wetlands at different elevations, which are connected to a regional mound through a "cascade delivery mechanism". This "master recharge area" is the ultimate source of water not only to the fens in its vicinity, but also for many major rivers and aquifers. The implication of this finding is that prairie fens must be managed as part of a much larger, multi-scale groundwater system and we must consider protection of the shorter and long-term water sources. This will require a fundamental reassessment of our current approach to fen conservation, which is primarily based on protection of individual fens and their immediate surroundings. Clearly, in the future we must plan for conservation of the broad recharge areas and the multiple fen complexes they support.
Goal-oriented robot navigation learning using a multi-scale space representation.
Llofriu, M; Tejera, G; Contreras, M; Pelc, T; Fellous, J M; Weitzenfeld, A
2015-12-01
There has been extensive research in recent years on the multi-scale nature of hippocampal place cells and entorhinal grid cells encoding which led to many speculations on their role in spatial cognition. In this paper we focus on the multi-scale nature of place cells and how they contribute to faster learning during goal-oriented navigation when compared to a spatial cognition system composed of single scale place cells. The task consists of a circular arena with a fixed goal location, in which a robot is trained to find the shortest path to the goal after a number of learning trials. Synaptic connections are modified using a reinforcement learning paradigm adapted to the place cells multi-scale architecture. The model is evaluated in both simulation and physical robots. We find that larger scale and combined multi-scale representations favor goal-oriented navigation task learning. Copyright © 2015 Elsevier Ltd. All rights reserved.
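A minimal sketch of the representational idea above: encode position with Gaussian place-cell activations at two spatial scales and learn a value estimate by temporal-difference updates on the combined feature vector. The one-dimensional arena, field widths, learning rate, and random exploration policy are illustrative simplifications of the robot task, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

def place_features(x, centres, width):
    """Gaussian place-cell activations for position x (1D for brevity)."""
    return np.exp(-0.5 * ((x - centres) / width) ** 2)

# Two populations: fine-scale and coarse-scale place cells tiling a 1D arena [0, 1].
fine = np.linspace(0, 1, 40)
coarse = np.linspace(0, 1, 8)

def features(x):
    return np.concatenate([place_features(x, fine, 0.03),
                           place_features(x, coarse, 0.15)])

w = np.zeros(fine.size + coarse.size)   # linear value-function weights
alpha, gamma, goal = 0.1, 0.95, 0.9

for _ in range(200):
    x = rng.uniform(0, 0.2)
    while abs(x - goal) > 0.05:
        x_next = np.clip(x + rng.choice([-0.05, 0.05]), 0, 1)   # random exploration
        reward = 1.0 if abs(x_next - goal) <= 0.05 else 0.0
        td_error = reward + gamma * (w @ features(x_next)) * (reward == 0) - w @ features(x)
        w += alpha * td_error * features(x)                     # TD(0) update
        x = x_next

print("learned value near start vs near goal:",
      round(w @ features(0.1), 3), round(w @ features(0.85), 3))
```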
Power spectrum, correlation function, and tests for luminosity bias in the CfA redshift survey
NASA Technical Reports Server (NTRS)
Park, Changbom; Vogeley, Michael S.; Geller, Margaret J.; Huchra, John P.
1994-01-01
We describe and apply a method for directly computing the power spectrum for the galaxy distribution in the extension of the Center for Astrophysics Redshift Survey. Tests show that our technique accurately reproduces the true power spectrum for k > 0.03 h Mpc^-1. The dense sampling and large spatial coverage of this survey allow accurate measurement of the redshift-space power spectrum on scales from 5 to approximately 200 h^-1 Mpc. The power spectrum has slope n ≈ -2.1 on small scales (λ ≤ 25 h^-1 Mpc) and n ≈ -1.1 on scales 30 < λ < 120 h^-1 Mpc. On larger scales the power spectrum flattens somewhat, but we do not detect a turnover. Comparison with N-body simulations of cosmological models shows that an unbiased, open universe CDM model (Ωh = 0.2) and a nonzero cosmological constant (ΛCDM) model (Ωh = 0.24, λ0 = 0.6, b = 1.3) match the CfA power spectrum over the wavelength range we explore. The standard biased CDM model (Ωh = 0.5, b = 1.5) fails (99% significance level) because it has insufficient power on scales λ > 30 h^-1 Mpc. Biased CDM with a normalization that matches the Cosmic Microwave Background (CMB) anisotropy (Ωh = 0.5, b = 1.4, σ8(mass) = 1) has too much power on small scales to match the observed galaxy power spectrum. This model with b = 1 matches both the Cosmic Background Explorer Satellite (COBE) and the small-scale power spectrum but has insufficient power on scales λ ≈ 100 h^-1 Mpc. We derive a formula for the effect of small-scale peculiar velocities on the power spectrum and combine this formula with the linear-regime amplification described by Kaiser to compute an estimate of the real-space power spectrum. Two tests reveal luminosity bias in the galaxy distribution: First, the amplitude of the power spectrum is approximately 40% larger for the brightest 50% of galaxies in volume-limited samples that have M_lim > M*. This bias in the power spectrum is independent of scale, consistent with the peaks-bias paradigm for galaxy formation. Second, the distribution of local density around galaxies shows that regions of moderate and high density contain both very bright (M < M* = -19.2 + 5 log h) and fainter galaxies, but that voids preferentially harbor fainter galaxies (approximately 2σ significance level).
Scaling of chew cycle duration in primates.
Ross, Callum F; Reed, David A; Washington, Rhyan L; Eckhardt, Alison; Anapol, Fred; Shahnoor, Nazima
2009-01-01
The biomechanical determinants of the scaling of chew cycle duration are important components of models of primate feeding systems at all levels, from the neuromechanical to the ecological. Chew cycle durations were estimated in 35 species of primates and analyzed in conjunction with data on morphological variables of the feeding system estimating moment of inertia of the mandible and force production capacity of the chewing muscles. Data on scaling of primate chew cycle duration were compared with the predictions of simple pendulum and forced mass-spring system models of the feeding system. The gravity-driven pendulum model best predicts the observed cycle duration scaling but is rejected as biomechanically unrealistic. The forced mass-spring model predicts larger increases in chew cycle duration with size than observed, but provides reasonable predictions of cycle duration scaling. We hypothesize that intrinsic properties of the muscles predict spring-like behavior of the jaw elevator muscles during opening and fast close phases of the jaw cycle and that modulation of stiffness by the central nervous system leads to spring-like properties during the slow close/power stroke phase. Strepsirrhines show no predictable relationship between chew cycle duration and jaw length. Anthropoids have longer chew cycle durations than nonprimate mammals with similar mandible lengths, possibly due to their enlarged symphyses, which increase the moment of inertia of the mandible. Deviations from general scaling trends suggest that both scaling of the jaw muscles and the inertial properties of the mandible are important in determining the scaling of chew cycle duration in primates.
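For orientation, the contrast between the two candidate models rests on elementary scaling arguments (a sketch of the standard reasoning under explicit geometric-similarity assumptions, not the paper's fitted exponents):

```latex
T_{\text{pendulum}} = 2\pi\sqrt{L/g} \;\propto\; L^{1/2},
\qquad
T_{\text{mass-spring}} = 2\pi\sqrt{m/k} \;\propto\; \sqrt{L^{3}/L} \;=\; L,
```

where the mass-spring exponent assumes jaw mass scaling as L^3 and muscle stiffness as L under geometric similarity; the gravity-driven pendulum therefore predicts the shallower scaling of cycle duration with jaw length, consistent with the observation that the mass-spring model overpredicts the increase of cycle duration with size.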
Reduced-Order Biogeochemical Flux Model for High-Resolution Multi-Scale Biophysical Simulations
NASA Astrophysics Data System (ADS)
Smith, Katherine; Hamlington, Peter; Pinardi, Nadia; Zavatarelli, Marco
2017-04-01
Biogeochemical tracers and their interactions with upper ocean physical processes such as submesoscale circulations and small-scale turbulence are critical for understanding the role of the ocean in the global carbon cycle. These interactions can cause small-scale spatial and temporal heterogeneity in tracer distributions that can, in turn, greatly affect carbon exchange rates between the atmosphere and interior ocean. For this reason, it is important to take into account small-scale biophysical interactions when modeling the global carbon cycle. However, explicitly resolving these interactions in an earth system model (ESM) is currently infeasible due to the enormous associated computational cost. As a result, understanding and subsequently parameterizing how these small-scale heterogeneous distributions develop and how they relate to larger resolved scales is critical for obtaining improved predictions of carbon exchange rates in ESMs. In order to address this need, we have developed the reduced-order, 17 state variable Biogeochemical Flux Model (BFM-17) that follows the chemical functional group approach, which allows for non-Redfield stoichiometric ratios and the exchange of matter through units of carbon, nitrate, and phosphate. This model captures the behavior of open-ocean biogeochemical systems without substantially increasing computational cost, thus allowing the model to be combined with computationally-intensive, fully three-dimensional, non-hydrostatic large eddy simulations (LES). In this talk, we couple BFM-17 with the Princeton Ocean Model and show good agreement between predicted monthly-averaged results and Bermuda testbed area field data (including the Bermuda-Atlantic Time-series Study and Bermuda Testbed Mooring). Through these tests, we demonstrate the capability of BFM-17 to accurately model open-ocean biochemistry. Additionally, we discuss the use of BFM-17 within a multi-scale LES framework and outline how this will further our understanding of turbulent biophysical interactions in the upper ocean.
Reduced-Order Biogeochemical Flux Model for High-Resolution Multi-Scale Biophysical Simulations
NASA Astrophysics Data System (ADS)
Smith, K.; Hamlington, P.; Pinardi, N.; Zavatarelli, M.; Milliff, R. F.
2016-12-01
Biogeochemical tracers and their interactions with upper ocean physical processes such as submesoscale circulations and small-scale turbulence are critical for understanding the role of the ocean in the global carbon cycle. These interactions can cause small-scale spatial and temporal heterogeneity in tracer distributions which can, in turn, greatly affect carbon exchange rates between the atmosphere and interior ocean. For this reason, it is important to take into account small-scale biophysical interactions when modeling the global carbon cycle. However, explicitly resolving these interactions in an earth system model (ESM) is currently infeasible due to the enormous associated computational cost. As a result, understanding and subsequently parametrizing how these small-scale heterogeneous distributions develop and how they relate to larger resolved scales is critical for obtaining improved predictions of carbon exchange rates in ESMs. In order to address this need, we have developed the reduced-order, 17 state variable Biogeochemical Flux Model (BFM-17). This model captures the behavior of open-ocean biogeochemical systems without substantially increasing computational cost, thus allowing the model to be combined with computationally-intensive, fully three-dimensional, non-hydrostatic large eddy simulations (LES). In this talk, we couple BFM-17 with the Princeton Ocean Model and show good agreement between predicted monthly-averaged results and Bermuda testbed area field data (including the Bermuda-Atlantic Time Series and Bermuda Testbed Mooring). Through these tests, we demonstrate the capability of BFM-17 to accurately model open-ocean biochemistry. Additionally, we discuss the use of BFM-17 within a multi-scale LES framework and outline how this will further our understanding of turbulent biophysical interactions in the upper ocean.
Hopping Diffusion of Nanoparticles in Polymer Matrices
2016-01-01
We propose a hopping mechanism for diffusion of large nonsticky nanoparticles subjected to topological constraints in both unentangled and entangled polymer solids (networks and gels) and entangled polymer liquids (melts and solutions). Probe particles with size larger than the mesh size a_x of unentangled polymer networks or the tube diameter a_e of entangled polymer liquids are trapped by the network or entanglement cells. At long time scales, however, these particles can diffuse by overcoming the free energy barrier between neighboring confinement cells. The terminal particle diffusion coefficient dominated by this hopping diffusion is appreciable for particles with size moderately larger than the network mesh size a_x or tube diameter a_e. Much larger particles in polymer solids will be permanently trapped by local network cells, whereas they can still move in polymer liquids by waiting for entanglement cells to rearrange on the relaxation time scales of these liquids. Hopping diffusion in entangled polymer liquids and networks has a weaker dependence on particle size than in unentangled networks because entanglements can slide along chains under polymer deformation. The proposed hopping model enables an understanding of the motion of large nanoparticles in polymeric nanocomposites and the transport of nanoscale drug carriers in complex biological gels such as mucus. PMID:25691803
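Schematically, the hopping picture implies an activated form for the terminal diffusion coefficient; the expression below is a generic Kramers-type sketch consistent with the description above (the symbols and prefactor are illustrative assumptions, not the paper's derived result):

```latex
D_{\text{hop}} \;\sim\; \frac{a_x^{2}}{\tau_0}\,
\exp\!\left(-\frac{\Delta U(d/a_x)}{k_B T}\right),
```

with a_x the confinement (mesh or tube) size, tau_0 a local relaxation time, and Delta U a free-energy barrier that grows with the ratio of particle size d to confinement size, so that hopping remains appreciable only for moderately oversized particles.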
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Ackerman, Thomas P.
1993-01-01
The period from 18 UTC 26 Nov. 1991 to roughly 23 UTC 26 Nov. 1991 is one of the study periods of the FIRE (First International Satellite Cloud Climatology Regional Experiment) 2 field campaign. The middle and upper tropospheric cloud data that was collected during this time allowed FIRE scientists to learn a great deal about the detailed structure, microphysics, and radiative characteristics of the mid latitude cirrus that occurred during that time. Modeling studies that range from the microphysical to the mesoscale are now underway attempting to piece the detailed knowledge of this cloud system into a coherent picture of the atmospheric processes important to cirrus cloud development and maintenance. An important component of the modeling work, either as an input parameter in the case of cloud-scale models, or as output in the case of meso and larger scale models, is the large scale forcing of the cloud system. By forcing we mean the synoptic scale vertical motions and moisture budget that initially send air parcels ascending and supply the water vapor to allow condensation during ascent. Defining this forcing from the synoptic scale to the cloud scale is one of the stated scientific objectives of the FIRE program. From the standpoint of model validation, it is also necessary that the vertical motions and large scale moisture budget of the case studies be derived from observations. It is considered important that the models used to simulate the observed cloud fields begin with the correct dynamics and that the dynamics be in the right place for the right reasons.
NASA Astrophysics Data System (ADS)
Fewtrell, Timothy J.; Duncan, Alastair; Sampson, Christopher C.; Neal, Jeffrey C.; Bates, Paul D.
2011-01-01
This paper describes benchmark testing of a diffusive and an inertial formulation of the de St. Venant equations implemented within the LISFLOOD-FP hydraulic model using high resolution terrestrial LiDAR data. The models are applied to a hypothetical flooding scenario in a section of Alcester, UK, which experienced significant surface water flooding during the June and July 2007 floods in the UK. The sensitivity of water elevation and velocity simulations to model formulation and grid resolution is analyzed. The differences in depth and velocity estimates between the diffusive and inertial approximations are within 10% of the simulated value, but inertial effects persist at the wetting front in steep catchments. Both models portray a similar scale dependency between 50 cm and 5 m resolution, which reiterates previous findings that errors in coarse-scale topographic data sets are significantly larger than differences between numerical approximations. In particular, these results confirm the need to distinctly represent the camber and curbs of roads in the numerical grid when simulating surface water flooding events. Furthermore, although water depth estimates at grid scales coarser than 1 m appear robust, velocity estimates at these scales seem to be inconsistent compared to the 50 cm benchmark. The inertial formulation is shown to reduce computational cost by up to three orders of magnitude at high resolutions, thus making simulations at this scale viable in practice compared to diffusive models. For the first time, this paper highlights the utility of high resolution terrestrial LiDAR data to inform small-scale flood risk management studies.
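For reference, the inertial formulation benchmarked here is usually written as an explicit update of the unit-width discharge q between adjacent cells; the form below follows the simplified shallow-water scheme commonly associated with LISFLOOD-FP and should be read as a sketch rather than the exact code:

```latex
q^{t+\Delta t} \;=\;
\frac{q^{t} - g\,h_{f}\,\Delta t\,\dfrac{\partial (h+z)}{\partial x}}
     {1 + g\,h_{f}\,\Delta t\,n^{2}\,\lvert q^{t}\rvert \, h_{f}^{-7/3}},
```

where h_f is the flow depth between cells, z the bed elevation, and n Manning's roughness; neglecting the local acceleration (the q^t term in the numerator) recovers a diffusive-wave treatment in which the flux follows Manning's equation driven by the water-surface slope.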
Scale-dependency of effective hydraulic conductivity on fire-affected hillslopes
NASA Astrophysics Data System (ADS)
Langhans, Christoph; Lane, Patrick N. J.; Nyman, Petter; Noske, Philip J.; Cawson, Jane G.; Oono, Akiko; Sheridan, Gary J.
2016-07-01
Effective hydraulic conductivity (Ke) for Hortonian overland flow modeling has been defined as a function of rainfall intensity and runon infiltration assuming a distribution of saturated hydraulic conductivities (Ks). However, the surface boundary condition during infiltration and its interactions with the distribution of Ks are not well represented in models. As a result, the mean value of the Ks distribution (K̄s), which is the central parameter for Ke, varies between scales. Here we quantify this discrepancy with a large infiltration data set comprising four different methods and scales from fire-affected hillslopes in SE Australia, using a relatively simple yet widely used conceptual model of Ke. Ponded disk (0.002 m2) and ring infiltrometers (0.07 m2) were used at the small scales, and rainfall simulations (3 m2) and small catchments (ca 3000 m2) at the larger scales. We compared K̄s between methods measured at the same time and place. Disk and ring infiltrometer measurements had on average 4.8 times higher values of K̄s than rainfall simulations and catchment-scale estimates. Furthermore, the distribution of Ks was not clearly log-normal and scale-independent, as supposed in the conceptual model. In our interpretation, water repellency and preferential flow paths increase the variance of the measured distribution of Ks and bias ponding toward areas of very low Ks during rainfall simulations and small-catchment runoff events, while areas with high preferential flow capacity remain water supply-limited more than the conceptual model of Ke predicts. The study highlights problems in the current theory of scaling runoff generation.
Bispectrum supersample covariance
NASA Astrophysics Data System (ADS)
Chan, Kwan Chuen; Moradinezhad Dizgah, Azadeh; Noreña, Jorge
2018-02-01
Modes with wavelengths larger than the survey window can have a significant impact on the covariance within the survey window. The supersample covariance has been recognized as an important source of covariance for the power spectrum on small scales, and it can potentially be important for the bispectrum covariance as well. In this paper, using the response function formalism, we model the supersample covariance contributions to the bispectrum covariance and the cross-covariance between the power spectrum and the bispectrum. The supersample covariances due to the long-wavelength density and tidal perturbations are investigated, and the tidal contribution is a few orders of magnitude smaller than the density one because in configuration space the bispectrum estimator involves angular averaging and the tidal response function is anisotropic. The impact of the super-survey modes is quantified using numerical measurements with periodic-box and sub-box setups. For the matter bispectrum, the ratio between the supersample covariance correction and the small-scale covariance, which can be computed using a periodic box, is roughly an order of magnitude smaller than that for the matter power spectrum. This is because for the bispectrum, the small-scale non-Gaussian covariance is significantly larger than that for the power spectrum. For the cross-covariance, the supersample covariance is as important as for the power spectrum covariance. The supersample covariance prediction with the halo model response function is in good agreement with numerical results.
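In the response-function formalism used here, the leading supersample term for the power spectrum covariance takes the standard form (written for the density response only; the analogous bispectrum and cross-covariance expressions replace one or both responses by the corresponding derivative of B):

```latex
\mathrm{Cov}^{\mathrm{SSC}}\!\left[\hat P(k_1),\hat P(k_2)\right]
= \sigma_b^{2}\,
\frac{\partial P(k_1)}{\partial \delta_b}\,
\frac{\partial P(k_2)}{\partial \delta_b},
\qquad
\sigma_b^{2} = \frac{1}{V_W^{2}}\int \frac{d^{3}q}{(2\pi)^{3}}\,
\lvert \tilde W(\mathbf q)\rvert^{2}\, P_{\mathrm{lin}}(q),
```

where W is the survey window of volume V_W and delta_b the mean density fluctuation within it.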
NASA Astrophysics Data System (ADS)
Trujillo, E.; Giometto, M. G.; Leonard, K. C.; Maksym, T. L.; Meneveau, C. V.; Parlange, M. B.; Lehning, M.
2014-12-01
Sea ice-atmosphere interactions are major drivers of patterns of sea ice drift and deformation in the Polar regions, and affect snow erosion and deposition at the surface. Here, we combine analyses of sea ice surface topography at very high resolution (1-10 cm) and Large Eddy Simulations (LES) to study surface drag and snow erosion and deposition patterns from process scales to floe scales (1 cm - 100 m). The snow/ice elevations were obtained using a Terrestrial Laser Scanner during the SIPEX II (Sea Ice Physics and Ecosystem eXperiment II) research voyage to East Antarctica (September-November 2012). LES are performed on a regular domain adopting a mixed pseudo-spectral/finite difference spatial discretization. A scale-dependent dynamic subgrid-scale model based on Lagrangian time averaging is adopted to determine the eddy viscosity in the bulk of the flow. Effects of larger-scale features of the surface on wind flows (those features that can be resolved in the LES) are accounted for through an immersed boundary method. Conversely, drag forces caused by subgrid-scale features of the surface must be accounted for through a parameterization. However, the effective aerodynamic roughness parameter z0 for snow/ice is not known. Hence, a novel dynamic approach is utilized, in which z0 is determined using the constraint that the total momentum flux (drag) must be independent of the grid-filter scale. We focus on three ice floe surfaces. The first of these surfaces (October 6, 2012) is used to test the performance of the model, validate the algorithm, and study the spatially distributed fields of resolved and modeled stress components. The other two surfaces, scanned at the same location before and after a snow storm event (October 20/23, 2012), are used to study how spatially resolved mean flow and turbulence relate to observed patterns of snow erosion and deposition. We show how erosion and deposition patterns are correlated with the computed stresses, with modeled stresses having higher explanatory power. Deposition mainly occurs in the wake regions of specific ridges that strongly affect wind flow patterns. These larger ridges also lock in place elongated streaks of relatively high speeds with axes along the stream-wise direction, which are largely responsible for the observed erosion.
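The roughness length entering the dynamic procedure is the usual aerodynamic parameter of the logarithmic law of the wall (the generic neutral-stability form, stated here only to fix notation):

```latex
\overline{u}(z) \;=\; \frac{u_*}{\kappa}\,\ln\!\frac{z}{z_0},
```

so requiring that the total surface drag implied by the resolved fields be independent of the LES grid-filter scale yields a locally, dynamically determined z0 for the unresolved ice-surface features.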
Of cilium and flagellum kinematics
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Promode R.; Hansen, Joshua C.
2009-11-01
The kinematics of propulsion of small animals such as paramecia and spermatozoa is considered. Larger-scale models of the cilium and flagellum have been built, and a four-motor apparatus has been constructed to reproduce their known periodic motions. The cilium model has transverse deformational ability in one plane only, while the flagellum model has such ability in two planes. When the flagellum model is given a push-pull in one diametral plane, instead of transverse deflection in one plane, it forms a coil. Berg & Anderson's postulate (Nature 245, 1973) that a flagellum rotates is recalled. The kinematics of the cilia of paramecium, of the whipping motion of spermatozoa flagella, and of the flapping motion (rolling and pitching) of the pectoral fins of much larger animals such as penguins have been reproduced in the same basic paramecium apparatus. The results suggest that each of the tiny individual paramecium propulsors has the intrinsic dormant kinematic and structural building blocks to optimize into higher Reynolds number propulsors. A synthetic hypothesis on how small might have become large is animated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonanno, Luca; Drago, Alessandro
2009-04-15
We study matter at high density and temperature using a chiral Lagrangian in which the breaking of scale invariance is regulated by the value of a scalar field, called the dilaton [E. K. Heide, S. Rudaz, and P. J. Ellis, Nucl. Phys. A571, 713 (1994); G. W. Carter, P. J. Ellis, and S. Rudaz, Nucl. Phys. A603, 367 (1996); G. W. Carter, P. J. Ellis, and S. Rudaz, Nucl. Phys. A618, 317 (1997); G. W. Carter and P. J. Ellis, Nucl. Phys. A628, 325 (1998)]. We provide a phase diagram describing the restoration of chiral and scale symmetries. We show that chiral symmetry is restored at large temperatures, but at low temperatures it remains broken at all densities. We also show that scale invariance is more easily restored at low rather than large baryon densities. The masses of vector mesons scale with the value of the dilaton, and their values initially slightly decrease with the density but then increase again for densities larger than approximately 3 ρ_0. The pion mass increases continuously with the density, and at ρ_0 and T = 0 its value is approximately 30 MeV larger than in the vacuum. We show that the model is compatible with the bounds stemming from astrophysics, e.g., the one associated with the maximum mass of a neutron star. The most striking feature of the model is a very significant softening at large densities, which manifests also as a strong reduction of the adiabatic index. Although the softening probably has no consequence for supernova explosion via the direct mechanism, it could modify the signal in gravitational waves associated with the merging of two neutron stars.
NASA Astrophysics Data System (ADS)
Oberlack, Martin; Rosteck, Andreas; Avsarkisov, Victor
2013-11-01
Textbook knowledge proclaims that Lie symmetries such as the Galilean transformation lie at the heart of fluid dynamics. These important properties also carry over to the statistical description of turbulence, i.e. to the Reynolds stress transport equations and their generalization, the multi-point correlation equations (MPCE). Interestingly enough, the MPCE admit a much larger set of symmetries, in fact infinite dimensional, subsequently named statistical symmetries. Most importantly, these new symmetries have important consequences for our understanding of turbulent scaling laws. The symmetries form the essential foundation to construct exact solutions to the infinite set of MPCE, which in turn are identified as classical and new turbulent scaling laws. Examples of various classical and new shear flow scaling laws, including higher order moments, will be presented. New scaling laws have even been predicted from these symmetries and in turn validated by DNS. Turbulence modellers have implicitly recognized at least one of the statistical symmetries, as this is the basis for the usual log-law which has been employed for calibrating essentially all engineering turbulence models. An obvious conclusion is to generally make turbulence models consistent with the new statistical symmetries.
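The log-law mentioned as the common calibration target of engineering turbulence models is, in inner units,

```latex
U^{+} \;=\; \frac{1}{\kappa}\,\ln y^{+} + B,
```

with von Kármán constant kappa of about 0.41 and additive constant B of about 5 for smooth walls; in the symmetry framework described above this law appears as an invariant solution of the multi-point correlation equations rather than as a purely empirical fit.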
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seyedhosseini, Mojtaba; Kumar, Ritwik; Jurrus, Elizabeth R.
2011-10-01
Automated neural circuit reconstruction through electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that exploits multi-scale contextual information together with Radon-like features (RLF) to learn a series of discriminative models. The main idea is to build a framework which is capable of extracting information about cell membranes from a large contextual area of an EM image in a computationally efficient way. Toward this goal, we extract RLF that can be computed efficiently from the input image and generate a scale-space representation of the context images that are obtained at the output of each discriminative model in the series. Compared to a single-scale model, the use of a multi-scale representation of the context image gives the subsequent classifiers access to a larger contextual area in an effective way. Our strategy is general and independent of the classifier and has the potential to be used in any context based framework. We demonstrate that our method outperforms the state-of-the-art algorithms in detection of neuron membranes in EM images.
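As a purely illustrative sketch of the serial, multi-scale-context idea (the classifier choice, the generic per-pixel features standing in for RLF, and the pyramid depth below are assumptions, not the authors' implementation), each stage can be trained on raw features concatenated with a multi-resolution pyramid of the previous stage's membrane-probability map:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def context_pyramid(prob_map, levels=3):
    """Coarse views of the previous stage's probability map, each brought back
    to full resolution so every pixel sees a progressively wider context area."""
    h, w = prob_map.shape
    maps = [prob_map]
    for lev in range(1, levels + 1):
        f = 2 ** lev
        hc, wc = h - h % f, w - w % f                       # crop to a multiple of f
        coarse = prob_map[:hc, :wc].reshape(hc // f, f, wc // f, f).mean(axis=(1, 3))
        up = np.repeat(np.repeat(coarse, f, axis=0), f, axis=1)
        full = np.zeros_like(prob_map)
        full[:hc, :wc] = up
        maps.append(full)
    return np.stack(maps, axis=-1)                          # shape (h, w, levels + 1)

def train_cascade(image_feats, labels, n_stages=3):
    """Series of discriminative models; stage k sees the per-pixel features plus
    the multi-scale context built from stage k-1's output (binary membrane labels)."""
    h, w, _ = image_feats.shape
    prob = np.full((h, w), 0.5)                             # uninformative initial context
    stages = []
    for _ in range(n_stages):
        ctx = context_pyramid(prob)
        X = np.concatenate([image_feats, ctx], axis=-1).reshape(h * w, -1)
        clf = RandomForestClassifier(n_estimators=50).fit(X, labels.reshape(-1))
        prob = clf.predict_proba(X)[:, 1].reshape(h, w)
        stages.append(clf)
    return stages
```

The design point the sketch tries to convey is that down-sampling the context map is what gives later stages a large receptive field at essentially constant per-pixel cost.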
Assessing sufficiency of thermal riverscapes for resilient ...
Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific locations and features within stream networks hypothesized to provide disproportionately high-value functional resilience to salmon populations. These include relatively small-scale features such as thermal refuges, and larger-scale features such as entire watersheds or aquifers that support thermal regimes buffered from local climatic conditions. Quantifying the value of both small and large scale thermal features to salmon populations has been challenged by both the difficulty of mapping thermal regimes at sufficient spatial and temporal resolutions, and integrating thermal regimes into population models. We attempt to address these challenges by using newly-available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We will describe an individual-based modeling approach for assessing sufficiency of thermal refuges for migrating salmon and steelhead in large rivers, as well as a population modeling approach for assessing large-scale climate refugia for salmon in the Pacific Northwest. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec
Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.
Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve
2011-11-01
Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.
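A minimal sketch of what a heterogeneous two-model network looks like through the PyNN interface mentioned above (shown with the generic API and the NEST backend; on SpiNNaker hardware the import would typically be the spiNNaker backend module, and the parameter values here are arbitrary):

```python
import pyNN.nest as sim   # on SpiNNaker hardware: import pyNN.spiNNaker as sim

sim.setup(timestep=1.0)   # ms

# Two subpopulations with different neuron dynamics in the same simulation.
lif = sim.Population(100, sim.IF_curr_exp(tau_m=20.0, v_thresh=-50.0), label="LIF")
izh = sim.Population(100, sim.Izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0), label="Izhikevich")
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0), label="drive")

# Drive the LIF population and let it project sparsely onto the Izhikevich population.
sim.Projection(noise, lif, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))
sim.Projection(lif, izh, sim.FixedProbabilityConnector(0.1),
               synapse_type=sim.StaticSynapse(weight=0.3, delay=1.0))

lif.record("spikes")
izh.record("spikes")
sim.run(1000.0)           # ms
sim.end()
```

The point of the high-level description is that the same script can, in principle, target software simulators or the neuromimetic hardware simply by switching the backend import.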
Santos, Xavier; Felicísimo, Ángel M.
2016-01-01
Ecological Niche Models (ENMs) are widely used to describe how environmental factors influence species distribution. Modelling at a local scale, compared to a large scale within a high environmental gradient, can improve our understanding of ecological species niches. The main goal of this study is to assess and compare the contribution of environmental variables to amphibian and reptile ENMs in two Spanish national parks located in contrasting biogeographic regions, i.e., the Mediterranean and the Atlantic area. The ENMs were built with maximum entropy modelling using 11 environmental variables in each territory. The contributions of these variables to the models were analysed and classified using various statistical procedures (Mann–Whitney U tests, Principal Components Analysis and General Linear Models). Distance to the hydrological network was consistently the most relevant variable for both parks and taxonomic classes. Topographic variables (i.e., slope and altitude) were the second most predictive variables, followed by climatic variables. Differences in variable contribution were observed between parks and taxonomic classes. Variables related to water availability had the larger contribution to the models in the Mediterranean park, while topography variables were decisive in the Atlantic park. Specific response curves to environmental variables were in accordance with the biogeographic affinity of species (Mediterranean and non-Mediterranean species) and taxonomy (amphibians and reptiles). Interestingly, these results were observed for species located in both parks, particularly those situated at their range limits. Our findings show that ecological niche models built at local scale reveal differences in habitat preferences within a wide environmental gradient. Therefore, modelling at local scales rather than assuming large-scale models could be preferable for the establishment of conservation strategies for herptile species in natural parks. PMID:27761304
Self-folding and aggregation of amyloid nanofibrils
NASA Astrophysics Data System (ADS)
Paparcone, Raffaella; Cranford, Steven W.; Buehler, Markus J.
2011-04-01
Amyloids are highly organized protein filaments, rich in β-sheet secondary structures, that self-assemble to form dense plaques in brain tissues affected by severe neurodegenerative disorders (e.g. Alzheimer's disease). Identified as natural functional materials in bacteria, and in light of their remarkable mechanical properties, amyloids have also been proposed as a platform for novel biomaterials in nanotechnology applications including nanowires, liquid crystals, scaffolds and thin films. Despite recent progress in understanding amyloid structure and behavior, the latent self-assembly mechanism and the underlying adhesion forces that drive the aggregation process remain poorly understood. On the basis of previous full atomistic simulations, here we report a simple coarse-grain model to analyze the competition between adhesive forces and elastic deformation of amyloid fibrils. We use a simple model system to investigate self-assembly mechanisms of fibrils, focused on the formation of self-folded nanorackets and nanorings, and thereby address a critical issue in linking the biochemical (Angstrom) to micrometre scales relevant for larger-scale states of functional amyloid materials. We investigate the effect of varying the interfibril adhesion energy on the structure and stability of self-folded nanorackets and nanorings and demonstrate that these aggregated amyloid fibrils are stable in such states even when the fibril-fibril interaction is relatively weak, given that the constituting amyloid fibril length exceeds a critical fibril length-scale of several hundred nanometres. We further present a simple approach to directly determine the interfibril adhesion strength from geometric measures. In addition to providing insight into the physics of aggregation of amyloid fibrils, our model enables the analysis of large-scale amyloid plaques and presents a new method for the estimation and engineering of the adhesive forces responsible for the self-assembly of amyloid nanostructures, filling a gap that previously existed between full atomistic simulations of primarily ultra-short fibrils and much larger micrometre-scale amyloid aggregates. Via direct simulation of large-scale amyloid aggregates consisting of hundreds of fibrils, we demonstrate that the fibril length has a profound impact on their structure and mechanical properties, where the critical fibril length-scale derived from our analysis of self-folded nanorackets and nanorings defines the structure of amyloid aggregates. A multi-scale modeling approach as used here, bridging the scales from Angstroms to micrometres, opens a wide range of possible nanotechnology applications by presenting a holistic framework that balances the mechanical properties of individual fibrils, hierarchical self-assembly, and the adhesive forces determining their stability, to facilitate the design of de novo amyloid materials.
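The competition described above can be summarized by a dimensional argument (an order-of-magnitude sketch under the stated assumptions, not the coarse-grained model's calibrated value): self-folded racket or ring states become favorable only when the adhesion energy gained along the fibril-fibril contact outweighs the bending energy stored in the fold, which singles out a critical length

```latex
L_{c} \;\sim\; \sqrt{\frac{EI}{\gamma}},
```

where EI is the fibril bending stiffness and gamma the adhesion energy per unit length of contact; fibrils much shorter than L_c stay straight, while longer ones can self-fold or aggregate.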
Multi-scale genetic dynamic modelling I : an algorithm to compute generators.
Kirkilionis, Markus; Janus, Ulrich; Sbano, Luca
2011-09-01
We present a new approach, or framework, to model dynamic regulatory genetic activity. The framework uses a multi-scale analysis based upon generic assumptions on the relative time scales attached to the different transitions of molecular states defining the genetic system. At the micro-level such systems are regulated by the interaction of two kinds of molecular players: macro-molecules like DNA or polymerases, and smaller molecules acting as transcription factors. The proposed genetic model then represents the larger, less abundant molecules with a finite discrete state space, for example describing different conformations of these molecules. This is in contrast to the representation of the transcription factors, which are, as in classical reaction kinetics, represented by their particle number only. We illustrate the method by considering the genetic activity associated with certain configurations of interacting genes that are fundamental to modelling (synthetic) genetic clocks. A largely unknown question is how different molecular details incorporated via this more realistic modelling approach lead to different macroscopic regulatory genetic models, whose dynamical behaviour might, in general, differ between model choices. The theory will be applied to a real synthetic clock in a second accompanying article (Kirkilionis et al., Theory Biosci, 2011).
Multi-time-scale heat transfer modeling of turbid tissues exposed to short-pulsed irradiations.
Kim, Kyunghan; Guo, Zhixiong
2007-05-01
A combined hyperbolic radiation and conduction heat transfer model is developed to simulate multi-time-scale heat transfer in turbid tissues exposed to short-pulsed irradiations. The initial temperature response of a tissue to an ultrashort pulse irradiation is analyzed by the volume-average method in combination with the transient discrete ordinates method for modeling the ultrafast radiation heat transfer. This response is found to reach pseudo-steady state within 1 ns for the considered tissues. The single-pulse result is then utilized to obtain the temperature response to pulse-train irradiation at the microsecond/millisecond time scales. After that, the temperature field is predicted by the hyperbolic heat conduction model, which is solved by MacCormack's scheme with error-term correction. Finally, the hyperbolic conduction is compared with the traditional parabolic heat diffusion model. It is found that the maximum local temperatures are larger in the hyperbolic prediction than in the parabolic prediction. In the modeled dermis tissue, a 7% non-dimensional temperature increase is found. After about 10 thermal relaxation times, thermal waves fade away and the predictions of the hyperbolic and parabolic models are consistent.
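The hyperbolic conduction model referred to here is the standard Cattaneo-Vernotte form (written without the laser source term for brevity):

```latex
\tau_q \frac{\partial^{2} T}{\partial t^{2}}
+ \frac{\partial T}{\partial t}
= \alpha \nabla^{2} T,
```

where tau_q is the thermal relaxation time and alpha the thermal diffusivity; setting tau_q = 0 recovers the parabolic (Fourier) diffusion model used for comparison, consistent with the two predictions converging once the thermal waves have decayed after several relaxation times.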
NASA Astrophysics Data System (ADS)
Lange, Benjamin A.; Katlein, Christian; Nicolaus, Marcel; Peeken, Ilka; Flores, Hauke
2016-12-01
Multiscale sea ice algae observations are fundamentally important for projecting changes to sea ice ecosystems as the physical environment continues to change. In this study, we built upon previously established methodologies for deriving sea ice-algal chlorophyll a concentrations (chl a) from spectral radiation measurements, and applied these to larger-scale spectral surveys. We conducted four different under-ice spectral measurements: irradiance, radiance, transmittance, and transflectance, and applied three statistical approaches: Empirical Orthogonal Functions (EOF), Normalized Difference Indices (NDI), and multi-NDI. We developed models based on ice core chl a and coincident spectral irradiance/transmittance (N = 49) and radiance/transflectance (N = 50) measurements conducted during two cruises to the central Arctic Ocean in 2011 and 2012. These reference models were ranked based on two criteria: mean robustness R2 and true prediction error estimates. For estimating the biomass of a large-scale data set, the EOF approach performed better than the NDI, due to its ability to account for the high variability of environmental properties experienced over large areas. Based on robustness and true prediction error, the three most reliable models, EOF-transmittance, EOF-transflectance, and NDI-transmittance, were applied to two remotely operated vehicle (ROV) and two Surface and Under-Ice Trawl (SUIT) spectral radiation surveys. In these larger-scale chl a estimates, EOF-transmittance showed the best fit to ice core chl a. Application of our most reliable model, EOF-transmittance, to an 85 m horizontal ROV transect revealed large differences compared to published biomass estimates from the same site, with important implications for projections of Arctic-wide ice-algal biomass and primary production.
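The NDI approach referred to above pairs two wavelengths of the under-ice spectrum in the usual normalized-difference form (the generic definition; the specific wavelength pairs are selected empirically against the ice-core chl a):

```latex
\mathrm{NDI}(\lambda_1,\lambda_2) \;=\;
\frac{T(\lambda_1) - T(\lambda_2)}{T(\lambda_1) + T(\lambda_2)},
```

with T(lambda) the spectral transmittance (or transflectance/irradiance, depending on the measurement mode), and chl a then regressed against the resulting index.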
Creating, documenting and sharing network models.
Crook, Sharon M; Bednar, James A; Berger, Sandra; Cannon, Robert; Davison, Andrew P; Djurfeldt, Mikael; Eppler, Jochen; Kriener, Birgit; Furber, Steve; Graham, Bruce; Plesser, Hans E; Schwabe, Lars; Smith, Leslie; Steuber, Volker; van Albada, Sacha
2012-01-01
As computational neuroscience matures, many simulation environments are available that are useful for neuronal network modeling. However, methods for successfully documenting models for publication and for exchanging models and model components among these projects are still under development. Here we briefly review existing software and applications for network model creation, documentation and exchange. Then we discuss a few of the larger issues facing the field of computational neuroscience regarding network modeling and suggest solutions to some of these problems, concentrating in particular on standardized network model terminology, notation, and descriptions and explicit documentation of model scaling. We hope this will enable and encourage computational neuroscientists to share their models more systematically in the future.
Methods of testing parameterizations: Vertical ocean mixing
NASA Technical Reports Server (NTRS)
Tziperman, Eli
1992-01-01
The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for the vertical mixing in the ocean is of scales a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We try to examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of the vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.
A phase code for memory could arise from circuit mechanisms in entorhinal cortex
Hasselmo, Michael E.; Brandon, Mark P.; Yoshida, Motoharu; Giocomo, Lisa M.; Heys, James G.; Fransen, Erik; Newman, Ehren L.; Zilli, Eric A.
2009-01-01
Neurophysiological data reveals intrinsic cellular properties that suggest how entorhinal cortical neurons could code memory by the phase of their firing. Potential cellular mechanisms for this phase coding in models of entorhinal function are reviewed. This mechanism for phase coding provides a substrate for modeling the responses of entorhinal grid cells, as well as the replay of neural spiking activity during waking and sleep. Efforts to implement these abstract models in more detailed biophysical compartmental simulations raise specific issues that could be addressed in larger scale population models incorporating mechanisms of inhibition. PMID:19656654
Radiation breakage of DNA: a model based on random-walk chromatin structure
NASA Technical Reports Server (NTRS)
Ponomarev, A. L.; Sachs, R. K.
2001-01-01
Monte Carlo computer software, called DNAbreak, has recently been developed to analyze observed non-random clustering of DNA double strand breaks in chromatin after exposure to densely ionizing radiation. The software models coarse-grained configurations of chromatin and radiation tracks, small-scale details being suppressed in order to obtain statistical results for larger scales, up to the size of a whole chromosome. We here give an analytic counterpart of the numerical model, useful for benchmarks, for elucidating the numerical results, for analyzing the assumptions of a more general but less mechanistic "randomly-located-clusters" formalism, and, potentially, for speeding up the calculations. The equations characterize multi-track DNA fragment-size distributions in terms of one-track action; an important step in extrapolating high-dose laboratory results to the much lower doses of main interest in environmental or occupational risk estimation. The approach can utilize the experimental information on DNA fragment-size distributions to draw inferences about large-scale chromatin geometry during cell-cycle interphase.
NASA Astrophysics Data System (ADS)
Liu, W.; Atherton, J.; Mõttus, M.; MacArthur, A.; Teemu, H.; Maseyk, K.; Robinson, I.; Honkavaara, E.; Porcar-Castell, A.
2017-10-01
Solar-induced chlorophyll a fluorescence (SIF) has been shown to be an excellent proxy of photosynthesis at multiple scales. However, the mechanistic linkages between fluorescence and photosynthesis at the leaf level cannot be directly applied at canopy or field scales, as the larger-scale SIF emission depends on canopy structure. This is especially true for forest canopies, which are characterized by high horizontal and vertical heterogeneity. While most current studies on SIF radiative transfer in plant canopies are based on the assumption of a homogeneous canopy, codes have recently been developed that are capable of simulating the fluorescence signal in explicit 3-D forest canopies. Here we present a canopy SIF upscaling method consisting of the integration of the 3-D radiative transfer model DART and the 3-D object modelling tool BLENDER. Our aim was to better understand the effect of boreal forest canopy structure on SIF for a spatially explicit forest canopy.
Critical constraint on inflationary magnetogenesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujita, Tomohiro; Yokoyama, Shuichiro, E-mail: tomohiro.fujita@ipmu.jp, E-mail: shu@icrr.u-tokyo.ac.jp
2014-03-01
Recently, there have been several reports that the cosmic magnetic fields on Mpc scales in void regions are larger than ∼10^-15 G, with an uncertainty of a few orders of magnitude, from current blazar observations. On the other hand, in inflationary magnetogenesis models, additional primordial curvature perturbations are inevitably produced from iso-curvature perturbations due to the generated electromagnetic fields. We explore such induced curvature perturbations in a model-independent way and obtain a severe upper bound for the energy scale of inflation, ρ_inf^(1/4), from the observed cosmic magnetic fields and the observed amplitude of the curvature perturbation.
Make dark matter charged again
NASA Astrophysics Data System (ADS)
Agrawal, Prateek; Cyr-Racine, Francis-Yan; Randall, Lisa; Scholtz, Jakub
2017-05-01
We revisit constraints on dark matter that is charged under a U(1) gauge group in the dark sector, decoupled from Standard Model forces. We find that the strongest constraints in the literature are subject to a number of mitigating factors. For instance, the naive dark matter thermalization timescale in halos is corrected by saturation effects that slow down isotropization for modest ellipticities. The weakened bounds uncover interesting parameter space, making models with weak-scale charged dark matter viable, even with electromagnetic strength interaction. This also leads to the intriguing possibility that dark matter self-interactions within small dwarf galaxies are extremely large, a relatively unexplored regime in current simulations. Such strong interactions suppress heat transfer over scales larger than the dark matter mean free path, inducing a dynamical cutoff length scale above which the system appears to have only feeble interactions. These effects must be taken into account to assess the viability of darkly-charged dark matter. Future analyses and measurements should probe a promising region of parameter space for this model.
Forced synchronization of large-scale circulation to increase predictability of surface states
NASA Astrophysics Data System (ADS)
Shen, Mao-Lin; Keenlyside, Noel; Selten, Frank; Wiegerinck, Wim; Duane, Gregory
2016-04-01
Numerical models are key tools in the projection of future climate change. The lack of perfect initial conditions and perfect knowledge of the laws of physics, as well as inherent chaotic behavior, limit predictions. Conceptually, the atmospheric variables can be decomposed into a predictable component (signal) and an unpredictable component (noise). In ensemble prediction the anomaly of the ensemble mean is regarded as the signal and the ensemble spread as the noise. Naturally, the prediction skill will be higher if the signal-to-noise ratio (SNR) is larger in the initial conditions. We run two ensemble experiments in order to explore a way to increase the SNR of surface winds and temperature. One ensemble experiment is an AGCM with prescribed sea surface temperature (SST); the other is an AGCM with both prescribed SST and nudging of the upper-level temperature and winds to ERA-Interim. Each ensemble has 30 members. A larger SNR is expected and found over the tropical ocean in the first experiment, because the tropical circulation is associated with convection and the associated surface wind convergence, and these are to a large extent driven by the SST. However, a small SNR is found over the high-latitude ocean and land surface due to the chaotic and non-synchronized atmospheric states. In the second experiment the upper-level temperature and winds are forced to be synchronized (nudged to reanalysis), and hence a larger SNR of surface winds and temperature is expected. Furthermore, different nudging coefficients are also tested in order to understand the limitations of synchronizing both the large-scale circulation and the surface states. These experiments will be useful for developing strategies to synchronize the 3-D states of atmospheric models that can later be used to build a super model.
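Nudging of the upper-level fields is usually implemented as a Newtonian relaxation term added to each tendency equation (the generic form; the coefficient values tested in these experiments are not specified here):

```latex
\frac{\partial X}{\partial t}
= F(X) \;-\; \frac{X - X_{\mathrm{ERA}}}{\tau},
```

where X is the nudged variable (temperature or a wind component), F(X) the model physics and dynamics, X_ERA the reanalysis value, and tau the relaxation time scale controlled by the nudging coefficient.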
The use of imprecise processing to improve accuracy in weather & climate prediction
NASA Astrophysics Data System (ADS)
Düben, Peter D.; McNamara, Hugh; Palmer, T. N.
2014-08-01
The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations. This would allow higher resolution models to be run at the same computational cost.
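The Lorenz '96 system used as the first testbed is simple enough to state fully; a minimal sketch of the single-level equations and a reduced-precision emulation step might look like the following (the paper exploits the coupled two-level version's scale separation, and the rounding function below is only an illustrative stand-in for a low-precision or fault-prone processor):

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def reduce_precision(x, bits=10):
    """Crude stand-in for low-precision arithmetic: keep roughly `bits` mantissa bits."""
    scale = 2.0 ** (bits - np.floor(np.log2(np.maximum(np.abs(x), 1e-12))))
    return np.round(x * scale) / scale

def step_rk4(x, dt=0.05, inexact=False):
    """One fourth-order Runge-Kutta step; optionally degrade the increment."""
    f = lorenz96_tendency
    k1 = f(x); k2 = f(x + 0.5 * dt * k1); k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    incr = (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x + (reduce_precision(incr) if inexact else incr)

x = 8.0 + 0.01 * np.random.default_rng(1).standard_normal(40)   # 40 variables, perturbed rest state
for _ in range(2000):
    x = step_rk4(x, inexact=True)
```

In the study itself only the small, fast scales would be subjected to such degraded arithmetic while the large scales remain in full precision, mirroring the scale separation argument in the text.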
DOE Office of Scientific and Technical Information (OSTI.GOV)
Micah Johnson, Andrew Slaughter
PIKA is a MOOSE-based application for modeling micro-structure evolution of seasonal snow. The model will be useful for environmental, atmospheric, and climate scientists. Possible applications include energy balance models, ice sheet modeling, and avalanche forecasting. The model implements physics from published, peer-reviewed articles. The main purpose is to foster university and laboratory collaboration to build a larger multi-scale snow model using MOOSE. The main feature of the code is that it is implemented using the MOOSE framework, thus making features such as multiphysics coupling, adaptive mesh refinement, and parallel scalability native to the application. PIKA implements three equations: the phase-field equation for tracking the evolution of the ice-air interface within seasonal snow at the grain scale; the heat equation for computing the temperature of both the ice and air within the snow; and the mass transport equation for monitoring the diffusion of water vapor in the pore space of the snow.
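Of the three coupled equations listed, the heat and vapor-transport parts have familiar generic forms (shown only for orientation; the exact source terms and the phase-field equation in PIKA involve additional coupling coefficients not reproduced here):

```latex
\rho\, c_p \frac{\partial T}{\partial t} = \nabla\!\cdot\!\left(k\,\nabla T\right) + q,
\qquad
\frac{\partial \rho_v}{\partial t} = \nabla\!\cdot\!\left(D\,\nabla \rho_v\right) + s,
```

with T the ice/air temperature, rho_v the water-vapor density in the pore space, and q, s source terms tied to sublimation and deposition at the evolving ice-air interface tracked by the phase-field variable.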
Reilly, T.E.; Buxton, H.T.
1985-01-01
By 1990, sanitary sewers in Nassau County Sewage Disposal Districts 2 and 3 and Suffolk County Southwest Sewer District will discharge to the ocean 140 cu ft of water per second that would otherwise be returned to the groundwater system through septic tanks and similar systems. To evaluate the effects of this loss on groundwater levels and streamflow, the U.S. Geological Survey developed a groundwater flow model that couples a fine-scale subregional model to a regional model of a larger scale. The regional model generates flux boundary conditions for the subregional model, and the subregional model provides detail in the area of concern. Results indicate that the water table will decline by as much as 90% from conditions in the early 1970's. This report is one of a three-part series describing the predicted hydrologic effects of sewers in southern Nassau and southwestern Suffolk Counties. (USGS)
Parallel Computation of the Regional Ocean Modeling System (ROMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, P; Song, Y T; Chao, Y
2005-04-05
The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
NASA Astrophysics Data System (ADS)
Kim, E.; Tedesco, M.; de Roo, R.; England, A. W.; Gu, H.; Pham, H.; Boprie, D.; Graf, T.; Koike, T.; Armstrong, R.; Brodzik, M.; Hardy, J.; Cline, D.
2004-12-01
The NASA Cold Land Processes Field Experiment (CLPX-1) was designed to provide microwave remote sensing observations and ground truth for studies of snow and frozen ground remote sensing, particularly issues related to scaling. CLPX-1 was conducted in 2002 and 2003 in Colorado, USA. One of the goals of the experiment was to test the capabilities of microwave emission models at different scales. Initial forward model validation work has concentrated on the Local-Scale Observation Site (LSOS), a 0.8 ha study site consisting of open meadows separated by trees, where the most detailed measurements were made of snow depth and temperature, density, and grain size profiles. Results obtained for the 3rd Intensive Observing Period (IOP3; February 2003, dry snow) suggest that a model based on Dense Medium Radiative Transfer (DMRT) theory is able to model the recorded brightness temperatures using snow parameters derived from field measurements. This paper focuses on the ability of forward DMRT modelling, combined with snowpack measurements, to reproduce the radiobrightness signatures observed by the University of Michigan's Truck-Mounted Radiometer System (TMRS) at 19 and 37 GHz during the 4th IOP (IOP4) in March 2003. Unlike IOP3, conditions during IOP4 include both wet and dry periods, providing a valuable test of DMRT model performance. In addition, a comparison will be made for the one day of coincident observations by the University of Tokyo's Ground-Based Microwave Radiometer-7 (GBMR-7) and the TMRS. The plot-scale study in this paper establishes a baseline of DMRT performance for later studies at successively larger scales. These scaling studies will help guide the choice of future snow retrieval algorithms and the design of future Cold Lands observing systems.
The social brain: scale-invariant layering of Erdős-Rényi networks in small-scale human societies.
Harré, Michael S; Prokopenko, Mikhail
2016-05-01
The cognitive ability to form social links that can bind individuals together into large cooperative groups for safety and resource sharing was a key development in human evolutionary and social history. The 'social brain hypothesis' argues that the size of these social groups is based on a neurologically constrained capacity for maintaining long-term stable relationships. No model to date has been able to combine a specific socio-cognitive mechanism with the discrete scale invariance observed in ethnographic studies. We show that these properties result in nested layers of self-organizing Erdős-Rényi networks formed by each individual's ability to maintain only a small number of social links. Each set of links plays a specific role in the formation of different social groups. The scale invariance in our model is distinct from previous 'scale-free networks' studied using much larger social groups; here, the scale invariance is in the relationship between group sizes, rather than in the link degree distribution. We also compare our model with a dominance-based hierarchy and conclude that humans were probably egalitarian in hunter-gatherer-like societies, maintaining an average maximum of four or five social links connecting all members in a largest social network of around 132 people. © 2016 The Author(s).
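As a toy illustration of the layered construction (the group sizes, per-layer link budget, and wiring rule below are assumptions chosen only to mimic the qualitative picture, not the published model), nested random layers can be generated by giving each individual a fixed small number of links inside groups of increasing size:

```python
import random
import networkx as nx

random.seed(0)

def nested_random_layers(n=132, layer_sizes=(4, 12, 40, 132), links_per_layer=1):
    """Each layer wires individuals randomly within groups of a given size
    (an Erdos-Renyi-like rule), so every person spends only a small link
    budget per layer while the layers nest from small to large groups."""
    layers = []
    for size in layer_sizes:
        g = nx.Graph()
        g.add_nodes_from(range(n))
        for group_start in range(0, n, size):
            members = list(range(group_start, min(group_start + size, n)))
            for i in members:
                for _ in range(links_per_layer):
                    j = random.choice(members)
                    if j != i:
                        g.add_edge(i, j)
        layers.append(g)
    return layers

layers = nested_random_layers()
print([g.number_of_edges() for g in layers])
```

The scale invariance discussed in the abstract concerns the ratios between successive group sizes, not the degree distribution of any single layer, which the fixed link budget keeps narrow.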
John R. Butnor; Kurt H. Johnsen; Chris A. Maier
2005-01-01
Soil CO2 efflux is a major component of net ecosystem productivity (NEP) of forest systems. Combining data from multiple researchers for larger-scale modeling and assessment will only be valid if their methodologies provide directly comparable results. We conducted a series of laboratory and field tests to assess the presence and magnitude of...
2008-03-26
only after shifting the GDEM 2 climatology (Davis et al., 1986) to the observed AOSN-II mean that the larger-scale Off Shore Domain was of some use...T.M., K.A. Countryman and M.J. Carron (1986) Tailored acoustic products utilizing the NAVOCEANO GDEM (a generalized digital environmental model). in
NASA Astrophysics Data System (ADS)
Wayman, C. R.; Russo, T. A.; Li, L.; Forsythe, B.; Hoagland, B.
2017-12-01
As part of the Susquehanna Shale Hills Critical Zone Observatory (SSHCZO) project, we have collected geochemical and hydrological data from several subcatchments and four monitoring sites on the main stem of Shaver's Creek, in Huntingdon County, Pennsylvania. One subcatchment (0.43 km2) is under agricultural land use, and the monitoring locations on the larger Shaver's Creek (up to 163 km2) drain watersheds with 0 to 25% agricultural area. These two scales of investigation, coupled with advances made across the SSHCZO on multiple lithologies, allow us to extrapolate from the subcatchment to the larger watershed. We use geochemical surface and groundwater data to estimate the solute and water transport regimes within the catchment, and to show how lithology and land use are major controls on ground and surface water quality. One area of investigation includes the transport of nutrients between interflow and regional groundwater, and how that connectivity may be reflected in local surface waters. Water and nutrient (nitrogen) isotopes will be used to better understand the relative contributions of local and regional groundwater and interflow fluxes into nearby streams. Following initial qualitative modeling, multiple hydrologic and nutrient transport models (e.g. SWAT and CYCLES/PIHM) will be evaluated from the subcatchment to large watershed scales. We will evaluate the ability to simulate the contributions of regional groundwater versus local groundwater, and also the impacts of agricultural land management on surface water quality. Improving estimations of groundwater contributions to stream discharge will provide insight into how much agricultural development can impact stream quality and nutrient loading.
Ensemble Solute Transport in 2-D Operator-Stable Random Fields
NASA Astrophysics Data System (ADS)
Monnig, N. D.; Benson, D. A.
2006-12-01
The heterogeneous velocity field that exists at many scales in an aquifer will typically cause a dissolved solute plume to grow at a rate faster than Fick's Law predicts. Some statistical model must be adopted to account for the aquifer structure that engenders the velocity heterogeneity. A fractional Brownian motion (fBm) model has been shown to create the long-range correlation that can produce continually faster-than-Fickian plume growth. Previous fBm models have assumed isotropic scaling (defined here by a scalar Hurst coefficient). Motivated by field measurements of aquifer hydraulic conductivity, recent techniques were developed to construct random fields with anisotropic scaling with a self-similarity parameter that is defined by a matrix. The growth of ensemble plumes is analyzed for transport through 2-D "operator-stable" fBm hydraulic conductivity (K) fields. Both the longitudinal and transverse Hurst coefficients are important to both plume growth rates and the timing and duration of breakthrough. Smaller Hurst coefficients in the transverse direction lead to more "continuity" or stratification in the direction of transport. The result is continually faster-than-Fickian growth rates, highly non-Gaussian ensemble plumes, and a longer tail early in the breakthrough curve. Contrary to some analytic stochastic theories for monofractal K fields, the plume growth rate never exceeds Mercado's [1967] purely stratified aquifer growth rate of plume apparent dispersivity proportional to mean distance. Apparent super-Mercado growth must be the result of other factors, such as larger plumes corresponding to either a larger initial plume size or greater variance of the ln(K) field.
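As a rough illustration of the anisotropic-scaling idea described above (not the authors' operator-stable construction), the sketch below synthesizes a 2-D Gaussian log-conductivity field whose spectral decay differs between the two coordinate directions; the exponents hx and hy and the spectral envelope are illustrative choices, not values from the study.

```python
import numpy as np

def anisotropic_scaling_field(nx=256, ny=256, hx=0.3, hy=0.7, seed=0):
    """Spectral synthesis of a 2-D Gaussian random field with direction-
    dependent power-law spectral decay (a simplified stand-in for an
    operator-stable fBm ln(K) field; hx and hy are illustrative)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx)[:, None]
    ky = np.fft.fftfreq(ny)[None, :]
    # Direction-dependent pseudo-radius in wavenumber space (operator-like scaling).
    k_eff = np.sqrt(np.abs(kx) ** (2.0 / hx) + np.abs(ky) ** (2.0 / hy)) + 1e-6
    amplitude = k_eff ** (-1.0)                  # illustrative spectral envelope
    amplitude[0, 0] = 0.0                        # drop the arbitrary mean component
    noise = rng.normal(size=(nx, ny)) + 1j * rng.normal(size=(nx, ny))
    field = np.real(np.fft.ifft2(amplitude * noise))
    return (field - field.mean()) / field.std()  # unit-variance ln(K) surrogate

lnK = anisotropic_scaling_field()
K = np.exp(lnK)                                  # hydraulic conductivity surrogate
```

Varying hx and hy independently changes how quickly correlation decays along each axis, mimicking the direction-dependent scaling that the abstract links to stratification and enhanced plume stretching.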
Experimental investigation of drifting snow in a wind tunnel
NASA Astrophysics Data System (ADS)
Crivelli, Philip; Paterna, Enrico; Horender, Stefan; Lehning, Michael
2015-11-01
Drifting snow has a significant impact on snow distribution in mountains and prairies as well as on glaciers and in polar regions. In all these environments, the local mass balance is highly influenced by drifting snow. Although most model approaches still rely on the assumption of steady-state, equilibrium saltation, recent advances have shown the mass transport of drifting snow events to be highly intermittent. A clear understanding of this high intermittency has not yet been achieved. Therefore, in our contribution we investigate mass and momentum fluxes during drifting snow events in order to better understand the link between snow cover erosion and deposition. Experiments were conducted in a cold wind tunnel, employing sensors for momentum flux measurement, mass flux measurement, and snow depth estimation over an area upstream of the other devices. Preliminary results show that the mass flux is highly intermittent at scales ranging from the eddy turnover time to much larger scales. The former scales contribute the most to the overall intermittency, and we observe a link between the turbulent flow structures and the mass flux of drifting snow at those scales. The role of varying snow properties in inducing drifting snow intermittency goes beyond this link and is expected to act at much larger scales, driven by physical snow properties such as density and cohesiveness.
Edge fires drive the shape and stability of tropical forests.
Hébert-Dufresne, Laurent; Pellegrini, Adam F A; Bhat, Uttam; Redner, Sidney; Pacala, Stephen W; Berdahl, Andrew M
2018-06-01
In tropical regions, fires propagate readily in grasslands but typically consume only the edges of forest patches. Thus, forest patches grow due to tree propagation and shrink by fires in surrounding grasslands. The interplay between these competing edge effects is unknown, but critical in determining the shape and stability of individual forest patches, as well as the landscape-level spatial distribution and stability of forests. We analyze high-resolution remote-sensing data from protected Brazilian Cerrado areas and find that forest shapes obey a robust perimeter-area scaling relation across climatic zones. We explain this scaling by introducing a heterogeneous fire propagation model of tropical forest-grassland ecotones. Deviations from this perimeter-area relation determine the stability of individual forest patches. At a larger scale, our model predicts that the relative rates of tree growth due to propagative expansion and long-distance seed dispersal determine whether collapse of regional-scale tree cover is continuous or discontinuous as fire frequency changes. © 2018 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
When will trends in European mean and heavy daily precipitation emerge?
NASA Astrophysics Data System (ADS)
Maraun, Douglas
2013-03-01
A multi-model ensemble of regional climate projections for Europe is employed to investigate how the time of emergence (TOE) for seasonal sums and maxima of daily precipitation depends on spatial scale. The TOE is redefined for emergence from internal variability only; the spread of the TOE due to imperfect climate model formulation is used as a measure of uncertainty in the TOE itself. Thereby, the TOE becomes a fundamentally limiting timescale and translates into a minimum spatial scale on which robust conclusions can be drawn about precipitation trends. Thus, minimum temporal and spatial scales for adaptation planning are also given. In northern Europe, positive winter trends in mean and heavy precipitation, and in southwestern and southeastern Europe, summer trends in mean precipitation already emerge within the next few decades. However, across wide areas, especially for heavy summer precipitation, the local trend emerges only late in the 21st century or later. For precipitation averaged to larger scales, the trend, in general, emerges earlier.
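As a minimal sketch of the time-of-emergence idea (not the paper's multi-model procedure), the TOE can be taken as the first year at which the accumulated forced signal exceeds an estimate of internal variability; the trend, noise level, and threshold below are illustrative assumptions.

```python
import numpy as np

def time_of_emergence(trend_per_year, noise_std, years, threshold=1.0):
    """First year at which signal-to-noise exceeds `threshold` (illustrative)."""
    signal = trend_per_year * np.arange(len(years))
    emerged = np.nonzero(signal / noise_std > threshold)[0]
    return years[emerged[0]] if emerged.size else None

years = np.arange(2006, 2101)
# Hypothetical numbers: +0.5% per year trend in seasonal precipitation sums and
# internal variability of 15%, both relative to a reference climatology.
print(time_of_emergence(0.5, 15.0, years))   # -> 2037 in this toy setting
```

Averaging precipitation over larger regions reduces the effective noise_std, which is why the trend emerges earlier at larger spatial scales, as described above.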
Helical bottleneck effect in 3D homogeneous isotropic turbulence
NASA Astrophysics Data System (ADS)
Stepanov, Rodion; Golbraikh, Ephim; Frick, Peter; Shestakov, Alexander
2018-02-01
We present the results of modelling the development of homogeneous and isotropic turbulence with a large-scale source of energy and a source of helicity distributed over scales. We use the shell model for numerical simulation of the turbulence at high Reynolds number. The results show that the helicity injection leads to a significant change in the behavior of the energy and helicity spectra at scales larger and smaller than the energy injection scale. We suggest a phenomenology for direct turbulent cascades with the helicity effect, which reduces the efficiency of the spectral energy transfer. Therefore, energy is accumulated and redistributed so that non-linear interactions remain sufficient to provide a constant energy flux. This can be interpreted as a ‘helical bottleneck effect’ which, depending on the parameters of the injected helicity, is reminiscent of the well-known bottleneck effect at the end of the inertial range. Simulations which included the infrared part of the spectrum show that the inverse cascade hardly develops under distributed helicity forcing.
Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide
Tang, William; Wang, Bei; Ethier, Stephane; ...
2016-11-01
The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code including (i) implementation of one-sided communication from the MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.
NASA Technical Reports Server (NTRS)
Kniffen, D. A.; Fichtel, C. E.; Thompson, D. J.
1976-01-01
Theoretical considerations and analysis of the results of gamma ray astronomy suggest that the galactic cosmic rays are dynamically coupled to the interstellar matter through the magnetic fields, and hence the cosmic ray density should be enhanced where the matter density is greatest on the scale of galactic arms. This concept has been explored in a galactic model using recent 21 cm radio observations of the neutral hydrogen and 2.6 mm observations of carbon monoxide, which is considered to be a tracer of molecular hydrogen. The model assumes: (1) cosmic rays are galactic and not universal; (2) on the scale of galactic arms, the cosmic ray column (surface) density is proportional to the total interstellar gas column density; (3) the cosmic ray scale height is significantly larger than the scale height of the matter; and (4) ours is a spiral galaxy characterized by an arm to interarm density ratio of about 3:1.
Tracing Multi-Scale Climate Change at Low Latitude from Glacier Shrinkage
NASA Astrophysics Data System (ADS)
Moelg, T.; Cullen, N. J.; Hardy, D. R.; Kaser, G.
2009-12-01
Significant shrinkage of glaciers on top of Africa's highest mountain (Kilimanjaro, 5895 m a.s.l.) has been observed between the late 19th century and the present. Multi-year data from our automatic weather station on the largest remaining slope glacier at 5873 m allow us to force and verify a process-based distributed glacier mass balance model. This generates insights into energy and mass fluxes at the glacier-atmosphere interface, their feedbacks, and how they are linked to atmospheric conditions. By means of numerical atmospheric modeling and global climate model simulations, we explore the linkages of the local climate in Kilimanjaro's summit zone to larger-scale climate dynamics - which suggests a causal connection between Indian Ocean dynamics, mesoscale mountain circulation, and glacier mass balance. Based on this knowledge, the verified mass balance model is used for backward modeling of the steady-state glacier extent observed in the 19th century, which yields the characteristics of local climate change between that time and the present (30-45% less precipitation, 0.1-0.3 hPa less water vapor pressure, 2-4 percentage units less cloud cover at present). Our multi-scale approach provides an important contribution, from a cryospheric viewpoint, to the understanding of how large-scale climate change propagates to the tropical free troposphere. Ongoing work in this context targets the millennium-scale relation between large-scale climate and glacier behavior (by downscaling precipitation), and the possible effects of regional anthropogenic activities (land use change) on glacier mass balance.
Open source Modeling and optimization tools for Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peles, S.
Open source modeling and optimization tools for planning: the existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in the state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.
2017-03-20
comparison with the more intensive demographic study. We found support for spatial variation in productivity at both location and station scales. At location...the larger intensive demographic monitoring study, we also fit a productivity model that included a covariate calculated for the 12 stations included...
Effects of heat conduction on artificial viscosity methods for shock capturing
Cook, Andrew W.
2013-12-01
Here we investigate the efficacy of artificial thermal conductivity for shock capturing. The conductivity model is derived from artificial bulk and shear viscosities, such that stagnation enthalpy remains constant across shocks. By thus fixing the Prandtl number, more physical shock profiles are obtained, albeit on a larger scale. The conductivity model does not contain any empirical constants. It increases the net dissipation of a computational algorithm but is found to better preserve symmetry and produce more robust solutions for strong-shock problems.
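As a hedged illustration of the idea (the exact expressions in the paper may differ), an artificial conductivity can be tied to the artificial viscosity through a fixed effective Prandtl number,

\[
\kappa^{*} = \frac{c_p\,\mu^{*}}{\mathrm{Pr}^{*}}, \qquad \mathrm{Pr}^{*} = \text{const.},
\]

so that artificial thermal diffusion scales with artificial momentum diffusion; choosing the constant appropriately keeps the stagnation enthalpy \(h + \tfrac{1}{2}u^{2}\) approximately uniform across the captured shock. Here \(\mu^{*}\) denotes the artificial (shear/bulk) viscosity and \(c_p\) the specific heat at constant pressure; the symbols are generic, not necessarily the paper's notation.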
NASA Astrophysics Data System (ADS)
Appels, Willemijn M.; Bogaart, Patrick W.; van der Zee, Sjoerd E. A. T. M.
2017-12-01
In winter, saturation excess (SE) ponding is observed regularly in temperate lowland regions. Surface runoff dynamics are controlled by small topographical features that are unaccounted for in hydrological models. To better understand storage and routing effects of small-scale topography and their interaction with shallow groundwater under SE conditions, we developed a model of reduced complexity to investigate SE runoff generation, emphasizing feedbacks between shallow groundwater dynamics and mesotopography. The dynamic specific yield affected unsaturated zone water storage, causing rapid switches between negative and positive head and a flatter groundwater mound than predicted by analytical agrohydrological models. Accordingly, saturated areas were larger and local groundwater fluxes smaller than predicted, leading to surface runoff generation. Mesotopographic features routed water over larger distances, providing a feedback mechanism that amplified changes to the shape of the groundwater mound. This in turn enhanced runoff generation, but whether it also resulted in runoff events depended on the geometry and location of the depressions. Whereas conditions favorable to runoff generation may abound during winter, these feedbacks profoundly reduce the predictability of SE runoff: statistically identical rainfall series may result in completely different runoff generation. The model results indicate that waterlogged areas in any given rainfall event are larger than those predicted by current analytical groundwater models used for drainage design. This change in the groundwater mound extent has implications for crop growth and damage assessments.
The rich get richer: Patterns of plant invasions in the United States
Stohlgren, T.J.; Barnett, D.T.; Kartesz, J.T.
2003-01-01
Observations from islands, small-scale experiments, and mathematical models have generally supported the paradigm that habitats of low plant diversity are more vulnerable to plant invasions than areas of high plant diversity. We summarize two independent data sets to show exactly the opposite pattern at multiple spatial scales. More significant, and alarming, is that hotspots of native plant diversity have been far more heavily invaded than areas of low plant diversity in most parts of the United States when considered at larger spatial scales. Our findings suggest that we cannot expect such hotspots to repel invasions, and that the threat of invasion is significant and predictably greatest in these areas.
Scale dependent behavioral responses to human development by a large predator, the puma.
Wilmers, Christopher C; Wang, Yiwei; Nickel, Barry; Houghtaling, Paul; Shakeri, Yasaman; Allen, Maximilian L; Kermish-Wells, Joe; Yovovich, Veronica; Williams, Terrie
2013-01-01
The spatial scale at which organisms respond to human activity can affect both ecological function and conservation planning. Yet little is known regarding the spatial scale at which distinct behaviors related to reproduction and survival are impacted by human interference. Here we provide a novel approach to estimating the spatial scale at which a top predator, the puma (Puma concolor), responds to human development when it is moving, feeding, communicating, and denning. We find that reproductive behaviors (communication and denning) require at least a 4× larger buffer from human development than non-reproductive behaviors (movement and feeding). In addition, pumas give a wider berth to types of human development that provide a more consistent source of human interference (neighborhoods) than they do to those in which human presence is more intermittent (arterial roads with speeds >35 mph). Neighborhoods were a deterrent to pumas regardless of behavior, while arterial roads only deterred pumas when they were communicating and denning. Female pumas were less deterred by human development than males, but they showed larger variation in their responses overall. Our behaviorally explicit approach to modeling animal response to human activity can be used as a novel tool to assess habitat quality, identify wildlife corridors, and mitigate human-wildlife conflict.
Sibole, Scott C.; Erdemir, Ahmet
2012-01-01
Cells of the musculoskeletal system are known to respond to mechanical loading and chondrocytes within the cartilage are not an exception. However, understanding how joint level loads relate to cell level deformations, e.g. in the cartilage, is not a straightforward task. In this study, a multi-scale analysis pipeline was implemented to post-process the results of a macro-scale finite element (FE) tibiofemoral joint model to provide joint mechanics based displacement boundary conditions to micro-scale cellular FE models of the cartilage, for the purpose of characterizing chondrocyte deformations in relation to tibiofemoral joint loading. It was possible to identify the load distribution within the knee among its tissue structures and ultimately within the cartilage among its extracellular matrix, pericellular environment and resident chondrocytes. Various cellular deformation metrics (aspect ratio change, volumetric strain, cellular effective strain and maximum shear strain) were calculated. To illustrate further utility of this multi-scale modeling pipeline, two micro-scale cartilage constructs were considered: an idealized single cell at the centroid of a 100×100×100 μm block commonly used in past research studies, and an anatomically based representation of the middle zone of tibiofemoral cartilage (an 11-cell model of the same volume). In both cases, chondrocytes experienced amplified deformations compared to those at the macro-scale, predicted by simulating one body weight compressive loading on the tibiofemoral joint. In the 11-cell case, all cells experienced less deformation than in the single-cell case, and also exhibited a larger variance in deformation compared to other cells residing in the same block. The coupling method proved to be highly scalable due to micro-scale model independence that allowed for exploitation of distributed memory computing architecture. The method’s generalized nature also allows for substitution of any macro-scale and/or micro-scale model providing application for other multi-scale continuum mechanics problems. PMID:22649535
Simple and Multiple Endmember Mixture Analysis in the Boreal Forest
NASA Technical Reports Server (NTRS)
Roberts, Dar A.; Gamon, John A.; Qiu, Hong-Lie
2000-01-01
A key scientific objective of the original Boreal Ecosystem-Atmosphere Study (BOREAS) field campaign (1993-1996) was to obtain the baseline data required for modeling and predicting fluxes of energy, mass, and trace gases in the boreal forest biome. These data sets are necessary to determine the sensitivity of the boreal forest biome to potential climatic changes and potential biophysical feedbacks on climate. A considerable volume of remotely sensed and supporting field data was acquired by numerous researchers to meet this objective. By design, remote sensing and modeling were considered critical components for scaling efforts, extending point measurements from flux towers and field sites over larger spatial and longer temporal scales. A major focus of the BOREAS Follow-on program was the integration of the diverse remotely sensed and ground-based data sets to address specific questions such as carbon dynamics at local to regional scales.
The Mach number of the cosmic flow - A critical test for current theories
NASA Technical Reports Server (NTRS)
Ostriker, Jeremiah P.; Suto, Yasushi
1990-01-01
A new cosmological, self-contained test using the ratio of the mean velocity to the velocity dispersion in the mean-flow frame of a group of test objects is presented. To allow comparison with linear theory, the velocity field must first be smoothed on a suitable scale. In the context of linear perturbation theory, the Mach number M(R), which measures the ratio of power on scales larger than the patch size R to power on scales smaller than R, is independent of the perturbation amplitude and also of bias. An apparent inconsistency is found for standard values of power-law index n = 1 and cosmological density parameter Omega = 1, when comparing values of M(R) predicted by popular models with tentative available observations. Nonstandard models based on adiabatic perturbations with either negative n or small Omega value also fail, due to creation of unacceptably large microwave background fluctuations.
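For reference, the cosmic Mach number of a patch of size R is the ratio of the bulk-flow speed of the patch to the velocity dispersion measured in the frame of that mean flow, consistent with the description above:

\[
\mathcal{M}(R) = \frac{\lvert \mathbf{V}(R) \rvert}{\sigma(R)}.
\]

In linear theory, power on scales larger than R drives V while power on scales smaller than R drives sigma, so M(R) probes the shape of the power spectrum around R independently of its overall amplitude, and hence of bias.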
NASA Technical Reports Server (NTRS)
Hattermann, F. F.; Krysanova, V.; Gosling, S. N.; Dankers, R.; Daggupati, P.; Donnelly, C.; Florke, M.; Huang, S.; Motovilov, Y.; Buda, S.;
2017-01-01
Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity to climate variability and climate change is comparable for impact models designed for either scale. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a better reproduction of reference conditions. However, the sensitivity of the two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases, but have distinct differences in other cases, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability. Whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models calibrated and validated against observed discharge should be used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hattermann, F. F.; Krysanova, V.; Gosling, S. N.
Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity of impact models designed for either scale to climate variability and change is comparable. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a much better reproduction of reference conditions. However, the sensitivity of two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases with distinct differences in others, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability, but whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models validated against observed discharge should be used.
Beta-diversity of ectoparasites at two spatial scales: nested hierarchy, geography and habitat type.
Warburton, Elizabeth M; van der Mescht, Luther; Stanko, Michal; Vinarski, Maxim V; Korallo-Vinarskaya, Natalia P; Khokhlova, Irina S; Krasnov, Boris R
2017-06-01
Beta-diversity of biological communities can be decomposed into (a) dissimilarity of communities among units of finer scale within units of broader scale and (b) dissimilarity of communities among units of broader scale. We investigated compositional, phylogenetic/taxonomic and functional beta-diversity of compound communities of fleas and gamasid mites parasitic on small Palearctic mammals in a nested hierarchy at two spatial scales: (a) continental scale (across the Palearctic) and (b) regional scale (across sites within Slovakia). At each scale, we analyzed beta-diversity among smaller units within larger units and among larger units with partitioning based on either geography or ecology. We asked (a) whether compositional, phylogenetic/taxonomic and functional dissimilarities of flea and mite assemblages are scale dependent; (b) how geographical (partitioning of sites according to geographic position) or ecological (partitioning of sites according to habitat type) characteristics affect phylogenetic/taxonomic and functional components of dissimilarity of ectoparasite assemblages and (c) whether assemblages of fleas and gamasid mites differ in their degree of dissimilarity, all else being equal. We found that compositional, phylogenetic/taxonomic, or functional beta-diversity was greater on a continental rather than a regional scale. Compositional and phylogenetic/taxonomic components of beta-diversity were greater among larger units than among smaller units within larger units, whereas functional beta-diversity did not exhibit any consistent trend regarding site partitioning. Geographic partitioning resulted in higher values of beta-diversity of ectoparasites than ecological partitioning. Compositional and phylogenetic components of beta-diversity were higher in fleas than mites but the opposite was true for functional beta-diversity in some, but not all, traits.
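A minimal sketch of the hierarchical beta-diversity partitioning described above, using plain Jaccard dissimilarity on presence/absence data (the study's phylogenetic/taxonomic and functional components use more elaborate metrics); the site-by-species matrix and the grouping of sites into larger units are invented for illustration.

```python
import numpy as np
from itertools import combinations

def jaccard_dissimilarity(a, b):
    """Jaccard dissimilarity between two presence/absence vectors."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return 0.0 if union == 0 else 1.0 - np.logical_and(a, b).sum() / union

def mean_pairwise(rows):
    return np.mean([jaccard_dissimilarity(rows[i], rows[j])
                    for i, j in combinations(range(len(rows)), 2)])

# Hypothetical data: 6 sites x 5 ectoparasite species, grouped into 2 larger units.
sites = np.array([[1, 1, 0, 0, 1],
                  [1, 0, 0, 0, 1],
                  [1, 1, 1, 0, 0],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1]])
units = {"A": sites[:3], "B": sites[3:]}

# (a) dissimilarity among smaller units (sites) within larger units
within = np.mean([mean_pairwise(u) for u in units.values()])
# (b) dissimilarity among larger units, using pooled unit-level assemblages
pooled = [u.any(axis=0).astype(int) for u in units.values()]
among = mean_pairwise(pooled)
print(f"within-unit beta: {within:.2f}, among-unit beta: {among:.2f}")
```

Running the same decomposition at continental versus regional scales, and with geographic versus habitat-based groupings of sites, corresponds to the kinds of comparisons reported above.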
The Evolution of Clutch Size in Hosts of Avian Brood Parasites.
Medina, Iliana; Langmore, Naomi E; Lanfear, Robert; Kokko, Hanna
2017-11-01
Coevolution with avian brood parasites shapes a range of traits in their hosts, including morphology, behavior, and breeding systems. Here we explore whether brood parasitism is also associated with the evolution of host clutch size. Several studies have proposed that hosts of highly virulent parasites could decrease the costs of parasitism by evolving a smaller clutch size, because hosts with smaller clutches will lose fewer progeny when their clutch is parasitized. We describe a model of the evolution of clutch size, which challenges this logic and shows instead that an increase in clutch size (or no change) should evolve in hosts. We test this prediction using a broad-scale comparative analysis to ask whether there are differences in clutch size within hosts and between hosts and nonhosts. Consistent with our model, this analysis revealed that host species do not have smaller clutches and that hosts that incur larger costs from raising a parasite lay larger clutches. We suggest that brood parasitism might be an influential factor in clutch-size evolution and could potentially select for the evolution of larger clutches in host species.
Magnetic helicity generation in the frame of Kazantsev model
NASA Astrophysics Data System (ADS)
Yushkov, Egor V.; Lukin, Alexander S.
2017-11-01
Using a magnetic dynamo model suggested by Kazantsev (J. Exp. Theor. Phys. 1968, vol. 26, p. 1031), we study small-scale helicity generation in a turbulent electrically conducting fluid. We obtain the asymptotic dependencies of the dynamo growth rate and the magnetic correlation functions on the magnetic Reynolds number. Special attention is devoted to the comparison of the longitudinal correlation function and the magnetic helicity function for various conditions of asymmetric turbulent flows. We compare the analytical solutions on small scales with numerical results calculated by an iterative algorithm on non-uniform grids. We show that current helicity grows exponentially together with the magnetic energy for magnetic Reynolds numbers larger than some critical value, and we estimate this value for various types of asymmetry.
Cosmic Ray Studies with the Fermi Gamma-ray Space Telescope Large Area Telescope
NASA Technical Reports Server (NTRS)
Thompson, David J.; Baldini, L.; Uchiyama, Y.
2012-01-01
The Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope provides both direct and indirect measurements of galactic cosmic rays (CR). The LAT high-statistics observations of the 7 GeV - 1 TeV electron plus positron spectrum and limits on spatial anisotropy constrain models for this cosmic-ray component. On a galactic scale, the LAT observations indicate that cosmic-ray sources may be more plentiful in the outer Galaxy than expected or that the scale height of the cosmic-ray diffusive halo is larger than conventional models. Production of cosmic rays in supernova remnants (SNR) is supported by the LAT gamma-ray studies of several of these, both young SNR and those interacting with molecular clouds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Rumeng; Wang, Lifeng, E-mail: walfe@nuaa.edu.cn
The nonlinear thermal vibration behavior of a single-walled carbon nanotube (SWCNT) is investigated by molecular dynamics simulation and a nonlinear, nonplanar beam model. Whirling motion with energy transfer between flexural motions is found in the free vibration of the SWCNT excited by the thermal motion of atoms where the geometric nonlinearity is significant. A nonlinear, nonplanar beam model considering the coupling in two vertical vibrational directions is presented to explain the whirling motion of the SWCNT. Energy in different vibrational modes is not equal even over a time scale of tens of nanoseconds, which is much larger than the period of fundamental natural vibration of the SWCNT at equilibrium state. The energy of different modes becomes equal when the time scale increases to the microsecond range.
Cosmic Ray Studies with the Fermi Gamma-ray Space Telescope Large Area Telescope
NASA Technical Reports Server (NTRS)
Thompson, D. J.; Baldini, L.; Uchiyama, Y.
2011-01-01
The Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope provides both direct and indirect measurements of Galactic cosmic rays (CR). The LAT high-statistics observations of the 7 GeV - 1 TeV electron plus positron spectrum and limits on spatial anisotropy constrain models for this cosmic-ray component. On a Galactic scale, the LAT observations indicate that cosmic-ray sources may be more plentiful in the outer Galaxy than expected or that the scale height of the cosmic-ray diffusive halo is larger than conventional models. Production of cosmic rays in supernova remnants (SNR) is supported by the LAT gamma-ray studies of several of these, both young SNR and those interacting with molecular clouds.
Experiments on a Tail-wheel Shimmy
NASA Technical Reports Server (NTRS)
Harling, R; Dietz, O
1954-01-01
Model tests on the "running belt" and tests with a full-scale tail wheel were made on a rotating drum as well as on a runway in order to investigate the causes of the undesirable shimmy phenomena frequently occurring on airplane tail wheels, and the means of avoiding them. The small model (scale 1:10) permitted simulation of the mass, moments of inertia, and fuselage stiffness of the airplane and determination of their influence on the shimmy, whereas by means of the larger model with pneumatic tires (scale 1:2) more accurate investigations were made on the tail wheel itself. The results of drum and road tests show good agreement with one another and with model values. Detailed investigations were made regarding the dependence of the shimmy tendency on trail, rolling speed, load, size of tires, ground friction, and inclination of the swivel axis; furthermore, regarding the influence of devices with restoring effect on the tail wheel, and the friction damping required for prevention of shimmy. Finally, observations from slow-motion pictures are reported and conclusions drawn concerning the influence of tire deformation.
NASA Astrophysics Data System (ADS)
McKenna, M. H.; Alter, R. E.; Swearingen, M. E.; Wilson, D. K.
2017-12-01
Many larger sources, such as volcanic eruptions and nuclear detonations, produce infrasound (acoustic waves with a frequency lower than humans can hear, namely 0.1-20 Hz) that can propagate over global scales. But many smaller infrastructure sources, such as bridges, dams, and buildings, also produce infrasound, though with a lower amplitude that tends to propagate only over regional scales (up to 150 km). In order to accurately calculate regional-scale infrasound propagation, we have incorporated high-resolution, three-dimensional forecasts from the Weather Research and Forecasting (WRF) meteorological model into a signal propagation modeling system called Environmental Awareness for Sensor and Emitter Employment (EASEE), developed at the US Army Engineer Research and Development Center. To quantify the improvement of infrasound propagation predictions with more realistic weather data, we conducted sensitivity studies with different propagation ranges and horizontal resolutions and compared them to default predictions with no weather model data. We describe the process of incorporating WRF output into EASEE for conducting these acoustic propagation simulations and present the results of the aforementioned sensitivity studies.
Modelling volatility recurrence intervals in the Chinese commodity futures market
NASA Astrophysics Data System (ADS)
Zhou, Weijie; Wang, Zhengxin; Guo, Haiming
2016-09-01
The statistics of extreme event occurrence attract much research interest. We study the volatility recurrence intervals of Chinese commodity futures market prices: the results show that the probability distributions of the scaled volatility recurrence intervals collapse onto a uniform scaling curve for different thresholds q, so the probability distribution of extreme events can be deduced from that of normal events. The tail of the scaling curve is well fitted by a Weibull form, which passes Kolmogorov-Smirnov (KS) significance tests. Both short-term and long-term memories are present in the recurrence intervals for different thresholds q, which indicates that the recurrence intervals are predictable. In addition, like volatility itself, volatility recurrence intervals exhibit clustering. Through Monte Carlo simulation, we synthesise ARMA and GARCH-class sequences similar to the original data to identify the source of this clustering: the larger the parameter d of the FIGARCH model, the stronger the clustering effect. Finally, we use the Fractionally Integrated Autoregressive Conditional Duration (FIACD) model to analyse the recurrence interval characteristics; the results indicate that the FIACD model provides a suitable framework for analysing volatility recurrence intervals.
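A minimal sketch of the recurrence-interval calculation described above, assuming absolute returns as the volatility proxy and an illustrative threshold; the Weibull tail fit below is a simplification of the paper's significance-tested procedure.

```python
import numpy as np
from scipy import stats

def recurrence_intervals(volatility, q):
    """Intervals (in samples) between successive exceedances of threshold q."""
    exceedances = np.nonzero(volatility > q)[0]
    return np.diff(exceedances)

# Hypothetical volatility series: absolute returns of a heavy-tailed toy process.
rng = np.random.default_rng(1)
vol = np.abs(rng.standard_t(df=4, size=20_000) * 0.01)

q = np.quantile(vol, 0.95)            # threshold at the 95th percentile
tau = recurrence_intervals(vol, q)
scaled = tau / tau.mean()             # scaled intervals, as used in the scaling collapse

# Weibull fit to the scaled intervals (location fixed at zero), with a KS check.
shape, loc, scale = stats.weibull_min.fit(scaled, floc=0)
ks_stat, p_value = stats.kstest(scaled, "weibull_min", args=(shape, loc, scale))
print(f"Weibull shape={shape:.2f}, scale={scale:.2f}, KS p-value={p_value:.3f}")
```

With real futures data one would repeat this for several thresholds q and check that the scaled distributions collapse onto a single curve, as the abstract reports.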
Cecala, Kristen K.; Maerz, John C.; Halstead, Brian J.; Frisch, John R.; Gragson, Ted L.; Hepinstall-Cymerman, Jeffrey; Leigh, David S.; Jackson, C. Rhett; Peterson, James T.; Pringle, Catherine M.
2018-01-01
Understanding how factors that vary in spatial scale relate to population abundance is vital to forecasting species responses to environmental change. Stream and river ecosystems are inherently hierarchical, potentially resulting in organismal responses to fine‐scale changes in patch characteristics that are conditional on the watershed context. Here, we address how populations of two salamander species are affected by interactions among hierarchical processes operating at different scales within a rapidly changing landscape of the southern Appalachian Mountains. We modeled reach‐level occupancy of larval and adult black‐bellied salamanders (Desmognathus quadramaculatus) and larval Blue Ridge two‐lined salamanders (Eurycea wilderae) as a function of 17 different terrestrial and aquatic predictor variables that varied in spatial extent. We found that salamander occurrence varied widely among streams within fully forested catchments, but also exhibited species‐specific responses to changes in local conditions. While D. quadramaculatus declined predictably in relation to losses in forest cover, larval occupancy exhibited the strongest negative response to forest loss as well as decreases in elevation. Conversely, occupancy of E. wilderae was unassociated with watershed conditions, only responding negatively to higher proportions of fast‐flowing stream habitat types. Evaluation of hierarchical relationships demonstrated that most fine‐scale variables were closely correlated with broad watershed‐scale variables, suggesting that local reach‐scale factors have relatively smaller effects within the context of the larger landscape. Our results imply that effective management of southern Appalachian stream salamanders must first focus on the larger scale condition of watersheds before management of local‐scale conditions should proceed. Our findings confirm the results of some studies while refuting the results of others, which may indicate that prescriptive recommendations for range‐wide management of species or the application of a single management focus across large geographic areas is inappropriate.
Similarity Rules for Scaling Solar Sail Systems
NASA Technical Reports Server (NTRS)
Canfield, Stephen L.; Beard, James W., III; Peddieson, John; Ewing, Anthony; Garbe, Greg
2004-01-01
Future science missions will require solar sails on the order of 10,000 sq m (or larger). However, ground and flight demonstrations must be conducted at significantly smaller sizes (400 sq m for the ground demonstration) due to limitations of ground-based facilities and the cost and availability of flight opportunities. For this reason, the ability to understand the process of scalability, as it applies to solar sail system models and test data, is crucial to the advancement of this technology. This report addresses issues of scaling in solar sail systems, focusing on structural characteristics, by developing a set of similarity or similitude functions that will guide the scaling process. The primary goal of these similarity functions (process invariants), which collectively form a set of scaling rules or guidelines, is to establish valid relationships between models and experiments that are performed at different orders of scale. In the near term, such an effort will help guide the size and properties of a flight validation sail that will need to be flown to accurately represent a large, mission-level sail.
Stochastic inflation lattice simulations - Ultra-large scale structure of the universe
NASA Technical Reports Server (NTRS)
Salopek, D. S.
1991-01-01
Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients, a^(-1), small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a toy model with a scalar field interacting with an exponential potential where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits.
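For orientation, the single-field Langevin equation of stochastic inflation can be integrated at independent lattice points as sketched below; the exponential potential echoes the toy model mentioned above, but the parameter values, units (reduced Planck mass set to 1), and lattice size are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(7)

# Exponential potential V(phi) = V0 * exp(-lam * phi), reduced Planck mass = 1.
V0, lam = 1e-10, 0.1

def V(phi):
    return V0 * np.exp(-lam * phi)

def dV(phi):
    return -lam * V(phi)

def evolve(phi0=0.0, n_points=4096, n_efolds=60.0, dN=0.01):
    """Integrate d(phi)/dN = -V'/(3H^2) + (H/2pi) xi(N) at independent points."""
    phi = np.full(n_points, phi0)
    for _ in range(int(n_efolds / dN)):
        H = np.sqrt(V(phi) / 3.0)                  # slow-roll Hubble rate
        drift = -dV(phi) / (3.0 * H ** 2)          # classical slow-roll drift
        noise = (H / (2.0 * np.pi)) * np.sqrt(dN) * rng.normal(size=n_points)
        phi += drift * dN + noise
    return phi

phi_final = evolve()
print(f"field mean {phi_final.mean():.3f}, std {phi_final.std():.2e}")
```

In this weakly coupled toy setting the resulting distribution of field values is very nearly Gaussian, in line with the abstract's statement that single-field models consistent with microwave background limits produce Gaussian fluctuations on observable scales.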
Recent Advances in the LEWICE Icing Model
NASA Technical Reports Server (NTRS)
Wright, William B.; Addy, Gene; Struk, Peter; Bartkus, Tadas
2015-01-01
This paper describes two recent modifications to the Glenn ICE software. First, a capability for modeling ice crystals and mixed-phase icing has been modified based on recent experimental data. Modifications have been made to the ice particle bouncing and erosion model. This capability has been added as part of a larger effort to model ice crystal ingestion in aircraft engines. Comparisons have been made to ice crystal ice accretions performed in the NRC Research Altitude Test Facility (RATFac). Second, modifications were made to the runback model based on data and observations from thermal scaling tests performed in the NRC Altitude Icing Tunnel.
Locatelli, R.; Bousquet, P.; Chevallier, F.; ...
2013-10-08
A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In this framework, we show that transport model errors lead to a discrepancy of 27 Tg yr-1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr-1 in North America to 7 Tg yr-1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations strongly question the consistency of transport model errors in current inverse systems.
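For orientation, the core analysis step of such a flux inversion can be illustrated with the standard linear-Gaussian update (a generic sketch, not the PYVAR-LMDZ-SACS implementation); the problem sizes, covariances, and synthetic transport operator below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_flux, n_obs = 8, 20                       # toy problem sizes (illustrative)
H = rng.normal(size=(n_obs, n_flux))        # surrogate transport operator (Jacobian)
x_true = rng.normal(size=n_flux)            # "true" fluxes used to make synthetic data
x_prior = np.zeros(n_flux)                  # prior flux estimate
B = np.eye(n_flux)                          # prior error covariance
R = 0.5 * np.eye(n_obs)                     # observation error covariance
y = H @ x_true + rng.normal(scale=0.5 ** 0.5, size=n_obs)   # synthetic observations

# Best linear unbiased update: x_hat = x_prior + K (y - H x_prior)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_hat = x_prior + K @ (y - H @ x_prior)
print(np.round(x_hat - x_true, 2))          # residual flux errors after inversion
```

In the experiment described above, the analogous update is repeated with synthetic observations generated by ten different transport models while the inversion system itself stays fixed, so the spread of the resulting flux estimates isolates the contribution of transport model error.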
A first large-scale flood inundation forecasting model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie
2013-11-04
At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode revealed that it is crucial to account for basin-wide hydrological response time when assessing lead-time performance, notwithstanding structural limitations in the hydrological model and possibly large inaccuracies in precipitation data.
Spatial-pattern-induced evolution of a self-replicating loop network.
Suzuki, Keisuke; Ikegami, Takashi
2006-01-01
We study a system of self-replicating loops in which interaction rules between individuals allow competition that leads to the formation of a hypercycle-like network. The main feature of the model is the multiple layers of interaction between loops, which lead to both global spatial patterns and local replication. The network of loops manifests itself as a spiral structure from which new kinds of self-replicating loops emerge at the boundaries between different species. In these regions, larger and more complex self-replicating loops live for longer periods of time, managing to self-replicate in spite of their slower replication. Of particular interest is how micro-scale interactions between replicators lead to macro-scale spatial pattern formation, and how these macro-scale patterns in turn perturb the micro-scale replication dynamics.
NASA Astrophysics Data System (ADS)
de Boer, D. H.; Hassan, M. A.; MacVicar, B.; Stone, M.
2005-01-01
Contributions by Canadian fluvial geomorphologists between 1999 and 2003 are discussed under four major themes: sediment yield and sediment dynamics of large rivers; cohesive sediment transport; turbulent flow structure and sediment transport; and bed material transport and channel morphology. The paper concludes with a section on recent technical advances. During the review period, substantial progress has been made in investigating the details of fluvial processes at relatively small scales. Examples of this emphasis are the studies of flow structure, turbulence characteristics and bedload transport, which continue to form central themes in fluvial research in Canada. Translating the knowledge of small-scale, process-related research to an understanding of the behaviour of large-scale fluvial systems, however, continues to be a formidable challenge. Models play a prominent role in elucidating the link between small-scale processes and large-scale fluvial geomorphology, and, as a result, a number of papers describing models and modelling results have been published during the review period. In addition, a number of investigators are now approaching the problem by directly investigating changes in the system of interest at larger scales, e.g. a channel reach over tens of years, and attempting to infer what processes may have led to the result. It is to be expected that these complementary approaches will contribute to an increased understanding of fluvial systems at a variety of spatial and temporal scales.
Wang, Yongjiang; Pang, Li; Liu, Xinyu; Wang, Yuansheng; Zhou, Kexun; Luo, Fei
2016-04-01
A comprehensive model of thermal balance and degradation kinetics was developed to determine the optimal reactor volume and insulation material. Biological heat production and five channels of heat loss were considered in the thermal balance model for a representative reactor. Degradation kinetics was developed to make the model applicable to different types of substrates. Simulation of the model showed that the internal energy accumulation of the compost was the most significant heat loss channel, followed by heat loss through the reactor wall and the latent heat of water evaporation. A lower proportion of heat loss occurred through the reactor wall when the reactor volume was larger. Insulating materials with low densities and low conductive coefficients were more desirable for building small reactor systems. The model developed could be used to determine the optimal reactor volume and insulation material needed before the fabrication of a lab-scale composting system. Copyright © 2016 Elsevier Ltd. All rights reserved.
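A minimal sketch of a lumped thermal balance of the kind described above (a generic energy budget, not the authors' full model with degradation kinetics); the heat-production rate, loss coefficients, and reactor properties are illustrative assumptions.

```python
import numpy as np

def simulate_compost_temperature(hours=120.0, dt=0.1):
    """Lumped heat balance dU/dt = Q_bio - Q_wall - Q_latent (illustrative)."""
    m, cp = 50.0, 2.0e3           # compost mass [kg] and specific heat [J/(kg K)]
    UA = 2.5                      # overall wall heat-loss coefficient [W/K]
    T_amb, T = 20.0, 20.0         # ambient and initial temperature [deg C]
    q_bio = 3.0                   # biological heat production [W/kg] (toy constant)
    latent_frac = 0.3             # fraction of produced heat removed by evaporation
    times = np.arange(0.0, hours, dt)
    temps = np.empty_like(times)
    for i, _ in enumerate(times):
        Q_bio = q_bio * m
        Q_wall = UA * (T - T_amb)
        Q_latent = latent_frac * Q_bio
        T += (Q_bio - Q_wall - Q_latent) * (dt * 3600.0) / (m * cp)
        temps[i] = T
    return times, temps

t, T = simulate_compost_temperature()
print(f"quasi-steady temperature in this toy run: {T[-1]:.1f} deg C")
```

Because wall loss scales with surface area while biological heat production scales with volume, a larger reactor loses a smaller proportion of its heat through the wall, which is the volume effect noted in the abstract.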
Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression
Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; ...
2017-01-18
Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We also developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.
Size effects in non-linear heat conduction with flux-limited behaviors
NASA Astrophysics Data System (ADS)
Li, Shu-Nan; Cao, Bing-Yang
2017-11-01
Size effects are discussed for several non-linear heat conduction models with flux-limited behaviors, including the phonon hydrodynamic, Lagrange multiplier, hierarchy moment, nonlinear phonon hydrodynamic, tempered diffusion, thermon gas and generalized nonlinear models. For the phonon hydrodynamic, Lagrange multiplier and tempered diffusion models, a heat flux solution does not exist in problems at sufficiently small scales. The existence of a heat flux solution requires the heat conduction size to exceed a corresponding critical size, which is determined by the physical properties and boundary temperatures. These critical sizes can be regarded as the theoretical limits of the applicable ranges of these non-linear heat conduction models with flux-limited behaviors. For sufficiently small-scale heat conduction, the phonon hydrodynamic and Lagrange multiplier models also predict the theoretical possibility of second-law violation and solution multiplicity. Comparisons are also made between these non-Fourier models and non-linear Fourier heat conduction of the fast-diffusion type, which can also predict flux-limited behaviors.
Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.
Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We also developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.
Linking models and data on vegetation structure
NASA Astrophysics Data System (ADS)
Hurtt, G. C.; Fisk, J.; Thomas, R. Q.; Dubayah, R.; Moorcroft, P. R.; Shugart, H. H.
2010-06-01
For more than a century, scientists have recognized the importance of vegetation structure in understanding forest dynamics. Now future satellite missions such as Deformation, Ecosystem Structure, and Dynamics of Ice (DESDynI) hold the potential to provide unprecedented global data on vegetation structure needed to reduce uncertainties in terrestrial carbon dynamics. Here, we briefly review the uses of data on vegetation structure in ecosystem models, develop and analyze theoretical models to quantify model-data requirements, and describe recent progress using a mechanistic modeling approach utilizing a formal scaling method and data on vegetation structure to improve model predictions. Generally, both limited sampling and coarse resolution averaging lead to model initialization error, which in turn is propagated in subsequent model prediction uncertainty and error. In cases with representative sampling, sufficient resolution, and linear dynamics, errors in initialization tend to compensate at larger spatial scales. However, with inadequate sampling, overly coarse resolution data or models, and nonlinear dynamics, errors in initialization lead to prediction error. A robust model-data framework will require both models and data on vegetation structure sufficient to resolve important environmental gradients and tree-level heterogeneity in forest structure globally.
NASA Technical Reports Server (NTRS)
Matsui, Toshihisa; Chern, Jiun-Dar; Tao, Wei-Kuo; Lang, Stephen E.; Satoh, Masaki; Hashino, Tempei; Kubota, Takuji
2016-01-01
A 14-year climatology of Tropical Rainfall Measuring Mission (TRMM) collocated multi-sensor signal statistics reveals a distinct land-ocean contrast as well as geographical variability of precipitation type, intensity, and microphysics. Microphysics information inferred from the TRMM precipitation radar and Microwave Imager (TMI) shows a large land-ocean contrast for the deep category, suggesting continental convective vigor. Over land, TRMM shows higher echo-top heights and larger maximum echoes, suggesting taller storms and more intense precipitation, as well as larger microwave scattering, suggesting the presence of more and larger frozen convective hydrometeors. This strong land-ocean contrast in deep convection is invariant over seasonal and multi-year time-scales. Consequently, relatively short-term simulations from two global storm-resolving models can be evaluated in terms of their land-ocean statistics using the TRMM Triple-sensor Three-step Evaluation via a satellite simulator. The models evaluated are the NASA Multi-scale Modeling Framework (MMF) and the Non-hydrostatic Icosahedral Cloud Atmospheric Model (NICAM). While both simulations can represent convective land-ocean contrasts in warm precipitation to some extent, near-surface conditions over land are relatively moister in NICAM than in MMF, which appears to be the key driver of the divergent warm precipitation results between the two models. Both the MMF and NICAM produced similar frequencies of large CAPE between land and ocean. The dry MMF boundary layer enhanced microwave scattering signals over land, but only NICAM had an enhanced deep convection frequency over land. Neither model could reproduce a realistic land-ocean contrast in deep convective precipitation microphysics. A realistic contrast between land and ocean remains an issue in global storm-resolving modeling.
O'Donnell, Michael
2015-01-01
State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computations) indicated an approximate 96.6% decrease of computing time. With a single, multicore compute node (bottom result), the computing time indicated an 81.8% decrease relative to using serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
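The workload described here is embarrassingly parallel: each Monte Carlo replicate is independent. As a hedged sketch of that pattern only (SyncroSim itself is not used, and run_replicate is a placeholder standing in for one model run), the snippet below distributes replicates across local cores with Python's multiprocessing.

```python
# Hedged sketch of embarrassingly parallel Monte Carlo replicates of a toy
# state-and-transition model; run_replicate is a stand-in for one model run.
import random
from multiprocessing import Pool

def run_replicate(seed, years=100):
    """Toy stand-in for one Monte Carlo replicate of a state-transition model."""
    rng = random.Random(seed)
    state = "shrubland"
    for _ in range(years):
        if state == "shrubland" and rng.random() < 0.02:    # fire transition
            state = "grassland"
        elif state == "grassland" and rng.random() < 0.05:  # recovery transition
            state = "shrubland"
    return state

if __name__ == "__main__":
    with Pool() as pool:                      # one worker per available core
        finals = pool.map(run_replicate, range(500))
    print("fraction ending as shrubland:", finals.count("shrubland") / len(finals))
```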
A simple shear limited, single size, time dependent flocculation model
NASA Astrophysics Data System (ADS)
Kuprenas, R.; Tran, D. A.; Strom, K.
2017-12-01
This research focuses on the modeling of flocculation of cohesive sediment due to turbulent shear, specifically, investigating the dependency of flocculation on the concentration of cohesive sediment. Flocculation is important in larger sediment transport models as cohesive particles can create aggregates which are orders of magnitude larger than their unflocculated state. As the settling velocity of each particle is determined by the sediment size, density, and shape, accounting for this aggregation is important in determining where the sediment is deposited. This study provides a new formulation for flocculation of cohesive sediment by modifying the Winterwerp (1998) flocculation model (W98) so that it limits floc size to that of the Kolmogorov micro length scale. The W98 model is a simple approach that calculates the average floc size as a function of time. Because of its simplicity, the W98 model is ideal for implementing into larger sediment transport models; however, the model tends to overpredict the dependency of the floc size on concentration. It was found that modifying the coefficients within the original model did not allow the model to capture the dependency on concentration. Therefore, a new term was added within the breakup kernel of the W98 formulation. The new formulation results in a single-size, shear-limited, and time-dependent flocculation model that is able to effectively capture the dependency of the equilibrium floc size on both suspended sediment concentration and the time to equilibrium. The overall behavior of the new model is explored and shown to align well with other studies on flocculation. Winterwerp, J. C. (1998). A simple model for turbulence induced flocculation of cohesive sediment. Journal of Hydraulic Research, 36(3):309-326.
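As a purely illustrative sketch of the kind of single-class, shear-limited flocculation ODE the W98 family describes, the snippet below integrates a growth term driven by shear-induced aggregation and a breakup term driven by turbulent stresses, with growth capped near the Kolmogorov scale. The kernel forms, the cap factor, and every coefficient are placeholders, not the authors' modified kernel.

```python
# Illustrative single-class flocculation ODE in the spirit of Winterwerp (1998);
# all coefficients and the Kolmogorov-scale cap factor are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

nu = 1.0e-6          # kinematic viscosity of water [m^2/s]
G = 20.0             # turbulent shear rate [1/s]
c = 0.1              # suspended sediment concentration [kg/m^3]
Dp = 4.0e-6          # primary particle size [m]
ka, kb = 0.5, 2.0e3  # aggregation / breakup coefficients (illustrative)
eta = np.sqrt(nu / G)  # Kolmogorov length scale for shear rate G

def dDdt(t, D):
    growth = ka * c * G * D * max(0.0, 1.0 - D[0] / eta)  # capped near eta
    breakup = kb * G**1.5 * D * (D - Dp)
    return growth - breakup

sol = solve_ivp(dDdt, (0.0, 3600.0), [Dp], max_step=1.0)
print(f"equilibrium floc size ~ {sol.y[0, -1]*1e6:.1f} microns "
      f"(Kolmogorov scale {eta*1e6:.1f} microns)")
```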
NASA Astrophysics Data System (ADS)
Greer, A. T.; Woodson, C. B.
2016-02-01
Because of the complexity and extremely large size of marine ecosystems, research attention has a strong focus on modelling the system through space and time to elucidate processes driving ecosystem state. One of the major weaknesses of current modelling approaches is the reliance on a particular grid cell size (usually 10's of km in the horizontal & water column mean) to capture the relevant processes, even though empirical research has shown that marine systems are highly structured on fine scales, and this structure can persist over relatively long time scales (days to weeks). Fine-scale features can have a strong influence on the predator-prey interactions driving trophic transfer. Here we apply a statistic, the AB ratio, used to quantify increased predator production due to predator-prey overlap on fine scales in a manner that is computationally feasible for larger scale models. We calculated the AB ratio for predator-prey distributions throughout the scientific literature, as well as for data obtained with a towed plankton imaging system, demonstrating that averaging across a typical model grid cell neglects the fine-scale predator-prey overlap that is an essential component of ecosystem productivity. Organisms from a range of trophic levels and oceanographic regions tended to overlap with their prey both in the horizontal and vertical dimensions. When predator swimming over a diel cycle was incorporated, the amount of production indicated by the AB ratio increased substantially. For the plankton image data, the AB ratio was higher with increasing sampling resolution, especially when prey were highly aggregated. We recommend that ecosystem models incorporate more fine-scale information both to more accurately capture trophic transfer processes and to capitalize on the increasing sampling resolution and data volume from empirical studies.
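To make the grid-cell-averaging argument concrete, the sketch below compares a fine-scale overlap index for an aggregated predator layer against the same biomass randomized in depth. The simple ratio used here, mean(predator*prey)/(mean(predator)*mean(prey)), is an illustrative overlap index and may not be the exact AB-ratio definition used in the study; the profiles are synthetic.

```python
# Illustrative fine-scale overlap index within one model grid cell (synthetic data);
# the exact AB-ratio definition of the study may differ.
import numpy as np

rng = np.random.default_rng(1)
z = np.linspace(0, 50, 501)                      # depth within one grid cell [m]
prey = np.exp(-0.5 * ((z - 20) / 2.0) ** 2)      # thin prey layer
pred = np.exp(-0.5 * ((z - 21) / 3.0) ** 2)      # predators aggregated near the layer
pred_random = rng.permutation(pred)              # same biomass, overlap destroyed

def overlap_index(p, q):
    return np.mean(p * q) / (np.mean(p) * np.mean(q))

print(f"aggregated:  {overlap_index(pred, prey):.2f}")        # >> 1: enhanced encounters
print(f"randomized:  {overlap_index(pred_random, prey):.2f}")  # ~ 1: mean-field value
```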
NASA Astrophysics Data System (ADS)
Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.
2000-12-01
Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of hundreds of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale because limited electron acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field-scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at the Canadian Forces Base (CFB) Borden. All input parameters were derived from independent laboratory and field measurements or taken from the literature prior to the simulations. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis on the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field-scale degradation, provided all controlling factors are incorporated in the field-scale model. These factors include advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
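A minimal sketch of dual-Monod degradation kinetics with a finite oxygen pool illustrates why a fixed zero- or first-order rate overpredicts field-scale degradation once the electron acceptor is exhausted. The batch formulation and all parameter values below are illustrative, not the BIO3D or Borden values.

```python
# Illustrative dual-Monod batch kinetics with a limited electron acceptor (oxygen);
# parameter values are assumptions, not those of the study.
import numpy as np
from scipy.integrate import solve_ivp

kmax, Ks, Ko = 0.5, 2.0, 0.2   # 1/d, mg/L, mg/L
Y, gamma = 0.1, 3.0            # biomass yield, oxygen demand per unit substrate

def rhs(t, y):
    C, O, X = y                              # substrate, oxygen, biomass [mg/L]
    r = kmax * X * C / (Ks + C) * O / (Ko + O)
    return [-r, -gamma * r, Y * r]

sol = solve_ivp(rhs, (0, 60), [10.0, 8.0, 0.5], max_step=0.1)
C, O, X = sol.y[:, -1]
# Degradation stalls once oxygen is depleted, unlike a fixed first-order decay.
print(f"after 60 d: substrate {C:.2f}, oxygen {O:.2f}, biomass {X:.2f} mg/L")
```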
NASA Astrophysics Data System (ADS)
Schmengler, A. C.; Vlek, P. L. G.
2012-04-01
Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically-based Water Erosion Prediction Project (WEPP) model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 and 50 t ha⁻¹ yr⁻¹ depending on the spatial location on the hillslope and have only limited correspondence with the results of the ¹³⁷Cs technique. These differences in absolute soil loss values could be due either to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the ¹³⁷Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially-distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment yield rates of 0.6 t ha⁻¹ yr⁻¹ predicted by the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in ¹³⁷Cs concentrations. Sediment erosion rates averaged over the entire catchment, of 1 to 2 t ha⁻¹ yr⁻¹, are significantly lower than results obtained at hillslope scale, confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model. The study has shown that the use of multiple methods facilitates the calibration and validation of models and might provide a more accurate measure for soil erosion rates in ungauged catchments. Moreover, the approach could be used to identify the most appropriate working and operational scales for soil erosion modelling.
Recent assimilation developments of FOAM the Met Office ocean forecast system
NASA Astrophysics Data System (ADS)
Lea, Daniel; Martin, Matthew; Waters, Jennifer; Mirouze, Isabelle; While, James; King, Robert
2015-04-01
FOAM is the Met Office's operational ocean forecasting system. This system comprises a range of models, from a 1/4 degree resolution global model to 1/12 degree resolution regional models and shelf seas models at 7 km resolution. The system is made up of the ocean model NEMO (Nucleus for European Modelling of the Ocean), the Los Alamos sea ice model CICE and the NEMOVAR assimilation run in 3D-VAR FGAT mode. Work is ongoing to transition to both a higher resolution global ocean model at 1/12 degrees and to run FOAM in coupled models. The FOAM system generally performs well. One area of concern, however, is the performance in the tropics, where spurious oscillations and excessive vertical velocity gradients are found after assimilation. NEMOVAR includes a balance operator which in the extra-tropics uses geostrophic balance to produce velocity increments which balance the density increments applied. In the tropics, however, the main balance is between the pressure gradients produced by the density gradient and the applied wind stress. A scheme is presented which aims to maintain this balance when increments are applied. Another issue in FOAM is that there are sometimes persistent temperature and salinity errors which are not effectively corrected by the assimilation. The standard NEMOVAR has a single correlation length scale based on the local Rossby radius. This means that observations in the extra-tropics influence the model only on short length scales. In order to maximise the information extracted from the observations and to correct large-scale model biases, a multiple correlation length-scale scheme has been developed. This includes a larger length scale which spreads observation information further. Various refinements of the scheme are also explored, including reducing the longer length-scale component at the edge of the sea ice and in areas with high potential vorticity gradients. A related scheme which varies the correlation length scale in the shelf seas is also described.
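The multiple-length-scale idea can be illustrated with a background-error correlation built as a weighted sum of a short (Rossby-radius-like) and a long (bias-correcting) Gaussian component. The weights and scales below are illustrative guesses, not the operational NEMOVAR settings.

```python
# Illustrative two-length-scale correlation function; weights/scales are assumptions.
import numpy as np

def correlation(r_km, L_short=40.0, L_long=400.0, w_long=0.3):
    short = np.exp(-0.5 * (r_km / L_short) ** 2)   # local, Rossby-radius-like component
    long_ = np.exp(-0.5 * (r_km / L_long) ** 2)    # broad component for large-scale bias
    return (1.0 - w_long) * short + w_long * long_

for r in (0, 50, 150, 500):
    print(f"r = {r:4d} km  ->  correlation {correlation(r):.3f}")
```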
In the absence of a "landscape of fear": How lions, hyenas, and cheetahs coexist.
Swanson, Alexandra; Arnold, Todd; Kosmala, Margaret; Forester, James; Packer, Craig
2016-12-01
Aggression by top predators can create a "landscape of fear" in which subordinate predators restrict their activity to low-risk areas or times of day. At large spatial or temporal scales, this can result in the costly loss of access to resources. However, fine-scale reactive avoidance may minimize the risk of aggressive encounters for subordinate predators while maintaining access to resources, thereby providing a mechanism for coexistence. We investigated fine-scale spatiotemporal avoidance in a guild of African predators characterized by intense interference competition. Vulnerable to food stealing and direct killing, cheetahs are expected to avoid both larger predators; hyenas are expected to avoid lions. We deployed a grid of 225 camera traps across 1,125 km² in Serengeti National Park, Tanzania, to evaluate concurrent patterns of habitat use by lions, hyenas, cheetahs, and their primary prey. We used hurdle models to evaluate whether smaller species avoided areas preferred by larger species, and we used time-to-event models to evaluate fine-scale temporal avoidance in the hours immediately surrounding top predator activity. We found no evidence of long-term displacement of subordinate species, even at fine spatial scales. Instead, hyenas and cheetahs were positively associated with lions except in areas with exceptionally high lion use. Hyenas and lions appeared to actively track each other, while cheetahs appeared to maintain long-term access to sites with high lion use by actively avoiding those areas just in the hours immediately following lion activity. Our results suggest that cheetahs are able to use patches of preferred habitat by avoiding lions on a moment-to-moment basis. Such fine-scale temporal avoidance is likely to be less costly than long-term avoidance of preferred areas: this may help explain why cheetahs are able to coexist with lions despite high rates of lion-inflicted mortality, and highlights reactive avoidance as a general mechanism for predator coexistence.
The Zero Boil-Off Tank Experiment Contributions to the Development of Cryogenic Fluid Management
NASA Technical Reports Server (NTRS)
Chato, David J.; Kassemi, Mohammad
2015-01-01
The Zero Boil-Off Technology (ZBOT) Experiment involves performing a small-scale ISS experiment to study tank pressurization and pressure control in microgravity. The ZBOT experiment consists of a vacuum-jacketed test tank filled with an inert fluorocarbon simulant liquid. Heaters and thermo-electric coolers are used in conjunction with an axial jet mixer flow loop to study a range of thermal conditions within the tank. The objective is to provide a high-quality database of low-gravity fluid motions and thermal transients which will be used to validate Computational Fluid Dynamics (CFD) modeling. These CFD models can then be used in turn to predict behavior in larger systems with cryogens. This paper will discuss the current status of the ZBOT experiment as it approaches its flight and installation on the International Space Station, how its findings can be scaled to larger and more ambitious cryogenic fluid management experiments, and ideas for follow-on investigations using ZBOT-like hardware to study other aspects of cryogenic fluid management.
NASA Astrophysics Data System (ADS)
von Boetticher, Albrecht; Turowski, Jens M.; McArdell, Brian; Rickenmann, Dieter
2016-04-01
Debris flows are frequent natural hazards that cause massive damage. A wide range of debris flow models try to cover the complex flow behavior that arises from the inhomogeneous material mixture of water with clay, silt, sand, and gravel. The energy dissipation between moving grains depends on grain collisions and tangential friction, and the viscosity of the interstitial fine material suspension depends on the shear gradient. Thus a rheology description needs to be sensitive to the local pressure and shear rate, making the three-dimensional flow structure a key issue for flows in complex terrain. Furthermore, the momentum exchange between the granular and fluid phases should account for the presence of larger particles. We model the fine material suspension with a Herschel-Bulkley rheology law, and represent the gravel with the Coulomb-viscoplastic rheology of Domnik & Pudasaini (Domnik et al. 2013). Both composites are described by two phases that can mix; a third phase accounting for the air is kept separate to account for the free surface. The fluid dynamics are solved in three dimensions using the finite volume open-source code OpenFOAM. Computational costs are kept reasonable by using the Volume of Fluid method to solve only one phase-averaged system of Navier-Stokes equations. The Herschel-Bulkley parameters are modeled as a function of water content, volumetric solid concentration of the mixture, clay content and its mineral composition (Coussot et al. 1989, Yu et al. 2013). The gravel phase properties needed for the Coulomb-viscoplastic rheology are defined by the angle of repose of the gravel. In addition to this basic setup, larger grains and the corresponding grain collisions can be introduced by a coupled Lagrangian particle simulation. Based on the local Savage number a diffusive term in the gravel phase can activate phase separation. The resulting model can reproduce the sensitivity of the debris flow to water content and channel bed roughness, as illustrated with lab-scale and large-scale experiments. A large-scale natural landslide event down a curved channel is presented to show the model performance at such a scale, calibrated based on the observed surface super-elevation.
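The fine-material rheology referred to above follows the Herschel-Bulkley stress law, tau = tau_y + K * gamma_dot**n. The sketch below evaluates that law with generic parameter values; in the actual model the parameters are functions of water content, solids concentration and clay mineralogy, which is not reproduced here.

```python
# Herschel-Bulkley stress law with illustrative (assumed) parameter values.
import numpy as np

def herschel_bulkley_stress(gamma_dot, tau_y=50.0, K=15.0, n=0.4):
    """Shear stress [Pa] at shear rate gamma_dot [1/s]."""
    return tau_y + K * np.power(gamma_dot, n)

gamma_dot = np.array([0.1, 1.0, 10.0, 100.0])
tau = herschel_bulkley_stress(gamma_dot)
apparent_viscosity = tau / gamma_dot   # shear-thinning: drops as shear rate rises
print("stress [Pa]:            ", np.round(tau, 1))
print("apparent viscosity [Pa s]:", np.round(apparent_viscosity, 1))
```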
NASA Astrophysics Data System (ADS)
Higgins, N.; Lapusta, N.
2014-12-01
Many large earthquakes on natural faults are preceded by smaller events, often termed foreshocks, that occur close in time and space to the larger event that follows. Understanding the origin of such events is important for understanding earthquake physics. Unique laboratory experiments of earthquake nucleation in a meter-scale slab of granite (McLaskey and Kilgore, 2013; McLaskey et al., 2014) demonstrate that sample-scale nucleation processes are also accompanied by much smaller seismic events. One potential explanation for these foreshocks is that they occur on small asperities - or bumps - on the fault interface, which may also be the locations of smaller critical nucleation size. We explore this possibility through 3D numerical simulations of a heterogeneous 2D fault embedded in a homogeneous elastic half-space, in an attempt to qualitatively reproduce the laboratory observations of foreshocks. In our model, the simulated fault interface is governed by rate-and-state friction with laboratory-relevant frictional properties, fault loading, and fault size. To create favorable locations for foreshocks, the fault surface heterogeneity is represented as patches of increased normal stress, decreased characteristic slip distance L, or both. Our simulation results indicate that one can create a rate-and-state model of the experimental observations. Models with a combination of higher normal stress and lower L at the patches are closest to matching the laboratory observations of foreshocks in moment magnitude, source size, and stress drop. In particular, we find that, when the local compression is increased, foreshocks can occur on patches that are smaller than theoretical critical nucleation size estimates. The additional inclusion of lower L for these patches helps to keep stress drops within the range observed in experiments, and is compatible with the asperity model of foreshock sources, since one would expect more compressed spots to be smoother (and hence have lower L). In this heterogeneous rate-and-state fault model, the foreshocks interact with each other and with the overall nucleation process through their postseismic slip. The interplay amongst foreshocks, and between foreshocks and the larger-scale nucleation process, is a topic of our future work.
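For reference, the rate-and-state friction law that governs such simulated faults has a standard form, shown below with the aging law for the state variable. The parameter values are generic laboratory-scale numbers, not those of the cited simulations.

```python
# Standard rate-and-state friction with the aging (Dieterich) state law;
# parameter values are generic assumptions.
import numpy as np

mu0, a, b, V0, Dc = 0.6, 0.010, 0.015, 1.0e-6, 1.0e-5  # Dc in metres

def friction(V, theta):
    """Friction coefficient at slip rate V [m/s] and state variable theta [s]."""
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def dtheta_dt(V, theta):
    """Aging-law state evolution."""
    return 1.0 - V * theta / Dc

# At steady state (theta = Dc/V) the fault is velocity weakening when b > a,
# which is the regime that allows nucleation and foreshock-like events.
for V in (1e-7, 1e-6, 1e-5):
    print(f"V = {V:.0e} m/s  steady-state mu = {friction(V, Dc / V):.4f}")
```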
The importance of stochasticity and internal variability in geomorphic erosion system
NASA Astrophysics Data System (ADS)
Kim, J.; Ivanov, V. Y.; Fatichi, S.
2016-12-01
Understanding soil erosion is essential for a range of studies but the predictive skill of prognostic models and reliability of national-scale assessments have been repeatedly questioned. Indeed, data from multiple environments indicate that fluvial soil loss is highly non-unique and its frequency distributions exhibit heavy tails. We reveal that these features are attributed to the high sensitivity of erosion response to micro-scale variations of soil erodibility - `geomorphic internal variability'. The latter acts as an intermediary between forcing and erosion dynamics, augmenting the conventionally emphasized effects of `external variability' (climate, topography, land use, management form). Furthermore, we observe a reduction of erosion non-uniqueness at larger temporal scales that correlates with environment stochasticity. Our analysis shows that this effect can be attributed to the larger likelihood of alternating characteristic regimes of sediment dynamics. The corollary of this study is that the glaring gap - the inherently large uncertainties and the fallacy of representativeness of central tendencies - must be conceded in soil loss assessments. Acknowledgement: This research was supported by a grant (16AWMP-B083066-03) from Water Management Research Program funded by Ministry of Land, Infrastructure and Transport of Korean government, and by the faculty research fund of Sejong University in 2016.
CMB hemispherical asymmetry from non-linear isocurvature perturbations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Assadullahi, Hooshyar; Wands, David; Firouzjahi, Hassan
2015-04-01
We investigate whether non-adiabatic perturbations from inflation could produce an asymmetric distribution of temperature anisotropies on large angular scales in the cosmic microwave background (CMB). We use a generalised non-linear δN formalism to calculate the non-Gaussianity of the primordial density and isocurvature perturbations due to the presence of non-adiabatic, but approximately scale-invariant, field fluctuations during multi-field inflation. This local-type non-Gaussianity leads to a correlation between very long wavelength inhomogeneities, larger than our observable horizon, and smaller scale fluctuations in the radiation and matter density. Matter isocurvature perturbations contribute primarily to low CMB multipoles and hence can lead to a hemispherical asymmetry on large angular scales, with negligible asymmetry on smaller scales. In curvaton models, where the matter isocurvature perturbation is partly correlated with the primordial density perturbation, we are unable to obtain a significant asymmetry on large angular scales while respecting current observational constraints on the observed quadrupole. However, in the axion model, where the matter isocurvature and primordial density perturbations are uncorrelated, we find it may be possible to obtain a significant asymmetry due to isocurvature modes on large angular scales. Such an isocurvature origin for the hemispherical asymmetry would naturally give rise to a distinctive asymmetry in the CMB polarisation on large scales.
ERIC Educational Resources Information Center
Yoon, Susan A.; Koehler-Yom, Jessica; Anderson, Emma; Lin, Joyce; Klopfer, Eric
2015-01-01
Background: This exploratory study is part of a larger-scale research project aimed at building theoretical and practical knowledge of complex systems in students and teachers with the goal of improving high school biology learning through professional development and a classroom intervention. Purpose: We propose a model of adaptive expertise to…
Idealized climate change simulations with a high-resolution physical model: HadGEM3-GC2
NASA Astrophysics Data System (ADS)
Senior, Catherine A.; Andrews, Timothy; Burton, Chantelle; Chadwick, Robin; Copsey, Dan; Graham, Tim; Hyder, Pat; Jackson, Laura; McDonald, Ruth; Ridley, Jeff; Ringer, Mark; Tsushima, Yoko
2016-06-01
Idealized climate change simulations with a new physical climate model, HadGEM3-GC2, from the Met Office Hadley Centre are presented and contrasted with the earlier MOHC model, HadGEM2-ES. The role of atmospheric resolution is also investigated. The Transient Climate Response (TCR) is 1.9 K/2.1 K at N216/N96 and the Effective Climate Sensitivity (ECS) is 3.1 K/3.2 K at N216/N96. These are substantially lower than for HadGEM2-ES (TCR: 2.5 K; ECS: 4.6 K), arising from a combination of changes in the size of climate feedbacks. While the change in the net cloud feedback between HadGEM3 and HadGEM2 is relatively small, there is a change in sign of its longwave component and a strengthening of its shortwave component. At a global scale, there is little impact of the increase in atmospheric resolution on the future climate change signal, and even at a broad regional scale many features are robust, including tropical rainfall changes; however, there are some significant exceptions. For the North Atlantic and western Europe, the tripolar pattern of winter storm changes found in most CMIP5 models is little impacted by resolution, but for the most intense storms there is a larger percentage increase in number at higher resolution than at lower resolution. Arctic sea-ice sensitivity shows a larger dependence on resolution than on atmospheric physics.
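The quoted TCR and ECS values can be related through the standard energy-balance approximations ECS = F_2x / lambda and TCR = F_2x / (lambda + kappa), where lambda is the net climate feedback parameter and kappa an ocean heat-uptake efficiency. The sketch below evaluates these relations with round illustrative numbers; they are not HadGEM3-GC2 or HadGEM2-ES diagnostics.

```python
# Energy-balance relations between feedbacks and TCR/ECS; numbers are illustrative.
F_2x = 3.7   # W m-2, radiative forcing for doubled CO2
kappa = 0.6  # W m-2 K-1, assumed ocean heat-uptake efficiency

for lam in (0.8, 1.2):  # net climate feedback parameter, W m-2 K-1
    ecs = F_2x / lam
    tcr = F_2x / (lam + kappa)
    print(f"lambda = {lam:.1f} W m-2 K-1  ->  ECS = {ecs:.1f} K, TCR = {tcr:.1f} K")
```

Larger (more strongly stabilizing) feedback parameters give lower sensitivities, which is the direction of the HadGEM2-ES to HadGEM3-GC2 change described above.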
NASA Astrophysics Data System (ADS)
Hedelius, J.; Wennberg, P. O.; Wunch, D.; Roehl, C. M.; Podolske, J. R.; Hillyard, P.; Iraci, L. T.
2017-12-01
Greenhouse gas (GHG) emissions from California's South Coast Air Basin (SoCAB) have been studied extensively using a variety of tower, aircraft, remote sensing, emission inventory, and modeling studies. It is impractical to survey GHG fluxes from all urban areas and hot spots to the extent the SoCAB has been studied, but it can serve as a test location for scaling methods globally. We use a combination of remote sensing measurements from ground-based (Total Carbon Column Observing Network, TCCON) and space-based (Orbiting Carbon Observatory-2, OCO-2) sensors in an inversion to obtain the carbon dioxide flux from the SoCAB. We also perform a variety of sensitivity tests to see how the inversion performs using different model parameterizations. Fluxes do not significantly depend on the mixed layer depth, but are sensitive to the model surface layers (<5 m). Carbon dioxide fluxes are larger than those from bottom-up inventories by about 20% and, along with CO, show a significant weekend:weekday effect. Methane fluxes show little weekend change. Results also include flux estimates for sub-regions of the SoCAB. Larger top-down than bottom-up fluxes highlight the need for additional work on both approaches. Higher top-down fluxes could arise from sampling bias or model bias, or may indicate that bottom-up values underestimate sources. Lessons learned here may help in scaling up inversions to hundreds of urban systems using space-based observations.
NASA Astrophysics Data System (ADS)
Vogt, Marissa F.; Withers, Paul; Fallows, Kathryn; Flynn, Casey L.; Andrews, David J.; Duru, Firdevs; Morgan, David D.
2016-10-01
Radio occultation electron density measurements from the Mariner 9 and Viking spacecraft, which orbited Mars in the 1970s, have recently become available in a digital format. These data are highly complementary to the radio occultation electron density profiles from Mars Global Surveyor, which were restricted in solar zenith angle and altitude. We have compiled data from the Mariner 9, Viking, and Mars Global Surveyor radio occultation experiments for comparison to electron density measurements made by the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS), the topside radar sounder on Mars Express, and to MARSIS-based empirical density models. We find that the electron densities measured by radio occultation are in generally good agreement with the MARSIS data and model, especially near the altitude of the peak electron density, but that the MARSIS data and model display a larger plasma scale height than the radio occultation profiles at altitudes between the peak density and 200 km. Consequently, the MARSIS-measured and model electron densities are consistently larger than radio occultation densities at altitudes of 200-300 km. Finally, we have analyzed transitions in the topside ionosphere, at the boundary between the photochemically controlled and transport-controlled regions, and identified the average transition altitude, or altitude at which a change in scale height occurs. The average transition altitude is 200 km in the Mariner 9 and Viking radio occultation profiles and in profiles of the median MARSIS radar sounding electron densities.
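A plasma scale height of the kind compared here can be estimated by fitting ln(Ne) against altitude over a chosen range and taking H = -1/slope; repeating the fit above and below a candidate altitude and looking for a change in H is one way to locate the transition boundary. The profile below is synthetic and the altitude ranges are illustrative.

```python
# Scale-height fit to a synthetic topside electron-density profile.
import numpy as np

alt = np.arange(140.0, 300.0, 5.0)                    # altitude [km]
H_true = 30.0                                         # km, assumed scale height
ne = 1.5e5 * np.exp(-(alt - 140.0) / H_true)          # cm^-3, synthetic profile

fit_range = (alt >= 160) & (alt <= 200)               # illustrative fit window
slope, intercept = np.polyfit(alt[fit_range], np.log(ne[fit_range]), 1)
print(f"fitted scale height: {-1.0 / slope:.1f} km")  # recovers ~30 km
```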
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borghesi, Giulio; Bellan, Josette, E-mail: josette.bellan@jpl.nasa.gov; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109-8099
2015-03-15
A Direct Numerical Simulation (DNS) database was created representing mixing of species under high-pressure conditions. The configuration considered is that of a temporally evolving mixing layer. The database was examined and analyzed for the purpose of modeling some of the unclosed terms that appear in the Large Eddy Simulation (LES) equations. Several metrics are used to understand the LES modeling requirements. First, a statistical analysis of the DNS-database large-scale flow structures was performed to provide a metric for probing the accuracy of the proposed LES models, as the flow fields obtained from accurate LESs should contain structures of morphology statistically similar to those observed in the filtered-and-coarsened DNS (FC-DNS) fields. To characterize the morphology of the large-scale structures, the Minkowski functionals of the iso-surfaces were evaluated for two different fields: the second invariant of the rate of deformation tensor and the irreversible entropy production rate. To remove the presence of the small flow scales, both of these fields were computed using the FC-DNS solutions. It was found that the large-scale structures of the irreversible entropy production rate exhibit higher morphological complexity than those of the second invariant of the rate of deformation tensor, indicating that the burden of modeling will be on recovering the thermodynamic fields. Second, to evaluate the physical effects which must be modeled at the subfilter scale, an a priori analysis was conducted. This a priori analysis, conducted in the coarse-grid LES regime, revealed that standard closures for the filtered pressure, the filtered heat flux, and the filtered species mass fluxes, in which a filtered function of a variable is equal to the function of the filtered variable, may no longer be valid for the high-pressure flows considered in this study. The terms requiring modeling are the filtered pressure, the filtered heat flux, the filtered pressure work, and the filtered species mass fluxes. Improved models were developed based on a scale-similarity approach and were found to perform considerably better than the classical ones. These improved models were also assessed in an a posteriori study. Different combinations of the standard models and the improved ones were tested. At the relatively small Reynolds numbers achievable in DNS and at the relatively small filter widths used here, the standard models for the filtered pressure, the filtered heat flux, and the filtered species fluxes were found to yield accurate results for the morphology of the large-scale structures present in the flow. Analysis of the temporal evolution of several volume-averaged quantities representative of the mixing layer growth, and of the cross-stream variation of homogeneous-plane averages and second-order correlations, as well as of visualizations, indicated that the models performed equivalently for the conditions of the simulations. The expectation is that at the much larger Reynolds numbers and much larger filter widths used in practical applications, the improved models will have much more accurate performance than the standard ones.
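The scale-similarity idea referred to above can be sketched generically: apply a test filter to the resolved fields and use filt(f*g) - filt(f)*filt(g) as a model for the unresolved correlation. The box test filter and random stand-in fields below are purely illustrative and do not reproduce the study's specific closures for pressure, heat flux, or species fluxes.

```python
# Generic scale-similarity estimate of a subfilter correlation on synthetic 2D fields.
import numpy as np
from scipy.ndimage import uniform_filter

def box_filter(f, width=4):
    return uniform_filter(f, size=width, mode="wrap")   # periodic box test filter

rng = np.random.default_rng(2)
u = box_filter(rng.standard_normal((64, 64)), 2)   # stand-ins for resolved fields
T = box_filter(rng.standard_normal((64, 64)), 2)

# Scale-similarity model of the subfilter correlation of u and T:
similarity_term = box_filter(u * T) - box_filter(u) * box_filter(T)
print(f"rms of modeled subfilter correlation: {np.sqrt(np.mean(similarity_term**2)):.3e}")
```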
Stochastic summation of empirical Green's functions
Wennerberg, Leif
1990-01-01
Two simple strategies are presented that use random delay times for repeatedly summing the record of a relatively small earthquake to simulate the effects of a larger earthquake. The simulations do not assume any fault plane geometry or rupture dynamics, but rely only on the ω−2 spectral model of an earthquake source and elementary notions of source complexity. The strategies simulate ground motions for all frequencies within the bandwidth of the record of the event used as a summand. The first strategy, which introduces the basic ideas, is a single-stage procedure that consists of simply adding many small events with random time delays. The probability distribution for delays has the property that its amplitude spectrum is determined by the ratio of ω−2 spectra, and its phase spectrum is identically zero. A simple expression is given for the computation of this zero-phase scaling distribution. The moment rate function resulting from the single-stage simulation is quite simple and hence is probably not realistic for high-frequency (>1 Hz) ground motion of events larger than ML ∼ 4.5 to 5. The second strategy is a two-stage summation that simulates source complexity with a few random subevent delays determined using the zero-phase scaling distribution, and then clusters energy around these delays to get an ω−2 spectrum for the sum. Thus, the two-stage strategy allows simulations of complex events of any size for which the ω−2 spectral model applies. Interestingly, a single-stage simulation with too few ω−2 records to get a good fit to an ω−2 large-event target spectrum yields a record whose spectral asymptotes are consistent with the ω−2 model, but that includes a region in its spectrum between the corner frequencies of the larger and smaller events reasonably approximated by a power law trend. This spectral feature has also been discussed as reflecting the process of partial stress release (Brune, 1970), an asperity failure (Boatwright, 1984), or the breakdown of ω−2 scaling due to rupture significantly longer than the width of the seismogenic zone (Joyner, 1984).
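The zero-phase scaling distribution idea can be sketched numerically: form the ratio of the target (large-event) and summand (small-event) amplitude spectra, assign zero phase, and inverse-transform to obtain a time function. The Brune-type spectral shape A(f) = M0 / (1 + (f/fc)^2) and all moments and corner frequencies below are assumptions for illustration, not the paper's exact expression.

```python
# Illustrative zero-phase scaling distribution from a ratio of omega-squared spectra.
import numpy as np

dt, n = 0.01, 4096
f = np.fft.rfftfreq(n, dt)

def omega2_spectrum(f, M0, fc):
    return M0 / (1.0 + (f / fc) ** 2)       # assumed Brune-type shape

# Large-event target over small-event summand (M0 ratio 1000, fc ratio 10).
ratio = omega2_spectrum(f, M0=1.0e3, fc=0.5) / omega2_spectrum(f, M0=1.0, fc=5.0)

# Zero phase: the spectrum is purely real, so the inverse transform is symmetric in time.
dist = np.fft.irfft(ratio, n=n)
dist = np.fft.fftshift(dist) / np.sum(np.abs(dist))   # center and normalise
t = (np.arange(n) - n // 2) * dt
print(f"distribution peaks at t = {t[np.argmax(dist)]:.2f} s, sums to {dist.sum():.3f}")
```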
IGLOO: an Intermediate Complexity Framework to Simulate Greenland Ice-Ocean Interactions
NASA Astrophysics Data System (ADS)
Perrette, M.; Calov, R.; Beckmann, J.; Alexander, D.; Beyer, S.; Ganopolski, A.
2017-12-01
The Greenland ice sheet is a major contributor to current and future sea level rise associated with climate warming. It is widely believed that over a century time scale, surface melting is the main driver of Greenland ice volume change, in contrast to melting by the ocean. This is because, compared to Antarctica, its southern and larger twin, Greenland has relatively warmer air and less ice area exposed to melting by ocean water. Yet most modeling studies do not have adequate grid resolution to represent fine-scale outlet glaciers and fjords at the margin of the ice sheet, where ice-ocean interaction occurs, and must use rather crude parameterizations to represent this process. Additionally, the ice-sheet area grounded below sea level has been reassessed upwards in the most recent estimates of bedrock elevation under the Greenland ice sheet, revealing a larger potential for marine-mediated melting than previously thought. In this work, we develop an original approach to estimate the potential Greenland ice sheet contribution to sea level rise from ocean melting, in an intermediate complexity framework, IGLOO. We use a medium-resolution (5 km) ice-sheet model coupled interactively to a number of 1-D flowline models for the individual outlet glaciers. We propose a semi-objective methodology to derive 1-D glacier geometries from 2-D Greenland datasets and present preliminary results of coupled ice-sheet-glacier simulations with IGLOO.
Scaling NASA Applications to 1024 CPUs on Origin 3K
NASA Technical Reports Server (NTRS)
Taft, Jim
2002-01-01
The long and highly successful joint SGI-NASA research effort in ever larger SSI systems was to a large degree the result of the successful development of the MLP scalable parallel programming paradigm developed at ARC: 1) MLP scaling in real production codes justified ever larger systems at NAS; 2) MLP scaling on the 256p Origin 2000 gave SGI impetus to productize 256p systems; 3) MLP scaling on 512p gave SGI the courage to build the 1024p O3K; and 4) the history of MLP success resulted in an IBM Star Cluster based MLP effort.
NASA Astrophysics Data System (ADS)
Omrani, H.; Drobinski, P.; Dubos, T.
2009-09-01
In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited-area model simulation. The limited-area model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In the two sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
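The trajectory-divergence diagnostic described above can be illustrated with any chaotic system: run a reference and a slightly perturbed simulation and fit the exponential growth rate of their separation. The sketch below uses the Lorenz-63 equations purely as a stand-in for the quasi-geostrophic model; the perturbation size and fit window are illustrative.

```python
# Leading-Lyapunov-exponent estimate from the divergence of two nearby trajectories
# of a toy chaotic system (Lorenz-63 stands in for the quasi-geostrophic model).
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 20, 2001)
ref = solve_ivp(lorenz, (0, 20), [1.0, 1.0, 1.0], t_eval=t_eval, max_step=0.01)
pert = solve_ivp(lorenz, (0, 20), [1.0 + 1e-8, 1.0, 1.0], t_eval=t_eval, max_step=0.01)

sep = np.linalg.norm(ref.y - pert.y, axis=0)
grow = (sep > 1e-7) & (sep < 1.0)                 # exponential-growth stage only
lam = np.polyfit(t_eval[grow], np.log(sep[grow]), 1)[0]
print(f"estimated leading Lyapunov exponent ~ {lam:.2f} per time unit")
```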
Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, M D; Cole, S; Frenk, C S
2011-02-14
We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires approximately 8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
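The core resampling step can be sketched as replacing the low-|k| Fourier modes of a periodic field with fresh Gaussian draws of the same amplitude while keeping small-scale modes fixed. The 2D white-noise field, the cut-off wavenumber, and the handling of Hermitian symmetry below are illustrative simplifications; the paper's method additionally accounts for nonlinear mode coupling, which is not reproduced here.

```python
# Simplified large-scale Fourier-mode resampling of a periodic 2D field.
import numpy as np

rng = np.random.default_rng(3)
n = 128
field = rng.standard_normal((n, n))
fk = np.fft.fft2(field)

kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
low_k = np.sqrt(kx**2 + ky**2) < 0.05          # "large-scale" modes only

# Fresh complex Gaussian modes with the same amplitudes on the low-k shell.
new_modes = rng.standard_normal(fk.shape) + 1j * rng.standard_normal(fk.shape)
fk_resampled = np.where(low_k, np.abs(fk) * new_modes / np.sqrt(2.0), fk)

# Taking the real part skips the Hermitian-symmetry bookkeeping a production
# code would do; this is only meant to show the structure of the operation.
resampled = np.real(np.fft.ifft2(fk_resampled))
print(f"variance before {field.var():.3f}, after {resampled.var():.3f}")
```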
Crucial nesting habitat for gunnison sage-grouse: A spatially explicit hierarchical approach
Aldridge, Cameron L.; Saher, D.J.; Childers, T.M.; Stahlnecker, K.E.; Bowen, Z.H.
2012-01-01
Gunnison sage-grouse (Centrocercus minimus) is a species of special concern and is currently considered a candidate species under the Endangered Species Act. Careful management is therefore required to ensure that suitable habitat is maintained, particularly because much of the species' current distribution is faced with exurban development pressures. We assessed hierarchical nest site selection patterns of Gunnison sage-grouse inhabiting the western portion of the Gunnison Basin, Colorado, USA, at multiple spatial scales, using logistic regression-based resource selection functions. Models were selected using the Akaike Information Criterion corrected for small sample sizes (AICc), and predictive surfaces were generated using model-averaged relative probabilities. Landscape-scale factors that had the most influence on nest site selection included the proportion of sagebrush cover >5%, mean productivity, and density of 2-wheel-drive roads. The landscape-scale predictive surface captured 97% of known Gunnison sage-grouse nests within the top 5 of 10 prediction bins, implicating 57% of the basin as crucial nesting habitat. Crucial habitat identified by the landscape model was used to define the extent for patch-scale modeling efforts. Patch-scale variables that had the greatest influence on nest site selection were the proportion of big sagebrush cover >10%, distance to residential development, distance to high-volume paved roads, and mean productivity. This model accurately predicted independent nest locations. The unique hierarchical structure of our models more accurately captures the nested nature of habitat selection, and allowed for increased discrimination within larger landscapes of suitable habitat. We extrapolated the landscape-scale model to the entire Gunnison Basin because of conservation concerns for this species. We believe this predictive surface is a valuable tool which can be incorporated into land use and conservation planning as well as the assessment of future land-use scenarios. © 2011 The Wildlife Society.
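For reference, AICc-based ranking works from the standard correction AICc = AIC + 2k(k+1)/(n-k-1), with candidate models compared by delta-AICc and Akaike weights. The model names, log-likelihoods, and sample size in the sketch below are invented for illustration and are not the study's candidate set.

```python
# AICc ranking and Akaike weights for a set of invented candidate models.
import numpy as np

def aicc(log_lik, k, n):
    aic = 2 * k - 2 * log_lik
    return aic + 2 * k * (k + 1) / (n - k - 1)

n_nests = 120   # hypothetical sample size
candidates = {"sagebrush+roads": (-70.2, 4), "sagebrush only": (-75.8, 2),
              "full model": (-69.5, 7)}          # name: (log-likelihood, k)
scores = {m: aicc(ll, k, n_nests) for m, (ll, k) in candidates.items()}
best = min(scores.values())
weights = {m: np.exp(-0.5 * (s - best)) for m, s in scores.items()}
total = sum(weights.values())
for m in sorted(scores, key=scores.get):
    print(f"{m:18s} AICc={scores[m]:7.2f}  dAICc={scores[m]-best:5.2f}  "
          f"w={weights[m]/total:.2f}")
```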
Observing and Simulating Diapycnal Mixing in the Canadian Arctic Archipelago
NASA Astrophysics Data System (ADS)
Hughes, K.; Klymak, J. M.; Hu, X.; Myers, P. G.; Williams, W. J.; Melling, H.
2016-12-01
High-spatial-resolution observations in the central Canadian Arctic Archipelago are analysed in conjunction with process-oriented modelling to estimate the flow pathways among the constricted waterways, understand the nature of the hydraulic control(s), and assess the influence of smaller scale (metres to kilometres) phenomena such as internal waves and topographically induced eddies. The observations repeatedly display isopycnal displacements of 50 m as dense water plunges over a sill. Depth-averaged turbulent dissipation rates near the sill estimated from these observations are typically 10⁻⁶ to 10⁻⁵ W kg⁻¹, a range that is three orders of magnitude larger than that for the open ocean. These and other estimates are compared against a 1/12° basin-scale model from which we estimate diapycnal mixing rates using a volume-integrated advection-diffusion equation. Much of the mixing in this simulation is concentrated near constrictions within Barrow Strait and Queens Channel, the latter being our observational site. This suggests the model is capable of capturing topographically induced mixing. However, such mixing is expected to be enhanced in the presence of tides, a process not included in our basin-scale simulation or other similar models. Quantifying this enhancement is another objective of our process-oriented modelling.
Perturbations and gradients as fundamental tests for modeling the soil carbon cycle
NASA Astrophysics Data System (ADS)
Bond-Lamberty, B. P.; Bailey, V. L.; Becker, K.; Fansler, S.; Hinkle, C.; Liu, C.
2013-12-01
An important step in matching process-level knowledge to larger-scale measurements and model results is to challenge those models with site-specific perturbations and/or changing environmental conditions. Here we subject modified versions of an ecosystem process model to two stringent tests: replicating a long-term climate change dryland experiment (Rattlesnake Mountain) and partitioning the carbon fluxes of a soil drainage gradient in the northern Everglades (Disney Wilderness Preserve). For both sites, on-site measurements were supplemented by laboratory incubations of soil columns. We used a parameter-space search algorithm to optimize, within observational limits, the model's influential inputs, so that the spun-up carbon stocks and fluxes matched observed values. Modeled carbon fluxes (net primary production and net ecosystem exchange) agreed with measured values, within observational error limits, but the model's partitioning of soil fluxes (autotrophic versus heterotrophic) did not match laboratory measurements from either site. Accounting for site heterogeneity at DWP, modeled carbon exchange was reasonably consistent with values from eddy covariance. We discuss the implications of this work for ecosystem- to global-scale modeling of ecosystems in a changing climate.
Penn, Colin A.; Bearup, Lindsay A.; Maxwell, Reed M.; Clow, David W.
2016-01-01
The effects of mountain pine beetle (MPB)-induced tree mortality on a headwater hydrologic system were investigated using an integrated physical modeling framework with a high-resolution computational grid. Simulations of MPB-affected and unaffected conditions, each with identical atmospheric forcing for a normal water year, were compared at multiple scales to evaluate the effects of scale on MPB-affected hydrologic systems. Individual locations within the larger model were shown to maintain hillslope-scale processes affecting snowpack dynamics, total evapotranspiration, and soil moisture that are comparable to several field-based studies and previous modeling work. Hillslope-scale analyses also highlight the influence of compensating changes in evapotranspiration and snow processes. Reduced transpiration in the Grey Phase of MPB-induced tree mortality was offset by increased late-summer evaporation, while overall snowpack dynamics were more dependent on elevation effects than on MPB-induced tree mortality. At the watershed scale, unaffected areas obscured the magnitude of MPB effects. Annual water yield from the watershed increased during Grey Phase simulations by 11 percent, a difference that would be difficult to diagnose with long-term gage observations that are complicated by inter-annual climate variability. The effects on hydrology observed and simulated at the hillslope scale can be further damped at the watershed scale, which spans more life zones and a broader range of landscape properties. These scaling effects may change under extreme conditions, e.g., increased total MPB-affected area or a water year with above-average snowpack.
Naming Game with Multiple Hearers
NASA Astrophysics Data System (ADS)
Li, Bing; Chen, Guanrong; Chow, Tommy W. S.
2013-05-01
A new model called the Naming Game with Multiple Hearers (NGMH) is proposed in this paper. A naming game over a population of individuals aims to reach consensus on the name of an object through pair-wise local interactions among all the individuals. The proposed NGMH model describes the learning process of a new word, in a population with one speaker and multiple hearers at each interaction, towards convergence. The characteristics of NGMH are examined on three types of network topologies, namely the ER random-graph network, the WS small-world network, and the BA scale-free network. Comparative analysis of the convergence time is performed, revealing that a topology with a larger average (node) degree can reach consensus faster than the others over the same population. It is found that, for a homogeneous network, the average degree is the limiting value of the number of hearers, which reduces the individual ability to learn new words and consequently decreases the convergence time; for a scale-free network, this limiting value is the deviation of the average degree. It is also found that a network with a larger clustering coefficient takes a longer time to converge; in particular, a small-world network with the smallest rewiring probability takes the longest time to reach consensus. As more new nodes are added to scale-free networks with different degree distributions, their convergence time appears to be robust against the network-size variation. Most new findings reported in this paper are different from those of the single-speaker/single-hearer naming games documented in the literature.
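As a hedged sketch of a one-speaker, multiple-hearer naming game on a network, the snippet below extends the standard minimal naming game by updating several hearers independently per interaction; the NGMH paper's exact update rules and success criterion may differ. The network type, size, and number of hearers are illustrative choices.

```python
# Minimal multi-hearer naming game on a small-world network (illustrative rules).
import random

import networkx as nx

def ngmh(n=200, k=8, hearers=3, steps=200_000, seed=0):
    random.seed(seed)
    g = nx.connected_watts_strogatz_graph(n, k, 0.1, seed=seed)
    vocab = [set() for _ in range(n)]
    next_word = 0
    for t in range(steps):
        speaker = random.randrange(n)
        chosen = random.sample(list(g.neighbors(speaker)),
                               min(hearers, g.degree(speaker)))
        if not vocab[speaker]:
            vocab[speaker].add(next_word)        # invent a new word if needed
            next_word += 1
        word = random.choice(tuple(vocab[speaker]))
        for h in chosen:
            if word in vocab[h]:                 # success: both collapse to the word
                vocab[h] = {word}
                vocab[speaker] = {word}
            else:                                # failure: hearer learns the word
                vocab[h].add(word)
        if all(len(v) == 1 for v in vocab) and len({next(iter(v)) for v in vocab}) == 1:
            return t                             # global consensus reached
    return None

print("consensus after", ngmh(), "interactions")
```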
Validating and improving a zero-dimensional stack voltage model of the Vanadium Redox Flow Battery
NASA Astrophysics Data System (ADS)
König, S.; Suriyah, M. R.; Leibfried, T.
2018-02-01
Simple, computationally efficient battery models can contribute significantly to the development of flow batteries. However, validation studies for these models on an industrial-scale stack level are rarely published. We first extensively present a simple stack voltage model for the Vanadium Redox Flow Battery. For modeling the concentration overpotential, we derive mass transfer coefficients from experimental results presented in the 1990s. The calculated mass transfer coefficient of the positive half-cell is 63% larger than of the negative half-cell, which is not considered in models published to date. Further, we advance the concentration overpotential model by introducing an apparent electrochemically active electrode surface which differs from the geometric electrode area. We use the apparent surface as fitting parameter for adapting the model to experimental results of a flow battery manufacturer. For adapting the model, we propose a method for determining the agreement between model and reality quantitatively. To protect the manufacturer's intellectual property, we introduce a normalization method for presenting the results. For the studied stack, the apparent electrochemically active surface of the electrode is 41% larger than its geometrical area. Hence, the current density in the diffusion layer is 29% smaller than previously reported for a zero-dimensional model.
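One common zero-dimensional form of the concentration overpotential uses a mass-transfer-limited current i_lim = n*F*k_m*c and eta = (R*T/(n*F)) * ln(i_lim/(i_lim - i)). The sketch below evaluates that textbook form with a positive-electrode mass transfer coefficient 63% larger than the negative one, as noted in the abstract; the k_m values, concentration, and current densities are illustrative and this is not necessarily the authors' exact model.

```python
# Textbook-style concentration overpotential for a zero-dimensional VRFB model;
# all operating values are illustrative assumptions.
import numpy as np

F, R, T, n_e = 96485.0, 8.314, 298.15, 1           # vanadium couples: 1 e- transfer

def eta_concentration(i, k_m, c_bulk):
    """Concentration overpotential [V] at current density i [A/m^2]."""
    i_lim = n_e * F * k_m * c_bulk                 # limiting current density [A/m^2]
    return (R * T) / (n_e * F) * np.log(i_lim / (i_lim - i))

c_bulk = 1500.0                                    # reactant concentration [mol/m^3]
k_m_neg, k_m_pos = 1.0e-5, 1.63e-5                 # m/s; positive ~63% larger (illustrative)
for i in (200.0, 600.0, 1000.0):
    print(f"i = {i:6.0f} A/m^2   eta- = {eta_concentration(i, k_m_neg, c_bulk)*1e3:5.1f} mV"
          f"   eta+ = {eta_concentration(i, k_m_pos, c_bulk)*1e3:5.1f} mV")
```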
NASA Astrophysics Data System (ADS)
Lamraoui, F.; Booth, J. F.; Naud, C. M.
2017-12-01
The representation of subgrid-scale processes of low-level marine clouds located in the post-cold-frontal region poses a serious challenge for climate models. More precisely, the boundary layer parameterizations are predominantly designed for individual regimes that evolve gradually over time and do not accommodate a cold front passage that can rapidly modify the boundary layer. Also, the microphysics schemes respond differently to the quick development of the boundary layer schemes, especially under unstable conditions. To improve the understanding of cloud physics in the post-cold-frontal region, the present study focuses on exploring the relationship between cloud properties, the local processes and the large-scale conditions. In order to address these questions, we explore the WRF sensitivity to the interaction between various combinations of boundary layer and microphysics parameterizations, including the Community Atmospheric Model version 5 (CAM5) physics package, in a perturbed-physics ensemble. Then, we evaluate these simulations against ground-based ARM observations over the Azores. The WRF-based simulations demonstrate particular sensitivities of the marine cold front passage and the associated post-cold-frontal clouds to the domain size, the resolution and the physical parameterizations. First, it is found that in multiple case studies the model cannot generate the cold front passage when the domain size is larger than 3000 km2. Instead, the modeled cold front stalls, which shows the importance of properly capturing the synoptic-scale conditions. The simulations reveal a persistent delay in capturing the cold front passage and also an underestimated duration of the post-cold-frontal conditions. Analysis of the perturbed-physics ensemble shows that changing the microphysics scheme leads to larger differences in the modeled clouds than changing the boundary layer scheme. The in-cloud heating tendencies are analyzed to explain this sensitivity.
Performance analysis of parallel gravitational N-body codes on large GPU clusters
NASA Astrophysics Data System (ADS)
Huang, Si-Yi; Spurzem, Rainer; Berczik, Peter
2016-01-01
We compare the performance of two very different parallel gravitational N-body codes for astrophysical simulations on large Graphics Processing Unit (GPU) clusters, both of which are pioneers in their own fields as well as on certain mutual scales - NBODY6++ and Bonsai. We carry out benchmarks of the two codes by analyzing their performance, accuracy and efficiency through the modeling of structure decomposition and timing measurements. We find that both codes are heavily optimized to leverage the computational potential of GPUs, as their performance has approached half of the maximum single-precision performance of the underlying GPU cards. With such performance we predict that a speed-up of 200 - 300 can be achieved when up to 1k processors and GPUs are employed simultaneously. We discuss quantitative comparisons of the two codes, finding that in the same test cases Bonsai adopts larger time steps as well as larger relative energy errors than NBODY6++, typically 10 - 50 times larger, depending on the chosen parameters of the codes. Although the two codes are built for different astrophysical applications, in specified conditions they may overlap in performance at certain physical scales, thus allowing the user to choose either one by fine-tuning parameters accordingly.
Modified Baryonic Dynamics: two-component cosmological simulations with light sterile neutrinos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angus, G.W.; Gentile, G.; Diaferio, A.
2014-10-01
In this article we continue to test cosmological models centred on Modified Newtonian Dynamics (MOND) with light sterile neutrinos, which could in principle be a way to solve the fine-tuning problems of the standard model on galaxy scales while preserving successful predictions on larger scales. Due to previous failures of the simple MOND cosmological model, here we test a speculative model where the modified gravitational field is produced only by the baryons and the sterile neutrinos produce a purely Newtonian field (hence Modified Baryonic Dynamics). We use two-component cosmological simulations to separate the baryonic N-body particles from the sterile neutrino ones. The premise is to attenuate the over-production of massive galaxy cluster halos which were prevalent in the original MOND plus light sterile neutrinos scenario. Theoretical issues with such a formulation notwithstanding, the Modified Baryonic Dynamics model fails to produce the correct amplitude for the galaxy cluster mass function for any reasonable value of the primordial power spectrum normalisation.
Quantifying Stock Return Distributions in Financial Markets
Botta, Federico; Moat, Helen Susannah; Stanley, H. Eugene; Preis, Tobias
2015-01-01
Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at a second-by-second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales.
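A minimal sketch of this kind of multi-time-scale return analysis: log returns are formed at several aggregation intervals and a crude Hill-type tail-exponent estimate is printed for each. The price series here is synthetic (a Student-t random walk), not the Dow Jones dataset used in the study, and the parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic second-by-second log-price series standing in for real data.
n_seconds = 500_000
log_price = np.cumsum(1e-4 * rng.standard_t(df=3, size=n_seconds))

def sorted_abs_returns(log_price, dt):
    """Absolute log returns at time scale dt (seconds), sorted descending."""
    r = log_price[dt:] - log_price[:-dt]
    return np.sort(np.abs(r))[::-1]

for dt in (300, 900, 3600):
    tail = sorted_abs_returns(log_price, dt)
    k = max(int(0.01 * tail.size), 10)           # largest 1% of returns
    # Hill estimator of the tail exponent from the top-k order statistics.
    hill = 1.0 / np.mean(np.log(tail[:k] / tail[k]))
    print(f"dt = {dt:5d} s: tail exponent estimate ~ {hill:.2f}")
```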
STEM and APT characterization of scale formation on a La,Hf,Ti-doped NiCrAl model alloy.
Unocic, Kinga A; Chen, Yimeng; Shin, Dongwon; Pint, Bruce A; Marquis, Emmanuelle A
2018-06-01
A thermally grown scale formed on a cast NiCrAl model alloy doped with lanthanum, hafnium, and titanium was examined after isothermal exposure at 1100 °C for 100 h in dry flowing O2 to understand the dopant segregation along scale grain boundaries. The complex scale formed on the alloy surface was composed of two parts: phase-dependent, thin (<250 nm) outer layers and a columnar-grained ∼3.5 μm inner alumina layer. Two types of oxides formed between the inner and outer scale layers: small (3-15 nm) La2O3 and larger (≤50 nm) HfO2 oxide precipitates. Nonuniform distributions of the hafnium, lanthanum, and titanium dopants were observed along the inner scale grain boundaries, with hafnium dominating in most of the grain boundaries of α-Al2O3. The concentration of reactive elements (RE) seemed to strongly depend on the grain boundary structure. The level of titanium grain boundary segregation in the inner scale decreased toward the model alloy (substrate), confirming the fast outward diffusion of titanium. Hafnium was also observed at the metal-scale interface and in the γ' (Ni3Al) phase of the alloy. High-resolution scanning transmission electron microscopy (STEM) confirmed the substitution of REs for aluminum atoms at the scale grain boundaries, consistent with both the semiconducting band structure and the site-blocking models. Both STEM and atom probe tomography allowed quantification of REs along the scale grain boundaries across the scale thickness. Analysis of the scale morphology after isothermal exposure in flowing oxygen revealed a myriad of new precipitate phases, the dependence of RE segregation on grain boundary type, and the atomic arrangement along scale grain boundaries, all of which are expected to influence the scale growth rate, stability, and mechanical properties. Copyright © 2018 Elsevier Ltd. All rights reserved.
Evaluation of NOx Emissions and Modeling
NASA Astrophysics Data System (ADS)
Henderson, B. H.; Simon, H. A.; Timin, B.; Dolwick, P. D.; Owen, R. C.; Eyth, A.; Foley, K.; Toro, C.; Baker, K. R.
2017-12-01
Studies focusing on ambient measurements of NOy have concluded that NOx emissions are overestimated, and some have attributed the error to the onroad mobile sector. We investigate this conclusion to identify the cause of the observed bias. First, we compare DISCOVER-AQ Baltimore ambient measurements to fine-scale modeling with NOy tagged by sector. Sector-based relationships with bias are present, but these are sensitive to simulated vertical mixing. This is evident both in sensitivity to mixing parameterization and the seasonal patterns of bias. We also evaluate observation-based indicators, like CO:NOy ratios, that are commonly used to diagnose emissions inventories. Second, we examine the sensitivity of predicted NOx and NOy to temporal allocation of emissions. We investigate alternative temporal allocations for EGUs without CEMS, on-road mobile, and several non-road categories. These results show some location-specific sensitivity and will lead to some improved temporal allocations. Third, near-road studies have inherently fewer confounding variables, and have been examined for more direct evaluation of emissions and dispersion models. From 2008 to 2011, the EPA and FHWA conducted near-road studies in Las Vegas and Detroit. These measurements are used to more directly evaluate the emissions and dispersion using site-specific traffic data. In addition, the site-specific emissions are being compared to the emissions used in larger-scale photochemical modeling to identify key discrepancies. These efforts are part of a larger coordinated effort by EPA scientists to ensure the highest quality in emissions and model processes. We look forward to sharing the state of these analyses and expected updates.
Clouds in ECMWF's 30 KM Resolution Global Atmospheric Forecast Model (TL639)
NASA Technical Reports Server (NTRS)
Cahalan, R. F.; Morcrette, J. J.
1999-01-01
Global models of the general circulation of the atmosphere resolve a wide range of length scales, and in particular cloud structures extend from planetary scales to the smallest scales resolvable, now down to 30 km in state-of-the-art models. Even the highest resolution models do not resolve small-scale cloud phenomena seen, for example, in Landsat and other high-resolution satellite images of clouds. Unresolved small-scale disturbances often grow into larger ones through non-linear processes that transfer energy upscale. Understanding upscale cascades is of crucial importance in predicting current weather, and in parameterizing cloud-radiative processes that control long term climate. Several movie animations provide examples of the temporal and spatial variation of cloud fields produced in 4-day runs of the forecast model at the European Centre for Medium-Range Weather Forecasts (ECMWF) in Reading, England, at particular times and locations of simultaneous measurement field campaigns. Model resolution is approximately 30 km horizontally (triangular truncation TL639) with 31 vertical levels from surface to stratosphere. The timestep of the model is about 10 minutes, but animation frames are 3 hours apart, at timesteps when the radiation is computed. The animations were prepared from an archive of several 4-day runs at the highest available model resolution, and archived at ECMWF. Cloud, wind and temperature fields in an approximately 1000 km X 1000 km box were retrieved from the archive, then approximately 60 Mb Vis5d files were prepared with the help of Graeme Kelly of ECMWF, and were compressed into MPEG files each less than 3 Mb. We discuss the interaction of clouds and radiation in the model, and compare the variability of cloud liquid as a function of scale to that seen in cloud observations made in intensive field campaigns. Comparison of high-resolution global runs to cloud-resolving models, and to lower resolution climate models, is leading to better understanding of the upscale cascade and suggesting new cloud-radiation parameterizations for climate models.
NASA Astrophysics Data System (ADS)
Huang, Y.; Liu, M.; Wada, Y.; He, X.; Sun, X.
2017-12-01
In recent decades, with rapid economic growth, industrial development and urbanization, expanding pollution of polycyclic aromatic hydrocarbons (PAHs) has become a diversified and complicated phenomenon in China. However, monitoring of PAHs across multiple compartments and of the corresponding multi-interface migration processes is still limited, especially over large geographic areas. In this study, we couple the Multimedia Fate Model (MFM) to the Community Multi-Scale Air Quality (CMAQ) model in order to consider the fugacity and the transient contamination processes. This coupled dynamic contaminant model can evaluate the detailed local variations and mass fluxes of PAHs in different environmental media (e.g., air, surface film, soil, sediment, water and vegetation) across different spatial (from county to country) and temporal (days to years) scales. This model has been applied to a large geographical domain of China at a 36 km by 36 km grid resolution. The model considers the response characteristics of typical environmental media to complex underlying surfaces. Results suggest that direct emission is the main input pathway of PAHs entering the atmosphere, while advection is the main outward flow of pollutants from the environment. In addition, both soil and sediment act as the main sinks of PAHs and have the longest retention times. Importantly, the highest PAH loadings are found in urbanized and densely populated regions of China, such as the Yangtze River Delta and the Pearl River Delta. This model can provide a good scientific basis towards a better understanding of the large-scale dynamics of environmental pollutants for land conservation and sustainable development. As a next step, the dynamic contaminant model will be integrated with the continental-scale hydrological and water resources model (i.e., Community Water Model, CWatM) to quantify a more accurate representation of, and feedbacks between, the hydrological cycle and water quality at even larger geographical domains. Keywords: PAHs; Community multi-scale air quality model; Multimedia fate model; Land use
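To illustrate the fugacity bookkeeping that underlies multimedia fate modelling, here is a minimal two-compartment dynamic sketch (air and soil) with emission, advection, inter-compartment exchange and degradation. The fugacity capacities (Z), transfer coefficients (D) and emission rate are invented placeholders and do not represent the coupled CMAQ-MFM configuration described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative compartment properties (not model parameters from the study).
Z = {"air": 4.0e-4, "soil": 2.0e2}        # fugacity capacities [mol/(m^3 Pa)]
V = {"air": 1.0e12, "soil": 1.0e9}        # compartment volumes [m^3]
D_as, D_sa = 1.0e6, 2.0e5                 # air->soil, soil->air transfer [mol/(Pa h)]
D_adv = 5.0e6                             # advective loss from air [mol/(Pa h)]
D_deg = {"air": 1.0e5, "soil": 5.0e4}     # degradation [mol/(Pa h)]
E_air = 10.0                              # direct emission to air [mol/h]

def rhs(t, f):
    """Mass balance in fugacity form: d f/dt = (sum of D*f terms) / (V*Z)."""
    f_air, f_soil = f
    df_air = (E_air - (D_as + D_adv + D_deg["air"]) * f_air + D_sa * f_soil) / (V["air"] * Z["air"])
    df_soil = (D_as * f_air - (D_sa + D_deg["soil"]) * f_soil) / (V["soil"] * Z["soil"])
    return [df_air, df_soil]

sol = solve_ivp(rhs, (0.0, 8760.0), [0.0, 0.0])   # integrate over one year (hours)
f_air, f_soil = sol.y[:, -1]
print("fugacities after one year [Pa]:", f_air, f_soil)
# The soil inventory keeps growing slowly, mirroring the 'soil as long-term sink' result.
print("soil inventory [mol]:", f_soil * V["soil"] * Z["soil"])
```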
Why does offspring size affect performance? Integrating metabolic scaling with life-history theory
Pettersen, Amanda K.; White, Craig R.; Marshall, Dustin J.
2015-01-01
Within species, larger offspring typically outperform smaller offspring. While the relationship between offspring size and performance is ubiquitous, the cause of this relationship remains elusive. By linking metabolic and life-history theory, we provide a general explanation for why larger offspring perform better than smaller offspring. Using high-throughput respirometry arrays, we link metabolic rate to offspring size in two species of marine bryozoan. We found that metabolism scales allometrically with offspring size in both species: while larger offspring use absolutely more energy than smaller offspring, larger offspring use proportionally less of their maternally derived energy throughout the dependent, non-feeding phase. The increased metabolic efficiency of larger offspring while dependent on maternal investment may explain offspring size effects: larger offspring reach nutritional independence (feed for themselves) with a higher proportion of energy relative to structure than smaller offspring. These findings offer a potentially universal explanation for why larger offspring tend to perform better than smaller offspring, but studies on other taxa are needed.
Long lived light scalars as probe of low scale seesaw models
NASA Astrophysics Data System (ADS)
Dev, P. S. Bhupal; Mohapatra, Rabindra N.; Zhang, Yongchao
2017-10-01
We point out that in generic TeV scale seesaw models for neutrino masses with local B−L symmetry breaking, there is a phenomenologically allowed range of parameters where the Higgs field responsible for B−L symmetry breaking leaves a physical real scalar field with mass around the GeV scale. This particle (denoted here by H3) is weakly mixed with the Standard Model Higgs field (h), with mixing θ1 ≲ mH3/mh, barring fine-tuned cancellation. In the specific case when the B−L symmetry is embedded into the TeV scale left-right seesaw scenario, we show that the bounds on the h-H3 mixing θ1 become further strengthened due to low energy flavor constraints, thus forcing the light H3 to be long lived, with displaced vertex signals at the LHC. The properties of left-right TeV scale seesaw models are such that they make the H3 decay to two photons the dominant mode. This is in contrast with a generic light scalar that mixes with the SM Higgs boson, which could also have leptonic and hadronic decay modes with comparable or larger strength. We discuss the production of this new scalar field at the LHC and show that it leads to testable displaced vertex signals of collimated photon jets, which is a new distinguishing feature of the left-right seesaw model. We also study a simpler version of the model where the SU(2)R breaking scale is much higher than the O(TeV) U(1)B−L breaking scale, in which case the production and decay of H3 proceed differently, but its long lifetime feature is still preserved for a large range of parameters. Thus, the search for such long-lived light scalar particles provides a new way to probe TeV scale seesaw models for neutrino masses at colliders.
Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations
Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank
2016-01-01
We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range of, or larger than, the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations.
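A small sketch of the aggregation-error metric: the relative mean absolute error of yields driven by coarse inputs against the high-resolution reference. The normalisation by the mean reference yield is an assumption about the exact definition, and the yield arrays are synthetic placeholders rather than crop-model output.

```python
import numpy as np

def relative_mae(y_ref, y_agg):
    """rMAE (%) of yields from aggregated inputs vs. the high-resolution reference.

    Interpreted here as mean absolute deviation normalised by the mean
    reference yield; the study's exact definition may differ.
    """
    y_ref, y_agg = np.asarray(y_ref, float), np.asarray(y_agg, float)
    return 100.0 * np.mean(np.abs(y_agg - y_ref)) / np.mean(y_ref)

# Placeholder example: winter-wheat yields (t/ha) per grid cell simulated with
# 1 km inputs vs. the same cells driven by 100 km aggregated soil/climate data.
rng = np.random.default_rng(1)
yield_1km = rng.normal(7.5, 1.2, size=10_000).clip(min=0.5)
yield_100km = yield_1km + rng.normal(0.3, 0.9, size=yield_1km.size)
print(f"rMAE = {relative_mae(yield_1km, yield_100km):.1f} %")
```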
NASA Astrophysics Data System (ADS)
Zhang, Ying; Feng, Yuanming; Wang, Wei; Yang, Chengwen; Wang, Ping
2017-03-01
A novel and versatile “bottom-up” approach is developed to estimate the radiobiological effect of clinical radiotherapy. The model consists of multi-scale Monte Carlo simulations from the organ to the cell level. At the cellular level, accumulated damages are computed using a spectrum-based accumulation algorithm and a predefined cellular damage database. The damage repair mechanism is modeled by an expanded reaction-rate two-lesion kinetic model, which was calibrated by replicating a radiobiological experiment. Multi-scale modeling is then performed on a lung cancer patient under conventional fractionated irradiation. The cell killing effects in two representative voxels (the isocenter and a peripheral voxel of the tumor) are computed and compared. At the microscopic level, the nucleus dose and damage yields vary among the nuclei within the voxels. A slightly larger percentage of cDSB yield is observed for the peripheral voxel (55.0%) compared to the isocenter one (52.5%). For the isocenter voxel, the survival fraction increases monotonically as the oxygen level is reduced. Under an extreme anoxic condition (0.001%), the survival fraction is calculated to be 80% and the hypoxia reduction factor reaches a maximum value of 2.24. In conclusion, with biology-related variations included, the proposed multi-scale approach is more versatile than existing approaches for evaluating personalized radiobiological effects in radiotherapy.
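The repair step can be illustrated with a generic two-lesion kinetic system of rate equations: two lesion pools with first-order repair, a lethal-misrepair fraction for each, and a binary-misrepair term. The rate constants, initial lesion yields and survival bookkeeping below are illustrative assumptions, not the calibrated parameters of the model described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants (per hour); not fitted values.
lam1, lam2 = 2.0, 0.5      # first-order repair rates of simple/complex lesions
phi1, phi2 = 0.01, 0.1     # fraction of repair events that are lethal (misrepair)
eta = 1e-4                 # binary misrepair rate between lesion pairs

def rhs(t, y):
    L1, L2, lethal = y
    total = max(L1 + L2, 1e-12)
    pair = eta * total**2                       # pairwise interaction rate
    dL1 = -lam1 * L1 - pair * L1 / total        # repair + binary-misrepair loss
    dL2 = -lam2 * L2 - pair * L2 / total
    # Each binary event removes two lesions and creates one lethal lesion.
    dlethal = phi1 * lam1 * L1 + phi2 * lam2 * L2 + 0.5 * pair
    return [dL1, dL2, dlethal]

L1_0, L2_0 = 25.0, 2.0     # assumed initial lesion yields per cell
sol = solve_ivp(rhs, (0.0, 24.0), [L1_0, L2_0, 0.0], rtol=1e-8)
survival = np.exp(-sol.y[2, -1])               # Poisson survival on lethal lesions
print(f"predicted surviving fraction after 24 h of repair: {survival:.3f}")
```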
Does temperature nudging overwhelm aerosol radiative effects in regional integrated climate models?
NASA Astrophysics Data System (ADS)
He, Jian; Glotfelty, Timothy; Yahya, Khairunnisa; Alapaty, Kiran; Yu, Shaocai
2017-04-01
Nudging (data assimilation) is used in many regional integrated meteorology-air quality models to reduce biases in simulated climatology. However, in such modeling systems, temperature changes due to nudging could compete with temperature changes induced by radiatively active and hygroscopic short-lived tracers, leading to two interesting dilemmas: when nudging is continuously applied, what are the relative sizes of these two radiative forces at regional and local scales? How do these two forces in the free atmosphere differ from those at the surface? This work studies these two issues by converting temperature changes due to nudging into pseudo radiative effects (PRE) at the surface (PRE_sfc), in the troposphere (PRE_atm), and at the top of the atmosphere (PRE_toa), and comparing PRE with the reported aerosol radiative effects (ARE). Results show that the domain-averaged PRE_sfc is smaller than the ARE_sfc estimated in previous studies and in this work, but could be significantly larger than ARE_sfc at local scales. PRE_atm is also much smaller than ARE_atm. These results indicate that an appropriate nudging methodology could be applied in integrated models to study aerosol radiative effects at continental/regional scales, but it should be treated with caution for local-scale applications.
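One plausible way to express a nudging temperature tendency as a pseudo radiative effect is to integrate c_p times the tendency over the column mass, yielding W/m^2; the sketch below does this layer by layer. The tendency profile and layer masses are invented, and the paper's exact PRE definition may differ.

```python
import numpy as np

c_p = 1004.0                                            # J/(kg K), dry air
dT_dt_nudge = np.array([0.5, 0.3, 0.1, -0.2])           # K/day per layer, assumed
layer_mass = np.array([2000.0, 3000.0, 2500.0, 1500.0]) # kg/m^2 (dp/g per layer), assumed

# Column-integrated nudging heating expressed as an equivalent radiative flux.
pre_atm = np.sum(c_p * layer_mass * dT_dt_nudge / 86400.0)  # W/m^2
print(f"PRE_atm ~ {pre_atm:.1f} W/m^2")
```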
NASA Astrophysics Data System (ADS)
Darmenova, K.; Higgins, G.; Kiley, H.; Apling, D.
2010-12-01
Current General Circulation Models (GCMs) provide a valuable estimate of both natural and anthropogenic climate changes and variability on global scales. At the same time, future climate projections calculated with GCMs are not of sufficient spatial resolution to address regional needs. Many climate impact models require information at scales of 50 km or less, so dynamical downscaling is often used to estimate the smaller-scale information based on larger-scale GCM output. To address current deficiencies in local planning and decision making with respect to regional climate change, our research is focused on performing dynamical downscaling with the Weather Research and Forecasting (WRF) model and developing decision aids that translate the regional climate data into actionable information for users. Our methodology involves development of climatological indices of extreme weather and heating/cooling degree days based on WRF ensemble runs initialized with the NCEP-NCAR reanalysis and the European Center/Hamburg Model (ECHAM5). Results indicate that the downscaled simulations provide the necessary detailed output required by state and local governments and the private sector to develop climate adaptation plans. In addition, we evaluated the WRF performance in long-term climate simulations over the Southwestern US and validated it against observational datasets.
NASA Technical Reports Server (NTRS)
Alexandrov, Mikhail Dmitrievic; Geogdzhayev, Igor V.; Tsigaridis, Konstantinos; Marshak, Alexander; Levy, Robert; Cairns, Brian
2016-01-01
A novel model for the variability in aerosol optical thickness (AOT) is presented. This model is based on the consideration of AOT fields as realizations of a stochastic process, that is, the exponential of an underlying Gaussian process with a specific autocorrelation function. In this approach AOT fields have lognormal PDFs and structure functions with the correct asymptotic behavior at large scales. The latter is an advantage compared with fractal (scale-invariant) approaches. The simple analytical form of the structure function in the proposed model facilitates its use for the parameterization of AOT statistics derived from remote sensing data. The new approach is illustrated using a month-long global MODIS AOT dataset (over ocean) with 10 km resolution. It was used to compute AOT statistics for sample cells forming a grid with 5° spacing. The observed shapes of the structure functions indicated that in a large number of cases the AOT variability is split into two regimes that exhibit different patterns of behavior: small-scale stationary processes and trends reflecting variations at larger scales. The small-scale patterns are suggested to be generated by local aerosols within the marine boundary layer, while the large-scale trends are indicative of elevated aerosols transported from remote continental sources. This assumption is evaluated by comparison of the geographical distributions of these patterns derived from MODIS data with those obtained from the GISS GCM. This study shows considerable potential to enhance comparisons between remote sensing datasets and climate models beyond regional mean AOTs.
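A 1-D sketch of this construction: an AR(1) recursion generates a Gaussian process with an exponential autocorrelation function, its exponential gives a lognormal AOT field, and the structure function saturates toward 2*Var(AOT) at large lags rather than growing without bound (the contrast with scale-invariant models). All parameters (correlation length, log-mean, log-standard deviation) are illustrative, not fitted to MODIS data.

```python
import numpy as np

rng = np.random.default_rng(42)

n, dx = 20_000, 10.0            # pixels of 10 km (MODIS-like resolution)
corr_len = 200.0                # km, autocorrelation length of the Gaussian process
sigma, mu = 0.5, np.log(0.12)   # std and mean of log(AOT), assumed

# An AR(1) recursion reproduces an exponential autocorrelation exactly in 1-D.
phi = np.exp(-dx / corr_len)
g = np.empty(n)
g[0] = mu + sigma * rng.standard_normal()
for i in range(1, n):
    g[i] = mu + phi * (g[i - 1] - mu) + sigma * np.sqrt(1 - phi**2) * rng.standard_normal()
aot = np.exp(g)                 # lognormal AOT realization

def structure_function(x, lags):
    """Second-order structure function D(r) = <(x(s+r) - x(s))^2>."""
    return np.array([np.mean((x[l:] - x[:-l]) ** 2) for l in lags])

lags = np.array([1, 2, 5, 10, 50, 100])      # in pixels (10 km each)
for l, d in zip(lags, structure_function(aot, lags)):
    print(f"D({l * dx:.0f} km) = {d:.4f}")
print("large-lag limit ~ 2*Var(AOT) =", 2 * np.var(aot))
```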
Assessment of Scaled Rotors for Wind Tunnel Experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maniaci, David Charles; Kelley, Christopher Lee; Chiu, Phillip
2015-07-01
Rotor design and analysis work has been performed to support the conceptualization of a wind tunnel test focused on studying wake dynamics. This wind tunnel test would serve as part of a larger model validation campaign that is part of the Department of Energy Wind and Water Power Program’s Atmosphere to electrons (A2e) initiative. The first phase of this effort was directed towards designing a functionally scaled rotor based on the same design process and target full-scale turbine used for new rotors for the DOE/SNL SWiFT site. The second phase focused on assessing the capabilities of an already available rotor, the G1, designed and built by researchers at the Technical University of München.
Multiscale Computation. Needs and Opportunities for BER Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheibe, Timothy D.; Smith, Jeremy C.
2015-01-01
The Environmental Molecular Sciences Laboratory (EMSL), a scientific user facility managed by Pacific Northwest National Laboratory for the U.S. Department of Energy, Office of Biological and Environmental Research (BER), conducted a one-day workshop on August 26, 2014 on the topic of “Multiscale Computation: Needs and Opportunities for BER Science.” Twenty invited participants, from various computational disciplines within the BER program research areas, were charged with the following objectives: identify BER-relevant models and their potential cross-scale linkages that could be exploited to better connect molecular-scale research to BER research at larger scales; and identify critical science directions that will motivate EMSL decisions regarding future computational (hardware and software) architectures.
Models of Small-Scale Patchiness
NASA Technical Reports Server (NTRS)
McGillicuddy Dennis J., Jr.
2001-01-01
Patchiness is perhaps the most salient characteristic of plankton populations in the ocean. The scale of this heterogeneity spans many orders of magnitude in its spatial extent, ranging from planetary down to microscale. It has been argued that patchiness plays a fundamental role in the functioning of marine ecosystems, insofar as the mean conditions may not reflect the environment to which organisms are adapted. For example, the fact that some abundant predators cannot thrive on the mean concentration of their prey in the ocean implies that they are somehow capable of exploiting small-scale patches of prey whose concentrations are much larger than the mean. Understanding the nature of this patchiness is thus one of the major challenges of oceanographic ecology. Additional information is contained in the original extended abstract.
NASA Astrophysics Data System (ADS)
Harris, L.; Lin, S. J.; Zhou, L.; Chen, J. H.; Benson, R.; Rees, S.
2016-12-01
Limited-area convection-permitting models have proven useful for short-range NWP, but are unable to interact with the larger scales needed for longer lead-time skill. A new global forecast model, fvGFS, has been designed combining a modern nonhydrostatic dynamical core, the GFDL Finite-Volume Cubed-Sphere dynamical core (FV3), with operational GFS physics and initial conditions, and has been shown to provide excellent global skill while improving the representation of small-scale phenomena. The nested-grid capability of FV3 allows us to build a regional-to-global variable-resolution model to efficiently refine to 3-km grid spacing over the Continental US. The use of two-way grid nesting allows us to reach these resolutions very efficiently, with the operational requirement easily attainable on current supercomputing systems. Even without a boundary-layer or advanced microphysical scheme appropriate for convection-permitting resolutions, the effectiveness of fvGFS can be demonstrated for a variety of weather events. We demonstrate successful proof-of-concept simulations of a variety of phenomena. We show the capability to develop intense hurricanes with realistic fine-scale eyewalls and rainbands. The new model also produces skillful predictions of severe weather outbreaks and of organized mesoscale convective systems. Fine-scale orographic and boundary-layer phenomena are also simulated with excellent fidelity by fvGFS. Further expected improvements are discussed, including the introduction of more sophisticated microphysics and of scale-aware convection schemes.
NASA Astrophysics Data System (ADS)
Andrews, A. E.; Hu, L.; Thoning, K. W.; Nehrkorn, T.; Mountain, M. E.; Jacobson, A. R.; Michalak, A.; Dlugokencky, E. J.; Sweeney, C.; Worthy, D. E. J.; Miller, J. B.; Fischer, M. L.; Biraud, S.; van der Velde, I. R.; Basu, S.; Tans, P. P.
2017-12-01
CarbonTracker-Lagrange (CT-L) is a new high-resolution regional inverse modeling system for improved estimation of North American CO2 fluxes. CT-L uses footprints from the Stochastic Time-Inverted Lagrangian Transport (STILT) model driven by high-resolution (10 to 30 km) meteorological fields from the Weather Research and Forecasting (WRF) model. We performed a suite of synthetic-data experiments to evaluate a variety of inversion configurations, including (1) solving for scaling factors to an a priori flux versus additive corrections, (2) solving for fluxes at 3-hourly resolution versus at coarser temporal resolution, and (3) solving for fluxes at 1° × 1° resolution versus at large eco-regional scales. Our framework explicitly and objectively solves for the optimal solution with a full error covariance matrix using maximum likelihood estimation, thereby enabling rigorous uncertainty estimates for the derived fluxes. In the synthetic-data inversions, we find that solving for weekly scaling factors of a priori Net Ecosystem Exchange (NEE) at 1° × 1° resolution, with optimization of the diurnal cycles of CO2 fluxes, retrieves the specified "true" fluxes as faithfully as solving at 3-hourly resolution. In contrast, a scheme that does not allow for optimization of the diurnal cycles of CO2 fluxes suffered from larger aggregation errors. We then applied the optimal inversion setup to estimate North American fluxes for 2007-2015 using real atmospheric CO2 observations, multiple prior estimates of NEE, and multiple boundary values estimated from NOAA's global Eulerian CarbonTracker and from an empirical approach. Our derived North American land CO2 fluxes show a larger seasonal amplitude than those estimated by CarbonTracker, removing seasonal biases in CarbonTracker's simulated CO2 mole fractions. Independent evaluations using in situ CO2 eddy covariance flux measurements and independent aircraft profiles also suggest an improved estimation of North American CO2 fluxes from CT-L. Furthermore, our derived CO2 flux anomalies over North America corresponding to the 2012 North American drought and the 2015 El Niño are larger than those derived by CarbonTracker, and they indicate different responses of ecosystems to those anomalous climatic events.
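The scaling-factor inversion can be illustrated with a toy maximum-likelihood (Bayesian) update that also returns the full posterior covariance. The footprint matrix, covariances and dimensions below are synthetic stand-ins, not the CT-L system itself.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy setup: observations y relate to scaling factors s of a prior flux field
# through footprint sensitivities H (ppm per unit scaling factor).
n_obs, n_state = 200, 30
H = np.abs(rng.normal(0.0, 0.5, size=(n_obs, n_state)))   # Jacobian / footprints
s_true = 1.0 + 0.3 * rng.standard_normal(n_state)          # "true" scaling factors
R = np.diag(np.full(n_obs, 0.5**2))                        # obs error covariance (ppm^2)
B = np.diag(np.full(n_state, 0.4**2))                      # prior covariance of s
s_prior = np.ones(n_state)

y = H @ s_true + rng.multivariate_normal(np.zeros(n_obs), R)

# Maximum-likelihood posterior mean and its full error covariance matrix.
Rinv, Binv = np.linalg.inv(R), np.linalg.inv(B)
P_post = np.linalg.inv(H.T @ Rinv @ H + Binv)
s_post = s_prior + P_post @ H.T @ Rinv @ (y - H @ s_prior)

print("RMS error prior :", np.sqrt(np.mean((s_prior - s_true) ** 2)))
print("RMS error post  :", np.sqrt(np.mean((s_post - s_true) ** 2)))
print("mean posterior std:", np.sqrt(np.diag(P_post)).mean())
```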
Yapuncich, Gabriel S; Boyer, Doug M
2014-01-01
The articular facets of interosseous joints must transmit forces while maintaining relatively low stresses. To prevent overloading, joints that transmit higher forces should therefore have larger facet areas. The relative contributions of body mass and muscle-induced forces to joint stress are unclear, but generate opposing hypotheses. If mass-induced forces dominate, facet area should scale with positive allometry to body mass. Alternatively, muscle-induced forces should cause facets to scale isometrically with body mass. Within primates, both scaling patterns have been reported for articular surfaces of the femoral and humeral heads, but more distal elements are less well studied. Additionally, examination of complex articular surfaces has largely been limited to linear measurements, so that 'true area' remains poorly assessed. To re-assess these scaling relationships, we examine the relationship between body size and articular surface areas of the talus. Area measurements were taken from microCT scan-generated surfaces of all talar facets from a comprehensive sample of extant euarchontan taxa (primates, treeshrews, and colugos). Log-transformed data were regressed on literature-derived log-body mass using reduced major axis and phylogenetic least squares regressions. We examine the scaling patterns of muscle mass and physiological cross-sectional area (PCSA) to body mass, as these relationships may complicate each model. Finally, we examine the scaling pattern of hindlimb muscle PCSA to talar articular surface area, a direct test of the effect of muscle-induced forces on joint surfaces. Among most groups, there is an overall trend toward positive allometry for articular surfaces. The ectal (= posterior calcaneal) facet scales with positive allometry among all groups except 'sundatherians', strepsirrhines, galagids, and lorisids. The medial tibial facet scales isometrically among all groups except lemuroids. Scaling coefficients are not correlated with sample size, clade inclusivity or behavioral diversity of the sample. Muscle mass scales with slight positive allometry to body mass, and PCSA scales at isometry to body mass. PCSA generally scales with negative allometry to articular surface area, which indicates that joint surfaces increase faster than muscles' ability to generate force. We suggest a synthetic model to explain the complex patterns observed for talar articular surface area scaling: whether 'muscles or mass' drive articular facet scaling probably depends on the body size range of the sample and the biological role of the facet. The 'muscle vs. mass' dominance is likely bone- and facet-specific, meaning that some facets should respond primarily to stresses induced by larger body mass, whereas others primarily reflect muscle forces.
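For reference, the reduced major axis slope used in such scaling analyses is simply the ratio of standard deviations of the log-transformed variables, signed by their correlation; for an area scaling against body mass, geometric isometry predicts a slope of 2/3. The data in the sketch below are synthetic, not the euarchontan sample described above.

```python
import numpy as np

rng = np.random.default_rng(3)

def rma_slope(x, y):
    """Reduced major axis slope and intercept of y on x."""
    slope = np.sign(np.corrcoef(x, y)[0, 1]) * np.std(y, ddof=1) / np.std(x, ddof=1)
    return slope, np.mean(y) - slope * np.mean(x)

# Synthetic example: log10 body mass (g) and log10 facet area (mm^2) generated
# with a mildly positively allometric slope of 0.70 plus noise.
log_mass = rng.uniform(1.0, 4.5, size=60)
log_area = -1.0 + 0.70 * log_mass + rng.normal(0.0, 0.08, size=60)

slope, intercept = rma_slope(log_mass, log_area)
print(f"RMA slope = {slope:.3f} (isometry for area ~ mass^(2/3) predicts 0.667)")
```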
Review of the outer scale of the atmospheric turbulence
NASA Astrophysics Data System (ADS)
Ziad, Aziz
2016-07-01
The outer scale is a relevant parameter for the experimental performance evaluation of large telescopes. Different techniques have been used for outer scale estimation. In situ measurements with radiosounding balloons have given very small values of the outer scale. The outer scale has also been estimated directly at ground level from wavefront analysis with High Angular Resolution (HAR) techniques using data from interferometric, Shack-Hartmann or, more generally, AO systems. Dedicated instruments have also been developed for outer scale monitoring, such as the Generalized Seeing Monitor (GSM) and the Monitor of Outer Scale Profile (MOSP). The outer scale values measured with HAR techniques, GSM and MOSP are broadly consistent with each other and are larger than the in situ results. The main explanation of this difference comes from the definition of the outer scale itself. This paper gives a non-exhaustive review of the different techniques and instruments for measuring the outer scale. Comparisons of outer scale measurements will be discussed in the light of the different definitions of this parameter, the associated observable quantities and the atmospheric turbulence model used.
A New Framework for Cumulus Parametrization - A CPT in action
NASA Astrophysics Data System (ADS)
Jakob, C.; Peters, K.; Protat, A.; Kumar, V.
2016-12-01
The representation of convection in climate models remains a major Achilles heel in our pursuit of better predictions of global and regional climate. The basic principle underpinning the parametrisation of tropical convection in global weather and climate models is that there exist discernible interactions between the resolved model scale and the parametrised cumulus scale. Furthermore, there must be at least some predictive power in the larger scales for the statistical behaviour on small scales for us to be able to formally close the parametrised equations. The presentation will discuss a new framework for cumulus parametrisation based on the idea of separating the prediction of cloud area from that of velocity. This idea is put into practice by combining an existing multi-scale stochastic cloud model with observations to arrive at the prediction of the area fraction for deep precipitating convection. Using mid-tropospheric humidity and vertical motion as predictors, the model is shown to reproduce the observed behaviour of both the mean and the variability of deep convective area fraction well. The framework allows for the inclusion of convective organisation and can - in principle - be made resolution-aware or resolution-independent. When combined with simple assumptions about cloud-base vertical motion, the model can be used as a closure assumption in any existing cumulus parametrisation. Results of applying this idea in the ECHAM model indicate significant improvements in the simulation of tropical variability, including but not limited to the MJO. This presentation will highlight how the close collaboration of the observational, theoretical and model development communities in the spirit of the climate process teams can lead to significant progress on long-standing issues in climate modelling while preserving the freedom of individual groups in pursuing their specific implementation of an agreed framework.
Chaix, Basile; Leyland, Alastair H; Sabel, Clive E; Chauvin, Pierre; Råstam, Lennart; Kristersson, Håkan; Merlo, Juan
2006-01-01
Study objective: Previous research provides preliminary evidence of spatial variations of mental disorders and associations between neighbourhood social context and mental health. This study expands past literature by (1) using spatial techniques, rather than multilevel models, to compare the spatial distributions of two groups of mental disorders (that is, disorders due to psychoactive substance use, and neurotic, stress related, and somatoform disorders); and (2) investigating the independent impact of contextual deprivation and neighbourhood social disorganisation on mental health, while assessing both the magnitude and the spatial scale of these effects. Design: Using different spatial techniques, the study investigated mental disorders due to psychoactive substance use, and neurotic disorders. Participants: All 89 285 persons aged 40–69 years residing in Malmö, Sweden, in 2001, geolocated to their place of residence. Main results: The spatial scan statistic identified a large cluster of increased prevalence in a similar location for the two mental disorders in the northern part of Malmö. However, hierarchical geostatistical models showed that the two groups of disorders exhibited a different spatial distribution, in terms of both magnitude and spatial scale. Mental disorders due to substance consumption showed larger neighbourhood variations, and varied in space on a larger scale, than neurotic disorders. After adjustment for individual factors, the risk of substance related disorders increased with neighbourhood deprivation and neighbourhood social disorganisation. The risk of neurotic disorders only increased with contextual deprivation. Measuring contextual factors across continuous space, it was found that these associations operated on a local scale. Conclusions: Taking space into account in the analyses permitted deeper insight into the contextual determinants of mental disorders.
NASA Astrophysics Data System (ADS)
Buccolieri, Riccardo; Salim, Salim Mohamed; Leo, Laura Sandra; Di Sabatino, Silvana; Chan, Andrew; Ielpo, Pierina; de Gennaro, Gianluigi; Gromke, Christof
2011-03-01
This paper first discusses the aerodynamic effects of trees on local-scale flow and pollutant concentration in idealized street canyon configurations by means of laboratory experiments and Computational Fluid Dynamics (CFD). These analyses are then used as a reference modelling study for the extension to the neighbourhood scale by investigating a real urban junction of a medium-size city in southern Italy. A comparison with previous investigations shows that street-level concentrations crucially depend on the wind direction and the street canyon aspect ratio W/H (with W the street width and H the building height) rather than on tree crown porosity and stand density. It is usually assumed in the literature that larger concentrations are associated with a perpendicular approaching wind. In this study, we demonstrate that while for tree-free street canyons under inclined wind directions the larger the aspect ratio the lower the street-level concentration, in the presence of trees the expected reduction of street-level concentration with aspect ratio is less pronounced. Observations made for the idealized street canyons are re-interpreted in a real-case scenario focusing on the neighbourhood scale in the proximity of a complex urban junction formed by street canyons of aspect ratios similar to those investigated in the laboratory. The aim is to show the combined influence of building morphology and vegetation on flow and dispersion and to assess the effect of vegetation on local concentration levels. To this end, CFD simulations for two typical winter/spring days show that trees contribute to altering the local flow and act to trap pollutants. This preliminary study indicates that failing to account for the presence of vegetation, as typically practiced in most operational dispersion models, would result in non-negligible errors in the predictions.
Filtering analysis of a direct numerical simulation of the turbulent Rayleigh-Benard problem
NASA Technical Reports Server (NTRS)
Eidson, T. M.; Hussaini, M. Y.; Zang, T. A.
1990-01-01
A filtering analysis of a turbulent flow was developed which provides details of the path of the kinetic energy of the flow from its creation via thermal production to its dissipation. A low-pass spatial filter is used to split the velocity and the temperature field into a filtered component (composed mainly of scales larger than a specific size, nominally the filter width) and a fluctuation component (scales smaller than a specific size). Variables derived from these fields can fall into one of the above two ranges or be composed of a mixture of scales dominated by scales near the specific size. The filter is used to split the kinetic energy equation into three equations corresponding to the three scale ranges described above. The data from a direct simulation of the Rayleigh-Benard problem for conditions where the flow is turbulent are used to calculate the individual terms in the three kinetic energy equations. This is done for a range of filter widths. These results are used to study the spatial location and the scale range of the thermal energy production, the cascading of kinetic energy, the diffusion of kinetic energy, and the energy dissipation. These results are used to evaluate two subgrid models typically used in large-eddy simulations of turbulence. Subgrid models attempt to model the energy below the filter width that is removed by a low-pass filter.
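The decomposition can be illustrated in one dimension: a box (top-hat) low-pass filter splits a signal into a filtered component and a fluctuation component, and the kinetic energy in each is tracked as the filter width varies (the two energies do not sum exactly to the total because of the cross term). The signal and filter widths below are arbitrary, not the Rayleigh-Benard simulation data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic multi-scale 1-D "velocity" signal.
n = 4096
u = np.cumsum(rng.standard_normal(n))
u -= u.mean()

def box_filter(u, width):
    """Top-hat (box) low-pass filter of the given width in grid points."""
    kernel = np.ones(width) / width
    return np.convolve(u, kernel, mode="same")

for width in (4, 16, 64, 256):
    u_bar = box_filter(u, width)          # filtered (large-scale) component
    u_prime = u - u_bar                   # fluctuation (small-scale) component
    ke_res = 0.5 * np.mean(u_bar**2)
    ke_sub = 0.5 * np.mean(u_prime**2)
    print(f"filter width {width:4d}: resolved KE {ke_res:8.1f}, subfilter KE {ke_sub:8.1f}")
```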
Black-hole universe: time evolution.
Yoo, Chul-Moon; Okawa, Hirotada; Nakao, Ken-ichi
2013-10-18
Time evolution of a black hole lattice toy model universe is simulated. The vacuum Einstein equations in a cubic box with a black hole at the origin are numerically solved with periodic boundary conditions on all pairs of faces opposite to each other. Defining effective scale factors by using the area of a surface and the length of an edge of the cubic box, we compare them with that in the Einstein-de Sitter universe. It is found that the behavior of the effective scale factors is well approximated by that in the Einstein-de Sitter universe. In our model, if the box size is sufficiently larger than the horizon radius, local inhomogeneities do not significantly affect the global expansion law of the Universe even though the inhomogeneity is extremely nonlinear.
Self-folding with shape memory composites at the millimeter scale
NASA Astrophysics Data System (ADS)
Felton, S. M.; Becker, K. P.; Aukes, D. M.; Wood, R. J.
2015-08-01
Self-folding is an effective method for creating 3D shapes from flat sheets. In particular, shape memory composites—laminates containing shape memory polymers—have been used to self-fold complex structures and machines. To date, however, these composites have been limited to feature sizes larger than one centimeter. We present a new shape memory composite capable of folding millimeter-scale features. This technique can be activated by a global heat source for simultaneous folding, or by resistive heaters for sequential folding. It is capable of feature sizes ranging from 0.5 to 40 mm, and is compatible with multiple laminate compositions. We demonstrate the ability to produce complex structures and mechanisms by building two self-folding pieces: a model ship and a model bumblebee.
An evolving effective stress approach to anisotropic distortional hardening
Lester, B. T.; Scherzinger, W. M.
2018-03-11
A new yield surface with an evolving effective stress definition is proposed for consistently and efficiently describing anisotropic distortional hardening. Specifically, a new internal state variable is introduced to capture the thermodynamic evolution between different effective stress definitions. The corresponding yield surface and evolution equations of the internal variables are derived from thermodynamic considerations enabling satisfaction of the second law. A closest point projection return mapping algorithm for the proposed model is formulated and implemented for use in finite element analyses. Finally, select constitutive and larger scale boundary value problems are solved to explore the capabilities of the model and examine the impact of distortional hardening on constitutive and structural responses. Importantly, these simulations demonstrate the tractability of the proposed formulation in investigating large-scale problems of interest.
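The return-mapping pattern referred to above (elastic predictor followed by a closest point projection back to the yield surface) can be sketched for the much simpler case of 1-D J2 plasticity with linear isotropic hardening. The distortional-hardening yield surface of the paper would replace the von Mises function used here, and all material constants are illustrative.

```python
import numpy as np

def radial_return_1d(strain_inc, eps_p, alpha, E=200e3, sigma_y=250.0, H=1000.0, sigma_old=0.0):
    """One strain increment of a 1-D J2 return-mapping update (units: MPa).

    Elastic predictor / plastic corrector with linear isotropic hardening;
    illustrates the algorithmic pattern only, not the paper's model.
    """
    sigma_trial = sigma_old + E * strain_inc            # elastic predictor
    f_trial = abs(sigma_trial) - (sigma_y + H * alpha)  # trial yield function
    if f_trial <= 0.0:                                  # elastic step
        return sigma_trial, eps_p, alpha
    dgamma = f_trial / (E + H)                          # plastic corrector (consistency)
    sigma = sigma_trial - E * dgamma * np.sign(sigma_trial)
    return sigma, eps_p + dgamma * np.sign(sigma_trial), alpha + dgamma

# Drive a monotonic strain ramp through the update and print the stress path.
sigma, eps_p, alpha = 0.0, 0.0, 0.0
for step in range(10):
    sigma, eps_p, alpha = radial_return_1d(4e-4, eps_p, alpha, sigma_old=sigma)
    print(f"step {step}: stress = {sigma:7.1f} MPa, plastic strain = {eps_p:.5f}")
```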
Super Yang Mills, matrix models and geometric transitions
NASA Astrophysics Data System (ADS)
Ferrari, Frank
2005-03-01
I explain two applications of the relationship between four-dimensional N=1 supersymmetric gauge theories, zero-dimensional gauged matrix models, and geometric transitions in string theory. The first is related to the spectrum of BPS domain walls or BPS branes. It is shown that one can smoothly interpolate between a D-brane state, whose weak coupling tension scales as N ∼ 1/g, and a closed string solitonic state, whose weak coupling tension scales as N ∼ 1/gs². This is part of a larger theory of N=1 quantum parameter spaces. The second is a new purely geometric approach to sum exactly over planar diagrams in zero dimension. It is an example of open/closed string duality. To cite this article: F. Ferrari, C. R. Physique 6 (2005).
A model for the origin of high-energy cosmic rays
NASA Technical Reports Server (NTRS)
Jokipii, J. R.; Morfill, G. E.
1985-01-01
It is suggested that cosmic rays, up to the highest energies observed, originate in the Galaxy and are accelerated in astrophysical shock waves. If there is a galactic wind, in analogy with the solar wind, a hierarchy of shocks ranging from supernova shocks to the galactic wind termination shock is expected. This leads to a consistent model in which most cosmic rays, up to perhaps 10^14 eV energy, are accelerated by supernova shocks, but particles with energies of 10^15 eV and higher are accelerated at the termination shock of the galactic wind. Intermediate energies may be accelerated by intermediate-scale shocks, and there may be larger scale shocks associated with the Local Group of galaxies.
Scaling properties of ballistic nano-transistors
2011-01-01
Recently, we have suggested a scale-invariant model for a nano-transistor. In agreement with experiments, a close-to-linear threshold trace was found in the calculated ID-VD traces, separating the regimes of classically allowed transport and tunneling transport. In this conference contribution, the relevant physical quantities in our model and its range of applicability are discussed in more detail. Extending the temperature range of our studies, it is shown that a close-to-linear threshold trace results at room temperature as well. In qualitative agreement with the experiments, the ID-VG traces for small drain voltages show thermally activated transport below the threshold gate voltage. In contrast, at large drain voltages the gate-voltage dependence is weaker. As can be expected in our relatively simple model, the theoretical drain current is larger than the experimental one by a little less than a decade.
Submesoscale-selective compensation of fronts in a salinity-stratified ocean.
Spiro Jaeger, Gualtiero; Mahadevan, Amala
2018-02-01
Salinity, rather than temperature, is the leading influence on density in some regions of the world's upper oceans. In the Bay of Bengal, heavy monsoonal rains and runoff generate strong salinity gradients that define density fronts and stratification in the upper ~50 m. Ship-based observations made in winter reveal that fronts exist over a wide range of length scales, but at O(1)-km scales, horizontal salinity gradients are compensated by temperature to alleviate about half the cross-front density gradient. Using a process study ocean model, we show that scale-selective compensation occurs because of surface cooling. Submesoscale instabilities cause density fronts to slump, enhancing stratification along-front. Specifically for salinity fronts, the surface mixed layer (SML) shoals on the less saline side, correlating sea surface salinity (SSS) with SML depth at O(1)-km scales. When losing heat to the atmosphere, the shallower and less saline SML experiences a larger drop in temperature compared to the adjacent deeper SML on the salty side of the front, thus correlating sea surface temperature (SST) with SSS at the submesoscale. This compensation of submesoscale fronts can diminish their strength and thwart the forward cascade of energy to smaller scales. During winter, salinity fronts that are dynamically submesoscale experience larger temperature drops, appearing in satellite-derived SST as cold filaments. In freshwater-influenced regions, cold filaments can mark surface-trapped layers insulated from deeper nutrient-rich waters, unlike in other regions, where they indicate upwelling of nutrient-rich water and enhanced surface biological productivity.
Rasch model analysis of the Depression, Anxiety and Stress Scales (DASS)
Shea, Tracey L; Tennant, Alan; Pallant, Julie F
2009-01-01
Background: There is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches, it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes. Methods: The DASS-21 was administered to 420 participants, with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of the DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software. Results: To achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items. Conclusion: The results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study.
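For readers unfamiliar with the partial credit model referred to above, the category probabilities for a polytomous item follow from the person location and the item step difficulties. A minimal implementation of that formula is sketched below; the thresholds are arbitrary illustrative values, not estimates for any DASS-21 item.

```python
import numpy as np

def pcm_probs(theta, thresholds):
    """Rasch partial credit model category probabilities for one item.

    theta      : person location (logits)
    thresholds : item step difficulties delta_1..delta_m (logits)
    Returns P(X = 0..m).
    """
    # Cumulative sums of (theta - delta_j), with the empty sum (score 0) set to 0.
    steps = np.concatenate(([0.0], np.cumsum(theta - np.asarray(thresholds))))
    expd = np.exp(steps - steps.max())        # numerically stabilised exponentials
    return expd / expd.sum()

# A 4-category item (scored 0-3, like the DASS-21) for a few person locations.
thresholds = [-1.0, 0.2, 1.5]                 # hypothetical step difficulties
for theta in (-2.0, 0.0, 2.0):
    print(theta, np.round(pcm_probs(theta, thresholds), 3))
```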
Testing the role of bedforms as controls on the morphodynamics of sandy braided rivers with CFD
NASA Astrophysics Data System (ADS)
Unsworth, C. A.; Nicholas, A. P.; Ashworth, P. J.; Best, J.; Lane, S. N.; Parsons, D. R.; Sambrook Smith, G.; Simpson, C.; Strick, R. J. P.
2017-12-01
Sand-bed rivers are characterised by multiple scales of topography (e.g., channels, bars and bedforms). Small-scale topographic features (e.g., dunes) exert a significant influence on coherent flow structures and sediment transport processes over distances that scale with channel depth. However, the extent to which such dune-scale effects control larger, channel- and bar-scale morphology and morphodynamics remains unknown. Moreover, such bedform effects are typically neglected in the two-dimensional (depth-averaged) morphodynamic models used to simulate river evolution. To evaluate the significance of these issues, we report results from a combined numerical modelling and field monitoring study undertaken in the South Saskatchewan River, Canada. Numerical simulations were carried out using the OpenFOAM CFD code to quantify the mean three-dimensional flow structure within a 90 × 350 m section of channel. To isolate the role of bedforms as a control on flow and sediment transport, two simulations were undertaken. The first used a high-resolution (~3 cm) bedform-resolving DEM. The second used a filtered DEM in which dunes were removed and only large-scale topographic features (e.g., bars, scour pools) were resolved. The results of these simulations are compared here to quantify the degree to which topographic steering by bedforms influences flow and sediment transport directions at bar and channel scales. Analysis of the CFD simulation results within a 2D morphodynamic modelling framework demonstrates that dunes exert a significant influence on sediment transport, and hence morphodynamics, and highlights important shortcomings in existing 2D model parameterisations of topographic steering.
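The dune-removal step can be illustrated with a simple low-pass filter that separates bar-scale from dune-scale topography; a minimal sketch, in which the grid spacing, cut-off length and placeholder DEM are assumptions rather than the study's actual filtering procedure:

import numpy as np
from scipy import ndimage

# Placeholder bed-elevation grid standing in for the high-resolution DEM of the
# 90 x 350 m reach; a real run would load the surveyed elevations instead.
cell = 0.30                                   # grid spacing in metres (assumed)
dem = np.random.rand(300, 1167)               # elevations in metres (placeholder)

# Low-pass (moving-average) filter whose window spans several dune lengths;
# the 10 m cut-off is an assumption, not a value from the study.
window_cells = int(10.0 / cell)
bars_only = ndimage.uniform_filter(dem, size=window_cells, mode="nearest")

dunes = dem - bars_only                       # dune-scale topography removed above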
Rasch model analysis of the Depression, Anxiety and Stress Scales (DASS).
Shea, Tracey L; Tennant, Alan; Pallant, Julie F
2009-05-09
There is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches, it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes. The DASS-21 was administered to 420 participants, with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software. To achieve adequate model fit, it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items. The results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study.
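For reference, the Rasch partial credit model fitted with RUMM2020 has the standard form below; the notation is generic and not taken from the paper. The probability that person n obtains score x on item i with m_i thresholds is

P(X_{ni} = x) = \frac{\exp\!\big[\sum_{k=0}^{x} (\theta_n - \tau_{ik})\big]}{\sum_{j=0}^{m_i} \exp\!\big[\sum_{k=0}^{j} (\theta_n - \tau_{ik})\big]}, \qquad x = 0, 1, \ldots, m_i,

with the convention \sum_{k=0}^{0} (\theta_n - \tau_{ik}) \equiv 0, where \theta_n is the person location and \tau_{ik} are the item thresholds.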
Huang, Zhi; Liu, Xiangnan; Jin, Ming; Ding, Chao; Jiang, Jiale; Wu, Ling
2016-01-01
Accurate monitoring of heavy metal stress in crops is of great importance to assure agricultural productivity and food security, and remote sensing is an effective tool to address this problem. However, given that Earth observation instruments provide data at multiple scales, the choice of scale for use in such monitoring is challenging. This study focused on identifying the characteristic scale for effectively monitoring heavy metal stress in rice using the dry weight of roots (WRT) as the representative characteristic, which was obtained by assimilation of GF-1 data with the World Food Studies (WOFOST) model. We explored and quantified the effect of the important state variable LAI (leaf area index) at various spatial scales on the simulated rice WRT to find the critical scale for heavy metal stress monitoring using the statistical characteristics. Furthermore, a ratio analysis based on the varied heavy metal stress levels was conducted to identify the characteristic scale. Results indicated that the critical threshold for investigating the rice WRT in monitoring studies of heavy metal stress was larger than 64 m but smaller than 256 m. This finding represents a useful guideline for choosing the most appropriate imagery. PMID:26959033
Exploring cosmic homogeneity with the BOSS DR12 galaxy sample
NASA Astrophysics Data System (ADS)
Ntelis, Pierros; Hamilton, Jean-Christophe; Le Goff, Jean-Marc; Burtin, Etienne; Laurent, Pierre; Rich, James; Guillermo Busca, Nicolas; Tinker, Jeremy; Aubourg, Eric; du Mas des Bourboux, Hélion; Bautista, Julian; Palanque Delabrouille, Nathalie; Delubac, Timothée; Eftekharzadeh, Sarah; Hogg, David W.; Myers, Adam; Vargas-Magaña, Mariana; Pâris, Isabelle; Petitjean, Partick; Rossi, Graziano; Schneider, Donald P.; Tojeiro, Rita; Yeche, Christophe
2017-06-01
In this study, we probe the transition to cosmic homogeneity in the Large Scale Structure (LSS) of the Universe using the CMASS galaxy sample of the BOSS spectroscopic survey, which covers the largest effective volume to date, 3 h⁻³ Gpc³ at 0.43 ≤ z ≤ 0.7. We study the scaled counts-in-spheres, N(
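The abstract is truncated above at the counts-in-spheres statistic N(<r). A minimal sketch of that statistic on a toy periodic catalogue is given below; it ignores the survey mask, weights and selection function that the BOSS analysis must handle, and the box and radii are illustrative values only.

import numpy as np
from scipy.spatial import cKDTree

# Toy uniform catalogue in a periodic box (units of Mpc/h); for a homogeneous
# distribution the scaled count N(<r) / (nbar * 4/3 pi r^3) tends to 1.
box = 1000.0
pos = np.random.uniform(0.0, box, size=(20000, 3))
tree = cKDTree(pos, boxsize=box)
nbar = len(pos) / box**3

for r in (20.0, 50.0, 100.0, 200.0):
    pairs = tree.count_neighbors(tree, r) - len(pos)   # drop the self-pairs
    mean_n = pairs / len(pos)                          # average N(<r) per centre
    print(r, mean_n / (nbar * 4.0 / 3.0 * np.pi * r**3))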
López-Padilla, Alexis; Ruiz-Rodriguez, Alejandro; Restrepo Flórez, Claudia Estela; Rivero Barrios, Diana Marsela; Reglero, Guillermo; Fornari, Tiziana
2016-06-25
Vaccinium meridionale Swartz (Mortiño or Colombian blueberry) is one of the Vaccinium species abundantly found across the Colombian mountains, which are characterized by high contents of polyphenolic compounds (anthocyanins and flavonoids). The supercritical fluid extraction (SFE) of Vaccinium species has mainly focused on the study of V. myrtillus L. (blueberry). In this work, the SFE of Mortiño fruit from Colombia was studied in a small-scale extraction cell (273 cm³) and different extraction pressures (20 and 30 MPa) and temperatures (313 and 343 K) were investigated. Then, process scaling-up to a larger extraction cell (1350 cm³) was analyzed using well-known semi-empirical engineering approaches. The Broken and Intact Cell (BIC) model was adjusted to represent the kinetic behavior of the low-scale extraction and to simulate the large-scale conditions. Extraction yields obtained were in the range 0.1%-3.2%. Most of the Mortiño solutes are readily accessible and, thus, 92% of the extractable material was recovered in around 30 min. The constant CO₂ residence time criterion produced excellent results regarding the small-scale kinetic curve according to the BIC model, and this conclusion was experimentally validated in large-scale kinetic experiments.
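The constant CO₂ residence-time criterion mentioned above amounts to scaling the solvent flow rate in proportion to the extraction-cell volume; a small worked example follows, in which the small-scale flow rate is an assumed illustrative value, not a figure reported in the abstract.

# Constant residence-time scale-up: tau = V / Q must match between scales.
V_small = 273.0     # cm3, small extraction cell
V_large = 1350.0    # cm3, large extraction cell
Q_small = 60.0      # cm3/min of CO2 at extraction conditions (assumed value)

tau = V_small / Q_small            # residence time to preserve, min
Q_large = V_large / tau            # = Q_small * V_large / V_small
print(Q_large / Q_small)           # scale factor of about 4.95
print(Q_large)                     # about 297 cm3/min for the assumed Q_small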
LoCuSS: THE SUNYAEV-ZEL'DOVICH EFFECT AND WEAK-LENSING MASS SCALING RELATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marrone, Daniel P.; Carlstrom, John E.; Gralla, Megan
2012-08-01
We present the first weak-lensing-based scaling relation between galaxy cluster mass, M_WL, and integrated Compton parameter Y_sph. Observations of 18 galaxy clusters at z ≈ 0.2 were obtained with the Subaru 8.2 m telescope and the Sunyaev-Zel'dovich Array. The M_WL-Y_sph scaling relations, measured at Δ = 500, 1000, and 2500 ρ_c, are consistent in slope and normalization with previous results derived under the assumption of hydrostatic equilibrium (HSE). We find an intrinsic scatter in M_WL at fixed Y_sph of 20%, larger than both previous measurements of M_HSE-Y_sph scatter and the scatter in true mass at fixed Y_sph found in simulations. Moreover, the scatter in our lensing-based scaling relations is morphology dependent, with 30%-40% larger M_WL for undisturbed compared to disturbed clusters at the same Y_sph at r_500. Further examination suggests that the segregation may be explained by the inability of our spherical lens models to faithfully describe the three-dimensional structure of the clusters, in particular, the structure along the line of sight. We find that the ellipticity of the brightest cluster galaxy, a proxy for halo orientation, correlates well with the offset in mass from the mean scaling relation, which supports this picture. This provides empirical evidence that line-of-sight projection effects are an important systematic uncertainty in lensing-based scaling relations.
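Such scaling relations are conventionally fitted as a power law in log space with lognormal intrinsic scatter; schematically, using a generic form and pivot values rather than the paper's exact parameterization,

\ln\!\left(\frac{M_{\mathrm{WL}}}{M_0}\right) = A + B \,\ln\!\left(\frac{Y_{\mathrm{sph}}}{Y_0}\right) + \epsilon, \qquad \epsilon \sim \mathcal{N}\!\left(0, \sigma_{\ln M}^{2}\right),

where M_0 and Y_0 are pivot values and the quoted ~20% intrinsic scatter corresponds to \sigma_{\ln M} \approx 0.2.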
Module-based multiscale simulation of angiogenesis in skeletal muscle
2011-01-01
Background: Mathematical modeling of angiogenesis has been gaining momentum as a means to shed new light on the biological complexity underlying blood vessel growth. A variety of computational models have been developed, each focusing on different aspects of the angiogenesis process and occurring at different biological scales, ranging from the molecular to the tissue levels. Integration of models at different scales is a challenging and currently unsolved problem. Results: We present an object-oriented module-based computational integration strategy to build a multiscale model of angiogenesis that links currently available models. As an example case, we use this approach to integrate modules representing microvascular blood flow, oxygen transport, vascular endothelial growth factor transport and endothelial cell behavior (sensing, migration and proliferation). Modeling methodologies in these modules include algebraic equations, partial differential equations and agent-based models with complex logical rules. We apply this integrated model to simulate exercise-induced angiogenesis in skeletal muscle. The simulation results compare capillary growth patterns between different exercise conditions for a single bout of exercise. Results demonstrate how the computational infrastructure can effectively integrate multiple modules by coordinating their connectivity and data exchange. Model parameterization offers simulation flexibility and a platform for performing sensitivity analysis. Conclusions: This systems biology strategy can be applied to larger scale integration of computational models of angiogenesis in skeletal muscle, or other complex processes in other tissues under physiological and pathological conditions. PMID:21463529
Multi-scale modeling of multi-component reactive transport in geothermal aquifers
NASA Astrophysics Data System (ADS)
Nick, Hamidreza M.; Raoof, Amir; Wolf, Karl-Heinz; Bruhn, David
2014-05-01
In deep geothermal systems, heat and chemical stresses can cause physical alterations, which may have a significant effect on flow and reaction rates. As a consequence, these alterations lead to changes in the permeability and porosity of the formations through mineral precipitation and dissolution. Large-scale modeling of reactive transport in such systems is still challenging. A major source of uncertainty is how the pore-scale features controlling flow and reaction behave at a larger scale. A possible choice is to use constitutive relationships relating, for example, the evolution of permeability and porosity to changes in the pore geometry. While determining such relationships through laboratory experiments may be limited, pore-network modeling provides an alternative solution. In this work, we introduce a new workflow in which a hybrid Finite-Element Finite-Volume method [1,2] and a pore network modeling approach [3] are employed. Using the pore-scale model, relevant constitutive relations are developed. These relations are then embedded in the continuum-scale model. This approach enables us to study non-isothermal reactive transport in porous media while accounting for micro-scale features under realistic conditions. The performance and applicability of the proposed model are explored for different flow and reaction regimes. References: 1. Matthäi, S.K., et al.: Simulation of solute transport through fractured rock: a higher-order accurate finite-element finite-volume method permitting large time steps. Transport in Porous Media 83.2 (2010): 289-318. 2. Nick, H.M., et al.: Reactive dispersive contaminant transport in coastal aquifers: numerical simulation of a reactive Henry problem. Journal of Contaminant Hydrology 145 (2012), 90-104. 3. Raoof, A., et al.: PoreFlow: a complex pore-network model for simulation of reactive transport in variably saturated porous media. Computers & Geosciences 61 (2013), 160-174.
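One common way to embed such pore-scale results in a continuum-scale simulator is a constitutive permeability-porosity relation whose exponent is calibrated against the pore-network runs; the sketch below uses a generic power-law form and an assumed exponent, not results from this work.

def permeability_update(k0, phi0, phi, n=3.0):
    """Power-law permeability-porosity relation, k = k0 * (phi/phi0)**n.

    k0, phi0 : reference permeability and porosity of the unaltered rock
    phi      : porosity after mineral precipitation or dissolution
    n        : exponent calibrated from pore-network simulations (assumed 3)
    """
    return k0 * (phi / phi0) ** n

# Example: 5% relative porosity loss from precipitation, k0 = 1e-13 m2.
print(permeability_update(1e-13, 0.20, 0.19))   # reduced permeability, m2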
Price, Owen F; Penman, Trent; Bradstock, Ross; Borah, Rittick
2016-10-01
Wildfires are complex adaptive systems and have been hypothesized to exhibit scale-dependent transitions in the drivers of fire spread. Among other things, this makes the prediction of final fire size from conditions at ignition difficult. We test this hypothesis by conducting multi-scale statistical modelling of the factors determining whether fires reached 10 ha, then 100 ha, then 1000 ha, and of the final size of fires >1000 ha. At each stage, the predictors were measures of weather, fuels, topography and fire suppression. The objectives were to identify differences among the models indicative of scale transitions, to assess the accuracy of the multi-step method for predicting fire size (compared to predicting final size from initial conditions), and to quantify the importance of the predictors. The data were 1116 fires that occurred in the eucalypt forests of New South Wales between 1985 and 2010. The models were similar at the different scales, though there were subtle differences. For example, the presence of roads affected whether fires reached 10 ha but not larger scales. Weather was the most important predictor overall, though fuel load, topography and ease of suppression all showed effects. Overall, there was no evidence that fires have scale-dependent transitions in behaviour. The models had predictive accuracies of 73%, 66%, 72% and 53% at the 10 ha, 100 ha, 1000 ha and final-size scales, respectively. When these steps were combined, the overall accuracy for predicting the size of fires was 62%, while the accuracy of the one-step model was only 20%. Thus, the multi-scale approach was an improvement on the single-scale approach, even though the predictive accuracy was probably insufficient for use as an operational tool. The analysis has also provided further evidence of the important role of weather, compared to fuel, suppression and topography, in driving fire behaviour. Copyright © 2016. Published by Elsevier Ltd.
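The multi-step structure described above can be illustrated with a chain of binary classifiers, each trained only on the fires that reached the previous size threshold; this is a minimal sketch with placeholder data and logistic regression, not the authors' statistical models.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder predictors (weather, fuel, topography, suppression) and final
# fire sizes standing in for the 1116-fire New South Wales dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1116, 6))
size_ha = rng.lognormal(mean=3.0, sigma=2.0, size=1116)

models = {}
for threshold in (10.0, 100.0, 1000.0):
    # Each stage is trained only on fires that reached the previous threshold.
    if threshold == 10.0:
        mask = np.ones(len(size_ha), dtype=bool)
    else:
        mask = size_ha >= threshold / 10.0
    y = (size_ha[mask] >= threshold).astype(int)
    models[threshold] = LogisticRegression(max_iter=1000).fit(X[mask], y)

# Chained probability that a fire exceeds 1000 ha, given ignition conditions.
p_exceed = np.ones(len(X))
for threshold in (10.0, 100.0, 1000.0):
    p_exceed *= models[threshold].predict_proba(X)[:, 1]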
Earth System Modeling and Field Experiments in the Arctic-Boreal Zone - Report from a NASA Workshop
NASA Technical Reports Server (NTRS)
Sellers, Piers; Rienecker Michele; Randall, David; Frolking, Steve
2012-01-01
Early climate modeling studies predicted that the Arctic Ocean and surrounding circumpolar land masses would heat up earlier and faster than other parts of the planet as a result of greenhouse gas-induced climate change, augmented by the sea-ice albedo feedback effect. These predictions have been largely borne out by observations over the last thirty years. However, despite constant improvement, global climate models have greater difficulty in reproducing the current climate in the Arctic than elsewhere, and the scatter between projections from different climate models is much larger in the Arctic than for other regions. Biogeochemical cycle (BGC) models indicate that the warming in the Arctic-Boreal Zone (ABZ) could lead to widespread thawing of the permafrost, along with massive releases of CO₂ and CH₄, and large-scale changes in the vegetation cover in the ABZ. However, the uncertainties associated with these BGC model predictions are even larger than those associated with the physical climate system models used to describe climate change. These deficiencies in climate and BGC models reflect, at least in part, an incomplete understanding of the Arctic climate system and can be related to inadequate observational data or analyses of existing data. A workshop was held at NASA/GSFC, May 22-24, 2012, to assess the predictive capability of the models, prioritize the critical science questions, and make recommendations regarding new field experiments needed to improve model subcomponents. This presentation will summarize the findings and recommendations of the workshop, including the need for aircraft and flux tower measurements and extension of existing in-situ measurements to improve process modeling of both the physical climate and biogeochemical cycle systems. Studies should be directly linked to remote sensing investigations with a view to scaling up the improved process models to the Earth System Model scale. Data assimilation and observing system simulation studies should be used to guide the deployment pattern and schedule for inversion studies as well. Synthesis and integration of previously funded Arctic-Boreal projects (e.g., ABLE, BOREAS, ICESCAPE, ICEBRIDGE, ARCTAS) should also be undertaken. Such an effort would include the integration of multiple remotely sensed products from the EOS satellites and other resources.
NASA Astrophysics Data System (ADS)
Persson, M. V.; Harsono, D.; Tobin, J. J.; van Dishoeck, E. F.; Jørgensen, J. K.; Murillo, N.; Lai, S.-P.
2016-05-01
Context. The physical structure of deeply embedded low-mass protostars (Class 0) on scales of less than 300 AU is still poorly constrained. While molecular line observations demonstrate the presence of disks with Keplerian rotation toward a handful of sources, others show no hint of rotation. Determining the structure on small scales (a few 100 AU) is crucial for understanding the physical and chemical evolution from cores to disks. Aims: We determine the presence and characteristics of compact, disk-like structures in deeply embedded low-mass protostars. A related goal is investigating how the derived structure affects the determination of gas-phase molecular abundances on hot-core scales. Methods: Two models of the emission, a Gaussian disk intensity distribution and a parametrized power-law disk model, are fitted to subarcsecond resolution interferometric continuum observations of five Class 0 sources, including one source with a confirmed Keplerian disk. Prior to fitting the models to the de-projected real visibilities, the estimated envelope from an independent model and any companion sources are subtracted. For reference, a spherically symmetric single power-law envelope is fitted to the larger scale emission (~1000 AU) and investigated further for one of the sources on smaller scales. Results: The radii of the fitted disk-like structures range from ~90-170 AU, and the derived masses depend on the method. Using the Gaussian disk model results in masses of 54-556 × 10⁻³ M⊙, and using the power-law disk model gives 9-140 × 10⁻³ M⊙. While the disk radii agree with previous estimates, the masses are different for some of the sources studied. Assuming a typical temperature distribution (T ∝ r^-0.5), the fractional amount of mass in the disk above 100 K varies from 7% to 30%. Conclusions: A thin disk model can approximate the emission and physical structure in the inner few 100 AU scales of the studied deeply embedded low-mass protostars and paves the way for analysis of a larger sample with ALMA. Kinematic data are needed to determine the presence of any Keplerian disk. Using previous observations of p-H₂¹⁸O, we estimate the gas-phase water abundances relative to total warm H₂ to be 6.2 × 10⁻⁵ (IRAS 2A), 0.33 × 10⁻⁵ (IRAS 4A-NW), 1.8 × 10⁻⁷ (IRAS 4B), and < 2 × 10⁻⁷ (IRAS 4A-SE), roughly an order of magnitude higher than previously inferred when both warm and cold H₂ were used as reference. A spherically symmetric single power-law envelope model fails to simultaneously reproduce both the small- and large-scale emission. Based on observations carried out with the IRAM Plateau de Bure Interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). Continuum data for the sources are available through http://dx.doi.org/10.5281/zenodo.47642 and at CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/590/A33
Computational psychotherapy research: scaling up the evaluation of patient-provider interactions.
Imel, Zac E; Steyvers, Mark; Atkins, David C
2015-03-01
In psychotherapy, the patient-provider interaction contains the treatment's active ingredients. However, the technology for analyzing the content of this interaction has not fundamentally changed in decades, limiting both the scale and specificity of psychotherapy research. New methods are required to "scale up" to larger evaluation tasks and "drill down" into the raw linguistic data of patient-therapist interactions. In the current article, we demonstrate the utility of statistical text analysis models called topic models for discovering the underlying linguistic structure in psychotherapy. Topic models identify semantic themes (or topics) in a collection of documents (here, transcripts). We used topic models to summarize and visualize 1,553 psychotherapy and drug therapy (i.e., medication management) transcripts. Results showed that topic models identified clinically relevant content, including affective, relational, and intervention related topics. In addition, topic models learned to identify specific types of therapist statements associated with treatment-related codes (e.g., different treatment approaches, patient-therapist discussions about the therapeutic relationship). Visualizations of semantic similarity across sessions indicate that topic models identify content that discriminates between broad classes of therapy (e.g., cognitive-behavioral therapy vs. psychodynamic therapy). Finally, predictive modeling demonstrated that topic model-derived features can classify therapy type with a high degree of accuracy. Computational psychotherapy research has the potential to scale up the study of psychotherapy to thousands of sessions at a time. We conclude by discussing the implications of computational methods such as topic models for the future of psychotherapy research and practice. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
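A minimal sketch of the topic-model workflow on session transcripts is given below; the library choice, parameters and toy transcripts are illustrative assumptions, as the abstract does not specify the authors' implementation.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder for the 1,553 session transcripts, one string per session.
transcripts = [
    "we talked about how the medication is affecting your sleep",
    "let's look at the thoughts behind that feeling of failure",
    "tell me more about your relationship with your mother",
]

vectorizer = CountVectorizer(stop_words="english", min_df=1)
counts = vectorizer.fit_transform(transcripts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)        # per-session topic proportions

# Top words per topic: a crude view of the semantic themes the model finds.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(k, top)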
NASA Astrophysics Data System (ADS)
Mendoza, Pablo A.; Mizukami, Naoki; Ikeda, Kyoko; Clark, Martyn P.; Gutmann, Ethan D.; Arnold, Jeffrey R.; Brekke, Levi D.; Rajagopalan, Balaji
2016-10-01
We examine the effects of regional climate model (RCM) horizontal resolution and forcing scaling (i.e., spatial aggregation of meteorological datasets) on the portrayal of climate change impacts. Specifically, we assess how the above decisions affect: (i) historical simulation of signature measures of hydrologic behavior, and (ii) projected changes in terms of annual water balance and hydrologic signature measures. To this end, we conduct our study in three catchments located in the headwaters of the Colorado River basin. Meteorological forcings for current and a future climate projection are obtained at three spatial resolutions (4-, 12- and 36-km) from dynamical downscaling with the Weather Research and Forecasting (WRF) regional climate model, and hydrologic changes are computed using four different hydrologic model structures. These projected changes are compared to those obtained from running hydrologic simulations with current and future 4-km WRF climate outputs re-scaled to 12- and 36-km. The results show that the horizontal resolution of WRF simulations heavily affects basin-averaged precipitation amounts, propagating into large differences in simulated signature measures across model structures. The implications of re-scaled forcing datasets on historical performance were primarily observed on simulated runoff seasonality. We also found that the effects of WRF grid resolution on projected changes in mean annual runoff and evapotranspiration may be larger than the effects of hydrologic model choice, which surpasses the effects from re-scaled forcings. Scaling effects on projected variations in hydrologic signature measures were found to be generally smaller than those coming from WRF resolution; however, forcing aggregation in many cases reversed the direction of projected changes in hydrologic behavior.
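The forcing re-scaling step described above (4-km WRF fields aggregated to 12 and 36 km) can be sketched as block averaging over 3 × 3 and 9 × 9 groups of cells; the array shape and variable below are placeholders, not the study's data.

import numpy as np

def block_average(field, factor):
    """Aggregate a 2-D field by averaging non-overlapping factor x factor blocks."""
    ny, nx = field.shape
    ny, nx = ny - ny % factor, nx - nx % factor   # trim to whole blocks
    trimmed = field[:ny, :nx]
    return trimmed.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

# Placeholder 4-km precipitation field over one headwater catchment.
precip_4km = np.random.gamma(shape=2.0, scale=1.5, size=(270, 270))
precip_12km = block_average(precip_4km, 3)    # 12-km forcing (3 x 3 cells)
precip_36km = block_average(precip_4km, 9)    # 36-km forcing (9 x 9 cells)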
Centennial-scale Holocene climate variations amplified by Antarctic Ice Sheet discharge
NASA Astrophysics Data System (ADS)
Bakker, Pepijn; Clark, Peter U.; Golledge, Nicholas R.; Schmittner, Andreas; Weber, Michael E.
2017-01-01
Proxy-based indicators of past climate change show that current global climate models systematically underestimate Holocene-epoch climate variability on centennial to multi-millennial timescales, with the mismatch increasing for longer periods. Proposed explanations for the discrepancy include ocean-atmosphere coupling that is too weak in models, insufficient energy cascades from smaller to larger spatial and temporal scales, or that global climate models do not consider slow climate feedbacks related to the carbon cycle or interactions between ice sheets and climate. Such interactions, however, are known to have strongly affected centennial- to orbital-scale climate variability during past glaciations, and are likely to be important in future climate change. Here we show that fluctuations in Antarctic Ice Sheet discharge caused by relatively small changes in subsurface ocean temperature can amplify multi-centennial climate variability regionally and globally, suggesting that a dynamic Antarctic Ice Sheet may have driven climate fluctuations during the Holocene. We analysed high-temporal-resolution records of iceberg-rafted debris derived from the Antarctic Ice Sheet, and performed both high-spatial-resolution ice-sheet modelling of the Antarctic Ice Sheet and multi-millennial global climate model simulations. Ice-sheet responses to decadal-scale ocean forcing appear to be less important, possibly indicating that the future response of the Antarctic Ice Sheet will be governed more by long-term anthropogenic warming combined with multi-centennial natural variability than by annual or decadal climate oscillations.
Farmer, Adrian H.; Cade, Brian S.; Terrell, James W.; Henriksen, Jim H.; Runge, Jeffery T.
2005-01-01
The primary objectives of this evaluation were to improve the performance of the Whooping Crane Habitat Suitability model (C4R) used by the U.S. Fish and Wildlife Service (Service) for defining the relationship between river discharge and habitat availability, and to assist the Service in implementing improved model(s) with existing hydraulic files. The C4R habitat model is applied at the scale of individual river cross-sections, but the model outputs are scaled up to larger reaches of the river using a decision support “model” comprised of other data and procedures. Hence, the validity of the habitat model depends at least partially on how its outputs are incorporated into this larger context. For that reason, we also evaluated other procedures, including the PHABSIM data files, the FORTRAN computer programs used to implement the model, and other parameters used to simulate the relationship between river flows and the availability of Whooping Crane roosting habitat along more than 100 miles of heterogeneous river channels. An equally important objective of this report was to fully document these related procedures as well as the model and evaluation results so that interested parties could readily understand the technical basis for the Service’s recommendations.
Freeze-drying process monitoring using a cold plasma ionization device.
Mayeresse, Y; Veillon, R; Sibille, P H; Nomine, C
2007-01-01
A cold plasma ionization device has been designed to monitor freeze-drying processes in situ by monitoring lyophilization chamber moisture content. This plasma device, which consists of a probe that can be mounted directly on the lyophilization chamber, depends upon the ionization of nitrogen and water molecules using a radiofrequency generator and spectrometric signal collection. The study performed on this probe shows that it is steam sterilizable, simple to integrate, reproducible, and sensitive. The limitations include suitable positioning in the lyophilization chamber, calibration, and signal integration. Sensitivity was evaluated in relation to the quantity of vials and the probe positioning, and correlation with existing methods, such as microbalance, was established. These tests verified signal reproducibility through three freeze-drying cycles. Scaling-up studies demonstrated a similar product signature for the same product using pilot-scale and larger-scale equipment. On an industrial scale, the method efficiently monitored the freeze-drying cycle, but in a larger industrial freeze-dryer the signal was slightly modified. This was mainly due to the positioning of the plasma device, in relation to the vapor flow pathway, which is not necessarily homogeneous within the freeze-drying chamber. The plasma tool is a relevant method for monitoring freeze-drying processes and may in the future allow the verification of current thermodynamic freeze-drying models. This plasma technique may ultimately represent a process analytical technology (PAT) approach for the freeze-drying process.
Cowley, Lauren A; Petersen, Fernanda C; Junges, Roger; Jimson D Jimenez, Med; Morrison, Donald A; Hanage, William P
2018-06-01
Homologous recombination in the genetic transformation model organism Streptococcus pneumoniae is thought to be important in the adaptation and evolution of this pathogen. While competent pneumococci are able to scavenge DNA added to laboratory cultures, large-scale transfers of multiple kb are rare under these conditions. We used whole genome sequencing (WGS) to map transfers in recombinants arising from contact of competent cells with non-competent 'target' cells, using strains with known genomes, distinguished by a total of ~16,000 SNPs. Experiments designed to explore the effect of environment on large scale recombination events used saturating purified donor DNA, short-term cell assemblages on Millipore filters, and mature biofilm mixed cultures. WGS of 22 recombinants for each environment mapped all SNPs that were identical between the recombinant and the donor but not the recipient. The mean recombination event size was found to be significantly larger in cell-to-cell contact cultures (4051 bp in filter assemblage and 3938 bp in biofilm co-culture versus 1815 bp with saturating DNA). Up to 5.8% of the genome was transferred, through 20 recombination events, to a single recipient, with the largest single event incorporating 29,971 bp. We also found that some recombination events are clustered, that these clusters are more likely to occur in cell-to-cell contact environments, and that they cause significantly increased linkage of genes as far apart as 60,000 bp. We conclude that pneumococcal evolution through homologous recombination is more likely to occur on a larger scale in environments that permit cell-to-cell contact.
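The mapping of recombination events can be sketched as grouping runs of consecutive donor-matching SNPs along the recombinant genome; the snippet below is an illustrative reconstruction with invented positions and alleles, and the authors' pipeline may handle gaps and clustered events differently.

# Each SNP: (position, recombinant allele matches donor rather than recipient?)
snps = [(1200, False), (5000, True), (5600, True), (7100, True),
        (40000, False), (52000, True), (52900, True)]

events, current = [], []
for pos, is_donor in sorted(snps):
    if is_donor:
        current.append(pos)
    elif current:                      # a recipient-matching SNP ends the event
        events.append((current[0], current[-1]))
        current = []
if current:
    events.append((current[0], current[-1]))

sizes = [end - start + 1 for start, end in events]
print(events)                          # [(5000, 7100), (52000, 52900)]
print(sum(sizes) / len(sizes))         # mean recombination event size, bp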
Accelerating Calculations of Reaction Dissipative Particle Dynamics in LAMMPS
2017-05-17
order reaction mechanism, the best acceleration was 6.1 times. For a larger, more chemically detailed mechanism, the best acceleration exceeded 60 times...simulations at previously inaccessible scales. A principal feature of DPD-RX is its ability to model chemical reactions within each CG particle. The...change in composition due to chemical reactions is described by a system of ordinary differential equations (ODEs) that are evaluated at each DPD time
Thermosphere Extension of the Whole Atmosphere Community Climate Model
2010-12-04
tropospheric ozone and related tracers: Description and evaluation of MOZART, version 2, J. Geophys. Res., 108(D24), 4784, doi:10.1029/2002JD002853. Immel, T... troposphere to the upper thermosphere and their variability on interannual, seasonal, and daily scales. These quantities are compared with observational and...gravity waves are excited by tropospheric processes. As their amplitudes grow exponentially with altitude, they will cause larger variability
Beyond Borders: Innovating from Conflict to Community in Public Art Engagement in Holon, Israel
ERIC Educational Resources Information Center
Rubenstein, Ziva Haller
2012-01-01
The story of the Center for Digital Art in Holon is a story of innovation in the face of adversity. At key points of escalation in the Middle East conflict, this small-scale arts center managed to rise above and beyond the larger and more traditional museums in Israel to create new models for arts engagement. This article will present the critical…
Large-scale flows, sheet plumes and strong magnetic fields in a rapidly rotating spherical dynamo
NASA Astrophysics Data System (ADS)
Takahashi, F.
2011-12-01
Mechanisms of magnetic field intensification by flows of an electrically conducting fluid in a rapidly rotating spherical shell are investigated. Bearing the dynamos of the Earth and planets in mind, the Ekman number is set at 10⁻⁵. A strong dipolar solution with magnetic energy 55 times larger than the kinetic energy of thermal convection is obtained. In a regime of small viscosity and inertia with a strong magnetic field, the convection structure consists of a few large-scale retrograde flows in the azimuthal direction and sporadic thin sheet-like plumes. The magnetic field is amplified through stretching of magnetic lines, which occurs typically through three types of flow: the retrograde azimuthal flow near the outer boundary, the downwelling flow of the sheet plume, and the prograde azimuthal flow near the rim of the tangent cylinder induced by the downwelling flow. It is found that each flow structure is accompanied by either current loops or current sheets. Current loops emerge as a result of stretching the magnetic lines along the magnetic field, whereas the current sheets are formed to counterbalance the Coriolis force. The convection structure and the processes of magnetic field generation found in the present model are distinct from those in models at larger/smaller Ekman number.
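For reference, the two dimensionless quantities quoted above can be written as follows; these are standard definitions with the shell thickness D as the length scale, and the paper's exact convention may differ.

E = \frac{\nu}{\Omega D^{2}} = 10^{-5}, \qquad \frac{E_{\mathrm{mag}}}{E_{\mathrm{kin}}} = \frac{\int B^{2}/(2\mu_{0})\,\mathrm{d}V}{\int \rho\, u^{2}/2\,\mathrm{d}V} \approx 55,

where \nu is the kinematic viscosity, \Omega the rotation rate, B the magnetic field, u the flow speed, and \rho the fluid density.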
Tropical Cyclone Information System
NASA Technical Reports Server (NTRS)
Li, P. Peggy; Knosp, Brian W.; Vu, Quoc A.; Yi, Chao; Hristova-Veleva, Svetla M.
2009-01-01
The JPL Tropical Cyclone Information System (TCIS) is a Web portal (http://tropicalcyclone.jpl.nasa.gov) that provides researchers with an extensive set of observed hurricane parameters together with large-scale and convection resolving model outputs. It provides a comprehensive set of high-resolution satellite, airborne, and in-situ observations in both image and data formats. Large-scale datasets depict the surrounding environmental parameters such as SST (Sea Surface Temperature) and aerosol loading. Model outputs and analysis tools are provided to evaluate model performance and compare observations from different platforms. The system pertains to the thermodynamic and microphysical structure of the storm, the air-sea interaction processes, and the larger-scale environment as depicted by ocean heat content and the aerosol loading of the environment. Currently, the TCIS is populated with satellite observations of all tropical cyclones observed globally during 2005. There is a plan to extend the database both forward in time to the present and backward to 1998. The portal is powered by a MySQL database and an Apache/Tomcat Web server on a Linux system. The interactive graphic user interface is provided by Google Map.