Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jahedi, Mohammad; Ardeljan, Milan; Beyerlein, Irene J.; Paydar, Mohammad Hossein; Knezevic, Marko
2015-06-01
We use a multi-scale, polycrystal plasticity micromechanics model to study the development of orientation gradients within crystals deforming by slip. At the largest scale, the model is a full-field crystal plasticity finite element model with explicit 3D grain structures created by DREAM.3D, and at the finest scale, at each integration point, slip is governed by a dislocation density based hardening law. For deformed polycrystals, the model predicts intra-granular misorientation distributions that follow well the scaling law seen experimentally by Hughes et al., Acta Mater. 45(1), 105-112 (1997), independent of strain level and deformation mode. We reveal that the application of a simple compression step prior to simple shearing significantly enhances the development of intra-granular misorientations compared to simple shearing alone for the same amount of total strain. We rationalize that the changes in crystallographic orientation and shape evolution when going from simple compression to simple shearing increase the local heterogeneity in slip, leading to the boost in intra-granular misorientation development. In addition, the analysis finds that simple compression introduces additional crystal orientations that are prone to developing intra-granular misorientations, which also help to increase intra-granular misorientations. Many metal working techniques for refining grain sizes involve a preliminary or concurrent application of compression with severe simple shearing. Our finding reveals that a pre-compression deformation step can, in fact, serve as another processing variable for improving the rate of grain refinement during the simple shearing of polycrystalline metals.
Correcting the SIMPLE Model of Free Recall
ERIC Educational Resources Information Center
Lee, Michael D.; Pooley, James P.
2013-01-01
The scale-invariant memory, perception, and learning (SIMPLE) model developed by Brown, Neath, and Chater (2007) formalizes the theoretical idea that scale invariance is an important organizing principle across numerous cognitive domains and has made an influential contribution to the literature dealing with modeling human memory. In the context…
Pyrotechnic modeling for the NSI and pin puller
NASA Technical Reports Server (NTRS)
Powers, Joseph M.; Gonthier, Keith A.
1993-01-01
A discussion concerning the modeling of pyrotechnically driven actuators is presented in viewgraph format. The following topics are discussed: literature search, constitutive data for full-scale model, simple deterministic model, observed phenomena, and results from simple model.
Geometry and Reynolds-Number Scaling on an Iced Business-Jet Wing
NASA Technical Reports Server (NTRS)
Lee, Sam; Ratvasky, Thomas P.; Thacker, Michael; Barnhart, Billy P.
2005-01-01
A study was conducted to develop a method to scale the effect of ice accretion on a full-scale business jet wing model to a 1/12-scale model at greatly reduced Reynolds number. Full-scale, 5/12-scale, and 1/12-scale models of identical airfoil section were used in this study. Three types of ice accretion were studied: 22.5-minute ice protection system failure shape, 2-minute initial ice roughness, and a runback shape that forms downstream of a thermal anti-ice system. The results showed that the 22.5-minute failure shape could be scaled from full-scale to 1/12-scale through simple geometric scaling. The 2-minute roughness shape could be scaled by choosing an appropriate grit size. The runback ice shape exhibited greater Reynolds number effects and could not be scaled by simple geometric scaling of the ice shape.
Modeling Age-Related Differences in Immediate Memory Using SIMPLE
ERIC Educational Resources Information Center
Surprenant, Aimee M.; Neath, Ian; Brown, Gordon D. A.
2006-01-01
In the SIMPLE model (Scale Invariant Memory and Perceptual Learning), performance on memory tasks is determined by the locations of items in multidimensional space, and better performance is associated with having fewer close neighbors. Unlike most previous simulations with SIMPLE, the ones reported here used measured, rather than assumed,…
SimpleBox 4.0: Improving the model while keeping it simple….
Hollander, Anne; Schoorl, Marian; van de Meent, Dik
2016-04-01
Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is one often-used multimedia fate model, first developed in 1986. Since then, two updated versions were published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version of SimpleBox was developed and is made public here: SimpleBox 4.0. In this new model, eight major changes were implemented: removal of the local scale and vegetation compartments, addition of lake compartments and deep ocean compartments (including the thermohaline circulation), implementation of intermittent rain instead of drizzle and of depth-dependent soil concentrations, and adjustment of the partitioning behavior for organic acids and bases as well as of the value for the enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances were explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. The undesirably high model complexity caused by the vegetation compartments and the local scale was removed to increase the simplicity and user friendliness of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.
Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model
NASA Astrophysics Data System (ADS)
Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman
2015-01-01
The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.
Assignment of boundary conditions in embedded ground water flow models
Leake, S.A.
1998-01-01
Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
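The bilinear interpolation of heads proposed above can be sketched in a few lines. The function below is a generic implementation, assuming a rectilinear grid of cell-center coordinates; the paper's block-centered finite-difference bookkeeping (row/column flow components, model layers) is not reproduced.

```python
import numpy as np

def bilinear_head(x, y, xg, yg, H):
    """Interpolate head H (defined at coarse-model cell centers with
    coordinates xg, yg) to an arbitrary point (x, y) on a fine-model
    perimeter. H is indexed [row j, column i]."""
    i = np.searchsorted(xg, x) - 1          # cell column containing x
    j = np.searchsorted(yg, y) - 1          # cell row containing y
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])  # fractional position in x
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])  # fractional position in y
    return ((1 - tx) * (1 - ty) * H[j, i] + tx * (1 - ty) * H[j, i + 1]
            + (1 - tx) * ty * H[j + 1, i] + tx * ty * H[j + 1, i + 1])
```

Bilinear interpolation reproduces any head field that is linear in x and y exactly, which is one reason it yields smooth perimeter boundary conditions.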
Defining Simple nD Operations Based on Prismatic nD Objects
NASA Astrophysics Data System (ADS)
Arroyo Ohori, K.; Ledoux, H.; Stoter, J.
2016-10-01
An alternative to the traditional approaches that model 2D/3D space, time, scale and other parametrisable characteristics separately in GIS lies in the higher-dimensional modelling of geographic information, in which a chosen set of non-spatial characteristics, e.g. time and scale, are modelled as extra geometric dimensions perpendicular to the spatial ones, thus creating a higher-dimensional model. While higher-dimensional models are undoubtedly powerful, they are also hard to create and manipulate due to our lack of an intuitive understanding of dimensions higher than three. As a solution to this problem, this paper proposes a methodology that makes nD object generation easier by splitting the creation and manipulation process into three steps: (i) constructing simple nD objects based on nD prismatic polytopes (analogous to prisms in 3D), (ii) defining simple modification operations at the vertex level, and (iii) simple postprocessing to fix errors introduced in the model. As a use case, we show how two sets of operations can be defined and implemented in a dimension-independent manner using this methodology: the most common transformations (i.e. translation, scaling and rotation) and the collapse of objects. The nD objects generated in this manner can then be used as a basis for an nD GIS.
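Vertex-level transformations of the kind described can indeed be written dimension-independently. The sketch below (hypothetical function names, using numpy) applies translation, per-axis scaling, and a rotation in the plane of two chosen axes to an array of nD vertices; it illustrates the idea, not the paper's actual operators.

```python
import numpy as np

def translate(verts, offset):
    """Shift every nD vertex by the same offset vector."""
    return verts + np.asarray(offset)

def scale(verts, factors):
    """Scale each coordinate axis independently."""
    return verts * np.asarray(factors)

def rotate(verts, i, j, theta):
    """Rotate vertices by angle theta in the plane spanned by axes
    i and j; in nD, elementary rotations act per coordinate plane."""
    R = np.eye(verts.shape[1])
    R[i, i] = R[j, j] = np.cos(theta)
    R[i, j], R[j, i] = -np.sin(theta), np.sin(theta)
    return verts @ R.T
```

Because none of the functions hard-code a dimension, the same code handles 3D, 4D (e.g. space + time) or 5D (space + time + scale) vertices.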
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.
Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin
2010-05-12
Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias in models that are run at larger scales and neglect subgrid-scale variability. In the present study, we investigate whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 x 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.
Active earth pressure model tests versus finite element analysis
NASA Astrophysics Data System (ADS)
Pietrzak, Magdalena
2017-06-01
The purpose of the paper is to compare failure mechanisms observed in small scale model tests on a granular sample in the active state with those simulated by the finite element method (FEM) using Plaxis 2D software. Small scale model tests were performed on a rectangular granular sample retained by a rigid wall. Deformation of the sample resulted from simple wall translation in the direction "from the soil" (the active earth pressure state). A simple Coulomb-Mohr model for soil can be helpful in interpreting experimental findings in the case of granular materials. It was found that the general alignment of the strain localization pattern (failure mechanism) may belong to macro-scale features and be dominated by the test boundary conditions rather than the nature of the granular sample.
Self-Organized Criticality and Scaling in Lifetime of Traffic Jams
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1995-01-01
The deterministic cellular automaton 184 (the one-dimensional asymmetric simple-exclusion model with parallel dynamics) is extended to take into account injection or extraction of particles. The model represents the traffic flow on a highway with inflow or outflow of cars. Introducing injection or extraction of particles into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting a self-organized criticality. The typical lifetime
Phonon scattering in nanoscale systems: lowest order expansion of the current and power expressions
NASA Astrophysics Data System (ADS)
Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads
2006-04-01
We use the non-equilibrium Green's function method to describe the effects of phonon scattering on the conductance of nano-scale devices. Useful and accurate approximations are developed that both provide (i) computationally simple formulas for large systems and (ii) simple analytical models. In addition, the simple models can be used to fit experimental data and provide physical parameters.
A Generalized Simple Formulation of Convective Adjustment ...
Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and yet still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated by using data from two different special field campaigns. We then implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation can give realistic temporal and spatial variations of τ across the continental U.S. as well as grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36- to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la
NASA Technical Reports Server (NTRS)
Kraft, R. E.; Yu, J.; Kwan, H. W.
1999-01-01
The primary purpose of this study is to develop improved models for the acoustic impedance of treatment panels at high frequencies, for application to subscale treatment designs. Effects that cause significant deviation of the impedance from simple geometric scaling are examined in detail, an improved high-frequency impedance model is developed, and the improved model is correlated with high-frequency impedance measurements. Only single-degree-of-freedom honeycomb sandwich resonator panels with either perforated sheet or "linear" wiremesh faceplates are considered. The objective is to understand those effects that cause the simple single-degree-of-freedom resonator panels to deviate at the higher-scaled frequency from the impedance that would be obtained at the corresponding full-scale frequency. This will allow the subscale panel to be designed to achieve a specified impedance spectrum over at least a limited range of frequencies. An advanced impedance prediction model has been developed that accounts for some of the known effects at high frequency that have previously been ignored as a small source of error for full-scale frequency ranges.
A simple model of hohlraum power balance and mitigation of SRS
Albright, Brian J.; Montgomery, David S.; Yin, Lin; ...
2016-04-01
A simple energy balance model has been obtained for laser-plasma heating in indirect drive hohlraum plasma that allows rapid temperature scaling and evolution with parameters such as plasma density and composition. Furthermore, this model enables assessment of the effects on plasma temperature of, e.g., adding high-Z dopant to the gas fill or magnetic fields.
Mathematics and the Internet: A Source of Enormous Confusion and Great Potential
2009-05-01
free Internet Myth The story recounted below of the scale-free nature of the Internet seems convincing, sound, and almost too good to be true ...models. In fact, much of the initial excitement in the nascent field of network science can be attributed to an early and appealingly simple class...this new class of networks, commonly referred to as scale-free networks. The term scale-free derives from the simple observation that power-law node
Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity
Harbin Li; Steven G. McNulty
2007-01-01
Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL...
Self-organized criticality in asymmetric exclusion model with noise for freeway traffic
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1995-02-01
The one-dimensional asymmetric simple-exclusion model with open boundaries for parallel update is extended to take into account temporary stopping of particles. The model represents the traffic flow on a highway with temporary deceleration of cars. Introducing temporary stopping into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting a self-organized criticality. In the self-organized critical state, start-stop waves (or traffic jams) appear with various sizes (or lifetimes). The typical interval ⟨s⟩ between consecutive jams scales as ⟨s⟩ ≃ L^ν with ν = 0.51 ± 0.05, where L is the system size. It is shown that the cumulative jam-interval distribution N_s(L) satisfies the finite-size scaling form N_s(L) ≃ L^(-ν) f(s/L^ν). Also, the typical lifetime
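A minimal parallel-update version of such a traffic model is easy to simulate. The sketch below uses a periodic ring rather than the paper's open boundaries, and the stopping probability is an illustrative value, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(road, p_stop=0.1):
    """One parallel update of the asymmetric simple-exclusion model
    (rule 184) with random temporary stopping: a car advances iff the
    next site is empty, except that with probability p_stop it stays
    put for this step. `road` is a boolean occupation array on a ring."""
    ahead = np.roll(road, -1)                                   # site in front
    move = road & ~ahead & (rng.random(road.size) > p_stop)    # cars that advance
    return (road & ~move) | np.roll(move, 1)                   # stay + arrivals

road = rng.random(200) < 0.3   # ~30% car density
for _ in range(500):
    road = step(road)
```

On a ring the update conserves the number of cars, which is a convenient sanity check; the open-boundary version of the paper instead injects and extracts particles at the ends.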
NASA Technical Reports Server (NTRS)
Carlson, J. M.; Chayes, J. T.; Swindle, G. H.; Grannan, E. R.
1990-01-01
The scaling behavior of sandpile models is investigated analytically. First, it is shown that sandpile models contain a set of domain walls, referred to as troughs, which bound regions that can experience avalanches. It is further shown that the dynamics of the troughs is governed by a simple set of rules involving birth, death, and coalescence events. A simple trough model is then introduced, and it is proved that the model has a phase transition with the density of the troughs as an order parameter and that, in the thermodynamic limit, the trough density goes to zero at the transition point. Finally, it is shown that the observed scaling behavior is a consequence of finite-size effects.
A simple model for calculating air pollution within street canyons
NASA Astrophysics Data System (ADS)
Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.
2014-04-01
This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. Dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters. Functional forms of these parameters have been obtained from full scale data measured in street canyons at four European cities. The sensitivity of SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of results shows that there is a good agreement between estimated and observed hourly concentrations (e.g. fractional bias are -10.3% for NOx and +7.8% for NO2). The agreement between the estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model shows a better performance for wind speeds >2 m s-1 than for lower wind speeds and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and that of a widely used operational dispersion model (OSPM), both using the same input information.
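The core scaling the abstract describes, concentration set by the emission rate, the canyon width, a dispersive velocity scale and the background, has the generic box-model form sketched below. This is a schematic assumption for illustration; the actual SEUS parameterisations of wind- and traffic-induced turbulence and its NO2 photochemistry scheme are not reproduced.

```python
def canyon_concentration(q_emission, width, u_dispersive, c_background):
    """Generic street-canyon box-model scaling: the in-canyon excess
    over the background concentration grows with the line-source
    emission rate (mass per unit length per unit time) and falls with
    the canyon width (m) and the dispersive velocity scale (m/s)."""
    return c_background + q_emission / (width * u_dispersive)
```

Under this scaling, doubling the canyon width (or the dispersive velocity scale) halves the excess concentration above background, which is the kind of sensitivity the paper's analytical study examines.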
Complex versus simple models: ion-channel cardiac toxicity prediction.
Mistry, Hitesh B
2018-01-01
There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed equally as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the latest one. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.
Investigation of shear damage considering the evolution of anisotropy
NASA Astrophysics Data System (ADS)
Kweon, S.
2013-12-01
The damage that occurs in shear deformations is investigated in view of anisotropy evolution. It is widely believed in the mechanics research community that damage (or porosity) does not evolve (increase) in shear deformations, since the hydrostatic stress in shear is zero. This paper proves that the above statement can be false in large deformations of simple shear. The simulation using the proposed anisotropic ductile fracture model (macro-scale) in this study indicates that the hydrostatic stress becomes nonzero and (thus) porosity evolves (increases or decreases) in the simple shear deformation of anisotropic (orthotropic) materials. The simple shear simulation using a crystal-plasticity-based damage model (meso-scale) shows the same physics as manifested in the above macro-scale model: porosity evolves due to the grain-to-grain interaction, i.e., due to the evolution of anisotropy. Through a series of simple shear simulations, this study investigates the effect of the evolution of anisotropy, i.e., the rotation of the orthotropic axes, on the damage (porosity) evolution. The effect of the evolution of void orientation and void shape on the damage (porosity) evolution is investigated as well. It is found that the interaction among porosity, the matrix anisotropy and void orientation/shape plays a crucial role in the ductile damage of porous materials.
De Bartolo, Samuele; Fallico, Carmine; Veltri, Massimo
2013-01-01
Hydraulic conductivity and effective porosity values for the confined sandy loam aquifer of the Montalto Uffugo (Italy) test field were obtained by laboratory and field measurements; the first were carried out on undisturbed soil samples and the others by slug and aquifer tests. A direct simple-scaling analysis was performed for the whole range of measurements and a comparison among the different types of fractal models describing the scale behavior was made. Some indications about the largest pore size to utilize in the fractal models were given. The results obtained for a sandy loam soil show that it is possible to obtain global indications on the behavior of the hydraulic conductivity versus the porosity by utilizing a simple scaling relation and a fractal model in a coupled manner. PMID:24385876
An egalitarian network model for the emergence of simple and complex cells in visual cortex
Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert
2004-01-01
We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891
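A single integrate-and-fire unit of the kind the 4,000-cell network is built from can be sketched as follows. This is a current-driven leaky integrate-and-fire neuron with illustrative parameter values; the actual model uses conductance-based point neurons with lateral and geniculate synaptic input, which are omitted here.

```python
def lif_spikes(inputs, dt=0.1, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential v relaxes
    toward the input drive with time constant tau; when v crosses
    threshold the neuron emits a spike and v is reset.
    Returns the list of spike time indices."""
    v, spikes = 0.0, []
    for t, drive in enumerate(inputs):
        v += (dt / tau) * (drive - v)   # forward-Euler leaky integration
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes
```

With constant drive the potential relaxes toward the drive value, so only suprathreshold drive ever produces spikes; in the network model it is the balance of recurrent and geniculate drive onto such units that determines simple versus complex responses.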
ERIC Educational Resources Information Center
Rovšek, Barbara; Guštin, Andrej
2018-01-01
An astronomy "experiment" composed of three parts is described in the article. Given the necessary data, a simple model of the inner planets of the solar system is made in the first part, with the planets on circular orbits at an appropriate scale. In the second part revolution of the figurines used as model representations of the planets along…
Composite annotations: requirements for mapping multiscale data and models to biomedical ontologies
Cook, Daniel L.; Mejino, Jose L. V.; Neal, Maxwell L.; Gennari, John H.
2009-01-01
Current methods for annotating biomedical data resources rely on simple mappings between data elements and the contents of a variety of biomedical ontologies and controlled vocabularies. Here we point out that such simple mappings are inadequate for large-scale multiscale, multidomain integrative “virtual human” projects. For such integrative challenges, we describe a “composite annotation” schema that is simple yet sufficiently extensible for mapping the biomedical content of a variety of data sources and biosimulation models to available biomedical ontologies. PMID:19964601
NASA Astrophysics Data System (ADS)
Cubrovic, Mihailo
2005-02-01
We report on our theoretical and numerical results concerning the transport mechanisms in the asteroid belt. We first derive a simple kinetic model of chaotic diffusion and show how it gives rise to some simple correlations (but not laws) between the removal time (the time for an asteroid to experience a qualitative change of dynamical behavior and enter a wide chaotic zone) and the Lyapunov time. The correlations are shown to arise in two different regimes, characterized by exponential and power-law scalings. We also show how the so-called "stable chaos" (exponential regime) is related to anomalous diffusion. Finally, we check our results numerically and discuss their possible applications in analyzing the motion of particular asteroids.
Simple scaling of catastrophic landslide dynamics.
Ekström, Göran; Stark, Colin P
2013-03-22
Catastrophic landslides involve the acceleration and deceleration of millions of tons of rock and debris in response to the forces of gravity and dissipation. Their unpredictability and frequent location in remote areas have made observations of their dynamics rare. Through real-time detection and inverse modeling of teleseismic data, we show that landslide dynamics are primarily determined by the length scale of the source mass. When combined with geometric constraints from satellite imagery, the seismically determined landslide force histories yield estimates of landslide duration, momenta, potential energy loss, mass, and runout trajectory. Measurements of these dynamical properties for 29 teleseismogenic landslides are consistent with a simple acceleration model in which height drop and rupture depth scale with the length of the failing slope.
HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.
Juusola, Jessie L; Brandeau, Margaret L
2016-04-01
To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
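With a purely linear objective and divisible programs, the optimal allocation under a fixed budget reduces to funding programs in descending benefit-per-dollar order until the budget is exhausted. The sketch below uses hypothetical cost and QALY numbers chosen only to mirror the paper's qualitative ordering (CBE before ART before PrEP), and it omits the diseconomies of scale and benefit subadditivity the paper models.

```python
def allocate(programs, budget):
    """programs: name -> (cost of full implementation, QALYs gained at
    full implementation). Returns name -> funded fraction in [0, 1].
    Assumes linear benefits and fractionally scalable programs, so the
    greedy benefit-per-dollar ordering is optimal."""
    levels = {name: 0.0 for name in programs}
    ranked = sorted(programs,
                    key=lambda n: programs[n][1] / programs[n][0],
                    reverse=True)
    for name in ranked:
        cost = programs[name][0]
        spend = min(cost, budget)
        levels[name] = spend / cost
        budget -= spend
    return levels

# Hypothetical illustrative numbers (cost, QALYs), not from the paper.
demo = {"CBE": (10.0, 50.0), "ART": (100.0, 300.0), "PrEP": (200.0, 100.0)}
```

With these made-up numbers the greedy rule fully funds CBE, partially funds ART, and leaves PrEP unfunded, matching the prioritization described in the abstract.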
Length-scale crossover of the hydrophobic interaction in a coarse-grained water model
NASA Astrophysics Data System (ADS)
Chaimovich, Aviel; Shell, M. Scott
2013-11-01
It has been difficult to establish a clear connection between the hydrophobic interaction among small molecules typically studied in molecular simulations (a weak, oscillatory force) and that found between large, macroscopic surfaces in experiments (a strong, monotonic force). Here, we show that both types of interaction can emerge with a simple, core-softened water model that captures water's unique pairwise structure. As in hydrophobic hydration, we find that the hydrophobic interaction manifests a length-scale dependence, exhibiting distinct driving forces in the molecular and macroscopic regimes. Moreover, the ability of this simple model to capture both regimes suggests that several features of the hydrophobic force can be understood merely through water's pair correlations.
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background: Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results: Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale and on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions: The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models.
In an example it also improved the power of tests to identify differential expression. PMID:12659637
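The postulated quadratic variance structure can be illustrated with a toy fit: regress replicate-group variances on squared group means. This ordinary-least-squares sketch is a crude stand-in for the quasi-likelihood estimation described above; the synthetic replicate groups are constructed, not real expression data.

```python
import statistics

def fit_variance_function(groups):
    """Fit Var(y) = a + b * mean(y)**2 across replicate groups by ordinary
    least squares on (mean^2, variance) pairs; a crude stand-in for the
    quasi-likelihood estimation of the variance structure."""
    xs = [statistics.mean(g) ** 2 for g in groups]
    ys = [statistics.variance(g) for g in groups]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Synthetic replicate pairs built so that Var = 1 + 0.01 * mean^2 exactly.
groups = []
for m in (10.0, 20.0, 40.0):
    d = ((1.0 + 0.01 * m * m) / 2.0) ** 0.5
    groups.append([m - d, m + d])
a, b = fit_variance_function(groups)
```

Because the synthetic groups lie exactly on the postulated variance curve, the fit recovers a = 1 and b = 0.01; with real data the quasi-likelihood machinery additionally supplies approximate confidence intervals.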
How well can regional fluxes be derived from smaller-scale estimates?
NASA Technical Reports Server (NTRS)
Moore, Kathleen E.; Fitzjarrald, David R.; Ritter, John A.
1992-01-01
Regional surface fluxes are essential lower boundary conditions for large scale numerical weather and climate models and are the elements of global budgets of important trace gases. Surface properties affecting the exchange of heat, moisture, momentum and trace gases vary with length scales from one meter to hundreds of km. A classical difficulty is that fluxes have been measured directly only at points or along lines. The process of scaling up observations limited in space and/or time to represent larger areas was done by assigning properties to surface classes and combining estimated or calculated fluxes using an area weighted average. It is not clear that a simple area weighted average is sufficient to produce the large scale from the small scale, chiefly due to the effect of internal boundary layers, nor is it known how important the uncertainty is to large scale model outcomes. Simultaneous aircraft and tower data obtained in the relatively simple terrain of the western Alaska tundra were used to determine the extent to which surface type variation can be related to fluxes of heat, moisture, and other properties. Surface type was classified as lake or land with aircraft borne infrared thermometer, and flight level heat and moisture fluxes were related to surface type. The magnitude and variety of sampling errors inherent in eddy correlation flux estimation place limits on how well any flux can be known even in simple geometries.
Statistical self-similarity of width function maxima with implications to floods
Veitzer, S.A.; Gupta, V.K.
2001-01-01
Recently a new theory of random self-similar river networks, called the RSN model, was introduced to explain empirical observations regarding the scaling properties of distributions of various topologic and geometric variables in natural basins. The RSN model predicts that such variables exhibit statistical simple scaling, when indexed by Horton-Strahler order. The average side tributary structure of RSN networks also exhibits Tokunaga-type self-similarity which is widely observed in nature. We examine the scaling structure of distributions of the maximum of the width function for RSNs for nested, complete Strahler basins by performing ensemble simulations. The maximum of the width function exhibits distributional simple scaling, when indexed by Horton-Strahler order, for both RSNs and natural river networks extracted from digital elevation models (DEMs). We also test a power-law relationship between Horton ratios for the maximum of the width function and drainage areas. These results represent first steps in formulating a comprehensive physical statistical theory of floods at multiple space-time scales for RSNs as discrete hierarchical branching structures. © 2001 Published by Elsevier Science Ltd.
The Factor Structure and Screening Utility of the Social Interaction Anxiety Scale
ERIC Educational Resources Information Center
Rodebaugh, Thomas L.; Woods, Carol M.; Heimberg, Richard G.; Liebowitz, Michael R.; Schneier, Franklin R.
2006-01-01
The widely used Social Interaction Anxiety Scale (SIAS; R. P. Mattick & J. C. Clarke, 1998) possesses favorable psychometric properties, but questions remain concerning its factor structure and item properties. Analyses included 445 people with social anxiety disorder and 1,689 undergraduates. Simple unifactorial models fit poorly, and models that…
A robust data scaling algorithm to improve classification accuracies in biomedical data.
Cao, Xi Hang; Stojkovic, Ivan; Obradovic, Zoran
2016-09-09
Machine learning models have been adapted in biomedical research and practice for knowledge discovery and decision support. While mainstream biomedical informatics research focuses on developing more accurate models, the importance of data preprocessing draws less attention. We propose the Generalized Logistic (GL) algorithm that scales data uniformly to an appropriate interval by learning a generalized logistic function to fit the empirical cumulative distribution function of the data. The GL algorithm is simple yet effective; it is intrinsically robust to outliers, so it is particularly suitable for diagnostic/classification models in clinical/medical applications where the number of samples is usually small; it scales the data in a nonlinear fashion, which leads to potential improvement in accuracy. To evaluate the effectiveness of the proposed algorithm, we conducted experiments on 16 binary classification tasks with different variable types, covering a wide range of applications. The resultant performance in terms of area under the receiver operating characteristic curve (AUROC) and percentage of correct classification showed that models learned using data scaled by the GL algorithm outperform the ones using data scaled by the Min-max and the Z-score algorithms, which are the most commonly used data scaling algorithms. The proposed GL algorithm is simple and effective. It is robust to outliers, so no additional denoising or outlier detection step is needed in data preprocessing. Empirical results also show that models learned from data scaled by the GL algorithm have higher accuracy than those using the commonly used data scaling algorithms.
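A minimal sketch of the idea, assuming a plain two-parameter logistic CDF fitted by robust moment matching (median and a crude interquartile range) rather than the paper's full generalized-logistic fit:

```python
import math
import statistics

def logistic_scale(data):
    """Scale data into (0, 1] by pushing it through a logistic CDF fitted to
    the empirical distribution. Robust moment matching (median and a crude
    IQR) stands in here for the paper's generalized-logistic fit."""
    xs = sorted(data)
    n = len(xs)
    m = statistics.median(xs)
    iqr = xs[(3 * n) // 4] - xs[n // 4]  # crude quartile spread
    s = iqr / (2.0 * math.log(3.0))      # for a logistic CDF, IQR = 2*s*ln(3)
    if s == 0.0:
        s = 1.0                          # degenerate (constant) data
    return [1.0 / (1.0 + math.exp(-(x - m) / s)) for x in data]

# The outlier 100 is mapped near 1 without crushing the remaining spread:
scaled = logistic_scale([1, 2, 3, 4, 100])
```

Because the location and scale come from the median and IQR, a single extreme value barely moves the fit, which is the sense in which this family of scalings is intrinsically robust to outliers; Min-max scaling, by contrast, would compress the first four points into a narrow band.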
NASA Astrophysics Data System (ADS)
Liu, Ruipeng; Di Matteo, T.; Lux, Thomas
2007-09-01
In this paper, we consider daily financial data of a collection of different stock market indices, exchange rates, and interest rates, and we analyze their multi-scaling properties by estimating a simple specification of the Markov-switching multifractal (MSM) model. In order to see how well the estimated model captures the temporal dependence of the data, we estimate and compare the scaling exponents H(q) (for q=1,2) for both empirical data and simulated data of the MSM model. In most cases the multifractal model appears to generate ‘apparent’ long memory in agreement with the empirical scaling laws.
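The scaling exponents H(q) can be estimated from the q-th order structure function, E|x(t+d) - x(t)|^q ~ d^(q H(q)), by log-log regression over a set of lags. A sketch of that estimator (not the authors' code); for an ordinary random walk with uncorrelated increments the estimate should sit near H = 0.5:

```python
import math, random

def hurst(xs, q=2.0, lags=(1, 2, 4, 8, 16, 32)):
    """Estimate the generalized Hurst exponent H(q) from the scaling of the
    q-th order structure function via least squares in log-log coordinates."""
    pts = []
    for d in lags:
        m = sum(abs(xs[i + d] - xs[i]) ** q
                for i in range(len(xs) - d)) / (len(xs) - d)
        pts.append((math.log(d), math.log(m)))
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return slope / q  # slope of log S(q, d) vs log d equals q * H(q)

# Sanity check on a plain Gaussian random walk:
rng = random.Random(0)
walk = [0.0]
for _ in range(20000):
    walk.append(walk[-1] + rng.gauss(0.0, 1.0))
h2 = hurst(walk)
```

Multifractality shows up when H(q) varies with q; for the monofractal random walk above, H(1) and H(2) coincide up to sampling noise.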
Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity.
Li, Harbin; McNulty, Steven G
2007-10-01
Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BC(w); 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BC(w) base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL.
NASA Astrophysics Data System (ADS)
Lenderink, Geert; Attema, Jisk
2015-08-01
Scenarios of future changes in small scale precipitation extremes for the Netherlands are presented. These scenarios are based on a new approach whereby changes in precipitation extremes are set proportional to the change in water vapor amount near the surface as measured by the 2m dew point temperature. This simple scaling framework allows the integration of information derived from: (i) observations, (ii) a new unprecedentedly large 16 member ensemble of simulations with the regional climate model RACMO2 driven by EC-Earth, and (iii) short term integrations with a non-hydrostatic model Harmonie. Scaling constants are based on subjective weighting (expert judgement) of the three different information sources taking also into account previously published work. In all scenarios local precipitation extremes increase with warming, yet with broad uncertainty ranges expressing incomplete knowledge of how convective clouds and the atmospheric mesoscale circulation will react to climate change.
Population Genetics of Three Dimensional Range Expansions
NASA Astrophysics Data System (ADS)
Lavrentovich, Maxim; Nelson, David
2014-03-01
We develop a simple model of genetic diversity in growing spherical cell clusters, where the growth is confined to the cluster surface. This kind of growth occurs in cells growing in soft agar, and can also serve as a simple model of avascular tumors. Mutation-selection balance in these radial expansions is strongly influenced by scaling near a neutral, voter model critical point and by the inflating frontier. We develop a scaling theory to describe how the dynamics of mutation-selection balance is cut off by inflation. Genetic drift, i.e., local fluctuations in the genetic diversity, also plays an important role, and can lead to the extinction of even selectively advantageous strains. We calculate this extinction probability, taking into account the effect of rough population frontiers.
A simple phenomenological model for grain clustering in turbulence
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2016-01-01
We propose a simple model for density fluctuations of aerodynamic grains, embedded in a turbulent, gravitating gas disc. The model combines a calculation for the behaviour of a group of grains encountering a single turbulent eddy, with a hierarchical approximation of the eddy statistics. This makes analytic predictions for a range of quantities including: distributions of grain densities, power spectra and correlation functions of fluctuations, and maximum grain densities reached. We predict how these scale as a function of grain drag time t_s, spatial scale, grain-to-gas mass ratio ρ̃, strength of turbulence α, and detailed disc properties. We test these against numerical simulations with various turbulence-driving mechanisms. The simulations agree well with the predictions, spanning t_s Ω ∼ 10^-4 to 10, ρ̃ ∼ 0 to 3, and α ∼ 10^-10 to 10^-2. Results from 'turbulent concentration' simulations and laboratory experiments are also predicted as a special case. Vortices on a wide range of scales disperse and concentrate grains hierarchically. For small grains this is most efficient in eddies with turnover time comparable to the stopping time, but fluctuations are also damped by local gas-grain drift. For large grains, shear and gravity lead to a much broader range of eddy scales driving fluctuations, with most power on the largest scales. The grain density distribution has a log-Poisson shape, with fluctuations for large grains up to factors ≳1000. We provide simple analytic expressions for the predictions, and discuss implications for planetesimal formation, grain growth, and the structure of turbulence.
A simple, analytical, axisymmetric microburst model for downdraft estimation
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.
1991-01-01
A simple analytical microburst model was developed for use in estimating vertical winds from horizontal wind measurements. It is an axisymmetric, steady state model that uses shaping functions to satisfy the mass continuity equation and simulate boundary layer effects. The model is defined through four model variables: the radius and altitude of the maximum horizontal wind, a shaping function variable, and a scale factor. The model closely agrees with a high fidelity analytical model and measured data, particularly in the radial direction and at lower altitudes. At higher altitudes, the model tends to overestimate the wind magnitude relative to the measured data.
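The role of mass continuity can be illustrated numerically: given any axisymmetric horizontal outflow profile u(r, z), the vertical wind follows by integrating the radial divergence, since (1/r) ∂(r u)/∂r + ∂w/∂z = 0 for incompressible flow. The shaping functions below are illustrative placeholders, not Vicroy's actual four-variable model:

```python
import math

def u(r, z, R=1000.0, Z=300.0, U=20.0):
    """Hypothetical radial outflow: peaks at r = R, vanishes at the ground."""
    return U * (r / R) * math.exp(1.0 - r / R) * (1.0 - math.exp(-z / Z))

def w(r, z, R=1000.0, Z=300.0, U=20.0, nz=2000):
    """Vertical wind from the continuity equation, by midpoint-rule
    integration of the radial divergence in z."""
    dz = z / nz
    total = 0.0
    for k in range(nz):
        zk = (k + 0.5) * dz
        dr = 1e-3  # finite-difference step for d(r*u)/dr
        div = ((r + dr) * u(r + dr, zk, R, Z, U)
               - (r - dr) * u(r - dr, zk, R, Z, U)) / (2.0 * dr * r)
        total += div * dz
    return -total  # positive outflow divergence implies a downdraft
```

Inside the radius of maximum wind the integrated divergence is positive, so w < 0 (a downdraft), which is the qualitative behavior such shaping-function models are built to reproduce.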
Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide
2014-06-01
The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have a potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in Arabidopsis thaliana. The result provides confirmation that the mathematical model constructed satisfactorily agrees with the time-series datasets of seven metabolite concentrations.
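The S-system form that PENDISC non-dimensionalizes pairs a power-law production term with a power-law degradation term for each metabolite. A minimal explicit-Euler sketch with a hypothetical one-metabolite network (not the paper's pathway models):

```python
def s_system_step(x, alpha, g, beta, h, dt):
    """One explicit-Euler step of an S-system:
    dX_i/dt = alpha_i * prod_j X_j**g_ij - beta_i * prod_j X_j**h_ij."""
    def prod(vec, exps):
        p = 1.0
        for v, e in zip(vec, exps):
            p *= v ** e
        return p
    return [xi + dt * (a * prod(x, gi) - b * prod(x, hi))
            for xi, a, gi, b, hi in zip(x, alpha, g, beta, h)]

# Hypothetical one-metabolite system: dX/dt = 1 - X relaxes to X = 1.
x = [0.5]
for _ in range(2000):
    x = s_system_step(x, alpha=[1.0], g=[[0.0]], beta=[1.0], h=[[1.0]], dt=0.01)
```

Parameter estimation in this framework means fitting the rate constants (alpha, beta) and kinetic orders (g, h) to time-series metabolite data, which is exactly the step the non-dimensionalization and constraints of PENDISC are designed to simplify.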
A simple dynamic subgrid-scale model for LES of particle-laden turbulence
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz
2017-04-01
In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
NASA Technical Reports Server (NTRS)
Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius
1998-01-01
This is a follow-up on the preceding presentation by Crosson and Schamschula. The grid size for remote microwave measurements is much coarser than the hydrological model computational grids. To validate the hydrological models with measurements we propose mechanisms to disaggregate the microwave measurements to allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can give continuing estimates of the small-scale features by correcting the simple 0th-order estimate of each small-scale model with each large-scale measurement, using a straightforward method based on Kalman filtering.
Xueri Dang; Chun-Ta Lai; David Y. Hollinger; Andrew J. Schauer; Jingfeng Xiao; J. William Munger; Clenton Owensby; James R. Ehleringer
2011-01-01
We evaluated an idealized boundary layer (BL) model with simple parameterizations using vertical transport information from community model outputs (NCAR/NCEP Reanalysis and ECMWF Interim Analysis) to estimate regional-scale net CO2 fluxes from 2002 to 2007 at three forest and one grassland flux sites in the United States. The BL modeling...
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
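In one dimension the GLA construction is compact: the scale factor β(x) = f_refined(x)/f_crude(x) is linearized about the point x0 where both models have been evaluated, so only one refined-model evaluation (plus derivatives) is needed. A sketch under that 1-D simplification:

```python
def gla(f_hi, f_lo, df_hi, df_lo, x0):
    """Global-local approximation (1-D sketch): approximate the refined model
    f_hi by f_lo(x) * beta(x), where the scale factor beta = f_hi / f_lo is
    linearized about x0 using both models' values and derivatives."""
    b0 = f_hi(x0) / f_lo(x0)
    db = (df_hi(x0) - b0 * df_lo(x0)) / f_lo(x0)  # beta'(x0) by quotient rule
    return lambda x: f_lo(x) * (b0 + db * (x - x0))

# Toy check: refined model x^2, crude model x. Here beta(x) = x is exactly
# linear, so the GLA reproduces the refined model everywhere.
approx = gla(lambda x: x * x, lambda x: x,
             lambda x: 2.0 * x, lambda x: 1.0, 1.0)
```

The toy case is deliberately favorable: whenever the ratio of the two models is close to linear over the region of interest, the linearly varying scale factor extends the trust region well beyond that of a constant scaling factor.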
Scale Interactions in the Tropics from a Simple Multi-Cloud Model
NASA Astrophysics Data System (ADS)
Niu, X.; Biello, J. A.
2017-12-01
Our lack of a complete understanding of the interaction between moist convection and equatorial waves remains an impediment to the numerical simulation of large-scale organization, such as the Madden-Julian Oscillation (MJO). The aim of this project is to understand interactions across spatial scales in the tropics, using a simplified framework for scale interactions together with a simplified description of the basic features of moist convection. Using multiple asymptotic scales, Biello and Majda[1] derived a multi-scale model of moist tropical dynamics (IMMD[1]), which separates three regimes: the planetary scale climatology, the synoptic scale waves, and the planetary scale anomalies. The scales and strength of the observed MJO would place it in the regime of planetary scale anomalies, which are themselves forced by non-linear upscale fluxes from the synoptic scale waves. In order to close this model and determine whether it provides a self-consistent theory of the MJO, a model for diabatic heating due to moist convection must be implemented along with the IMMD. The multi-cloud parameterization is a model proposed by Khouider and Majda[2] to describe the three basic cloud types (congestus, deep and stratiform) that are most responsible for tropical diabatic heating. We implement a simplified version of the multi-cloud model that is based on results derived from large eddy simulations of convection [3]. We present this simplified multi-cloud model and show results of numerical experiments beginning with a variety of convective forcing states. Preliminary results on upscale fluxes, from synoptic scales to planetary scale anomalies, will be presented. [1] Biello J A, Majda A J. Intraseasonal multi-scale moist dynamics of the tropical atmosphere[J]. Communications in Mathematical Sciences, 2010, 8(2): 519-540. [2] Khouider B, Majda A J. A simple multicloud parameterization for convectively coupled tropical waves.
Part I: Linear analysis[J]. Journal of the atmospheric sciences, 2006, 63(4): 1308-1323. [3] Dorrestijn J, Crommelin D T, Biello J A, et al. A data-driven multi-cloud model for stochastic parametrization of deep convection[J]. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 2013, 371(1991): 20120374.
Evaporation estimation of rift valley lakes: comparison of models.
Melesse, Assefa M; Abtew, Wossenu; Dessalegne, Tibebe
2009-01-01
Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the World. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information for ET estimation. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. A remote sensing approach can also be applied to large areas where meteorological data are not available and field scale data collection is costly, time consuming and difficult. In areas like the Rift Valley regions of Ethiopia, the applicability of the Simple Method (Abtew Method) of lake evaporation estimation and of a surface energy balance approach using remote sensing was studied. The Simple Method and remote sensing-based lake evaporation estimates were compared to the Penman, Energy balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate a good correspondence of the model outputs to those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large scale applications to understand the spatial variation of the latent heat flux.
On the context-dependent scaling of consumer feeding rates.
Barrios-O'Neill, Daniel; Kelly, Ruth; Dick, Jaimie T A; Ricciardi, Anthony; MacIsaac, Hugh J; Emmerson, Mark C
2016-06-01
The stability of consumer-resource systems can depend on the form of feeding interactions (i.e. functional responses). Size-based models predict interactions - and thus stability - based on consumer-resource size ratios. However, little is known about how interaction contexts (e.g. simple or complex habitats) might alter scaling relationships. Addressing this, we experimentally measured interactions between a large size range of aquatic predators (4-6400 mg over 1347 feeding trials) and an invasive prey that transitions among habitats: from the water column (3D interactions) to simple and complex benthic substrates (2D interactions). Simple and complex substrates mediated successive reductions in capture rates - particularly around the unimodal optimum - and promoted prey population stability in model simulations. Many real consumer-resource systems transition between 2D and 3D interactions, and along complexity gradients. Thus, Context-Dependent Scaling (CDS) of feeding interactions could represent an unrecognised aspect of food webs, and quantifying the extent of CDS might enhance predictive ecology. © The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
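A common way to formalize such feeding interactions is a Holling type II functional response whose attack rate varies unimodally with the consumer:resource size ratio and is damped by habitat complexity. The functional forms and constants below are illustrative assumptions for exposition, not the fitted models of the study:

```python
import math

def capture_rate(a, h, n):
    """Holling type II functional response: prey captured per predator per
    unit time; saturates at 1/h (handling-time limit) for large density n."""
    return a * n / (1.0 + a * h * n)

def attack_rate(mass_ratio, a_max=1.0, opt=100.0, width=1.0, habitat_factor=1.0):
    """Unimodal (log-normal shaped) dependence of attack rate on the
    consumer:resource mass ratio, damped by habitat complexity.
    All shapes and constants here are illustrative assumptions."""
    x = math.log(mass_ratio / opt)
    return a_max * math.exp(-x * x / (2.0 * width ** 2)) * habitat_factor

# Habitat complexity lowers capture rates most near the unimodal optimum:
a_open = attack_rate(100.0)                         # 3D water-column interaction
a_complex = attack_rate(100.0, habitat_factor=0.5)  # complex benthic substrate
```

In this sketch, Context-Dependent Scaling amounts to habitat_factor (and potentially opt and width) changing as the interaction moves from 3D water-column encounters to 2D substrates, which reduces capture rates around the optimum exactly where the abstract reports the strongest effect.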
A simple model of low-scale direct gauge mediation
NASA Astrophysics Data System (ADS)
Csáki, Csaba; Shirman, Yuri; Terning, John
2007-05-01
We construct a calculable model of low-energy direct gauge mediation making use of the metastable supersymmetry breaking vacua recently discovered by Intriligator, Seiberg and Shih. The standard model gauge group is a subgroup of the global symmetries of the SUSY breaking sector and messengers play an essential role in dynamical SUSY breaking: they are composites of a confining gauge theory, and the holomorphic scalar messenger mass appears as a consequence of the confining dynamics. The SUSY breaking scale is around 100 TeV nevertheless the model is calculable. The minimal non-renormalizable coupling of the Higgs to the DSB sector leads in a simple way to a μ-term, while the B-term arises at two-loop order resulting in a moderately large tan β. A novel feature of this class of models is that some particles from the dynamical SUSY breaking sector may be accessible at the LHC.
Bhaumik, Basabi; Mathur, Mona
2003-01-01
We present a model for development of orientation selectivity in layer IV simple cells. Receptive field (RF) development in the model, is determined by diffusive cooperation and resource limited competition guided axonal growth and retraction in geniculocortical pathway. The simulated cortical RFs resemble experimental RFs. The receptive field model is incorporated in a three-layer visual pathway model consisting of retina, LGN and cortex. We have studied the effect of activity dependent synaptic scaling on orientation tuning of cortical cells. The mean value of hwhh (half width at half the height of maximum response) in simulated cortical cells is 58 degrees when we consider only the linear excitatory contribution from LGN. We observe a mean improvement of 22.8 degrees in tuning response due to the non-linear spiking mechanisms that include effects of threshold voltage and synaptic scaling factor.
SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.
Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi
2010-01-01
Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.
Scaling laws and fluctuations in the statistics of word frequencies
NASA Astrophysics Data System (ADS)
Gerlach, Martin; Altmann, Eduardo G.
2014-11-01
In this paper, we combine statistical analysis of written texts and simple stochastic models to explain the appearance of scaling laws in the statistics of word frequencies. The average vocabulary of an ensemble of fixed-length texts is known to scale sublinearly with the total number of words (Heaps’ law). Analyzing the fluctuations around this average in three large databases (Google-ngram, English Wikipedia, and a collection of scientific articles), we find that the standard deviation scales linearly with the average (Taylor's law), in contrast to the prediction of decaying fluctuations obtained using simple sampling arguments. We explain both scaling laws (Heaps’ and Taylor) by modeling the usage of words using a Poisson process with a fat-tailed distribution of word frequencies (Zipf's law) and topic-dependent frequencies of individual words (as in topic models). Considering topical variations leads to quenched averages, turns the vocabulary size into a non-self-averaging quantity, and explains the empirical observations. For the numerous practical applications relying on estimations of vocabulary size, our results show that uncertainties remain large even for long texts. We show how to account for these uncertainties in measurements of lexical richness of texts with different lengths.
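The Poisson-sampling argument behind Heaps' law is easy to reproduce: with Zipfian word frequencies p_i, the expected vocabulary of an N-word text is the sum over types of 1 - exp(-N p_i), which grows sublinearly in N. A sketch of that baseline (the topic-dependent frequencies that drive the paper's Taylor-law result are omitted):

```python
import math

def zipf_probs(n_types, alpha=1.0):
    """Zipf's law: frequency of the i-th most common word ~ 1 / i**alpha."""
    w = [1.0 / (i ** alpha) for i in range(1, n_types + 1)]
    s = sum(w)
    return [x / s for x in w]

def expected_vocab(n_words, probs):
    """Under Poisson sampling, type i appears at least once with probability
    1 - exp(-N * p_i); summing over types gives the expected vocabulary."""
    return sum(1.0 - math.exp(-n_words * p) for p in probs)

probs = zipf_probs(10000)
v1 = expected_vocab(10000, probs)
v2 = expected_vocab(20000, probs)  # doubling the text does NOT double the vocabulary
```

Because expected_vocab is concave in N, V(2N) < 2 V(N), which is the sublinear (Heaps) growth; in this quenched-average-free baseline the fluctuations around V(N) decay, so reproducing Taylor's law requires the topic-dependent frequencies described above.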
Testing particle filters on convective scale dynamics
NASA Astrophysics Data System (ADS)
Haslehner, Mylene; Craig, George. C.; Janjic, Tijana
2014-05-01
Particle filters have been developed in recent years to deal with the highly nonlinear dynamics and non-Gaussian error statistics that also characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (van Leeuwen, 2011) for convective-scale data assimilation. The method is tested in an idealized setting, on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete-state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging the variables shows significant improvement compared to the ensemble Kalman filter and the sequential importance resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig, 2013), which contains more realistic dynamical characteristics of convective-scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. Van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical fluid dynamics. Computers and Fluids, doi:10.1016/j.compfluid.2010.11.011, 2011. Würsch, M. and G. C. Craig, 2013: A simple dynamical model of cumulus convection for data assimilation research, submitted to Met. Zeitschrift.
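The nudging-plus-resampling idea above can be illustrated with a minimal twin experiment. This is a hedged sketch: the scalar dynamics, noise levels, and the simple relaxation-toward-the-observation nudging term are illustrative stand-ins, not the cloud models or the efficient proposal density of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x):
    # Toy nonlinear stochastic dynamics standing in for the cloud model.
    x = np.asarray(x, dtype=float)
    return x + 0.1 * np.sin(x) + 0.5 * rng.standard_normal(x.shape)

def sir_filter(obs, n_particles=100, obs_sigma=1.0, nudge=0.0):
    """Sequential importance resampling; nudge > 0 relaxes each particle
    toward the observation before weighting (a crude stand-in for the
    efficient proposal density)."""
    particles = rng.standard_normal(n_particles)
    estimates = []
    for y in obs:
        particles = step(particles)
        particles = particles + nudge * (y - particles)  # nudging term
        w = np.exp(-0.5 * ((y - particles) / obs_sigma) ** 2) + 1e-300
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # Multinomial resampling proportional to the weights.
        particles = particles[rng.choice(n_particles, size=n_particles, p=w)]
    return np.array(estimates)

# Twin experiment: a 'true' trajectory and noisy observations of it.
truth = np.zeros(50)
for t in range(1, 50):
    truth[t] = step(truth[t - 1])
obs = truth + rng.standard_normal(truth.size)

rmse_plain = np.sqrt(np.mean((sir_filter(obs) - truth) ** 2))
rmse_nudged = np.sqrt(np.mean((sir_filter(obs, nudge=0.3) - truth) ** 2))
print(rmse_plain, rmse_nudged)
```

Comparing the two RMS errors against the known truth is the same success measure the abstract uses, scaled down to a toy problem.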
Outbreak statistics and scaling laws for externally driven epidemics.
Singh, Sarabjeet; Myers, Christopher R
2014-04-01
Power-law scalings are ubiquitous in physical phenomena undergoing a continuous phase transition. The classic susceptible-infectious-recovered (SIR) model of epidemics is one such example, where the scaling behavior near a critical point has been studied extensively. In this system the distribution of outbreak sizes scales as P(n) ~ n^(-3/2) at the critical point as the system size N becomes infinite. The finite-size scaling laws for the outbreak size and duration are also well understood and characterized. In this work, we report scaling laws for a model with SIR structure coupled with a constant force of infection per susceptible, akin to a "reservoir forcing". We find that the statistics of outbreaks in this system fundamentally differ from those in a simple SIR model. Instead of fixed exponents, all scaling laws exhibit tunable exponents parameterized by the dimensionless rate of external forcing. As the external driving rate approaches a critical value, the scale of the average outbreak size converges to that of the maximal size, and above the critical point, the scaling laws bifurcate into two regimes. Whereas a simple SIR process can only exhibit outbreaks of size O(N^(1/3)) and O(N) depending on whether the system is at or above the epidemic threshold, a driven SIR process can exhibit a richer spectrum of outbreak sizes that scale as O(N^ξ), where ξ ∈ (0,1]\{2/3}, and O((N/ln N)^(2/3)) at the multicritical point.
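The driven process can be simulated directly. Below is a minimal Gillespie-style sketch of final outbreak sizes for an SIR model with an added constant per-susceptible external infection rate; the parameter values are illustrative, not those of the paper, and an outbreak is taken to end when the infectious count returns to zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def outbreak_size(N, beta, gamma, lam):
    """Final size (total ever infected) of one outbreak, starting from a
    single case, in an SIR model with internal transmission rate beta,
    recovery rate gamma, and external per-susceptible forcing rate lam."""
    S, I = N - 1, 1
    while I > 0:
        rate_inf = beta * S * I / N + lam * S  # internal + reservoir infection
        rate_rec = gamma * I
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            S -= 1; I += 1   # infection event
        else:
            I -= 1           # recovery event
    return N - S             # everyone who left the susceptible class

# At the epidemic threshold (beta = gamma), with and without driving.
sizes = [outbreak_size(500, beta=1.0, gamma=1.0, lam=0.0) for _ in range(300)]
driven = [outbreak_size(500, beta=1.0, gamma=1.0, lam=0.05) for _ in range(300)]
print(np.mean(sizes), np.mean(driven))
```

Even a modest forcing rate visibly inflates the typical outbreak size relative to the undriven critical SIR process, in line with the tunable-exponent picture described above.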
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS's Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. While better accuracy is often assumed with more...
A simple predictive model for the structure of the oceanic pycnocline
Gnanadesikan
1999-03-26
A simple theory for the large-scale oceanic circulation is developed, relating pycnocline depth, Northern Hemisphere sinking, and low-latitude upwelling to pycnocline diffusivity and Southern Ocean winds and eddies. The results show that Southern Ocean processes help maintain the global ocean structure and that pycnocline diffusion controls low-latitude upwelling.
USDA-ARS's Scientific Manuscript database
Pasta is a simple food made from water and durum wheat (Triticum turgidum subsp. durum) semolina. As pasta increases in popularity, studies have endeavored to analyze the attributes that contribute to high quality pasta. Despite being a simple food, the laboratory scale analysis of pasta quality is ...
Convective Detrainment and Control of the Tropical Water Vapor Distribution
NASA Astrophysics Data System (ADS)
Kursinski, E. R.; Rind, D.
2006-12-01
Sherwood et al. (2006) developed a simple power-law model describing the relative humidity distribution in the tropical free troposphere, where the power-law exponent is the ratio of a drying time scale (tied to subsidence rates) and a moistening time, which is the average time between convective moistening events whose temporal distribution is described as a Poisson distribution. Sherwood et al. showed that the relative humidity distribution observed by GPS occultations and MLS is indeed close to a power law, approximately consistent with the simple model's prediction. Here we modify this simple model to be in terms of vertical length scales rather than time scales, in a manner that we think more correctly matches the model predictions to the observations. The subsidence is now in terms of the vertical distance the air mass has descended since it last detrained from a convective plume. The moisture source term becomes a profile of convective detrainment flux versus altitude. The vertical profile of the convective detrainment flux is deduced from the observed distribution of the specific humidity at each altitude combined with sinking rates estimated from radiative cooling. The resulting free-tropospheric detrainment profile increases with altitude above 3 km somewhat like an exponential profile, which explains the approximate power-law behavior observed by Sherwood et al. The observations also reveal a seasonal variation in the detrainment profile, reflecting changes in the convective behavior expected by some based on observed seasonal changes in the vertical structure of convective regions. The simple model results will be compared with the moisture control mechanisms in a GCM with many additional mechanisms, the GISS climate model, as described in Rind (2006). References: Rind, D., 2006: Water-vapor feedback. In Frontiers of Climate Modeling, J. T. Kiehl and V. Ramanathan (eds), Cambridge University Press [ISBN-13 978-0-521-79132-8], 251-284. Sherwood, S., E. R. Kursinski and W. Read, A distribution law for free-tropospheric relative humidity, J. Clim., in press, 2006.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimura, Fujio; Kuwagata, Tuneo
1995-02-01
The thermally induced local circulation over a periodic valley is simulated by a two-dimensional numerical model that does not include condensational processes. During the daytime of a clear, calm day, heat is transported from the mountainous region to the valley area by anabatic wind and its return flow. The specific humidity is, however, transported in an inverse manner. The horizontal exchange rate of sensible heat has a horizontal scale similarity, as long as the horizontal scale is less than a critical width of about 100 km. The sensible heat accumulated in an atmospheric column over an arbitrary point can be estimated by a simple model termed the uniform mixed-layer model (UML). The model assumes that the potential temperature is both vertically and horizontally uniform in the mixed layer, even over the complex terrain. The UML model is valid only when the horizontal scale of the topography is less than the critical width and the maximum difference in the elevation of the topography is less than about 1500 m. Latent heat is accumulated over the mountainous region while the atmosphere becomes dry over the valley area. When the horizontal scale is close to the critical width, the largest amount of humidity is accumulated during the late afternoon over the mountainous region. 18 refs., 15 figs., 1 tab.
HOW GALACTIC ENVIRONMENT REGULATES STAR FORMATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meidt, Sharon E.
2016-02-10
In a new simple model I reconcile two contradictory views on the factors that determine the rate at which molecular clouds form stars (internal structure versus external, environmental influences), providing a unified picture for the regulation of star formation in galaxies. In the presence of external pressure, the pressure gradient set up within a self-gravitating turbulent (isothermal) cloud leads to a non-uniform density distribution. Thus the local environment of a cloud influences its internal structure. In the simple equilibrium model, the fraction of gas at high density in the cloud interior is determined simply by the cloud surface density, which is itself inherited from the pressure in the immediate surroundings. This idea is tested using measurements of the properties of local clouds, which are found to show remarkable agreement with the simple equilibrium model. The model also naturally predicts the star formation relation observed on cloud scales and at the same time provides a mapping between this relation and the closer-to-linear molecular star formation relation measured on larger scales in galaxies. The key is that pressure regulates not only the molecular content of the ISM but also the cloud surface density. I provide a straightforward prescription for the pressure regulation of star formation that can be directly implemented in numerical models. Predictions for the dense gas fraction and star formation efficiency measured on large scales within galaxies are also presented, establishing the basis for a new picture of star formation regulated by galactic environment.
Farrell, Patrick; Sun, Jacob; Gao, Meg; Sun, Hong; Pattara, Ben; Zeiser, Arno; D'Amore, Tony
2012-08-17
A simple approach to the development of an aerobic scaled-down fermentation model is presented to obtain more consistent process performance during the scale-up of recombinant protein manufacture. Using a constant volumetric oxygen mass transfer coefficient (kLa) as the criterion for the scale-down process, the scaled-down model can be "tuned" to match the kLa of any larger-scale target by varying the impeller rotational speed. This approach is demonstrated for a protein vaccine candidate expressed in recombinant Escherichia coli, where process performance is shown to be consistent among 2-L, 20-L, and 200-L scales. An empirical correlation for kLa has also been employed to extrapolate to larger manufacturing scales. Copyright © 2012 Elsevier Ltd. All rights reserved.
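The "tuning" step can be sketched numerically: assume a power-law correlation for kLa in impeller speed and aeration rate, then invert it for the bench-scale impeller speed that reproduces a larger-scale target. The constants and exponents below are hypothetical placeholders, not the empirical correlation fitted in the study.

```python
def kla(n_rpm, vvm, a=0.02, alpha=2.0, beta=0.6):
    """Hypothetical power-law correlation kLa = a * N^alpha * vvm^beta.
    The constants a, alpha, beta are illustrative, not the paper's fit."""
    return a * n_rpm ** alpha * vvm ** beta

def impeller_speed_for_target(kla_target, vvm, a=0.02, alpha=2.0, beta=0.6):
    # Invert the correlation for the bench-scale impeller speed that
    # matches the kLa measured at the larger-scale target.
    return (kla_target / (a * vvm ** beta)) ** (1.0 / alpha)

# Tune a bench reactor to a hypothetical 200-L target of kLa = 150 h^-1 at 1 vvm.
n_rpm = impeller_speed_for_target(150.0, 1.0)
print(round(kla(n_rpm, 1.0), 6))  # → 150.0
```

Because only the impeller speed is varied at fixed aeration, this mirrors the single-knob tuning strategy described in the abstract.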
Observation and modelling of urban dew
NASA Astrophysics Data System (ADS)
Richards, Katrina
Despite its relevance to many aspects of urban climate and to several practical questions, urban dew has largely been ignored. Here, simple observations, an out-of-doors scale model, and numerical simulation are used to investigate patterns of dewfall and surface moisture (dew + guttation) in urban environments. Observations and modelling were undertaken in Vancouver, B.C., primarily during the summers of 1993 and 1996. Surveys at several scales (0.02-25 km) show that the main controls on dew are weather, location and site configuration (geometry and surface materials). Weather effects are discussed using an empirical factor, FW. Maximum dew accumulation (up to ~0.2 mm per night) is seen on nights with moist air and high FW, i.e., cloudless conditions with light winds. Favoured sites are those with a high sky view factor (ψsky) and surfaces which cool rapidly after sunset, e.g., grass and well-insulated roofs. A 1/8-scale model is designed, constructed, and run at an out-of-doors site to study dew patterns in an urban residential landscape which consists of house lots, a street and an open grassed park. The Internal Thermal Mass (ITM) approach is used to scale the thermal inertia of buildings. The model is validated using data from full-scale sites in Vancouver. Patterns in the model agree with those seen at the full scale, i.e., dew distribution is governed by weather, site geometry and substrate conditions. Correlation is shown between ψsky and surface moisture accumulation. The feasibility of using a numerical model to simulate urban dew is investigated using a modified version of a rural dew model. Results for simple isolated surfaces (a deciduous tree leaf and an asphalt shingle roof) show promise, especially for built surfaces.
Simple spatial scaling rules behind complex cities.
Li, Ruiqi; Dong, Lei; Zhang, Jiang; Wang, Xinran; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene
2017-11-28
Although most wealth and innovation have been the result of human interaction and cooperation, we are not yet able to quantitatively predict the spatial distributions of three main elements of cities: population, roads, and socioeconomic interactions. By a simple model mainly based on spatial attraction and matching growth mechanisms, we reveal that the spatial scaling rules of these three elements are in a consistent framework, which allows us to use any single observation to infer the others. All numerical and theoretical results are consistent with empirical data from ten representative cities. In addition, our model can also provide a general explanation of the origins of the universal super- and sub-linear aggregate scaling laws and accurately predict kilometre-level socioeconomic activity. Our work opens a new avenue for uncovering the evolution of cities in terms of the interplay among urban elements, and it has a broad range of applications.
Role of large-scale velocity fluctuations in a two-vortex kinematic dynamo.
Kaplan, E J; Brown, B P; Rahbarnia, K; Forest, C B
2012-06-01
This paper presents an analysis of the Dudley-James two-vortex flow, which inspired several laboratory-scale liquid-metal experiments, in order to better demonstrate its relation to astrophysical dynamos. A coordinate transformation splits the flow into components that are axisymmetric and nonaxisymmetric relative to the induced magnetic dipole moment. The reformulation gives the flow the same dynamo ingredients as are present in more complicated convection-driven dynamo simulations. These ingredients are currents driven by the mean flow and currents driven by correlations between fluctuations in the flow and fluctuations in the magnetic field. The simple model allows us to isolate the dynamics of the growing eigenvector and trace them back to individual three-wave couplings between the magnetic field and the flow. This simple model demonstrates the necessity of poloidal advection in sustaining the dynamo and points to the effect of large-scale flow fluctuations in exciting a dynamo magnetic field.
Collision geometry scaling of Au+Au pseudorapidity density from √(s_NN) = 19.6 to 200 GeV
NASA Astrophysics Data System (ADS)
Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Budzanowski, A.; Busza, W.; Carroll, A.; Decowski, M. P.; García, E.; George, N.; Gulbrandsen, K.; Gushue, S.; Halliwell, C.; Hamblen, J.; Heintzelman, G. A.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Hołyński, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Katzy, J.; Khan, N.; Kucewicz, W.; Kulinich, P.; Kuo, C. M.; Lin, W. T.; Manly, S.; McLeod, D.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Park, I. C.; Pernegger, H.; Reed, C.; Remsberg, L. P.; Reuter, M.; Roland, C.; Roland, G.; Rosenberg, L.; Sagerer, J.; Sarin, P.; Sawicki, P.; Skulski, W.; Steinberg, P.; Stephans, G. S.; Sukhanov, A.; Tonjes, M. B.; Tang, J.-L.; Trzupek, A.; Vale, C.; van Nieuwenhuizen, G. J.; Verdier, R.; Wolfs, F. L.; Wosiek, B.; Woźniak, K.; Wuosmaa, A. H.; Wysłouch, B.
2004-08-01
The centrality dependence of the midrapidity charged particle multiplicity in Au+Au heavy-ion collisions at √(s_NN) = 19.6 and 200 GeV is presented. Within a simple model, the fraction of hard (scaling with the number of binary collisions) to soft (scaling with the number of participant pairs) interactions is consistent with a value of x = 0.13 ± 0.01 (stat) ± 0.05 (syst) at both energies. The experimental results at both energies, scaled by inelastic p(p̄)+p collision data, agree within systematic errors. The ratio of the data was found not to depend on centrality over the studied range and yields a simple linear scale factor of R_(200/19.6) = 2.03 ± 0.02 (stat) ± 0.05 (syst).
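The two-component model in the abstract can be written as dN_ch/dη = n_pp[(1 − x)·N_part/2 + x·N_coll]. A minimal sketch follows; the n_pp, N_part, and N_coll inputs are illustrative values, not the measured ones.

```python
def midrapidity_density(n_pp, npart, ncoll, x=0.13):
    """Two-component model: the soft term scales with participant pairs
    (Npart / 2), the hard term with binary collisions (Ncoll); x is the
    hard fraction."""
    return n_pp * ((1.0 - x) * npart / 2.0 + x * ncoll)

# Illustrative central-collision inputs: n_pp = 2.5, Npart = 350, Ncoll = 1000.
print(round(midrapidity_density(2.5, 350, 1000, x=0.13), 3))  # → 705.625
```

With x fixed at the fitted value 0.13, varying N_part and N_coll across centrality classes traces out the centrality dependence the abstract describes.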
Bridging the scales in a eulerian air quality model to assess megacity export of pollution
NASA Astrophysics Data System (ADS)
Siour, G.; Colette, A.; Menut, L.; Bessagnet, B.; Coll, I.; Meleux, F.
2013-08-01
In Chemistry Transport Models (CTMs), spatial scale interactions are often represented through off-line coupling between large- and small-scale models. However, those nested configurations cannot account for the impact of the local scale on its surroundings. This issue can be critical in areas exposed to air mass recirculation (sea breeze cells) or around regions with sharp pollutant emission gradients (large cities). Such phenomena can still be captured by means of adaptive gridding, two-way nesting or model nudging, but these approaches remain relatively costly. We present here the development and the results of a simple alternative multi-scale approach making use of a horizontally stretched grid in the Eulerian CTM CHIMERE. This method, called "stretching" or "zooming", consists of introducing local zooms in a single chemistry-transport simulation. It allows the spatial scales from the city (∼1 km resolution) to the continental area (∼50 km resolution) to be bridged online. The CHIMERE model was run over a continental European domain, zoomed over the BeNeLux (Belgium, Netherlands and Luxembourg) area. We demonstrate that, compared with one-way nesting, the zooming method allows the expression of a significant feedback of the refined domain towards the large scale: around the city cluster of BeNeLux, NO2 and O3 scores are improved. NO2 variability around BeNeLux is also better accounted for, and the net primary pollutant flux transported back towards BeNeLux is reduced. Although the results could not be validated for ozone over BeNeLux, we show that the zooming approach provides a simple and immediate way to better represent scale interactions within a CTM, and constitutes a useful tool for apprehending the hot topic of megacities within their continental environment.
Infiltration is important to modeling the overland transport of microorganisms in environmental waters. In watershed- and hillslope-scale models, infiltration is commonly described by simple equations relating infiltration rate to soil saturated conductivity and by empirical para...
A simple inertial model for Neptune's zonal circulation
NASA Technical Reports Server (NTRS)
Allison, Michael; Lumetta, James T.
1990-01-01
Voyager imaging observations of zonal cloud-tracked winds on Neptune revealed a strongly subrotational equatorial jet with a speed approaching 500 m/s and generally decreasing retrograde motion toward the poles. The wind data are interpreted with a speculative but revealingly simple model based on steady gradient flow balance and an assumed global homogenization of potential vorticity for shallow layer motion. The prescribed model flow profile relates the equatorial velocity to the mid-latitude shear, in reasonable agreement with the available data, and implies a global horizontal deformation scale L(D) of about 3000 km.
Correlation lengths in hydrodynamic models of active nematics.
Hemingway, Ewan J; Mishra, Prashant; Marchetti, M Cristina; Fielding, Suzanne M
2016-09-28
We examine the scaling with activity of the emergent length scales that control the nonequilibrium dynamics of an active nematic liquid crystal, using two popular hydrodynamic models that have been employed in previous studies. In both models we find that the chaotic spatio-temporal dynamics in the regime of fully developed active turbulence is controlled by a single active scale determined by the balance of active and elastic stresses, regardless of whether the active stress is extensile or contractile in nature. The observed scaling of the kinetic energy and enstrophy with activity is consistent with our single-length scale argument and simple dimensional analysis. Our results provide a unified understanding of apparent discrepancies in the previous literature and demonstrate that the essential physics is robust to the choice of model.
The dark side of cosmology: dark matter and dark energy.
Spergel, David N
2015-03-06
A simple model with only six parameters (the age of the universe, the density of atoms, the density of matter, the amplitude of the initial fluctuations, the scale dependence of this amplitude, and the epoch of first star formation) fits all of our cosmological data. Although simple, this standard model is strange. The model implies that most of the matter in our Galaxy is in the form of "dark matter," a new type of particle not yet detected in the laboratory, and most of the energy in the universe is in the form of "dark energy," energy associated with empty space. Both dark matter and dark energy require extensions to our current understanding of particle physics or point toward a breakdown of general relativity on cosmological scales. Copyright © 2015, American Association for the Advancement of Science.
Damage and strength of composite materials: Trends, predictions, and challenges
NASA Technical Reports Server (NTRS)
Obrien, T. Kevin
1994-01-01
Research on damage mechanisms and ultimate strength of composite materials relevant to scaling issues will be addressed in this viewgraph presentation. The use of fracture mechanics and Weibull statistics to predict scaling effects for the onset of isolated damage mechanisms will be highlighted. The ability of simple fracture mechanics models to predict trends that are useful in parametric or preliminary designs studies will be reviewed. The limitations of these simple models for complex loading conditions will also be noted. The difficulty in developing generic criteria for the growth of these mechanisms needed in progressive damage models to predict strength will be addressed. A specific example for a problem where failure is a direct consequence of progressive delamination will be explored. A damage threshold/fail-safety concept for addressing composite damage tolerance will be discussed.
Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels
Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J.
2014-01-01
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively “hiding” its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research. PMID:25505378
Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions
NASA Astrophysics Data System (ADS)
Soltani, S. S.; Cvetkovic, V.; Destouni, G.
2017-12-01
The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion on two levels of morphological and macro-dispersion. The study gives new insights into shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully-integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between "dynamic" and "kinematic pathway" models than the zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models.
This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography, and therefore application and further development of the simple "kinematic pathway" approach is promising for their modeling.
Ensemble downscaling in coupled solar wind-magnetosphere modeling for space weather forecasting.
Owens, M J; Horbury, T S; Wicks, R T; McGregor, S L; Savani, N P; Xiong, M
2014-06-01
Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind "noise," which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical "downscaling" of solar wind model results prior to their use as input to a magnetospheric model. As the magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme is tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme. Key points: solar wind models must be downscaled in order to drive magnetospheric models; ensemble downscaling is more effective than deterministic downscaling; the magnetosphere responds nonlinearly to small-scale solar wind fluctuations.
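A toy version of the downscaling step can be sketched as follows. Everything here is synthetic: the "observations" are an invented signal, the "model" is their 8 h running mean, and the noise is simply resampled from the residuals rather than matched to observed PDFs and spectra as in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def downscale(model_series, residual_samples, n_members=10):
    """Build an ensemble by adding small-scale structure to a smooth model
    series, drawing noise from the distribution of observed residuals
    (a very simple stand-in for the paper's parameterization)."""
    draws = rng.choice(residual_samples, size=(n_members, model_series.size))
    return model_series + draws

# Stand-in 'observations': a large-scale signal plus small-scale noise.
t = np.arange(24 * 10)  # hourly points over 10 days
obs = 400 + 50 * np.sin(2 * np.pi * t / 150) + 20 * rng.standard_normal(t.size)

# Approximate a solar wind model by an 8 h running mean of the observations.
kernel = np.ones(8) / 8
model = np.convolve(obs, kernel, mode="same")
residuals = obs - model  # the small-scale 'noise' the model cannot resolve

ensemble = downscale(model, residuals, n_members=20)
print(ensemble.shape)  # → (20, 240)
```

Each ensemble member is one plausible realization of the unresolved small scales, which is what allows the downstream magnetospheric forecast to carry an uncertainty estimate rather than a single deterministic trace.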
Effects of scale-dependent non-Gaussianity on cosmological structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
LoVerde, Marilena; Miller, Amber; Shandera, Sarah
2008-04-15
The detection of primordial non-Gaussianity could provide a powerful means to test various inflationary scenarios. Although scale-invariant non-Gaussianity (often described by the f_NL formalism) is currently best constrained by the CMB, single-field models with changing sound speed can have strongly scale-dependent non-Gaussianity. Such models could evade the CMB constraints but still have important effects at scales responsible for the formation of cosmological objects such as clusters and galaxies. We compute the effect of scale-dependent primordial non-Gaussianity on cluster number counts as a function of redshift, using a simple ansatz to model scale-dependent features. We forecast constraints on these models achievable with forthcoming datasets. We also examine consequences for the galaxy bispectrum. Our results are relevant for the Dirac-Born-Infeld model of brane inflation, where the scale dependence of the non-Gaussianity is directly related to the geometry of the extra dimensions.
A simple model for the evolution of a non-Abelian cosmic string network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cella, G.; Pieroni, M., E-mail: giancarlo.cella@pi.infn.it, E-mail: mauro.pieroni@apc.univ-paris7.fr
2016-06-01
In this paper we present the results of numerical simulations intended to study the behavior of non-Abelian cosmic string networks. In particular, we are interested in discussing the variations in the asymptotic behavior of the system as we vary the number of generators for the topological defects. A simple model which allows for cosmic strings is presented and its lattice discretization is discussed. The evolution of the generated cosmic string networks is then studied for different values of the number of generators for the topological defects. A scaling solution appears to be approached in most cases, and we present an argument to justify the lack of scaling in the residual cases.
Estimation of critical behavior from the density of states in classical statistical models
NASA Astrophysics Data System (ADS)
Malakis, A.; Peratzakis, A.; Fytas, N. G.
2004-12-01
We present a simple and efficient approximation scheme which greatly facilitates the extension of Wang-Landau sampling (or similar techniques) to large systems for the estimation of critical behavior. The method, presented in an algorithmic approach, is based on a very simple idea, familiar in statistical mechanics from the notion of thermodynamic equivalence of ensembles and the central limit theorem. It is illustrated that we can predict with high accuracy the critical part of the energy space and, by using this restricted part, we can extend our simulations to larger systems and improve the accuracy of critical parameters. It is proposed that the extensions of the finite-size critical part of the energy space, determining the specific heat, satisfy a scaling law involving the thermal critical exponent. The method is applied successfully to the estimation of the scaling behavior of the specific heat of both square and simple cubic Ising lattices. The proposed scaling law is verified by estimating the thermal critical exponent from the finite-size behavior of the critical part of the energy space. The density of states of the zero-field Ising model on these lattices is obtained via a multirange Wang-Landau sampling.
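For readers unfamiliar with the underlying sampler, here is a minimal Wang-Landau sketch for the zero-field 2D Ising model on a tiny periodic lattice. The flatness criterion, modification-factor schedule, and lattice size are illustrative choices; the paper's restricted-energy-space scheme is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def wang_landau_ising(L=4, log_f_final=1e-3, flatness=0.8):
    """Estimate log g(E), the log density of states of the zero-field 2D
    Ising model on an L x L periodic lattice, via Wang-Landau sampling."""
    n = L * L
    spins = rng.choice([-1, 1], size=(L, L))

    def energy(s):
        return -int(np.sum(s * np.roll(s, 1, axis=0)) +
                    np.sum(s * np.roll(s, 1, axis=1)))

    idx = lambda e: (e + 2 * n) // 4  # energies lie on {-2n, ..., 2n}, step 4
    log_g = np.zeros(n + 1)
    hist = np.zeros(n + 1)
    e = energy(spins)
    log_f = 1.0
    while log_f > log_f_final:
        for _ in range(10000):
            i, j = rng.integers(L), rng.integers(L)
            de = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                                    spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            # Wang-Landau acceptance: favor rarely visited energy levels.
            if np.log(rng.random()) < log_g[idx(e)] - log_g[idx(e + de)]:
                spins[i, j] *= -1
                e += de
            log_g[idx(e)] += log_f
            hist[idx(e)] += 1
        visited = hist > 0
        if hist[visited].min() > flatness * hist[visited].mean():
            hist[:] = 0      # histogram flat enough: refine the factor
            log_f /= 2
    # Normalize so the two ferromagnetic ground states give g(E_min) = 2.
    return log_g - log_g[0] + np.log(2.0)

log_g = wang_landau_ising()
print(round(float(log_g[0]), 3))  # → 0.693 (ln 2, by construction)
```

Summing exp(log_g) over visited levels should recover roughly 2^16 total states for the 4 x 4 lattice, a quick sanity check on the estimated density of states that restricted-energy-space schemes like the one above build upon.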
Scaling laws of passive-scalar diffusion in the interstellar medium
NASA Astrophysics Data System (ADS)
Colbrook, Matthew J.; Ma, Xiangcheng; Hopkins, Philip F.; Squire, Jonathan
2017-05-01
Passive-scalar mixing (metals, molecules, etc.) in the turbulent interstellar medium (ISM) is critical for abundance patterns of stars and clusters, galaxy and star formation, and cooling from the circumgalactic medium. However, the fundamental scaling laws remain poorly understood in the highly supersonic, magnetized, shearing regime relevant for the ISM. We therefore study the full scaling laws governing passive-scalar transport in idealized simulations of supersonic turbulence. Using simple phenomenological arguments for the variation of diffusivity with scale based on Richardson diffusion, we propose a simple fractional diffusion equation to describe the turbulent advection of an initial passive scalar distribution. These predictions agree well with the measurements from simulations, and vary with turbulent Mach number in the expected manner, remaining valid even in the presence of a large-scale shear flow (e.g. rotation in a galactic disc). The evolution of the scalar distribution is not the same as obtained using simple, constant 'effective diffusivity' as in Smagorinsky models, because the scale dependence of turbulent transport means an initially Gaussian distribution quickly develops highly non-Gaussian tails. We also emphasize that these are mean scalings that apply only to ensemble behaviours (assuming many different, random scalar injection sites): individual Lagrangian 'patches' remain coherent (poorly mixed) and simply advect for a large number of turbulent flow-crossing times.
ERIC Educational Resources Information Center
Johnson, Matthew S.; Jenkins, Frank
2005-01-01
Large-scale educational assessments such as the National Assessment of Educational Progress (NAEP) sample examinees to whom an exam will be administered. In most situations the sampling design is not a simple random sample and must be accounted for in the estimating model. After reviewing the current operational estimation procedure for NAEP, this…
Forward-bias tunneling - A limitation to bipolar device scaling
NASA Technical Reports Server (NTRS)
Del Alamo, Jesus A.; Swanson, Richard M.
1986-01-01
Forward-bias tunneling is observed in heavily doped p-n junctions of bipolar transistors. A simple phenomenological model suitable for incorporation in device codes is developed. The model identifies as key parameters the space-charge-region (SCR) thickness at zero bias and the reduced doping level at its edges, both of which can be obtained from C-V characteristics. This tunneling mechanism may limit the maximum gain achievable from scaled bipolar devices.
A simple approximation for larval retention around reefs
NASA Astrophysics Data System (ADS)
Cetina-Heredia, Paulina; Connolly, Sean R.
2011-09-01
Estimating larval retention at individual reefs by local scale three-dimensional flows is a significant problem for understanding, and predicting, larval dispersal. Determining larval dispersal commonly involves the use of computationally demanding and expensively calibrated/validated hydrodynamic models that resolve reef wake eddies. This study models variation in larval retention times for a range of reef shapes and circulation regimes, using a reef-scale three-dimensional hydrodynamic model. It also explores how well larval retention time can be estimated based on the "Island Wake Parameter", a measure of the degree of flow turbulence in the wake of reefs that is a simple function of flow speed, reef dimension, and vertical diffusion. The mean residence times found in the present study (0.48-5.64 days) indicate substantial potential for self-recruitment of species whose larvae are passive, or weak swimmers, for the first several days after release. Results also reveal strong and significant relationships between the Island Wake Parameter and mean residence time, explaining 81-92% of the variability in retention among reefs across a range of unidirectional flow speeds and tidal regimes. These findings suggest that good estimates of larval retention may be obtained from relatively coarse-scale characteristics of the flow, and basic features of reef geomorphology. Such approximations may be a valuable tool for modeling connectivity and meta-population dynamics over large spatial scales, where explicitly characterizing fine-scale flows around reefs requires a prohibitive amount of computation and extensive model calibration.
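One commonly used form of the Island Wake Parameter (following Wolanski and co-workers) is P = U·H²/(K_z·L); the exact form and variable definitions used in this paper may differ, so treat the following as a hedged illustration:

```python
def island_wake_parameter(U, H, Kz, L):
    """Island Wake Parameter P = U * H**2 / (Kz * L).

    U  : free-stream flow speed (m/s)
    H  : water depth (m)
    Kz : vertical eddy diffusivity (m^2/s)
    L  : reef/island width (m)

    Roughly, P ~ 1 suggests a stable wake, while P >> 1 suggests an
    unsteady, eddy-shedding wake.
    """
    return U * H ** 2 / (Kz * L)
```

For example, U = 0.2 m/s, H = 10 m, K_z = 0.01 m²/s and L = 1 km give P = 2, in the transitional regime.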
A hierarchy of granular continuum models: Why flowing grains are both simple and complex
NASA Astrophysics Data System (ADS)
Kamrin, Ken
2017-06-01
Granular materials have a strange propensity to behave as either a complex medium or a simple medium depending on the precise question being asked. This review paper offers a summary of granular flow rheologies for well-developed or steady-state motion, and seeks to explain this dichotomy through the vast range of complexity intrinsic to these models. A key observation is that to achieve accuracy in predicting flow fields in general geometries, one requires a model that accounts for a number of subtleties, most notably a nonlocal effect to account for cooperativity in the flow as induced by the finite size of grains. On the other hand, forces and tractions that develop on macro-scale, submerged boundaries appear to be minimally affected by grain size and, barring very rapid motions, are well represented by simple rate-independent frictional plasticity models. A major simplification observed in experiments of granular intrusion, which we refer to as the `resistive force hypothesis' of granular Resistive Force Theory, can be shown to arise directly from rate-independent plasticity. Because such plasticity models have so few parameters, and the major rheological parameter is a dimensionless internal friction coefficient, some of these simplifications can be seen as consequences of scaling.
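As context for the "simple" end of this hierarchy, the widely used local μ(I) rheology interpolates the effective friction between two limits as a function of the dimensionless inertial number I. The constants below are typical glass-bead values from the literature, used purely for illustration:

```python
def mu_of_I(I, mu_s=0.38, mu_2=0.64, I_0=0.28):
    """Local mu(I) granular rheology: effective friction coefficient as a
    function of the inertial number I. Rate-independent frictional
    plasticity is recovered in the quasi-static limit I -> 0, where
    mu -> mu_s; fast flows approach the second limit mu_2."""
    return mu_s + (mu_2 - mu_s) / (1.0 + I_0 / I)
```

Nonlocal models extend this local law with a cooperativity (grain-size) correction; the point of the review is that the boundary-force predictions are often insensitive to that extra complexity.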
Foreshock and aftershocks in simple earthquake models.
Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R
2015-02-27
Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.
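A minimal sketch of an OFC-style automaton with a fraction of stronger "asperity" cells follows. It is a regular grid with nearest-neighbour redistribution, whereas the paper uses long-range interactions; all parameter values (loading, dissipation, asperity strength) are illustrative assumptions:

```python
import numpy as np

def ofc_with_asperities(L=24, alpha=0.2, steps=500, asperity_frac=0.05,
                        asperity_strength=3.0, seed=0):
    """OFC-style stress-redistribution automaton with stronger asperity
    cells. Returns the avalanche (event) size per loading step."""
    rng = np.random.default_rng(seed)
    stress = rng.random((L, L))
    thresh = np.ones((L, L))
    thresh[rng.random((L, L)) < asperity_frac] = asperity_strength  # asperities
    sizes = []
    for _ in range(steps):
        stress += (thresh - stress).min() + 1e-9   # load until the weakest cell fails
        size = 0
        failing = np.argwhere(stress >= thresh)
        while len(failing):
            for i, j in failing:
                s = stress[i, j]
                stress[i, j] = 0.0                 # cell ruptures and resets
                size += 1
                for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if 0 <= ni < L and 0 <= nj < L:
                        stress[ni, nj] += alpha * s  # pass a fraction to neighbours
            failing = np.argwhere(stress >= thresh)
        sizes.append(size)
    return np.array(sizes)
```

Because asperity cells absorb stress for many cycles before rupturing at their higher threshold, they act as temporal organisers of large events, which is the mechanism the abstract describes.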
Test of the efficiency of three storm water quality models with a rich set of data.
Ahyerre, M; Henry, F O; Gogien, F; Chabanel, M; Zug, M; Renaudet, D
2005-01-01
The objective of this article is to test the efficiency of three different Storm Water Quality Models (SWQMs) on the same data set (34 rain events, SS measurements) sampled on a 42 ha watershed in the center of Paris. The models were calibrated at the scale of the rain event. Considering the mass of pollution calculated per event, the results of the models are satisfactory, but they are of the same order of magnitude as a simple hydraulic approach associated with a constant concentration. In a second step, the mass of pollutant at the outlet of the catchment was calculated at the global scale of the 34 events. This approach shows that the simple hydraulic calculations give better results than the SWQMs. Finally, the pollutographs are analysed, showing that storm water quality models are interesting tools for representing the shape of the pollutographs and the dynamics of the phenomenon, which can be useful in some projects for managers.
Forgetting in Immediate Serial Recall: Decay, Temporal Distinctiveness, or Interference?
ERIC Educational Resources Information Center
Oberauer, Klaus; Lewandowsky, Stephan
2008-01-01
Three hypotheses of forgetting from immediate memory were tested: time-based decay, decreasing temporal distinctiveness, and interference. The hypotheses were represented by 3 models of serial recall: the primacy model, the SIMPLE (scale-independent memory, perception, and learning) model, and the SOB (serial order in a box) model, respectively.…
Active-to-absorbing-state phase transition in an evolving population with mutation.
Sarkar, Niladri
2015-10-01
We study the active to absorbing phase transition (AAPT) in a simple two-component model system for a species and its mutant. We uncover the nontrivial critical scaling behavior and weak dynamic scaling near the AAPT that shows the significance of mutation and highlights the connection of this model with the well-known directed percolation universality class. Our model should be a useful starting point to study how mutation may affect extinction or survival of a species.
Groundwater development stress: Global-scale indices compared to regional modeling
Alley, William; Clark, Brian R.; Ely, Matt; Faunt, Claudia
2018-01-01
The increased availability of global datasets and technologies such as global hydrologic models and the Gravity Recovery and Climate Experiment (GRACE) satellites have resulted in a growing number of global-scale assessments of water availability using simple indices of water stress. Developed initially for surface water, such indices are increasingly used to evaluate global groundwater resources. We compare indices of groundwater development stress for three major agricultural areas of the United States to information available from regional water budgets developed from detailed groundwater modeling. These comparisons illustrate the potential value of regional-scale analyses to supplement global hydrological models and GRACE analyses of groundwater depletion. Regional-scale analyses allow assessments of water stress that better account for scale effects, the dynamics of groundwater flow systems, the complexities of irrigated agricultural systems, and the laws, regulations, engineering, and socioeconomic factors that govern groundwater use. Strategic use of regional-scale models with global-scale analyses would greatly enhance knowledge of the global groundwater depletion problem.
USDA-ARS?s Scientific Manuscript database
Various computer models, ranging from simple to complex, have been developed to simulate hydrology and water quality from field to watershed scales. However, many users are uncertain about which model to choose when estimating water quantity and quality conditions in a watershed. This study compared...
USE OF MODELS FOR GAMMA SHIELDING STUDIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clifford, C.E.
1962-02-01
The use of models for shielding studies of buildings exposed to gamma radiation was evaluated by comparing the dose distributions produced in a blockhouse with movable inside walls exposed to 0.66 MeV gamma radiation with the corresponding distributions in an iron 1:10 scale model. The effects of air and ground scaling on the readings in the model were also investigated. Iron appeared to be a suitable model material for simple closed buildings, but for more complex structures it appeared that the use of iron models would progressively overestimate the gamma shielding protection as the complexity increased.
NASA Astrophysics Data System (ADS)
Huber, A.; Chankin, A. V.
2017-06-01
A simple two-point representation of the tokamak scrape-off layer (SOL) in the conduction-limited regime, based on the parallel and perpendicular energy balance equations in combination with the heat flux width predicted by a heuristic drift-based model, was used to derive a scaling for the cross-field thermal diffusivity χ⊥. For fixed plasma shape, and neglecting weak power dependences with exponent 1/8, the scaling χ⊥ ∝ P_SOL/(n B_θ R²) is derived.
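In code form the derived proportionality is direct to state; the dimensionless prefactor C below stands in for the constants and the neglected weak 1/8-power dependences, and is an assumption, not a value from the paper:

```python
def chi_perp(P_SOL, n, B_theta, R, C=1.0):
    """Cross-field thermal diffusivity scaling
    chi_perp ~ C * P_SOL / (n * B_theta * R**2).

    P_SOL   : power flowing into the scrape-off layer
    n       : plasma density
    B_theta : poloidal magnetic field
    R       : major radius
    C       : unknown proportionality constant (illustrative)
    """
    return C * P_SOL / (n * B_theta * R ** 2)
```

Doubling the SOL power at fixed density, field and radius doubles the inferred diffusivity, as the proportionality requires.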
Remote tropical and sub-tropical responses to Amazon deforestation
NASA Astrophysics Data System (ADS)
Badger, Andrew M.; Dirmeyer, Paul A.
2016-05-01
Replacing natural vegetation with realistic tropical crops over the Amazon region in a global Earth system model impacts vertical transport of heat and moisture, modifying the interaction between the atmospheric boundary layer and the free atmosphere. Vertical velocity is decreased over a majority of the Amazon region, shifting the ascending branch and modifying the seasonality of the Hadley circulation over the Atlantic and eastern Pacific oceans. Using a simple model that relates circulation changes to heating anomalies and generalizing the upper-atmosphere temperature response to deforestation, agreement is found between the response in the fully-coupled model and the simple solution. These changes to the large-scale dynamics significantly impact precipitation in several remote regions, namely sub-Saharan Africa, Mexico, the southwestern United States and extratropical South America, suggesting non-local climate repercussions for large-scale land use changes in the tropics are possible.
Seismic waves and earthquakes in a global monolithic model
NASA Astrophysics Data System (ADS)
Roubíček, Tomáš
2018-03-01
The philosophy that a single "monolithic" model can "asymptotically" replace and couple in a simple elegant way several specialized models relevant on various Earth layers is presented and, in special situations, also rigorously justified. In particular, global seismicity and tectonics is coupled to capture, e.g., (here by a simplified model) ruptures of lithospheric faults generating seismic waves which then propagate through the solid-like mantle and inner core both as shear (S) or pressure (P) waves, while S-waves are suppressed in the fluidic outer core and also in the oceans. The "monolithic-type" models have the capacity to describe all the mentioned features globally in a unified way together with corresponding interfacial conditions implicitly involved, only when scaling its parameters appropriately in different Earth's layers. Coupling of seismic waves with seismic sources due to tectonic events is thus an automatic side effect. The global ansatz is here based, rather for an illustration, only on a relatively simple Jeffreys' viscoelastic damageable material at small strains whose various scaling (limits) can lead to Boger's viscoelastic fluid or even to purely elastic (inviscid) fluid. Self-induced gravity field, Coriolis, centrifugal, and tidal forces are counted in our global model, as well. The rigorous mathematical analysis as far as the existence of solutions, convergence of the mentioned scalings, and energy conservation is briefly presented.
NASA Astrophysics Data System (ADS)
Orr, Matthew; Hopkins, Philip F.
2018-06-01
I will present a simple model of non-equilibrium star formation and its relation to the scatter in the Kennicutt-Schmidt relation and large-scale star formation efficiencies in galaxies. I will highlight the importance of a hierarchy of timescales, between the galaxy dynamical time, local free-fall time, the delay time of stellar feedback, and temporal overlap in observables, in setting the scatter of the observed star formation rates for a given gas mass. Further, I will talk about how these timescales (and their associated duty-cycles of star formation) influence interpretations of the large-scale star formation efficiency in reasonably star-forming galaxies. Lastly, the connection with galactic centers and out-of-equilibrium feedback conditions will be mentioned.
A Lattice Boltzmann Method for Turbomachinery Simulations
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Lopez, I.
2003-01-01
The Lattice Boltzmann (LB) method is a relatively new method for flow simulations. Its starting point is statistical mechanics and the Boltzmann equation. The LB method sets up its model at the molecular scale and simulates the flow at the macroscopic scale. LBM has so far been applied mostly to incompressible flows and simple geometries.
Wenchi Jin; Hong S. He; Frank R. Thompson
2016-01-01
Process-based forest ecosystem models vary from simple physiological, complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at plot scale with a temporal extent of less than 10 years, however, it is largely untested as to whether complex models outperform the other two types of models...
NASA Technical Reports Server (NTRS)
Anderson, G. S.; Hayden, R. E.; Thompson, A. R.; Madden, R.
1985-01-01
The feasibility of acoustical scale modeling techniques for modeling wind effects on long range, low frequency outdoor sound propagation was evaluated. Upwind and downwind propagation was studied at 1/100 scale for flat ground and simple hills with both rigid and finite ground impedance over a full-scale frequency range from 20 to 500 Hz. Results are presented as 1/3-octave frequency spectra of differences in propagation loss between the case studied and a free-field condition. Selected sets of these results were compared with validated analytical models for propagation loss, when such models were available. When they were not, results were compared with predictions from approximate models developed for this study. Comparisons were encouraging in many cases considering the approximations involved in both the physical modeling and analysis methods. Of particular importance was the favorable comparison between theory and experiment for propagation over soft ground.
Ensemble downscaling in coupled solar wind-magnetosphere modeling for space weather forecasting
Owens, M J; Horbury, T S; Wicks, R T; McGregor, S L; Savani, N P; Xiong, M
2014-01-01
Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind “noise,” which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical “downscaling” of solar wind model results prior to their use as input to a magnetospheric model. As magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme are tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme. Key points: solar wind models must be downscaled in order to drive magnetospheric models; ensemble downscaling is more effective than deterministic downscaling; the magnetosphere responds nonlinearly to small-scale solar wind fluctuations.
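The smoothing-plus-noise step can be sketched as follows. This toy version resamples the residual distribution, so it matches only the residual PDF rather than the full observed spectral characteristics the authors describe; window length and member count are arbitrary:

```python
import numpy as np

def downscale_ensemble(obs, window=8, n_members=10, seed=0):
    """Crude statistical downscaling: smooth the observed series (a
    stand-in for coarse solar wind model output), then build ensemble
    members by re-adding small-scale structure resampled from the
    smoothing residuals."""
    rng = np.random.default_rng(seed)
    obs = np.asarray(obs, dtype=float)
    smooth = np.convolve(obs, np.ones(window) / window, mode="same")
    resid = obs - smooth                       # the removed small-scale "noise"
    members = np.array([smooth + rng.choice(resid, size=obs.size)
                        for _ in range(n_members)])
    return smooth, members
```

Each member would then drive a separate magnetospheric run, and the spread of the resulting forecasts quantifies the uncertainty contributed by unresolved solar wind fluctuations.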
Exploring global carbon turnover and radiocarbon cycling in terrestrial biosphere models
NASA Astrophysics Data System (ADS)
Graven, H. D.; Warren, H.
2017-12-01
The uptake of carbon into terrestrial ecosystems through net primary productivity (NPP) and the turnover of that carbon through various pathways are the fundamental drivers of changing carbon stocks on land, in addition to human-induced and natural disturbances. Terrestrial biosphere models use different formulations for carbon uptake and release, resulting in a range of values in NPP of 40-70 PgC/yr and biomass turnover times of about 25-40 years for the preindustrial period in current-generation models from CMIP5. Biases in carbon uptake and turnover impact simulated carbon uptake and storage in the historical period and later in the century under changing climate and CO2 concentration, however evaluating global-scale NPP and carbon turnover is challenging. Scaling up of plot-scale measurements involves uncertainty due to the large heterogeneity across ecosystems and biomass types, some of which are not well-observed. We are developing the modelling of radiocarbon in terrestrial biosphere models, with a particular focus on decadal 14C dynamics after the nuclear weapons testing in the 1950s-60s, including the impact of carbon flux trends and variability on 14C cycling. We use an estimate of the total inventory of excess 14C in the biosphere constructed by Naegler and Levin (2009) using a 14C budget approach incorporating estimates of total 14C produced by the weapons tests and atmospheric and oceanic 14C observations. By simulating radiocarbon in simple biosphere box models using carbon fluxes from the CMIP5 models, we find that carbon turnover is too rapid in many of the simple models - the models appear to take up too much 14C and release it too quickly. Therefore many CMIP5 models may also simulate carbon turnover that is too rapid. A caveat is that the simple box models we use may not adequately represent carbon dynamics in the full-scale models. 
Explicit simulation of radiocarbon in terrestrial biosphere models would allow more robust evaluation of biosphere models and the investigation of climate-carbon cycle feedbacks on various timescales. Explicit simulation of radiocarbon and carbon-13 in terrestrial biosphere models of Earth System Models, as well as in ocean models, is recommended by CMIP6 and supported by CMIP6 protocols and forcing datasets.
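The kind of simple biosphere box model described can be sketched as a single carbon pool with turnover time τ tracking a normalized atmospheric ¹⁴C anomaly; radioactive decay (5730 yr half-life) is negligible on decadal scales. This is an illustrative one-box version, not the authors' multi-box configuration:

```python
import numpy as np

def one_box_14c(tau, atm, dt=1.0):
    """One-box biosphere radiocarbon anomaly R_b driven by an atmospheric
    anomaly series 'atm' (e.g. the bomb spike):

        dR_b/dt = (R_atm - R_b) / tau

    Smaller tau (faster carbon turnover) means faster uptake *and* faster
    release of the bomb-14C excess, which is the diagnostic used in the
    abstract."""
    Rb = float(atm[0])
    out = [Rb]
    for Ra in atm[1:]:
        Rb += dt * (Ra - Rb) / tau   # explicit Euler step (requires dt < tau)
        out.append(Rb)
    return np.array(out)
```

Comparing the simulated excess-¹⁴C inventory against the Naegler and Levin budget then indicates whether a model's turnover is too fast or too slow.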
Toward micro-scale spatial modeling of gentrification
NASA Astrophysics Data System (ADS)
O'Sullivan, David
A simple preliminary model of gentrification is presented. The model is based on an irregular cellular automaton architecture drawing on the concept of proximal space, which is well suited to the spatial externalities present in housing markets at the local scale. The rent gap hypothesis on which the model's cell transition rules are based is discussed. The model's transition rules are described in detail. Practical difficulties in configuring and initializing the model are described and its typical behavior reported. Prospects for further development of the model are discussed. The current model structure, while inadequate, is well suited to further elaboration and the incorporation of other interesting and relevant effects.
COSP - A computer model of cyclic oxidation
NASA Technical Reports Server (NTRS)
Lowell, Carl E.; Barrett, Charles A.; Palmer, Raymond W.; Auping, Judith V.; Probst, Hubert B.
1991-01-01
A computer model useful in predicting the cyclic oxidation behavior of alloys is presented. The model considers the oxygen uptake due to scale formation during the heating cycle and the loss of oxide due to spalling during the cooling cycle. The balance between scale formation and scale loss is modeled and used to predict weight change and metal loss kinetics. A simple uniform spalling model is compared to a more complex random spall site model. In nearly all cases, the simpler uniform spall model gave predictions as accurate as the more complex model. The model has been applied to several nickel-base alloys which, depending upon composition, form Al2O3 or Cr2O3 during oxidation. The model has been validated by several experimental approaches. Versions of the model that run on a personal computer are available.
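A heavily simplified, hypothetical version of the uniform-spalling logic is shown below: parabolic oxide growth during each hot cycle, then a fixed fraction of the retained oxide lost on cooling. COSP itself tracks oxygen uptake, weight change and metal loss in more detail; the rate constant and spall fraction here are arbitrary:

```python
import math

def cyclic_oxidation(n_cycles, kp=1.0, spall_frac=0.1):
    """Toy uniform-spall cyclic oxidation model.

    kp         : parabolic rate constant (oxide-thickness^2 per hot cycle)
    spall_frac : fraction of the retained oxide lost on each cooldown
    Returns the retained oxide thickness after each cycle."""
    oxide = 0.0
    history = []
    for _ in range(n_cycles):
        oxide = math.sqrt(oxide ** 2 + kp)  # parabolic growth while hot
        oxide *= (1.0 - spall_frac)         # uniform spalling on cooling
        history.append(oxide)
    return history
```

With no spalling the retained scale grows as √(n·kp); with spalling it approaches a steady thickness while metal is continually consumed, which is the balance the abstract refers to.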
Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chassin, David P.; Posse, Christian
2005-09-15
The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using other methods and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.
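A minimal preferential-attachment generator in the spirit of the Barabási-Albert model is sketched below (the standard growth algorithm, not the authors' code or the actual grid topology data):

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow an n-node BA graph: each new node attaches to m distinct
    existing nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    targets = list(range(m))      # seed nodes for the first arrival
    repeated = []                 # every node appears once per incident edge end
    edges = []
    for source in range(m, n):
        edges.extend((source, t) for t in targets)
        repeated.extend(targets)
        repeated.extend([source] * m)
        chosen = set()
        while len(chosen) < m:    # degree-proportional sampling, no repeats
            chosen.add(rng.choice(repeated))
        targets = list(chosen)
    return edges
```

The resulting scale-free degree distribution, with a few highly connected hubs, is what makes failure propagation on such networks a useful proxy for grid reliability analysis.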
Polarizable molecular interactions in condensed phase and their equivalent nonpolarizable models.
Leontyev, Igor V; Stuchebrukhov, Alexei A
2014-07-07
Earlier, using a phenomenological approach, we showed that in some cases polarizable models of condensed phase systems can be reduced to nonpolarizable equivalent models with scaled charges. Examples of such systems include ionic liquids, TIPnP-type models of water, protein force fields, and others, where interactions and dynamics of inherently polarizable species can be accurately described by nonpolarizable models. To describe electrostatic interactions, the effective charges of simple ionic liquids are obtained by scaling the actual charges of ions by a factor of 1/√(ε(el)), which is due to electronic polarization screening effect; the scaling factor of neutral species is more complicated. Here, using several theoretical models, we examine how exactly the scaling factors appear in theory, and how, and under what conditions, polarizable Hamiltonians are reduced to nonpolarizable ones. These models allow one to trace the origin of the scaling factors, determine their values, and obtain important insights on the nature of polarizable interactions in condensed matter systems.
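The charge-scaling rule for simple ions is direct to state in code. A typical electronic (high-frequency) dielectric constant ε_el ≈ 2 for organic media gives the familiar ≈ 0.7 scaling factor used in ionic-liquid force fields; that numerical value is a typical estimate, not one taken from this paper:

```python
import math

def scaled_charge(q, eps_el=2.0):
    """Effective nonpolarizable charge q_eff = q / sqrt(eps_el), accounting
    for electronic polarization screening in the electronic-continuum
    picture. eps_el is the electronic (optical) dielectric constant."""
    return q / math.sqrt(eps_el)
```

As the abstract notes, this simple rule applies to simple ionic species; neutral molecules require a more involved treatment.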
Santillán, Moisés
2003-07-21
A simple model of an oxygen-exchanging network is presented and studied. This network's task is to transfer a given oxygen rate from a source to an oxygen-consuming system. It consists of a pipeline that interconnects the oxygen-consuming system and the reservoir, and of a fluid, the active oxygen-transporting element, moving through the pipeline. The network's optimal design (total pipeline surface) and dynamics (volumetric flow of the oxygen-transporting fluid), which minimize the energy rate expended in moving the fluid, are calculated in terms of the oxygen exchange rate, the pipeline length, and the pipeline cross-section. After the oxygen-exchanging network is optimized, the energy-converting system is shown to satisfy a 3/4-like allometric scaling law, based upon the assumption that its performance regime is scale invariant as well as on some feasible geometric scaling assumptions. Finally, the possible implications of this result for the allometric scaling properties observed elsewhere in living beings are discussed.
Spontaneous parity violation and SUSY strong gauge theory
NASA Astrophysics Data System (ADS)
Haba, Naoyuki; Ohki, Hiroshi
2012-07-01
We suggest simple models of spontaneous parity violation in supersymmetric strong gauge theory. We focus on a left-right symmetric model and investigate vacua with spontaneous parity violation. Non-perturbative effects are calculable in supersymmetric gauge theory, and we suggest new models. Our models show confinement, so we try to understand them using a dual description of the theory. The left-right symmetry breaking and electroweak symmetry breaking occur simultaneously, with a suitable energy scale hierarchy. This structure has several advantages compared to the MSSM. The scale of the Higgs mass (left-right breaking scale) and that of the VEVs are different, so the SUSY little hierarchy problems are absent. The second model also induces spontaneous supersymmetry breaking [1].
GIS-BASED HYDROLOGIC MODELING: THE AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT TOOL
Planning and assessment in land and water resource management are evolving from simple, local scale problems toward complex, spatially explicit regional ones. Such problems have to be
addressed with distributed models that can compute runoff and erosion at different spatial a...
Pelletier, Jon D
2002-02-19
The majority of numerical models in climatology and geomagnetism rely on deterministic finite-difference techniques and attempt to include as many empirical constraints on the many processes and boundary conditions applicable to their very complex systems. Despite their sophistication, many of these models are unable to reproduce basic aspects of climatic or geomagnetic dynamics. We show that a simple stochastic model, which treats the flux of heat energy in the atmosphere by convective instabilities with random advection and diffusive mixing, does a remarkable job at matching the observed power spectrum of historical and proxy records for atmospheric temperatures from time scales of one day to one million years (Myr). With this approach distinct changes in the power-spectral form can be associated with characteristic time scales of ocean mixing and radiative damping. Similarly, a simple model of the diffusion of magnetic intensity in Earth's core coupled with amplification and destruction of the local intensity can reproduce the observed 1/f noise behavior of Earth's geomagnetic intensity from time scales of 1 Myr to 100 yr. In addition, the statistics of the fluctuations in the polarity reversal rate from time scales of 1 Myr to 100 Myr are consistent with the hypothesis that reversals are the result of variations in 1/f noise geomagnetic intensity above a certain threshold, suggesting that reversals may be associated with internal fluctuations rather than changes in mantle thermal or magnetic boundary conditions.
Application of Support Vector Machine to Forex Monitoring
NASA Astrophysics Data System (ADS)
Kamruzzaman, Joarder; Sarker, Ruhul A.
Previous studies have demonstrated the superior performance of artificial neural network (ANN) based forex forecasting models over traditional regression models. This paper applies support vector machines to build a forecasting model from historical data using six simple technical indicators, and presents a comparison with an ANN-based model trained by the scaled conjugate gradient (SCG) learning algorithm. The models are evaluated and compared on the basis of five commonly used performance metrics that measure closeness of prediction as well as correctness in directional change. Forecasting results for six different currencies against the Australian dollar reveal the superior performance of the SVM model using a simple linear kernel over the ANN-SCG model in terms of all the evaluation metrics. The effect of SVM parameter selection on prediction performance is also investigated and analyzed.
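The five metrics themselves are not listed in the abstract; as one hedged illustration, a directional-change metric of the kind described can be sketched in a few lines (the function name and interface below are illustrative, not from the paper):

```python
import numpy as np

def directional_accuracy(actual, predicted):
    """Fraction of periods where the predicted change in the exchange
    rate has the same sign as the actual change."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    same_sign = np.sign(np.diff(actual)) == np.sign(np.diff(predicted))
    return float(same_sign.mean())
```

Metrics of this kind complement closeness-of-prediction measures such as mean squared error, since a forecast can be numerically close yet consistently wrong about direction.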
The application of sensitivity analysis to models of large scale physiological systems
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1974-01-01
A survey of the literature on sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is presented.
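As a minimal sketch of the parameter sensitivity analysis described, assuming a logistic equation for the "simple population model" (the paper's actual models and notation may differ):

```python
import numpy as np

def logistic(t, r, K, N0=10.0):
    """Logistic population model: growth rate r, carrying capacity K."""
    return K / (1.0 + (K / N0 - 1.0) * np.exp(-r * t))

def normalized_sensitivity(f, t, params, name, h=1e-6):
    """Normalized sensitivity S = (p / f) * df/dp via central differences:
    the relative change in model output per relative change in one parameter."""
    p = params[name]
    hi = dict(params); hi[name] = p * (1 + h)
    lo = dict(params); lo[name] = p * (1 - h)
    dfdp = (f(t, **hi) - f(t, **lo)) / (2.0 * p * h)
    return p * dfdp / f(t, **params)
```

At late times the logistic solution saturates at K, so the normalized sensitivity to K approaches 1 while the sensitivity to r vanishes, which is exactly the kind of relative-influence ranking the survey describes.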
A Simple Model for Fine Structure Transitions in Alkali-Metal Noble-Gas Collisions
2015-03-01
[List-of-figures fragment: Effect of scaling the VRG(R) radial coupling fit parameter, V0, for KHe, KNe, and KAr; for RbHe, RbNe, and RbAr; and for CsHe, CsNe, and CsAr.]
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Mahanama, Sarith P.
2012-01-01
The inherent soil moisture-evaporation relationships used in today's land surface models (LSMs) arguably reflect a lot of guesswork given the lack of contemporaneous evaporation and soil moisture observations at the spatial scales represented by regional and global models. The inherent soil moisture-runoff relationships used in the LSMs are also of uncertain accuracy. Evaluating these relationships is difficult but crucial given that they have a major impact on how the land component contributes to hydrological and meteorological variability within the climate system. The relationships, it turns out, can be examined efficiently and effectively with a simple water balance model framework. The simple water balance model, driven with multi-decadal observations covering the conterminous United States, shows how different prescribed relationships lead to different manifestations of hydrological variability, some of which can be compared directly to observations. Through the testing of a wide suite of relationships, the simple model provides estimates for the underlying relationships that operate in nature and that should be operating in LSMs. We examine the relationships currently used in a number of different LSMs in the context of the simple water balance model results and make recommendations for potential first-order improvements to these LSMs.
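A minimal single-store sketch of such a simple water balance model, with the prescribed soil moisture-evaporation and soil moisture-runoff relationships passed in as swappable functions (names and functional forms here are illustrative, not the paper's):

```python
def bucket_model(precip, pet, beta=lambda w: w, rho=lambda w: w**2,
                 wmax=100.0, w0=50.0):
    """Minimal single-store water balance. Each time step:
      runoff = rho(w/wmax)  * precip  (prescribed moisture-runoff relationship)
      evap   = beta(w/wmax) * pet     (prescribed moisture-evaporation relationship)
      storage change = precip - runoff - evap, clipped to [0, wmax]."""
    w, out = w0, []
    for p, e in zip(precip, pet):
        q = rho(w / wmax) * p
        et = beta(w / wmax) * e
        w = min(max(w + p - q - et, 0.0), wmax)
        out.append((w, q, et))
    return out
```

Testing a "wide suite of relationships" then amounts to sweeping different `beta` and `rho` functions through the same driver data and comparing the simulated variability against observations.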
Hartin, Corinne A.; Patel, Pralit L.; Schwarber, Adria; ...
2015-04-01
Simple climate models play an integral role in the policy and scientific communities. They are used for climate mitigation scenarios within integrated assessment models, complex climate model emulation, and uncertainty analyses. Here we describe Hector v1.0, an open source, object-oriented, simple global climate carbon-cycle model. This model runs essentially instantaneously while still representing the most critical global-scale earth system processes. Hector has a three-part main carbon cycle: a one-pool atmosphere, land, and ocean. The model's terrestrial carbon cycle includes primary production and respiration fluxes, accommodating arbitrary geographic divisions into, e.g., ecological biomes or political units. Hector actively solves the inorganic carbon system in the surface ocean, directly calculating air–sea fluxes of carbon and ocean pH. Hector reproduces the global historical trends of atmospheric [CO2], radiative forcing, and surface temperatures. The model simulates all four Representative Concentration Pathways (RCPs) with equivalent rates of change of key variables over time compared to current observations, MAGICC (a well-known simple climate model), and models from the 5th Coupled Model Intercomparison Project. Hector's flexibility, open-source nature, and modular design will facilitate a broad range of research in various areas.
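Hector itself is open source; the sketch below is not Hector's code but a generic one-box energy balance with the standard logarithmic CO2 forcing, meant only to convey the flavor of a reduced-form climate model (parameter values are illustrative assumptions):

```python
import math

def co2_forcing(co2_ppm, co2_pre=278.0):
    """Stylized CO2 radiative forcing (W/m^2): F = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(co2_ppm / co2_pre)

def one_box_temperature(co2_series, lam=1.2, c_eff=8.0, dt=1.0):
    """One-box energy balance: dT/dt = (F - lam*T) / c_eff,
    lam a feedback parameter (W/m^2/K), c_eff an effective heat capacity."""
    temps, T = [], 0.0
    for c in co2_series:
        T += dt * (co2_forcing(c) - lam * T) / c_eff
        temps.append(T)
    return temps
```

Held at doubled CO2, the box relaxes to the equilibrium warming F_2x / lam, which is why even a model this small reproduces the qualitative shape of scenario responses.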
On the Subgrid-Scale Modeling of Compressible Turbulence
NASA Technical Reports Server (NTRS)
Squires, Kyle; Zeman, Otto
1990-01-01
A new sub-grid scale model is presented for the large-eddy simulation of compressible turbulence. In the proposed model, compressibility contributions have been incorporated in the sub-grid scale eddy viscosity which, in the incompressible limit, reduce to a form originally proposed by Smagorinsky (1963). The model has been tested against a simple extension of the traditional Smagorinsky eddy viscosity model using simulations of decaying, compressible homogeneous turbulence. Simulation results show that the proposed model provides greater dissipation of the compressive modes of the resolved-scale velocity field than does the Smagorinsky eddy viscosity model. For an initial r.m.s. turbulence Mach number of 1.0, simulations performed using the Smagorinsky model become physically unrealizable (i.e., negative energies) because of the inability of the model to sufficiently dissipate fluctuations due to resolved scale velocity dilations. The proposed model is able to provide the necessary dissipation of this energy and maintain the realizability of the flow. Following Zeman (1990), turbulent shocklets are considered to dissipate energy independent of the Kolmogorov energy cascade. A possible parameterization of dissipation by turbulent shocklets for Large-Eddy Simulation is also presented.
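For reference, the incompressible-limit Smagorinsky eddy viscosity that the proposed model reduces to can be written in a few lines (the paper's compressibility contribution is not shown, and the constant value is a common choice, not taken from the paper):

```python
import numpy as np

def smagorinsky_eddy_viscosity(strain_magnitude, delta, cs=0.17):
    """Classical Smagorinsky (1963) subgrid model: nu_t = (Cs * Delta)^2 * |S|,
    where |S| = sqrt(2 * S_ij * S_ij) is the resolved strain-rate magnitude,
    Delta the filter width, and Cs the Smagorinsky constant."""
    return (cs * delta) ** 2 * np.abs(strain_magnitude)
```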
BIOGENIC HYDROCARBON EMISSION INVENTORY FOR THE U.S. USING A SIMPLE FOREST CANOPY MODEL
A biogenic hydrocarbon emission inventory system, developed for acid deposition and regional oxidant modeling, is described, and results for a U.S. emission inventory are presented. For deciduous and coniferous forests, scaling relationships are used to account for canopy effects ...
A comprehensive surface-groundwater flow model
NASA Astrophysics Data System (ADS)
Arnold, Jeffrey G.; Allen, Peter M.; Bernhardt, Gilbert
1993-02-01
In this study, a simple groundwater flow and height model was added to an existing basin-scale surface water model. The linked model is: (1) watershed scale, allowing the basin to be subdivided; (2) designed to accept readily available inputs to allow general use over large regions; (3) continuous in time to allow simulation of land management, including such factors as climate and vegetation changes, pond and reservoir management, groundwater withdrawals, and stream and reservoir withdrawals. The model is described and validated on a 471 km2 watershed near Waco, Texas. This linked model should provide a comprehensive tool for water resource managers in development and planning.
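A minimal sketch of the linear-reservoir groundwater store commonly used in such linked surface-groundwater models (an assumption here; the paper's specific formulation may differ):

```python
def linear_reservoir(recharge, k=0.05, s0=100.0, dt=1.0):
    """Groundwater store with outflow proportional to storage, Q = k*S,
    so dS/dt = R - k*S. With no recharge, baseflow recedes geometrically:
    S(t) = S0 * (1 - k*dt)**t for this explicit time stepping."""
    s, series = s0, []
    for r in recharge:
        q = k * s
        s += dt * (r - q)
        series.append((s, q))
    return series
```

Linking such a store to a surface water model amounts to routing percolation from the soil profile into `recharge` and returning `q` to the stream as baseflow.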
Single-particle dynamics of the Anderson model: a local moment approach
NASA Astrophysics Data System (ADS)
Glossop, Matthew T.; Logan, David E.
2002-07-01
A non-perturbative local moment approach to single-particle dynamics of the general asymmetric Anderson impurity model is developed. The approach encompasses all energy scales and interaction strengths. It captures thereby strong coupling Kondo behaviour, including the resultant universal scaling behaviour of the single-particle spectrum; as well as the mixed valence and essentially perturbative empty orbital regimes. The underlying approach is physically transparent and innately simple, and as such is capable of practical extension to lattice-based models within the framework of dynamical mean-field theory.
Transverse-velocity scaling of femtoscopy in \\sqrt{s}=7\\,{TeV} proton–proton collisions
NASA Astrophysics Data System (ADS)
Humanic, T. J.
2018-05-01
Although transverse-mass scaling of femtoscopic radii is found to hold to a good approximation in heavy-ion collision experiments, it is seen to fail for high-energy proton–proton collisions. It is shown that if invariant radius parameters are plotted versus the transverse velocity instead, scaling with the transverse velocity is seen in \\sqrt{s}=7 TeV proton–proton experiments. A simple semi-classical model is shown to qualitatively reproduce this transverse velocity scaling.
Experimental Investigation of the Flow on a Simple Frigate Shape (SFS)
Mora, Rafael Bardera
2014-01-01
Helicopters operations on board ships require special procedures introducing additional limitations known as ship helicopter operational limitations (SHOLs) which are a priority for all navies. This paper presents the main results obtained from the experimental investigation of a simple frigate shape (SFS) which is a typical case of study in experimental and computational aerodynamics. The results obtained in this investigation are used to make an assessment of the flow predicted by the SFS geometry in comparison with experimental data obtained testing a ship model (reduced scale) in the wind tunnel and on board (full scale) measurements performed on a real frigate type ship geometry. PMID:24523646
Models for small-scale structure on cosmic strings. II. Scaling and its stability
NASA Astrophysics Data System (ADS)
Vieira, J. P. P.; Martins, C. J. A. P.; Shellard, E. P. S.
2016-11-01
We make use of the formalism described in a previous paper [Martins et al., Phys. Rev. D 90, 043518 (2014)] to address general features of wiggly cosmic string evolution. In particular, we highlight the important role played by poorly understood energy loss mechanisms and propose a simple Ansatz which tackles this problem in the context of an extended velocity-dependent one-scale model. We find a general procedure to determine all the scaling solutions admitted by a specific string model and study their stability, enabling a detailed comparison with future numerical simulations. A simpler comparison with previous Goto-Nambu simulations supports earlier evidence that scaling is easier to achieve in the matter era than in the radiation era. In addition, we also find that the requirement that a scaling regime be stable seems to notably constrain the allowed range of energy loss parameters.
Could the electroweak scale be linked to the large scale structure of the Universe?
NASA Technical Reports Server (NTRS)
Chakravorty, Alak; Massarotti, Alessandro
1991-01-01
We study a model where the domain walls are generated through a cosmological phase transition involving a scalar field. We assume the existence of a coupling between the scalar field and dark matter and show that the interaction between domain walls and dark matter leads to an energy-dependent reflection mechanism. For a simple Yukawa coupling, we find that the vacuum expectation value of the scalar field must be approximately 30 GeV to 1 TeV in order for the model to be successful in the formation of large-scale 'pancake' structures.
A simple model clarifies the complicated relationships of complex networks
Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi
2014-01-01
Real-world networks such as the Internet and the WWW have many common traits. Until now, hundreds of models have been proposed to characterize these traits and so understand the networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra-small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method to model complex networks from the viewpoint of optimisation. PMID:25160506
NASA Astrophysics Data System (ADS)
RUIZ, L.; Fovet, O.; Faucheux, M.; Molenat, J.; Sekhar, M.; Aquilina, L.; Gascuel-odoux, C.
2013-12-01
The development of simple and easily accessible metrics is required for characterizing and comparing catchment response to external forcings (climatic or anthropogenic) and for managing water resources. The hydrological and geochemical signatures in the stream represent the integration of the various processes controlling this response. The complexity of these signatures over several time scales, from sub-daily to several decades [Kirchner et al., 2001], makes their deconvolution very difficult. A large range of modeling approaches attempt to represent this complexity by accounting for the spatial and/or temporal variability of the processes involved. However, simple metrics are not easily retrieved from these approaches, mostly because of over-parametrization issues. We hypothesize that to obtain relevant metrics, we need models that are able to simulate the observed variability of river signatures at different time scales while being as parsimonious as possible. The lumped model ETNA (modified from [Ruiz et al., 2002]) is able to simulate adequately the seasonal and inter-annual patterns of stream NO3 concentration. Shallow groundwater is represented by two linear stores with double porosity, and riparian processes are represented by a constant nitrogen removal function. Our objective was to identify simple metrics of catchment response by calibrating this lumped model on two paired agricultural catchments where both N inputs and outputs were monitored for a period of 20 years. These catchments, belonging to ORE AgrHys, although underlain by the same granitic bedrock, display contrasting chemical signatures. The model was able to simulate the two contrasting observed patterns in stream and groundwater, for both hydrology and chemistry, at seasonal and multi-annual scales. It was also compatible with the expected trends of nitrate concentration since 1960.
The output variables of the model were used to compute the nitrate residence time in both catchments. We used the Generalized Likelihood Uncertainty Estimation (GLUE) approach [Beven and Binley, 1992] to assess the parameter uncertainties and the subsequent error in model outputs and residence times. Reasonably low parameter uncertainties were obtained by calibrating the two paired catchments simultaneously against both outlets' time series of stream flow and nitrate concentration. Finally, only one parameter controlled the contrast in nitrogen residence times between the catchments. Therefore, this approach provides a promising metric for classifying the variability of catchment response to agricultural nitrogen inputs. Beven, K., and A. Binley (1992), The future of distributed models: Model calibration and uncertainty prediction, Hydrological Processes, 6(3), 279-298. Kirchner, J. W., X. Feng, and C. Neal (2001), Catchment-scale advection and dispersion as a mechanism for fractal scaling in stream tracer concentrations, Journal of Hydrology, 254(1-4), 82-101. Ruiz, L., S. Abiven, C. Martin, P. Durand, V. Beaujouan, and J. Molenat (2002), Effect on nitrate concentration in stream water of agricultural practices in small catchments in Brittany: II. Temporal variations and mixing processes, Hydrology and Earth System Sciences, 6(3), 507-513.
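A minimal sketch of the GLUE procedure referenced above, assuming a Nash-Sutcliffe likelihood measure and a user-supplied prior sampler (both standard choices, but assumptions relative to this abstract):

```python
import numpy as np

def glue_behavioral(simulate, obs, sample_prior, n_samples=2000,
                    threshold=0.3, seed=0):
    """GLUE (Beven & Binley 1992), minimal sketch: Monte Carlo sample
    parameter sets from the prior, score each with the Nash-Sutcliffe
    efficiency, and keep the 'behavioral' sets above the threshold."""
    rng = np.random.default_rng(seed)
    denom = np.sum((obs - obs.mean()) ** 2)
    kept = []
    for _ in range(n_samples):
        theta = sample_prior(rng)
        sim = simulate(theta)
        ns = 1.0 - np.sum((sim - obs) ** 2) / denom
        if ns > threshold:
            kept.append((theta, ns))
    return kept
```

The spread of the retained behavioral parameter sets is then the uncertainty estimate that propagates into model outputs such as residence times.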
Synapse fits neuron: joint reduction by model inversion.
van der Scheer, H T; Doelman, A
2017-08-01
In this paper, we introduce a novel simplification method for dealing with physical systems that can be thought to consist of two subsystems connected in series, such as a neuron and a synapse. The aim of our method is to help find a simple, yet convincing model of the full cascade-connected system, assuming that a satisfactory model of one of the subsystems, e.g., the neuron, is already given. Our method allows us to validate a candidate model of the full cascade against data at a finer scale. In our main example, we apply our method to part of the squid's giant fiber system. We first postulate a simple, hypothetical model of cell-to-cell signaling based on the squid's escape response. Then, given a FitzHugh-type neuron model, we derive the verifiable model of the squid giant synapse that this hypothesis implies. We show that the derived synapse model accurately reproduces synaptic recordings, hence lending support to the postulated, simple model of cell-to-cell signaling, which thus, in turn, can be used as a basic building block for network models.
Spatial scaling of net primary productivity using subpixel landcover information
NASA Astrophysics Data System (ADS)
Chen, X. F.; Chen, Jing M.; Ju, Wei M.; Ren, L. L.
2008-10-01
Gridding the land surface into coarse homogeneous pixels may cause important biases in ecosystem model estimations of carbon budget components at local, regional and global scales. These biases result from overlooking subpixel variability of land surface characteristics. Vegetation heterogeneity is an important factor introducing biases in regional ecological modeling, especially when the modeling is made on large grids. This study suggests a simple algorithm that uses subpixel information on the spatial variability of land cover type to correct net primary productivity (NPP) estimates made at coarse spatial resolutions where the land surface is considered as homogeneous within each pixel. The algorithm operates in such a way that NPP estimates obtained from calculations made at coarse spatial resolutions are multiplied by simple functions that attempt to reproduce the effects of subpixel variability of land cover type on NPP. Its application to carbon-hydrology coupled model (BEPS-TerrainLab) estimates made at a 1-km resolution over a watershed (named Baohe River Basin) located in the southwestern part of Qinling Mountains, Shaanxi Province, China, improved estimates of average NPP as well as its spatial variability.
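One hedged way to express such a subpixel land-cover correction is to scale the coarse estimate by the ratio of the area-weighted cover-mix NPP to the NPP of the cover type assumed for the whole pixel (an illustrative form, not the paper's fitted functions):

```python
def corrected_npp(npp_coarse, cover_fractions, npp_rate, assumed_cover):
    """Scale a coarse-pixel NPP estimate by the ratio of the area-weighted
    NPP of the true subpixel cover mix to the NPP of the single cover type
    assumed for the whole pixel."""
    mix = sum(frac * npp_rate[c] for c, frac in cover_fractions.items())
    return npp_coarse * mix / npp_rate[assumed_cover]
```

For example, a pixel modeled as all forest but actually half forest, half lower-productivity cropland gets its NPP revised downward in proportion to the mix.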
NASA Astrophysics Data System (ADS)
Johari, A. H.; Muslim
2018-05-01
An experiential learning model using a simple physics kit was implemented to obtain a picture of the improvement in senior high school students' attitudes toward physics on the topic of fluids. This study aims to obtain a description of the increase in students' attitudes toward physics. The research method used was a quasi-experiment with a non-equivalent pretest-posttest control group design. Two tenth-grade classes were involved in this research: 28 students in the experimental class and 26 in the control class. The increase in attitude toward physics was measured using an attitude scale consisting of 18 questions. The experimental class showed an average of 86.5% (criterion: almost all students improved), against 53.75% in the control class (criterion: half of the students). This result shows that the experiential learning model using a simple physics kit can improve attitudes toward physics compared to experiential learning without the kit.
Physical models of collective cell motility: from cell to tissue
NASA Astrophysics Data System (ADS)
Camley, B. A.; Rappel, W.-J.
2017-03-01
In this article, we review physics-based models of collective cell motility. We discuss a range of techniques at different scales, ranging from models that represent cells as simple self-propelled particles to phase field models that can represent a cell’s shape and dynamics in great detail. We also extensively review the ways in which cells within a tissue choose their direction, the statistics of cell motion, and some simple examples of how cell-cell signaling can interact with collective cell motility. This review also covers in more detail selected recent works on collective cell motion of small numbers of cells on micropatterns, in wound healing, and the chemotaxis of clusters of cells.
Validation of the replica trick for simple models
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2018-04-01
We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
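The replica analytic continuation under study rests on the elementary identity for a random partition function $Z$:

```latex
\ln Z = \lim_{n \to 0} \frac{Z^n - 1}{n},
\qquad
\langle \ln Z \rangle = \lim_{n \to 0} \frac{1}{n} \ln \langle Z^n \rangle ,
```

where $\langle Z^n \rangle$ is evaluated for integer $n$ and then analytically continued to $n \to 0$; the paper's question is whether that continuation is valid for specific solvable models.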
Development of mathematical models of environmental physiology
NASA Technical Reports Server (NTRS)
Stolwijk, J. A. J.; Mitchell, J. W.; Nadel, E. R.
1971-01-01
Selected articles concerned with mathematical or simulation models of human thermoregulation are presented. The articles presented include: (1) development and use of simulation models in medicine, (2) model of cardio-vascular adjustments during exercise, (3) effective temperature scale based on simple model of human physiological regulatory response, (4) behavioral approach to thermoregulatory set point during exercise, and (5) importance of skin temperature in sweat regulation.
NASA Astrophysics Data System (ADS)
Solovjov, Vladimir P.; Andre, Frederic; Lemonnier, Denis; Webb, Brent W.
2018-02-01
The Scaled SLW model for prediction of radiation transfer in non-uniform gaseous media is presented. The paper considers a new approach for construction of a Scaled SLW model. In order to maintain the SLW method as a simple and computationally efficient engineering method special attention is paid to explicit non-iterative methods of calculation of the scaling coefficient. The moments of gas absorption cross-section weighted by the Planck blackbody emissive power (in particular, the first moment - Planck mean, and first inverse moment - Rosseland mean) are used as the total characteristics of the absorption spectrum to be preserved by scaling. Generalized SLW modelling using these moments including both discrete gray gases and the continuous formulation is presented. Application of line-by-line look-up table for corresponding ALBDF and inverse ALBDF distribution functions (such that no solution of implicit equations is needed) ensures that the method is flexible and efficient. Predictions for radiative transfer using the Scaled SLW model are compared to line-by-line benchmark solutions, and predictions using the Rank Correlated SLW model and SLW Reference Approach. Conclusions and recommendations regarding application of the Scaled SLW model are made.
Geospatial application of the Water Erosion Prediction Project (WEPP) model
USDA-ARS?s Scientific Manuscript database
At the hillslope profile and/or field scale, a simple Windows graphical user interface (GUI) is available to easily specify the slope, soil, and management inputs for application of the USDA Water Erosion Prediction Project (WEPP) model. Likewise, basic small watershed configurations of a few hillsl...
USDA-ARS?s Scientific Manuscript database
Thermal-infrared remote sensing of land surface temperature provides valuable information for quantifying root-zone water availability, evapotranspiration (ET) and crop condition. This paper describes a robust but relatively simple thermal-based energy balance model that parameterizes the key soil/s...
A relativistic signature in large-scale structure
NASA Astrophysics Data System (ADS)
Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David
2016-09-01
In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales-even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.
Landscape scale mapping of forest inventory data by nearest neighbor classification
Andrew Lister
2009-01-01
One of the goals of the Forest Service, U.S. Department of Agriculture's Forest Inventory and Analysis (FIA) program is large-area mapping. FIA scientists have tried many methods in the past, including geostatistical methods, linear modeling, nonlinear modeling, and simple choropleth and dot maps. Mapping methods that require individual model-based maps to be...
A study on assimilating potential vorticity data
NASA Astrophysics Data System (ADS)
Li, Yong; Ménard, Richard; Riishøjgaard, Lars Peter; Cohn, Stephen E.; Rood, Richard B.
1998-08-01
The correlation that exists between the potential vorticity (PV) field and the distribution of chemical tracers such as ozone suggests the possibility of using tracer observations as proxy PV data in atmospheric data assimilation systems. Especially in the stratosphere, there are plentiful tracer observations but a general lack of reliable wind observations, and the correlation is most pronounced. The issue investigated in this study is how model dynamics would respond to the assimilation of PV data. First, numerical experiments of identical-twin type were conducted with a simple univariate nudging algorithm and a global shallow water model based on PV and divergence (PV-D model). All model fields are successfully reconstructed through the insertion of complete PV data alone if an appropriate value for the nudging coefficient is used. A simple linear analysis suggests that slow modes are recovered rapidly, at a rate nearly independent of spatial scale. In a more realistic experiment, appropriately scaled total ozone data from the NIMBUS-7 TOMS instrument were assimilated as proxy PV data into the PV-D model over a 10-day period. The resulting model PV field matches the observed total ozone field relatively well on large spatial scales, and the PV, geopotential and divergence fields are dynamically consistent. These results indicate the potential usefulness that tracer observations, as proxy PV data, may offer in a data assimilation system.
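A minimal sketch of the univariate nudging scheme described, in which the model state is relaxed toward observations at a rate set by the nudging coefficient (the scalar form below is an illustration; the paper applies this to the full PV field):

```python
def nudge_assimilate(tendency, x0, observations, g=0.1, dt=1.0):
    """Newtonian relaxation ('nudging'): the model state is integrated with
    dx/dt = f(x) + g * (obs - x), where g is the nudging coefficient.
    Larger g pulls the state toward observations faster."""
    x = x0
    for obs in observations:
        x = x + dt * (tendency(x) + g * (obs - x))
    return x
```

The trade-off the paper tunes is visible even here: too small a `g` recovers the fields slowly, while too large a value (relative to `dt`) overwhelms the model dynamics.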
Natural electroweak breaking from a mirror symmetry.
Chacko, Z; Goh, Hock-Seng; Harnik, Roni
2006-06-16
We present "twin Higgs models," simple realizations of the Higgs boson as a pseudo Goldstone boson that protect the weak scale from radiative corrections up to scales of order 5-10 TeV. In the ultraviolet these theories have a discrete symmetry which interchanges each standard model particle with a corresponding particle which transforms under a twin or a mirror standard model gauge group. In addition, the Higgs sector respects an approximate global symmetry. When this global symmetry is broken, the discrete symmetry tightly constrains the form of corrections to the pseudo Goldstone Higgs potential, allowing natural electroweak symmetry breaking. Precision electroweak constraints are satisfied by construction. These models demonstrate that, contrary to the conventional wisdom, stabilizing the weak scale does not require new light particles charged under the standard model gauge groups.
Homogenization of Large-Scale Movement Models in Ecology
Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.
2011-01-01
A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion,and the method will be equally straightforward for more complex models. ?? 2010 Society for Mathematical Biology.
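For ecological diffusion, the homogenized motility comes out as a harmonic average of the small-scale motility, which a few lines make concrete (the interface is mine; the weights stand in for the fraction of area in each habitat type):

```python
import numpy as np

def homogenized_motility(mu, weights=None):
    """Large-scale (homogenized) motility for ecological diffusion: the
    harmonic mean of the small-scale motility over the habitat mosaic,
    so slow (high-residence-time) habitat patches dominate the result."""
    mu = np.asarray(mu, dtype=float)
    if weights is None:
        weights = np.full(mu.shape, 1.0 / mu.size)
    return 1.0 / np.sum(np.asarray(weights) / mu)
```

This is why 10-100 m habitat variability matters at 10-100 km scales: one slow patch type pulls the effective rate down far more than an arithmetic average would suggest.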
Upscale Impact of Mesoscale Disturbances of Tropical Convection on Convectively Coupled Kelvin Waves
NASA Astrophysics Data System (ADS)
Yang, Q.; Majda, A.
2017-12-01
Tropical convection associated with convectively coupled Kelvin waves (CCKWs) is typically organized by an eastward-moving synoptic-scale convective envelope with numerous embedded westward-moving mesoscale disturbances. It is of central importance to assess upscale impact of mesoscale disturbances on CCKWs as mesoscale disturbances propagate at various tilt angles and speeds. Here a simple multi-scale model is used to capture this multi-scale structure, where mesoscale fluctuations are directly driven by mesoscale heating and synoptic-scale circulation is forced by mean heating and eddy transfer of momentum and temperature. The two-dimensional version of the multi-scale model drives the synoptic-scale circulation, successfully reproduces key features of flow fields with a front-to-rear tilt and compares well with results from a cloud resolving model. In the scenario with an elevated upright mean heating, the tilted vertical structure of synoptic-scale circulation is still induced by the upscale impact of mesoscale disturbances. In a faster propagation scenario, the upscale impact becomes less important, while the synoptic-scale circulation response to mean heating dominates. In the unrealistic scenario with upward/westward tilted mesoscale heating, positive potential temperature anomalies are induced in the leading edge, which will suppress shallow convection in a moist environment. In its three-dimensional version, results show that upscale impact of mesoscale disturbances that propagate at tilt angles (110°-250°) induces negative lower-tropospheric potential temperature anomalies in the leading edge, providing favorable conditions for shallow convection in a moist environment, while the remaining tilt angle cases have opposite effects. Even in the presence of upright mean heating, the front-to-rear tilted synoptic-scale circulation can still be induced by eddy terms at tilt angles (120°-240°).
In the case with fast propagating mesoscale heating, positive potential temperature anomalies are induced in the lower troposphere, suppressing convection in a moist environment. This simple model also reproduces convective momentum transport and CCKWs in agreement with results from a recent cloud resolving simulation.
Perspective: Sloppiness and emergent theories in physics, biology, and beyond.
Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P
2015-07-07
Large scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to the same mechanism: they likewise emerge from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
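The sloppy-spectrum idea described above is easy to reproduce numerically. Below is a minimal sketch (not from the paper; the two-exponential model and all parameter values are illustrative assumptions) that builds the least-squares Fisher information matrix J^T J for a toy model and shows its eigenvalues spreading over orders of magnitude:

```python
import numpy as np

def model(theta, t):
    # Sum of two decaying exponentials -- a classic "sloppy" toy model.
    return np.exp(-theta[0] * t) + np.exp(-theta[1] * t)

def fisher_information(theta, t, eps=1e-6):
    # FIM for least-squares fitting: J^T J, where J is the sensitivity
    # (Jacobian) of the model predictions w.r.t. the parameters,
    # estimated here by central finite differences.
    J = np.empty((len(t), len(theta)))
    for i in range(len(theta)):
        dp = np.zeros_like(theta)
        dp[i] = eps
        J[:, i] = (model(theta + dp, t) - model(theta - dp, t)) / (2 * eps)
    return J.T @ J

t = np.linspace(0.0, 5.0, 50)
theta = np.array([1.0, 1.2])  # nearly degenerate decay rates (assumed)
eigvals = np.linalg.eigvalsh(fisher_information(theta, t))
print(eigvals)  # eigenvalues spanning a wide hierarchy of scales
```

The smallest eigenvalue corresponds to the sloppy direction (the nearly interchangeable decay rates); the largest to the stiff combination that data on this model would actually constrain.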
Laplacian scale-space behavior of planar curve corners.
Zhang, Xiaohong; Qu, Ying; Yang, Dan; Wang, Hongxing; Kymer, Jeff
2015-11-01
Scale-space behavior of corners is important for developing an efficient corner detection algorithm. In this paper, we analyze the scale-space behavior with the Laplacian of Gaussian (LoG) operator on a planar curve, which constructs the Laplacian Scale Space (LSS). The analytical expression of a Laplacian Scale-Space map (LSS map) is obtained, demonstrating the Laplacian Scale-Space behavior of the planar curve corners, based on a newly defined unified corner model. With this formula, several properties of the Laplacian Scale-Space behavior are summarized. Although LSS demonstrates some similarities to Curvature Scale Space (CSS), there are still some differences. First, no new extreme points are generated in the LSS. Second, the behavior of different cases of the corner model is consistent and simple. This makes it easy to trace a corner through scale space. Finally, the behavior of LSS is verified in an experiment on a digital curve.
NASA Astrophysics Data System (ADS)
Nikurashin, Maxim; Gunn, Andrew
2017-04-01
The meridional overturning circulation (MOC) is a planetary-scale oceanic flow which is of direct importance to the climate system: it transports heat meridionally and regulates the exchange of CO2 with the atmosphere. The MOC is forced by wind, heat and freshwater fluxes at the surface, and turbulent mixing in the ocean interior. A number of conceptual theories for the sensitivity of the MOC to changes in forcing have recently been developed and tested with idealized numerical models. However, the skill of the simple conceptual theories in describing the MOC simulated with higher-complexity global models remains largely unknown. In this study, we present a systematic comparison of theoretical and modelled sensitivity of the MOC and associated deep ocean stratification to vertical mixing and southern hemisphere westerlies. The results show that theories that simplify the ocean into a single-basin, zonally symmetric box are generally in good agreement with a realistic, global ocean circulation model. Some disagreement occurs in the abyssal ocean, where complex bottom topography is not taken into account by simple theories. Distinct regimes, where the MOC has a different sensitivity to wind or mixing, as predicted by simple theories, are also clearly shown by the global ocean model. The sensitivity of the Indo-Pacific, Atlantic, and global basins is analysed separately to validate the conceptual understanding of the upper and lower overturning cells in the theory.
Stable clustering and the resolution of dissipationless cosmological N-body simulations
NASA Astrophysics Data System (ADS)
Benhaiem, David; Joyce, Michael; Sylos Labini, Francesco
2017-10-01
The determination of the resolution of cosmological N-body simulations, i.e. the range of scales in which quantities measured in them accurately represent the continuum limit, is an important open question. We address it here using scale-free models, for which self-similarity provides a powerful tool to control resolution. Such models also provide a robust testing ground for the so-called stable clustering approximation, which gives simple predictions for them. Studying large N-body simulations of such models with different force smoothing, we find that these two issues are in fact very closely related: our conclusion is that the accuracy of two-point statistics in the non-linear regime starts to degrade strongly around the scale at which their behaviour deviates from that predicted by the stable clustering hypothesis. Physically the association of the two scales is in fact simple to understand: stable clustering fails to be a good approximation when there are strong interactions of structures (in particular merging), and it is precisely such non-linear processes which are sensitive to fluctuations at the smaller scales affected by discretization. Resolution may be further degraded if the short-distance gravitational smoothing scale is larger than the scale to which stable clustering can propagate. We examine in detail the very different conclusions of studies by Smith et al. and Widrow et al. and find that the strong deviations from stable clustering reported by these works are the result of over-optimistic assumptions about the scales resolved accurately by the measured power spectra, and of the reliance on Fourier space analysis. We emphasize the much poorer resolution obtained with the power spectrum compared to the two-point correlation function.
Predicting outcome in severe traumatic brain injury using a simple prognostic model.
Sobuwa, Simpiwe; Hartzenberg, Henry Benjamin; Geduld, Heike; Uys, Corrie
2014-06-17
Several studies have made it possible to predict outcome in severe traumatic brain injury (TBI), making such prediction a beneficial aid for clinical decision-making in the emergency setting. However, reliable predictive models are lacking for resource-limited prehospital settings such as those in developing countries like South Africa. To develop a simple predictive model for severe TBI using clinical variables in a South African prehospital setting. All consecutive patients admitted at two level-one centres in Cape Town, South Africa, for severe TBI were included. A binary logistic regression model was used, which included three predictor variables: oxygen saturation (SpO₂), Glasgow Coma Scale (GCS) and pupil reactivity. The Glasgow Outcome Scale was used to assess outcome on hospital discharge. A total of 74.4% of the outcomes were correctly predicted by the logistic regression model. The model demonstrated SpO₂ (p=0.019), GCS (p=0.001) and pupil reactivity (p=0.002) as independently significant predictors of outcome in severe TBI. Odds ratios of a good outcome were 3.148 (SpO₂ ≥ 90%), 5.108 (GCS 6-8) and 4.405 (pupils bilaterally reactive). This model is potentially useful for effective predictions of outcome in severe TBI.
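A three-predictor binary logistic model of this kind can be sketched as follows. The data below are synthetic, generated so that the assumed log-odds roughly echo the reported odds ratios; nothing here is the study's data, and the Newton-Raphson fit is a generic textbook procedure, not the authors' software:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical binary predictors mirroring the study's three variables:
spo2 = rng.integers(0, 2, n)      # 1 if SpO2 >= 90%
gcs = rng.integers(0, 2, n)       # 1 if GCS 6-8 (vs 3-5)
pupils = rng.integers(0, 2, n)    # 1 if pupils bilaterally reactive
# Simulate outcomes with assumed log-odds loosely echoing the reported ORs.
logit = -2.0 + np.log(3.1) * spo2 + np.log(5.1) * gcs + np.log(4.4) * pupils
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Fit logistic regression by Newton-Raphson (IRLS).
X = np.column_stack([np.ones(n), spo2, gcs, pupils])
beta = np.zeros(4)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))

print(np.exp(beta[1:]))  # fitted odds ratios for the three predictors
```

Exponentiating the fitted coefficients recovers odds ratios, which is how the reported values (3.148, 5.108, 4.405) would be obtained from such a model.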
Milky Way Mass Models and MOND
NASA Astrophysics Data System (ADS)
McGaugh, Stacy S.
2008-08-01
Using the Tuorla-Heidelberg model for the mass distribution of the Milky Way, I determine the rotation curve predicted by MOND (modified Newtonian dynamics). The result is in good agreement with the observed terminal velocities interior to the solar radius and with estimates of the Galaxy's rotation curve exterior thereto. There are no fit parameters: given the mass distribution, MOND provides a good match to the rotation curve. The Tuorla-Heidelberg model does allow for a variety of exponential scale lengths; MOND prefers short scale lengths in the range 2.0 kpc ≲ R_d ≲ 2.5 kpc. The favored value of R_d depends somewhat on the choice of interpolation function. There is some preference for the "simple" interpolation function as found by Famaey & Binney. I introduce an interpolation function that shares the advantages of the simple function on galaxy scales while having a much smaller impact in the solar system. I also solve the inverse problem, inferring the surface mass density distribution of the Milky Way from the terminal velocities. The result is a Galaxy with "bumps and wiggles" in both its luminosity profile and rotation curve that are reminiscent of those frequently observed in external galaxies.
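The MOND prediction step itself is simple enough to sketch. Assuming the "simple" interpolation function μ(x) = x/(1 + x) of Famaey & Binney, the relation g μ(g/a0) = g_N inverts in closed form; the toy Newtonian curve below is an assumption for illustration only, not the Tuorla-Heidelberg mass model:

```python
import numpy as np

A0 = 1.2e-10  # MOND acceleration scale a0, m/s^2

def mond_acceleration(gN):
    # Solve g * mu(g/a0) = gN with the "simple" interpolation function
    # mu(x) = x / (1 + x), which gives the closed form
    #   g = gN/2 * (1 + sqrt(1 + 4*a0/gN)).
    return 0.5 * gN * (1.0 + np.sqrt(1.0 + 4.0 * A0 / gN))

# Toy Newtonian rotation curve (assumed, purely illustrative):
r = np.linspace(1e19, 1e21, 200)       # radii in m (~0.3-30 kpc)
vN = 180e3 * np.sqrt(r / r[-1])        # Newtonian circular speed, m/s
g = mond_acceleration(vN**2 / r)       # MOND acceleration from Newtonian
v = np.sqrt(g * r)                     # MOND circular speed
print(v[-1] / 1e3)  # outer MOND speed (km/s) exceeds the Newtonian one
```

In the strong-field limit (g_N ≫ a0) the formula reduces to Newtonian gravity, while in the deep-MOND limit it approaches g = √(g_N a0), which is what boosts the outer rotation curve.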
Integral Design Methodology of Photocatalytic Reactors for Air Pollution Remediation.
Passalía, Claudio; Alfano, Orlando M; Brandi, Rodolfo J
2017-06-07
An integral reactor design methodology was developed to address the optimal design of photocatalytic wall reactors to be used in air pollution control. For a target pollutant to be eliminated from an air stream, the proposed methodology starts from a mechanistically derived reaction rate. The intrinsic kinetic parameters are determined using a simple-geometry laboratory-scale reactor operated under kinetic control with a uniform incident radiation flux, which allows the local superficial rate of photon absorption to be computed. Thus, a simple model can describe the mass balance and a solution may be obtained. The kinetic parameters may be estimated by combining the mathematical model and the experimental results. The validated intrinsic kinetics obtained may be directly used in the scaling-up of any reactor configuration and size. The bench scale reactor may require the use of complex computational software to obtain the fields of velocity, radiation absorption and species concentration. The complete methodology was successfully applied to the elimination of airborne formaldehyde. The kinetic parameters were determined in a flat plate reactor, whilst a bench scale corrugated wall reactor was used to illustrate the scaling-up methodology. In addition, an optimal folding angle of the corrugated reactor was found using computational fluid dynamics tools.
Understanding quantum tunneling using diffusion Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.
2018-03-01
In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape, the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1/Δ², where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1/Δ, i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.
ERIC Educational Resources Information Center
Marsh, Herbert W.; Scalas, L. Francesca; Nagengast, Benjamin
2010-01-01
Self-esteem, typically measured by the Rosenberg Self-Esteem Scale (RSE), is one of the most widely studied constructs in psychology. Nevertheless, there is broad agreement that a simple unidimensional factor model, consistent with the original design and typical application in applied research, does not provide an adequate explanation of RSE…
ERIC Educational Resources Information Center
Iniguez, J.; Raposo, V.
2009-01-01
In this paper we analyse the behaviour of a small-scale model of a magnetic levitation system based on the Inductrack concept. Drag and lift forces acting on our prototype, moving above a continuous copper track, are studied analytically following a simple low-speed approach. The experimental results are in good agreement with the theoretical…
Self-folding and aggregation of amyloid nanofibrils
NASA Astrophysics Data System (ADS)
Paparcone, Raffaella; Cranford, Steven W.; Buehler, Markus J.
2011-04-01
Amyloids are highly organized protein filaments, rich in β-sheet secondary structures, that self-assemble to form dense plaques in brain tissues affected by severe neurodegenerative disorders (e.g. Alzheimer's Disease). Identified as natural functional materials in bacteria, in addition to their remarkable mechanical properties, amyloids have also been proposed as a platform for novel biomaterials in nanotechnology applications including nanowires, liquid crystals, scaffolds and thin films. Despite recent progress in understanding amyloid structure and behavior, the latent self-assembly mechanism and the underlying adhesion forces that drive the aggregation process remain poorly understood. On the basis of previous full atomistic simulations, here we report a simple coarse-grain model to analyze the competition between adhesive forces and elastic deformation of amyloid fibrils. We use this simple model system to investigate self-assembly mechanisms of fibrils, focusing on the formation of self-folded nanorackets and nanorings, and thereby address a critical issue in linking the biochemical (Angstrom) to micrometre scales relevant for larger-scale states of functional amyloid materials. We investigate the effect of varying the interfibril adhesion energy on the structure and stability of self-folded nanorackets and nanorings and demonstrate that these aggregated amyloid fibrils are stable in such states even when the fibril-fibril interaction is relatively weak, given that the constituting amyloid fibril length exceeds a critical fibril length-scale of several hundred nanometres. We further present a simple approach to directly determine the interfibril adhesion strength from geometric measures.
In addition to providing insight into the physics of aggregation of amyloid fibrils, our model enables the analysis of large-scale amyloid plaques and presents a new method for the estimation and engineering of the adhesive forces responsible for the self-assembly process of amyloid nanostructures, filling a gap that previously existed between full atomistic simulations of primarily ultra-short fibrils and much larger micrometre-scale amyloid aggregates. Via direct simulation of large-scale amyloid aggregates consisting of hundreds of fibrils, we demonstrate that the fibril length has a profound impact on their structure and mechanical properties, where the critical fibril length-scale derived from our analysis of self-folded nanorackets and nanorings defines the structure of amyloid aggregates. A multi-scale modeling approach as used here, bridging the scales from Angstroms to micrometres, opens a wide range of possible nanotechnology applications by presenting a holistic framework that balances the mechanical properties of individual fibrils, hierarchical self-assembly, and the adhesive forces determining their stability to facilitate the design of de novo amyloid materials.
Characteristic time scales for diffusion processes through layers and across interfaces
NASA Astrophysics Data System (ADS)
Carr, Elliot J.
2018-04-01
This paper presents a simple tool for characterizing the time scale for continuum diffusion processes through layered heterogeneous media. This mathematical problem is motivated by several practical applications such as heat transport in composite materials, flow in layered aquifers, and drug diffusion through the layers of the skin. In such processes, the physical properties of the medium vary across layers and internal boundary conditions apply at the interfaces between adjacent layers. To characterize the time scale, we use the concept of mean action time, which provides the mean time scale at each position in the medium by utilizing the fact that the transition of the transient solution of the underlying partial differential equation model, from initial state to steady state, can be represented as a cumulative distribution function of time. Using this concept, we define the characteristic time scale for a multilayer diffusion process as the maximum value of the mean action time across the layered medium. For given initial conditions and internal and external boundary conditions, this approach leads to simple algebraic expressions for characterizing the time scale that depend on the physical and geometrical properties of the medium, such as the diffusivities and lengths of the layers. Numerical examples demonstrate that these expressions provide useful insight into explaining how the parameters in the model affect the time it takes for a multilayer diffusion process to reach steady state.
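A minimal numerical sketch of the mean-action-time idea (a single homogeneous layer with absorbing boundaries, not one of the paper's multilayer examples; all values are illustrative): integrating the diffusion equation in time shows that T(x) = ∫₀^∞ u(x,t)/u₀ dt satisfies D T''(x) = -1 with T = 0 at the boundaries, so the characteristic time scale follows from a simple boundary value problem rather than a full transient solve:

```python
import numpy as np

def mean_action_time(D, L, n=201):
    # Single homogeneous layer, absorbing boundaries, uniform initial
    # condition decaying to a zero steady state.  The mean action time
    # T(x) solves  D T''(x) = -1,  T(0) = T(L) = 0,  obtained by
    # integrating the diffusion equation u_t = D u_xx over all time.
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]
    # Tridiagonal finite-difference discretization of D T'' = -1.
    A = (np.diag(-2.0 * np.ones(n - 2))
         + np.diag(np.ones(n - 3), 1)
         + np.diag(np.ones(n - 3), -1)) * D / h**2
    T = np.zeros(n)
    T[1:-1] = np.linalg.solve(A, -np.ones(n - 2))
    return x, T

x, T = mean_action_time(D=0.1, L=1.0)
# Analytically T(x) = x(L - x)/(2D); the characteristic time scale is
# the maximum, L^2/(8D) = 1.25 for these values.
print(T.max())
```

The maximum of T(x) matches the algebraic expression L²/(8D), illustrating how the simple closed-form time scales described in the abstract arise.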
Yurk, Brian P
2018-07-01
Animal movement behaviors vary spatially in response to environmental heterogeneity. An important problem in spatial ecology is to determine how large-scale population growth and dispersal patterns emerge within highly variable landscapes. We apply the method of homogenization to study the large-scale behavior of a reaction-diffusion-advection model of population growth and dispersal. Our model includes small-scale variation in the directed and random components of movement and growth rates, as well as large-scale drift. Using the homogenized model we derive simple approximate formulas for persistence conditions and asymptotic invasion speeds, which are interpreted in terms of residence index. The homogenization results show good agreement with numerical solutions for environments with a high degree of fragmentation, both with and without periodicity at the fast scale. The simplicity of the formulas, and their connection to residence index make them appealing for studying the large-scale effects of a variety of small-scale movement behaviors.
NASA Astrophysics Data System (ADS)
Stockli, R.; Vidale, P. L.
2003-04-01
The importance of correctly including land surface processes in climate models has been increasingly recognized in the past years. Even on seasonal to interannual time scales, land surface-atmosphere feedbacks can play a substantial role in determining the state of the near-surface climate. The availability of soil moisture for both runoff and evapotranspiration is dependent on biophysical processes occurring in plants and in the soil, acting on a wide range of time scales from minutes to years. Fluxnet site measurements in various climatic zones are used to drive three generations of LSMs (land surface models) in order to assess the level of complexity needed to represent vegetation processes at the local scale. The three models were the Bucket model (Manabe 1969), BATS 1E (Dickinson 1984) and SiB 2 (Sellers et al. 1996). Evapotranspiration and runoff processes simulated by these models range from simple one-layer soils and no-vegetation parameterizations to complex multilayer soils, including realistic photosynthesis-stomatal conductance models. The latter is driven by satellite remote sensing land surface parameters inheriting the spatiotemporal evolution of vegetation phenology. In addition, a simulation with SiB 2 including not only vertical water fluxes but also lateral soil moisture transfers by downslope flow is conducted for a pre-alpine catchment in Switzerland. Preliminary results are presented and show that, depending on the climatic environment and on the season, a realistic representation of evapotranspiration processes, including the seasonally and interannually varying state of vegetation, significantly improves the representation of observed latent and sensible heat fluxes on the local scale. Moreover, the interannual evolution of soil moisture availability and runoff is strongly dependent on the chosen model complexity.
Biophysical land surface parameters from satellite data make it possible to represent the seasonal changes in vegetation activity, which have a great impact on the yearly budget of transpiration fluxes. For some sites, however, the hydrological cycle is simulated reasonably well even with simple land surface representations.
Dynamics of liquid spreading on solid surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalliadasis, S.; Chang, H.C.
1996-09-01
Using simple scaling arguments and a precursor film model, the authors show that the appropriate macroscopic contact angle θ during the slow spreading of a completely or partially wetting liquid under conditions of viscous flow and small slopes should be described by tan θ = [tan³ θ_e − 9 log(η) Ca]^(1/3), where θ_e is the static contact angle, Ca is the capillary number, and η is a scaled Hamaker constant. Using this simple relation as a boundary condition, the authors are able to quantitatively model, without any empirical parameter, the spreading dynamics of several classical spreading phenomena (capillary rise, sessile, and pendant drop spreading) by simply equating the slope of the leading-order static bulk region to the dynamic contact angle boundary condition, without performing a matched asymptotic analysis for each case independently as is usually done in the literature.
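Reading the quoted relation as tan θ = [tan³ θ_e − (9 log η) Ca]^(1/3), it is straightforward to evaluate; the sketch below uses illustrative parameter values (the static angle and Hamaker constant are assumptions, not the authors' values) and shows the dynamic angle rising above the static one as the capillary number grows:

```python
import numpy as np

def dynamic_contact_angle(theta_e, Ca, eta):
    # Sketch of the quoted Cox-Voinov-type relation (angles in radians):
    #   tan(theta) = [tan^3(theta_e) - 9 * log(eta) * Ca]^(1/3)
    # eta is the scaled Hamaker constant; since eta < 1, -log(eta) > 0
    # and the dynamic angle exceeds the static one for an advancing front.
    rhs = np.tan(theta_e)**3 - 9.0 * np.log(eta) * Ca
    return np.arctan(np.cbrt(rhs))

theta_e = np.deg2rad(20.0)   # static contact angle (assumed)
eta = 1e-4                   # scaled Hamaker constant (assumed)
for Ca in (1e-4, 1e-3, 1e-2):
    print(Ca, np.rad2deg(dynamic_contact_angle(theta_e, Ca, eta)))
```

As Ca → 0 the correction vanishes and θ → θ_e, consistent with a quasi-static contact line.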
NASA Astrophysics Data System (ADS)
Co, Raymond T.; Harigaya, Keisuke; Nomura, Yasunori
2017-03-01
We present a simple and natural dark sector model in which dark matter particles arise as composite states of hidden strong dynamics and their stability is ensured by accidental symmetries. The model has only a few free parameters. In particular, the gauge symmetry of the model forbids the masses of dark quarks, and the confinement scale of the dynamics provides the unique mass scale of the model. The gauge group contains an Abelian symmetry U(1)_D, which couples the dark and standard model sectors through kinetic mixing. This model, despite its simple structure, has rich and distinctive phenomenology. In the case where the dark pion becomes massive due to U(1)_D quantum corrections, direct and indirect detection experiments can probe thermal relic dark matter which is generically a mixture of the dark pion and the dark baryon, and the Large Hadron Collider can discover the U(1)_D gauge boson. Alternatively, if the dark pion stays light due to a specific U(1)_D charge assignment of the dark quarks, then the dark pion constitutes dark radiation. The signal of this radiation is highly correlated with that of dark baryons in dark matter direct detection.
Visualization of atomic-scale phenomena in superconductors: application to FeSe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choubey, Peayush; Berlijn, Tom; Kreisel, Andreas
Here we propose a simple method of calculating inhomogeneous, atomic-scale phenomena in superconductors which makes use of the wave function information traditionally discarded in the construction of tight-binding models used in the Bogoliubov-de Gennes equations. The method uses symmetry-based first principles Wannier functions to visualize the effects of superconducting pairing on the distribution of electronic states over atoms within a crystal unit cell. Local symmetries lower than the global lattice symmetry can thus be exhibited as well, rendering theoretical comparisons with scanning tunneling spectroscopy data much more useful. As a simple example, we discuss the geometric dimer states observed near defects in superconducting FeSe.
NASA Astrophysics Data System (ADS)
Paiewonsky, Pablo; Elison Timm, Oliver
2018-03-01
In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates not only important ecological variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or else used as boundary data input for these schemes. The model formulations and their derivations are presented here, in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme that is driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulation of evapotranspiration in mid- to high northern latitudes during their winter to spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of large-scale vegetation and land surface characteristics under non-present-day conditions.
Disability: a model and measurement technique.
Williams, R G; Johnston, M; Willis, L A; Bennett, A E
1976-01-01
Current methods of ranking or scoring disability tend to be arbitrary. A new method is put forward based on the hypothesis that disability progresses in regular, cumulative patterns. A model of disability is defined and tested with the use of Guttman scale analysis. Its validity is demonstrated using data from a survey in the community and from postsurgical patients, and some factors involved in scale variation are identified. The model provides a simple measurement technique and has implications for the assessment of individual disadvantage, for the prediction of progress in recovery or deterioration, and for evaluation of the outcome of treatment regimes. PMID:953379
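The Guttman-scale machinery underlying such a model can be sketched briefly (the response matrix below is invented, and real analyses differ in how errors are counted; this uses the minimum-error convention): under a perfect cumulative scale, each person's responses are a run of 1s followed by 0s, and the coefficient of reproducibility measures deviations from that ideal:

```python
import numpy as np

# Hypothetical binary responses (rows: patients; columns: disability items
# ordered from least to most severe); 1 = activity can be performed.
data = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 0, 0, 0],
    [1, 1, 0, 1],   # one response pattern out of cumulative order
])

def coefficient_of_reproducibility(X):
    # Under a perfect Guttman scale, a person with score k endorses
    # exactly the k easiest items; "errors" are cells deviating from
    # that ideal pattern (minimum-error counting).
    total = X.size
    errors = 0
    for row in X:
        k = int(row.sum())            # scale score
        ideal = np.zeros_like(row)
        ideal[:k] = 1                 # ideal cumulative pattern
        errors += int((row != ideal).sum())
    return 1.0 - errors / total

cr = coefficient_of_reproducibility(data)
print(cr)  # values >= 0.9 are conventionally taken to support scalability
```

Here only the last patient breaks the cumulative pattern, so the coefficient stays high; in the disability model, such regular patterns are what justify a single cumulative score.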
Biology meets physics: Reductionism and multi-scale modeling of morphogenesis.
Green, Sara; Batterman, Robert
2017-02-01
A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors that forces researchers to combine different models relying on different scale-specific mathematical strategies and boundary conditions. Analyzing the ways in which different models are combined in multi-scale modeling also has implications for the relation between physics and biology. Contrary to the assumption that physical science approaches provide reductive explanations in biology, we exemplify how inputs from physics often reveal the importance of macro-scale models and explanations. We illustrate this through an examination of the role of biomechanical modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent. Copyright © 2016 Elsevier Ltd. All rights reserved.
Osterberg, T; Norinder, U
2001-01-01
A method of modelling and predicting biopharmaceutical properties using simple theoretically computed molecular descriptors and multivariate statistics has been investigated for several data sets related to solubility, IAM chromatography, permeability across Caco-2 cell monolayers, human intestinal perfusion, brain-blood partitioning, and P-glycoprotein ATPase activity. The molecular descriptors (e.g. molar refractivity, molar volume, index of refraction, surface tension and density) and logP were computed with ACD/ChemSketch and ACD/logP, respectively. Good statistical models were derived that permit simple computational prediction of biopharmaceutical properties. All final models derived had R(2) values ranging from 0.73 to 0.95 and Q(2) values ranging from 0.69 to 0.86. The RMSEP values for the external test sets ranged from 0.24 to 0.85 (log scale).
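The R² and Q² figures quoted above are standard model-quality metrics. As a minimal sketch (not the ACD/ChemSketch workflow of the study, and using invented descriptor data), the following computes R² and a leave-one-out Q² for a one-descriptor least-squares model:

```python
# R^2 and leave-one-out Q^2 for a simple linear model; the descriptor
# values and responses below are invented for illustration only.

def ols_fit(x, y):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def r_squared(x, y):
    """R^2: fraction of variance explained by the fitted line."""
    a, b = ols_fit(x, y)
    my = sum(y) / len(y)
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

def q_squared_loo(x, y):
    """Q^2: leave-one-out cross-validated analogue of R^2."""
    my = sum(y) / len(y)
    press = 0.0
    for i in range(len(x)):
        xt = x[:i] + x[i + 1:]
        yt = y[:i] + y[i + 1:]
        a, b = ols_fit(xt, yt)
        press += (y[i] - (a * x[i] + b)) ** 2
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - press / ss_tot

# Hypothetical descriptor (e.g. logP) vs. a biopharmaceutical response:
x = [0.5, 1.1, 1.8, 2.4, 3.0, 3.7, 4.2, 4.9]
y = [0.9, 1.4, 2.1, 2.2, 3.1, 3.6, 4.0, 4.6]
print(round(r_squared(x, y), 3), round(q_squared_loo(x, y), 3))
```

Because each left-out point must be predicted by a model that never saw it, Q² is at most R², which is why the abstract reports both.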
A simple Lagrangian forecast system with aviation forecast potential
NASA Technical Reports Server (NTRS)
Petersen, R. A.; Homan, J. H.
1983-01-01
A trajectory forecast procedure is developed which uses geopotential tendency fields obtained from a simple, multiple layer, potential vorticity conservative isentropic model. This model can objectively account for short-term advective changes in the mass field when combined with fine-scale initial analyses. This procedure for producing short-term, upper-tropospheric trajectory forecasts employs a combination of a detailed objective analysis technique, an efficient mass advection model, and a diagnostically proven trajectory algorithm, none of which require extensive computer resources. Results of initial tests are presented, which indicate an exceptionally good agreement for trajectory paths entering the jet stream and passing through an intensifying trough. It is concluded that this technique not only has potential for aiding in route determination, fuel use estimation, and clear air turbulence detection, but also provides an example of the types of short range forecasting procedures which can be applied at local forecast centers using simple algorithms and a minimum of computer resources.
A two-scale roughness model for the gloss of coated paper
NASA Astrophysics Data System (ADS)
Elton, N. J.
2008-08-01
A model for gloss is developed for surfaces with two-scale random roughness where one scale lies in the wavelength region (microroughness) and the other in the geometrical optics limit (macroroughness). A number of important industrial materials such as coated and printed paper and some paints exhibit such two-scale rough surfaces. Scalar Kirchhoff theory is used to describe scattering in the wavelength region and a facet model used for roughness features much greater than the wavelength. Simple analytical expressions are presented for the gloss of surfaces with Gaussian, modified and intermediate Lorentzian distributions of surface slopes, valid for gloss at high angle of incidence. In the model, gloss depends only on refractive index, rms microroughness amplitude and the FWHM of the surface slope distribution, all of which may be obtained experimentally. Model predictions are compared with experimental results for a range of coated papers and gloss standards, and found to be in fair agreement within model limitations.
Fuzzy logic-based flight control system design
NASA Astrophysics Data System (ADS)
Nho, Kyungmoon
The application of fuzzy logic to aircraft motion control is studied in this dissertation. The self-tuning fuzzy techniques are developed by changing input scaling factors to obtain a robust fuzzy controller over a wide range of operating conditions and nonlinearities for a nonlinear aircraft model. It is demonstrated that the properly adjusted input scaling factors can meet the required performance and robustness in a fuzzy controller. For a simple demonstration of the easy design and control capability of a fuzzy controller, a proportional-derivative (PD) fuzzy control system is compared to the conventional controller for a simple dynamical system. This thesis also describes the design principles and stability analysis of fuzzy control systems by considering the key features of a fuzzy control system including the fuzzification, rule-base and defuzzification. The wing-rock motion of slender delta wings, a linear aircraft model and the six degree of freedom nonlinear aircraft dynamics are considered to illustrate several self-tuning methods employing change in input scaling factors. Finally, this dissertation is concluded with numerical simulation of glide-slope capture in windshear demonstrating the robustness of the fuzzy logic based flight control system.
Sozanski, Krzysztof; Wisniewska, Agnieszka; Kalwarczyk, Tomasz; Sznajder, Anna; Holyst, Robert
2016-01-01
We investigate transport properties of model polyelectrolyte systems at physiological ionic strength (0.154 M). Covering a broad range of flow length scales—from diffusion of molecular probes to macroscopic viscous flow—we establish a single, continuous function describing the scale dependent viscosity of high-salt polyelectrolyte solutions. The data are consistent with the model developed previously for electrically neutral polymers in a good solvent. The presented approach merges the power-law scaling concepts of de Gennes with the idea of exponential length scale dependence of effective viscosity in complex liquids. The result is a simple and applicable description of transport properties of high-salt polyelectrolyte solutions at all length scales, valid for motion of single molecules as well as macroscopic flow of the complex liquid. PMID:27536866
NASA Technical Reports Server (NTRS)
Randall, David A.
1990-01-01
A bulk planetary boundary layer (PBL) model was developed with a simple internal vertical structure and a simple second-order closure, designed for use as a PBL parameterization in a large-scale model. The model allows the mean fields to vary with height within the PBL, and so must address the vertical profiles of the turbulent fluxes, going beyond the usual mixed-layer assumption that the fluxes of conservative variables are linear with height. This is accomplished using the same convective mass flux approach that has also been used in cumulus parameterizations. The purpose is to show that such a mass flux model can include, in a single framework, the compensating subsidence concept, downgradient mixing, and well-mixed layers.
Numerical tests of local scale invariance in ageing q-state Potts models
NASA Astrophysics Data System (ADS)
Lorenz, E.; Janke, W.
2007-01-01
Much effort has been spent over the last years to achieve a coherent theoretical description of ageing as a non-linear dynamical process. Long supposed to be a consequence of the slow dynamics of glassy systems only, ageing phenomena have also been identified in the phase-ordering kinetics of simple ferromagnets. As a phenomenological approach, Henkel et al. developed a group of local scale transformations under which two-time autocorrelation and response functions should transform covariantly. This work extends previous numerical tests of the predicted scaling functions, performed for the Ising model, to Monte Carlo simulations of two-dimensional q-state Potts models with q=3 and 8, which, in equilibrium, undergo temperature-driven phase transitions of second and first order, respectively.
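The phase-ordering dynamics referenced here can be sketched with a minimal single-site Metropolis quench of a 2D q-state Potts model. This is a generic illustration, not the authors' ageing protocol (no two-time correlation or response functions are computed), and the lattice size, temperature, and sweep count are arbitrary choices:

```python
import math, random

def potts_sweep(spins, L, q, T):
    """One Metropolis sweep of the 2D q-state Potts model with periodic
    boundaries; the energy counts unlike nearest-neighbour pairs."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        old = spins[i][j]
        new = random.randrange(q)
        if new == old:
            continue
        nbrs = [spins[(i - 1) % L][j], spins[(i + 1) % L][j],
                spins[i][(j - 1) % L], spins[i][(j + 1) % L]]
        # Delta E = (unlike pairs after flip) - (unlike pairs before flip)
        dE = sum(n != new for n in nbrs) - sum(n != old for n in nbrs)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] = new

def quench(L=16, q=3, T=0.5, sweeps=50, seed=1):
    """Quench a random initial state to low T; return the largest fraction
    of sites in any single Potts state (grows as domains coarsen)."""
    random.seed(seed)
    spins = [[random.randrange(q) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps):
        potts_sweep(spins, L, q, T)
    counts = [sum(row.count(s) for row in spins) for s in range(q)]
    return max(counts) / (L * L)

print(quench())
```

During such a quench the majority-state fraction rises above the random value 1/q as ordered domains grow, which is the coarsening regime in which ageing phenomena are studied.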
Scale problems in assessment of hydrogeological parameters of groundwater flow models
NASA Astrophysics Data System (ADS)
Nawalany, Marek; Sinicyn, Grzegorz
2015-09-01
An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity; it is a brief summary of the conventional upscaling approach, with some attention paid to recently emerged approaches. The focus is on essential aspects, which may be an advantage in comparison with the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of the four major ingredients of scale are presented: (i) spatial extent and geometry of the hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system, and (iv) continuity/granularity of natural and man-related variables of the groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale (the scale of pores), meso-scale (the scale of a laboratory sample), macro-scale (the scale of typical blocks in numerical models of groundwater flow), local-scale (the scale of an aquifer/aquitard) and regional-scale (the scale of a series of aquifers and aquitards). Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim of justifying physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models, and likewise for upscaling local-scale models to regional-scale models. The latest results in downscaling from block-scale to sample-scale are also briefly discussed.
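A concrete instance of sample-scale to block-scale upscaling is the textbook layered-medium case, where the effective conductivity is bounded by the arithmetic mean (flow parallel to the layering) and the harmonic mean (flow perpendicular to it). The sketch below uses those classical Wiener bounds with invented layer values; it is not the specific procedure discussed in the paper:

```python
# Classical Wiener bounds for the effective hydraulic conductivity of a
# layered block: arithmetic mean for flow parallel to layers, harmonic
# mean for flow across them. Layer values are hypothetical.

def arithmetic_mean(k, d):
    """Effective K for flow parallel to layers with conductivities k
    and thicknesses d (thickness-weighted arithmetic mean)."""
    return sum(ki * di for ki, di in zip(k, d)) / sum(d)

def harmonic_mean(k, d):
    """Effective K for flow perpendicular to the same layers
    (thickness-weighted harmonic mean)."""
    return sum(d) / sum(di / ki for ki, di in zip(k, d))

# Hypothetical three-layer block: sand, silt, sand (K in m/day, d in m).
k = [10.0, 0.01, 5.0]
d = [2.0, 0.5, 1.5]
print(arithmetic_mean(k, d), harmonic_mean(k, d))
```

The thin low-conductivity silt layer barely changes the parallel-flow value but dominates the perpendicular-flow value, which is why upscaled block conductivities are direction-dependent tensors rather than single numbers.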
Predictions of Bedforms in Tidal Inlets and River Mouths
2016-07-31
that community modeling environment. APPROACH Bedforms are ubiquitous in unconsolidated sediments. They act as roughness elements, altering the ...flow and creating feedback between the bed and the flow and, in doing so, they are intimately tied to erosion, transport and deposition of sediments. ...With this approach, grain-scale sediment transport is parameterized with simple rules to drive bedform-scale dynamics. Gallagher (2011) developed a
Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Schröter, Kai; Merz, Bruno
2016-05-01
Flood risk management increasingly relies on risk analyses, including loss modelling. Most flood loss models applied in standard practice describe complex damaging processes with simple approaches such as stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, all the more in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected during the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models such as BT-FLEMO, used in this study, which inherently provide uncertainty information, is the way forward.
Winslow, Luke A.; Read, Jordan S.; Hanson, Paul C.; Stanley, Emily H.
2014-01-01
With lake abundances in the thousands to millions, creating an intuitive understanding of the distribution of morphology and processes in lakes is challenging. To improve researchers’ understanding of large-scale lake processes, we developed a parsimonious mathematical model based on the Pareto distribution to describe the distribution of lake morphology (area, perimeter and volume). While debate continues over which mathematical representation best fits any one distribution of lake morphometric characteristics, we recognize the need for a simple, flexible model to advance understanding of how the interaction between morphometry and function dictates scaling across large populations of lakes. These models make clear the relative contribution of lakes to the total amount of lake surface area, volume, and perimeter. They also highlight the critical thresholds at which total perimeter, area and volume would be evenly distributed across lake size-classes, corresponding to Pareto slopes of 0.63, 1 and 1.12, respectively. These models of morphology can be used in combination with models of process to create overarching “lake population” level models of process. To illustrate this potential, we combine the model of surface area distribution with a model of carbon mass accumulation rate. We found that even though smaller lakes contribute relatively less to total surface area than larger lakes, the increase in carbon accumulation rate with decreasing lake size is strong enough to bias the distribution of carbon mass accumulation towards smaller lakes. This analytical framework provides a relatively simple approach to upscaling morphology and process that is easily generalizable to other ecosystem processes.
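The stated thresholds follow from integrating a Pareto size distribution over logarithmic size classes: with density proportional to a^-(c+1), the total-area contribution of each decade is the integral of a^-c, which is flat exactly at slope c = 1. A short sketch (the bin choices are illustrative, not the authors' lake data):

```python
import math

def area_fraction_per_decade(c, a_min=1.0, decades=4):
    """Fraction of total lake area contributed by each decade of lake size,
    for a Pareto size distribution with tail exponent c (density ∝ a^-(c+1)).
    The per-bin area integral is ∫ a * a^-(c+1) da = ∫ a^-c da."""
    def bin_area(lo, hi):
        if abs(c - 1.0) < 1e-12:
            return math.log(hi / lo)      # limiting case c = 1
        return (hi ** (1 - c) - lo ** (1 - c)) / (1 - c)
    bins = [bin_area(a_min * 10 ** i, a_min * 10 ** (i + 1))
            for i in range(decades)]
    total = sum(bins)
    return [b / total for b in bins]

print(area_fraction_per_decade(1.0))   # equal contribution from each decade
print(area_fraction_per_decade(0.8))   # c < 1: large lakes dominate total area
```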
Chaotic Lagrangian models for turbulent relative dispersion.
Lacorata, Guglielmo; Vulpiani, Angelo
2017-04-01
A deterministic multiscale dynamical system is introduced and discussed as a prototype model for relative dispersion in stationary, homogeneous, and isotropic turbulence. Unlike stochastic diffusion models, here trajectory transport and mixing properties are entirely controlled by Lagrangian chaos. The anomalous "sweeping effect," a known drawback common to kinematic simulations, is removed through the use of quasi-Lagrangian coordinates. Lagrangian dispersion statistics of the model are accurately analyzed by computing the finite-scale Lyapunov exponent (FSLE), which is the optimal measure of the scaling properties of dispersion. FSLE scaling exponents provide a severe test to decide whether model simulations are in agreement with theoretical expectations and/or observation. The results of our numerical experiments cover a wide range of "Reynolds numbers" and show that chaotic deterministic flows can be very efficient, and numerically low-cost, models of turbulent trajectories in stationary, homogeneous, and isotropic conditions. The mathematics of the model is relatively simple, and, in a geophysical context, potential applications may regard small-scale parametrization issues in general circulation models, mixed layer, and/or boundary layer turbulence models as well as Lagrangian predictability studies.
ERIC Educational Resources Information Center
Eaton, Bruce G., Ed.
1977-01-01
Presents four short articles on: a power supply for the measurement of the charge-to-mass ratio of the electron; a modified centripetal force apparatus; a black box electronic unknown for the scientific instruments laboratory; and a simple scaling model for biological systems. (MLH)
NASA Astrophysics Data System (ADS)
Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.
2018-06-01
Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from < 1 to 15 %. The parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry season dynamics. Parameter estimates were most constrained for scales and locations at which soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. 
Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
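The study inverts the analytical soil saturation pdf of the stochastic water balance framework; as a simplified stand-in, the sketch below fits a two-parameter Beta pdf to synthetic saturation samples by grid-search maximum likelihood. The Beta form, the parameter grid, and the sample size are illustrative assumptions, not the paper's model:

```python
import math, random

def beta_logpdf(s, a, b):
    """Log-density of the Beta(a, b) distribution at saturation s in (0, 1)."""
    logB = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (a - 1) * math.log(s) + (b - 1) * math.log(1 - s) - logB

def grid_mle(samples, grid):
    """Return the (a, b) pair on the grid maximizing the log-likelihood."""
    best, best_ll = None, -math.inf
    for a in grid:
        for b in grid:
            ll = sum(beta_logpdf(s, a, b) for s in samples)
            if ll > best_ll:
                best, best_ll = (a, b), ll
    return best

random.seed(0)
# Synthetic "observed" saturations drawn from Beta(2, 5).
samples = [random.betavariate(2.0, 5.0) for _ in range(300)]
grid = [0.5 + 0.5 * i for i in range(12)]   # 0.5, 1.0, ..., 6.0
a_hat, b_hat = grid_mle(samples, grid)
print(a_hat, b_hat)
```

The recovered parameters should sit near the generating values, illustrating the abstract's point that a saturation pdf, rather than the full time series, can carry enough information to constrain the parameters.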
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as from parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that, while the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.
The overconstraint of response time models: rethinking the scaling problem.
Donkin, Chris; Brown, Scott D; Heathcote, Andrew
2009-12-01
Theories of choice response time (RT) provide insight into the psychological underpinnings of simple decisions. Evidence accumulation (or sequential sampling) models are the most successful theories of choice RT. These models all have the same "scaling" property--that a subset of their parameters can be multiplied by the same amount without changing their predictions. This property means that a single parameter must be fixed to allow the estimation of the remaining parameters. In the present article, we show that the traditional solution to this problem has overconstrained these models, unnecessarily restricting their ability to account for data and making implicit--and therefore unexamined--psychological assumptions. We show that versions of these models that address the scaling problem in a minimal way can provide a better description of data than can their overconstrained counterparts, even when increased model complexity is taken into account.
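The scaling property can be made concrete with the closed-form choice probability of a simple unbiased Wiener diffusion (a stand-in example, not the specific models tested in the article): P(correct) = 1/(1 + exp(-v·a/s²)) for drift v, boundary separation a, and diffusion coefficient s. Multiplying all three parameters by the same constant leaves v·a/s² unchanged, so the prediction is invariant:

```python
import math

def p_correct(v, a, s):
    """Choice probability of a Wiener diffusion with drift v, boundary
    separation a, unbiased start z = a/2 and diffusion coefficient s:
    P = 1 / (1 + exp(-v * a / s**2))."""
    return 1.0 / (1.0 + math.exp(-v * a / s ** 2))

p1 = p_correct(v=0.25, a=0.12, s=0.1)
p2 = p_correct(v=0.25 * 2, a=0.12 * 2, s=0.1 * 2)  # all parameters doubled
print(p1, p2)  # equal up to floating-point rounding
```

This is exactly why one parameter must be fixed before the rest can be estimated, and why the choice of which constraint to impose is itself a psychological assumption.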
Simple scaling of cooperation in donor-recipient games.
Berger, Ulrich
2009-09-01
We present a simple argument which proves a general version of the scaling phenomenon recently observed in donor-recipient games by Tanimoto [Tanimoto, J., 2009. A simple scaling of the effectiveness of supporting mutual cooperation in donor-recipient games by various reciprocity mechanisms. BioSystems 96, 29-34].
Multi-Scale Models for the Scale Interaction of Organized Tropical Convection
NASA Astrophysics Data System (ADS)
Yang, Qiu
Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for better understanding the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping to design new parameterization strategies that improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically following multi-scale asymptotic methods and are used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and in providing an assessment of the upscale impact of small-scale fluctuations onto the large-scale mean flow, through eddy flux divergences of momentum and temperature, in a transparent fashion. Specifically, this thesis includes three research projects on the multi-scale interaction of organized tropical convection, involving tropical flows in different scaling regimes and utilizing correspondingly different multi-scale models. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and to assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful for understanding the scale interaction of organized tropical convection and for improving the parameterization of unresolved processes in global climate models.
The role of strength defects in shaping impact crater planforms
NASA Astrophysics Data System (ADS)
Watters, W. A.; Geiger, L. M.; Fendrock, M.; Gibson, R.; Hundal, C. B.
2017-04-01
High-resolution imagery and digital elevation models (DEMs) were used to measure the planimetric shapes of well-preserved impact craters. These measurements were used to characterize the size-dependent scaling of the departure from circular symmetry, which provides useful insights into the processes of crater growth and modification. For example, we characterized the dependence of the standard deviation of radius (σR) on crater diameter (D) as σR ∼ D^m. For complex craters on the Moon and Mars, m ranges from 0.9 to 1.2 among strong and weak target materials. For the martian simple craters in our data set, m varies from 0.5 to 0.8. The value of m tends toward larger values in weak materials and modified craters, and toward smaller values in relatively unmodified craters as well as craters in high-strength targets, such as young lava plains. We hypothesize that m ≈ 1 for planforms shaped by modification processes (slumping and collapse), whereas m tends toward ∼ 1/2 for planforms shaped by an excavation flow that was influenced by strength anisotropies. Additional morphometric parameters were computed to characterize the following planform properties: the planform aspect ratio or ellipticity, the deviation from a fitted ellipse, and the deviation from a convex shape. We also measured the distribution of crater shapes using Fourier decomposition of the planform, finding a similar distribution for simple and complex craters. By comparing the strength of small and large circular harmonics, we confirmed that lunar and martian complex craters are more polygonal at small sizes. Finally, we have used physical and geometrical principles to motivate scaling arguments and simple Monte Carlo models for generating synthetic planforms, which depend on a characteristic length scale of target strength defects. One of these models can be used to generate populations of synthetic planforms which are very similar to the measured population of well-preserved simple craters on Mars.
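The exponent m in σR ∼ D^m is conventionally estimated by least squares in log-log space. The sketch below uses noise-free synthetic measurements (invented values, not the paper's crater data) so that the fit recovers the generating exponent exactly:

```python
import math

def fit_power_law(D, sigma):
    """Estimate m in sigma_R ∝ D^m by least squares in log-log space."""
    x = [math.log(d) for d in D]
    y = [math.log(s) for s in sigma]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)

# Invented crater "measurements" with sigma_R = 0.05 * D**0.9 (no noise),
# so the fit should recover m = 0.9 up to floating-point error.
D = [2.0, 5.0, 10.0, 25.0, 60.0]
sigma = [0.05 * d ** 0.9 for d in D]
print(fit_power_law(D, sigma))
```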
NASA Astrophysics Data System (ADS)
Finnegan, N. J.; Roe, G.; Montgomery, D. R.; Hallet, B.
2004-12-01
The fundamental role of bedrock channel incision in the evolution of mountainous topography has become a central concept in tectonic geomorphology over the past decade. During this time the stream power model of bedrock river incision has emerged as a valuable tool for exploring the dynamics of bedrock river incision in time and space. In most stream power analyses, river channel width, a necessary ingredient for calculating power or shear stress per unit of bed area, is assumed to scale solely with discharge. However, recent field-based studies provide evidence for the alternative view that channel width varies locally, much like channel slope does, in association with spatial changes in rock uplift rate and erodibility. This suggests that simple scaling relations between width and discharge, and hence estimates of stream power, do not apply in regions where rock uplift and erodibility vary spatially. It also highlights the need for an alternative to the traditional assumptions of hydraulic geometry to further investigate the coupling between bedrock river incision and tectonic processes. Based on Manning's equation, basic mass conservation principles, and an assumption of self-similarity for channel cross sections, we present a new relation for scaling the steady-state width of bedrock river channels as a function of discharge (Q), channel slope (S), and roughness (Ks): W ∝ Q^{3/8} S^{-3/16} Ks^{1/16}. In longitudinally simple, uniform-concavity rivers from the King Range in coastal Northern California, the model emulates traditional width-discharge relations that scale channel width with the square root of discharge. More significantly, our relation describes river width trends for the Yarlung Tsangpo in SE Tibet and the Wenatchee River in the Washington Cascades, both rivers that narrow considerably as they incise terrain with spatially varied rock uplift rates and/or lithology. 
We suggest that much of observed channel width variability is a simple consequence of the tendency for water to flow faster in steeper reaches and therefore maintain smaller channel cross sections. We demonstrate that using conventional scaling relations for bedrock channel width can significantly underestimate stream power variability in bedrock channels, and that our model improves estimates of spatial patterns of bedrock incision rates.
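The proposed relation can be evaluated directly. The sketch below uses arbitrary discharge, slope, and roughness values, and an arbitrary prefactor (the abstract states only a proportionality), to illustrate the predicted narrowing in steeper reaches:

```python
def channel_width(Q, S, Ks, c=1.0):
    """Steady-state bedrock channel width from the abstract's scaling
    W ∝ Q^(3/8) * S^(-3/16) * Ks^(1/16); c is an arbitrary prefactor."""
    return c * Q ** (3 / 8) * S ** (-3 / 16) * Ks ** (1 / 16)

# Relative narrowing when slope increases eightfold at fixed Q and Ks:
w_gentle = channel_width(Q=100.0, S=0.001, Ks=30.0)
w_steep = channel_width(Q=100.0, S=0.008, Ks=30.0)
print(w_steep / w_gentle)  # 8**(-3/16) ≈ 0.68: steeper reaches are narrower
```

Because the slope exponent is negative, any reach forced to steepen (for example across a zone of rapid rock uplift) maintains a smaller cross section, which is the mechanism the abstract invokes.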
Structural Preferential Attachment: Network Organization beyond the Link
NASA Astrophysics Data System (ADS)
Hébert-Dufresne, Laurent; Allard, Antoine; Marceau, Vincent; Noël, Pierre-André; Dubé, Louis J.
2011-10-01
We introduce a mechanism which models the emergence of the universal properties of complex networks, such as scale independence, modularity and self-similarity, and unifies them under a scale-free organization beyond the link. This brings a new perspective on network organization where communities, instead of links, are the fundamental building blocks of complex systems. We show how our simple model can reproduce social and information networks by predicting their community structure and more importantly, how their nodes or communities are interconnected, often in a self-similar manner.
Exploration–exploitation trade-off features a saltatory search behaviour
Volchenkov, Dimitri; Helbach, Jonathan; Tscherepanow, Marko; Kühnel, Sina
2013-01-01
Searching experiments conducted in different virtual environments over a gender-balanced group of people revealed a gender-independent, scale-free spread of searching activity on large spatio-temporal scales. We have suggested and solved analytically a simple statistical model of the coherent-noise type describing the exploration–exploitation trade-off in humans (‘should I stay’ or ‘should I go’). The model exhibits a variety of saltatory behaviours, ranging from Lévy flights occurring under uncertainty to Brownian walks performed by a treasure hunter confident of eventual success. PMID:23782535
The brainstem reticular formation is a small-world, not scale-free, network
Humphries, M.D; Gurney, K; Prescott, T.J
2005-01-01
Recently, it has been demonstrated that several complex systems may have simple graph-theoretic characterizations as so-called ‘small-world’ and ‘scale-free’ networks. These networks have also been applied to the gross neural connectivity between primate cortical areas and the nervous system of Caenorhabditis elegans. Here, we extend this work to a specific neural circuit of the vertebrate brain—the medial reticular formation (RF) of the brainstem—and, in doing so, we have made three key contributions. First, this work constitutes the first model (and quantitative review) of this important brain structure for over three decades. Second, we have developed the first graph-theoretic analysis of vertebrate brain connectivity at the neural network level. Third, we propose simple metrics to quantitatively assess the extent to which the networks studied are small-world or scale-free. We conclude that the medial RF is configured to create small-world (implying coherent rapid-processing capabilities), but not scale-free, type networks under assumptions which are amenable to quantitative measurement. PMID:16615219
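The paper proposes its own metrics, but the two standard ingredients of any small-world assessment, mean local clustering and mean shortest-path length, can be sketched with library-free code. The toy ring-lattice graph below is purely illustrative and unrelated to the reticular formation data:

```python
from collections import deque

def clustering(adj):
    """Mean local clustering coefficient of an undirected graph
    given as {node: set(neighbours)}."""
    total = 0.0
    for n, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs (assumes connectivity),
    computed by breadth-first search from each node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n, k=2):
    """Ring of n nodes, each linked to its k nearest neighbours per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

ring = ring_lattice(20)
shortcut = ring_lattice(20)
shortcut[0].add(10); shortcut[10].add(0)   # one long-range shortcut
print(clustering(ring), avg_path_length(ring), avg_path_length(shortcut))
```

A small-world network keeps the high clustering of the lattice while even a few shortcuts sharply reduce the mean path length, which is the combination the abstract associates with rapid, coherent processing.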
Scale-dependent compensational stacking of channelized sedimentary deposits
NASA Astrophysics Data System (ADS)
Wang, Y.; Straub, K. M.; Hajek, E. A.
2010-12-01
Compensational stacking, the tendency of sediment transport systems to preferentially fill topographic lows and thereby smooth out topographic relief, is a concept used in the interpretation of the stratigraphic record. Recently, a metric was developed to quantify the strength of compensation in sedimentary basins by comparing observed stacking patterns to what would be expected from simple, uncorrelated stacking. This method uses the rate of decay of spatial variability in sedimentation between picked depositional horizons with increasing vertical stratigraphic averaging distance. We explore how this metric varies as a function of stratigraphic scale using data from physical experiments, stratigraphy exposed in outcrops, and numerical models. In an experiment conducted at Tulane University’s Sediment Dynamics Laboratory, the topography of a channelized delta formed by weakly cohesive sediment was monitored along flow-perpendicular transects at a high temporal resolution relative to channel kinematics. Over the course of this experiment a uniform relative subsidence pattern, designed to isolate autogenic processes, resulted in the construction of a stratigraphic package 25 times as thick as the depth of the experimental channels. We observe a scale dependence in the compensational stacking of deposits, set by the system’s avulsion time-scale. Above the avulsion time-scale deposits stack purely compensationally, but below this time-scale deposits stack somewhere between randomly and deterministically. The well-exposed Ferris Formation (Cretaceous/Paleogene, Hanna Basin, Wyoming, USA) also shows scale-dependent stratigraphic organization which appears to be set by an avulsion time-scale. Finally, we utilize simple object-based models to illustrate how channel avulsions influence compensation in alluvial basins.
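A minimal illustration of the decay-of-variability idea (not the authors' code; grid size, window lengths, and the uniform deposition law are arbitrary): for uncorrelated random stacking, the spatial standard deviation of sedimentation rate averaged over a window of T steps decays as T^(-1/2), i.e. a decay exponent near 0.5, whereas purely compensational stacking decays faster.

```python
import random
import math

def decay_exponent(deposits, windows):
    # sigma(T): spatial std of sedimentation rate averaged over T steps;
    # fit sigma ~ T**(-kappa) by log-log least squares.
    ncols = len(deposits[0])
    pts = []
    for T in windows:
        nwin = len(deposits) // T
        rates = []
        for w in range(nwin):
            rows = deposits[w * T:(w + 1) * T]
            for c in range(ncols):
                rates.append(sum(r[c] for r in rows) / T)
        mean = sum(rates) / len(rates)
        var = sum((x - mean) ** 2 for x in rates) / len(rates)
        pts.append((math.log(T), math.log(math.sqrt(var))))
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -slope

rng = random.Random(7)
# 1024 time steps x 50 columns of uncorrelated random deposition.
deposits = [[rng.random() for _ in range(50)] for _ in range(1024)]
kappa = decay_exponent(deposits, [1, 2, 4, 8, 16, 32])
print(f"kappa = {kappa:.2f}")
```

The estimated exponent comes out near 0.5, the uncorrelated baseline against which stronger (compensational) decay is measured.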
Sridhar, Upasana Manimegalai; Govindarajan, Anand; Rhinehart, R Russell
2016-01-01
This work reveals the applicability of a relatively new optimization technique, Leapfrogging, for both nonlinear regression modeling and a methodology for nonlinear model-predictive control. Both are relatively simple, yet effective. The application on a nonlinear, pilot-scale, shell-and-tube heat exchanger reveals practicability of the techniques. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
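A rough sketch of the Leapfrogging idea as commonly described by Rhinehart and co-workers: the worst trial point leaps over the best into the window obtained by reflecting its position through the best point. This is not the paper's implementation; the objective function, player count, and iteration budget below are all hypothetical choices.

```python
import random

def leapfrog(f, bounds, players=12, iters=3000, seed=0):
    # Leapfrogging: repeatedly relocate the worst player to a random
    # spot in the interval from the best player to the reflection of
    # the worst player through the best.
    rng = random.Random(seed)
    pts = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(players)]
    vals = [f(p) for p in pts]
    for _ in range(iters):
        wi = max(range(players), key=lambda i: vals[i])
        bi = min(range(players), key=lambda i: vals[i])
        best, worst = pts[bi], pts[wi]
        new = []
        for d, (lo, hi) in enumerate(bounds):
            a, b = best[d], 2.0 * best[d] - worst[d]  # leap-over window
            x = rng.uniform(min(a, b), max(a, b))
            new.append(min(max(x, lo), hi))           # clip to bounds
        pts[wi], vals[wi] = new, f(new)
    bi = min(range(players), key=lambda i: vals[i])
    return pts[bi], vals[bi]

# Hypothetical test objective: shifted quadratic with minimum at (3, -2).
f = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 2.0) ** 2
x, v = leapfrog(f, [(-10, 10), (-10, 10)])
print(x, v)
```

The population contracts around the running best point while that point ratchets downhill, which is why the method needs no gradient and few tuning knobs.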
Chaudhari, Mangesh I; Muralidharan, Ajay; Pratt, Lawrence R; Rempe, Susan B
2018-02-12
Progress in understanding liquid ethylene carbonate (EC) and propylene carbonate (PC) on the basis of molecular simulation, emphasizing simple models of interatomic forces, is reviewed. Results on the bulk liquids are examined from the perspective of anticipated applications to materials for electrical energy storage devices. Preliminary results on electrochemical double-layer capacitors based on carbon nanotube forests and on model solid-electrolyte interphase (SEI) layers of lithium ion batteries are considered as examples. The basic results discussed suggest that an empirically parameterized, non-polarizable force field can reproduce experimental structural, thermodynamic, and dielectric properties of EC and PC liquids with acceptable accuracy. More sophisticated force fields might include molecular polarizability and Buckingham-model description of inter-atomic overlap repulsions as extensions to Lennard-Jones models of van der Waals interactions. Simple approaches should be similarly successful also for applications to organic molecular ions in EC/PC solutions, but the important case of Li⁺ deserves special attention because of the particularly strong interactions of that small ion with neighboring solvent molecules. To treat the Li⁺ ions in liquid EC/PC solutions, we identify interaction models defined by empirically scaled partial charges for ion-solvent interactions. The empirical adjustments use more basic inputs, electronic structure calculations and ab initio molecular dynamics simulations, and also experimental results on Li⁺ thermodynamics and transport in EC/PC solutions. Application of such models to the mechanism of Li⁺ transport in glassy SEI models emphasizes the advantage of long time-scale molecular dynamics studies of these non-equilibrium materials.
High Performance Computing for Modeling Wind Farms and Their Impact
NASA Astrophysics Data System (ADS)
Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.
2016-12-01
As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulating everything from continental-scale flows down to the flow over a wind turbine blade, including the blade boundary layer, spanning fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the lower scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development as well as improvement in wind plant performance and enhancements to the transmission infrastructure will also be discussed.
USDA-ARS?s Scientific Manuscript database
Soil moisture monitoring with in situ technology is a time consuming and costly endeavor for which a method of increasing the resolution of spatial estimates across in situ networks is necessary. Using a simple hydrologic model, the resolution of an in situ watershed network can be increased beyond...
Geospatial application of the Water Erosion Prediction Project (WEPP) model
D. C. Flanagan; J. R. Frankenberger; T. A. Cochrane; C. S. Renschler; W. J. Elliot
2013-01-01
At the hillslope profile and/or field scale, a simple Windows graphical user interface (GUI) is available to easily specify the slope, soil, and management inputs for application of the USDA Water Erosion Prediction Project (WEPP) model. Likewise, basic small watershed configurations of a few hillslopes and channels can be created and simulated with this GUI. However,...
Impact force as a scaling parameter
NASA Technical Reports Server (NTRS)
Poe, Clarence C., Jr.; Jackson, Wade C.
1994-01-01
The Federal Aviation Administration (FAR PART 25) requires that a structure carry ultimate load with nonvisible impact damage and carry 70 percent of limit flight loads with discrete damage. The Air Force has similar criteria (MIL-STD-1530A). Both civilian and military structures are designed by a building block approach. First, critical areas of the structure are determined, and potential failure modes are identified. Then, a series of representative specimens are tested that will fail in those modes. The series begins with tests of simple coupons, progresses through larger and more complex subcomponents, and ends with a test on a full-scale component, hence the term 'building block.' In order to minimize testing, analytical models are needed to scale impact damage and residual strength from the simple coupons to the full-scale component. Using experiments and analysis, the present paper illustrates that impact damage can be better understood and scaled using impact force than just kinetic energy. The plate parameters considered are size and thickness, boundary conditions, and material, and the impact parameters are mass, shape, and velocity.
Upgrades to the REA method for producing probabilistic climate change projections
NASA Astrophysics Data System (ADS)
Xu, Ying; Gao, Xuejie; Giorgi, Filippo
2010-05-01
We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
NASA Astrophysics Data System (ADS)
Lehmann, Peter; von Ruette, Jonas; Fan, Linfeng; Or, Dani
2014-05-01
Rapid debris flows initiated by rainfall induced shallow landslides present a highly destructive natural hazard in steep terrain. The impact and run-out paths of debris flows depend on the volume, composition and initiation zone of released material and are requirements to make accurate debris flow predictions and hazard maps. For that purpose we couple the mechanistic 'Catchment-scale Hydro-mechanical Landslide Triggering (CHLT)' model to compute timing, location, and landslide volume with simple approaches to estimate debris flow runout distances. The runout models were tested using two landslide inventories obtained in the Swiss Alps following prolonged rainfall events. The predicted runout distances were in good agreement with observations, confirming the utility of such simple models for landscape scale estimates. In a next step debris flow paths were computed for landslides predicted with the CHLT model for a certain range of soil properties to explore its effect on runout distances. This combined approach offers a more complete spatial picture of shallow landslide and subsequent debris flow hazards. The additional information provided by CHLT model concerning location, shape, soil type and water content of the released mass may also be incorporated into more advanced models of runout to improve predictability and impact of such abruptly-released mass.
A model for allometric scaling of mammalian metabolism with ambient heat loss.
Kwak, Ho Sang; Im, Hong G; Shim, Eun Bo
2016-03-01
Allometric scaling, which represents the dependence of biological traits or processes on body size, is a long-standing subject in biological science. However, no previous study has considered heat loss to the ambient environment and an insulation layer representing mammalian skin and fur in deriving the scaling law of metabolism. A simple heat transfer model is proposed to analyze the allometry of mammalian metabolism. The present model extends existing studies by incorporating various external heat transfer parameters and additional insulation layers. The model equations were solved numerically and by an analytic heat balance approach. A general observation is that the present heat transfer model predicted the 2/3 surface scaling law, which is primarily attributed to the dependence of the surface area on the body mass. External heat transfer effects introduced deviations in the scaling law, mainly due to natural convection heat transfer, which becomes more prominent at smaller mass. These deviations resulted in a slight modification of the scaling exponent to a value < 2/3. The finding that additional radiative heat loss and the consideration of an outer insulation fur layer attenuate these deviations and render the scaling law closer to 2/3 provides in silico evidence for a functional impact of heat transfer mode on the allometric scaling law in mammalian metabolism.
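The core surface argument can be reproduced in a few lines (a toy calculation, not the paper's model; the heat transfer coefficient, temperature difference, and tissue density below are placeholder values): with B = h·A·ΔT and the surface area A fixed by the mass through spherical geometry, a log-log fit recovers the 2/3 exponent exactly.

```python
import math

def metabolic_rate(mass, h=10.0, dT=10.0, rho=1000.0):
    # Heat loss through the surface of a sphere of tissue density rho:
    # B = h * A * dT, with A set by the mass via r = (3M / (4 pi rho))^(1/3).
    r = (3.0 * mass / (4.0 * math.pi * rho)) ** (1.0 / 3.0)
    return h * 4.0 * math.pi * r ** 2 * dT

masses = [0.01 * 10 ** (i / 2.0) for i in range(10)]   # 10 g .. ~316 kg
xs = [math.log(m) for m in masses]
ys = [math.log(metabolic_rate(m)) for m in masses]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
print(f"scaling exponent: {slope:.3f}")
```

The deviations the paper discusses arise when h itself becomes size-dependent (e.g. natural convection), which tilts the fitted exponent slightly below 2/3.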
A simple microstructure return model explaining microstructure noise and Epps effects
NASA Astrophysics Data System (ADS)
Saichev, A.; Sornette, D.
2014-01-01
We present a novel simple microstructure model of financial returns that combines (i) the well-known ARFIMA process applied to tick-by-tick returns, (ii) the bid-ask bounce effect, (iii) the fat tail structure of the distribution of returns and (iv) the non-Poissonian statistics of inter-trade intervals. This model allows us to explain both qualitatively and quantitatively important stylized facts observed in the statistics of both microstructure and macrostructure returns, including the short-ranged correlation of returns, the long-ranged correlations of absolute returns, the microstructure noise and Epps effects. According to the microstructure noise effect, volatility is a decreasing function of the time-scale used to estimate it. The Epps effect states that cross correlations between asset returns are increasing functions of the time-scale at which the returns are estimated. The microstructure noise is explained as the result of the negative return correlations inherent in the definition of the bid-ask bounce component (ii). In the presence of a genuine correlation between the returns of two assets, the Epps effect is due to an average statistical overlap of the momentum of the returns of the two assets defined over a finite time-scale in the presence of the long memory process (i).
Description of the US Army small-scale 2-meter rotor test system
NASA Technical Reports Server (NTRS)
Phelps, Arthur E., III; Berry, John D.
1987-01-01
A small-scale powered rotor model was designed for use as a research tool in the exploratory testing of rotors and helicopter models. The model, which consists of a 29 hp rotor drive system, a four-blade fully articulated rotor, and a fuselage, was designed to be simple to operate and maintain in wind tunnels of moderate size and complexity. Two six-component strain-gauge balances are used to provide independent measurement of the rotor and fuselage aerodynamic loads. Commercially available standardized hardware and equipment were used to the maximum extent possible, and specialized parts were designed so that they could be fabricated by normal methods without using highly specialized tooling. The model was used in a hover test of three rotors having different planforms and in a forward flight investigation of a 21-percent-scale model of a U.S. Army scout helicopter equipped with a mast-mounted sight.
Universal fitting formulae for baryon oscillation surveys
NASA Astrophysics Data System (ADS)
Blake, Chris; Parkinson, David; Bassett, Bruce; Glazebrook, Karl; Kunz, Martin; Nichol, Robert C.
2006-01-01
The next generation of galaxy surveys will attempt to measure the baryon oscillations in the clustering power spectrum with high accuracy. These oscillations encode a preferred scale which may be used as a standard ruler to constrain cosmological parameters and dark energy models. In this paper we present simple analytical fitting formulae for the accuracy with which the preferred scale may be determined in the tangential and radial directions by future spectroscopic and photometric galaxy redshift surveys. We express these accuracies as a function of survey parameters such as the central redshift, volume, galaxy number density and (where applicable) photometric redshift error. These fitting formulae should greatly increase the efficiency of optimizing future surveys, which requires analysis of a potentially vast number of survey configurations and cosmological models. The formulae are calibrated using a grid of Monte Carlo simulations, which are analysed by dividing out the overall shape of the power spectrum before fitting a simple decaying sinusoid to the oscillations. The fitting formulae reproduce the simulation results with a fractional scatter of 7 per cent (10 per cent) in the tangential (radial) directions over a wide range of input parameters. We also indicate how sparse-sampling strategies may enhance the effective survey area if the sampling scale is much smaller than the projected baryon oscillation scale.
Validity of the two-level model for Viterbi decoder gap-cycle performance
NASA Technical Reports Server (NTRS)
Dolinar, S.; Arnold, S.
1990-01-01
A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.
Transcription, intercellular variability and correlated random walk.
Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar
2008-11-01
We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is due to switching transcription on and off by a random process. Under the condition that the transition rates between on and off are constant we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.
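A sketch of the on/off switching model (often called the telegraph model) under stated assumptions: exponential waiting times between switches, deterministic production and decay of mRNA between switches, and rates chosen arbitrarily here. Under these assumptions the long-run mean equals the mean of the scaled Beta stationary distribution on [0, λ/δ], namely (λ/δ)·k_on/(k_on + k_off).

```python
import math
import random

def telegraph_mean(k_on, k_off, lam, delta, t_end, seed=3):
    # Piecewise-deterministic model: the gene switches on/off with
    # exponential waiting times; between switches the mRNA level obeys
    # dm/dt = lam*s - delta*m, integrated exactly on each segment.
    rng = random.Random(seed)
    t, m, s, area = 0.0, 0.0, 0, 0.0
    while t < t_end:
        dt = rng.expovariate(k_off if s else k_on)
        if t + dt >= t_end:
            dt, t = t_end - t, t_end
        else:
            t += dt
        target = lam * s / delta            # level the segment relaxes toward
        decay = math.exp(-delta * dt)
        # exact integral of m over this segment
        area += target * dt + (m - target) * (1.0 - decay) / delta
        m = target + (m - target) * decay
        s = 1 - s                           # flip the gene state
    return area / t_end

k_on, k_off, lam, delta = 2.0, 1.0, 30.0, 1.0
sim = telegraph_mean(k_on, k_off, lam, delta, t_end=20000.0)
# Mean of the scaled Beta stationary law on [0, lam/delta]:
theory = (lam / delta) * k_on / (k_on + k_off)
print(f"simulated mean {sim:.2f} vs scaled-Beta mean {theory:.2f}")
```

The long simulated trajectory reproduces the analytical mean, and histogramming m over time would similarly trace out the scaled Beta shape, bimodal when switching is slow relative to degradation.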
Cosmic microwave background radiation anisotropies in brane worlds.
Koyama, Kazuya
2003-11-28
We propose a new formulation to calculate the cosmic microwave background (CMB) spectrum in the Randall-Sundrum two-brane model based on recent progress in solving the bulk geometry using a low energy approximation. The evolution of the anisotropic stress imprinted on the brane by the 5D Weyl tensor is calculated. An impact of the dark radiation perturbation on the CMB spectrum is investigated in a simple model assuming an initially scale-invariant adiabatic perturbation. The dark radiation perturbation induces isocurvature perturbations, but the resultant spectrum can be quite different from the prediction of simple mixtures of adiabatic and isocurvature perturbations due to Weyl anisotropic stress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leroy, Adam K.; Hughes, Annie; Schruba, Andreas
2016-11-01
The cloud-scale density, velocity dispersion, and gravitational boundedness of the interstellar medium (ISM) vary within and among galaxies. In turbulent models, these properties play key roles in the ability of gas to form stars. New high-fidelity, high-resolution surveys offer the prospect to measure these quantities across galaxies. We present a simple approach to make such measurements and to test hypotheses that link small-scale gas structure to star formation and galactic environment. Our calculations capture the key physics of the Larson scaling relations, and we show good correspondence between our approach and a traditional “cloud properties” treatment. However, we argue that our method is preferable in many cases because of its simple, reproducible characterization of all emission. Using low-J ¹²CO data from recent surveys, we characterize the molecular ISM at 60 pc resolution in the Antennae, the Large Magellanic Cloud (LMC), M31, M33, M51, and M74. We report the distributions of surface density, velocity dispersion, and gravitational boundedness at 60 pc scales and show galaxy-to-galaxy and intragalaxy variations in each. The distribution of flux as a function of surface density appears roughly lognormal with a 1σ width of ∼0.3 dex, though the center of this distribution varies from galaxy to galaxy. The 60 pc resolution line width and molecular gas surface density correlate well, which is a fundamental behavior expected for virialized or free-falling gas. Varying the measurement scale for the LMC and M31, we show that the molecular ISM has higher surface densities, lower line widths, and more self-gravity at smaller scales.
Predictions from a flavour GUT model combined with a SUSY breaking sector
NASA Astrophysics Data System (ADS)
Antusch, Stefan; Hohl, Christian
2017-10-01
We discuss how flavour GUT models in the context of supergravity can be completed with a simple SUSY breaking sector, such that the flavour-dependent (non-universal) soft breaking terms can be calculated. As an example, we discuss a model based on an SU(5) GUT symmetry and an A4 family symmetry, plus additional discrete “shaping symmetries” and a ℤ4 R-symmetry. We calculate the soft terms and identify the relevant high scale input parameters, and investigate the resulting predictions for the low scale observables, such as flavour violating processes, the sparticle spectrum and the dark matter relic density.
Applications of Perron-Frobenius theory to population dynamics.
Li, Chi-Kwong; Schneider, Hans
2002-05-01
By the use of Perron-Frobenius theory, simple proofs are given of the Fundamental Theorem of Demography and of a theorem of Cushing and Yicang on the net reproductive rate occurring in matrix models of population dynamics. The latter result, which is closely related to the Stein-Rosenberg theorem in numerical linear algebra, is further refined with some additional nonnegative matrix theory. When the fertility matrix is scaled by the net reproductive rate, the growth rate of the model is 1. More generally, we show how to achieve a given growth rate for the model by scaling the fertility matrix. Demographic interpretations of the results are given.
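The fertility-scaling result can be checked numerically on a hypothetical 3-age-class Leslie matrix (the vital rates below are made up for illustration): dividing the fertility row by the net reproductive rate R0 forces the dominant eigenvalue, i.e. the growth rate, to 1.

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def growth_rate(A, iters=500):
    # Dominant eigenvalue (Perron root) by power iteration,
    # normalizing by the sup norm at each step.
    v = [1.0] * len(A)
    lam = 1.0
    for _ in range(iters):
        w = mat_vec(A, v)
        lam = max(w)
        v = [x / lam for x in w]
    return lam

# Hypothetical Leslie model: first row fertilities, sub-diagonal survivals.
f = [0.0, 1.2, 2.0]
s = [0.8, 0.5]
# Net reproductive rate: fertility weighted by survivorship to each age.
R0 = f[0] + f[1] * s[0] + f[2] * s[0] * s[1]

A = [[f[0], f[1], f[2]],
     [s[0], 0.0, 0.0],
     [0.0, s[1], 0.0]]
A_scaled = [[x / R0 for x in A[0]],
            list(A[1]),
            list(A[2])]

print(f"R0 = {R0:.3f}")
print(f"growth rate of scaled model = {growth_rate(A_scaled):.6f}")
```

Here R0 = 1.76 > 1, so the unscaled population grows; after scaling the fertilities by 1/R0, power iteration returns a Perron root of 1 to machine precision, which is the Cushing-Yicang statement the paper proves.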
NASA Technical Reports Server (NTRS)
Blackwell, William C., Jr.
2004-01-01
In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.
Simulations of Flame Acceleration and Deflagration-to-Detonation Transitions in Methane-Air Systems
2010-03-17
are neglected. 3. Model parameter calibration: The one-step Arrhenius kinetics used in this model cannot exactly reproduce all properties of laminar... with obstacles are compared to previously reported experimental data. The results obtained using the simple reaction model qualitatively, and in... have taken in developing a multidimensional numerical model to study explosions in large-scale systems containing mixtures of natural gas and air
Break-up of Gondwana and opening of the South Atlantic: Review of existing plate tectonic models
Ghidella, M.E.; Lawver, L.A.; Gahagan, L.M.
2007-01-01
each model. We also plot reconstructions at four selected epochs for all models using the same projection and scale to facilitate comparison. The diverse simplifying assumptions that need to be made in every case regarding plate fragmentation to account for the numerous syn-rift basins and periods of stretching are strong indicators that rigid plate tectonics is too simple a model for the present problem.
NASA Astrophysics Data System (ADS)
Yamakou, Marius E.; Jost, Jürgen
2017-10-01
In recent years, several, apparently quite different, weak-noise-induced resonance phenomena have been discovered. Here, we show that at least two of them, self-induced stochastic resonance (SISR) and inverse stochastic resonance (ISR), can be related by a simple parameter switch in one of the simplest models, the FitzHugh-Nagumo (FHN) neuron model. We consider a FHN model with a unique fixed point perturbed by synaptic noise. Depending on the stability of this fixed point and whether it is located to either the left or right of the fold point of the critical manifold, two distinct weak-noise-induced phenomena, either SISR or ISR, may emerge. SISR is more robust to parametric perturbations than ISR, and the coherent spike train generated by SISR is more robust than that generated deterministically. ISR also depends on the location of initial conditions and on the time-scale separation parameter of the model equation. Our results could also explain why real biological neurons having similar physiological features and synaptic inputs may encode very different information.
Economic decision making and the application of nonparametric prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2007-01-01
Sustained increases in energy prices have focused attention on gas resources in low permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are large. Planning and development decisions for extraction of such resources must be area-wide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm the decision to enter such plays depends on reconnaissance level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional scale cost functions. The context of the worked example is the Devonian Antrim shale gas play, Michigan Basin. One finding relates to selection of the resource prediction model to be used with economic models. Models which can best predict aggregate volume over larger areas (many hundreds of sites) may lose granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined by extraneous factors. The paper also shows that when these simple prediction models are used to strategically order drilling prospects, the gain in gas volume over volumes associated with simple random site selection amounts to 15 to 20 percent. It also discusses why the observed benefit of updating predictions from results of new drilling, as opposed to following static predictions, is somewhat smaller. Copyright 2007, Society of Petroleum Engineers.
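A minimal example of the kind of nonparametric local prediction the paper describes, here plain Nadaraya-Watson kernel regression (my choice of estimator, not necessarily the authors') on made-up one-dimensional site data:

```python
import math

def nw_predict(x0, xs, ys, bandwidth):
    # Nadaraya-Watson kernel regression: a locally weighted average of
    # observed values, with Gaussian weights centred on the query point.
    ws = [math.exp(-0.5 * ((x0 - x) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

# Hypothetical drilling data: recoverable gas volume vs. site coordinate.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 2.9, 4.2, 3.8, 3.1, 2.2]

est = nw_predict(2.5, xs, ys, bandwidth=0.8)
print(f"predicted volume at untested site 2.5: {est:.2f}")
```

Ranking untested sites by such local predictions, instead of drilling in arbitrary order, is the mechanism behind the 15 to 20 percent volume gain reported above.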
Towards a Model for Protein Production Rates
NASA Astrophysics Data System (ADS)
Dong, J. J.; Schmittmann, B.; Zia, R. K. P.
2007-07-01
In the process of translation, ribosomes read the genetic code on an mRNA and assemble the corresponding polypeptide chain. The ribosomes perform discrete directed motion which is well modeled by a totally asymmetric simple exclusion process (TASEP) with open boundaries. Using Monte Carlo simulations and a simple mean-field theory, we discuss the effect of one or two "bottlenecks" (i.e., slow codons) on the production rate of the final protein. Confirming and extending previous work by Chou and Lakatos, we find that the location and spacing of the slow codons can affect the production rate quite dramatically. In particular, we observe a novel "edge" effect, i.e., an interaction of a single slow codon with the system boundary. We focus in detail on ribosome density profiles and provide a simple explanation for the length scale which controls the range of these interactions.
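A minimal TASEP sketch (illustrative parameters and a random-sequential update rule, not the authors' exact scheme) showing how a single slow codon depresses the steady-state current, i.e. the protein production rate:

```python
import random

def tasep_current(length, rates, entry, exit_p, steps, seed=5):
    # Random-sequential-update TASEP with open boundaries: each step,
    # pick one of the length+1 bonds (including entry and exit) and
    # attempt a hop; count exits to estimate the steady-state current.
    rng = random.Random(seed)
    lattice = [0] * length
    exits = 0
    for _ in range(steps):
        i = rng.randrange(length + 1)
        if i == 0:                       # injection at the left boundary
            if not lattice[0] and rng.random() < entry:
                lattice[0] = 1
        elif i == length:                # a ribosome completes and leaves
            if lattice[-1] and rng.random() < exit_p:
                lattice[-1] = 0
                exits += 1
        else:                            # internal hop with site-dependent rate
            if lattice[i - 1] and not lattice[i] and rng.random() < rates[i - 1]:
                lattice[i - 1], lattice[i] = 0, 1
    # current in exits per sweep (length+1 attempted moves per sweep)
    return exits / steps * (length + 1)

L, steps = 100, 1_000_000
uniform = [1.0] * (L - 1)
slow = list(uniform)
slow[L // 2] = 0.2                       # one slow codon mid-chain
j_fast = tasep_current(L, uniform, 0.8, 0.8, steps)
j_slow = tasep_current(L, slow, 0.8, 0.8, steps)
print(f"current without bottleneck: {j_fast:.3f}")
print(f"current with slow codon:    {j_slow:.3f}")
```

The uniform chain sits in the maximal-current phase (J near 1/4), while the bottleneck caps the current well below that; placing two such slow sites close together would depress it further, which is the spacing effect studied above.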
Renormalization Group Theory of Bolgiano Scaling in Boussinesq Turbulence
NASA Technical Reports Server (NTRS)
Rubinstein, Robert
1994-01-01
Bolgiano scaling in Boussinesq turbulence is analyzed using the Yakhot-Orszag renormalization group. For this purpose, an isotropic model is introduced. Scaling exponents are calculated by forcing the temperature equation so that the temperature variance flux is constant in the inertial range. Universal amplitudes associated with the scaling laws are computed by expanding about a logarithmic theory. Connections between this formalism and the direct interaction approximation are discussed. It is suggested that the Yakhot-Orszag theory yields a lowest order approximate solution of a regularized direct interaction approximation which can be corrected by a simple iterative procedure.
Gas production in the Barnett Shale obeys a simple scaling theory
Patzek, Tad W.; Male, Frank; Marder, Michael
2013-01-01
Natural gas from tight shale formations will provide the United States with a major source of energy over the next several decades. Estimates of gas production from these formations have mainly relied on formulas designed for wells with a different geometry. We consider the simplest model of gas production consistent with the basic physics and geometry of the extraction process. In principle, solutions of the model depend upon many parameters, but in practice and within a given gas field, all but two can be fixed at typical values, leading to a nonlinear diffusion problem we solve exactly with a scaling curve. The scaling curve production rate declines as 1 over the square root of time early on, and it later declines exponentially. This simple model provides a surprisingly accurate description of gas extraction from 8,294 wells in the United States’ oldest shale play, the Barnett Shale. There is good agreement with the scaling theory for 2,057 horizontal wells in which production started to decline exponentially in less than 10 y. The remaining 6,237 horizontal wells in our analysis are too young for us to predict when exponential decline will set in, but the model can nevertheless be used to establish lower and upper bounds on well lifetime. Finally, we obtain upper and lower bounds on the gas that will be produced by the wells in our sample, individually and in total. The estimated ultimate recovery from our sample of 8,294 wells is between 10 and 20 trillion standard cubic feet. PMID:24248376
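The crossover the scaling curve captures, square-root decline early and exponential decline late, can be reproduced with the textbook series solution for linear diffusion out of a slab (a generic illustration in dimensionless time, not the authors' exact boundary-value problem):

```python
import math

def recovery_factor(tau, terms=200):
    # Fraction recovered from a 1-D slab draining through its faces:
    # standard eigenfunction series for the linear diffusion equation.
    s = 0.0
    for n in range(terms):
        k = (2 * n + 1) * math.pi / 2.0
        s += (2.0 / k ** 2) * math.exp(-k ** 2 * tau)
    return 1.0 - s

def rate(tau, dtau=1e-6):
    # Production rate as a central-difference derivative of recovery.
    return (recovery_factor(tau + dtau) - recovery_factor(tau - dtau)) / (2 * dtau)

# Early times: rate ~ tau**(-1/2), so quadrupling tau halves the rate.
early = rate(1e-4) / rate(4e-4)
# Late times: single-mode exponential decline, rate ~ exp(-pi**2 * tau / 4).
late = rate(2.0) / rate(3.0)
print(f"early-time rate ratio (expect 2 for t^-1/2 decline): {early:.3f}")
print(f"late-time rate ratio (expect exp(pi^2/4) ~ 11.8):    {late:.3f}")
```

The same single dimensionless curve, stretched by a characteristic interference time, is what collapses thousands of wells onto one scaling function in the analysis above.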
Magnetic Doppler imaging of Ap stars
NASA Astrophysics Data System (ADS)
Silvester, J.; Wade, G. A.; Kochukhov, O.; Landstreet, J. D.; Bagnulo, S.
2008-04-01
Historically, the magnetic field geometries of the chemically peculiar Ap stars were modelled in the context of a simple dipole field. However, with the acquisition of increasingly sophisticated diagnostic data, it has become clear that the large-scale field topologies exhibit important departures from this simple model. Recently, new high-resolution circular and linear polarisation spectroscopy has even hinted at the presence of strong, small-scale field structures, which were completely unexpected based on earlier modelling. This project investigates the detailed structure of these strong fossil magnetic fields, in particular the large-scale field geometry, as well as small scale magnetic structures, by mapping the magnetic and chemical surface structure of a selected sample of Ap stars. These maps will be used to investigate the relationship between the local field vector and local surface chemistry, looking for the influence the field may have on the various chemical transport mechanisms (i.e., diffusion, convection and mass loss). This will lead to better constraints on the origin and evolution, as well as refining the magnetic field model for Ap stars. Mapping will be performed using high resolution and signal-to-noise ratio time-series of spectra in both circular and linear polarisation obtained using the new-generation ESPaDOnS (CFHT, Mauna Kea, Hawaii) and NARVAL spectropolarimeters (Pic du Midi Observatory). With these data we will perform tomographic inversion of Doppler-broadened Stokes IQUV Zeeman profiles of a large variety of spectral lines using the INVERS10 magnetic Doppler imaging code, simultaneously recovering the detailed surface maps of the vector magnetic field and chemical abundances.
Communication: Simple liquids' high-density viscosity
NASA Astrophysics Data System (ADS)
Costigliola, Lorenzo; Pedersen, Ulf R.; Heyes, David M.; Schrøder, Thomas B.; Dyre, Jeppe C.
2018-02-01
This paper argues that the viscosity of simple fluids at densities above that of the triple point is a specific function of temperature relative to the freezing temperature at the density in question. The proposed viscosity expression, which is arrived at in part by reference to the isomorph theory of systems with hidden scale invariance, describes computer simulations of the Lennard-Jones system as well as argon and methane experimental data and simulation results for an effective-pair-potential model of liquid sodium.
Simple atmospheric perturbation models for sonic-boom-signature distortion studies
NASA Technical Reports Server (NTRS)
Ehernberger, L. J.; Wurtele, Morton G.; Sharman, Robert D.
1994-01-01
Sonic-boom propagation from flight level to ground is influenced by wind and speed-of-sound variations resulting from temperature changes in both the mean atmospheric structure and small-scale perturbations. Meteorological behavior generally produces complex combinations of atmospheric perturbations in the form of turbulence, wind shears, up- and down-drafts and various wave behaviors. Differences between the speed of sound at the ground and at flight level will influence the threshold flight Mach number for which the sonic boom first reaches the ground as well as the width of the resulting sonic-boom carpet. Mean atmospheric temperature and wind structure as a function of altitude vary with location and time of year. These average properties of the atmosphere are well-documented and have been used in many sonic-boom propagation assessments. In contrast, smaller scale atmospheric perturbations are also known to modulate the shape and amplitude of sonic-boom signatures reaching the ground, but specific perturbation models have not been established for evaluating their effects on sonic-boom propagation. The purpose of this paper is to present simple examples of atmospheric vertical temperature gradients, wind shears, and wave motions that can guide preliminary assessments of nonturbulent atmospheric perturbation effects on sonic-boom propagation to the ground. The use of simple discrete atmospheric perturbation structures can facilitate the interpretation of the resulting sonic-boom propagation anomalies as well as intercomparisons among varied flight conditions and propagation models.
Cosmic Star Formation: A Simple Model of the SFRD(z)
NASA Astrophysics Data System (ADS)
Chiosi, Cesare; Sciarratta, Mauro; D’Onofrio, Mauro; Chiosi, Emanuela; Brotto, Francesca; De Michele, Rosaria; Politino, Valeria
2017-12-01
We investigate the evolution of the cosmic star formation rate density (SFRD) from redshift z = 20 to z = 0 and compare it with the observational one by Madau and Dickinson derived from recent compilations of ultraviolet (UV) and infrared (IR) data. The theoretical SFRD(z) and its evolution are obtained using a simple model that folds together the star formation histories of prototype galaxies that are designed to represent real objects of different morphological type along the Hubble sequence and the hierarchical growth of structures under the action of gravity from small perturbations to large-scale objects in Λ-CDM cosmogony, i.e., the number density of dark matter halos N(M,z). Although the overall model is very simple and easy to set up, it provides results that closely match those obtained from highly complex large-scale N-body simulations. The simplicity of our approach allows us to test different assumptions for the star formation law in galaxies, the effects of energy feedback from stars to interstellar gas, the efficiency of galactic winds, and also the effect of N(M,z). The result of our analysis is that in the framework of the hierarchical assembly of galaxies, the so-called time-delayed star formation under plain assumptions mainly for the energy feedback and galactic winds can reproduce the observational SFRD(z).
A class of simple bouncing and late-time accelerating cosmologies in f(R) gravity
NASA Astrophysics Data System (ADS)
Kuiroukidis, A.
We consider the field equations for a flat FRW cosmological model in an a priori generic f(R) gravity model and cast them into a completely normalized and dimensionless system of ODEs for the scale factor and the function f(R), with respect to the scalar curvature R. It is shown that, under reasonable assumptions, namely a power-law functional form for the f(R) gravity model, one can produce simple analytical and numerical solutions describing bouncing cosmological models that are, in addition, late-time accelerating. The power-law form for the f(R) gravity model is typically considered in the literature as the most concrete, reasonable, practical and viable assumption [see S. D. Odintsov and V. K. Oikonomou, Phys. Rev. D 90 (2014) 124083, arXiv:1410.8183 [gr-qc]].
Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a pres...
Johnson, Jay R.; Wing, Simon
2017-01-01
Sheared plasma flows at the low-latitude boundary layer (LLBL) correlate well with early afternoon auroral arcs and upward field-aligned currents. We present a simple analytic model that relates solar wind and ionospheric parameters to the strength and thickness of field-aligned currents (Λ) in a region of sheared velocity, such as the LLBL. We compare the predictions of the model with DMSP observations and find remarkably good scaling of the upward region 1 currents with solar wind and ionospheric parameters in a region located at the boundary layer or open field lines at 1100–1700 magnetic local time. We demonstrate that Λ ~ n_sw^(-0.5) and Λ ~ L when Λ/L < 5, where L is the auroral electrostatic scale length. The sheared boundary layer thickness (Δ_m) is inferred to be around 3000 km, which appears to have a weak dependence on V_sw. J_‖ depends on Δ_m, Σ_p, n_sw, and V_sw. The analytic model provides a simple way to organize data and to infer boundary layer structures from ionospheric data. PMID:29057194
Predator prey oscillations in a simple cascade model of drift wave turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berionni, V.; Guercan, Oe. D.
2011-11-15
A reduced three shell limit of a simple cascade model of drift wave turbulence, which emphasizes nonlocal interactions with a large scale mode, is considered. It is shown to describe both the well known predator prey dynamics between the drift waves and zonal flows and to reduce to the standard three wave interaction equations. Here, this model is considered as a dynamical system whose characteristics are investigated. The analytical solutions for the purely nonlinear limit are given in terms of the Jacobi elliptic functions. An approximate analytical solution involving Jacobi elliptic functions and exponential growth is computed using scale separation for the case of unstable solutions that are observed when the energy injection rate is high. The fixed points of the system are determined, and the behavior around these fixed points is studied. The system is shown to display periodic solutions corresponding to limit cycle oscillations, apparently chaotic phase space orbits, as well as unstable solutions that grow slowly while oscillating rapidly. The period doubling route to transition to chaos is examined.
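A minimal sketch of the predator-prey dynamics mentioned above, reduced to a standard Lotka-Volterra pair for drift-wave energy W (prey) and zonal-flow energy Z (predator); this is a simplification, not the paper's full three-shell cascade, and the parameter values are illustrative.

```python
# Drift-wave / zonal-flow predator-prey sketch:
#   dW/dt = gamma*W - alpha*W*Z   (drift waves grow, are eaten by zonal flows)
#   dZ/dt = alpha*W*Z - mu*Z      (zonal flows feed on drift waves, are damped)
def rhs(w, z, gamma=1.0, alpha=1.0, mu=0.5):
    return gamma * w - alpha * w * z, alpha * w * z - mu * z

def rk4_step(w, z, dt, **p):
    k1 = rhs(w, z, **p)
    k2 = rhs(w + 0.5 * dt * k1[0], z + 0.5 * dt * k1[1], **p)
    k3 = rhs(w + 0.5 * dt * k2[0], z + 0.5 * dt * k2[1], **p)
    k4 = rhs(w + dt * k3[0], z + dt * k3[1], **p)
    return (w + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            z + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def integrate(w0=0.8, z0=0.5, dt=0.01, steps=5000, **p):
    traj = [(w0, z0)]
    for _ in range(steps):
        traj.append(rk4_step(*traj[-1], dt, **p))
    return traj
```

The orbits are the closed limit-cycle oscillations of the purely nonlinear limit; the quantity H = α(W + Z) − μ ln W − γ ln Z is conserved along them, which makes a convenient integration check.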
Self-organizing biopsychosocial dynamics and the patient-healer relationship.
Pincus, David
2012-01-01
The patient-healer relationship has an increasing area of interest for complementary and alternative medicine (CAM) researchers. This focus on the interpersonal context of treatment is not surprising as dismantling studies, clinical trials and other linear research designs continually point toward the critical role of context and the broadband biopsychosocial nature of therapeutic responses to CAM. Unfortunately, the same traditional research models and methods that fail to find simple and specific treatment-outcome relations are similarly failing to find simple and specific mechanisms to explain how interpersonal processes influence patient outcomes. This paper presents an overview of some of the key models and methods from nonlinear dynamical systems that are better equipped for empirical testing of CAM outcomes on broadband biopsychosocial processes. Suggestions are made for CAM researchers to assist in modeling the interactions among key process dynamics interacting across biopsychosocial scales: empathy, intra-psychic conflict, physiological arousal, and leukocyte telomerase activity. Finally, some speculations are made regarding the possibility for deeper cross-scale information exchange involving quantum temporal nonlocality. Copyright © 2012 S. Karger AG, Basel.
Inverse and Predictive Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syracuse, Ellen Marie
The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.
Interpreting the power spectrum of Dansgaard-Oeschger events via stochastic dynamical systems
NASA Astrophysics Data System (ADS)
Mitsui, Takahito; Lenoir, Guillaume; Crucifix, Michel
2017-04-01
Dansgaard-Oeschger (DO) events are abrupt climate shifts, which are particularly pronounced in the North Atlantic region during glacial periods [Dansgaard et al. 1993]. The signals are most clearly found in δ18O or log [Ca2+] records of Greenland ice cores. The power spectrum S(f) of DO events has attracted attention over two decades, with debates on the apparent 1.5-kyr periodicity [Grootes & Stuiver 1997; Schultz et al. 2002; Ditlevsen et al. 2007] and the scaling property over several time scales [Schmitt, Lovejoy, & Schertzer 1995; Rypdal & Rypdal 2016]. The scaling property is written most simply as S(f) ~ f^(-β), β ≈ 1.4. However, the physics as well as the underlying dynamics of the periodicity and the scaling property are still not clear. Pioneering works on modelling the spectrum of DO events were done by Cessi (1994) and Ditlevsen (1999), but their model-data comparisons of the spectra are rather qualitative. Here, we show that simple stochastic dynamical systems can generate power spectra statistically consistent with the observed spectra over a wide range of frequencies, from orbital frequencies to the Nyquist frequency (= 1/40 yr^(-1)). We characterize the scaling property of the spectrum by defining a local scaling exponent β_loc. For the NGRIP log [Ca2+] record, β_loc increases from ~1 to ~2 as the frequency increases from ~1/5000 yr^(-1) to ~1/500 yr^(-1), and decreases toward zero as the frequency increases from ~1/500 yr^(-1) to the Nyquist frequency. For the δ18O record, β_loc increases from ~1 to ~1.5 as the frequency increases from ~1/5000 yr^(-1) to ~1/1000 yr^(-1), and decreases toward zero as the frequency increases from ~1/1000 yr^(-1) to the Nyquist frequency. This systematic breaking of a single scaling is reproduced by the simple stochastic models.
In particular, the models suggest that the flattening of the spectra, starting from the multi-centennial scale and ending at the Nyquist frequency, results from both non-dynamical (or non-system) noise and the 20-yr binning of the ice core records. The modelling part of this research is partially based on the following work: Takahito Mitsui and Michel Crucifix, Influence of external forcings on abrupt millennial-scale climate changes: a statistical modelling study, Climate Dynamics (first online). doi:10.1007/s00382-016-3235-z
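The local scaling exponent β_loc used above can be estimated, in a hedged sketch, as minus the local log-log slope of the periodogram in a chosen frequency window. The synthetic 1/f^β record and the window bounds below are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def synthetic_scaling_noise(n, beta, seed=0):
    """White noise spectrally shaped to a power spectrum S(f) ~ f^(-beta)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    f = np.fft.rfftfreq(n, d=1.0)
    shape = np.zeros_like(f)
    shape[1:] = f[1:] ** (-beta / 2.0)   # amplitude ~ f^(-beta/2), zero mean
    return np.fft.irfft(np.fft.rfft(white) * shape, n)

def local_scaling_exponent(x, f_lo, f_hi):
    """beta_loc = minus the log-log periodogram slope over [f_lo, f_hi]."""
    n = len(x)
    f = np.fft.rfftfreq(n, d=1.0)
    p = np.abs(np.fft.rfft(x)) ** 2
    m = (f >= f_lo) & (f <= f_hi)
    slope, _ = np.polyfit(np.log(f[m]), np.log(p[m]), 1)
    return -slope
```

Fitting the log-periodogram is noisy point by point, but over a couple of decades the slope estimate recovers the generating exponent well.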
Simple rules govern the patterns of Arctic sea ice melt ponds
NASA Astrophysics Data System (ADS)
Popovic, P.; Cael, B. B.; Abbot, D. S.; Silber, M.
2017-12-01
Climate change, amplified in the far north, has led to a rapid sea ice decline in recent years. Melt ponds that form on the surface of Arctic sea ice in the summer significantly lower the ice albedo, thereby accelerating ice melt. Pond geometry controls the details of this crucial feedback. However, currently it is unclear how to model this intricate geometry. Here we show that an extremely simple model of voids surrounding randomly sized and placed overlapping circles reproduces the essential features of pond patterns. The model has only two parameters, circle scale and the fraction of the surface covered by voids, and we choose them by comparing the model to pond images. Using these parameters the void model robustly reproduces all of the examined pond features such as the ponds' area-perimeter relationship and the area-abundance relationship over nearly 7 orders of magnitude. By analyzing airborne photographs of sea ice, we also find that the typical pond scale is surprisingly constant across different years, regions, and ice types. These results demonstrate that the geometric and abundance patterns of Arctic melt ponds can be simply described, and can guide future models of Arctic melt ponds to improve predictions of how sea ice will respond to Arctic warming.
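A quick check of the geometry behind the void model above: for circles of a single radius dropped uniformly at random on a unit torus, the expected uncovered (void) fraction is exp(−nπr²), which a Monte Carlo estimate should reproduce. The single fixed radius is a simplifying assumption here; the model in the abstract uses randomly sized circles.

```python
import math, random

def void_fraction(n_circles, radius, n_samples=20000, seed=1):
    """Monte Carlo estimate of the uncovered ('void') area fraction for
    n_circles of the given radius dropped uniformly on a unit torus."""
    rng = random.Random(seed)
    centers = [(rng.random(), rng.random()) for _ in range(n_circles)]
    r2 = radius * radius
    uncovered = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        hit = False
        for cx, cy in centers:
            dx = min(abs(x - cx), 1 - abs(x - cx))  # torus wrap-around
            dy = min(abs(y - cy), 1 - abs(y - cy))
            if dx * dx + dy * dy <= r2:
                hit = True
                break
        uncovered += not hit
    return uncovered / n_samples

def expected_void_fraction(n_circles, radius):
    """Poisson-coverage limit: uncovered fraction = exp(-n * pi * r^2)."""
    return math.exp(-n_circles * math.pi * radius * radius)
```

With 200 circles of radius 0.05 the predicted void fraction is about 0.21, so the two model parameters (circle scale and void fraction) are directly linked to the circle density.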
Improved pattern scaling approaches for the use in climate impact studies
NASA Astrophysics Data System (ADS)
Herger, Nadja; Sanderson, Benjamin M.; Knutti, Reto
2015-05-01
Pattern scaling is a simple way to produce climate projections beyond the scenarios run with expensive global climate models (GCMs). The simplest technique has known limitations and assumes that a spatial climate anomaly pattern obtained from a GCM can be scaled by the global mean temperature (GMT) anomaly. We propose alternatives and assess their skills and limitations. One approach which avoids scaling is to consider a period in a different scenario with the same GMT change. It is attractive as it provides patterns of any temporal resolution that are consistent across variables, and it does not distort variability. Second, we extend the traditional approach with a land-sea contrast term, which provides the largest improvements over the traditional technique. When interpolating between known bounding scenarios, the proposed methods significantly improve the accuracy of the pattern scaled scenario with little computational cost. The remaining errors are much smaller than the Coupled Model Intercomparison Project Phase 5 model spread.
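The traditional technique and the land-sea-contrast extension described above can be sketched as follows; the field names and the two-term form are illustrative assumptions.

```python
def scale_pattern(pattern, gmt_anomaly):
    """Traditional pattern scaling: the per-grid-cell anomaly pattern
    (local warming per degree of GMT change) times the GMT anomaly."""
    return [p * gmt_anomaly for p in pattern]

def scale_pattern_land_sea(pattern, contrast, gmt_anomaly, land_sea_anomaly):
    """Two-term variant: adds a land-sea contrast pattern scaled by the
    land-minus-ocean mean temperature anomaly."""
    return [p * gmt_anomaly + c * land_sea_anomaly
            for p, c in zip(pattern, contrast)]
```

For example, a land cell with pattern value 1.2 warms 2.4 °C under a 2 °C GMT anomaly; the contrast term then corrects for the fact that land warms faster than the global mean.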
A model of return intervals between earthquake events
NASA Astrophysics Data System (ADS)
Zhou, Yu; Chechkin, Aleksei; Sokolov, Igor M.; Kantz, Holger
2016-06-01
Application of the diffusion entropy analysis and the standard deviation analysis to the time sequence of the southern California earthquake events from 1976 to 2002 uncovered scaling behavior typical for anomalous diffusion. However, the origin of such behavior is still under debate. Some studies attribute the scaling behavior to the correlations in the return intervals, or waiting times, between aftershocks or mainshocks. To elucidate the nature of the scaling, we applied specific reshuffling techniques to eliminate correlations between different types of events and then examined how this affects the scaling behavior. We demonstrate that the origin of the observed scaling behavior is the interplay between the mainshock waiting time distribution and the structure of clusters of aftershocks, not correlations in waiting times between the mainshocks and aftershocks themselves. Our findings are corroborated by numerical simulations of a simple model showing very similar behavior. The mainshocks are modeled by a renewal process with a power-law waiting time distribution between events, and aftershocks follow a nonhomogeneous Poisson process with the rate governed by Omori's law.
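The simple model described in the last two sentences can be sketched as follows: mainshocks as a renewal process with Pareto (power-law) waiting times, and aftershocks from a nonhomogeneous Poisson process with Omori rate K/(c+t)^p, sampled by inversion of the normalized Omori density. All parameter values are illustrative assumptions, not the authors' calibration.

```python
import math, random

def poisson_sample(rng, lam):
    """Knuth's multiplicative method; adequate for modest lam."""
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            return k
        k += 1

def simulate_sequence(t_max, alpha=1.5, t0=1.0, K=5.0, c=0.1, p=1.8, seed=2):
    """Mainshocks: renewal process with Pareto waiting times w = t0*u^(-1/alpha).
    Aftershocks: nonhomogeneous Poisson process with Omori rate K/(c+t)^p
    (p > 1) after each mainshock, lags sampled by inverse-CDF."""
    rng = random.Random(seed)
    mains, events = [], []
    t = 0.0
    while True:
        t += t0 * (1.0 - rng.random()) ** (-1.0 / alpha)  # Pareto waiting time
        if t > t_max:
            break
        mains.append(t)
        # Expected aftershock count = integral of the Omori rate over (0, inf).
        n_expected = K * c ** (1.0 - p) / (p - 1.0)
        for _ in range(poisson_sample(rng, n_expected)):
            u = rng.random()
            lag = c * ((1.0 - u) ** (-1.0 / (p - 1.0)) - 1.0)  # inverse CDF
            events.append(t + lag)
    events.extend(mains)
    return sorted(e for e in events if e <= t_max), mains
```

Reshuffling experiments like those in the abstract can then be run by permuting the mainshock waiting times or detaching aftershock clusters from their parents.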
Le Mouël, Jean-Louis; Allègre, Claude J.; Narteau, Clément
1997-01-01
A scaling law approach is used to simulate the dynamo process of the Earth’s core. The model is made of embedded turbulent domains of increasing dimensions, up to the largest, whose size is comparable with the size of the core, pervaded by large-scale magnetic fields. Left-handed or right-handed cyclones appear at the lowest scale, the scale of the elementary domains of the hierarchical model, and disappear. These elementary domains then behave like electromotor generators with opposite polarities depending on whether they contain a left-handed or a right-handed cyclone. To transfer the behavior of the elementary domains to larger ones, a dynamic renormalization approach is used. A simple rule is adopted to determine whether a domain of scale l is a generator—and what its polarity is—as a function of the state of the (l − 1) domains it is made of. This mechanism is used as the main ingredient of a kinematic dynamo model, which displays polarity intervals, excursions, and reversals of the geomagnetic field. PMID:11038547
NASA Astrophysics Data System (ADS)
Dhara, Chirag; Renner, Maik; Kleidon, Axel
2015-04-01
The convective transport of heat and moisture plays a key role in the climate system, but the transport is typically parameterized in models. Here, we aim at the simplest possible physical representation and treat convective heat fluxes as the result of a heat engine. We combine the well-known Carnot limit of this heat engine with the energy balances of the surface-atmosphere system that describe how the temperature difference is affected by convective heat transport, yielding a maximum power limit of convection. This results in a simple analytic expression for convective strength that depends primarily on surface solar absorption. We compare this expression with an idealized grey atmosphere radiative-convective (RC) model as well as Global Circulation Model (GCM) simulations at the grid scale. We find that our simple expression as well as the RC model can explain much of the geographic variation of the GCM output, resulting in strong linear correlations among the three approaches. The RC model, however, shows a lower bias than our simple expression. We identify the use of the prescribed convective adjustment in RC-like models as the reason for the lower bias. The strength of our model lies in its ability to capture the geographic variation of convective strength with a parameter-free expression. On the other hand, the comparison with the RC model indicates a method for improving the formulation of radiative transfer in our simple approach. We also find that the latent heat fluxes, as well as their sensitivity to surface warming, compare very well among the approaches. What our comparison suggests is that the strength of convection and its sensitivity in the climatic mean can be estimated relatively robustly by rather simple approaches.
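A toy version of the maximum-power argument above: assume a linear surface energy balance R_s = J + k·ΔT that partitions absorbed solar radiation R_s between the convective flux J and a radiative-conductive loss, and a Carnot-like power P = J·ΔT/T_s. This is an illustrative simplification, not the paper's exact expressions; when T_s is approximated as fixed, the optimum is exactly R_s/2, and retaining the ΔT feedback in T_s shifts it slightly upward.

```python
def convective_power(J, R_s, k=2.0, T_a=288.0):
    """Power extracted by the convective heat engine: the surface energy
    balance R_s = J + k*dT sets the surface-air temperature difference dT
    sustained by flux J, and P = J * dT / T_s is the Carnot-like power."""
    dT = (R_s - J) / k          # W m^-2 divided by W m^-2 K^-1 gives K
    T_s = T_a + dT
    return J * dT / T_s

def optimal_flux(R_s, k=2.0, T_a=288.0, n=100000):
    """Brute-force grid search for the flux that maximizes power."""
    best_J, best_P = 0.0, float("-inf")
    for i in range(n + 1):
        J = R_s * i / n
        P = convective_power(J, R_s, k, T_a)
        if P > best_P:
            best_J, best_P = J, P
    return best_J
```

For R_s = 240 W m⁻² the optimal flux sits a little above R_s/2 = 120 W m⁻², illustrating why convective strength in this limit depends primarily on surface solar absorption.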
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1986-01-01
Computational predictions of turbulent flow in sharply curved 180 degree turn around ducts are presented. The CNS2D computer code is used to solve the equations of motion for two-dimensional incompressible flows transformed to a nonorthogonal body-fitted coordinate system. This procedure incorporates the pressure velocity correction algorithm SIMPLE-C to iteratively solve a discretized form of the transformed equations. A multiple scale turbulence model based on simplified spectral partitioning is employed to obtain closure. Flow field predictions utilizing the multiple scale model are compared to features predicted by the traditional single scale k-epsilon model. Tuning parameter sensitivities of the multiple scale model applied to turn around duct flows are also determined. In addition, a wall function approach based on a wall law suitable for incompressible turbulent boundary layers under strong adverse pressure gradients is tested. Turn around duct flow characteristics utilizing this modified wall law are presented and compared to results based on a standard wall treatment.
Asynchronously Coupled Models of Ice Loss from Airless Planetary Bodies
NASA Astrophysics Data System (ADS)
Schorghofer, N.
2016-12-01
Ice is found near the surface of dwarf planet Ceres, in some main belt asteroids, and perhaps in NEOs that will be explored or even mined in the future. The simple but important question of how fast ice is lost from airless bodies can present computational challenges. The thermal cycle on the surface repeats on much shorter time-scales than those over which ice retreats; one process acts on the time-scale of hours, the other over billions of years. This multi-scale situation is addressed with asynchronous coupling, where models with different time steps are woven together. The sharp contrast at the retreating ice table is dealt with using explicit interface tracking. For Ceres, which is covered with a thermally insulating dust mantle, desiccation rates are orders of magnitude slower than had been calculated with simpler models. More model challenges remain: the role of impact devolatilization and the time-scale for complete desiccation of an asteroid. I will also share my experience with code distribution using GitHub and Zenodo.
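Asynchronous coupling with explicit interface tracking can be sketched as below: an inner (fast) loop resolves the diurnal thermal cycle to get a mean sublimation rate at the current ice-table depth, and an outer (slow) loop advances the retreating ice table with a much larger time step. The exponential decay of the loss rate with burial depth and all rate constants are illustrative stand-ins for a full vapor-diffusion calculation.

```python
import math

def mean_loss_rate(depth, base_rate=1e-3, scale=0.5, n_inner=24):
    """Inner (fast) loop: average the diurnally varying sublimation rate
    over one thermal cycle at the current ice-table depth."""
    total = 0.0
    for h in range(n_inner):
        diurnal = max(0.0, math.sin(2 * math.pi * h / n_inner))  # daytime only
        total += base_rate * diurnal * math.exp(-depth / scale)
    return total / n_inner

def retreat_ice_table(t_end, dt_outer=100.0, depth0=0.0):
    """Outer (slow) loop: advance the ice-table depth with a large time step,
    re-evaluating the fast-cycle average only once per outer step."""
    depth, t = depth0, 0.0
    history = [(t, depth)]
    while t < t_end:
        depth += mean_loss_rate(depth) * dt_outer  # explicit interface step
        t += dt_outer
        history.append((t, depth))
    return history
```

The two time steps are decoupled by orders of magnitude, which is the essential saving: the fast model runs once per outer step rather than once per hour of simulated time.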
Cosmology in Mirror Twin Higgs and neutrino masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacko, Zackaria; Craig, Nathaniel; Fox, Patrick J.
We explore a simple solution to the cosmological challenges of the original Mirror Twin Higgs (MTH) model that leads to interesting implications for experiment. We consider theories in which both the standard model and mirror neutrinos acquire masses through the familiar seesaw mechanism, but with a low right-handed neutrino mass scale of order a few GeV. In these ...
ERIC Educational Resources Information Center
Cai, Li
2013-01-01
Lord and Wingersky's (1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined…
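For dichotomous items at a fixed ability level, the unidimensional Lord-Wingersky recursion referenced above builds the summed-score likelihood one item at a time: each new item's correct/incorrect probabilities redistribute the current score table. A minimal sketch:

```python
def summed_score_likelihood(p_correct):
    """Lord-Wingersky recursion for dichotomous items.  Given each item's
    probability of a correct response at a fixed ability level, return the
    list L with L[s] = probability of summed score s."""
    L = [1.0]  # zero items: score 0 with probability 1
    for p in p_correct:
        new_L = [0.0] * (len(L) + 1)
        for s, prob in enumerate(L):
            new_L[s] += prob * (1.0 - p)   # item answered incorrectly
            new_L[s + 1] += prob * p       # item answered correctly
        L = new_L
    return L
```

The cost is O(number of items × number of score points), versus the exponential cost of enumerating all response patterns, which is why the recursion remains the standard tool; the multidimensional extension mentioned in the abstract applies the same recursion at each fixed quadrature point.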
A Fuzzy Cognitive Model of aeolian instability across the South Texas Sandsheet
NASA Astrophysics Data System (ADS)
Houser, C.; Bishop, M. P.; Barrineau, C. P.
2014-12-01
Characterization of aeolian systems is complicated by rapidly changing surface-process regimes, spatio-temporal scale dependencies, and subjective interpretation of imagery and spatial data. This paper describes the development and application of analytical reasoning to quantify instability of an aeolian environment using scale-dependent information coupled with conceptual knowledge of process and feedback mechanisms. Specifically, a simple Fuzzy Cognitive Model (FCM) for aeolian landscape instability was developed that represents conceptual knowledge of key biophysical processes and feedbacks. Model inputs include satellite-derived surface biophysical and geomorphometric parameters. FCMs are a knowledge-based Artificial Intelligence (AI) technique that merges fuzzy logic and neural computing in which knowledge or concepts are structured as a web of relationships that is similar to both human reasoning and the human decision-making process. Given simple process-form relationships, the analytical reasoning model is able to map the influence of land management practices and the geomorphology of the inherited surface on aeolian instability within the South Texas Sandsheet. Results suggest that FCMs can be used to formalize process-form relationships and information integration analogous to human cognition with future iterations accounting for the spatial interactions and temporal lags across the sand sheets.
Stöckl, Anna L; Kihlström, Klara; Chandler, Steven; Sponberg, Simon
2017-04-05
Flight control in insects is heavily dependent on vision. Thus, in dim light, the decreased reliability of visual signal detection also has consequences for insect flight. We have an emerging understanding of the neural mechanisms that different species employ to adapt the visual system to low light. However, much less explored are comparative analyses of how low light affects the flight behaviour of insect species, and the corresponding links between physiological adaptations and behaviour. We investigated whether the flower tracking behaviour of three hawkmoth species with different diel activity patterns revealed luminance-dependent adaptations, using a system identification approach. We found clear luminance-dependent differences in flower tracking in all three species, which were explained by a simple luminance-dependent delay model that generalized across species. We discuss physiological and anatomical explanations for the variance in tracking responses that could not be explained by such simple models. Differences between species could not be explained by the simple delay model alone; however, in several cases they could be explained through the addition of a second model parameter, a simple scaling term that captures the responsiveness of each species to flower movements. Thus, we demonstrate here that much of the variance in the luminance-dependent flower tracking responses of hawkmoths with different diel activity patterns can be captured by simple models of neural processing. This article is part of the themed issue 'Vision in dim light'. © 2017 The Author(s).
NASA Astrophysics Data System (ADS)
da Silva, Roberto; Vainstein, Mendeli H.; Lamb, Luis C.; Prado, Sandra D.
2013-03-01
We propose a novel probabilistic model that outputs the final standings of a soccer league, based on a simple dynamics that mimics a soccer tournament. In our model, a team is created with a defined potential (ability) which is updated during the tournament according to the results of previous games. The updated potential modifies a team's future winning/losing probabilities. We show that this evolutionary game is able to reproduce the statistical properties of final standings of actual editions of the Brazilian tournament (Brasileirão) if the starting potential is the same for all teams. Other leagues such as the Italian (Calcio) and the Spanish (La Liga) tournaments have notably non-Gaussian traces and cannot be straightforwardly reproduced by this evolutionary non-Markovian model with simple initial conditions. However, we show that by setting the initial abilities based on data from previous tournaments, our model is able to capture the stylized statistical features of double round robin system (DRRS) tournaments in general. A complete understanding of these phenomena deserves much more attention, but we suggest a simple explanation based on data collected in Brazil: here several teams have been crowned champion in previous editions, consistent with a champion that typically emerges from random fluctuations that partly preserve the Gaussian traces during the tournament. On the other hand, in the Italian and Spanish cases, only a few teams in recent history have won their league tournaments. These leagues are based on more robust and hierarchical structures established even before the beginning of the tournament. For the sake of completeness, we also elaborate a totally Gaussian model (which equalizes the winning, drawing, and losing probabilities) and we show that the scores of the Brazilian tournament "Brasileirão" cannot be reproduced.
This shows that the evolutionary aspects are not superfluous and play an important role which must be considered in other alternative models. Finally, we analyze the distortions of our model in situations where a large number of teams is considered, showing the existence of a transition from a single-peaked to a double-peaked histogram of the final classification scores. An interesting scaling is presented for different-sized tournaments.
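The evolutionary update described above can be sketched as a toy simulation (an illustration only, not the authors' implementation: the draw probability, the win-probability rule, and the update step `delta` are all assumptions):

```python
import random

def play_match(pa, pb, rng, p_draw=0.26):
    """Return league points (3/1/0) for two teams with potentials pa, pb.
    The draw probability and win-probability rule are assumptions."""
    r = rng.random()
    p_win_a = (1.0 - p_draw) * pa / (pa + pb)
    if r < p_win_a:
        return 3, 0
    if r < p_win_a + p_draw:
        return 1, 1
    return 0, 3

def simulate_tournament(n_teams=20, delta=0.1, seed=0):
    """Double round robin: every ordered pair plays once (home and away
    legs); the winner's potential grows by delta (the evolutionary update)."""
    rng = random.Random(seed)
    potential = [1.0] * n_teams          # equal starting abilities
    points = [0] * n_teams
    for i in range(n_teams):
        for j in range(n_teams):
            if i == j:
                continue
            pi, pj = play_match(potential[i], potential[j], rng)
            points[i] += pi
            points[j] += pj
            if pi > pj:
                potential[i] += delta    # winner's ability increases
            elif pj > pi:
                potential[j] += delta
    return sorted(points, reverse=True)

standings = simulate_tournament()
```

With 20 teams, 380 matches are played; each decisive match distributes 3 points and each draw 2, which bounds the total points in the final table.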
LHC-scale left-right symmetry and unification
NASA Astrophysics Data System (ADS)
Arbeláez, Carolina; Romão, Jorge C.; Hirsch, Martin; Malinský, Michal
2014-02-01
We construct a comprehensive list of nonsupersymmetric standard model extensions with a low-scale left-right (LR)-symmetric intermediate stage that may be obtained as simple low-energy effective theories within a class of renormalizable SO(10) grand unified theories. Unlike the traditional "minimal" LR models, many of our example settings support perfect gauge coupling unification even if the LR scale is in the LHC domain, at the price of only (a few copies of) one or two types of extra fields pulled down to the TeV-scale ballpark. We discuss the main aspects of potentially realistic model building conforming to the basic constraints from the quark- and lepton-sector flavor structure, proton decay limits, etc. We pay special attention to the theoretical uncertainties related to the limited information about the underlying unified framework in the bottom-up approach, in particular, to their role in the possible extraction of the LR-breaking scale. We observe a general tendency for the models without new colored states in the TeV domain to be on the verge of incompatibility with the proton stability constraints.
Luminet, Jean-Pierre; Weeks, Jeffrey R; Riazuelo, Alain; Lehoucq, Roland; Uzan, Jean-Philippe
2003-10-09
The current 'standard model' of cosmology posits an infinite flat universe forever expanding under the pressure of dark energy. First-year data from the Wilkinson Microwave Anisotropy Probe (WMAP) confirm this model to spectacular precision on all but the largest scales. Temperature correlations across the microwave sky match expectations on angular scales narrower than 60 degrees but, contrary to predictions, vanish on scales wider than 60 degrees. Several explanations have been proposed. One natural approach questions the underlying geometry of space--namely, its curvature and topology. In an infinite flat space, waves from the Big Bang would fill the universe on all length scales. The observed lack of temperature correlations on scales beyond 60 degrees means that the broadest waves are missing, perhaps because space itself is not big enough to support them. Here we present a simple geometrical model of a finite space--the Poincaré dodecahedral space--which accounts for WMAP's observations with no fine-tuning required. The predicted density is Ω_0 ≈ 1.013 > 1, and the model also predicts temperature correlations in matching circles on the sky.
NASA Astrophysics Data System (ADS)
Legates, David R.; Junghenn, Katherine T.
2018-04-01
Many local weather station networks that measure a number of meteorological variables (i.e. mesonetworks) have recently been established, with soil moisture occasionally being part of the suite of measured variables. These mesonetworks provide data from which detailed estimates of various hydrological parameters, such as precipitation and reference evapotranspiration, can be made which, when coupled with simple surface characteristics available from soil surveys, can be used to obtain estimates of soil moisture. The question is: Can meteorological data be used with a simple hydrologic model to accurately estimate daily soil moisture at a mesonetwork site? Using a state-of-the-art mesonetwork that also includes soil moisture measurements across the US State of Delaware, the efficacy of a simple, modified Thornthwaite/Mather-based daily water balance model driven by these mesonetwork observations to estimate site-specific soil moisture is determined. Results suggest that the model works reasonably well for most well-drained sites and provides good qualitative estimates of measured soil moisture, often near the accuracy of the soil moisture instrumentation. The model has particular trouble in that it cannot properly simulate the slow drainage that occurs in poorly drained soils after heavy rains, and interception loss, resulting from grass not being short-cropped as expected, also adversely affects the simulation. However, the model could be tuned to accommodate some non-standard siting characteristics.
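The core bookkeeping of a Thornthwaite/Mather-style daily water balance can be sketched in a few lines (a minimal bucket model for illustration; the paper's modified version adds drainage and siting detail not shown, and the capacity and series below are assumed values):

```python
import math

def daily_swb(precip, pet, capacity=100.0, s0=None):
    """Minimal Thornthwaite/Mather-style daily soil-water bucket.
    precip and pet are daily series (mm); capacity is the available
    water capacity of the soil (mm). Returns daily soil moisture."""
    s = capacity if s0 is None else s0
    out = []
    for p, e in zip(precip, pet):
        if p >= e:
            # wet day: surplus recharges the soil; excess leaves as runoff
            s = min(s + (p - e), capacity)
        else:
            # dry day: soil dries along the exponential T-M retention curve
            s *= math.exp(-(e - p) / capacity)
        out.append(s)
    return out

# Illustrative runs: a wet spell filling the bucket, and a rain-free drydown
wet = daily_swb([10.0] * 5, [2.0] * 5, capacity=50.0, s0=20.0)
dry = daily_swb([0.0] * 5, [5.0] * 5, capacity=50.0, s0=50.0)
```

The wet series saturates at capacity, while the drydown decays smoothly rather than linearly, which is the behavior the exponential retention curve is designed to capture.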
Li, Xinan; Xu, Hongyuan; Cheung, Jeffrey T
2016-12-01
This work describes a new approach for gait analysis and balance measurement. It uses an inertial measurement unit (IMU) that can either be embedded inside a dynamically unstable platform for balance measurement or mounted on the lower back of a human participant for gait analysis. The acceleration data along three Cartesian coordinates is analyzed by the gait-force model to extract bio-mechanics information in both the dynamic state as in the gait analyzer and the steady state as in the balance scale. For the gait analyzer, the simple, noninvasive and versatile approach makes it appealing to a broad range of applications in clinical diagnosis, rehabilitation monitoring, athletic training, sport-apparel design, and many other areas. For the balance scale, it provides a portable platform to measure the postural deviation and the balance index under visual or vestibular sensory input conditions. Despite its simple construction and operation, excellent agreement has been demonstrated between its performance and the high-cost commercial balance unit over a wide dynamic range. The portable balance scale is an ideal tool for routine monitoring of balance index, fall-risk assessment, and other balance-related health issues for both clinical and household use.
Phenomenology of NMSSM in TeV scale mirage mediation
NASA Astrophysics Data System (ADS)
Hagimoto, Kei; Kobayashi, Tatsuo; Makino, Hiroki; Okumura, Ken-ichi; Shimomura, Takashi
2016-02-01
We study the next-to-minimal supersymmetric standard model (NMSSM) with the TeV scale mirage mediation, which is known as a solution for the little hierarchy problem in supersymmetry. Our previous study showed that a 125 GeV Higgs boson is realized with O(10)% fine-tuning for a 1.5 TeV gluino (1 TeV stop) mass. The μ term could be as large as 500 GeV without sacrificing the fine-tuning thanks to a cancellation mechanism. The singlet-doublet mixing is suppressed by tan β. In this paper, we further extend this analysis. We argue that approximate scale symmetries play a role behind the suppression of the singlet-doublet mixing. They reduce the mixing matrix to a simple form that is useful to understand the results of the numerical analysis. We perform a comprehensive analysis of the fine-tuning including the singlet sector by introducing a simple formula for the fine-tuning measure. This shows that the singlet mass of the least fine-tuning is favored by the LEP anomaly for moderate tan β. We also discuss prospects for the precision measurements of the Higgs couplings at LHC and ILC and direct/indirect dark matter searches in the model.
The fluid trampoline: droplets bouncing on a soap film
NASA Astrophysics Data System (ADS)
Bush, John; Gilet, Tristan
2008-11-01
We present the results of a combined experimental and theoretical investigation of droplets falling onto a horizontal soap film. Both static and vertically vibrated soap films are considered. A quasi-static description of the soap film shape yields a force-displacement relation that provides excellent agreement with experiment, and allows us to model the film as a nonlinear spring. This approach yields an accurate criterion for the transition between droplet bouncing and crossing on the static film; moreover, it allows us to rationalize the observed constancy of the contact time and scaling for the coefficient of restitution in the bouncing states. On the vibrating film, a variety of bouncing behaviours were observed, including simple and complex periodic states, multiperiodicity and chaos. A simple theoretical model is developed that captures the essential physics of the bouncing process, reproducing all observed bouncing states. Quantitative agreement between model and experiment is deduced for simple periodic modes, and qualitative agreement for more complex periodic and chaotic bouncing states.
NASA Astrophysics Data System (ADS)
Thurmond, John B.; Drzewiecki, Peter A.; Xu, Xueming
2005-08-01
Geological data collected from outcrop are inherently three-dimensional (3D) and span a variety of scales, from the megascopic to the microscopic. This presents challenges in both interpreting and communicating observations. The Virtual Reality Modeling Language provides an easy way for geoscientists to construct complex visualizations that can be viewed with free software. Field data in tabular form can be used to generate hierarchical multi-scale visualizations of outcrops, which can convey the complex relationships between a variety of data types simultaneously. An example from carbonate mud-mounds in southeastern New Mexico illustrates the embedding of three orders of magnitude of observation into a single visualization, for the purpose of interpreting depositional facies relationships in three dimensions. This type of raw data visualization can be built without software tools, yet is incredibly useful for interpreting and communicating data. Even simple visualizations can aid in the interpretation of complex 3D relationships that are frequently encountered in the geosciences.
A simple microviscometric approach based on Brownian motion tracking.
Hnyluchová, Zuzana; Bjalončíková, Petra; Karas, Pavel; Mravec, Filip; Halasová, Tereza; Pekař, Miloslav; Kubala, Lukáš; Víteček, Jan
2015-02-01
Viscosity-an integral property of a liquid-is traditionally determined by mechanical instruments. The most pronounced disadvantage of such an approach is the requirement of a large sample volume, which poses a serious obstacle, particularly in biology and biophysics when working with limited samples. Scaling down the required volume by means of microviscometry based on tracking the Brownian motion of particles can provide a reasonable alternative. In this paper, we report a simple microviscometric approach which can be conducted with common laboratory equipment. The core of this approach consists in a freely available standalone script to process particle trajectory data based on a Newtonian model. In our study, this setup allowed the sample to be scaled down to 10 μl. The utility of the approach was demonstrated using model solutions of glycerine, hyaluronate, and mouse blood plasma. Therefore, this microviscometric approach based on a newly developed freely available script can be suggested for determination of the viscosity of small biological samples (e.g., body fluids).
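The physics behind such a microviscometer is the Stokes-Einstein relation: the mean squared displacement of a tracked particle gives its diffusion coefficient, from which viscosity follows. A stdlib-only sketch of that route (not the paper's freely available script; the particle radius, time step, and temperature below are illustrative assumptions):

```python
import math
import random

KB = 1.380649e-23                 # Boltzmann constant, J/K

def viscosity_from_steps(steps, dt, radius, temp):
    """Newtonian viscosity from 2D Brownian displacement steps:
    MSD = 4*D*dt in two dimensions, then eta = kT / (6*pi*D*r)."""
    msd = sum(dx * dx + dy * dy for dx, dy in steps) / len(steps)
    diff = msd / (4.0 * dt)
    return KB * temp / (6.0 * math.pi * diff * radius)

# Synthetic tracer in a water-like liquid (eta ~ 1 mPa*s), to check the
# estimator recovers the viscosity used to generate the steps.
eta_true, radius, dt, temp = 1.0e-3, 0.5e-6, 0.05, 298.0
d_true = KB * temp / (6.0 * math.pi * eta_true * radius)
sigma = math.sqrt(2.0 * d_true * dt)      # per-axis step std deviation
rng = random.Random(1)
steps = [(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
         for _ in range(20000)]
eta_est = viscosity_from_steps(steps, dt, radius, temp)
```

With 20,000 steps the statistical error of the MSD estimate is below a percent, so the recovered viscosity lands close to the true value.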
Modeling turbulent energy behavior and sudden viscous dissipation in compressing plasma turbulence
Davidovits, Seth; Fisch, Nathaniel J.
2017-12-21
Here, we present a simple model for the turbulent kinetic energy behavior of subsonic plasma turbulence undergoing isotropic three-dimensional compression, which may exist in various inertial confinement fusion experiments or astrophysical settings. The plasma viscosity depends on both the temperature and the ionization state, for which many possible scalings with compression are possible. For example, in an adiabatic compression the temperature scales as 1/L^2, with L the linear compression ratio, but if thermal energy loss mechanisms are accounted for, the temperature scaling may be weaker. As such, the viscosity has a wide range of net dependencies on the compression. The model presented here, with no parameter changes, agrees well with numerical simulations for a range of these dependencies. This model permits the prediction of the partition of injected energy between thermal and turbulent energy in a compressing plasma.
On identifying relationships between the flood scaling exponent and basin attributes.
Medhi, Hemanta; Tripathi, Shivam
2015-07-01
Floods are known to exhibit self-similarity and follow scaling laws that form the basis of regional flood frequency analysis. However, the relationship between basin attributes and the scaling behavior of floods is still not fully understood. Identifying these relationships is essential for drawing connections between hydrological processes in a basin and the flood response of the basin. The existing studies mostly rely on simulation models to draw these connections. This paper proposes a new methodology that draws connections between basin attributes and the flood scaling exponents by using observed data. In the proposed methodology, a region-of-influence approach is used to delineate homogeneous regions for each gaging station. Ordinary least squares regression is then applied to estimate flood scaling exponents for each homogeneous region, and finally stepwise regression is used to identify basin attributes that affect the flood scaling exponents. The effectiveness of the proposed methodology is tested by applying it to data from river basins in the United States. The results suggest that the flood scaling exponent is small for regions having (i) large abstractions from precipitation in the form of large soil moisture storages and high evapotranspiration losses, and (ii) large fractions of overland flow compared to base flow, i.e., regions having fast-responding basins. Analysis of simple scaling and multiscaling of floods showed evidence of simple scaling for regions in which snowfall dominates the total precipitation.
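The regression step is a log-log fit: under simple scaling, flood quantiles follow Q ~ A**theta with drainage area A, so theta is the OLS slope in log space. A minimal sketch (the synthetic basins below are assumptions for illustration; the paper fits observed quantiles within delineated homogeneous regions):

```python
import math

def scaling_exponent(areas, floods):
    """OLS slope of log(flood quantile) vs log(drainage area), i.e. the
    flood scaling exponent theta in Q ~ A**theta."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(q) for q in floods]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Synthetic basins obeying Q = 2 * A**0.6 exactly: the fit recovers 0.6
areas = [10.0, 50.0, 100.0, 500.0, 1000.0]
theta = scaling_exponent(areas, [2.0 * a ** 0.6 for a in areas])
```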
Gravity waves and the LHC: towards high-scale inflation with low-energy SUSY
NASA Astrophysics Data System (ADS)
He, Temple; Kachru, Shamit; Westphal, Alexander
2010-06-01
It has been argued that rather generic features of string-inspired inflationary theories with low-energy supersymmetry (SUSY) make it difficult to achieve inflation with a Hubble scale H > m_{3/2}, where m_{3/2} is the gravitino mass in the SUSY-breaking vacuum state. We present a class of string-inspired supergravity realizations of chaotic inflation where a simple, dynamical mechanism yields hierarchically small scales of post-inflationary supersymmetry breaking. Within these toy models we can easily achieve small ratios between m_{3/2} and the Hubble scale of inflation. This is possible because the expectation value of the superpotential <W> relaxes from large to small values during the course of inflation. However, our toy models do not provide a reasonable fit to cosmological data if one sets the SUSY-breaking scale to m_{3/2} ≤ TeV. Our work is a small step towards relieving the apparent tension between high-scale inflation and low-scale supersymmetry breaking in string compactifications.
Modes and emergent time scales of embayed beach dynamics
NASA Astrophysics Data System (ADS)
Ratliff, Katherine M.; Murray, A. Brad
2014-10-01
In this study, we use a simple numerical model (the Coastline Evolution Model) to explore alongshore transport-driven shoreline dynamics within generalized embayed beaches (neglecting cross-shore effects). Using principal component analysis (PCA), we identify two primary orthogonal modes of shoreline behavior that describe shoreline variation about its unchanging mean position: the rotation mode, which has been previously identified and describes changes in the mean shoreline orientation, and a newly identified breathing mode, which represents changes in shoreline curvature. Wavelet analysis of the PCA mode time series reveals characteristic time scales of these modes (typically years to decades) that emerge within even a statistically constant white-noise wave climate (without changes in external forcing), suggesting that these time scales can arise from internal system dynamics. The time scales of both modes increase linearly with shoreface depth, suggesting that the embayed beach sediment transport dynamics exhibit a diffusive scaling.
Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld; Peters, N.E.; Freer, J.E.
2009-01-01
The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
Possible biomechanical origins of the long-range correlations in stride intervals of walking
NASA Astrophysics Data System (ADS)
Gates, Deanna H.; Su, Jimmy L.; Dingwell, Jonathan B.
2007-07-01
When humans walk, the time duration of each stride varies from one stride to the next. These temporal fluctuations exhibit long-range correlations. It has been suggested that these correlations stem from higher nervous system centers in the brain that control gait cycle timing. Existing proposed models of this phenomenon have focused on neurophysiological mechanisms that might give rise to these long-range correlations, and generally ignored potential alternative mechanical explanations. We hypothesized that a simple mechanical system could also generate similar long-range correlations in stride times. We modified a very simple passive dynamic model of bipedal walking to incorporate forward propulsion through an impulsive force applied to the trailing leg at each push-off. Push-off forces were varied from step to step by incorporating both “sensory” and “motor” noise terms that were regulated by a simple proportional feedback controller. We generated 400 simulations of walking, with different combinations of sensory noise, motor noise, and feedback gain. The stride time data from each simulation were analyzed using detrended fluctuation analysis to compute a scaling exponent, α. This exponent quantified how each stride interval was correlated with previous and subsequent stride intervals over different time scales. For different variations of the noise terms and feedback gain, we obtained short-range correlations (α<0.5), uncorrelated time series (α=0.5), long-range correlations (0.5<α<1.0), or Brownian motion (α>1.0). Our results indicate that a simple biomechanical model of walking can generate long-range correlations and thus perhaps these correlations are not a complex result of higher level neuronal control, as has been previously suggested.
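The scaling exponent α referred to above comes from detrended fluctuation analysis: integrate the demeaned series, detrend it linearly in windows of size n, and fit log F(n) versus log n. A bare-bones DFA-1 sketch (window sizes and the white-noise test series are illustrative choices, not the paper's settings):

```python
import math
import random

def dfa_alpha(series, window_sizes):
    """DFA-1 scaling exponent: slope of log F(n) vs log n, where F(n) is
    the RMS residual of linear detrending of the integrated series."""
    mean = sum(series) / len(series)
    profile, total = [], 0.0
    for x in series:
        total += x - mean
        profile.append(total)            # integrated (cumulative) series
    logs_n, logs_f = [], []
    for n in window_sizes:
        sq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            t_mean = (n - 1) / 2.0
            s_mean = sum(seg) / n
            stt = sum((t - t_mean) ** 2 for t in range(n))
            slope = sum((t - t_mean) * (s - s_mean)
                        for t, s in zip(range(n), seg)) / stt
            # accumulate squared residuals about the local linear trend
            sq += sum((s - (s_mean + slope * (t - t_mean))) ** 2
                      for t, s in zip(range(n), seg))
            count += n
        logs_n.append(math.log(n))
        logs_f.append(0.5 * math.log(sq / count))
    k = len(logs_n)
    mx, my = sum(logs_n) / k, sum(logs_f) / k
    num = sum((x - mx) * (y - my) for x, y in zip(logs_n, logs_f))
    den = sum((x - mx) ** 2 for x in logs_n)
    return num / den

# Uncorrelated (white) noise should give alpha near 0.5
rng = random.Random(0)
white = [rng.gauss(0.0, 1.0) for _ in range(4096)]
alpha = dfa_alpha(white, [8, 16, 32, 64, 128])
```

Long-range correlated series push α above 0.5, which is how the simulated stride-time series are classified in the study.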
Dripps, W.R.; Bradbury, K.R.
2007-01-01
Quantifying the spatial and temporal distribution of natural groundwater recharge is usually a prerequisite for effective groundwater modeling and management. As flow models become increasingly utilized for management decisions, there is an increased need for simple, practical methods to delineate recharge zones and quantify recharge rates. Existing models for estimating recharge distributions are data intensive, require extensive parameterization, and take a significant investment of time in order to establish. The Wisconsin Geological and Natural History Survey (WGNHS) has developed a simple daily soil-water balance (SWB) model that uses readily available soil, land cover, topographic, and climatic data in conjunction with a geographic information system (GIS) to estimate the temporal and spatial distribution of groundwater recharge at the watershed scale for temperate humid areas. To demonstrate the methodology and the applicability and performance of the model, two case studies are presented: one for the forested Trout Lake watershed of north central Wisconsin, USA and the other for the urban-agricultural Pheasant Branch Creek watershed of south central Wisconsin, USA. Overall, the SWB model performs well and presents modelers and planners with a practical tool for providing recharge estimates for modeling and water resource planning purposes in humid areas. © Springer-Verlag 2007.
Basinwide response of the Atlantic Meridional Overturning Circulation to interannual wind forcing
NASA Astrophysics Data System (ADS)
Zhao, Jian
2017-12-01
An eddy-resolving Ocean general circulation model For the Earth Simulator (OFES) and a simple wind-driven two-layer model are used to investigate the role of momentum fluxes in driving the Atlantic Meridional Overturning Circulation (AMOC) variability throughout the Atlantic basin from 1950 to 2010. Diagnostic analysis using the OFES results suggests that interior baroclinic Rossby waves and coastal topographic waves play essential roles in modulating the AMOC interannual variability. The proposed mechanisms are verified in the context of a simple two-layer model with realistic topography and forced only by surface wind. The topographic waves communicate high-latitude anomalies into lower latitudes and account for about 50% of the AMOC interannual variability in the subtropics. In addition, the large scale Rossby waves excited by wind forcing together with topographic waves set up coherent AMOC interannual variability patterns across the tropics and subtropics. The comparisons between the simple model and OFES results suggest that a large fraction of the AMOC interannual variability in the Atlantic basin can be explained by wind-driven dynamics.
NASA Astrophysics Data System (ADS)
Chatterjee, Tanmoy; Peet, Yulia T.
2017-07-01
A large eddy simulation (LES) methodology coupled with near-wall modeling has been implemented in the current study for high Re neutral atmospheric boundary layer flows using an exponentially accurate spectral element method in an open-source research code Nek5000. The effect of artificial length scales due to subgrid scale (SGS) and near wall modeling (NWM) on the scaling laws and structure of the inner and outer layer eddies is studied using varying SGS and NWM parameters in the spectral element framework. The study provides an understanding of the various length scales and dynamics of the eddies affected by the LES model and also the fundamental physics behind the inner and outer layer eddies which are responsible for the correct behavior of the mean statistics in accordance with the definition of equilibrium layers by Townsend. An economical and accurate LES model based on capturing the near wall coherent eddies has been designed, which is successful in eliminating the artificial length scale effects like the log-layer mismatch or the secondary peak generation in the streamwise variance.
NASA Astrophysics Data System (ADS)
Bier, Martin; Brak, Bastiaan
2015-04-01
In the Netherlands there has been nationwide vaccination against the measles since 1976. However, in small clustered communities of orthodox Protestants there is widespread refusal of the vaccine. After 1976, three large outbreaks with about 3000 reported cases of the measles have occurred among these orthodox Protestants. The outbreaks appear to occur about every twelve years. We show how a simple Kermack-McKendrick-like model can quantitatively account for the periodic outbreaks. Approximate analytic formulae connecting the period, size, and outbreak duration are derived. With an enhanced model we take the latency period into account. We also expand the model to follow how different age groups are affected. Like other researchers using other methods, we conclude that large-scale underreporting of the disease must occur.
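The recurrent-outbreak mechanism in a Kermack-McKendrick model comes from births replenishing the susceptible pool until it crosses the epidemic threshold again. A minimal Euler-integration sketch (the rates below are illustrative assumptions, not the paper's parameters fitted to the Dutch data):

```python
def sir_series(beta=2.0, gamma=0.5, mu=0.0005, i0=1e-4,
               steps=200000, dt=0.1):
    """SIR with vital dynamics: births (rate mu) refill susceptibles,
    producing damped oscillations, i.e. recurrent outbreaks.
    Returns the infected fraction over time."""
    s, i = 1.0 - i0, i0                  # fractions of the population
    series = []
    for _ in range(steps):
        new_inf = beta * s * i
        ds = mu * (1.0 - s) - new_inf    # births minus new infections
        di = new_inf - (gamma + mu) * i  # recoveries and deaths remove I
        s += ds * dt
        i += di * dt
        series.append(i)
    return series

series = sir_series()
# successive epidemic peaks show up as strict local maxima
peaks = [t for t in range(1, len(series) - 1)
         if series[t - 1] < series[t] > series[t + 1]]
```

The spacing of the peaks is set by the birth rate and the infectious period, which is the kind of relationship the paper's approximate analytic formulae make explicit.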
Intrinsic dimensionality predicts the saliency of natural dynamic scenes.
Vig, Eleonora; Dorr, Michael; Martinetz, Thomas; Barth, Erhardt
2012-06-01
Since visual attention-based computer vision applications have gained popularity, ever more complex, biologically inspired models seem to be needed to predict salient locations (or interest points) in naturalistic scenes. In this paper, we explore how far one can go in predicting eye movements by using only basic signal processing, such as image representations derived from efficient coding principles, and machine learning. To this end, we gradually increase the complexity of a model from simple single-scale saliency maps computed on grayscale videos to spatiotemporal multiscale and multispectral representations. Using a large collection of eye movements on high-resolution videos, supervised learning techniques fine-tune the free parameters whose addition is inevitable with increasing complexity. The proposed model, although very simple, demonstrates significant improvement in predicting salient locations in naturalistic videos over four selected baseline models and two distinct data labeling scenarios.
Bidault, Xavier; Chaussedent, Stéphane; Blanc, Wilfried
2015-10-21
A simple transferable adaptive model is developed that allows, for the first time, molecular dynamics simulation of the separation of large phases in the MgO-SiO2 binary system, as experimentally observed and as predicted by the phase diagram, meaning that the separated phases have various compositions. This is a real improvement over fixed-charge models, which are often limited to interpretations involving the formation of pure clusters, or involving the modified random network model. Our adaptive model, which efficiently reproduces known crystalline and glassy structures, allows us to track the formation of large amorphous Mg-rich Si-poor nanoparticles in an Mg-poor Si-rich matrix from a 0.1MgO-0.9SiO2 melt.
Experimental evaluation of expendable supersonic nozzle concepts
NASA Technical Reports Server (NTRS)
Baker, V.; Kwon, O.; Vittal, B.; Berrier, B.; Re, R.
1990-01-01
Exhaust nozzles for expendable supersonic turbojet engine missile propulsion systems are required to be simple, short and compact, in addition to having good broad-range thrust-minus-drag performance. A series of convergent-divergent nozzle scale model configurations were designed and wind tunnel tested for a wide range of free stream Mach numbers and nozzle pressure ratios. The models included fixed geometry and simple variable exit area concepts. The experimental and analytical results show that the fixed geometry configurations tested have inferior off-design thrust-minus-drag performance in the transonic Mach range. A simple variable exit area configuration called the Axi-Quad nozzle, combining features of both axisymmetric and two-dimensional convergent-divergent nozzles, performed well over a broad range of operating conditions. Analytical predictions of the flow pattern as well as overall performance of the nozzles, using a fully viscous, compressible CFD code, compared very well with the test data.
Anomalous glassy dynamics in simple models of dense biological tissue
NASA Astrophysics Data System (ADS)
Sussman, Daniel M.; Paoluzzi, M.; Marchetti, M. Cristina; Manning, M. Lisa
2018-02-01
In order to understand the mechanisms for glassy dynamics in biological tissues and shed light on those in non-biological materials, we study the low-temperature disordered phase of 2D vertex-like models. Recently it has been noted that vertex models have quite unusual behavior in the zero-temperature limit, with rigidity transitions that are controlled by residual stresses and therefore exhibit very different scaling and phenomenology compared to particulate systems. Here we investigate the finite-temperature phase of two-dimensional Voronoi and Vertex models, and show that they have highly unusual, sub-Arrhenius scaling of dynamics with temperature. We connect the anomalous glassy dynamics to features of the potential energy landscape associated with zero-temperature inherent states.
Network rewiring dynamics with convergence towards a star network.
Whigham, P A; Dick, G; Parry, M
2016-10-01
Network rewiring as a method for producing a range of structures was first introduced in 1998 by Watts & Strogatz (Nature 393, 440-442. (doi:10.1038/30918)). This approach allowed a transition from regular through small-world to a random network. The subsequent interest in scale-free networks motivated a number of methods for developing rewiring approaches that converged to scale-free networks. This paper presents a rewiring algorithm (RtoS) for undirected, non-degenerate, fixed size networks that transitions from regular, through small-world and scale-free to star-like networks. Applications of the approach to models for the spread of infectious disease and fixation time for a simple genetics model are used to demonstrate the efficacy and application of the approach.
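The flavour of such a regular-to-star transition can be sketched with degree-preferential rewiring (an illustration only, not the paper's RtoS procedure: the edge-selection and acceptance rules here are assumptions chosen to keep the graph simple and non-degenerate):

```python
import random

def ring_lattice(n, k=2):
    """Regular ring: each node linked to its k nearest neighbours per side."""
    edges = set()
    for u in range(n):
        for d in range(1, k + 1):
            v = (u + d) % n
            edges.add((min(u, v), max(u, v)))
    return edges

def rewire_towards_hub(edges, n, steps, seed=0):
    """Detach one end of a random edge and reattach it to a degree-weighted
    target of at least equal degree, drifting the graph from regular
    towards a star-like hub. No self-loops or multi-edges; every node
    keeps degree >= 1 (non-degenerate)."""
    rng = random.Random(seed)
    edges = set(edges)
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    nodes = list(range(n))
    for _ in range(steps):
        u, v = rng.choice(sorted(edges))
        t = rng.choices(nodes, weights=deg)[0]   # rich-get-richer target
        if t in (u, v) or deg[v] <= 1 or deg[t] < deg[v]:
            continue
        if (min(u, t), max(u, t)) in edges:
            continue                             # already neighbours
        edges.remove((u, v))
        edges.add((min(u, t), max(u, t)))
        deg[v] -= 1
        deg[t] += 1
    return edges, deg

edges, deg = rewire_towards_hub(ring_lattice(40), 40, steps=4000)
```

The edge count is conserved throughout, so the rewiring redistributes degree rather than adding links, concentrating connectivity on emerging hubs.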
SOME USES OF MODELS OF QUANTITATIVE GENETIC SELECTION IN SOCIAL SCIENCE.
Weight, Michael D; Harpending, Henry
2017-01-01
The theory of selection of quantitative traits is widely used in evolutionary biology, agriculture and other related fields. The fundamental model known as the breeder's equation is simple, robust over short time scales, and it is often possible to estimate plausible parameters. In this paper it is suggested that the results of this model provide useful yardsticks for the description of social traits and the evaluation of transmission models. The differences on a standard personality test between samples of Old Order Amish and Indiana rural young men from the same county and the decline of homicide in Medieval Europe are used as illustrative examples of the overall approach. It is shown that the decline of homicide is unremarkable under a threshold model while the differences between rural Amish and non-Amish young men are too large to be a plausible outcome of simple genetic selection in which assortative mating by affiliation is equivalent to truncation selection.
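The breeder's equation referenced above is R = h²S: the response to selection R is the narrow-sense heritability h² times the selection differential S. Under truncation selection on a normally distributed trait, S = iσ_p with selection intensity i = φ(z)/p, where z is the truncation point for selecting the top fraction p. A small Python sketch with illustrative numbers (not the paper's data):

```python
from statistics import NormalDist

def truncation_response(h2, sigma_p, p):
    """Expected one-generation response R = h^2 * S under truncation selection,
    where the top fraction p of a normal trait is selected and the selection
    differential is S = i * sigma_p with intensity i = phi(z) / p."""
    nd = NormalDist()
    z = nd.inv_cdf(1.0 - p)          # truncation point in standard units
    i = nd.pdf(z) / p                # selection intensity
    return h2 * i * sigma_p

# e.g. a trait with heritability 0.5, phenotypic SD 10, selecting the top 20%
R = truncation_response(0.5, 10.0, 0.20)
```

For p = 0.20 this gives i of about 1.40 and a response of about 7, i.e. 0.7 phenotypic standard deviations per generation.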
Doolittle, Emily L; Gingras, Bruno; Endres, Dominik M; Fitch, W Tecumseh
2014-11-18
Many human musical scales, including the diatonic major scale prevalent in Western music, are built partially or entirely from intervals (ratios between adjacent frequencies) corresponding to small-integer proportions drawn from the harmonic series. Scientists have long debated the extent to which principles of scale generation in human music are biologically or culturally determined. Data from animal "song" may provide new insights into this discussion. Here, by examining pitch relationships using both a simple linear regression model and a Bayesian generative model, we show that most songs of the hermit thrush (Catharus guttatus) favor simple frequency ratios derived from the harmonic (or overtone) series. Furthermore, we show that this frequency selection results not from physical constraints governing peripheral production mechanisms but from active selection at a central level. These data provide the most rigorous empirical evidence to date of a bird song that makes use of the same mathematical principles that underlie Western and many non-Western musical scales, demonstrating surprising convergence between human and animal "song cultures." Although there is no evidence that the songs of most bird species follow the overtone series, our findings add to a small but growing body of research showing that a preference for small-integer frequency ratios is not unique to humans. These findings thus have important implications for current debates about the origins of human musical systems and may call for a reevaluation of existing theories of musical consonance based on specific human vocal characteristics.
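The basic test in this line of work, whether a sung interval lies near a small-integer frequency ratio, can be sketched directly. The following stdlib-only Python snippet (our own illustration, not the authors' Bayesian model) finds the closest ratio p/q with small p and q and reports the deviation in cents:

```python
from math import log2

def nearest_small_ratio(f1, f2, max_int=10):
    """Find the small-integer frequency ratio p/q (p, q <= max_int) closest to
    f2/f1, returning (p, q, deviation_in_cents)."""
    target = 1200 * log2(f2 / f1)            # interval size in cents
    best = None
    for q in range(1, max_int + 1):
        for p in range(1, max_int + 1):
            dev = abs(target - 1200 * log2(p / q))
            if best is None or dev < best[2]:
                best = (p, q, dev)
    return best

# A perfect fifth above A440 is an exact 3:2 ratio:
p, q, dev = nearest_small_ratio(440.0, 660.0)
```

The authors' analysis goes further, comparing such fits against a null model, since any interval is *near* some ratio; the deviation statistic is what carries the evidence.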
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
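The quantities being contrasted here are the sensitivity ∂λ/∂a_ij and its proportional counterpart, the elasticity (a_ij/λ)∂λ/∂a_ij. A stdlib-only numerical sketch on a hypothetical 2x2 projection matrix (values are ours, not the killer whale data); a standard consistency check is that the elasticities of a matrix model sum to one:

```python
def dominant_eigenvalue(A, iters=200):
    """Dominant eigenvalue (finite rate of increase, lambda) by power iteration."""
    v = [1.0] * len(A)
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(len(A))) for i in range(len(A))]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

def sensitivity(A, i, j, h=1e-6):
    """Numerical sensitivity d(lambda)/d(a_ij) by central difference."""
    Ap = [row[:] for row in A]; Am = [row[:] for row in A]
    Ap[i][j] += h; Am[i][j] -= h
    return (dominant_eigenvalue(Ap) - dominant_eigenvalue(Am)) / (2 * h)

A = [[0.5, 2.0],   # toy projection matrix: fertilities on the top row,
     [0.3, 0.4]]   # survival/transition rates below
lam = dominant_eigenvalue(A)
elasticities = [[(A[i][j] / lam) * sensitivity(A, i, j)
                 for j in range(2)] for i in range(2)]
```

The sum-to-one property of elasticities follows from λ being homogeneous of degree one in the matrix entries, and it is one reason elasticities are popular despite the inconsistencies the paper discusses.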
Prospects for improving the representation of coastal and shelf seas in global ocean models
NASA Astrophysics Data System (ADS)
Holt, Jason; Hyder, Patrick; Ashworth, Mike; Harle, James; Hewitt, Helene T.; Liu, Hedong; New, Adrian L.; Pickles, Stephen; Porter, Andrew; Popova, Ekaterina; Icarus Allen, J.; Siddorn, John; Wood, Richard
2017-02-01
Accurately representing coastal and shelf seas in global ocean models represents one of the grand challenges of Earth system science. They are regions of immense societal importance through the goods and services they provide, hazards they pose and their role in global-scale processes and cycles, e.g. carbon fluxes and dense water formation. However, they are poorly represented in the current generation of global ocean models. In this contribution, we aim to briefly characterise the problem, and then to identify the important physical processes, and their scales, needed to address this issue in the context of the options available to resolve these scales globally and the evolving computational landscape. We find barotropic and topographic scales are well resolved by the current state-of-the-art model resolutions, e.g. nominal 1/12°, and still reasonably well resolved at 1/4°; here, the focus is on process representation. We identify tides, vertical coordinates, river inflows and mixing schemes as four areas where modelling approaches can readily be transferred from regional to global modelling with substantial benefit. In terms of finer-scale processes, we find that a 1/12° global model resolves the first baroclinic Rossby radius for only ~8 % of regions < 500 m deep, but this increases to ~70 % for a 1/72° model, so resolving scales globally requires substantially finer resolution than the current state of the art. We quantify the benefit of improved resolution and process representation using 1/12° global- and basin-scale northern North Atlantic nucleus for a European model of the ocean (NEMO) simulations; the latter includes tides and a k-ɛ vertical mixing scheme. These are compared with global stratification observations and 19 models from CMIP5. In terms of correlation and basin-wide rms error, the high-resolution models outperform all these CMIP5 models. The model with tides shows improved seasonal cycles compared to the high-resolution model without tides.
The benefits of resolution are particularly apparent in eastern boundary upwelling zones. To explore the balance between the size of a globally refined model and that of multiscale modelling options (e.g. finite element, finite volume or a two-way nesting approach), we consider a simple scale analysis and a conceptual grid refining approach. We put this analysis in the context of evolving computer systems, discussing model turnaround time, scalability and resource costs. Using a simple cost model compared to a reference configuration (taken to be a 1/4° global model in 2011) and the increasing performance of the UK Research Councils' computer facility, we estimate an unstructured mesh multiscale approach, resolving process scales down to 1.5 km, would use a comparable share of the computer resource by 2021, the two-way nested multiscale approach by 2022, and a 1/72° global model by 2026. However, we also note that a 1/12° global model would not have a comparable computational cost to a 1° global model in 2017 until 2027. Hence, we conclude that for computationally expensive models (e.g. for oceanographic research or operational oceanography), resolving scales to ~1.5 km would be routinely practical in about a decade given substantial effort on numerical and computational development. For complex Earth system models, this extends to about 2 decades, suggesting the focus here needs to be on improved process parameterisation to meet these challenges.
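The resolution criterion behind the 8 % versus 70 % figures can be sketched: compare the grid spacing of a lat-lon grid against the first baroclinic Rossby radius, here approximated as L_R ~ NH/(πf). The numbers below (N, H, latitude, and the "half the radius" rule of thumb) are illustrative assumptions, not the paper's calculation:

```python
from math import pi, sin, cos, radians

OMEGA = 7.2921e-5      # Earth's rotation rate (rad/s)
R_EARTH = 6.371e6      # Earth radius (m)

def coriolis(lat_deg):
    return 2 * OMEGA * sin(radians(lat_deg))

def rossby_radius(N, H, lat_deg):
    """First baroclinic Rossby radius, L_R ~ N*H / (pi*f), for buoyancy
    frequency N (1/s) and water depth H (m)."""
    return N * H / (pi * abs(coriolis(lat_deg)))

def zonal_spacing(deg, lat_deg):
    """East-west grid spacing (m) of a deg-degree lat-lon grid."""
    return radians(deg) * R_EARTH * cos(radians(lat_deg))

# A mid-latitude shelf sea: N ~ 1e-2 1/s, H = 100 m, 50 degrees N
LR = rossby_radius(1e-2, 100.0, 50.0)     # ~2.8 km
dx_12 = zonal_spacing(1 / 12, 50.0)       # ~6 km
dx_72 = zonal_spacing(1 / 72, 50.0)       # ~1 km
resolved_12 = dx_12 < LR / 2
resolved_72 = dx_72 < LR / 2
```

With these shelf-typical values the 1/12° grid fails the criterion while the 1/72° grid passes, consistent with the abstract's finding that shelf-sea eddy scales demand much finer global resolution.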
Predictive modelling of flow in a two-dimensional intermediate-scale, heterogeneous porous media
Barth, Gilbert R.; Hill, M.C.; Illangasekare, T.H.; Rajaram, H.
2000-01-01
To better understand the role of sedimentary structures in flow through porous media, and to determine how small-scale laboratory-measured values of hydraulic conductivity relate to in situ values, this work deterministically examines flow through simple, artificial structures constructed for a series of intermediate-scale (10 m long), two-dimensional, heterogeneous, laboratory experiments. Nonlinear regression was used to determine optimal values of in situ hydraulic conductivity, which were compared to laboratory-measured values. Despite explicit numerical representation of the heterogeneity, the optimized values were generally greater than the laboratory-measured values. Discrepancies between measured and optimal values varied depending on the sand sieve size, but their contribution to error in the predicted flow was fairly consistent for all sands. Results indicate that, even under these controlled circumstances, laboratory-measured values of hydraulic conductivity need to be applied to models cautiously.
Quantum Entanglement of Matter and Geometry in Large Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig J.
2014-12-04
Standard quantum mechanics and gravity are used to estimate the mass and size of idealized gravitating systems where position states of matter and geometry become indeterminate. It is proposed that well-known inconsistencies of standard quantum field theory with general relativity on macroscopic scales can be reconciled by nonstandard, nonlocal entanglement of field states with quantum states of geometry. Wave functions of particle world lines are used to estimate scales of geometrical entanglement and emergent locality. Simple models of entanglement predict coherent fluctuations in position of massive bodies, of Planck scale origin, measurable on a laboratory scale, and may account for the fact that the information density of long lived position states in Standard Model fields, which is determined by the strong interactions, is the same as that determined holographically by the cosmological constant.
Continuously distributed magnetization profile for millimeter-scale elastomeric undulatory swimming
NASA Astrophysics Data System (ADS)
Diller, Eric; Zhuang, Jiang; Zhan Lum, Guo; Edwards, Matthew R.; Sitti, Metin
2014-04-01
We have developed a millimeter-scale magnetically driven swimming robot for untethered motion at mid to low Reynolds numbers. The robot is propelled by continuous undulatory deformation, which is enabled by the distributed magnetization profile of a flexible sheet. We demonstrate control of a prototype device and measure deformation and speed as a function of magnetic field strength and frequency. Experimental results are compared with simple magnetoelastic and fluid propulsion models. The presented mechanism provides an efficient remote actuation method at the millimeter scale that may be suitable for further scaling down in size for micro-robotics applications in biotechnology and healthcare.
NASA Astrophysics Data System (ADS)
Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.
2015-12-01
Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. 
Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.
Universal scaling for second-class particles in a one-dimensional misanthrope process
NASA Astrophysics Data System (ADS)
Rákos, Attila
2010-06-01
We consider the one-dimensional Katz-Lebowitz-Spohn (KLS) model, which is a generalization of the totally asymmetric simple exclusion process (TASEP) with nearest neighbour interaction. Using a powerful mapping, the KLS model can be translated into a misanthrope process. In this model, for the repulsive case, it is possible to introduce second-class particles, the number of which is conserved. We study the distance distribution of second-class particles in this model numerically and find that for large distances it decreases as x^(-3/2). This agrees with a previous analytical result of Derrida et al (1993) for the TASEP, where the same asymptotic behaviour was found. We also study the dynamical scaling function of the distance distribution and find that it is universal within this family of models.
Effects of land use on lake nutrients: The importance of scale, hydrologic connectivity, and region
Soranno, Patricia A.; Cheruvelil, Kendra Spence; Wagner, Tyler; Webster, Katherine E.; Bremigan, Mary Tate
2015-01-01
Catchment land uses, particularly agriculture and urban uses, have long been recognized as major drivers of nutrient concentrations in surface waters. However, few simple models have been developed that relate the amount of catchment land use to downstream freshwater nutrients. Nor are existing models applicable to large numbers of freshwaters across broad spatial extents such as regions or continents. This research aims to increase model performance by exploring three factors that affect the relationship between land use and downstream nutrients in freshwater: the spatial extent for measuring land use, hydrologic connectivity, and the regional differences in both the amount of nutrients and effects of land use on them. We quantified the effects of these three factors that relate land use to lake total phosphorus (TP) and total nitrogen (TN) in 346 north temperate lakes in 7 regions in Michigan, USA. We used a linear mixed modeling framework to examine the importance of spatial extent, lake hydrologic class, and region on models with individual lake nutrients as the response variable, and individual land use types as the predictor variables. Our modeling approach was chosen to avoid problems of multi-collinearity among predictor variables and a lack of independence of lakes within regions, both of which are common problems in broad-scale analyses of freshwaters. We found that all three factors influence land use-lake nutrient relationships. The strongest evidence was for the effect of lake hydrologic connectivity, followed by region, and finally, the spatial extent of land use measurements. Incorporating these three factors into relatively simple models of land use effects on lake nutrients should help to improve predictions and understanding of land use-lake nutrient interactions at broad scales. PMID:26267813
Localized Enzymatic Degradation of Polymers: Physics and Scaling Laws
NASA Astrophysics Data System (ADS)
Lalitha Sridhar, Shankar; Vernerey, Franck
2018-03-01
Biodegradable polymers are naturally abundant in living matter and have led to great advances in controlling environmental pollution due to synthetic polymer products, harnessing renewable energy from biofuels, and in the field of biomedicine. One of the most prevalent mechanisms of biodegradation involves enzyme-catalyzed depolymerization by biological agents. Despite numerous studies dedicated to understanding polymer biodegradation in different environments, a simple model that predicts the macroscopic behavior (mass and structural loss) in terms of microphysical processes (enzyme transport and reaction) is lacking. An interesting phenomenon occurs when an enzyme source (released by a biological agent) attacks a tight polymer mesh that restricts free diffusion. A fuzzy interface separating the intact and fully degraded polymer propagates away from the source and into the polymer as the enzymes diffuse and react in time. Understanding the characteristics of this interface will provide crucial insight into the biodegradation process and potential ways to precisely control it. In this work, we present a centrosymmetric model of biodegradation by characterizing the moving fuzzy interface in terms of its speed and width. The model predicts that the characteristics of this interface are governed by two time scales, namely the polymer degradation and enzyme transport times, which in turn depend on four main polymer and enzyme properties. A key finding of this work is simple scaling laws that can be used to guide biodegradation of polymers in different applications.
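The moving "fuzzy interface" described above can be illustrated with a minimal 1D reaction-diffusion sketch: enzyme diffuses from a source and degrades the polymer at a rate proportional to local enzyme concentration. This is our own simplification (grid sizes, D, and k are arbitrary), not the paper's centrosymmetric model, but it shows the front advancing as degradation and transport compete:

```python
def degrade(nx=200, dx=0.05, D=1.0, k=5.0, dt=4e-4, steps=4000):
    """Explicit 1D finite-difference sketch: enzyme c diffuses from a source at
    x=0 (c=1 held fixed) and degrades polymer p at rate k*c*p. Returns the
    polymer profile and the front position (first x where p exceeds 0.5)."""
    c = [0.0] * nx
    p = [1.0] * nx
    c[0] = 1.0
    for _ in range(steps):
        lap = [0.0] * nx
        for i in range(1, nx - 1):
            lap[i] = (c[i - 1] - 2 * c[i] + c[i + 1]) / dx**2
        for i in range(1, nx - 1):
            c[i] += dt * D * lap[i]
        for i in range(nx):
            p[i] *= 1.0 - dt * k * c[i]   # dp/dt = -k c p (forward Euler)
        c[0] = 1.0                        # enzyme source
        c[-1] = c[-2]                     # crude no-flux far boundary
    front = next((i * dx for i in range(nx) if p[i] > 0.5), None)
    return p, front

p, front = degrade()
```

Tracking this front position and the width of the partially degraded zone over time is the 1D analogue of the speed and width that the paper's scaling laws characterize in terms of the degradation and transport time scales.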
Comparing wave shoaling methods used in large-scale coastal evolution modeling
NASA Astrophysics Data System (ADS)
Limber, P. W.; Adams, P. N.; Murray, A.
2013-12-01
A variety of methods are available to simulate wave propagation from the deep ocean to the surf zone. They range from simple and computationally fast (e.g. linear wave theory applied to shore-parallel bathymetric contours) to complicated and computationally intense (e.g. Delft's 'Simulating WAves Nearshore', or SWAN, model applied to complex bathymetry). Despite their differences, the goal of each method is the same with respect to coastline evolution modeling: to link offshore waves with rates of (and gradients in) alongshore sediment transport. Choosing a shoaling technique for modeling coastline evolution should be partly informed by the spatial and temporal scales of the model, as well as the model's intent (is it simulating a specific coastline, or exploring generic coastline dynamics?). However, the particular advantages and disadvantages of each technique, and how these vary over different model spatial and temporal scales, are not always clear. We present a wave shoaling model that simultaneously computes breaking wave heights and angles using three increasingly complex wave shoaling routines: the most basic approach assuming shore-parallel bathymetric contours, a wave ray tracing method that includes wave energy convergence and divergence and non-shore-parallel contours, and a spectral wave model (SWAN). Initial results show reasonable agreement between wave models along a flat shoreline for small (1 m) wave heights, low wave angles (0 to 10 degrees), and simple bathymetry. But, as wave heights and angles increase, bathymetry becomes more variable, and the shoreline shape becomes sinuous, the model results begin to diverge. This causes different gradients in alongshore sediment transport between model runs employing different shoaling techniques and, therefore, different coastline behavior.
Because SWAN does not approximate wave breaking (which drives alongshore sediment transport), we use a routine to extract grid cells from SWAN output where wave height is approximately one-half of the water depth (a standard wave breaking threshold). The goal of this modeling exercise is to understand under what conditions a simple wave model is sufficient for simulating coastline evolution, and when a more complex shoaling routine can optimize a coastline model. The Coastline Evolution Model (CEM; Ashton and Murray, 2006) is used to show how different shoaling routines affect modeled coastline behavior. The CEM currently includes the most basic wave shoaling approach to simulate cape and spit formation. We will instead couple it to SWAN, using the insight from the comprehensive wave model (above) to guide its application. This will allow waves transformed over complex bathymetry, such as cape-associated shoals and ridges, to be input for the CEM so that large-scale coastline behavior can be addressed in less idealized environments. Ashton, A., and Murray, A.B., 2006, High-angle wave instability and emergent shoreline shapes: 1. Modeling of sand waves, flying spits, and capes: Journal of Geophysical Research, v. 111, p. F04011, doi:10.1029/2005JF000422.
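The extraction step described above, finding the first cell along a cross-shore profile where wave height reaches a fraction gamma of the depth, can be sketched in a few lines. The profile below is synthetic (Green's-law-style shoaling from 1 m offshore), chosen only to exercise the H = gamma*d criterion:

```python
def first_breaking_cell(depths, heights, gamma=0.5):
    """Scan a cross-shore profile (offshore -> shore) and return the index of
    the first cell where wave height reaches gamma * depth, the standard
    depth-limited breaking criterion H = gamma * d."""
    for i, (d, h) in enumerate(zip(depths, heights)):
        if d > 0 and h >= gamma * d:
            return i
    return None

# Synthetic profile: depth shoals from 10 m to 0.5 m; height grows roughly
# like Green's law, H ~ d**-0.25, from 1 m offshore.
depths = [10 - 0.5 * i for i in range(20)]           # 10.0 ... 0.5
heights = [1.0 * (d / 10.0) ** -0.25 for d in depths]
ib = first_breaking_cell(depths, heights)
```

In the actual workflow this scan would run over SWAN's 2D output grid rather than a single synthetic transect, with the breaking height and angle at the flagged cells feeding the alongshore transport formula.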
Symmetry Breaking, Unification, and Theories Beyond the Standard Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, Yasunori
2009-07-31
A model was constructed in which the supersymmetric fine-tuning problem is solved without extending the Higgs sector at the weak scale. We demonstrated that the model can avoid all the phenomenological constraints, while avoiding excessive fine-tuning. We also studied implications of the model for dark matter physics and collider physics. I proposed an extremely simple construction for models of gauge mediation. We found that the μ problem can be simply and elegantly solved in a class of models where the Higgs fields couple directly to the supersymmetry breaking sector. We proposed a new way of addressing the flavor problem of supersymmetric theories. We proposed a new framework for constructing theories of grand unification. We constructed a simple and elegant model of dark matter which explains the excess flux of electrons/positrons. We constructed a model of dark energy in which evolving quintessence-type dark energy is naturally obtained. We studied whether we can find evidence of the multiverse.
NASA Astrophysics Data System (ADS)
Paredes-Miranda, G.; Arnott, W. P.; Moosmuller, H.
2010-12-01
The global trend toward urbanization and the resulting increase in city population has directed attention toward air pollution in megacities. A closely related question of importance for urban planning and attainment of air quality standards is how pollutant concentrations scale with city population. In this study, we use measurements of light absorption and light scattering coefficients as proxies for primary (i.e., black carbon; BC) and total (i.e., particulate matter; PM) pollutant concentration, to start addressing the following questions: What patterns and generalizations are emerging from our expanding data sets on urban air pollution? How does the per-capita air pollution vary with economic, geographic, and meteorological conditions of an urban area? Does air pollution provide an upper limit on city size? Diurnal analysis of black carbon concentration measurements in suburban Mexico City, Mexico, Las Vegas, NV, USA, and Reno, NV, USA for similar seasons suggests that commonly emitted primary air pollutant concentrations scale approximately as the square root of the urban population N, consistent with a simple 2-d box model. The measured absorption coefficient Babs is approximately proportional to the BC concentration (primary pollution) and thus scales with the square root of the population N. Since secondary pollutants form through photochemical reactions involving primary pollutants, they also scale with the square root of N. Therefore the scattering coefficient Bsca, a proxy for PM concentration, is also expected to scale with the square root of N. Here we present light absorption and scattering measurements and data on meteorological conditions and compare the population scaling of these pollutant measurements with predictions from the simple 2-d box model. We find that these basin cities share the square-root-of-N dependence. Data from other cities will be discussed as time permits.
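The square-root scaling falls out of the box-model reasoning directly: total emissions grow like N, but the ventilated cross-section grows with the city's linear size L ~ sqrt(area) ~ sqrt(N), so steady-state concentration C ~ E/(u h L) ~ sqrt(N). A one-function sketch (all per-capita constants are placeholders):

```python
from math import sqrt

def box_model_concentration(N, e_per_capita=1.0, u=1.0, h=1.0,
                            area_per_capita=1.0):
    """2-d box model: emissions scale with population N while the ventilated
    cross-section scales with the city's linear size L ~ sqrt(N), so the
    steady-state concentration C ~ E / (u * h * L) ~ sqrt(N)."""
    E = e_per_capita * N                  # total emission rate
    L = sqrt(area_per_capita * N)         # city's linear dimension
    return E / (u * h * L)

c1 = box_model_concentration(1e6)
c2 = box_model_concentration(4e6)   # 4x the population -> 2x the concentration
```

Quadrupling the population thus doubles the predicted concentration, which is the signature the diurnal BC analysis tests against.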
On being the right size: scaling effects in designing a human-on-a-chip
Moraes, Christopher; Labuz, Joseph M.; Leung, Brendan M.; Inoue, Mayumi; Chun, Tae-Hwa; Takayama, Shuichi
2013-01-01
Developing a human-on-a-chip by connecting multiple model organ systems would provide an intermediate screen for therapeutic efficacy and toxic side effects of drugs prior to conducting expensive clinical trials. However, correctly designing individual organs and scaling them relative to each other to make a functional microscale human analog is challenging, and a generalized approach has yet to be identified. In this work, we demonstrate the importance of rational design of both the individual organ and its relationship with other organs, using a simple two-compartment system simulating insulin-dependent glucose uptake in adipose tissues. We demonstrate that inter-organ scaling laws depend on both the number of cells, and on the spatial arrangement of those cells within the microfabricated construct. We then propose a simple and novel inter-organ ‘metabolically-supported functional scaling’ approach predicated on maintaining in vivo cellular basal metabolic rates, by limiting resources available to cells on the chip. This approach leverages the finding from allometric scaling models in mammals that limited resources in vivo prompt cells to behave differently than in resource-rich in vitro cultures. Although applying scaling laws directly to tissues can result in systems that would be quite challenging to implement, engineering workarounds may be used to circumvent these scaling issues. Specific workarounds discussed include the limited oxygen carrying capacity of cell culture media when used as a blood substitute and the ability to engineer non-physiological structures to augment organ function, to create the transport-accessible, yet resource-limited environment necessary for cells to mimic in vivo functionality. Furthermore, designing the structure of individual tissues in each organ compartment may be a useful strategy to bypass scaling concerns at the inter-organ level. PMID:23925524
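The allometric point above rests on Kleiber-style scaling: whole-body basal metabolic rate goes roughly as M^(3/4), so the per-mass (roughly per-cell) rate falls as M^(-1/4), and small organisms' cells run "hotter" than large ones'. A tiny sketch (the coefficient and exponent are standard textbook values, used here only for illustration):

```python
def basal_rate_per_mass(mass_kg, b0=3.4):
    """Kleiber-style allometry: whole-body basal metabolic rate B = b0 * M**0.75
    (W), so the per-unit-mass (roughly per-cell) rate is b0 * M**-0.25."""
    return b0 * mass_kg ** 0.75 / mass_kg

human = basal_rate_per_mass(70.0)    # target in vivo per-mass rate
mouse = basal_rate_per_mass(0.03)    # ~7x higher per-mass rate
# The "metabolically supported" idea: throttle nutrient/oxygen supply on the
# chip so a micro-organ's per-cell rate matches the human value, rather than
# the elevated rate cells adopt in resource-rich culture.
```

This is why naive miniaturization fails: cells on a resource-rich chip behave like small-organism cells, so the design goal becomes resource limitation, not geometric shrinkage.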
The necessity of feedback physics in setting the peak of the initial mass function
NASA Astrophysics Data System (ADS)
Guszejnov, Dávid; Krumholz, Mark R.; Hopkins, Philip F.
2016-05-01
A popular theory of star formation is gravito-turbulent fragmentation, in which self-gravitating structures are created by turbulence-driven density fluctuations. Simple theories of isothermal fragmentation successfully reproduce the core mass function (CMF) which has a very similar shape to the initial mass function (IMF) of stars. However, numerical simulations of isothermal turbulent fragmentation thus far have not succeeded in identifying a fragment mass scale that is independent of the simulation resolution. Moreover, the fluid equations for magnetized, self-gravitating, isothermal turbulence are scale-free, and do not predict any characteristic mass. In this paper we show that, although an isothermal self-gravitating flow does produce a CMF with a mass scale imposed by the initial conditions, this scale changes as the parent cloud evolves. In addition, the cores that form undergo further fragmentation and after sufficient time forget about their initial conditions, yielding a scale-free pure power-law distribution dN/dM ∝ M^(-2) for the stellar IMF. We show that this problem can be alleviated by introducing additional physics that provides a termination scale for the cascade. Our candidate for such physics is a simple model for stellar radiation feedback. Radiative heating, powered by accretion on to forming stars, arrests the fragmentation cascade and imposes a characteristic mass scale that is nearly independent of the time-evolution or initial conditions in the star-forming cloud, and that agrees well with the peak of the observed IMF. In contrast, models that introduce a stiff equation of state for denser clouds but that do not explicitly include the effects of feedback do not yield an invariant IMF.
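The scale-free dN/dM ∝ M^(-2) limit has a convenient closed-form inverse CDF, which makes it easy to draw mock stellar masses for comparison against feedback-regulated mass functions. A stdlib-only sketch (the truncation bounds are arbitrary choices for illustration):

```python
import random

def sample_powerlaw_m2(m_min, m_max, rng):
    """Inverse-transform sample from dN/dM ~ M**-2 truncated to [m_min, m_max].
    The CDF is F(M) = (1/m_min - 1/M) / (1/m_min - 1/m_max)."""
    u = rng.random()
    return 1.0 / (1.0 / m_min - u * (1.0 / m_min - 1.0 / m_max))

rng = random.Random(0)
masses = [sample_powerlaw_m2(0.1, 100.0, rng) for _ in range(100_000)]
```

Because the -2 slope puts most objects near the lower cutoff, the median mass of such a sample sits just above m_min, which is precisely the behavior that makes a physically imposed peak (here, from radiative feedback) observationally distinguishable.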
The statistical overlap theory of chromatography using power law (fractal) statistics.
Schure, Mark R; Davis, Joe M
2011-12-30
The chromatographic dimensionality was recently proposed as a measure of retention time spacing based on a power law (fractal) distribution. Using this model, a statistical overlap theory (SOT) for chromatographic peaks is developed that estimates the number of peak maxima as a function of the chromatographic dimension, saturation and scale. Power law models exhibit a threshold region whereby below a critical saturation value no loss of peak maxima due to peak fusion occurs as saturation increases. At moderate saturation, behavior is similar to the random (Poisson) peak model. At still higher saturation, the power law model shows loss of peaks nearly independent of the scale and dimension of the model. The physicochemical meaning of the power law scale parameter is discussed and shown to be equal to the Boltzmann-weighted free energy of transfer over the scale limits. The role of the scale is then examined: a small scale range (small β) is shown to generate more uniform chromatograms, whereas a large scale range (large β) gives occasional large excursions of retention times, a property of power laws where "wild" behavior occasionally occurs. Both cases are shown to be useful depending on the chromatographic saturation. A scale-invariant model of the SOT shows very simple relationships between the fraction of peak maxima and the saturation, peak width and number of theoretical plates. These equations provide much insight into separations which follow power law statistics. Copyright © 2011 Elsevier B.V. All rights reserved.
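The Poisson baseline that the power-law SOT reduces to at moderate saturation is easy to simulate: drop m component peaks at random retention times and merge neighbours closer than the resolution x0; classical SOT then predicts roughly m*exp(-alpha) observed maxima at saturation alpha = m*x0. This sketch shows only that baseline, not the paper's power-law extension:

```python
import random
from math import exp

def observed_peaks(m, x0, rng):
    """Drop m component peaks uniformly on [0, 1] and merge any neighbours
    closer than the resolution x0; return the number of observed maxima.
    Poisson SOT predicts ~ m * exp(-alpha) peaks, with alpha = m * x0."""
    times = sorted(rng.random() for _ in range(m))
    clumps = 1
    for a, b in zip(times, times[1:]):
        if b - a > x0:
            clumps += 1
    return clumps

rng = random.Random(7)
m, x0 = 2000, 0.00025            # saturation alpha = 0.5
simulated = observed_peaks(m, x0, rng)
predicted = m * exp(-m * x0)     # ~1213 observed maxima
```

In the power-law version the retention times are no longer uniform, which is what produces the threshold region and the saturation-insensitive peak loss described above.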
NASA Astrophysics Data System (ADS)
Milani, G.; Bertolesi, E.
2017-07-01
A simple quasi-analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry walls is presented. The elementary cell (REV) is discretized with 24 elastic constant-stress triangular elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how the homogenized stress-strain behavior can be evaluated semi-analytically.
NASA Astrophysics Data System (ADS)
Verma, Aman; Mahesh, Krishnan
2012-08-01
The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
NASA Astrophysics Data System (ADS)
Shea, Thomas; Krimer, Daniel; Costa, Fidel; Hammer, Julia
2014-05-01
One of the achievements in recent years in volcanology is the determination of time-scales of magmatic processes via diffusion in minerals, and its addition to the petrologists' and volcanologists' toolbox. The method typically requires one-dimensional modeling of randomly cut crystals from two-dimensional thin sections. Here we address the question of whether using 1D (traverse) or 2D (surface) datasets extracted from randomly cut 3D crystals introduces a bias or dispersion in the estimated time-scales, and how this error can be reduced or eliminated. Computational simulations were performed using a concentration-dependent, finite-difference solution to the diffusion equation in 3D. The starting numerical models involved simple geometries (spheres, parallelepipeds), Mg/Fe zoning patterns (either normal or reverse), and isotropic diffusion coefficients. Subsequent models progressively incorporated more complexity: 3D olivines possessing representative polyhedral morphologies, diffusion anisotropy along the different crystallographic axes, and more intricate core-rim zoning patterns. Sections and profiles used to compare 1D, 2D and 3D diffusion models were selected to be (1) parallel to the crystal axes, (2) randomly oriented but passing through the olivine center, or (3) randomly oriented and sectioned. Results show that time-scales estimated on randomly cut traverses (1D) or surfaces (2D) can be widely distributed around the actual durations of 3D diffusion (~0.2 to 10 times the true diffusion time). The magnitude of over- or underestimation of duration reflects a complex combination of the geometry of the crystal, the zoning pattern, the orientation of the cuts with respect to the crystallographic axes, and the degree of diffusion anisotropy. Errors on estimated time-scales retrieved from such models may thus be significant.
Drastic reductions in the uncertainty of calculated diffusion times can be obtained by following some simple guidelines during the course of data collection (i.e. selection of crystals and concentration profiles, acquisition of crystallographic orientation data), thus allowing derivation of robust time-scales.
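The 1D limit of the workflow described above can be sketched with a short finite-difference experiment: diffuse a step profile forward, then recover the elapsed time by matching the profile against the analytic error-function solution. This is a schematic with a constant diffusion coefficient and arbitrary units, not the concentration-dependent 3D code used in the study:

```python
import math

def diffuse_step(D=1.0, t_end=1.0, n=201, L=20.0):
    """Explicit finite-difference solution of dC/dt = D d2C/dx2 for an
    initial concentration step at x = 0 (C=1 on the left, 0 on the right)."""
    dx = L / (n - 1)
    dt = 0.4 * dx * dx / D              # stability: dt <= dx^2 / (2D)
    c = [1.0 if i < n // 2 else 0.0 for i in range(n)]
    steps = int(t_end / dt)
    for _ in range(steps):
        new = c[:]
        for i in range(1, n - 1):
            new[i] = c[i] + D * dt / dx**2 * (c[i-1] - 2*c[i] + c[i+1])
        c = new
    xs = [i * dx - L / 2 for i in range(n)]
    return xs, c, steps * dt

def recover_time(xs, c, D=1.0):
    """Estimate the diffusion duration by least-squares match against
    the analytic solution C(x, t) = 0.5 * erfc(x / (2*sqrt(D*t)))."""
    best_t, best_err = None, float("inf")
    for k in range(1, 400):
        t = k * 0.01
        err = sum((ci - 0.5 * math.erfc(x / (2 * math.sqrt(D * t))))**2
                  for x, ci in zip(xs, c))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

xs, c, t_true = diffuse_step()
t_est = recover_time(xs, c)
```

In 1D with a known geometry the recovered time matches the true one closely; the paper's point is that random cuts through 3D crystals break exactly this correspondence, spreading the recovered times by a factor of ~0.2 to 10.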
NASA Astrophysics Data System (ADS)
Ioannidi, P. I.; Le Pourhiet, L.; Moreno, M.; Agard, P.; Oncken, O.; Angiboust, S.
2017-12-01
The physical nature of plate locking and its relation to surface deformation patterns at different time scales (e.g. GPS displacements during the seismic cycle) can be better understood by determining the rheological parameters of the subduction interface. However, since direct rheological measurements are not possible, finite element modelling helps to determine the effective rheological parameters of the subduction interface. We used the open source finite element code pTatin to create 2D models, starting with a homogeneous medium representing shearing at the subduction interface. We tested several boundary conditions that mimic simple shear and opted for the one that best describes Griggs-type simple shear experiments. After examining different parameters, such as shearing velocity, temperature and viscosity, we added complexity to the geometry by including a second phase. This arises from field observations, where shear zone outcrops are often composites of multiple phases: stronger crustal blocks embedded within a sedimentary and/or serpentinized matrix have been reported for several exhumed subduction zones. We implemented a simplified model to simulate simple shearing of a two-phase medium in order to quantify the effect of heterogeneous rheology on stress and strain localization. Preliminary results show different strengths in the models depending on the block-to-matrix ratio. We applied our method to outcrop-scale block-in-matrix geometries and, by sampling at different depths along exhumed former subduction interfaces, we expect to be able to provide effective friction and viscosity of a natural interface. In a next step, these effective parameters will be used as input into seismic cycle deformation models in an attempt to assess the possible signature of field geometries on the slip behaviour of the plate interface.
Frank, Steven A.
2010-01-01
We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions, and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern, and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. PMID:19538344
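The mean-constraint example in the paragraph above can be checked numerically: among all distributions on a grid with a fixed mean, the maximum-entropy one is a (discretized) exponential, i.e. log p is linear in x. A small sketch, with the grid, target mean, and comparison distribution chosen purely for illustration:

```python
import math

def maxent_fixed_mean(xs, mu):
    """Maximum-entropy distribution on grid xs subject to a fixed mean:
    p_i ∝ exp(-lam * x_i), with lam found by bisection (the mean is a
    monotonically decreasing function of lam)."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in xs]
        z = sum(w)
        return sum(x * wi for x, wi in zip(xs, w)) / z
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > mu:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

xs = [i * 0.1 for i in range(100)]          # grid on [0, 9.9]
p = maxent_fixed_mean(xs, mu=2.0)
# Exponential-family signature: log p is linear in x (constant slope).
slope1 = math.log(p[1]) - math.log(p[0])
slope2 = math.log(p[51]) - math.log(p[50])
# Any other distribution with the same mean has lower entropy, e.g.:
q = [0.0] * 100
q[0] = q[40] = 0.5                          # mean = 0.5*(0.0 + 4.0) = 2.0
```

This is the informational claim of the paper in miniature: the exponential pattern is not one mechanism among many but the attractor for every aggregation that preserves only the mean.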
Global-Scale Hydrology: Simple Characterization of Complex Simulation
NASA Technical Reports Server (NTRS)
Koster, Randal D.
1999-01-01
Atmospheric general circulation models (AGCMs) are unique and valuable tools for the analysis of large-scale hydrology. AGCM simulations of climate provide tremendous amounts of hydrological data with a spatial and temporal coverage unmatched by observation systems. To the extent that the AGCM behaves realistically, these data can shed light on the nature of the real world's hydrological cycle. In the first part of the seminar, I will describe the hydrological cycle in a typical AGCM, with some emphasis on the validation of simulated precipitation against observations. The second part of the seminar will focus on a key goal in large-scale hydrology studies, namely the identification of simple, overarching controls on hydrological behavior hidden amidst the tremendous amounts of data produced by the highly complex AGCM parameterizations. In particular, I will show that a simple 50-year-old climatological relation (and a recent extension we made to it) successfully predicts, to first order, both the annual mean and the interannual variability of simulated evaporation and runoff fluxes. The seminar will conclude with an example of a practical application of global hydrology studies. The accurate prediction of weather statistics several months in advance would have tremendous societal benefits, and conventional wisdom today points at the use of coupled ocean-atmosphere-land models for such seasonal-to-interannual prediction. Understanding the hydrological cycle in AGCMs is critical to establishing the potential for such prediction. Our own studies show, among other things, that soil moisture retention can lead to significant precipitation predictability in many midlatitude and tropical regions.
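The abstract does not name the "simple 50-year-old climatological relation", but Budyko's curve, which predicts the mean annual evaporation ratio E/P from the dryness index PET/P alone, is the classic relation of this kind and illustrates the idea of a simple overarching control on evaporation and runoff:

```python
import math

def budyko_evap_ratio(dryness):
    """Budyko curve: mean annual E/P as a function of the dryness
    index phi = PET/P (potential evaporation over precipitation)."""
    phi = dryness
    return math.sqrt(phi * math.tanh(1.0 / phi) * (1.0 - math.exp(-phi)))

def runoff_ratio(dryness):
    """Long-term water balance: runoff ratio Q/P = 1 - E/P."""
    return 1.0 - budyko_evap_ratio(dryness)

wet = budyko_evap_ratio(0.5)   # energy-limited regime: E/P well below 1
dry = budyko_evap_ratio(5.0)   # water-limited regime: E/P approaches 1
```

A single dimensionless index thus predicts, to first order, how precipitation partitions into evaporation and runoff, which is exactly the kind of simple control the seminar extracts from the complex AGCM output.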
Våge, Selina; Thingstad, T. Frede
2015-01-01
Trophic interactions are highly complex and modern sequencing techniques reveal enormous biodiversity across multiple scales in marine microbial communities. Within the chemically and physically relatively homogeneous pelagic environment, this calls for an explanation beyond spatial and temporal heterogeneity. Based on observations of simple parasite-host and predator-prey interactions occurring at different trophic levels and levels of phylogenetic resolution, we present a theoretical perspective on this enormous biodiversity, discussing in particular self-similar aspects of pelagic microbial food web organization. Fractal methods have been used to describe a variety of natural phenomena, with studies of habitat structures being an application in ecology. In contrast to mathematical fractals where pattern generating rules are readily known, however, identifying mechanisms that lead to natural fractals is not straight-forward. Here we put forward the hypothesis that trophic interactions between pelagic microbes may be organized in a fractal-like manner, with the emergent network resembling the structure of the Sierpinski triangle. We discuss a mechanism that could be underlying the formation of repeated patterns at different trophic levels and discuss how this may help understand characteristic biomass size-spectra that hint at scale-invariant properties of the pelagic environment. If the idea of simple underlying principles leading to a fractal-like organization of the pelagic food web could be formalized, this would extend an ecologist's mindset on how biological complexity could be accounted for. It may furthermore benefit ecosystem modeling by facilitating adequate model resolution across multiple scales. PMID:26648929
Liquid-vapor rectilinear diameter revisited
NASA Astrophysics Data System (ADS)
Garrabos, Y.; Lecoutre, C.; Marre, S.; Beysens, D.; Hahn, I.
2018-02-01
In the modern theory of critical phenomena, the liquid-vapor density diameter in simple fluids is generally expected to deviate from a rectilinear law approaching the critical point. However, by performing precise scannerlike optical measurements of the position of the SF6 liquid-vapor meniscus, in an approach much closer to criticality in temperature and density than earlier measurements, no deviation from a rectilinear diameter can be detected. The observed meniscus position from far (10 K ) to extremely close (1 mK ) to the critical temperature is analyzed using recent theoretical models to predict the complete scaling consequences of a fluid asymmetry. The temperature dependence of the meniscus position appears consistent with the law of rectilinear diameter. The apparent absence of the critical hook in SF6 therefore seemingly rules out the need for the pressure scaling field contribution in the complete scaling theoretical framework in this SF6 analysis. More generally, this work suggests a way to clarify the experimental ambiguities in the simple fluids for the near-critical singularities in the density diameter.
Dark neutrino interactions make gravitational waves blue
NASA Astrophysics Data System (ADS)
Ghosh, Subhajit; Khatri, Rishi; Roy, Tuhin S.
2018-03-01
New interactions of neutrinos can stop them from free-streaming in the early Universe even after the weak decoupling epoch. This results in the enhancement of the primordial gravitational wave amplitude on small scales compared to the standard ΛCDM prediction. In this paper, we calculate the effect of dark matter-neutrino interactions on the CMB tensor B-mode spectrum. We show that the effect of new neutrino interactions generates a scale- or ℓ-dependent imprint in the CMB B-mode power spectrum at ℓ ≳ 100. In the event that primordial B-modes are detected by future experiments, a departure from scale invariance, with a blue spectrum, may not necessarily mean failure of simple inflationary models but instead may be a sign of nonstandard interactions of relativistic particles. New interactions of neutrinos also induce a phase shift in the CMB B-mode power spectrum which cannot be mimicked by simple modifications of the primordial tensor power spectrum. There is rich information hidden in the CMB B-mode spectrum beyond just the tensor-to-scalar ratio.
A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets
NASA Astrophysics Data System (ADS)
Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.
2009-12-01
The effort to constrain regional scale carbon budgets benefits from assimilating as many high quality data sources as possible in order to reduce uncertainties. Two of the most common approaches used in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement in high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model which is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes, and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than is the case for the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data. Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluating the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone. 
Bayesian inversion is then applied to assign scaling factors that align the surface fluxes with the CO2 time series. Our project demonstrates how bottom-up and top-down techniques can be reconciled to arrive at a more robust and balanced spatial carbon budget. We will show how to evaluate existing flux products through regionally representative atmospheric observations, i.e. how well the underlying model assumptions represent processes on the regional scale. Adapting process model parameterizations sets for e.g. sub-regions, disturbance regimes, or land cover classes, in order to optimize the agreement between surface fluxes and atmospheric observations can lead to improved understanding of the underlying flux mechanisms, and reduces uncertainties in the regional carbon budgets.
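The final inversion step can be reduced to a one-parameter sketch: with Gaussian errors, the posterior for a single flux scaling factor aligning modeled fluxes with observed concentrations has a closed form. Everything below (prior width, noise level, synthetic fluxes) is illustrative and not the ORCA2 configuration:

```python
def posterior_scaling(modeled, observed, sigma_obs, prior_mean=1.0,
                      sigma_prior=0.5):
    """Gaussian Bayesian inversion for one flux scaling factor lam,
    with observed_i = lam * modeled_i + noise.  The conjugate update
    gives an analytic posterior mean and variance for lam."""
    prec = 1.0 / sigma_prior**2          # prior precision
    num = prior_mean * prec
    for f, y in zip(modeled, observed):
        num += f * y / sigma_obs**2
        prec += f * f / sigma_obs**2
    return num / prec, 1.0 / prec        # posterior mean, variance

# Synthetic check: true scaling 1.3, noise-free "observations".
modeled = [2.0, 3.0, 1.5, 4.0]
observed = [1.3 * f for f in modeled]
lam, var = posterior_scaling(modeled, observed, sigma_obs=0.1)
```

With informative data the posterior mean is pulled almost entirely to the data-implied scaling, while sparse or noisy data leave it near the prior; the full regional problem is the vector-matrix version of this same update.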
A Simple Relationship Between Short- and Long-term Seed Predation Rates
USDA-ARS?s Scientific Manuscript database
Weed seed predation is an important ecosystem service supporting weed management in low-external-input agroecosystems. For convenience, measurements of seed predation are often made at very short time scales (< 3 d). However, one of the primary uses of such data, the parameterization of models of cr...
Alternative Analysis of the Michaelis-Menten Equations
ERIC Educational Resources Information Center
Krogstad, Harald E.; Dawed, Mohammed Yiha; Tegegne, Tadele Tesfa
2011-01-01
Courses in mathematical modelling are always in need of simple, illustrative examples. The Michaelis-Menten reaction kinetics equations have been considered to be a basic example of scaling and singular perturbation. However, the leading order approximations do not easily show the expected behaviour, and this note proposes a different perturbation…
Responses of timber rattlesnakes to fire: Lessons from two prescribed burns
Steven J. Beaupre; Lara E. Douglas
2012-01-01
Timber rattlesnakes (Crotalus horridus) are excellent model organisms for understanding the effects of large scale habitat manipulations because of their low-energy lifestyle, rapid response to changes in resource environment, uniform diet (small mammals), and simple behaviors. We present two case studies that illustrate interactions between timber...
A Simple Boyle's Law Experiment.
ERIC Educational Resources Information Center
Lewis, Don L.
1997-01-01
Describes an experiment to demonstrate Boyle's law that provides pressure measurements in a familiar unit (psi) and makes no assumptions concerning atmospheric pressure. Items needed include bathroom scales and a 60-ml syringe, castor oil, disposable 3-ml syringe and needle, modeling clay, pliers, and a wooden block. Commercial devices use a…
Cyanobacterial Biofuels: Strategies and Developments on Network and Modeling.
Klanchui, Amornpan; Raethong, Nachon; Prommeenate, Peerada; Vongsangnak, Wanwipa; Meechai, Asawin
Cyanobacteria, phototrophic microorganisms, have attracted much attention recently as a promising source for environmentally sustainable biofuel production. However, barriers to commercial markets for cyanobacteria-based biofuels concern economic feasibility. Miscellaneous strategies for improving the production performance of cyanobacteria have thus been developed. Simple ad hoc strategies, however, often fail to fully optimize cell growth together with the desired product yield. With the advancement of genomics and systems biology, a new paradigm toward systems metabolic engineering has been recognized. In particular, genome-scale metabolic network reconstruction and modeling is a crucial systems-based tool for whole-cell-wide investigation and prediction. In this review, cyanobacterial genome-scale metabolic models, which offer a system-level understanding of cyanobacterial metabolism, are described. The main processes of metabolic network reconstruction and modeling of cyanobacteria are summarized. Strategies and developments on genome-scale network reconstruction and modeling through the systems metabolic engineering approach, employed for efficient cyanobacteria-based biofuel production, are discussed.
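The core computation behind genome-scale metabolic modeling is flux balance analysis (FBA): maximize an objective flux subject to steady-state stoichiometry and capacity bounds. A deliberately tiny, hypothetical three-reaction network (real models solve a linear program over thousands of reactions) makes the idea concrete:

```python
def max_biofuel(uptake_max, biomass_min):
    """Toy flux balance analysis on a hypothetical 3-reaction network:
       v1: substrate -> B   (uptake, 0 <= v1 <= uptake_max)
       v2: B -> biofuel     (objective flux)
       v3: B -> biomass     (growth demand, v3 >= biomass_min)
    Steady state on metabolite B requires v1 = v2 + v3, so maximizing
    v2 drives uptake to its bound and biomass to its floor:
    v2* = uptake_max - biomass_min (None if infeasible)."""
    if uptake_max < biomass_min:
        return None
    return uptake_max - biomass_min

def max_biofuel_scan(uptake_max, biomass_min, step=0.01):
    """Brute-force check of the analytic optimum by scanning the
    feasible (v1, v3) grid."""
    best = None
    n1 = round(uptake_max / step)
    for i in range(n1 + 1):
        v1 = i * step
        for j in range(i + 1):
            v3 = j * step
            if v3 < biomass_min:
                continue                 # growth demand violated
            v2 = v1 - v3                 # steady state fixes v2
            if best is None or v2 > best:
                best = v2
    return best

analytic = max_biofuel(10.0, 2.0)
scanned = max_biofuel_scan(10.0, 2.0)
```

The growth-versus-product trade-off the review highlights is visible even here: raising the biomass floor directly lowers the attainable biofuel flux, and genome-scale models let that trade-off be explored over the whole metabolic network.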
Steps to reconcile inflationary tensor and scalar spectra
NASA Astrophysics Data System (ADS)
Miranda, Vinícius; Hu, Wayne; Adshead, Peter
2014-05-01
The recent BICEP2 B-mode polarization determination of an inflationary tensor-scalar ratio r = 0.2^{+0.07}_{-0.05} is in tension with simple scale-free models of inflation due to a lack of a corresponding low multipole excess in the temperature power spectrum, which places a limit of r_{0.002} < 0.11 (95% C.L.) on such models. Single-field inflationary models that reconcile these two observations, even those where the tilt runs substantially, introduce a scale into the scalar power spectrum. To cancel the tensor excess, and simultaneously remove the excess already present without tensors, ideally the model should introduce this scale as a relatively sharp transition in the tensor-scalar ratio around the horizon at recombination. We consider models which generate such a step in this quantity and find that they can improve the joint fit to the temperature and polarization data by up to 2Δln L ≈ -14 without changing cosmological parameters. Precision E-mode polarization measurements should be able to test this explanation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitani, Akira; Tsubota, Makoto
2006-07-01
The energy spectrum of decaying quantum turbulence at T=0 obeys Kolmogorov's law. In addition to this, recent studies revealed that the vortex-length distribution (VLD), meaning the size distribution of the vortices, in decaying Kolmogorov quantum turbulence also obeys a power law. This power-law VLD suggests that the decaying turbulence has scale-free structure in real space. Unfortunately, however, there has been no practical study that answers the following important question: why can quantum turbulence acquire a scale-free VLD? We propose here a model to study the origin of the power law of the VLD from a generic point of view. The nature of quantized vortices allows one to describe the decay of quantum turbulence with a simple model that is similar to the Barabasi-Albert model, which explains the scale-invariance structure of large networks. We show here that such a model can reproduce the power law of the VLD well.
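The Barabasi-Albert mechanism referenced above can be sketched in a few lines: growth plus preferential attachment yields a heavy-tailed degree distribution with hubs far above the mean. This is a generic sketch of the network model, not the authors' vortex dynamics:

```python
import random

def barabasi_albert(n, seed=3):
    """Grow a network by preferential attachment: each new node links
    to one existing node chosen with probability proportional to its
    degree.  Sampling uniformly from a list holding every edge
    endpoint implements degree-proportional choice."""
    rng = random.Random(3 if seed is None else seed)
    targets = [0, 1]                # start from a single edge 0-1
    degree = {0: 1, 1: 1}
    for new in range(2, n):
        t = rng.choice(targets)     # degree-proportional target
        degree[new] = 1
        degree[t] += 1
        targets += [new, t]
    return degree

deg = barabasi_albert(20000)
hub = max(deg.values())
mean = sum(deg.values()) / len(deg)
```

The mean degree stays near 2 while the largest hub grows far beyond it, the scale-free signature that the vortex-length analogy exploits.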
Molecular dynamics of conformational substates for a simplified protein model
NASA Astrophysics Data System (ADS)
Grubmüller, Helmut; Tavan, Paul
1994-09-01
Extended molecular dynamics simulations covering a total of 0.232 μs have been carried out on a simplified protein model. Despite its simplified structure, that model exhibits properties similar to those of more realistic protein models. In particular, the model was found to undergo transitions between conformational substates at a time scale of several hundred picoseconds. The computed trajectories turned out to be sufficiently long as to permit a statistical analysis of that conformational dynamics. To check whether effective descriptions neglecting memory effects can reproduce the observed conformational dynamics, two stochastic models were studied. A one-dimensional Langevin effective potential model derived by elimination of subpicosecond dynamical processes could not describe the observed conformational transition rates. In contrast, a simple Markov model describing the transitions between but neglecting dynamical processes within conformational substates reproduced the observed distribution of first passage times. These findings suggest, that protein dynamics generally does not exhibit memory effects at time scales above a few hundred picoseconds, but confirms the existence of memory effects at a picosecond time scale.
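The Markov picture in the last paragraph, transitions between substates with no memory of the intra-state dynamics, predicts geometric (discrete-time exponential) first-passage-time statistics. A short simulation sketch, with an illustrative switching probability, shows the memoryless survival property S(a+b) = S(a)·S(b):

```python
import random

def first_passage_times(p_switch, n_samples, seed=11):
    """Markov model of conformational hopping: at every step the system
    leaves its current substate with probability p_switch, so first
    passage times are geometric with mean 1/p_switch."""
    rng = random.Random(seed)
    fpts = []
    for _ in range(n_samples):
        t = 1
        while rng.random() >= p_switch:
            t += 1
        fpts.append(t)
    return fpts

def surv(ts, t):
    """Empirical survival function P(T > t)."""
    return sum(1 for x in ts if x > t) / len(ts)

fpts = first_passage_times(0.02, 20000)
mean_fpt = sum(fpts) / len(fpts)
# Memorylessness: S(100) should match S(50)^2 for a geometric law.
lhs = surv(fpts, 100)
rhs = surv(fpts, 50) ** 2
```

Agreement between the simulated first-passage distribution and this memoryless form is essentially the test the authors applied to their conformational-transition data.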
Simple Statistical Model to Quantify Maximum Expected EMC in Spacecraft and Avionics Boxes
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Bremner, Paul
2014-01-01
This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. Test and model data correlation is shown. In addition, this presentation shows application of the power balance method and its extension to predict the variance and maximum expected mean of the E-field data. This is valuable for large-scale evaluations of transmission inside cavities.
Bai, Jie; Liu, He; Yin, Bo; Ma, Huijun; Chen, Xinchun
2017-02-01
Anaerobic acidogenic fermentation with high-solid sludge is a promising method for volatile fatty acid (VFA) production to realize resource recovery. In this study, to model inhibition by free ammonia in high-solid sludge fermentation, the anaerobic digestion model No. 1 (ADM1) was modified to simulate VFA generation in batch, semi-continuous and full-scale sludge fermentation. The ADM1 was operated on the platform AQUASIM 2.0. Three kinds of inhibition forms, e.g., simple inhibition, Monod and non-inhibition forms, were integrated into the ADM1 and tested against real experimental data for batch and semi-continuous fermentation, respectively. The improved particle swarm optimization technique was used for kinetic parameter estimation using the software MATLAB 7.0. In the modified ADM1, K_S for acetate is 0.025, k_m,ac is 12.51, and K_I,NH3 is 0.02. The results showed that the simple inhibition model could simulate VFA generation accurately, while the Monod model was the better inhibition kinetics form for semi-continuous fermentation at pH 10.0. Finally, the modified ADM1 could successfully describe VFA generation and ammonia accumulation in a 30 m³ full-scale sludge fermentation reactor, indicating that the developed model is applicable to high-solid sludge anaerobic fermentation. Copyright © 2016. Published by Elsevier B.V.
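A sketch of the inhibition idea: Monod uptake of acetate attenuated by a free-ammonia term. The abstract does not give the exact functional form or units, so the non-competitive form below, and treating the quoted constants (K_S = 0.025, k_m,ac = 12.51, K_I,NH3 = 0.02) as dimensionless, are assumptions for illustration:

```python
def acetate_uptake_rate(s_ac, x, nh3, k_m=12.51, k_s=0.025, k_i_nh3=0.02):
    """Monod uptake of acetate with non-competitive free-ammonia
    inhibition (one common 'simple inhibition' form in ADM1 variants):
        rho = k_m * S/(K_s + S) * X * 1/(1 + NH3/K_i)
    Constants default to the values quoted in the abstract; their
    units are not stated there, so treat them as illustrative."""
    monod = s_ac / (k_s + s_ac)
    inhibition = 1.0 / (1.0 + nh3 / k_i_nh3)
    return k_m * monod * x * inhibition

r_free = acetate_uptake_rate(1.0, 1.0, 0.0)     # no ammonia present
r_inhib = acetate_uptake_rate(1.0, 1.0, 0.02)   # NH3 = K_i: rate halves
```

The inhibition constant has a direct reading in this form: when free ammonia equals K_I,NH3 the uptake rate drops to exactly half its uninhibited value.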
The Role of Wakes in Modelling Tidal Current Turbines
NASA Astrophysics Data System (ADS)
Conley, Daniel; Roc, Thomas; Greaves, Deborah
2010-05-01
The eventual proper development of arrays of Tidal Current Turbines (TCT) will require a balance which maximizes power extraction while minimizing environmental impacts. Idealized analytical analogues and simple 2-D models are useful tools for investigating questions of a general nature but do not represent a practical tool for application to realistic cases. Some form of 3-D numerical simulations will be required for such applications and the current project is designed to develop a numerical decision-making tool for use in planning large scale TCT projects. The project is predicated on the use of an existing regional ocean modelling framework (the Regional Ocean Modelling System - ROMS) which is modified to enable the user to account for the effects of TCTs. In such a framework where mixing processes are highly parametrized, the fidelity of the quantitative results is critically dependent on the parameter values utilized. In light of the early stage of TCT development and the lack of field scale measurements, the calibration of such a model is problematic. In the absence of explicit calibration data sets, the device wake structure has been identified as an efficient feature for model calibration. This presentation will discuss efforts to design an appropriate calibration scheme focused on wake decay; the motivation for this approach, the techniques applied, validation results from simple test cases, and limitations will be presented.
A Simple Parameterization of Mixing of Passive Scalars in Turbulent Flows
NASA Astrophysics Data System (ADS)
Nithianantham, Ajithshanthar; Venayagamoorthy, Karan
2015-11-01
A practical model for quantifying the turbulent diascalar diffusivity is proposed as K_s = 1.1 γ' L_T k^{1/2}, where L_T is the Thorpe length scale, k is the turbulent kinetic energy and γ' is one-half of the mechanical-to-scalar time scale ratio, shown by previous researchers to be approximately 0.7. The novelty of the proposed model lies in the use of L_T, a length scale widely used in stably stratified flows (almost exclusively in oceanography), for quantifying turbulent mixing in unstratified flows. L_T can be readily obtained in the field using a Conductivity, Temperature and Depth (CTD) profiler. The turbulent kinetic energy is mostly contained in the large scales of the flow field and hence can be measured in the field or modeled in numerical simulations. Comparisons using DNS data show remarkably good agreement between the predicted and exact diffusivities. This work was supported by the Office of Naval Research and the National Science Foundation.
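A sketch of how such a model is applied to profiler data: sort the measured profile to obtain Thorpe displacements, form L_T as their rms, then evaluate K_s = 1.1 γ' L_T k^{1/2}. Reading the abstract literally, γ' ≈ 0.35 (half of a time scale ratio of ≈ 0.7) is assumed here, and the profile is synthetic:

```python
def thorpe_length(profile):
    """Thorpe length scale: rms of the displacements (in samples)
    needed to sort a measured profile into monotonic order."""
    order = sorted(range(len(profile)), key=lambda i: profile[i])
    disp = [i - j for i, j in enumerate(order)]
    return (sum(d * d for d in disp) / len(disp)) ** 0.5

def diffusivity(profile, k, dz=1.0, gamma=0.35):
    """K_s = 1.1 * gamma * L_T * k^(1/2); dz converts the sample-unit
    Thorpe scale to physical length.  gamma = 0.35 is an assumed
    reading of the abstract (half of a ratio of ~0.7)."""
    return 1.1 * gamma * thorpe_length(profile) * dz * k ** 0.5

# Synthetic profile with one overturned segment (indices 20..29 reversed).
sorted_profile = [0.1 * i for i in range(50)]
overturn = sorted_profile[:20] + sorted_profile[20:30][::-1] + sorted_profile[30:]
lt_sorted = thorpe_length(sorted_profile)   # fully sorted: L_T = 0
lt_over = thorpe_length(overturn)           # overturn gives L_T > 0
```

A monotonic profile yields L_T = 0 and hence zero modeled diffusivity, while the overturned segment produces a finite L_T, which is exactly how the scale flags active mixing in CTD casts.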
From Buckets to Basins: Scaling up from the CZO to the NOAA National Water Model
NASA Astrophysics Data System (ADS)
Dugger, A. L.; Gochis, D.; Cosgrove, B.; Sampson, K. M.; McCreight, J. L.; Rafieeinasab, A.
2017-12-01
NOAA's National Water Model (NWM) is generating terabytes of data on current and future states of water in streams, soils, snowpacks, lakes, and floodplains across the U.S. Altogether there are approximately 2.7 million stream reaches in the NWM and land cells distributed every 250-m (soil moisture, inundation) and 1-km (snow, evapotranspiration). Water predictions span the next hour to the next 30 days. Flood forecasting is an obvious NWM priority in the near term, but longer-range plans extend to water supply planning, drought forecasting, and water quality. An obvious question posed to a model operating across this many dimensions of space, time, and variables is: are you including the right processes and parameterizations to capture the hydrologic behaviors you are designed for? To answer this question, we generally rely on networks of in-situ observations to constrain models via parameter estimation or evaluate alternate process representations. While this gets us part of the way there, the question remains how well these in-situ characterizations scale up in the context of a national-scale model. The WRF-Hydro community hydrologic modeling system provides the initial backbone for the NWM, driving simulation of water and energy within the critical zone - vertical energy and water fluxes, lateral redistribution of surface and subsurface water, simple deep groundwater dynamics, and channel routing. In this study, we first present baseline performance of the NWM over US-wide networks of streamflow (USGS), soil moisture (CRN, SCAN), and evapotranspiration (Ameriflux) observations at a range of spatial and temporal scales. We conduct a series of simple experiments using different submodel combinations of WRF-Hydro at high-resolution to predict water storage and partitioning behavior at 3 well-instrumented catchments, with the goal of optimizing combined performance of snowpack, soil moisture, ET, and streamflow prediction. 
We scale up the optimal physics suites and parameters to the Omernik Level 3 Ecoregion at the NWM scale and assess changes in water storage and partitioning at all gages within the ecoregion. While this is a fairly limited experiment, we hope to engage the critical zone research community in considering how we can leverage the CZO networks to inform NWM model improvement.
NASA Technical Reports Server (NTRS)
Stordal, Frode; Garcia, Rolando R.
1987-01-01
The 1-1/2-D model of Holton (1986), which is actually a highly truncated two-dimensional model, describes latitudinal variations of tracer mixing ratios in terms of their projections onto second-order Legendre polynomials. The present study extends the work of Holton by including tracers with photochemical production in the stratosphere (O3 and NOy). It also includes latitudinal variations in the photochemical sources and sinks, improving slightly the calculated global mean profiles for the long-lived tracers studied by Holton and improving substantially the latitudinal behavior of ozone. Sensitivity tests of the dynamical parameters in the model are performed, showing that the response of the model to changes in vertical residual meridional winds and horizontal diffusion coefficients is similar to that of a full two-dimensional model. A simple ozone perturbation experiment shows the model's ability to reproduce large-scale latitudinal variations in total ozone column depletions as well as ozone changes in the chemically controlled upper stratosphere.
Compensation for large tensor modes with iso-curvature perturbations in CMB anisotropies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawasaki, Masahiro; Yokoyama, Shuichiro, E-mail: kawasaki@icrr.u-tokyo.ac.jp, E-mail: shu@icrr.u-tokyo.ac.jp
Recently, BICEP2 has reported the large tensor-to-scalar ratio r = 0.2 (+0.07/−0.05) from observations of the cosmic microwave background (CMB) B-mode at degree scales. Since tensor modes induce not only the CMB B-mode but also temperature fluctuations on large scales, realizing temperature fluctuations consistent with the Planck result requires suppressing scalar perturbations on the corresponding large scales. To realize such a suppression, we consider anti-correlated iso-curvature perturbations, which could arise in the simple curvaton model.
Petterson, S R; Stenström, T A
2015-09-01
To support the implementation of quantitative microbial risk assessment (QMRA) for managing infectious risks associated with drinking water systems, a simple modeling approach for quantifying Log10 reduction across a free chlorine disinfection contactor was developed. The study was undertaken in three stages: firstly, review of the laboratory studies published in the literature; secondly, development of a conceptual approach to apply the laboratory studies to full-scale conditions; and finally implementation of the calculations for a hypothetical case study system. The developed model explicitly accounted for variability in residence time and pathogen specific chlorine sensitivity. Survival functions were constructed for a range of pathogens relying on the upper bound of the reported data transformed to a common metric. The application of the model within a hypothetical case study demonstrated the importance of accounting for variable residence time in QMRA. While the overall Log10 reduction may appear high, small parcels of water with short residence time can compromise the overall performance of the barrier. While theoretically simple, the approach presented is of great value for undertaking an initial assessment of a full-scale disinfection contactor based on limited site-specific information.
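The importance of variable residence time highlighted above can be illustrated with a minimal Monte Carlo sketch. This is not the authors' model: a hypothetical Chick-Watson-type inactivation rate is averaged over an assumed gamma-distributed residence time, showing how small parcels of water with short residence times drag down the effective Log10 reduction relative to the plug-flow ideal.

```python
import math
import random

def log10_reduction(ct_rate, chlorine_mg_l, residence_min):
    """Chick-Watson style Log10 reduction for one parcel of water."""
    return ct_rate * chlorine_mg_l * residence_min

def effective_log_reduction(ct_rate, chlorine_mg_l, mean_res, cv,
                            n=100_000, seed=1):
    """Average survival over a variable residence-time distribution.

    Residence times are drawn from a gamma distribution with the given
    mean and coefficient of variation (an illustrative assumption).
    """
    rng = random.Random(seed)
    shape = 1.0 / cv**2          # gamma shape from CV
    scale = mean_res / shape     # gamma scale from the mean
    survival = 0.0
    for _ in range(n):
        t = rng.gammavariate(shape, scale)
        survival += 10.0 ** (-log10_reduction(ct_rate, chlorine_mg_l, t))
    survival /= n
    return -math.log10(survival)

# Plug-flow (uniform residence time) vs. variable residence time:
ideal = log10_reduction(ct_rate=0.1, chlorine_mg_l=1.0, residence_min=30.0)
actual = effective_log_reduction(ct_rate=0.1, chlorine_mg_l=1.0,
                                 mean_res=30.0, cv=0.5)
```

With these illustrative numbers, the effective barrier performance falls well below the nominal 3-Log10 plug-flow value, because survival is dominated by the short-residence tail of the distribution.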
Interactive coupling of regional climate and sulfate aerosol models over eastern Asia
NASA Astrophysics Data System (ADS)
Qian, Yun; Giorgi, Filippo
1999-03-01
The NCAR regional climate model (RegCM) is interactively coupled to a simple radiatively active sulfate aerosol model over eastern Asia. Both direct and indirect aerosol effects are represented. The coupled model system is tested for two simulation periods, November 1994 and July 1995, with aerosol sources representative of present-day anthropogenic sulfur emissions. The model sensitivity to the intensity of the aerosol source is also studied. The main conclusions from our work are as follows: (1) The aerosol distribution and cycling processes show substantial regional spatial variability, and temporal variability on a range of scales, from the diurnal scale of boundary layer and cumulus cloud evolution to the 3-10 day scale of synoptic events and the interseasonal scale of general circulation features; (2) both direct and indirect aerosol forcings have regional effects on surface climate; (3) the regional climate response to the aerosol forcing is highly nonlinear, especially during the summer, due to interactions with cloud and precipitation processes; (4) in our simulations the role of the aerosol indirect effects is dominant over that of direct effects; (5) aerosol-induced feedback processes can affect the aerosol burdens at the subregional scale. This work constitutes the first step in a long-term research project aimed at coupling a hierarchy of chemistry/aerosol models to the RegCM over the eastern Asia region.
NASA Astrophysics Data System (ADS)
De Domenico, Manlio
2018-03-01
Biological systems, from a cell to the human brain, are inherently complex. A powerful representation of such systems, described by an intricate web of relationships across multiple scales, is provided by complex networks. Recently, several studies have highlighted how simple networks - obtained by aggregating or neglecting the temporal or categorical description of biological data - are unable to account for the richness of information characterizing biological systems. More complex models, namely multilayer networks, are needed to account for interdependencies, often varying across time, of biological interacting units within a cell, a tissue or parts of an organism.
NASA Astrophysics Data System (ADS)
Honarmand, M.; Moradi, M.
2018-06-01
In this paper, the scaled boundary finite element method (SBFM) is used to simulate, for the first time, both perfect and cracked nanographene sheets. In this analysis, the atomic carbon bonds are modeled by simple bar elements with circular cross-sections. In contrast to molecular dynamics (MD), the results obtained from the SBFM analysis are quite acceptable for zero-degree cracks. For all angles except zero, the Griffith criterion can be applied to relate critical stress and crack length. Finally, despite the simplifications used in the nanographene analysis, the obtained results reproduce the mechanical behavior with high accuracy compared with experimental and MD results.
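The Griffith criterion invoked above relates critical stress to crack length as sigma_c = sqrt(2*E*gamma_s/(pi*a)). A minimal sketch, with graphene-like material constants that are illustrative placeholders rather than values from the paper:

```python
import math

def griffith_critical_stress(E, gamma_s, a):
    """Griffith criterion for a through crack.

    E: Young's modulus (Pa); gamma_s: surface energy (J/m^2);
    a: half crack length (m). Returns critical stress (Pa).
    """
    return math.sqrt(2.0 * E * gamma_s / (math.pi * a))

# Assumed graphene-like numbers: E ~ 1 TPa, gamma_s ~ 6 J/m^2.
sigma_1nm = griffith_critical_stress(1e12, 6.0, 1e-9)
sigma_4nm = griffith_critical_stress(1e12, 6.0, 4e-9)
```

The inverse square-root dependence means quadrupling the crack length halves the critical stress, which is the relation one would test against the SBFM results for nonzero crack angles.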
NASA Technical Reports Server (NTRS)
Hoffman, P. F.
1986-01-01
A prograding (direction unspecified) trench-arc system is favored as a simple yet comprehensive model for crustal generation in a 250,000 sq km granite-greenstone terrain. The model accounts for the evolutionary sequence of volcanism, sedimentation, deformation, metamorphism, and plutonism observed throughout the Slave province. It explains both unconformable (trench inner slope) and subconformable (trench outer slope) relations between the volcanics and overlying turbidites, as well as the existence of relatively minor amounts of pre-greenstone basement (microcontinents) and syn-greenstone plutons (accreted arc roots). Predictions include a variable gap between greenstone volcanism and trench turbidite sedimentation (accompanied by minor volcanism) and systematic regional variations in the age span of volcanism and plutonism. Implications of the model are illustrated with reference to a 1:1 million scale geological map of the Slave Province (and its bounding 1.0 Ga orogens).
Mate Finding, Sexual Spore Production, and the Spread of Fungal Plant Parasites.
Hamelin, Frédéric M; Castella, François; Doli, Valentin; Marçais, Benoît; Ravigné, Virginie; Lewis, Mark A
2016-04-01
Sexual reproduction and dispersal are often coupled in organisms mixing sexual and asexual reproduction, such as fungi. The aim of this study is to evaluate the impact of mate limitation on the spreading speed of fungal plant parasites. Starting from a simple model with two coupled partial differential equations, we take advantage of the fact that we are interested in the dynamics over large spatial and temporal scales to reduce the model to a single equation. We obtain a simple expression for speed of spread, accounting for both sexual and asexual reproduction. Taking Black Sigatoka disease of banana plants as a case study, the model prediction is in close agreement with the actual spreading speed (100 km per year), whereas a similar model without mate limitation predicts a wave speed one order of magnitude greater. We discuss the implications of these results to control parasites in which sexual reproduction and dispersal are intrinsically coupled.
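The order-of-magnitude contrast reported above can be mimicked with the generic Fisher-KPP front-speed formula c = 2*sqrt(r*D): because mate-limited sexual reproduction is quadratic in density at low density, it drops out of the linear spreading speed, while a naive model counts it as if mates were always available. The rates below are purely illustrative assumptions, chosen only to reproduce the reported ~100 km/year, and are not the paper's parameters.

```python
import math

def kpp_speed(r, D):
    """Linear spreading speed of a Fisher-KPP type invasion front."""
    return 2.0 * math.sqrt(r * D)

D = 400.0      # km^2/year, illustrative dispersal coefficient
r_asex = 6.25  # 1/year, asexual growth rate (illustrative)
r_sex = 625.0  # 1/year, sexual growth rate if mate-unlimited (illustrative)

# Mate-limited front: only the asexual (linear) term sets the speed.
c_mate_limited = kpp_speed(r_asex, D)
# Naive front: sexual reproduction treated as mate-unlimited.
c_naive = kpp_speed(r_asex + r_sex, D)
```

With these numbers the mate-limited speed is 100 km/year, while ignoring mate limitation inflates it by roughly an order of magnitude, as the abstract describes.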
NASA Astrophysics Data System (ADS)
Guenet, Bertrand; Esteban Moyano, Fernando; Peylin, Philippe; Ciais, Philippe; Janssens, Ivan A.
2016-03-01
Priming of soil carbon decomposition encompasses different processes through which the decomposition of native (already present) soil organic matter is amplified through the addition of new organic matter, with new inputs typically being more labile than the native soil organic matter. Evidence for priming comes from laboratory and field experiments, but to date there is no estimate of its impact at global scale and under the current anthropogenic perturbation of the carbon cycle. Current soil carbon decomposition models do not include priming mechanisms, thereby introducing uncertainty when extrapolating short-term local observations to ecosystem, regional, and global scales. In this study we present a simple conceptual model of decomposition priming, called PRIM, able to reproduce laboratory (incubation) and field (litter manipulation) priming experiments. Parameters for this model were first optimized against data from 20 soil incubation experiments using a Bayesian framework. The optimized parameter values were evaluated against another set of soil incubation data independent from the ones used for calibration, and the PRIM model reproduced the soil incubation data better than the original, CENTURY-type soil decomposition model, whose decomposition equations are based only on first-order kinetics. We then compared the PRIM model and the standard first-order decay model incorporated into the global land biosphere model ORCHIDEE (Organising Carbon and Hydrology In Dynamic Ecosystems). A test of both models was performed at ecosystem scale using litter manipulation experiments from five sites. Although both versions were equally able to reproduce observed decay rates of litter, only ORCHIDEE-PRIM could simulate the observed priming (R2 = 0.54) in cases where litter was added or removed.
This result suggests that a conceptually simple and numerically tractable representation of priming adapted to global models is able to capture the sign and magnitude of the priming of litter and soil organic matter.
NASA Astrophysics Data System (ADS)
Guenet, B.; Moyano, F. E.; Peylin, P.; Ciais, P.; Janssens, I. A.
2015-10-01
Priming of soil carbon decomposition encompasses different processes through which the decomposition of native (already present) soil organic matter is amplified through the addition of new organic matter, with new inputs typically being more labile than the native soil organic matter. Evidence for priming comes from laboratory and field experiments, but to date there is no estimate of its impact at global scale and under the current anthropogenic perturbation of the carbon cycle. Current soil carbon decomposition models do not include priming mechanisms, thereby introducing uncertainty when extrapolating short-term local observations to ecosystem, regional, and global scales. In this study we present a simple conceptual model of decomposition priming, called PRIM, able to reproduce laboratory (incubation) and field (litter manipulation) priming experiments. Parameters for this model were first optimized against data from 20 soil incubation experiments using a Bayesian framework. The optimized parameter values were evaluated against another set of soil incubation data independent from the ones used for calibration, and the PRIM model reproduced the soil incubation data better than the original, CENTURY-type soil decomposition model, whose decomposition equations are based only on first-order kinetics. We then compared the PRIM model and the standard first-order decay model incorporated into the global land biosphere model ORCHIDEE. A test of both models was performed at ecosystem scale using litter manipulation experiments from 5 sites. Although both versions were equally able to reproduce observed decay rates of litter, only ORCHIDEE-PRIM could simulate the observed priming (R2 = 0.54) in cases where litter was added or removed. This result suggests that a conceptually simple and numerically tractable representation of priming adapted to global models is able to capture the sign and magnitude of the priming of litter and soil organic matter.
Towards a simple representation of chalk hydrology in land surface modelling
NASA Astrophysics Data System (ADS)
Rahman, Mostaquimur; Rosolem, Rafael
2017-01-01
Modelling and monitoring of hydrological processes in the unsaturated zone of chalk, a porous medium with fractures, is important to optimize water resource assessment and management practices in the United Kingdom (UK). However, incorporating the processes governing water movement through a chalk unsaturated zone in a numerical model is complicated mainly due to the fractured nature of chalk that creates high-velocity preferential flow paths in the subsurface. In general, flow through a chalk unsaturated zone is simulated using the dual-porosity concept, which often involves calibration of a relatively large number of model parameters, potentially undermining applications to large regions. In this study, a simplified parameterization, namely the Bulk Conductivity (BC) model, is proposed for simulating hydrology in a chalk unsaturated zone. This new parameterization introduces only two additional parameters (namely the macroporosity factor and the soil wetness threshold parameter for fracture flow activation) and uses the saturated hydraulic conductivity from the chalk matrix. The BC model is implemented in the Joint UK Land Environment Simulator (JULES) and applied to a study area encompassing the Kennet catchment in the southern UK. This parameterization is further calibrated at the point scale using soil moisture profile observations. The performance of the calibrated BC model in JULES is assessed and compared against the performance of both the default JULES parameterization and the uncalibrated version of the BC model implemented in JULES. Finally, the model performance at the catchment scale is evaluated against independent data sets (e.g. runoff and latent heat flux). The results demonstrate that the inclusion of the BC model in JULES improves simulated land surface mass and energy fluxes over the chalk-dominated Kennet catchment. 
Therefore, the simple approach described in this study may be used to incorporate the flow processes through a chalk unsaturated zone in large-scale land surface modelling applications.
Generating clustered scale-free networks using Poisson based localization of edges
NASA Astrophysics Data System (ADS)
Türker, İlker
2018-05-01
We introduce a variety of network models using a Poisson-based edge localization strategy, which results in clustered scale-free topologies. We first verify the success of our localization strategy by realizing a variant of the well-known Watts-Strogatz model with an inverse approach, implying a small-world regime of rewiring from a random network toward a regular one. We then apply the rewiring strategy to a pure Barabasi-Albert model and successfully achieve a small-world regime, with a limited capacity for the scale-free property. To imitate the high clustering of scale-free networks with higher accuracy, we adapt the Poisson-based wiring strategy to a growing network with the ingredients of both preferential attachment and local connectivity. To achieve the collocation of these properties, we use a routine of flattening the edge array, sorting it, and applying a mixing procedure to assemble both global connections with preferential attachment and local clusters. As a result, we achieve clustered scale-free networks in a computational fashion, diverging from recent studies by following a simple but efficient approach.
Highly Fluorescent Noble Metal Quantum Dots
Zheng, Jie; Nicovich, Philip R.; Dickson, Robert M.
2009-01-01
Highly fluorescent, water-soluble, few-atom noble metal quantum dots have been created that behave as multi-electron artificial atoms with discrete, size-tunable electronic transitions throughout the visible and near IR. These “molecular metals” exhibit highly polarizable transitions and scale in size according to the simple relation E_Fermi/N^(1/3) predicted by the free-electron model of metallic behavior. This simple scaling indicates that fluorescence arises from intraband transitions of free electrons and that these conduction electron transitions are the low-number limit of the plasmon – the collective dipole oscillations occurring when a continuous density of states is reached. Providing the “missing link” between atomic and nanoparticle behavior in noble metals, these emissive, water-soluble Au nanoclusters open new opportunities for biological labels, energy transfer pairs, and light-emitting sources in nanoscale optoelectronics. PMID:17105412
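The E_Fermi/N^(1/3) relation quoted above translates directly into size-tunable emission. A minimal sketch using the bulk Fermi energy of gold (5.53 eV, a literature value); the specific cluster sizes chosen are for illustration only:

```python
E_FERMI_AU = 5.53  # eV, bulk Fermi energy of gold

def emission_energy_ev(n_atoms):
    """Spherical jellium / free-electron scaling: E = E_Fermi / N^(1/3)."""
    return E_FERMI_AU / n_atoms ** (1.0 / 3.0)

def emission_wavelength_nm(n_atoms):
    """Convert the transition energy to an emission wavelength (hc = 1239.84 eV nm)."""
    return 1239.84 / emission_energy_ev(n_atoms)
```

Small clusters (a few atoms) emit in the blue, while clusters of a few tens of atoms shift toward the red and near IR, reproducing the size-tunable trend described in the abstract.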
Self-organization of cosmic radiation pressure instability. II - One-dimensional simulations
NASA Technical Reports Server (NTRS)
Hogan, Craig J.; Woods, Jorden
1992-01-01
The clustering of statistically uniform discrete absorbing particles moving solely under the influence of radiation pressure from uniformly distributed emitters is studied in a simple one-dimensional model. Radiation pressure tends to amplify statistical clustering in the absorbers; the absorbing material is swept into empty bubbles, the biggest bubbles grow bigger almost as they would in a uniform medium, and the smaller ones get crushed and disappear. Numerical simulations of a one-dimensional system are used to support the conjecture that the system is self-organizing. Simple statistics indicate that a wide range of initial conditions produce structure approaching the same self-similar statistical distribution, whose scaling properties follow those of the attractor solution for an isolated bubble. The importance of the process for large-scale structuring of the interstellar medium is briefly discussed.
Scaling of drizzle virga depth with cloud thickness for marine stratocumulus clouds
Yang, Fan; Luke, Edward P.; Kollias, Pavlos; ...
2018-04-20
Drizzle plays a crucial role in cloud lifetime and radiation properties of marine stratocumulus clouds. Understanding where drizzle exists in the sub-cloud layer, which depends on drizzle virga depth, can help us better understand where below-cloud scavenging and evaporative cooling and moisturizing occur. In this study, we examine the statistical properties of drizzle frequency and virga depth of marine stratocumulus based on unique ground-based remote sensing data. Results show that marine stratocumulus clouds are drizzling nearly all the time. In addition, we derive a simple scaling analysis between drizzle virga thickness and cloud thickness. Our analytical expression agrees with the observational data reasonably well, which suggests that our formula provides a simple parameterization for drizzle virga of stratocumulus clouds suitable for use in other models.
Balkányi, László
2002-01-01
To develop information systems (IS) in the changing environment of the health sector, a simple but thorough model, avoiding the techno-jargon of informatics, might be useful for top management. A platform-neutral, extensible, transparent conceptual model should be established. Limitations of current methods lead to a simple but comprehensive mapping in the form of a three-dimensional cube. The three 'orthogonal' views are (a) organizational functionality, (b) organizational structures, and (c) information technology. Each of the cube's sides is described according to its nature. This approach makes it possible to define any IS component as a certain point/layer/domain of the cube, and also enables management to label all IS components independently from any supplier(s) and/or any specific platform. The model handles changes in organizational structure, business functionality, and the serving info-system independently from each other. Practical applications extend to (a) planning complex new ISs, (b) guiding development of multi-vendor, multi-site ISs, (c) supporting large-scale public procurement procedures and the contracting and implementation phases by establishing a platform-neutral reference, and (d) keeping an exhaustive inventory of an existing large-scale system that handles non-tangible aspects of the IS.
Kullback-Leibler divergence measure of intermittency: Application to turbulence
NASA Astrophysics Data System (ADS)
Granero-Belinchón, Carlos; Roux, Stéphane G.; Garnier, Nicolas B.
2018-01-01
For generic systems exhibiting power law behaviors, and hence multiscale dependencies, we propose a simple tool to analyze multifractality and intermittency, after noticing that these concepts are directly related to the deformation of a probability density function from Gaussian at large scales to non-Gaussian at smaller scales. Our framework is based on information theory and uses Shannon entropy and Kullback-Leibler divergence. We provide an extensive application to three-dimensional fully developed turbulence, seen here as a paradigmatic complex system where intermittency was historically defined and the concepts of scale invariance and multifractality were extensively studied and benchmarked. We compute our quantity on experimental Eulerian velocity measurements, as well as on synthetic processes and phenomenological models of fluid turbulence. Our approach is very general and does not require any underlying model of the system, although it can probe the relevance of such a model.
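The core quantity above, the Kullback-Leibler divergence between the distribution at a given scale and a Gaussian, can be sketched without any turbulence machinery. In this hedged toy version, a heavy-tailed sample (a lognormally modulated Gaussian, loosely mimicking a multiplicative cascade) shows a markedly larger divergence from Gaussianity than a Gaussian control:

```python
import math
import random

def kl_to_gaussian(samples, bins=41, lo=-6.0, hi=6.0):
    """Discrete KL divergence D(P || G) between the empirical distribution
    of standardized samples and a standard Gaussian, over fixed bins."""
    n = len(samples)
    mu = sum(samples) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in samples) / n)
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in samples:
        z = (x - mu) / sd
        if lo <= z < hi:
            counts[int((z - lo) / width)] += 1
    total = sum(counts)
    kl = 0.0
    for i, c in enumerate(counts):
        if c == 0:
            continue
        p = c / total
        z = lo + (i + 0.5) * width                     # bin center
        q = width * math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
        kl += p * math.log(p / q)
    return kl

rng = random.Random(0)
gauss = [rng.gauss(0, 1) for _ in range(50_000)]
heavy = [rng.gauss(0, 1) * math.exp(0.5 * rng.gauss(0, 1))
         for _ in range(50_000)]
```

The Gaussian control yields a divergence near zero (limited only by sampling and binning), while the cascade-like sample does not; tracking this quantity across scales is the spirit of the intermittency measure described above.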
NASA Astrophysics Data System (ADS)
Kourdis, Panayotis D.; Steuer, Ralf; Goussis, Dimitris A.
2010-09-01
Large-scale models of cellular reaction networks are usually highly complex and characterized by a wide spectrum of time scales, making a direct interpretation and understanding of the relevant mechanisms almost impossible. We address this issue by demonstrating the benefits provided by model reduction techniques. We employ the Computational Singular Perturbation (CSP) algorithm to analyze the glycolytic pathway of intact yeast cells in the oscillatory regime. As a primary object of research for many decades, glycolytic oscillations represent a paradigmatic candidate for studying biochemical function and mechanisms. Using a previously published full-scale model of glycolysis, we show that, due to fast dissipative time scales, the solution is asymptotically attracted on a low dimensional manifold. Without any further input from the investigator, CSP clarifies several long-standing questions in the analysis of glycolytic oscillations, such as the origin of the oscillations in the upper part of glycolysis, the importance of energy and redox status, as well as the fact that neither the oscillations nor cell-cell synchronization can be understood in terms of glycolysis as a simple linear chain of sequentially coupled reactions.
NASA Astrophysics Data System (ADS)
Fan, Ying; Miguez-Macho, Gonzalo; Weaver, Christopher P.; Walko, Robert; Robock, Alan
2007-05-01
Soil moisture is a key participant in land-atmosphere interactions and an important determinant of terrestrial climate. In regions where the water table is shallow, soil moisture is coupled to the water table. This paper is the first of a two-part study to quantify this coupling and explore its implications in the context of climate modeling. We examine the observed water table depth in the lower 48 states of the United States in search of salient spatial and temporal features that are relevant to climate dynamics. As a means to interpolate and synthesize the scattered observations, we use a simple two-dimensional groundwater flow model to construct an equilibrium water table as a result of long-term climatic and geologic forcing. Model simulations suggest that the water table depth exhibits spatial organization at watershed, regional, and continental scales, which may have implications for the spatial organization of soil moisture at similar scales. The observations suggest that water table depth varies at diurnal, event, seasonal, and interannual scales, which may have implications for soil moisture memory at these scales.
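The equilibrium water table construction can be illustrated with a one-dimensional toy version of such a groundwater model (all parameter values below are illustrative assumptions, not the study's): long-term recharge R is balanced by lateral flow with transmissivity T toward a fixed-head river, i.e. T h'' + R = 0, solved here by Jacobi relaxation between a no-flow divide and the river.

```python
def equilibrium_water_table(n=30, dx=100.0, recharge=1e-3,
                            transmissivity=100.0, h_river=0.0,
                            iters=30_000):
    """Relax T h'' + R = 0 on a 1-D transect: no-flow divide at node 0,
    fixed head (river) at node n-1. Units: m and days, illustrative."""
    h = [h_river] * n
    src = recharge * dx * dx / (2.0 * transmissivity)
    for _ in range(iters):
        new = list(h)
        new[0] = h[1] + src                        # no-flow (ghost-node) boundary
        for i in range(1, n - 1):
            new[i] = 0.5 * (h[i - 1] + h[i + 1]) + src
        h = new                                    # new[n-1] stays at h_river
    return h
```

Because the solution is quadratic, the relaxed profile matches the analytic h(x) = R(L^2 - x^2)/(2T) with x measured from the divide, a water table mounding under recharge and draining to the river, the same balance the two-dimensional model above resolves at continental scale.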
Revisiting a model of ontogenetic growth: estimating model parameters from theory and data.
Moses, Melanie E; Hou, Chen; Woodruff, William H; West, Geoffrey B; Nekola, Jeffery C; Zuo, Wenyun; Brown, James H
2008-05-01
The ontogenetic growth model (OGM) of West et al. provides a general description of how metabolic energy is allocated between production of new biomass and maintenance of existing biomass during ontogeny. Here, we reexamine the OGM, make some minor modifications and corrections, and further evaluate its ability to account for empirical variation in rates of metabolism and biomass in vertebrates, both during ontogeny and across species of varying adult body size. We show that the updated version of the model is internally consistent and is consistent with other predictions of metabolic scaling theory and empirical data. The OGM predicts not only the near-universal sigmoidal form of growth curves but also the M^(1/4) scaling of the characteristic times of ontogenetic stages, in addition to the curvilinear decline in growth efficiency described by Brody. Additionally, the OGM relates the M^(3/4) scaling across adults of different species to the scaling of metabolic rate across ontogeny within species. In providing a simple, quantitative description of how energy is allocated to growth, the OGM calls attention to unexplained variation, unanswered questions, and opportunities for future research.
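The OGM's growth equation, dm/dt = a m^(3/4) (1 - (m/M)^(1/4)), has a known closed-form solution (the "universal growth curve"), which makes a quick numerical sketch easy to check. The parameter values below are illustrative, not taken from the paper:

```python
import math

def ogm_mass_numeric(t, m0, M, a, dt=0.01):
    """Forward-Euler integration of dm/dt = a*m^(3/4)*(1 - (m/M)^(1/4)).

    m0: birth mass, M: asymptotic adult mass, a: growth coefficient.
    """
    m = m0
    for _ in range(int(t / dt)):
        m += dt * a * m ** 0.75 * (1.0 - (m / M) ** 0.25)
    return m

def ogm_mass_exact(t, m0, M, a):
    """Closed-form solution of the same ODE."""
    r = 1.0 - (1.0 - (m0 / M) ** 0.25) * math.exp(-a * t / (4.0 * M ** 0.25))
    return M * r ** 4

# Illustrative parameters: 1 g hatchling growing toward a 10 kg adult.
m_mid = ogm_mass_numeric(50.0, 1.0, 10_000.0, 1.0)
```

The trajectory is sigmoidal and saturates at M; the e-folding time in the exact solution, 4 M^(1/4)/a, is the source of the M^(1/4) scaling of characteristic ontogenetic times mentioned above.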
On nonlocally interacting metrics, and a simple proposal for cosmic acceleration
NASA Astrophysics Data System (ADS)
Vardanyan, Valeri; Akrami, Yashar; Amendola, Luca; Silvestri, Alessandra
2018-03-01
We propose a simple, nonlocal modification to general relativity (GR) on large scales, which provides a model of late-time cosmic acceleration in the absence of the cosmological constant and with the same number of free parameters as in standard cosmology. The model is motivated by adding to the gravity sector an extra spin-2 field interacting nonlocally with the physical metric coupled to matter. The form of the nonlocal interaction is inspired by the simplest form of the Deser-Woodard (DW) model, αR(1/□)R, with one of the Ricci scalars being replaced by a constant m², and gravity is therefore modified in the infrared by adding a simple term of the form m²(1/□)R to the Einstein-Hilbert term. We study cosmic expansion histories, and demonstrate that the new model can provide background expansions consistent with observations if m is of the order of the Hubble expansion rate today, in contrast to the simple DW model with no viable cosmology. The model is best fit by w0 ~ −1.075 and wa ~ 0.045. We also compare the cosmology of the model to that of Maggiore and Mancarella (MM), m²R(1/□²)R, and demonstrate that the viable cosmic histories follow the standard-model evolution more closely compared to the MM model. We further demonstrate that the proposed model possesses the same number of physical degrees of freedom as in GR. Finally, we discuss the appearance of ghosts in the local formulation of the model, and argue that they are unphysical and harmless to the theory, keeping the physical degrees of freedom healthy.
NASA Astrophysics Data System (ADS)
Brunger, M. J.; Thorn, P. A.; Campbell, L.; Kato, H.; Kawahara, H.; Hoshino, M.; Tanaka, H.; Kim, Y.-K.
2008-05-01
We consider the efficacy of the BEf-scaling approach in calculating reliable integral cross sections for electron-impact excitation of dipole-allowed electronic states in molecules. We will demonstrate, using specific examples in H2, CO, and H2O, that this relatively simple procedure can generate quite accurate integral cross sections which compare well with available experimental data. Finally, we will briefly consider the ramifications of this for atmospheric and other types of modelling studies.
Conformational free energy of melts of ring-linear polymer blends.
Subramanian, Gopinath; Shanbhag, Sachin
2009-10-01
The conformational free energy of ring polymers in a blend of ring and linear polymers is investigated using the bond-fluctuation model. Previously established scaling relationships for the free energy of a ring polymer are shown to be valid only in the mean-field sense, and alternative functional forms are investigated. It is shown that it may be difficult to accurately express the total free energy of a ring polymer by a simple scaling argument, or in closed form.
NASA Astrophysics Data System (ADS)
Yuan, Cadmus C. A.
2015-12-01
Optical ray-tracing models have applied the Beer-Lambert method to single-luminescence-material systems to model the white light pattern produced from a blue LED light source. This paper extends such an algorithm to a mixed multiple-luminescence-material system by introducing the equivalent excitation and emission spectra of the individual luminescence materials. The quantum efficiencies of the individual materials and the self-absorption of the multiple-luminescence-material system are considered as well. With this combination, researchers are able to model the luminescence characteristics of LED chip-scale packaging (CSP), which offers simple process steps and freedom in the geometrical dimensions of the luminescence material. The method is first validated against experimental results; a further parametric investigation is then conducted.
NASA Technical Reports Server (NTRS)
Smialek, James L.
2002-01-01
An equation has been developed to model the iterative scale growth and spalling process that occurs during cyclic oxidation of high-temperature materials. Parabolic scale growth and spalling of a constant surface-area fraction have been assumed. Interfacial spallation of only the thickest segments was also postulated. This simplicity allowed representation by a simple deterministic summation series. Inputs are the parabolic growth rate constant, the spall area fraction, oxide stoichiometry, and cycle duration. Outputs include the net weight change behavior, as well as the total amounts of oxygen and metal consumed, the total amount of oxide spalled, and the mass fraction of oxide spalled. The outputs all follow typical well-behaved trends with the inputs and are in good agreement with previous interfacial models.
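The grow-then-spall bookkeeping can be sketched as a short loop. This is a hedged, generic version, not the paper's exact summation series: each cycle the retained scale grows parabolically, then a constant mass fraction of it spalls; the oxygen mass fraction assumes an Al2O3 scale (0.47).

```python
import math

def cyclic_oxidation(kp, spall_frac, cycles, dt=1.0, f_oxygen=0.47):
    """Iterative cyclic-oxidation sketch (masses per unit area, e.g. mg/cm^2).

    kp: parabolic rate constant for oxide mass growth;
    spall_frac: fraction of the scale lost each cooldown;
    f_oxygen: oxygen mass fraction of the oxide (0.47 for Al2O3).
    Returns the net specimen weight-change history plus totals.
    """
    retained = 0.0       # oxide mass currently on the specimen
    spalled_total = 0.0  # oxide mass lost as spall
    oxygen_total = 0.0   # oxygen taken up (specimen weight gained)
    net = []
    for _ in range(cycles):
        # parabolic regrowth of the retained scale over one hot cycle
        grown = math.sqrt(retained ** 2 + kp * dt)
        oxygen_total += (grown - retained) * f_oxygen
        # a constant fraction of the scale spalls on cooldown
        spall = spall_frac * grown
        spalled_total += spall
        retained = grown - spall
        net.append(oxygen_total - spalled_total)
    return net, oxygen_total, spalled_total
```

The net weight change shows the characteristic cyclic-oxidation shape: an early gain while growth dominates, then a crossover into steady linear loss once per-cycle spallation outweighs per-cycle oxygen uptake.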
Finding the strong CP problem at the LHC
NASA Astrophysics Data System (ADS)
D'Agnolo, Raffaele Tito; Hook, Anson
2016-11-01
We show that a class of parity based solutions to the strong CP problem predicts new colored particles with mass at the TeV scale, due to constraints from Planck suppressed operators. The new particles are copies of the Standard Model quarks and leptons. The new quarks can be produced at the LHC and are either collider stable or decay into Standard Model quarks through a Higgs, a W or a Z boson. We discuss some simple but generic predictions of the models for the LHC and find signatures not related to the traditional solutions of the hierarchy problem. We thus provide alternative motivation for new physics searches at the weak scale. We also briefly discuss the cosmological history of these models and how to obtain successful baryogenesis.
Simple scale interpolator facilitates reading of graphs
NASA Technical Reports Server (NTRS)
Fetterman, D. E., Jr.
1965-01-01
Simple transparent overlay with interpolation scale facilitates accurate, rapid reading of graph coordinate points. This device can be used for enlarging drawings and locating points on perspective drawings.
A reflection mechanism for aft fan tone noise from turbofan engines
NASA Astrophysics Data System (ADS)
Topol, D. A.; Holhubner, S. C.; Mathews, D. C.
1987-10-01
A fan tone noise mechanism is proposed that results from reflections off the fan of forward-propagating rotor wake/fan exit guide vane interaction tone noise. These fan noise tones are often more dominant out of the rear of an engine than out of the front. To simulate this effect, a simple qualitative prediction model was formulated and a scale-model test program was conducted. Results from each of these investigations are compared with each other and with full-scale engine data. These comparisons substantiate the potential importance of this mechanism. Further support is provided by mode-measurement data from full-scale testing. This study concluded that for certain vane/blade ratios and tip Mach numbers the contribution of the reflection noise mechanism is significant.
The origin of the structure of large-scale magnetic fields in disc galaxies
NASA Astrophysics Data System (ADS)
Nixon, C. J.; Hands, T. O.; King, A. R.; Pringle, J. E.
2018-07-01
The large-scale magnetic fields observed in spiral disc galaxies are often thought to result from dynamo action in the disc plane. However, the increasing importance of Faraday depolarization along any line of sight towards the galactic plane suggests that the strongest polarization signal may come from well above (˜0.3-1 kpc) this plane, from the vicinity of the warm interstellar medium (WIM)/halo interface. We propose (see also Henriksen & Irwin 2016) that the observed spiral fields (polarization patterns) result from the action of vertical shear on an initially poloidal field. We show that this simple model accounts for the main observed properties of large-scale fields. We speculate as to how current models of optical spiral structure may generate the observed arm/interarm spiral polarization patterns.
Black holes from large N singlet models
NASA Astrophysics Data System (ADS)
Amado, Irene; Sundborg, Bo; Thorlacius, Larus; Wintergerst, Nico
2018-03-01
The emergent nature of spacetime geometry and black holes can be directly probed in simple holographic duals of higher spin gravity and tensionless string theory. To this end, we study time dependent thermal correlation functions of gauge invariant observables in suitably chosen free large N gauge theories. At low temperature and on short time scales the correlation functions encode propagation through an approximate AdS spacetime while interesting departures emerge at high temperature and on longer time scales. This includes the existence of evanescent modes and the exponential decay of time dependent boundary correlations, both of which are well known indicators of bulk black holes in AdS/CFT. In addition, a new time scale emerges after which the correlation functions return to a bulk thermal AdS form up to an overall temperature dependent normalization. A corresponding length scale was seen in equal time correlation functions in the same models in our earlier work.
Non-monotonicity and divergent time scale in Axelrod model dynamics
NASA Astrophysics Data System (ADS)
Vazquez, F.; Redner, S.
2007-04-01
We study the evolution of the Axelrod model for cultural diversity, a prototypical non-equilibrium process that exhibits rich dynamics and a dynamic phase transition between diversity and an inactive state. We consider a simple version of the model in which each individual possesses two features that can assume q possibilities. Within a mean-field description in which each individual has just a few interaction partners, we find a phase transition at a critical value qc between an active, diverse state for q < qc and a frozen state. For q ≲ qc, the density of active links is non-monotonic in time and the asymptotic approach to the steady state is controlled by a time scale that diverges as (q - qc)^(-1/2).
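A minimal simulation of the two-feature Axelrod model the abstract studies might look like the following sketch. A ring of agents stands in for the "few interaction partners"; the topology, step count, and parameter values are assumptions, not the authors' setup.

```python
import random

def axelrod_ring(n=100, q=5, steps=200000, seed=1):
    """Sketch of the two-feature Axelrod model on a ring: each agent carries
    F = 2 features, each taking one of q values. Per step a random agent and
    a random neighbour interact with probability equal to their overlap; on
    interaction the agent copies one feature it does not share.
    Returns the final density of active links (0 < overlap < 1)."""
    rng = random.Random(seed)
    state = [[rng.randrange(q), rng.randrange(q)] for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        j = (i + rng.choice((-1, 1))) % n
        shared = sum(a == b for a, b in zip(state[i], state[j]))
        if 0 < shared < 2 and rng.random() < shared / 2:
            # copy one of the features that currently differs
            k = rng.choice([f for f in range(2) if state[i][f] != state[j][f]])
            state[i][k] = state[j][k]
    active = sum(
        0 < sum(a == b for a, b in zip(state[i], state[(i + 1) % n])) < 2
        for i in range(n)
    )
    return active / n
```

Tracking the active-link density over time (rather than only at the end) is what exposes the non-monotonic approach to the frozen state discussed in the abstract.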
Mesoscopic model for binary fluids
NASA Astrophysics Data System (ADS)
Echeverria, C.; Tucci, K.; Alvarez-Llamoza, O.; Orozco-Guillén, E. E.; Morales, M.; Cosenza, M. G.
2017-10-01
We propose a model for studying binary fluids based on the mesoscopic molecular simulation technique known as multiparticle collision, where the space and state variables are continuous, and time is discrete. We include a repulsion rule to simulate segregation processes that does not require calculation of the interaction forces between particles, so binary fluids can be described on a mesoscopic scale. The model is conceptually simple and computationally efficient; it maintains Galilean invariance and conserves the mass and energy in the system at the micro- and macro-scale, whereas momentum is conserved globally. For a wide range of temperatures and densities, the model yields results in good agreement with the known properties of binary fluids, such as the density profile, interface width, phase separation, and phase growth. We also apply the model to the study of binary fluids in crowded environments with consistent results.
Pattern formation in individual-based systems with time-varying parameters
NASA Astrophysics Data System (ADS)
Ashcroft, Peter; Galla, Tobias
2013-12-01
We study the patterns generated in finite-time sweeps across symmetry-breaking bifurcations in individual-based models. Similar to the well-known Kibble-Zurek scenario of defect formation, large-scale patterns are generated when model parameters are varied slowly, whereas fast sweeps produce a large number of small domains. The symmetry breaking is triggered by intrinsic noise, originating from the discrete dynamics at the microlevel. Based on a linear-noise approximation, we calculate the characteristic length scale of these patterns. We demonstrate the applicability of this approach in a simple model of opinion dynamics, a model in evolutionary game theory with a time-dependent fitness structure, and a model of cell differentiation. Our theoretical estimates are confirmed in simulations. In further numerical work, we observe a similar phenomenon when the symmetry-breaking bifurcation is triggered by population growth.
New analytic results for speciation times in neutral models.
Gernhard, Tanja
2008-05-01
In this paper, we investigate the standard Yule model and a recently studied model of speciation and extinction, the "critical branching process." We develop an analytic approach, as opposed to the common simulation approach, for calculating the speciation times in a reconstructed phylogenetic tree. Simple expressions for the density and the moments of the speciation times are obtained. Methods for dating a speciation event become valuable if no time scale is available for the reconstructed phylogenetic tree. A missing time scale could be due to supertree methods, morphological data, or molecular data that violates the molecular clock. Our analytic approach is particularly useful for the model with extinction, since simulations of birth-death processes conditioned on obtaining n extant species today are quite delicate. Further, simulations are very time consuming for large n under both models.
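For contrast with the paper's analytic results, the forward simulation approach it argues against can be sketched in a few lines for the pure-birth (Yule) case. The rate and the simple forward conditioning are assumptions for illustration; this sketch does not handle the birth-death case conditioned on n extant species, which is exactly the delicate situation the paper's analytic approach addresses.

```python
import random

def yule_speciation_times(n, lam=1.0, seed=3):
    """Forward simulation of a Yule (pure-birth) process with per-lineage
    speciation rate lam, run from one initial lineage until n species exist.
    Returns the n-1 speciation times measured from the origin."""
    rng = random.Random(seed)
    t, k, times = 0.0, 1, []
    while k < n:
        # with k lineages, the waiting time to the next speciation is
        # exponential with total rate lam * k
        t += rng.expovariate(lam * k)
        times.append(t)
        k += 1
    return times
```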
On wildfire complexity, simple models and environmental templates for fire size distributions
NASA Astrophysics Data System (ADS)
Boer, M. M.; Bradstock, R.; Gill, M.; Sadler, R.
2012-12-01
Vegetation fires affect some 370 Mha annually. At global and continental scales, fire activity follows predictable spatiotemporal patterns driven by gradients and seasonal fluctuations of primary productivity and evaporative demand, which set constraints on fuel accumulation rates and fuel dryness, two key ingredients of fire. At regional scales, fires are also known to affect some landscapes more than others, and within landscapes to occur preferentially in some sectors (e.g. wind-swept ridges) and rarely in others (e.g. wet gullies). Another common observation is that small fires occur relatively frequently yet collectively burn far less country than the relatively infrequent large fires. These patterns of fire activity are well known to management agencies and consistent with their (informal) models of how the basic drivers and constraints of fire (i.e. fuels, ignitions, weather) vary in time and space across the landscape. The statistical behaviour of these landscape fire patterns has excited the (academic) research community by showing some consistency with that of complex dynamical systems poised at a phase transition. The common finding that the frequency-size distributions of actual fires follow power laws resembling those produced by simple cellular models from statistical mechanics has been interpreted as evidence that flammable landscapes operate as self-organising systems, with scale-invariant fire size distributions emerging 'spontaneously' from simple rules of contagious fire spread and a strong feedback between fires and fuel patterns. In this paper we argue that the resemblance of simulated and actual fire size distributions is an example of equifinality; that is, fires in model landscapes and actual landscapes may show similar statistical behaviour, but this behaviour is reached by qualitatively different pathways or controlling mechanisms. We support this claim with two key findings regarding simulated fire spread mechanisms and fire-fuel feedbacks.
Firstly, we demonstrate that the power law behaviour of fire size distributions in the widely used Drossel and Schwabl (1992) Forest Fire Model (FFM) is strictly conditional on simulating fire spread as a cell-to-cell contagion over a fixed distance; the invariant scaling of fire sizes breaks down under the slightest variation in that distance, suggesting that pattern formation in the FFM is irreconcilable with the reality of disparate rates and modes of fire spread observed in the field. Secondly, we review field evidence showing that fuel age effects on the probability of fire spread, a key assumption in simulation models like the FFM, do not generally apply across flammable environments. Finally, we explore alternative explanations for the formation of scale invariant fire sizes in real landscapes. Using observations from southern Australian forest regions we demonstrate that the spatiotemporal patterns of fuel dryness and magnitudes of fire driving weather events set strong environmental templates for regional fire size distributions.
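The fixed-distance, cell-to-cell contagion on which the power-law scaling is said to depend can be illustrated with a sketch of a Drossel-Schwabl-style forest fire model. This is a random-sequential variant with assumed parameters, not the authors' implementation of the 1992 model; the key feature it shares is that fire spreads strictly to nearest neighbours, one cell at a time.

```python
import random
from collections import deque

def ds_forest_fire(L=32, p=0.05, f=1e-3, steps=50000, seed=2):
    """Sketch of a Drossel-Schwabl-style forest fire model on an L x L
    periodic grid. Each step, a random site is visited: an empty site grows
    a tree with probability p; a treed site is struck by lightning with
    probability f, instantly burning its whole connected cluster.
    Returns the list of fire sizes, whose distribution is the quantity of
    interest in the abstract."""
    rng = random.Random(seed)
    tree = [[False] * L for _ in range(L)]
    sizes = []
    for _ in range(steps):
        x, y = rng.randrange(L), rng.randrange(L)
        if not tree[x][y]:
            if rng.random() < p:
                tree[x][y] = True       # a tree grows on an empty site
        elif rng.random() < f:
            # lightning: burn the entire connected cluster via strictly
            # nearest-neighbour contagion (fixed spread distance of one cell)
            tree[x][y] = False
            burned, queue = 1, deque([(x, y)])
            while queue:
                cx, cy = queue.popleft()
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = (cx + dx) % L, (cy + dy) % L
                    if tree[nx][ny]:
                        tree[nx][ny] = False
                        burned += 1
                        queue.append((nx, ny))
            sizes.append(burned)
    return sizes
```

The paper's first finding corresponds to perturbing the neighbour offsets in the contagion loop: once the spread distance is no longer fixed, the scale-invariant size distribution is reported to break down.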
Maximum efficiency of state-space models of nanoscale energy conversion devices
NASA Astrophysics Data System (ADS)
Einax, Mario; Nitzan, Abraham
2016-07-01
The performance of nano-scale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield the Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate these general results with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
NASA Astrophysics Data System (ADS)
Allahverdi, Rouzbeh; Dev, P. S. Bhupal; Dutta, Bhaskar
2018-04-01
We study a simple TeV-scale model of baryon number violation which explains the observed proximity of the dark matter and baryon abundances. The model has constraints arising from both low- and high-energy processes, and in particular predicts a sizable rate for neutron-antineutron (n-n̄) oscillation at low energy and a monojet signal at the LHC. We find an interesting complementarity among the constraints arising from the observed baryon asymmetry, the ratio of dark matter and baryon abundances, the n-n̄ oscillation lifetime, and the LHC monojet signal. There are regions of the parameter space where the n-n̄ oscillation lifetime is found to be more constraining than the LHC constraints, which illustrates the importance of next-generation n-n̄ oscillation experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, William
Over the 21 years of funding, we have pursued several projects related to earthquakes, damage, and nucleation. We developed simple models of earthquake faults which we studied to understand Gutenberg-Richter scaling, foreshocks and aftershocks, and the effect of the spatial structure of the faults and its interaction with underlying self-organization and phase transitions. In addition, we studied the formation of amorphous solids via the glass transition. We also studied nucleation, with a particular concentration on transitions in systems with a spatial symmetry change, and investigated the nucleation process in models that mimic rock masses. We obtained the structure of the droplet in both homogeneous and heterogeneous nucleation. We also investigated the effect of defects or asperities on the nucleation of failure in simple models of earthquake faults.
Predictive power of food web models based on body size decreases with trophic complexity.
Jonsson, Tomas; Kaartinen, Riikka; Jonsson, Mattias; Bommarco, Riccardo
2018-05-01
Food web models parameterised using body size show promise for predicting trophic interaction strengths (IS) and abundance dynamics. However, this remains to be rigorously tested in food webs beyond simple trophic modules, where indirect and intraguild interactions could be important and driven by traits other than body size. We systematically varied predator body size, guild composition and richness in microcosm insect webs and compared experimental outcomes with predictions of IS from models with allometrically scaled parameters. Body size was a strong predictor of IS in simple modules (r² = 0.92), but with increasing complexity the predictive power decreased, with model IS being consistently overestimated. We quantify the strength of observed trophic interaction modifications, partition this into density-mediated vs. behaviour-mediated indirect effects, and show that the model's shortcomings in predicting IS are related to the size of behaviour-mediated effects. Our findings encourage the development of dynamical food web models that explicitly include and explore indirect mechanisms. © 2018 John Wiley & Sons Ltd/CNRS.
Forgetting in immediate serial recall: decay, temporal distinctiveness, or interference?
Oberauer, Klaus; Lewandowsky, Stephan
2008-07-01
Three hypotheses of forgetting from immediate memory were tested: time-based decay, decreasing temporal distinctiveness, and interference. The hypotheses were represented by 3 models of serial recall: the primacy model, the SIMPLE (scale-independent memory, perception, and learning) model, and the SOB (serial order in a box) model, respectively. The models were fit to 2 experiments investigating the effect of filled delays between items at encoding or at recall. Short delays between items, filled with articulatory suppression, led to massive impairment of memory relative to a no-delay baseline. Extending the delays had little additional effect, suggesting that the passage of time alone does not cause forgetting. Adding a choice reaction task in the delay periods to block attention-based rehearsal did not change these results. The interference-based SOB fit the data best; the primacy model overpredicted the effect of lengthening delays, and SIMPLE was unable to explain the effect of delays at encoding. The authors conclude that purely temporal views of forgetting are inadequate. Copyright (c) 2008 APA, all rights reserved.
Leetaru, H.E.; Frailey, S.M.; Damico, J.; Mehnert, E.; Birkholzer, J.; Zhou, Q.; Jordan, P.D.
2009-01-01
Large-scale geologic sequestration tests are in the planning stages around the world. The liability and safety issues of the migration of CO2 away from the primary injection site and/or reservoir are of significant concern for these sequestration tests. Reservoir models for simulating single- or multi-phase fluid flow are used to understand the migration of CO2 in the subsurface. These models can also help evaluate concerns related to brine migration and basin-scale pressure increases that occur due to the injection of additional fluid volumes into the subsurface. The current paper presents different modeling examples addressing these issues, ranging from simple geometric models to more complex reservoir fluid models with single-site and basin-scale applications. Simple geometric models assuming a homogeneous geologic reservoir and piston-like displacement have been used for understanding pressure changes and fluid migration around each CO2 storage site. These geometric models are useful only as broad approximations because they do not account for the variation in porosity, permeability, asymmetry of the reservoir, and dip of the beds. In addition, these simple models are not capable of predicting the interference between different injection sites within the same reservoir. A more realistic model of CO2 plume behavior can be produced using reservoir fluid models. Reservoir simulation of natural gas storage reservoirs in the Illinois Basin Cambrian-age Mt. Simon Sandstone suggests that reservoir heterogeneity will be an important factor for evaluating storage capacity. The Mt. Simon Sandstone is a thick sandstone that underlies many significant coal-fired power plants (emitting at least 1 million tonnes per year) in the midwestern United States, including the states of Illinois, Indiana, Kentucky, Michigan, and Ohio. The initial commercial sequestration sites are expected to inject 1 to 2 million tonnes of CO2 per year.
Depending on the geologic structure and permeability anisotropy, the CO2 injected into the Mt. Simon is expected to migrate less than 3 km. After 30 years of continuous injection followed by 100 years of shut-in, the plume from a 1-million-tonnes-a-year injection rate is expected to migrate 1.6 km for a 0-degree-dip reservoir and over 3 km for a 5-degree-dip reservoir. The region where reservoir pressure increases in response to CO2 injection is typically much larger than the CO2 plume. It can thus be anticipated that there will be basin-wide interactions between different CO2 injection sources if multiple, large-volume sites are developed. This interaction will result in asymmetric plume migration that may be contrary to reservoir dip. A basin-scale simulation model is being developed to predict CO2 plume migration, brine displacement, and pressure buildup for a possible future sequestration scenario featuring multiple CO2 storage sites within the Illinois Basin Mt. Simon Sandstone. Interactions between different sites will be evaluated with respect to impacts on pressure and CO2 plume migration patterns. © 2009 Elsevier Ltd. All rights reserved.
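The "simple geometric model" of piston-like displacement discussed above amounts to a one-line volume balance over a homogeneous cylindrical plume. A hedged sketch follows; the reservoir thickness, porosity, CO2 saturation, and density below are illustrative assumptions, not Mt. Simon site data.

```python
import math

def plume_radius_km(mass_mt_per_yr, years, thickness_m, porosity,
                    sat_co2=0.6, rho_co2=700.0):
    """Piston-like cylindrical-plume estimate (homogeneous reservoir, no dip,
    no heterogeneity): the injected CO2 volume fills a disc of the given
    thickness, occupying the pore space at saturation sat_co2.
    rho_co2 is an assumed supercritical CO2 density in kg/m^3."""
    volume_m3 = mass_mt_per_yr * years * 1e9 / rho_co2   # 1 Mt = 1e9 kg
    area_m2 = volume_m3 / (thickness_m * porosity * sat_co2)
    return math.sqrt(area_m2 / math.pi) / 1000.0

# e.g. 1 Mt/yr for 30 years into a 100 m thick, 15%-porosity sandstone
print(round(plume_radius_km(1.0, 30, 100.0, 0.15), 2))  # prints 1.23
```

Under these assumed values the estimate lands near the 1.6 km figure quoted for the zero-dip case, illustrating why such geometric models work as broad approximations while missing dip- and heterogeneity-driven asymmetry.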
Modeling of two-phase porous flow with damage
NASA Astrophysics Data System (ADS)
Cai, Z.; Bercovici, D.
2009-12-01
Two-phase dynamics has been broadly studied in Earth science in convective systems. We investigate the basic physics of compaction with damage theory and present preliminary results for both steady-state and time-dependent transport as melt migrates through a porous medium. In our simple 1-D model, damage plays an important role when we consider the ascent of a melt-rich mixture at constant velocity. Melt segregation becomes more difficult, so that in the steady-state compaction profile the porosity is larger than in simple compaction. A scaling analysis of the compaction equation is performed to predict the behavior of melt segregation with damage. The time-dependent behavior of the compacting system is investigated by looking at solitary-wave solutions to the two-phase model. We assume that additional melt is injected into the fractured material through a single pulse with a prescribed shape and velocity. The presence of damage allows the pulse to propagate further than in simple compaction. Therefore more melt can be injected into the two-phase mixture, and future applications such as carbon dioxide injection are proposed.
NASA Astrophysics Data System (ADS)
Crowell, B.; Melgar, D.
2017-12-01
The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles, and exhibiting intricate surface deformation patterns. The complexity of this event has motivated the need for multidisciplinary geophysical studies to get at the underlying source physics to better inform earthquake hazards models in the future. However, events like Kaikoura raise the question of how well (or how poorly) such earthquakes can be modeled automatically in real-time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first construct simple point-source models of the earthquake using peak ground displacement scaling and a coseismic-offset-based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Secondly, we perform a slip inversion based upon the CMT fault orientations and forward model near-field tsunami maximum expected wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of disagreement in ground motions being attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT-driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
Natural Length Scales Shape Liquid Phase Continuity in Unsaturated Flows
NASA Astrophysics Data System (ADS)
Assouline, S.; Lehmann, P. G.; Or, D.
2015-12-01
Unsaturated flows supporting soil evaporation and internal drainage play an important role in various hydrologic and climatic processes manifested at a wide range of scales. We study inherent natural length scales that govern these flow processes and constrain the spatial range of their representation by continuum models. These inherent length scales reflect interactions between intrinsic porous medium properties that affect liquid phase continuity, and the interplay among forces that drive and resist unsaturated flow. We have defined an intrinsic length scale for hydraulic continuity based on pore size distribution that controls soil evaporation dynamics (i.e., stage 1 to stage 2 transition). This simple metric may be used to delineate upper bounds for regional evaporative losses or the depth of soil-atmosphere interactions (in the absence of plants). A similar length scale governs the dynamics of internal redistribution towards attainment of field capacity, again through its effect on hydraulic continuity in the draining porous medium. The study provides a framework for guiding numerical and mathematical models for capillary flows across different scales considering the necessary conditions for coexistence of stationarity (REV), hydraulic continuity and intrinsic capillary gradients.
Fractional solubility of aerosol iron: Synthesis of a global-scale data set
NASA Astrophysics Data System (ADS)
Sholkovitz, Edward R.; Sedwick, Peter N.; Church, Thomas M.; Baker, Alexander R.; Powell, Claire F.
2012-07-01
Aerosol deposition provides a major input of the essential micronutrient iron to the open ocean. A critical parameter with respect to biological availability is the proportion of aerosol iron that enters the oceanic dissolved iron pool - the so-called fractional solubility of aerosol iron (%FeS). Here we present a global-scale compilation of total aerosol iron loading (FeT) and estimated %FeS values for ∼1100 samples collected over the open ocean, the coastal ocean, and some continental sites, including a new data set from the Atlantic Ocean. Despite the wide variety of methods that have been used to define 'soluble' aerosol iron, our global-scale compilation reveals a remarkably consistent trend in the fractional solubility of aerosol iron as a function of total aerosol iron loading, with the great bulk of the data defining a hyperbolic trend. The hyperbolic trends that we observe for both global- and regional-scale data are adequately described by a simple two-component mixing model, whereby the fractional solubility of iron in the bulk aerosol reflects the conservative mixing of 'lithogenic' mineral dust (high FeT and low %FeS) and non-lithogenic 'combustion' aerosols (low FeT and high %FeS). An increasing body of empirical and model-based evidence points to anthropogenic fuel combustion as the major source of these non-lithogenic 'combustion' aerosols, implying that human emissions are a major determinant of the fractional solubility of iron in marine aerosols. The robust global-scale relationship between %FeS and FeT provides a simple heuristic method for estimating aerosol iron solubility at the regional to global scale.
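The two-component conservative-mixing model can be written down directly. In this sketch the end-member solubilities and the fixed combustion-iron background are illustrative assumptions, not fitted values from the compilation.

```python
def percent_fe_s(fe_total, fe_comb=0.5, s_lith=0.5, s_comb=50.0):
    """Two-component conservative-mixing sketch: a fixed background of
    'combustion' iron (fe_comb, in the same loading units as fe_total,
    with high solubility s_comb %) mixes with a variable 'lithogenic'
    dust contribution (low solubility s_lith %).
    Returns the bulk fractional solubility %FeS."""
    fe_lith = max(fe_total - fe_comb, 0.0)
    soluble = s_lith / 100 * fe_lith + s_comb / 100 * min(fe_comb, fe_total)
    return 100 * soluble / fe_total
```

For FeT well above the combustion background this reduces to %FeS ≈ s_lith + (s_comb - s_lith) * Fe_comb / FeT, i.e. the hyperbolic decrease of fractional solubility with total loading described above.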
A simple model of intraseasonal oscillations
NASA Astrophysics Data System (ADS)
Fuchs, Željka; Raymond, David J.
2017-06-01
The intraseasonal oscillations, and in particular the MJO, have been and still remain a "holy grail" of today's atmospheric science research. Why does the MJO propagate eastward? What makes it unstable? What is the scaling for the MJO, i.e., why does it prefer long wavelengths or planetary wave numbers 1-3? What is the westward-moving component of the intraseasonal oscillation? Though linear WISHE has long been discounted as a plausible model for intraseasonal oscillations and the MJO, the version we have developed explains many of the observed features of those phenomena, in particular the preference for large zonal scale. In this model version, the moisture budget and the increase of precipitation with tropospheric humidity lead to a "moisture mode." The destabilization of the large-scale moisture mode occurs via WISHE only, and there is no need to postulate large-scale radiatively induced instability or negative effective gross moist stability. Our WISHE-moisture theory leads to a large-scale unstable eastward-propagating mode in the n = -1 case and a large-scale unstable westward-propagating mode in the n = 1 case. We suggest that the n = -1 case may be connected to the MJO and the observed westward-moving disturbance to the observed equatorial Rossby mode.
Doolittle, Emily L.; Gingras, Bruno; Endres, Dominik M.; Fitch, W. Tecumseh
2014-01-01
Many human musical scales, including the diatonic major scale prevalent in Western music, are built partially or entirely from intervals (ratios between adjacent frequencies) corresponding to small-integer proportions drawn from the harmonic series. Scientists have long debated the extent to which principles of scale generation in human music are biologically or culturally determined. Data from animal “song” may provide new insights into this discussion. Here, by examining pitch relationships using both a simple linear regression model and a Bayesian generative model, we show that most songs of the hermit thrush (Catharus guttatus) favor simple frequency ratios derived from the harmonic (or overtone) series. Furthermore, we show that this frequency selection results not from physical constraints governing peripheral production mechanisms but from active selection at a central level. These data provide the most rigorous empirical evidence to date of a bird song that makes use of the same mathematical principles that underlie Western and many non-Western musical scales, demonstrating surprising convergence between human and animal “song cultures.” Although there is no evidence that the songs of most bird species follow the overtone series, our findings add to a small but growing body of research showing that a preference for small-integer frequency ratios is not unique to humans. These findings thus have important implications for current debates about the origins of human musical systems and may call for a reevaluation of existing theories of musical consonance based on specific human vocal characteristics. PMID:25368163
Comparison of Alcator C data with the Rebut-Lallia-Watkins critical gradient scaling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, I.H.
The critical temperature gradient model of Rebut, Lallia, and Watkins is compared with data from Alcator C. The predicted central electron temperature is derived from the model, and a simple analytic formula is given. It is found to be in quite good agreement with the observed temperatures on Alcator C under ohmic heating conditions. However, the thermal diffusivity postulated in the model for gradients exceeding the critical value is not consistent with the observed electron heating by Lower Hybrid waves.
A comparison of self-oscillating phonation models
NASA Astrophysics Data System (ADS)
McPhail, Michael; Campo, Elizabeth; Walters, Gage; Krane, Michael
2017-11-01
This talk presents a comparison of self-oscillating models of phonation. The goal is to assess how well synthetic rubber vocal folds reproduce the gross behavior of phonation. Data from molded rubber folds and a variety of excised mammalian larynges were collected from the literature and from the authors' physical model. Gross trends are discussed, and a simple scaling is presented that appears to collapse these data. Finally, comparisons between molded rubber folds and excised larynges are highlighted. We acknowledge support from NIH grant DC R01005642-11.
NASA Astrophysics Data System (ADS)
Hinton, Courtney; Punjabi, Alkesh; Ali, Halima
2009-11-01
The simple map is the simplest map that has the topology of divertor tokamaks [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007)]. Recently, the action-angle coordinates for the simple map were analytically calculated, and the simple map was constructed in action-angle coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)]. Action-angle coordinates for the simple map cannot be inverted to real-space coordinates (R,Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [op. cit.]. The simple map in action-angle coordinates is applied to calculate the stochastic broadening due to a low-mn magnetic perturbation with mode numbers m=1 and n=±1. The width of the stochastic layer near the X-point scales as the 0.63 power of the amplitude δ of the low-mn perturbation, the toroidal flux loss scales as the 1.16 power of δ, and the poloidal flux loss scales as the 1.26 power of δ. The scaling of the width deviates from the Boozer-Rechester scaling by 26% [A. Boozer and A. Rechester, Phys. Fluids 21, 682 (1978)]. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624, and DE-FG02-04ER54793.
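The reported power laws can be written as simple functions of the perturbation amplitude δ. In this sketch only the exponents (0.63, 1.16, 1.26) come from the abstract; the prefactors `c` are hypothetical placeholders, not fitted values.

```python
# Illustrative sketch of the power-law scalings reported above for the
# simple map under a low-mn perturbation of amplitude delta.
# Assumption: prefactors c are placeholders; only exponents are from the text.

def stochastic_layer_width(delta, c=1.0):
    """Width of the stochastic layer near the X-point, ~ delta**0.63."""
    return c * delta ** 0.63

def toroidal_flux_loss(delta, c=1.0):
    """Toroidal flux loss, ~ delta**1.16."""
    return c * delta ** 1.16

def poloidal_flux_loss(delta, c=1.0):
    """Poloidal flux loss, ~ delta**1.26."""
    return c * delta ** 1.26
```

One consequence of the power-law form: doubling δ multiplies the layer width by 2^0.63 ≈ 1.55 regardless of the prefactor.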
Planning and assessment in land and water resource management are evolving from simple, local-scale problems toward complex, spatially explicit regional ones. Such problems have to be addressed with distributed models that can compute runoff and erosion at different spatial and t...
Can the ionosphere regulate magnetospheric convection.
NASA Technical Reports Server (NTRS)
Coroniti, F. V.; Kennel, C. F.
1973-01-01
A simple model is outlined that relates the dayside magnetopause displacement to the currents feeding the polar cap ionosphere, from which the ionospheric electric field and the flux return rate may be estimated as a function of magnetopause displacement. Then, flux conservation arguments make possible an estimate of the time scale on which convection increases.
Convective Propagation Characteristics Using a Simple Representation of Convective Organization
NASA Astrophysics Data System (ADS)
Neale, R. B.; Mapes, B. E.
2016-12-01
Observed equatorial wave propagation is intimately linked to convective organization and its coupling to features of the larger-scale flow. In this talk we use a simple 4-level model to accommodate the vertical modes of a mass-flux convection scheme (shallow, mid-level, and deep). Two paradigms of convection are used to represent convective processes: one with only random (unorganized) diagnosed fluctuations of convective properties, and one with organized fluctuations of convective properties that are amplified by pre-existing convection and have an explicit moistening impact on the local convecting environment. We show a series of model simulations in single-column, 2D, and 3D configurations, in which the role of convective organization in wave propagation is shown to be fundamental. For the optimal choice of parameters linking organization to the local atmospheric state, a broad array of convective wave propagation emerges. Interestingly, the key characteristics of the propagating modes are low-level moistening, followed by deep convection, followed by mature 'large-scale' heating. This organization structure appears to hold firm across timescales, from 5-day wave disturbances to MJO-like wave propagation.
A solvable model of Vlasov-kinetic plasma turbulence in Fourier-Hermite phase space
NASA Astrophysics Data System (ADS)
Adkins, T.; Schekochihin, A. A.
2018-02-01
A class of simple kinetic systems is considered, described by the one-dimensional Vlasov-Landau equation with Poisson or Boltzmann electrostatic response and an energy source. Assuming a stochastic electric field, a solvable model is constructed for the phase-space turbulence of the particle distribution. The model is a kinetic analogue of the Kraichnan-Batchelor model of chaotic advection. The solution of the model is found in Fourier-Hermite space and shows that the free-energy flux from low to high Hermite moments is suppressed, with phase mixing cancelled on average by anti-phase-mixing (stochastic plasma echo). This implies that Landau damping is an ineffective route to dissipation (i.e. to thermalisation of electric energy via velocity space). The full Fourier-Hermite spectrum is derived. Its asymptotics are m^(-3/2) at low wavenumbers and high Hermite moments (m ≫ k) and m^(-1/2) k^(-2) at low Hermite moments and high wavenumbers (k ≫ m). These conclusions hold at wavenumbers below a certain cutoff (an analogue of the Kolmogorov scale), which increases with the amplitude of the stochastic electric field and scales as the inverse square of the collision rate. The energy distribution and flows in phase space are a simple and, therefore, useful example of competition between phase mixing and nonlinear dynamics in kinetic turbulence, reminiscent of more realistic but more complicated multi-dimensional systems that have not so far been amenable to complete analytical solution.
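The two asymptotic regimes of the spectrum can be sketched as a piecewise function of the Hermite index m and wavenumber k. This is purely illustrative: the prefactors are set to 1, and the sharp switch at m = k stands in for a smooth crossover.

```python
# Piecewise leading-order form of the Fourier-Hermite free-energy spectrum
# described above (prefactors set to 1; illustrative only).

def spectrum_asymptotic(k, m):
    """Asymptotic spectrum: m**-1.5 for m >> k, m**-0.5 * k**-2 for k >> m."""
    if m >= k:   # low wavenumbers, high Hermite moments
        return m ** -1.5
    else:        # high wavenumbers, low Hermite moments
        return m ** -0.5 * k ** -2.0
```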
Paleoclimate diagnostics: consistent large-scale temperature responses in warm and cold climates
NASA Astrophysics Data System (ADS)
Izumi, Kenji; Bartlein, Patrick; Harrison, Sandy
2015-04-01
The CMIP5 model simulations of the large-scale temperature responses to increased radiative forcing include enhanced land-ocean contrast, stronger response at higher latitudes than in the tropics, and differential responses of warm- and cool-season climates to uniform forcing. Here we show that these patterns are also characteristic of CMIP5 model simulations of past climates. The differences in the responses over land as opposed to over the ocean, between high and low latitudes, and between summer and winter are remarkably consistent (proportional and nearly linear) across simulations of both cold and warm climates. Similar patterns also appear in historical observations and paleoclimatic reconstructions, implying that such responses are characteristic features of the climate system and not simple model artifacts, thereby increasing our confidence in the ability of climate models to correctly simulate different climatic states. We also show that a small set of common mechanisms may control these large-scale responses of the climate system across multiple states.
Minimal model for a hydrodynamic fingering instability in microroller suspensions
NASA Astrophysics Data System (ADS)
Delmotte, Blaise; Donev, Aleksandar; Driscoll, Michelle; Chaikin, Paul
2017-11-01
We derive a minimal continuum model to investigate the hydrodynamic mechanism behind the fingering instability recently discovered in a suspension of microrollers near a floor [M. Driscoll et al., Nat. Phys. 13, 375 (2017), 10.1038/nphys3970]. Our model, consisting of two continuous lines of rotlets, exhibits a linear instability driven only by hydrodynamic interactions and reproduces the length-scale selection observed in large-scale particle simulations and in experiments. By adjusting only one parameter, the distance between the two lines, our dispersion relation exhibits quantitative agreement with the simulations and qualitative agreement with experimental measurements. Our linear stability analysis indicates that this instability is caused by the combination of the advective and transverse flows generated by the microrollers near a no-slip surface. Our simple model offers an interesting formalism to characterize other hydrodynamic instabilities that have not been well understood, such as size scale selection in suspensions of particles sedimenting adjacent to a wall, or the recently observed formations of traveling phonons in systems of confined driven particles.
Inferring Soil Moisture Memory from Streamflow Observations Using a Simple Water Balance Model
NASA Technical Reports Server (NTRS)
Orth, Rene; Koster, Randal Dean; Seneviratne, Sonia I.
2013-01-01
Soil moisture is known for its integrative behavior and resulting memory characteristics. Soil moisture anomalies can persist for weeks or even months into the future, making initial soil moisture a potentially important contributor to skill in weather forecasting. A major difficulty when investigating soil moisture and its memory using observations is the sparse availability of long-term measurements and their limited spatial representativeness. In contrast, there is an abundance of long-term streamflow measurements for catchments of various sizes across the world. We investigate in this study whether such streamflow measurements can be used to infer and characterize soil moisture memory in respective catchments. Our approach uses a simple water balance model in which evapotranspiration and runoff ratios are expressed as simple functions of soil moisture; optimized functions for the model are determined using streamflow observations, and the optimized model in turn provides information on soil moisture memory on the catchment scale. The validity of the approach is demonstrated with data from three heavily monitored catchments. The approach is then applied to streamflow data in several small catchments across Switzerland to obtain a spatially distributed description of soil moisture memory and to show how memory varies, for example, with altitude and topography.
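A minimal bucket-model sketch of the kind of simple water balance described above, in which evapotranspiration and runoff ratios are simple functions of soil moisture. The functional forms and parameter values here are illustrative assumptions, not the study's calibrated functions.

```python
# Minimal daily bucket model: runoff ratio and ET are simple functions of
# soil moisture. All parameters (w_max, gamma, alpha) are illustrative.

def step(w, precip, pet, w_max=100.0, gamma=2.0, alpha=1.0, dt=1.0):
    """Advance soil moisture w [mm] by one day.
    precip, pet: daily precipitation and potential ET [mm/day]."""
    runoff = precip * (w / w_max) ** gamma  # runoff ratio rises with wetness
    et = pet * min(1.0, alpha * w / w_max)  # ET limited by available moisture
    w_new = w + dt * (precip - runoff - et)
    return max(0.0, min(w_max, w_new)), runoff, et
```

Optimizing parameters like `gamma` and `alpha` against observed streamflow, as in the study, would then yield a model whose autocorrelation characterizes soil moisture memory at the catchment scale.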
NASA Astrophysics Data System (ADS)
Roubinet, D.; Russian, A.; Dentz, M.; Gouze, P.
2017-12-01
Characterizing and modeling hydrodynamic reactive transport in fractured rock are critical challenges for various research fields and applications including environmental remediation, geological storage, and energy production. To this end, we consider a recently developed time domain random walk (TDRW) approach, which is adapted to reproduce anomalous transport behaviors and capture heterogeneous structural and physical properties. This method is also very well suited to optimize numerical simulations by memory-shared massive parallelization and provide numerical results at various scales. So far, the TDRW approach has been applied for modeling advective-diffusive transport with mass transfer between mobile and immobile regions and simple (theoretical) reactions in heterogeneous porous media represented as single continuum domains. We extend this approach to dual-continuum representations considering a highly permeable fracture network embedded into a poorly permeable rock matrix with heterogeneous geochemical reactions occurring in both geological structures. The resulting numerical model enables us to extend the range of the modeled heterogeneity scales with an accurate representation of solute transport processes and no assumption on the Fickianity of these processes. The proposed model is compared to existing particle-based methods that are usually used to model reactive transport in fractured rocks assuming a homogeneous surrounding matrix, and is used to evaluate the impact of the matrix heterogeneity on the apparent reaction rates for different 2D and 3D simple-to-complex fracture network configurations.
Havens, Karl E; Harwell, Matthew C; Brady, Mark A; Sharfstein, Bruce; East, Therese L; Rodusky, Andrew J; Anson, Daniel; Maki, Ryan P
2002-04-09
A spatially intensive sampling program was developed for mapping the submerged aquatic vegetation (SAV) over an area of approximately 20,000 ha in a large, shallow lake in Florida, U.S. The sampling program integrates Geographic Information System (GIS) technology with traditional field sampling of SAV and has the capability of producing robust vegetation maps under a wide range of conditions, including high turbidity, variable depth (0 to 2 m), and variable sediment types. Based on sampling carried out in August-September 2000, we measured 1,050 to 4,300 ha of vascular SAV species and approximately 14,000 ha of the macroalga Chara spp. The results were similar to those reported in the early 1990s, when the last large-scale SAV sampling occurred. Occurrence of Chara was strongly associated with peat sediments, and maximal depths of occurrence varied between sediment types (mud, sand, rock, and peat). A simple model of Chara occurrence, based only on water depth, had an accuracy of 55%. It predicted occurrence of Chara over large areas where the plant actually was not found. A model based on sediment type and depth had an accuracy of 75% and produced a spatial map very similar to that based on observations. While this approach needs to be validated with independent data in order to test its general utility, we believe it may have application elsewhere. The simple modeling approach could serve as a coarse-scale tool for evaluating effects of water level management on Chara populations.
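The two predictor types compared above can be sketched as simple rules. The depth limits below are hypothetical placeholders; the study's reported accuracies (55% and 75%) are properties of its calibrated models, not of this sketch.

```python
# Sketch of a depth-only rule versus a depth-plus-sediment rule for
# predicting Chara occurrence. All threshold values are hypothetical.

def predict_depth_only(depth_m, max_depth=1.5):
    """Predict Chara presence from water depth alone."""
    return depth_m <= max_depth

def predict_depth_sediment(depth_m, sediment, max_depth_by_sediment=None):
    """Predict Chara presence from depth with a sediment-specific limit."""
    limits = max_depth_by_sediment or {"peat": 1.8, "mud": 1.0,
                                       "sand": 1.2, "rock": 0.8}
    return depth_m <= limits.get(sediment, 0.0)

def accuracy(preds, obs):
    """Fraction of predictions matching observed presence/absence."""
    return sum(p == o for p, o in zip(preds, obs)) / len(obs)
```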
Energy and time determine scaling in biological and computer designs
Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie
2016-01-01
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy–time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431524
Energy and time determine scaling in biological and computer designs.
Moses, Melanie; Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie
2016-08-19
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy-time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue 'The major synthetic evolutionary transitions'. © 2016 The Author(s).
Jorgensen, Scott W.; Johnson, Terry A.; Payzant, E. Andrew; ...
2016-06-11
Deuterium desorption in an automotive-scale hydrogen storage tube was studied in-situ using neutron diffraction. Gradients in the concentration of the various alanate phases were observed along the length of the tube, but no significant radial anisotropy was present. In addition, neutron radiography and computed tomography showed large-scale cracks and density fluctuations, confirming the presence of these structures in an undisturbed storage system. These results demonstrate that large-scale storage structures are not uniform even after many absorption/desorption cycles and that movement of gaseous hydrogen cannot be properly modeled by a simple porous bed model. In addition, the evidence indicates that there is slow transformation of species at one end of the tube, indicating loss of catalyst functionality. These observations explain the unusually fast movement of hydrogen in a full-scale system and show that loss of capacity is not occurring uniformly in this type of hydrogen-storage system.
Geomorphic and climate influences on soil organic carbon concentration at large catchment scales
NASA Astrophysics Data System (ADS)
Hancock, G. R.; Martinez, C.; Wells, T.; Dever, C.; Willgoose, G. R.; Bissett, A.
2013-12-01
Soils represent the largest terrestrial sink of carbon on Earth. Managing the soil organic carbon (SOC) pool is becoming increasingly important in light of growing concerns over global food security and the climatic effects of anthropogenic CO2 emissions. The development of accurate predictive SOC models is an important step for both land resource managers and policy makers alike. Presently, a number of SOC models are available which incorporate environmental data to produce SOC estimates. The accuracy of these models varies significantly over a range of landscapes due to the highly complex nature of SOC dynamics. Fundamental gaps exist in our understanding of SOC controls. To date, studies of SOC controls, and the subsequent models derived from their findings, have focussed mainly on North American and European landscapes. Additionally, SOC studies often focus on the paddock to small catchment scale. Consequently, information about SOC in Australian landscapes and at the larger scale is limited. This study examines controls over SOC across a large catchment of approximately 600 km2 in the Upper Hunter Valley, New South Wales, Australia. The aim was to develop a predictive model for use across a range of catchment sizes and climates. Here it was found that elevation (derived from DEMs) and vegetation (above-ground biomass quantified by remote sensing) were the primary controls on SOC. SOC was seen to increase with elevation and NDVI. This relationship is believed to reflect rainfall patterns across the study area and plant growth potential. Further, a relationship was observed between SOC and the environmental tracer 137Cs, which suggests that SOC and 137Cs move through the catchment via similar sediment-transport mechanisms. Therefore, losses of SOC by erosion and gains by deposition may need to be accounted for in any SOC budget.
Model validation indicated that the use of simple linear relationships could predict SOC based on rainfall and vegetation (above ground biomass as quantified by remote sensing). The results suggest that simple landscape and climate models have the potential to predict the spatial distribution of SOC. The findings of this study emphasise the importance of tailoring SOC models to the appropriate scale.
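The kind of simple linear relationship suggested above can be sketched as an ordinary-least-squares fit of SOC to elevation and NDVI. The variable names, data, and recovered coefficients below are synthetic placeholders, not values from the study.

```python
import numpy as np

# OLS sketch of a simple linear SOC model: SOC ~ intercept + b1*elev + b2*ndvi.
# All data and coefficients in this example are synthetic.

def fit_soc_model(elev, ndvi, soc):
    """Return [intercept, b_elev, b_ndvi] from a least-squares fit."""
    X = np.column_stack([np.ones_like(elev), elev, ndvi])
    coef, *_ = np.linalg.lstsq(X, soc, rcond=None)
    return coef

def predict_soc(coef, elev, ndvi):
    """Predict SOC from elevation and NDVI using fitted coefficients."""
    return coef[0] + coef[1] * elev + coef[2] * ndvi
```

With real catchment data, such a fit would be validated against held-out observations, as the study's model-validation step describes.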
A multi-year estimate of methane fluxes in Alaska from CARVE atmospheric observations
Miller, Scot M.; Miller, Charles E.; Commane, Roisin; Chang, Rachel Y.-W.; Dinardo, Steven J.; Henderson, John M.; Karion, Anna; Lindaas, Jakob; Melton, Joe R.; Miller, John B.; Sweeney, Colm; Wofsy, Steven C.; Michalak, Anna M.
2016-01-01
Methane (CH4) fluxes from Alaska and other arctic regions may be sensitive to thawing permafrost and future climate change, but estimates of both current and future fluxes from the region are uncertain. This study estimates CH4 fluxes across Alaska for 2012–2014 using aircraft observations from the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) and a geostatistical inverse model (GIM). We find that a simple flux model based on a daily soil temperature map and a static map of wetland extent reproduces the atmospheric CH4 observations at the state-wide, multi-year scale more effectively than global-scale, state-of-the-art process-based models. This result points to a simple and effective way of representing CH4 flux patterns across Alaska. It further suggests that contemporary process-based models can improve their representation of key processes that control fluxes at regional scales, and that more complex processes included in these models cannot be evaluated given the information content of available atmospheric CH4 observations. In addition, we find that CH4 emissions from the North Slope of Alaska account for 24% of the total statewide flux of 1.74 ± 0.44 Tg CH4 (for May–Oct.). Contemporary global-scale process models only attribute an average of 3% of the total flux to this region. This mismatch occurs for two reasons: process models likely underestimate wetland area in regions without visible surface water, and these models prematurely shut down CH4 fluxes at soil temperatures near 0°C. As a consequence, wetlands covered by vegetation and wetlands with persistently cold soils could be larger contributors to natural CH4 fluxes than in process estimates. 
Lastly, we find that the seasonality of CH4 fluxes varied during 2012–2014, but that total emissions did not differ significantly among years, despite substantial differences in soil temperature and precipitation; year-to-year variability in these environmental conditions did not translate into obvious changes in total CH4 fluxes from the state. PMID:28066129
A multi-year estimate of methane fluxes in Alaska from CARVE atmospheric observations.
Miller, Scot M; Miller, Charles E; Commane, Roisin; Chang, Rachel Y-W; Dinardo, Steven J; Henderson, John M; Karion, Anna; Lindaas, Jakob; Melton, Joe R; Miller, John B; Sweeney, Colm; Wofsy, Steven C; Michalak, Anna M
2016-10-01
Methane (CH4) fluxes from Alaska and other arctic regions may be sensitive to thawing permafrost and future climate change, but estimates of both current and future fluxes from the region are uncertain. This study estimates CH4 fluxes across Alaska for 2012-2014 using aircraft observations from the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) and a geostatistical inverse model (GIM). We find that a simple flux model based on a daily soil temperature map and a static map of wetland extent reproduces the atmospheric CH4 observations at the state-wide, multi-year scale more effectively than global-scale, state-of-the-art process-based models. This result points to a simple and effective way of representing CH4 flux patterns across Alaska. It further suggests that contemporary process-based models can improve their representation of key processes that control fluxes at regional scales, and that more complex processes included in these models cannot be evaluated given the information content of available atmospheric CH4 observations. In addition, we find that CH4 emissions from the North Slope of Alaska account for 24% of the total statewide flux of 1.74 ± 0.44 Tg CH4 (for May-Oct.). Contemporary global-scale process models only attribute an average of 3% of the total flux to this region. This mismatch occurs for two reasons: process models likely underestimate wetland area in regions without visible surface water, and these models prematurely shut down CH4 fluxes at soil temperatures near 0°C. As a consequence, wetlands covered by vegetation and wetlands with persistently cold soils could be larger contributors to natural CH4 fluxes than in process estimates.
Lastly, we find that the seasonality of CH4 fluxes varied during 2012-2014, but that total emissions did not differ significantly among years, despite substantial differences in soil temperature and precipitation; year-to-year variability in these environmental conditions did not translate into obvious changes in total CH4 fluxes from the state.
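The simple flux model described above combines a static wetland map with a daily soil-temperature response. This sketch shows one common way to express such a model: a wetland fraction times a smooth Q10-type temperature response with no hard cutoff at 0°C. The Q10 form and all parameter values are assumptions for illustration, not the study's model.

```python
# Illustrative grid-cell CH4 flux: static wetland fraction times a smooth
# soil-temperature response. Assumption: Q10 form and parameters are
# placeholders; note there is no hard shutoff at 0 degC.

def ch4_flux(wetland_frac, t_soil_c, base_flux=1.0, q10=2.0, t_ref=10.0):
    """Flux per unit area (arbitrary units) for one grid cell."""
    return wetland_frac * base_flux * q10 ** ((t_soil_c - t_ref) / 10.0)
```

A smooth response like this keeps fluxes nonzero at soil temperatures near 0°C, unlike the premature shutdown the study identifies in process models.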
NASA Technical Reports Server (NTRS)
Malcolm, G. N.; Schiff, L. B.
1985-01-01
Two rotary balance apparatuses were developed for testing airplane models in a coning motion. A large-scale apparatus, developed for use in the 12-Foot Pressure Wind Tunnel primarily to permit testing at high Reynolds numbers, was recently used to investigate the aerodynamics of a 0.05-scale model of the F-15 fighter aircraft. Effects of Reynolds number, spin rate parameter, model attitude, presence of a nose boom, and model/sting mounting angle were investigated. A smaller apparatus, which investigates the aerodynamics of bodies of revolution in a coning motion, was used in the 6- by 6-Foot Supersonic Wind Tunnel to investigate the aerodynamic behavior of a simple representation of a modern fighter, the Standard Dynamic Model (SDM). Effects of spin rate parameter and model attitude were investigated. A description of the two rigs and a discussion of some of the results obtained in the respective tests are presented.
NASA Astrophysics Data System (ADS)
Sun, Yudong; Vadakkan, Tegy; Bassler, Kevin
2007-03-01
We study the universality and robustness of variants of the simple model of superconducting vortex dynamics first introduced by Bassler and Paczuski in Phys. Rev. Lett. 81, 3761 (1998). The model is a coarse-grained model that captures the essential features of plastic vortex motion. It accounts for the repulsive interaction between vortices, the pinning of vortices at quenched, disordered locations in the material, and the over-damped dynamics of the vortices that leads to tearing of the flux-line lattice. We report the results of extensive simulations of the critical "Bean state" dynamics of the model. We find a phase diagram containing four distinct phases of dynamical behavior, including two phases with distinct Self-Organized Critical (SOC) behavior. Exponents describing the avalanche scaling behavior in the two SOC phases are determined using finite-size scaling. The exponents are found to be robust within each phase and across different variants of the model. The difference in scaling behavior between the two phases is also observed in the morphology of the avalanches.
Measuring water fluxes in forests: The need for integrative platforms of analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Eric J.
To understand the importance of analytical tools such as those provided by Berdanier et al. (2016) in this issue of Tree Physiology, one must understand both the grand challenges facing Earth system modelers, as well as the minutia of engaging in ecophysiological research in the field. It is between these two extremes of scale that many ecologists struggle to translate empirical research into useful conclusions that guide our understanding of how ecosystems currently function and how they are likely to change in the future. Likewise, modelers struggle to build complexity into their models that match this sophisticated understanding of how ecosystems function, so that necessary simplifications required by large scales do not themselves change the conclusions drawn from these simulations. As both monitoring technology and computational power increase, along with the continual effort in both empirical and modeling research, the gap between the scale of Earth system models and ecological observations continually closes. In addition, this creates a need for platforms of model–data interaction that incorporate uncertainties in both simulations and observations when scaling from one to the other, moving beyond simple comparisons of monthly or annual sums and means.
Measuring water fluxes in forests: The need for integrative platforms of analysis
Ward, Eric J.
2016-08-09
To understand the importance of analytical tools such as those provided by Berdanier et al. (2016) in this issue of Tree Physiology, one must understand both the grand challenges facing Earth system modelers, as well as the minutia of engaging in ecophysiological research in the field. It is between these two extremes of scale that many ecologists struggle to translate empirical research into useful conclusions that guide our understanding of how ecosystems currently function and how they are likely to change in the future. Likewise, modelers struggle to build complexity into their models that match this sophisticated understanding of how ecosystems function, so that necessary simplifications required by large scales do not themselves change the conclusions drawn from these simulations. As both monitoring technology and computational power increase, along with the continual effort in both empirical and modeling research, the gap between the scale of Earth system models and ecological observations continually closes. In addition, this creates a need for platforms of model–data interaction that incorporate uncertainties in both simulations and observations when scaling from one to the other, moving beyond simple comparisons of monthly or annual sums and means.
Agent Based Modeling: Fine-Scale Spatio-Temporal Analysis of Pertussis
NASA Astrophysics Data System (ADS)
Mills, D. A.
2017-10-01
In epidemiology, spatial and temporal variables are used to compute vaccination efficacy and effectiveness. The chosen resolution and scale of a spatial or spatio-temporal analysis will affect the results. When calculating vaccination efficacy, for example, a simple environment that offers various ideal outcomes is often modeled using coarse-scale data aggregated on an annual basis. In contrast to this inadequate aggregated method, this research uses agent-based modeling of fine-scale neighborhood data, centered on the interactions of infants in daycare and their families, to demonstrate an accurate reflection of vaccination capabilities. Recent studies suggest that, despite preventing major symptoms, the acellular pertussis vaccine does not prevent colonization and transmission of Bordetella pertussis bacteria. After vaccination, a treated individual becomes a potential asymptomatic carrier of the pertussis bacteria rather than an immune individual. Agent-based modeling enables the measurable depiction of asymptomatic carriers that are otherwise unaccounted for when calculating vaccination efficacy and effectiveness. Using empirical data from a Florida pertussis outbreak case study, the results of this model demonstrate that asymptomatic carriers bias the calculated vaccination efficacy and reveal a need to reconsider current methods that are widely used for calculating vaccination efficacy and effectiveness.
High performance cellular level agent-based simulation with FLAME for the GPU.
Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela
2010-05-01
Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.
Porter, Mark L.; Plampin, Michael; Pawar, Rajesh; ...
2014-12-31
The physicochemical processes associated with CO2 leakage into shallow aquifer systems are complex and span multiple spatial and time scales. Continuum-scale numerical models that faithfully represent the underlying pore-scale physics are required to predict the long-term behavior and aid in risk analysis regarding regulatory and management decisions. This study focuses on benchmarking the numerical simulator FEHM against intermediate-scale column experiments of CO2 gas evolution in homogeneous and heterogeneous sand configurations. Inverse modeling was conducted to calibrate model parameters and determine model sensitivity to the observed steady-state saturation profiles. It is shown that FEHM is a powerful tool that is capable of capturing the experimentally observed outflow rates and saturation profiles. Moreover, FEHM captures the transition from single- to multi-phase flow and CO2 gas accumulation at interfaces separating sands. We also derive a simple expression, based on Darcy's law, for the pressure at which CO2 free-phase gas is observed and show that it reliably predicts the location at which single-phase flow transitions to multi-phase flow.
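The abstract does not give its Darcy's-law expression explicitly, so the following is only a generic sketch of that kind of estimate: a linear steady-state Darcy pressure profile along a column, used to flag where pressure first falls below a given bubble pressure and free gas could appear. All parameter values are illustrative.

```python
# Generic Darcy's-law sketch (not the paper's expression): steady 1-D
# pressure profile and the first location below a given bubble pressure.

def pressure_profile(p_inlet, q, mu, k, L, n=5):
    """p(x) = p_inlet - (q*mu/k)*x, sampled at n+1 evenly spaced points.
    q: Darcy flux [m/s], mu: viscosity [Pa s], k: permeability [m^2]."""
    return [p_inlet - (q * mu / k) * (i * L / n) for i in range(n + 1)]

def first_degassing_point(profile, p_bubble, L):
    """Distance [m] at which pressure first drops below the bubble
    pressure (where free CO2 gas could form); None if it never does."""
    n = len(profile) - 1
    for i, p in enumerate(profile):
        if p < p_bubble:
            return i * L / n
    return None
```

For example, with q = 1e-5 m/s, mu = 1e-3 Pa·s, and k = 1e-12 m², the pressure gradient is q·mu/k = 1e4 Pa/m along the column.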
A nested observation and model approach to non-linear groundwater-surface water interactions.
NASA Astrophysics Data System (ADS)
van der Velde, Y.; Rozemeijer, J. C.; de Rooij, G. H.
2009-04-01
Surface water quality measurements in The Netherlands are scattered in time and space. Therefore, water quality status and its variations and trends are difficult to determine. In order to reach the water quality goals of the European Water Framework Directive, we need to improve our understanding of the dynamics of surface water quality and the processes that affect it. In heavily drained lowland catchments, groundwater influences the discharge towards the surface water network in many complex ways. In particular, a strongly seasonal contracting and expanding system of discharging ditches and streams affects discharge and solute transport. At a tube-drained field site, the tube drain flux and the combined flux of all other flow routes toward a 45 m stretch of surface water were measured for a year. Groundwater levels at various locations in the field and the discharge at two nested catchment scales were also monitored. The distinct response of individual flow routes to rainfall events at the field site allowed us to separate the discharge at a 4 ha catchment and at a 6 km2 catchment into flow route contributions. The results of this nested experimental setup, combined with the results of a distributed hydrological model, have led to the formulation of a process model approach that focuses on the spatial variability of discharge generation driven by temporal and spatial variations in groundwater levels. The main idea of this approach is that discharge is not generated by catchment-average storages or groundwater heads, but mainly by point-scale extremes, i.e. extremely low permeability, extremely high groundwater heads, or extremely low surface elevations, all leading to catchment discharge. We focused on describing the spatial extremes in point-scale storages, and this led to a simple and measurable expression that governs the non-linear groundwater-surface water interaction.
We will present the analysis of the field site data to demonstrate the potential of nested-scale, high frequency observations. The distributed hydrological model results will be used to show transient catchment scale relations between groundwater levels and discharges. These analyses lead to a simple expression that can describe catchment scale groundwater surface water interactions.
Diverging patterns with endogenous labor migration.
Reichlin, P; Rustichini, A
1998-05-05
"The standard neoclassical model cannot explain persistent migration flows and lack of cross-country convergence when capital and labor are mobile. Here we present a model where both phenomena may take place.... Our model is based on the Arrow-Romer approach to endogenous growth theory. We single out the importance of a (however weak) scale effect from the size of the workforce.... The main conclusion of this simple model is that lack of convergence, or even divergence, among countries is possible, even with perfect capital mobility and labor mobility." excerpt
Unity of quarks and leptons at the TeV scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foot, R.; Lew, H.
1990-08-01
The gauge group (SU(3))^2 ⊗ (SU(2))^2 ⊗ (U(1)_{Y'})^3 supplemented by quark-lepton, left-right, and generation discrete symmetries represents a new approach to the understanding of the particle content of the standard model. In particular, as a result of the large number of symmetries, the fermion sector of the model is very simple. After symmetry breaking, the standard model can be shown to emerge from this highly symmetric model at low energies.
Don Quixote Pond: A Small Scale Model of Weathering and Salt Accumulation
NASA Technical Reports Server (NTRS)
Englert, P.; Bishop, J. L.; Patel, S. N.; Gibson, E. K.; Koeberl, C.
2015-01-01
The formation of Don Quixote Pond in the North Fork of Wright Valley, Antarctica, is a model for unique terrestrial calcium, chlorine, and sulfate weathering, accumulation, and distribution processes. The formation of Don Quixote Pond by simple shallow and deep groundwater contrasts with the more complex models for Don Juan Pond in the South Fork of Wright Valley. Our study aims to understand the formation of Don Quixote Pond in terms of unique terrestrial processes and as a model for Ca, Cl, and S weathering and distribution on Mars.
Quantifying predictability in a model with statistical features of the atmosphere
Kleeman, Richard; Majda, Andrew J.; Timofeyev, Ilya
2002-01-01
The Galerkin truncated inviscid Burgers equation has recently been shown by the authors to be a simple model with many degrees of freedom and many statistical properties similar to those occurring in dynamical systems relevant to the atmosphere. These properties include long time-correlated, large-scale modes of low-frequency variability and short time-correlated “weather modes” at smaller scales. The correlation scaling in the model extends over several decades and may be explained by a simple theory. Here a thorough analysis of the nature of predictability in the idealized system is developed using a theoretical framework introduced by one of the authors (R.K.). This analysis is based on a relative entropy functional that has been shown elsewhere by one of the authors to measure the utility of statistical predictions precisely. The analysis is facilitated by the fact that most relevant probability distributions are approximately Gaussian if the initial conditions are assumed to be so. Rather surprisingly, this holds for both the equilibrium (climatological) and nonequilibrium (prediction) distributions. We find that in most cases the absolute difference in the first moments of these two distributions (the “signal” component) is the main determinant of predictive utility variations. Contrary to conventional belief in the ensemble prediction area, the dispersion of prediction ensembles is generally of secondary importance in accounting for variations in utility associated with different initial conditions. This conclusion has potentially important implications for practical weather prediction, where traditionally most attention has focused on dispersion and its variability. PMID:12429863
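The signal/dispersion split described here is concrete for Gaussians: the relative entropy of a Gaussian forecast p from the Gaussian climatology q separates into a term from the mean shift and a term from the spread ratio. A minimal sketch using the standard one-dimensional Gaussian KL formula (the function name and example values are ours, not the paper's):

```python
import math

def predictive_utility(mu_p, var_p, mu_q, var_q):
    """Relative entropy KL(p||q) for 1D Gaussians, split into a 'signal'
    component (forecast mean shifted off climatology) and a 'dispersion'
    component (forecast spread differing from climatological spread)."""
    signal = 0.5 * (mu_p - mu_q) ** 2 / var_q
    dispersion = 0.5 * (var_p / var_q - 1.0 - math.log(var_p / var_q))
    return signal, dispersion

# A one-standard-deviation mean shift outweighs even a halving of the
# forecast variance, illustrating the abstract's point:
s, d = predictive_utility(mu_p=1.0, var_p=0.5, mu_q=0.0, var_q=1.0)
```

Both components are non-negative, and for identical distributions both vanish, so their sum is a natural measure of how much a prediction adds over climatology.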
What Quasars Really Look Like: Unification of the Emission and Absorption Line Regions
NASA Technical Reports Server (NTRS)
Elvis, Martin
2000-01-01
We propose a simple unifying structure for the inner regions of quasars and AGN. This empirically derived model links together the broad absorption lines (BALs), the narrow UV/X-ray ionized absorbers, the BELR, and the Compton scattering/fluorescing regions into a single structure. The model also suggests an alternative origin for the large-scale bi-conical outflows. Some other potential implications of this structure are discussed.
Signal transmission competing with noise in model excitable brains
NASA Astrophysics Data System (ADS)
Marro, J.; Mejias, J. F.; Pinamonti, G.; Torres, J. J.
2013-01-01
This is a short review of recent studies in our group on how weak signals may propagate efficiently in a system with noise-induced excitation-inhibition competition, which adapts to the activity at short time scales and thus induces excitable conditions. Our numerical results on simple mathematical models should hold for many complex networks in nature, including some brain cortical areas. In particular, they serve here to interpret available psycho-technical data.
Hierarchical algorithms for modeling the ocean on hierarchical architectures
NASA Astrophysics Data System (ADS)
Hill, C. N.
2012-12-01
This presentation will describe an approach to using accelerator/co-processor technology that maps hierarchical, multi-scale modeling techniques to an underlying hierarchical hardware architecture. The focus of this work is on making effective use of both CPU and accelerator/co-processor parts of a system, for large scale ocean modeling. In the work, a lower resolution basin scale ocean model is locally coupled to multiple, "embedded", limited area higher resolution sub-models. The higher resolution models execute on co-processor/accelerator hardware and do not interact directly with other sub-models. The lower resolution basin scale model executes on the system CPU(s). The result is a multi-scale algorithm that aligns with hardware designs in the co-processor/accelerator space. We demonstrate this approach being used to substitute explicit process models for standard parameterizations. Code for our sub-models is implemented through a generic abstraction layer, so that we can target multiple accelerator architectures with different programming environments. We will present two application and implementation examples. One uses the CUDA programming environment and targets GPU hardware. This example employs a simple non-hydrostatic two dimensional sub-model to represent vertical motion more accurately. The second example uses a highly threaded three-dimensional model at high resolution. This targets a MIC/Xeon Phi like environment and uses sub-models as a way to explicitly compute sub-mesoscale terms. In both cases the accelerator/co-processor capability provides extra compute cycles that allow improved model fidelity for little or no extra wall-clock time cost.
Application of empirical and dynamical closure methods to simple climate models
NASA Astrophysics Data System (ADS)
Padilla, Lauren Elizabeth
This dissertation applies empirically- and physically-based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally-averaged energy balance model. Uncertainty in climate forcing and historical temperature make TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborate ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. 
In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in closure models were the dominant sources of error. Using a reduced-order modeling approach to maximize energy content allowed reduction of the closure model dimension by up to 75% without loss of accuracy. The MMF and a comparable alternative model performed equally well compared to direct numerical simulation.
Microfabrication of hierarchical structures for engineered mechanical materials
NASA Astrophysics Data System (ADS)
Vera Canudas, Marc
Materials found in nature present, in some cases, unique properties emerging from their constituents that are of great interest for engineered materials, with applications ranging from structural materials for the construction of bridges, canals and buildings, to the fabrication of new lightweight composites for airplane and automotive bodies, to protective thin-film coatings, amongst other fields. Research in the growing field of biomimetic materials indicates that the micro-architectures present in natural materials are critical to their macroscopic mechanical properties. A better understanding of the effect that structure and hierarchy across scales have on material properties will enable engineered materials with enhanced properties. At the moment, very few theoretical models predict the mechanical properties of simple materials based on their microstructures. Moreover, these models are based on observations of complex biological systems. One way to overcome this challenge is through the use of microfabrication techniques to design and fabricate simple materials, more appropriate for the study of hierarchical organizations and microstructured materials. Arrays of structures with controlled geometry and dimension can be designed and fabricated at different length scales, ranging from a few hundred nanometers to centimeters, in order to mimic similar systems found in nature. In this thesis, materials have been fabricated in order to gain fundamental insight into the complex hierarchical materials found in nature and to engineer novel materials with enhanced mechanical properties. The materials fabricated here were mechanically characterized and compared to simple mechanics models to describe their behavior, with the goal of applying the knowledge acquired to the design and synthesis of future engineered materials with novel properties.
Scales and scaling in turbulent ocean sciences; physics-biology coupling
NASA Astrophysics Data System (ADS)
Schmitt, Francois
2015-04-01
Geophysical fields possess huge fluctuations over many spatial and temporal scales. In the ocean, this property at smaller scales is closely linked to marine turbulence. The velocity field varies from large scales down to the Kolmogorov scale (mm), and scalar fields down to the Batchelor scale, which is often much smaller. As a consequence, it is not always simple to determine at which scale a process should be considered. The scale question is hence fundamental in marine sciences, especially when dealing with physics-biology coupling. For example, marine dynamical models typically have a grid size of a hundred meters or more, which is more than 10^5 times larger than the smallest turbulence scale (the Kolmogorov scale). Such a scale is fine for the dynamics of a whale (around 100 m), but for a fish larva (1 cm) or a copepod (1 mm) a description at smaller scales is needed, due to the nonlinear nature of turbulence. The same holds for biogeochemical fields such as passive and active tracers (oxygen, fluorescence, nutrients, pH, turbidity, temperature, salinity...). In this framework, we will discuss the scale problem in turbulence modeling in the ocean, and the relation of the Kolmogorov and Batchelor scales of turbulence in the ocean to the size of marine animals. We will also consider scaling laws for organism-particle Reynolds numbers (from whales to bacteria), and possible scaling laws for organism accelerations.
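The two dissipation scales mentioned follow from standard definitions: the Kolmogorov scale eta = (nu^3/eps)^(1/4) and the Batchelor scale eta_B = eta / sqrt(Sc), where Sc = nu/D is the Schmidt number of the scalar. A quick sketch with typical (illustrative) upper-ocean values:

```python
nu = 1.0e-6    # kinematic viscosity of seawater (m^2/s)
eps = 1.0e-6   # turbulent kinetic energy dissipation rate (W/kg), illustrative

eta = (nu ** 3 / eps) ** 0.25    # Kolmogorov scale (m); ~1 mm here

# The Batchelor scale shrinks with the Schmidt number of the scalar field:
eta_b_heat = eta / 7.0 ** 0.5    # temperature, Sc ~ 7 (assumed typical value)
eta_b_salt = eta / 700.0 ** 0.5  # salinity, Sc ~ 700 (assumed typical value)
print(eta, eta_b_heat, eta_b_salt)
```

With these values the Kolmogorov scale is about 1 mm, matching the copepod size quoted in the abstract, while the salinity Batchelor scale is tens of micrometres, an order of magnitude below any model grid.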
A comparison of hydrologic models for ecological flows and water availability
Caldwell, Peter V; Kennen, Jonathan G.; Sun, Ge; Kiang, Julie E.; Butcher, John B; Eddy, Michelle C; Hay, Lauren E.; LaFontaine, Jacob H.; Hain, Ernie F.; Nelson, Stacy C; McNulty, Steve G
2015-01-01
Robust hydrologic models are needed to help manage water resources for healthy aquatic ecosystems and reliable water supplies for people, but there is a lack of comprehensive model comparison studies that quantify differences in streamflow predictions among model applications developed to answer management questions. We assessed differences in daily streamflow predictions by four fine-scale models and two regional-scale monthly time step models by comparing model fit statistics and bias in ecologically relevant flow statistics (ERFSs) at five sites in the Southeastern USA. Models were calibrated to different extents, including uncalibrated (level A), calibrated to a downstream site (level B), calibrated specifically for the site (level C) and calibrated for the site with adjusted precipitation and temperature inputs (level D). All models generally captured the magnitude and variability of observed streamflows at the five study sites, and increasing level of model calibration generally improved performance. All models had at least 1 of 14 ERFSs falling outside a ±30% range of hydrologic uncertainty at every site, and ERFSs related to low flows were frequently over-predicted. Our results do not indicate that any specific hydrologic model is superior to the others evaluated at all sites and for all measures of model performance. Instead, we provide evidence that (1) model performance is as likely to be related to calibration strategy as it is to model structure and (2) simple, regional-scale models have comparable performance to the more complex, fine-scale models at a monthly time step.
Constraints on genes shape long-term conservation of macro-synteny in metazoan genomes.
Lv, Jie; Havlak, Paul; Putnam, Nicholas H
2011-10-05
Many metazoan genomes conserve chromosome-scale gene linkage relationships ("macro-synteny") from the common ancestor of multicellular animal life, but the biological explanation for this conservation is still unknown. Double cut and join (DCJ) is a simple, well-studied model of neutral genome evolution amenable to both simulation and mathematical analysis, but as we show here, it is not sufficient to explain long-term macro-synteny conservation. We examine a family of simple (one-parameter) extensions of DCJ to identify models and choices of parameters consistent with the levels of macro- and micro-synteny conservation observed among animal genomes. Our software implements a flexible strategy for incorporating various types of genomic context into the DCJ model ("DCJ-[C]"), and is available as open source software from http://github.com/putnamlab/dcj-c. A simple model of genome evolution, in which DCJ moves are allowed only if they maintain chromosomal linkage among a set of constrained genes, can simultaneously account for the level of macro-synteny conservation and for correlated conservation among multiple pairs of species. Simulations under this model indicate that a constraint on approximately 7% of metazoan genes is sufficient to constrain genome rearrangement to an average rate of 25 inversions and 1.7 translocations per million years.
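The constrained-move idea can be sketched independently of the authors' DCJ-[C] software: a proposed rearrangement is accepted only if the constrained genes keep their pattern of chromosomal co-residency. The linkage check below is a minimal hypothetical version, not the published implementation:

```python
def linkage_partition(genome, constrained):
    """Group the constrained genes by which chromosome they currently sit on.
    A genome is modelled (simplistically) as a list of chromosomes, each a
    list of gene identifiers."""
    parts = {frozenset(g for g in chrom if g in constrained) for chrom in genome}
    return parts - {frozenset()}

def move_allowed(genome, proposal, constrained):
    """Accept a rearrangement only if the constrained genes remain linked
    exactly as before (the acceptance rule described in the abstract)."""
    return linkage_partition(genome, constrained) == linkage_partition(proposal, constrained)

genome = [[1, 2, 3], [4, 5, 6]]
constrained = {1, 2, 5}

# A translocation that separates constrained genes 1 and 2 is rejected...
bad = [[1, 4, 5, 6], [2, 3]]
# ...while one that keeps each constrained group together is allowed.
ok = [[1, 2], [3, 4, 5, 6]]
```

Inversions within a chromosome never change this partition, which is consistent with the model permitting a much higher inversion rate than translocation rate.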
Model validation of simple-graph representations of metabolism
Holme, Petter
2009-01-01
The large-scale properties of chemical reaction systems, such as metabolism, can be studied with graph-based methods. To do this, one needs to reduce the information, lists of chemical reactions, available in databases. Even for the simplest type of graph representation, this reduction can be done in several ways. We investigate different simple network representations by testing how well they encode information about one biologically important network structure—network modularity (the propensity for edges to be clustered into dense groups that are sparsely connected between each other). To achieve this goal, we design a model of reaction systems where network modularity can be controlled and measure how well the reduction to simple graphs captures the modular structure of the model reaction system. We find that the network types that best capture the modular structure of the reaction system are substrate–product networks (where substrates are linked to products of a reaction) and substance networks (with edges between all substances participating in a reaction). Furthermore, we argue that the proposed model for reaction systems with tunable clustering is a general framework for studies of how reaction systems are affected by modularity. To this end, we investigate statistical properties of the model and find, among other things, that it recreates correlations between degree and mass of the molecules. PMID:19158012
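The two representations the study finds most faithful are easy to state precisely: the substrate-product network links each substrate of a reaction to each of its products, while the substance network links every pair of participants. A sketch with a made-up two-reaction system (the reaction list is illustrative, not from a database):

```python
from itertools import combinations, product

# Hypothetical reaction list: (substrates, products) pairs
reactions = [({"glucose", "ATP"}, {"G6P", "ADP"}),
             ({"G6P"}, {"F6P"})]

def substrate_product_net(rxns):
    """Directed edges from every substrate to every product of each reaction."""
    return {e for subs, prods in rxns for e in product(subs, prods)}

def substance_net(rxns):
    """Undirected edges between all substances participating in a reaction."""
    return {frozenset(p) for subs, prods in rxns
            for p in combinations(sorted(subs | prods), 2)}

print(len(substrate_product_net(reactions)), len(substance_net(reactions)))
```

The substance network is denser (it also connects co-substrates and co-products), which is one reason the two reductions can encode modular structure differently.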
A Simple Model for the Orbital Debris Environment in GEO
NASA Astrophysics Data System (ADS)
Anilkumar, A. K.; Ananthasayanam, M. R.; Subba Rao, P. V.
The increase of space debris and its threat to commercial space activities in the Geosynchronous Earth Orbit (GEO) predictably cause concern regarding the environment over the long term. A variety of studies regarding space debris, such as detection, modeling, protection and mitigation measures, has been pursued for the past couple of decades. Due to the absence of atmospheric drag to remove debris in GEO and the increasing number of utility satellites therein, the number of objects in GEO will continue to increase. The characterization of the GEO environment is critical for risk assessment and protection of future satellites and also for incorporating effective debris mitigation measures in design and operations. Debris measurements in GEO have been limited to objects larger than about 60 cm. This paper provides an engineering model of the GEO environment by utilizing the philosophy and approach laid out for the SIMPLE model proposed recently for LEO by the authors. The present study analyses the statistical characteristics of the GEO catalogued objects in order to arrive at a model for the GEO space debris environment. It is noted that the catalogued objects, currently around 800, tracked by USSPACECOM across the years 1998 to 2004, have the same semi-major axis mode (highest number density) around 35750 km above the earth. After removing the objects in the small bin around the mode, (35700, 35800) km, containing around 40 percent of the objects (a value that is nearly constant across the years), the number density of the remaining objects follows a single Laplace distribution with two parameters, namely location and scale. Across the years the location parameter of the above distribution does not vary significantly, but the scale parameter shows a definite trend. These observations are successfully utilized in proposing a simple model for the GEO debris environment. References: Ananthasayanam, M. R., Anil Kumar, A. K., and Subba Rao, P.
V., "A New Stochastic Impressionistic Low Earth (SIMPLE) Model of the Space Debris Scenario", Conference Abstract COSPAR 02-A-01772, 2002. Ananthasayanam, M. R., Anilkumar, A. K., Subba Rao, P. V., and V. Adimurthy, "Characterization of Eccentricity and Ballistic Coefficients of Space Debris in Altitude and Perigee Bins", IAC-03-IAA5.p.04, presented at the IAF Conference, Bremen, October 2003, and to be published in the Proceedings of the IAF Conference, Science and Technology Series, 2003.
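The Laplace fit described above is straightforward to reproduce in outline: the maximum-likelihood location is the sample median, and the scale is the mean absolute deviation from that median. A generic sketch on synthetic numbers (not the catalogue densities):

```python
import statistics

def fit_laplace(xs):
    """Maximum-likelihood estimates for a Laplace distribution:
    location = sample median, scale = mean |x - median|."""
    loc = statistics.median(xs)
    scale = sum(abs(x - loc) for x in xs) / len(xs)
    return loc, scale

# e.g. number densities per semi-major-axis bin after removing the modal
# bin (synthetic values for illustration):
loc, scale = fit_laplace([1.0, 2.0, 3.0, 4.0, 10.0])
```

Tracking these two parameters year by year, as the study does, reduces the catalogue to a location (which stays put) and a scale (which trends), which is what makes the resulting environment model simple.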
On the structure of contact binaries. I - The contact discontinuity
NASA Technical Reports Server (NTRS)
Shu, F. H.; Lubow, S. H.; Anderson, L.
1976-01-01
The problem of the interior structure of contact binaries is reviewed, and a simple resolution of the difficulties which plague the theory is suggested. It is proposed that contact binaries contain a contact discontinuity between the lower surface of the common envelope and the Roche lobe of the cooler star. This discontinuity is maintained against thermal diffusion by fluid flow, and the transition layer is thin to the extent that the dynamical time scale is short in comparison with the thermal time scale. The idealization that the transition layer has infinitesimal thickness allows a simple formulation of the structure equations which are closed by appropriate jump conditions across the discontinuity. The further imposition of the standard boundary conditions suffices to define a unique model for the system once the chemical composition, the masses of the two stars, and the orbital separation are specified.
Long-term forecasting of internet backbone traffic.
Papagiannaki, Konstantina; Taft, Nina; Zhang, Zhi-Li; Diot, Christophe
2005-09-01
We introduce a methodology to predict when and where link additions/upgrades have to take place in an Internet protocol (IP) backbone network. Using simple network management protocol (SNMP) statistics, collected continuously since 1999, we compute aggregate demand between any two adjacent points of presence (PoPs) and look at its evolution at time scales larger than 1 h. We show that IP backbone traffic exhibits visible long term trends, strong periodicities, and variability at multiple time scales. Our methodology relies on the wavelet multiresolution analysis (MRA) and linear time series models. Using wavelet MRA, we smooth the collected measurements until we identify the overall long-term trend. The fluctuations around the obtained trend are further analyzed at multiple time scales. We show that the largest amount of variability in the original signal is due to its fluctuations at the 12-h time scale. We model inter-PoP aggregate demand as a multiple linear regression model, consisting of the two identified components. We show that this model accounts for 98% of the total energy in the original signal, while explaining 90% of its variance. Weekly approximations of those components can be accurately modeled with low-order autoregressive integrated moving average (ARIMA) models. We show that forecasting the long term trend and the fluctuations of the traffic at the 12-h time scale yields accurate estimates for at least 6 months in the future.
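The two-component model (long-term trend plus a dominant 12-hour fluctuation) amounts to a small linear regression once the components are identified. A sketch on synthetic hourly data; the trend slope and amplitude are invented for illustration, not the paper's measurements:

```python
import numpy as np

t = np.arange(24.0 * 28)                    # four weeks of hourly samples
trend = 5.0 + 0.01 * t                      # assumed long-term growth (e.g. Mbps)
daily = 2.0 * np.sin(2 * np.pi * t / 12.0)  # 12-hour fluctuation component
y = trend + daily                           # noiseless synthetic demand

# Multiple linear regression of demand on [1, t, sin, cos] at the 12 h period:
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 12.0),
                     np.cos(2 * np.pi * t / 12.0)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Forecast: extend the design matrix into the future and apply the fit.
t_f = np.arange(t[-1] + 1, t[-1] + 24 * 7)
Xf = np.column_stack([np.ones_like(t_f), t_f,
                      np.sin(2 * np.pi * t_f / 12.0),
                      np.cos(2 * np.pi * t_f / 12.0)])
forecast = Xf @ coef
```

In the paper the trend is first isolated by wavelet multiresolution analysis and the weekly component coefficients are themselves modelled with ARIMA; the regression above only illustrates the final trend-plus-harmonic structure.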
NASA Astrophysics Data System (ADS)
Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.
2015-08-01
Outdoor large-scale cultural sites are highly sensitive to environmental, natural, and human-made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models, incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at subsequent time instances and which at lower ones. In this way, dynamic change history maps are created, indicating the spatial probability that a region needs further 3D modelling at forthcoming instances. Using these maps, predictive assessment can be made, that is, surfaces can be localized within the objects where a high-accuracy reconstruction process needs to be activated at forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction, and representation of the 5D-DCHM geometry and the respective semantic information.
The open source 3DCityDB incorporating a PostgreSQL geo-database is used to manage and manipulate 3D data and their semantics.
Experimental and Numerical Correlation of Gravity Sag in Solar Sail Quality Membranes
NASA Technical Reports Server (NTRS)
Black, Jonathan T.; Leifer, Jack; DeMoss, Joshua A.; Walker, Eric N.; Belvin, W. Keith
2004-01-01
Solar sails are among the most studied members of the ultra-lightweight and inflatable (Gossamer) space structures family due to their potential to provide propellantless propulsion. They are composed of ultra-thin membrane panels that, to date, have proven very difficult to experimentally characterize and numerically model due to their reflectivity and flexibility, and the effects of gravity sag and air damping. Numerical models must be correlated with experimental measurements of sub-scale solar sails to verify that the models can be scaled up to represent full-sized solar sails. In this paper, the surface shapes of five horizontally supported 25-micron-thick aluminized Kapton membranes were measured to a 1.0 mm resolution using photogrammetry. Several simple numerical models closely match the experimental data, demonstrating the ability of finite element simulations to predict the actual behavior of solar sails.
NASA Astrophysics Data System (ADS)
Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi
2018-03-01
This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights about the representativeness of the estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
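The abstract does not specify the upscaling scheme, but "simple upscaling" of small-scale K data in layered media usually means thickness-weighted averaging: arithmetic for flow parallel to the layering, harmonic for flow across it. A sketch of that standard version (an assumption, not necessarily the authors' exact scheme):

```python
def upscale_k(k_layers, thicknesses):
    """Classical layered-media upscaling of hydraulic conductivity:
    thickness-weighted arithmetic mean for flow parallel to the layers,
    thickness-weighted harmonic mean for flow perpendicular to them."""
    total = sum(thicknesses)
    k_parallel = sum(k * d for k, d in zip(k_layers, thicknesses)) / total
    k_perpendicular = total / sum(d / k for k, d in zip(k_layers, thicknesses))
    return k_parallel, k_perpendicular

# Two equally thick layers with conductivities 1 and 4 (illustrative units):
k_par, k_perp = upscale_k([1.0, 4.0], [1.0, 1.0])
```

The harmonic mean never exceeds the arithmetic mean, so vertical (cross-layer) flow is controlled by the low-K layers, which is why small-scale vertical K data can constrain a calibration so strongly.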
Allen, Craig R.; Holling, Crawford S.; Garmestani, Ahjond S.; El-Shaarawi, Abdel H.; Piegorsch, Walter W.
2013-01-01
The scaling of physical, biological, ecological and social phenomena is a major focus of efforts to develop simple representations of complex systems. Much of the attention has been on discovering universal scaling laws that emerge from simple physical and geometric processes. However, there are regular patterns of departures both from those scaling laws and from continuous distributions of attributes of systems. Those departures often demonstrate the development of self-organized interactions between living systems and physical processes over narrower ranges of scale.
Drainage fracture networks in elastic solids with internal fluid generation
NASA Astrophysics Data System (ADS)
Kobchenko, Maya; Hafver, Andreas; Jettestuen, Espen; Galland, Olivier; Renard, François; Meakin, Paul; Jamtveit, Bjørn; Dysthe, Dag K.
2013-06-01
Experiments in which CO2 gas was generated by the yeast fermentation of sugar in an elastic layer of gelatine gel confined between two glass plates are described and analyzed theoretically. The CO2 gas pressure causes the gel layer to fracture. The gas produced is drained on short length scales by diffusion and on long length scales by flow in a fracture network, which has topological properties that are intermediate between river networks and hierarchical-fracture networks. A simple model for the experimental system with two parameters that characterize the disorder and the intermediate (river-fracture) topology of the network was developed and the results of the model were compared with the experimental results.
Kinoshita, Shuichi; Yoshioka, Shinya; Kawagoe, Kenji
2002-01-01
Structural colour in the Morpho butterfly originates from submicron structure within a scale and, for over a century, its colour and reflectivity have been explained as interference of light due to the multilayer of cuticle and air. However, this model fails to explain the extraordinarily uniform colour of the wing with respect to the observation direction. We have performed microscopic, optical and theoretical investigations, and have found that the separate lamellar structure with irregular heights is extremely important. Using a simple model, we have shown that the combined action of interference and diffraction is essential for the structural colour of the Morpho butterfly. PMID:12137569
Coagulation-Fragmentation Model for Animal Group-Size Statistics
NASA Astrophysics Data System (ADS)
Degond, Pierre; Liu, Jian-Guo; Pego, Robert L.
2017-04-01
We study coagulation-fragmentation equations inspired by a simple model proposed in fisheries science to explain data for the size distribution of schools of pelagic fish. Although the equations lack detailed balance and admit no H-theorem, we are able to develop a rather complete description of equilibrium profiles and large-time behavior, based on recent developments in complex function theory for Bernstein and Pick functions. In the large-population continuum limit, a scaling-invariant regime is reached in which all equilibria are determined by a single scaling profile. This universal profile exhibits power-law behavior crossing over from exponent -2/3 for small size to -3/2 for large size, with an exponential cutoff.
Double Scaling in the Relaxation Time in the β-Fermi-Pasta-Ulam-Tsingou Model
NASA Astrophysics Data System (ADS)
Lvov, Yuri V.; Onorato, Miguel
2018-04-01
We consider the original β-Fermi-Pasta-Ulam-Tsingou system; numerical simulations and theoretical arguments suggest that, for a finite number of masses, a statistical equilibrium state is reached independently of the initial energy of the system. Using ensemble averages over initial conditions characterized by different Fourier random phases, we numerically estimate the time scale of equipartition and we find that for very small nonlinearity it matches the prediction based on exact wave-wave resonant interaction theory. We derive a simple formula for the nonlinear frequency broadening and show that when the phenomenon of overlap of frequencies takes place, a different scaling for the thermalization time scale is observed. Our result supports the idea that the Chirikov overlap criterion identifies a transition region between two different relaxation time scalings.
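The relaxation experiment described above lends itself to a compact numerical sketch. Below is a minimal velocity-Verlet integrator for the β-FPUT chain with fixed ends; all parameter values (chain length, β, time step, initial mode) are illustrative, not those used in the paper:

```python
import numpy as np

def fput_accel(q, beta):
    """Accelerations for the beta-FPUT chain with fixed boundary masses."""
    d = np.diff(q)                      # bond extensions d[i] = q[i+1] - q[i]
    f = d + beta * d**3                 # harmonic + quartic spring force per bond
    a = np.zeros_like(q)
    a[1:-1] = f[1:] - f[:-1]            # net force on each interior mass
    return a                            # a[0] = a[-1] = 0 keeps the ends fixed

def step(q, p, beta, dt):
    """One velocity-Verlet step (symplectic, good long-time energy behavior)."""
    p_half = p + 0.5 * dt * fput_accel(q, beta)
    q_new = q + dt * p_half
    p_new = p_half + 0.5 * dt * fput_accel(q_new, beta)
    return q_new, p_new

def energy(q, p, beta):
    """Total energy: sum p^2/2 + d^2/2 + beta*d^4/4 over bonds."""
    d = np.diff(q)
    return 0.5 * np.sum(p**2) + np.sum(0.5 * d**2 + 0.25 * beta * d**4)

N, beta, dt = 32, 0.1, 0.05
x = np.arange(N + 1)
q = 0.5 * np.sin(np.pi * x / N)         # all energy in the lowest Fourier mode
p = np.zeros_like(q)
E0 = energy(q, p, beta)
for _ in range(2000):
    q, p = step(q, p, beta, dt)
drift = abs(energy(q, p, beta) - E0) / E0   # relative energy drift
```

Monitoring the energy per Fourier mode over much longer runs, rather than the total energy, is what reveals the equipartition time scale discussed in the abstract.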
The propagation of sound in tunnels
NASA Astrophysics Data System (ADS)
Li, Kai Ming; Iu, King Kwong
2002-11-01
The sound propagation in tunnels is addressed theoretically and experimentally. In many previous studies, the image source method is frequently used. However, these early theoretical models are somewhat inadequate because the effect of multiple reflections in long enclosures is often modeled by the incoherent summation of contributions from all image sources. Ignoring the phase effect, these numerical models are unlikely to be satisfactory for predicting the intricate interference patterns due to contributions from each image source. In the present paper, the interference effect is incorporated by summing the contributions from the image sources coherently. To develop a simple numerical model, tunnels are represented by long rectangular enclosures with either geometrically reflecting or impedance boundaries. Scale model experiments are conducted for the validation of the numerical model. In some of the scale model experiments, the enclosure walls are lined with a carpet for simulating the impedance boundary condition. Large-scale outdoor measurements have also been conducted in two tunnels designed originally for road traffic use. It has been shown that the proposed numerical model agrees reasonably well with experimental data. [Work supported by the Research Grants Council, The Industry Department, NAP Acoustics (Far East) Ltd., and The Hong Kong Polytechnic University.]
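The coherent-versus-incoherent distinction drawn above can be illustrated with a toy calculation: summing image sources for a point source between two rigid parallel walls, a 1D slice of the full rectangular-enclosure model. Geometry, frequency, and truncation order are illustrative assumptions:

```python
import numpy as np

c = 343.0                       # speed of sound, m/s
f = 500.0                       # frequency, Hz (illustrative)
k = 2 * np.pi * f / c           # wavenumber

def image_positions(src_y, width, n_orders):
    """y-coordinates of image sources for rigid walls at y=0 and y=width."""
    ys = []
    for n in range(-n_orders, n_orders + 1):
        ys.append(2 * n * width + src_y)    # even-reflection images
        ys.append(2 * n * width - src_y)    # odd-reflection images
    return np.array(ys)

def field(receiver, src_y=1.0, width=4.0, n_orders=50):
    """Squared pressure at the receiver: coherent vs incoherent image sums."""
    ys = image_positions(src_y, width, n_orders)
    r = np.sqrt(receiver[0]**2 + (receiver[1] - ys)**2)
    coherent = np.abs(np.sum(np.exp(1j * k * r) / r))**2   # keeps the phase
    incoherent = np.sum(1.0 / r**2)                        # energy-only sum
    return coherent, incoherent

coh, inc = field((10.0, 2.0))
```

The incoherent sum varies smoothly with receiver position, while the coherent sum exhibits the interference pattern the abstract refers to.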
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, D.-C.; Stojkovic, Dejan; Dutta, Sourish
2009-09-15
We examine a dark energy model where a scalar unparticle degree of freedom plays the role of quintessence. In particular, we study a model where the unparticle degree of freedom has a standard kinetic term and a simple mass potential, the evolution is slowly rolling and the field value is of the order of the unparticle energy scale (λ_u). We study how the evolution of w depends on the parameters B (a function of the unparticle scaling dimension d_u), the initial value of the field φ_i (or equivalently, λ_u) and the present matter density Ω_m0. We use observational data from type Ia supernovae, baryon acoustic oscillations and the cosmic microwave background to constrain the model parameters and find that these models are not ruled out by the observational data. From a theoretical point of view, the unparticle dark energy model is very attractive, since unparticles (being bound states of fundamental fermions) are protected from radiative corrections. Further, the coupling of unparticles to the standard model fields can be arbitrarily suppressed by raising the fundamental energy scale M_F, making the unparticle dark energy model free of most of the problems that plague conventional scalar field quintessence models.
Scaling exponent and dispersity of polymers in solution by diffusion NMR.
Williamson, Nathan H; Röding, Magnus; Miklavcic, Stanley J; Nydén, Magnus
2017-05-01
Molecular mass distribution measurements by pulsed gradient spin echo nuclear magnetic resonance (PGSE NMR) spectroscopy currently require prior knowledge of scaling parameters to convert from polymer self-diffusion coefficient to molecular mass. Reversing the problem, we utilize the scaling relation as prior knowledge to uncover the scaling exponent from within the PGSE data. Thus, the scaling exponent (a measure of polymer conformation and solvent quality) and the dispersity (M_w/M_n) are obtainable from one simple PGSE experiment. The method utilizes constraints and parametric distribution models in a two-step fitting routine involving first the mass-weighted signal and second the number-weighted signal. The method is developed using lognormal and gamma distribution models and tested on experimental PGSE attenuation of the terminal methylene signal and on the sum of all methylene signals of polyethylene glycol in D₂O. Scaling exponent and dispersity estimates agree with known values in the majority of instances, leading to the potential application of the method to polymers for which characterization is not possible with alternative techniques. Copyright © 2017 Elsevier Inc. All rights reserved.
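The scaling relation at the heart of this method, D = K·M^(−ν), and the dispersity of a lognormal mass distribution can be sketched as follows; K, ν, and the distribution parameters here are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Assumed scaling relation: self-diffusion coefficient D = K * M**(-nu)
nu = 0.55        # scaling exponent (good-solvent-like value, assumed)
K = 3e-9         # prefactor (assumed, arbitrary units)

def diffusion(M):
    return K * M**(-nu)

def mass_from_D(D):
    # invert the scaling relation: M = (K / D)**(1 / nu)
    return (K / D)**(1.0 / nu)

# Dispersity of a lognormal molecular-mass distribution:
# for lognormal masses, Mw/Mn = exp(sigma**2) exactly.
rng = np.random.default_rng(0)
sigma = 0.4
masses = rng.lognormal(mean=np.log(2e4), sigma=sigma, size=200_000)
Mn = masses.mean()                        # number-average mass
Mw = (masses**2).mean() / masses.mean()   # mass-average mass
dispersity = Mw / Mn                      # sample estimate of exp(sigma**2)
```

The two-step routine in the paper fits such parametric distributions to the mass-weighted and number-weighted signal attenuations; the sketch only shows the underlying relations.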
Houghton, Bruce F.; Swanson, Don; Rausch, J.; Carey, R.J.; Fagents, S.A.; Orr, Tim R.
2013-01-01
Estimating the mass, volume, and dispersal of the deposits of very small and/or extremely weak explosive eruptions is difficult, unless they can be sampled on eruption. During explosive eruptions of Halema‘uma‘u Crater (Kīlauea, Hawaii) in 2008, we constrained for the first time deposits with bulk volumes as small as 9–300 m³ (1 × 10⁴ to 8 × 10⁵ kg) and can demonstrate that they show simple exponential thinning with distance from the vent. There is no simple fit for such products within classifications such as the Volcanic Explosivity Index (VEI). The VEI is being increasingly used as the measure of magnitude of explosive eruptions, and as an input for both hazard modeling and forecasting of atmospheric dispersal of tephra. The 2008 deposits demonstrate a problem for the use of the VEI, as originally defined, which classifies small, yet ballistic-producing, explosive eruptions at Kīlauea and other basaltic volcanoes as nonexplosive. We suggest a simple change to extend the scale in a fashion inclusive of such very small deposits, and to make the VEI more consistent with other magnitude scales such as the Richter scale for earthquakes. Eruptions of this magnitude constitute a significant risk at Kīlauea and elsewhere because of their high frequency and the growing number of “volcano tourists” visiting basaltic volcanoes.
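For a radially symmetric deposit thinning exponentially with distance, T(r) = T₀·exp(−r/λ), the bulk volume integrates in closed form to V = 2π·T₀·λ². A quick numerical check of that identity (T₀ and λ are illustrative values, not the Kīlauea measurements):

```python
import numpy as np

T0 = 0.02     # deposit thickness at the vent, m (illustrative)
lam = 50.0    # e-folding thinning distance, m (illustrative)

# Closed form: V = integral of T0*exp(-r/lam) * 2*pi*r dr from 0 to infinity
V_exact = 2 * np.pi * T0 * lam**2

# Numerical check: trapezoidal integration over annuli out to 20 e-foldings
r = np.linspace(0.0, 20 * lam, 200_001)
f = T0 * np.exp(-r / lam) * 2 * np.pi * r        # thickness times annulus length
V_num = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))
```

In practice T₀ and λ are obtained by fitting a straight line to log-thickness versus distance from the vent, which is how "simple exponential thinning" is usually diagnosed.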
Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*
Castruccio, Stefano; McInerney, David J.; Stein, Michael L.; ...
2014-02-24
The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO₂ concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. In conclusion, it may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
Censored rainfall modelling for estimation of fine-scale extremes
NASA Astrophysics Data System (ADS)
Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro
2018-01-01
Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.
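The full Bartlett-Lewis model clusters pulses within storms; the sketch below strips it down to a single-layer Poisson rectangular-pulse process to show the mechanistic flavour. The arrival rate, duration, and intensity parameters are illustrative assumptions, not calibrated values:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_rainfall(t_end, rate, mean_dur, mean_int, dt=1.0):
    """Simplified rectangular-pulse rainfall: storms arrive as a Poisson
    process; each deposits one rectangular pulse of exponential duration
    and intensity. Returns rainfall depth per time step."""
    n_steps = int(t_end / dt)
    rain = np.zeros(n_steps)
    n_storms = rng.poisson(rate * t_end)            # Poisson storm count
    starts = rng.uniform(0.0, t_end, n_storms)      # storm origin times
    durs = rng.exponential(mean_dur, n_storms)      # pulse durations
    ints = rng.exponential(mean_int, n_storms)      # pulse intensities
    for s, d, x in zip(starts, durs, ints):
        i0 = int(s / dt)
        i1 = min(int((s + d) / dt) + 1, n_steps)
        rain[i0:i1] += x * dt                       # deposit the pulse
    return rain

# Expected mean depth per step is roughly rate * mean_dur * mean_int * dt
rain = simulate_rainfall(t_end=10000.0, rate=0.02, mean_dur=5.0, mean_int=1.0)
```

The censored-calibration idea in the paper amounts to fitting such a model's statistics only to the heavy portion of the observed record, above a chosen censor threshold.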
NASA Astrophysics Data System (ADS)
Stamps, S.; Bangerth, W.; Hager, B. H.
2014-12-01
The East African Rift System (EARS) is an active divergent plate boundary with slow, approximately E-W extension rates ranging from <1 to 6 mm/yr. Previous work using thin-sheet modeling indicates lithospheric buoyancy dominates the force balance driving large-scale Nubia-Somalia divergence; however, GPS observations within the Western Branch of the EARS show along-rift motions that contradict this simple model. Here, we test the role of mantle flow at the rift-scale using our new, regional 3D numerical model based on the open-source code ASPECT. We define a thermal lithosphere with thicknesses that are systematically changed for generic models or based on geophysical constraints in the Western Branch (e.g. melting depths, xenoliths, seismic tomography). Preliminary results suggest existing variations in lithospheric thicknesses along-rift in the Western Branch can drive upper mantle flow that is consistent with geodetic observations.
Global scale groundwater flow model
NASA Astrophysics Data System (ADS)
Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc
2013-04-01
As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet, the current generation of global scale hydrological models does not include a groundwater flow component that is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minutes resolution. Aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, in press). We force the groundwater model with the output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated calculated groundwater heads and depths with available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global scale groundwater model using existing information, and estimate water table depths within acceptable accuracy in many parts of the world.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleidon, Alex; Kravitz, Benjamin S.; Renner, Maik
2015-01-16
We derive analytic expressions of the transient response of the hydrological cycle to surface warming from an extremely simple energy balance model in which turbulent heat fluxes are constrained by the thermodynamic limit of maximum power. For a given magnitude of steady-state temperature change, this approach predicts the transient response as well as the steady-state change in surface energy partitioning and the hydrologic cycle. We show that the transient behavior of the simple model as well as the steady-state hydrological sensitivities to greenhouse warming and solar geoengineering are comparable to results from simulations using highly complex models. Many of the global-scale hydrological cycle changes can be understood from a surface energy balance perspective, and our thermodynamically-constrained approach provides a physically robust way of estimating global hydrological changes in response to altered radiative forcing.
Changing skewness: an early warning signal of regime shifts in ecosystems.
Guttal, Vishwesha; Jayaprakash, Ciriyam
2008-05-01
Empirical evidence for large-scale abrupt changes in ecosystems such as lakes and vegetation of semi-arid regions is growing. Such changes, called regime shifts, can lead to degradation of ecological services. We study simple ecological models that show a catastrophic transition as a control parameter is varied and propose a novel early warning signal that exploits two ubiquitous features of ecological systems: nonlinearity and large external fluctuations. Either reduced resilience or increased external fluctuations can tip ecosystems to an alternative stable state. It is shown that a change in the asymmetry of the distribution of time series data, quantified by changing skewness, is a model-independent and reliable early warning signal for both routes to regime shifts. Furthermore, using model simulations that mimic field measurements and a simple analysis of real data from abrupt climate change in the Sahara, we study the feasibility of skewness calculations using data available from routine monitoring.
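The proposed indicator amounts to tracking the third standardized moment of a sliding window of the time series. A minimal sketch on synthetic data whose fluctuations grow increasingly asymmetric (the generating process is illustrative, not one of the paper's ecological models):

```python
import numpy as np

def skewness(x):
    """Population skewness: third central moment over std cubed."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s = x.std()
    return ((x - m)**3).mean() / s**3

def rolling_skewness(series, window):
    """Skewness of each sliding window; a rising trend is the warning signal."""
    return np.array([skewness(series[i:i + window])
                     for i in range(len(series) - window + 1)])

# Toy time series: Gaussian noise plus a right-skewed component whose
# amplitude grows in time, mimicking an approach to a regime shift.
rng = np.random.default_rng(1)
n = 2000
series = rng.normal(size=n) + np.linspace(0.0, 1.5, n) * rng.exponential(size=n)
trend = rolling_skewness(series, window=400)
```

A sustained rise in `trend` before the transition is the kind of model-independent signal the abstract describes.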
Evolution of energy-containing turbulent eddies in the solar wind
NASA Technical Reports Server (NTRS)
Matthaeus, William H.; Oughton, Sean; Pontius, Duane H., Jr.; Zhou, YE
1994-01-01
Previous theoretical treatments of fluid-scale turbulence in the solar wind have concentrated on describing the state and dynamical evolution of fluctuations in the inertial range, which are characterized by power law energy spectra. In the present paper a model for the evolution of somewhat larger, more energetic magnetohydrodynamic (MHD) fluctuations is developed by analogy with classical hydrodynamic turbulence in the quasi-equilibrium range. The model is constructed by assembling and extending existing phenomenologies of homogeneous MHD turbulence, as well as simple two-length-scale models for transport of MHD turbulence in a weakly inhomogeneous medium. A set of equations is presented for the evolution of the turbulence, including the transport and nonlinear evolution of magnetic and kinetic energy, cross helicity, and their correlation scales. Two versions of the model are derived, depending on whether the fluctuations are distributed isotropically in three dimensions or restricted to the two-dimensional plane perpendicular to the mean magnetic field. This model includes a number of potentially important physical effects that have been neglected in previous discussions of transport of solar wind turbulence.
Scaling of chew cycle duration in primates.
Ross, Callum F; Reed, David A; Washington, Rhyan L; Eckhardt, Alison; Anapol, Fred; Shahnoor, Nazima
2009-01-01
The biomechanical determinants of the scaling of chew cycle duration are important components of models of primate feeding systems at all levels, from the neuromechanical to the ecological. Chew cycle durations were estimated in 35 species of primates and analyzed in conjunction with data on morphological variables of the feeding system estimating moment of inertia of the mandible and force production capacity of the chewing muscles. Data on scaling of primate chew cycle duration were compared with the predictions of simple pendulum and forced mass-spring system models of the feeding system. The gravity-driven pendulum model best predicts the observed cycle duration scaling but is rejected as biomechanically unrealistic. The forced mass-spring model predicts larger increases in chew cycle duration with size than observed, but provides reasonable predictions of cycle duration scaling. We hypothesize that intrinsic properties of the muscles predict spring-like behavior of the jaw elevator muscles during opening and fast close phases of the jaw cycle and that modulation of stiffness by the central nervous system leads to spring-like properties during the slow close/power stroke phase. Strepsirrhines show no predictable relationship between chew cycle duration and jaw length. Anthropoids have longer chew cycle durations than nonprimate mammals with similar mandible lengths, possibly due to their enlarged symphyses, which increase the moment of inertia of the mandible. Deviations from general scaling trends suggest that both scaling of the jaw muscles and the inertial properties of the mandible are important in determining the scaling of chew cycle duration in primates.
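The two competing scalings can be made concrete. For a gravity-driven pendulum, T = 2π√(L/g), so cycle duration scales as L^0.5; for a forced mass-spring system with mass ∝ L³ and stiffness ∝ L, T = 2π√(m/k) ∝ L, a steeper slope. A sketch with hypothetical jaw lengths (the stiffness scaling assumption is illustrative):

```python
import numpy as np

g = 9.81
L = np.array([0.02, 0.04, 0.08, 0.16])     # jaw lengths in m (hypothetical)

# Pendulum model: T = 2*pi*sqrt(L/g)  ->  log-log slope 0.5
T_pend = 2 * np.pi * np.sqrt(L / g)
slope_pend = np.polyfit(np.log(L), np.log(T_pend), 1)[0]

# Mass-spring model: m ~ L**3, k ~ L  ->  T = 2*pi*sqrt(m/k) ~ L, slope 1
m, k = L**3, L
T_spring = 2 * np.pi * np.sqrt(m / k)
slope_spring = np.polyfit(np.log(L), np.log(T_spring), 1)[0]
```

Comparing slopes fitted to observed cycle durations against these two idealized exponents is the kind of test the abstract describes.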
NASA Astrophysics Data System (ADS)
Kayler, Z. E.; Nitzsche, K. N.; Gessler, A.; Kaiser, M. L.; Hoffmann, C.; Premke, K.; Ellerbrock, R.
2016-12-01
Steep environmental gradients develop across the interface between terrestrial and aquatic domains that influence organic matter (OM) retention. In NE Germany, kettle holes are small water bodies found in high density across managed landscapes. Kettle hole water budgets are generally fed through precipitation and overland flow and are temporarily connected to groundwater resulting in distinct hydroperiods. We took advantage of the range of environmental conditions created by the fluctuating shoreline to investigate patterns of OM stability along transects spanning from hilltops to sediments within a single kettle hole. We physically and chemically separated OM fractions that are expected to be loosely bound, such as particulate organic matter, to those that are tightly bound, such as OM associated with mineral or metal surfaces. The study design allowed us to investigate stabilization processes at the aggregate, transect, and kettle hole catchment scale. At the aggregate scale, we analyzed soil characteristics (texture, pH, extractable Al, Fe, Ca) to contribute to our understanding of OM stabilization. At the transect scale, we compared isotopic trends in the different fractions against a simple Rayleigh distillation model to infer disruption of the transfer of material, for example erosion, by land management such as tillage or the addition of OM through fertilization. At the kettle hole catchment scale, we correlated our findings with plant productivity, landform properties, and soil wetness proxies. Aggregate scale patterns of OM ¹³C and ¹⁵N were fraction dependent; however, we observed a convergence in isotopic patterns with soil properties from OM of more stabilized fractions. At the transect scale, loosely bound fractions did not conform to the simple model, suggesting these fractions are more dynamic and influenced by land management. The stabilized fractions did follow the Rayleigh model, which implies that transfer processes play a larger role in these fractions. At the kettle hole catchment scale, we found that the terrestrial-aquatic transition zone and other areas with high soil moisture correlated with isotopic patterns of the OM fractions. Kettle hole sediment OM fraction patterns were consistently different despite receiving substantial material from the surrounding landscape.
MacGregor, Hayley; McKenzie, Andrew; Jacobs, Tanya; Ullauri, Angelica
2018-04-25
In 2011, a decision was made to scale up a pilot innovation involving 'adherence clubs' as a form of differentiated care for HIV positive people in the public sector antiretroviral therapy programme in the Western Cape Province of South Africa. In 2016 we were involved in the qualitative aspect of an evaluation of the adherence club model, the overall objective of which was to assess the health outcomes for patients accessing clubs through epidemiological analysis, and to conduct a health systems analysis to evaluate how the model of care performed at scale. In this paper we adopt a complex adaptive systems lens to analyse planned organisational change through intervention in a state health system. We explore the challenges associated with taking to scale a pilot that began as a relatively simple innovation by a non-governmental organisation. Our analysis reveals how a programme initially representing a simple, unitary system in terms of management and clinical governance had evolved into a complex, differentiated care system. An innovation that was assessed as an excellent idea and received political backing worked well whilst supported on a small scale. However, as scaling up progressed, challenges have emerged at the same time as support has waned. We identified a 'tipping point' at which the system was more likely to fail, as vulnerabilities magnified and the capacity for adaptation was exceeded. Yet the study also revealed the impressive capacity that a health system can have for catalysing novel approaches. We argue that innovation in large-scale, complex programmes in health systems is a continuous process that requires ongoing support and attention to new innovation as challenges emerge. Rapid scaling up is also likely to require recourse to further resources, and a culture of iterative learning to address emerging challenges and mitigate complex system errors. These are necessary steps to the future success of adherence clubs as a cornerstone of differentiated care. Further research is needed to assess the equity and quality outcomes of a differentiated care model and to ensure the inclusive distribution of the benefits to all categories of people living with HIV.
Use of satellite and modeled soil moisture data for predicting event soil loss at plot scale
NASA Astrophysics Data System (ADS)
Todisco, F.; Brocca, L.; Termite, L. F.; Wagner, W.
2015-09-01
The potential of coupling soil moisture and a Universal Soil Loss Equation-based (USLE-based) model for event soil loss estimation at plot scale is carefully investigated at the Masse area, in central Italy. The derived model, named Soil Moisture for Erosion (SM4E), is applied by considering the unavailability of in situ soil moisture measurements, by using the data predicted by a soil water balance model (SWBM) and derived from satellite sensors, i.e., the Advanced SCATterometer (ASCAT). The soil loss estimation accuracy is validated using in situ measurements in which event observations at plot scale are available for the period 2008-2013. The results showed that including soil moisture observations in the event rainfall-runoff erosivity factor of the USLE enhances the capability of the model to account for variations in event soil losses, the soil moisture being an effective alternative to the estimated runoff, in the prediction of the event soil loss at Masse. The agreement between observed and estimated soil losses (through SM4E) is fairly satisfactory with a determination coefficient (log-scale) equal to ~ 0.35 and a root mean square error (RMSE) of ~ 2.8 Mg ha⁻¹. These results are particularly significant for the operational estimation of soil losses. Indeed, currently, soil moisture is a relatively simple measurement at the field scale and remote sensing data are also widely available on a global scale. Through satellite data, there is the potential of applying the SM4E model for large-scale monitoring and quantification of the soil erosion process.
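The two goodness-of-fit measures quoted above, a log-scale determination coefficient and an RMSE in Mg ha⁻¹, can be computed as follows; the observed and estimated soil-loss values are made up for illustration:

```python
import numpy as np

def rmse(obs, est):
    """Root mean square error in the original units (e.g. Mg/ha)."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return float(np.sqrt(np.mean((obs - est)**2)))

def r2_log(obs, est):
    """Determination coefficient computed on log10-transformed values,
    as is common for soil-loss data spanning orders of magnitude."""
    lo, le = np.log10(obs), np.log10(est)
    ss_res = np.sum((lo - le)**2)
    ss_tot = np.sum((lo - lo.mean())**2)
    return float(1.0 - ss_res / ss_tot)

obs = np.array([0.5, 1.2, 3.4, 8.1, 2.2])   # hypothetical event soil losses
est = np.array([0.7, 1.0, 2.8, 9.5, 1.6])   # hypothetical model estimates
```

The log transform prevents the one or two largest events from dominating the determination coefficient, which is why such studies often report it on the log scale.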
Use of satellite and modelled soil moisture data for predicting event soil loss at plot scale
NASA Astrophysics Data System (ADS)
Todisco, F.; Brocca, L.; Termite, L. F.; Wagner, W.
2015-03-01
The potential of coupling soil moisture and a USLE-based model for event soil loss estimation at plot scale is carefully investigated at the Masse area, in Central Italy. The derived model, named Soil Moisture for Erosion (SM4E), is applied by considering the unavailability of in situ soil moisture measurements, by using the data predicted by a soil water balance model (SWBM) and derived from satellite sensors, i.e. the Advanced SCATterometer (ASCAT). The soil loss estimation accuracy is validated using in situ measurements in which event observations at plot scale are available for the period 2008-2013. The results showed that including soil moisture observations in the event rainfall-runoff erosivity factor of the RUSLE/USLE enhances the capability of the model to account for variations in event soil losses, the soil moisture being an effective alternative to the estimated runoff, in the prediction of the event soil loss at Masse. The agreement between observed and estimated soil losses (through SM4E) is fairly satisfactory with a determination coefficient (log-scale) equal to ~ 0.35 and a root-mean-square error (RMSE) of ~ 2.8 Mg ha⁻¹. These results are particularly significant for the operational estimation of soil losses. Indeed, currently, soil moisture is a relatively simple measurement at the field scale and remote sensing data are also widely available on a global scale. Through satellite data, there is the potential of applying the SM4E model for large-scale monitoring and quantification of the soil erosion process.
Walther, Andreas; Bjurhager, Ingela; Malho, Jani-Markus; Pere, Jaakko; Ruokolainen, Janne; Berglund, Lars A; Ikkala, Olli
2010-08-11
Although remarkable success has been achieved to mimic the mechanically excellent structure of nacre in laboratory-scale models, it remains difficult to foresee mainstream applications due to time-consuming sequential depositions or energy-intensive processes. Here, we introduce a surprisingly simple and rapid methodology for large-area, lightweight, and thick nacre-mimetic films and laminates with superior material properties. Nanoclay sheets with soft polymer coatings are used as ideal building blocks with intrinsic hard/soft character. They are forced to rapidly self-assemble into aligned nacre-mimetic films via paper-making, doctor-blading or simple painting, giving rise to strong and thick films with tensile modulus of 45 GPa and strength of 250 MPa, that is, partly exceeding nacre. The concepts are environmentally friendly, energy-efficient, and economic and are ready for scale-up via continuous roll-to-roll processes. Excellent gas barrier properties, optical translucency, and extraordinary shape-persistent fire-resistance are demonstrated. We foresee advanced large-scale biomimetic materials, relevant for lightweight sustainable construction and energy-efficient transportation.
On the nonlinearity of spatial scales in extreme weather attribution statements
NASA Astrophysics Data System (ADS)
Angélil, Oliver; Stone, Daíthí; Perkins-Kirkpatrick, Sarah; Alexander, Lisa V.; Wehner, Michael; Shiogama, Hideo; Wolski, Piotr; Ciavarella, Andrew; Christidis, Nikolaos
2018-04-01
In the context of ongoing climate change, extreme weather events are drawing increasing attention from the public and news media. A question often asked is how the likelihood of extremes might have been changed by anthropogenic greenhouse-gas emissions. Answers to the question are strongly influenced by the model used, and by the duration, spatial extent, and geographic location of the event; some of these factors are often overlooked. Using output from four global climate models, we provide attribution statements characterised by a change in probability of occurrence due to anthropogenic greenhouse-gas emissions, for rainfall and temperature extremes occurring at seven discretised spatial scales and three temporal scales. An understanding of the sensitivity of attribution statements to a range of spatial and temporal scales of extremes allows for the scaling of attribution statements, rendering them relevant to other extremes having similar but non-identical characteristics. This is a procedure simple enough to approximate timely estimates of the anthropogenic contribution to the event probability. Furthermore, since real extremes do not have well-defined physical borders, scaling can help quantify uncertainty around attribution results due to uncertainty around the event definition. Results suggest that the sensitivity of attribution statements to spatial scale is similar across models and that the sensitivity of attribution statements to the model used is often greater than the sensitivity to a doubling or halving of the spatial scale of the event. The use of a range of spatial scales allows us to identify a nonlinear relationship between the spatial scale of the event studied and the attribution statement.
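An attribution statement of this kind is typically summarised as a probability ratio PR = P₁/P₀: the chance of exceeding a threshold in the forced ensemble versus the counterfactual one. A toy version with Gaussian ensembles (the forced shift and the threshold are illustrative, not taken from the climate-model output discussed above):

```python
import numpy as np

def prob_ratio(actual, natural, threshold):
    """Probability ratio for exceeding a threshold: P(actual) / P(counterfactual)."""
    p1 = np.mean(np.asarray(actual) > threshold)
    p0 = np.mean(np.asarray(natural) > threshold)
    return p1 / p0

rng = np.random.default_rng(2)
natural = rng.normal(0.0, 1.0, 100_000)   # counterfactual climate (toy)
actual = rng.normal(0.5, 1.0, 100_000)    # forced climate, mean shifted (toy)
pr = prob_ratio(actual, natural, threshold=2.0)
```

Recomputing `pr` while the spatial averaging scale of the event definition is doubled or halved is, in spirit, the sensitivity analysis the abstract performs.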
On the nonlinearity of spatial scales in extreme weather attribution statements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angélil, Oliver; Stone, Daíthí; Perkins-Kirkpatrick, Sarah
In the context of continuing climate change, extreme weather events are drawing increasing attention from the public and news media. A question often asked is how the likelihood of extremes might have been changed by anthropogenic greenhouse-gas emissions. Answers to this question are strongly influenced by the model used and by the duration, spatial extent, and geographic location of the event, some of which are often overlooked. Using output from four global climate models, we provide attribution statements, characterised as a change in probability of occurrence due to anthropogenic greenhouse-gas emissions, for rainfall and temperature extremes occurring at seven discretised spatial scales and three temporal scales. An understanding of the sensitivity of attribution statements to a range of spatial and temporal scales of extremes allows attribution statements to be scaled, rendering them relevant to other extremes with similar but non-identical characteristics. This procedure is simple enough to yield timely approximate estimates of the anthropogenic contribution to event probability. Furthermore, since real extremes do not have well-defined physical borders, scaling can help quantify uncertainty around attribution results that arises from uncertainty in the event definition. Results suggest that the sensitivity of attribution statements to spatial scale is similar across models, and that the sensitivity to the model used is often greater than the sensitivity to a doubling or halving of the spatial scale of the event. The use of a range of spatial scales allows us to identify a nonlinear relationship between the spatial scale of the event studied and the attribution statement.
On the nonlinearity of spatial scales in extreme weather attribution statements
Angélil, Oliver; Stone, Daíthí; Perkins-Kirkpatrick, Sarah; ...
2017-06-17
In the context of continuing climate change, extreme weather events are drawing increasing attention from the public and news media. A question often asked is how the likelihood of extremes might have been changed by anthropogenic greenhouse-gas emissions. Answers to this question are strongly influenced by the model used and by the duration, spatial extent, and geographic location of the event, some of which are often overlooked. Using output from four global climate models, we provide attribution statements, characterised as a change in probability of occurrence due to anthropogenic greenhouse-gas emissions, for rainfall and temperature extremes occurring at seven discretised spatial scales and three temporal scales. An understanding of the sensitivity of attribution statements to a range of spatial and temporal scales of extremes allows attribution statements to be scaled, rendering them relevant to other extremes with similar but non-identical characteristics. This procedure is simple enough to yield timely approximate estimates of the anthropogenic contribution to event probability. Furthermore, since real extremes do not have well-defined physical borders, scaling can help quantify uncertainty around attribution results that arises from uncertainty in the event definition. Results suggest that the sensitivity of attribution statements to spatial scale is similar across models, and that the sensitivity to the model used is often greater than the sensitivity to a doubling or halving of the spatial scale of the event. The use of a range of spatial scales allows us to identify a nonlinear relationship between the spatial scale of the event studied and the attribution statement.
Application of statistical mechanical methods to the modeling of social networks
NASA Astrophysics Data System (ADS)
Strathman, Anthony Robert
With the recent availability of large-scale social data sets, social networks have become open to quantitative analysis via the methods of statistical physics. We examine the statistical properties of a real large-scale social network, generated from cellular phone call-trace logs. We find this network, like many other social networks, to be assortative (r = 0.31) and clustered (i.e., strongly transitive, C = 0.21). We measure fluctuation scaling to identify the presence of internal structure in the network and find that structural inhomogeneity effectively disappears at the scale of a few hundred nodes, though there is no sharp cutoff. We introduce an agent-based model of social behavior, designed to model the formation and dissolution of social ties. The model is a modified Metropolis algorithm containing agents operating under the basic sociological constraints of reciprocity, communication need, and transitivity, and introduces the concept of a social temperature. We go on to show that this simple model reproduces the global statistical network features (including assortativity, connected fraction, mean degree, clustering, and mean shortest path length) of the real network data and undergoes two phase transitions as a function of this social temperature: one from a "gas" to a "liquid" state, and a second from a liquid to a glassy state.
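A minimal sketch of a Metropolis-style tie-formation model with a "social temperature", in the spirit of the abstract above; the energy function (rewarding closed triads, charging a per-tie maintenance cost) and all parameter values are hypothetical stand-ins, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(A, tie_cost=2.0):
    """Crude stand-in energy: reward closed triads (transitivity),
    charge a maintenance cost per tie (communication cost)."""
    A2 = A @ A
    closed = np.sum(A * A2)                     # closed 2-paths (triangles)
    open_ = np.sum(A2) - np.trace(A2) - closed  # open 2-paths
    return open_ - closed + tie_cost * A.sum()

def metropolis_step(A, T):
    """Toggle one random tie; accept with the Metropolis rule at
    'social temperature' T."""
    i, j = rng.choice(len(A), size=2, replace=False)
    B = A.copy()
    B[i, j] = B[j, i] = 1 - B[i, j]
    dE = energy(B) - energy(A)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        return B
    return A

n = 20
A = np.triu((rng.random((n, n)) < 0.1).astype(int), 1)
A = A + A.T                                     # symmetric, zero diagonal
for _ in range(500):
    A = metropolis_step(A, T=1.0)
print("mean degree:", A.sum() / n)
```

Sweeping T in a loop over such runs is how one would look for the gas/liquid/glass transitions the abstract reports.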
Microscale nutrient patches produced by zooplankton
Lehman, John T.; Scavia, Donald
1982-01-01
Both track autoradiography and grain-density autoradiography show that individual zooplankton create miniature patches of dissolved nutrients and that algae exploit those regions to absorb phosphate. The patches are short-lived and can be dispersed artificially by small-scale turbulence. Our data support a simple model of encounters between algae and nutrient plumes produced by swimming zooplankton. PMID:16593218
Who Has to Pay for Their Education? Evidence from European Tertiary Education
ERIC Educational Resources Information Center
Lim, Gieyoung; Kim, Chong-Uk
2013-01-01
In this article, we investigate a positive tertiary education externality in 18 European countries. Using a simple Cobb-Douglas-type production function with constant returns to scale, we find that there are positive spillover effects from tertiary education in European countries. According to our model prediction, on average, 72,000 new employed…
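The constant-returns-to-scale property of the Cobb-Douglas production function used in this study can be checked directly; the values below are hypothetical:

```python
def cobb_douglas(A, K, L, alpha):
    """Y = A * K**alpha * L**(1 - alpha): Cobb-Douglas production
    with constant returns to scale (exponents sum to one)."""
    return A * K**alpha * L**(1.0 - alpha)

# hypothetical technology level, capital, and labor
Y1 = cobb_douglas(1.5, K=100.0, L=400.0, alpha=0.3)
Y2 = cobb_douglas(1.5, K=200.0, L=800.0, alpha=0.3)
print(Y2 / Y1)  # doubling both inputs doubles output
```

An externality such as the education spillover the authors estimate would appear as an extra term scaling A, leaving the returns-to-scale property of K and L intact.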
A Pictorial Version of the RIASEC Scales of the Personal Globe Inventory
ERIC Educational Resources Information Center
Enke, Serena
2009-01-01
Holland's theory of six work personalities has become a staple of vocational psychology, providing a robust and simple model for understanding the structure of vocational interests. Though Holland's types provide a common vocabulary for vocational psychologists working with a variety of populations, until this point there has not been a measure of…
Scale-dependency of effective hydraulic conductivity on fire-affected hillslopes
NASA Astrophysics Data System (ADS)
Langhans, Christoph; Lane, Patrick N. J.; Nyman, Petter; Noske, Philip J.; Cawson, Jane G.; Oono, Akiko; Sheridan, Gary J.
2016-07-01
Effective hydraulic conductivity (Ke) for Hortonian overland flow modeling has been defined as a function of rainfall intensity and run-on infiltration, assuming a distribution of saturated hydraulic conductivities (Ks). However, the surface boundary condition during infiltration and its interactions with the distribution of Ks are not well represented in models. As a result, the mean value of the Ks distribution (KS¯), which is the central parameter for Ke, varies between scales. Here we quantify this discrepancy with a large infiltration data set comprising four different methods and scales from fire-affected hillslopes in SE Australia, using a relatively simple yet widely used conceptual model of Ke. Ponded disk (0.002 m2) and ring infiltrometers (0.07 m2) were used at the small scales, and rainfall simulations (3 m2) and small catchments (ca. 3000 m2) at the larger scales. We compared KS¯ between methods measured at the same time and place. Disk and ring infiltrometer measurements had on average 4.8 times higher values of KS¯ than rainfall simulations and catchment-scale estimates. Furthermore, the distribution of Ks was not clearly log-normal and scale-independent, as assumed in the conceptual model. In our interpretation, water repellency and preferential flow paths increase the variance of the measured distribution of Ks and bias ponding toward areas of very low Ks during rainfall simulations and small-catchment runoff events, while areas with high preferential flow capacity remain water-supply-limited more than the conceptual model of Ke predicts. The study highlights problems in the current theory of scaling runoff generation.
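A sketch of one common conceptual model of effective conductivity, in which Ke is the areal mean of min(Ks, P) over a lognormal Ks distribution under rainfall intensity P; the form is a generic stand-in (not necessarily the exact model used in the study) and all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def effective_conductivity(P, ks_mean, ks_cv, n=100_000):
    """Monte Carlo estimate of Ke = E[min(Ks, P)] for lognormal Ks:
    where Ks < P the surface ponds and infiltrates at capacity Ks;
    where Ks >= P infiltration is supply-limited at rate P."""
    sigma2 = np.log(1.0 + ks_cv**2)
    mu = np.log(ks_mean) - sigma2 / 2.0   # preserve the mean ks_mean
    ks = rng.lognormal(mu, np.sqrt(sigma2), n)
    return np.minimum(ks, P).mean()

# Ke rises with rainfall intensity and saturates toward the mean of Ks
ke = [effective_conductivity(P, ks_mean=40.0, ks_cv=1.5)
      for P in (10.0, 30.0, 100.0)]
print([round(k, 1) for k in ke])
```

The scale dependence the study reports corresponds to the measured Ks distribution (mean and shape) differing between methods, which shifts every value of this curve.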
Evidence of complex contagion of information in social media: An experiment using Twitter bots.
Mønsted, Bjarke; Sapieżyński, Piotr; Ferrara, Emilio; Lehmann, Sune
2017-01-01
It has recently become possible to study the dynamics of information diffusion in techno-social systems at scale, due to the emergence of online platforms, such as Twitter, with millions of users. One question that systematically recurs is whether information spreads according to simple or complex dynamics: does each exposure to a piece of information have an independent probability of a user adopting it (simple contagion), or does this probability depend instead on the number of sources of exposure, increasing above some threshold (complex contagion)? Most studies to date are observational and, therefore, unable to disentangle the effects of confounding factors such as social reinforcement, homophily, limited attention, or network community structure. Here we describe a novel controlled experiment that we performed on Twitter using 'social bots' deployed to carry out coordinated attempts at spreading information. We propose two Bayesian statistical models describing simple and complex contagion dynamics, and test the competing hypotheses. We provide experimental evidence that the complex contagion model describes the observed information diffusion behavior more accurately than simple contagion. Future applications of our results include more effective defenses against malicious propaganda campaigns on social media, improved marketing and advertisement strategies, and design of effective network intervention techniques.
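The simple/complex contagion distinction can be stated as two adoption-probability curves in the number of exposures; the functional forms and parameter values below are illustrative, not the paper's fitted Bayesian models:

```python
def p_adopt_simple(k, p=0.1):
    """Simple contagion: each of k exposures independently converts
    with probability p, so P(adopt) = 1 - (1 - p)^k."""
    return 1.0 - (1.0 - p) ** k

def p_adopt_complex(k, threshold=3, p_hi=0.5, p_lo=0.02):
    """Complex contagion: adoption probability jumps once exposures
    from distinct sources reach a threshold (hypothetical values)."""
    return p_hi if k >= threshold else p_lo

for k in range(1, 6):
    print(k, round(p_adopt_simple(k), 3), p_adopt_complex(k))
```

The experiment's bot-driven exposures make k controllable, which is what lets the two curve shapes be distinguished despite confounders.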
NASA Astrophysics Data System (ADS)
Holway, Kevin; Thaxton, Christopher S.; Calantoni, Joseph
2012-11-01
Morphodynamic models of coastal evolution require relatively simple parameterizations of sediment transport for application over larger scales. Calantoni and Thaxton (2008) [6] presented a transport parameterization for bimodal distributions of coarse quartz grains derived from detailed boundary layer simulations for sheet flow and near sheet flow conditions. The simulation results, valid over a range of wave forcing conditions and large- to small-grain diameter ratios, were successfully parameterized with a simple power law that allows for the prediction of the transport rates of each size fraction. Here, we have applied the simple power law to a two-dimensional cellular automaton to simulate sheet flow transport. Model results are validated with experiments performed in the small oscillating flow tunnel (S-OFT) at the Naval Research Laboratory at Stennis Space Center, MS, in which sheet flow transport was generated with a bed composed of a bimodal distribution of non-cohesive grains. The work presented suggests that, under the conditions specified, algorithms that incorporate the power law may correctly reproduce laboratory bed surface measurements of bimodal sheet flow transport while inherently incorporating vertical mixing by size.
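A sketch of a per-size-fraction power-law transport rule of the general kind described above, as might drive each cell update in such a cellular automaton; the excess-Shields functional form and the coefficients are hypothetical, not those of Calantoni and Thaxton (2008):

```python
def fraction_transport(phi, shields, shields_crit=0.05, a=11.0, b=1.65):
    """Hypothetical power-law transport rate for one size fraction:
    q_i = a * phi_i * (theta - theta_c)**b above threshold, else 0,
    where phi_i is the fraction's bed concentration and theta the
    Shields number. Coefficients are illustrative only."""
    excess = shields - shields_crit
    return a * phi * excess**b if excess > 0 else 0.0

# bimodal bed: 70% coarse (lower Shields number), 30% fine (higher)
print(fraction_transport(0.7, 0.3), fraction_transport(0.3, 0.5))
```

In a cellular automaton, each cell would move sediment to its neighbor at a rate like this, fraction by fraction, so vertical mixing by size emerges from the update rule.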
Ultrafast studies of shock induced chemistry-scaling down the size by turning up the heat
NASA Astrophysics Data System (ADS)
McGrane, Shawn
2015-06-01
We will discuss recent progress in measuring time dependent shock induced chemistry on picosecond time scales. Data on the shock induced chemistry of liquids observed through picosecond interferometric and spectroscopic measurements will be reconciled with shock induced chemistry observed on orders of magnitude larger time and length scales from plate impact experiments reported in the literature. While some materials exhibit chemistry consistent with simple thermal models, other materials, like nitromethane, seem to have more complex behavior. More detailed measurements of chemistry and temperature across a broad range of shock conditions, and therefore time and length scales, will be needed to achieve a real understanding of shock induced chemistry, and we will discuss efforts and opportunities in this direction.
Relationship between the Arctic oscillation and surface air temperature in multi-decadal time-scale
NASA Astrophysics Data System (ADS)
Tanaka, Hiroshi L.; Tamura, Mina
2016-09-01
In this study, a simple energy balance model (EBM) was integrated in time, considering a hypothetical long-term variability in ice-albedo feedback mimicking the observed multi-decadal temperature variability. A natural variability was superimposed on a linear warming trend due to the increasing radiative forcing of CO2. The result demonstrates that the superposition of the natural variability and the background linear trend can offset with each other to show the warming hiatus for some period. It is also stressed that the rapid warming during 1970-2000 can be explained by the superposition of the natural variability and the background linear trend at least within the simple model. The key process of the fluctuating planetary albedo in multi-decadal time scale is investigated using the JRA-55 reanalysis data. It is found that the planetary albedo increased for 1958-1970, decreased for 1970-2000, and increased for 2000-2012, as expected by the simple EBM experiments. The multi-decadal variability in the planetary albedo is compared with the time series of the AO mode and Barents Sea mode of surface air temperature. It is shown that the recent AO negative pattern showing warm Arctic and cold mid-latitudes is in good agreement with planetary albedo change indicating negative anomaly in high latitudes and positive anomaly in mid-latitudes. Moreover, the Barents Sea mode with the warm Barents Sea and cold mid-latitudes shows long-term variability similar to planetary albedo change. Although further studies are needed, the natural variabilities of both the AO mode and Barents Sea mode indicate some possible link to the planetary albedo as suggested by the simple EBM to cause the warming hiatus in recent years.
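The superposition argument, a linear warming trend plus a multi-decadal natural oscillation that alternately boosts and offsets it, can be sketched as follows; all parameter values are hypothetical:

```python
import numpy as np

def toy_ebm(years, trend=0.015, amp=0.15, period=60.0, phase_year=1970):
    """Toy temperature series: linear anthropogenic trend plus a
    multi-decadal oscillation standing in for albedo-driven natural
    variability. All parameter values are hypothetical."""
    t = years - years[0]
    t0 = phase_year - years[0]
    return trend * t - amp * np.cos(2.0 * np.pi * (t - t0) / period)

years = np.arange(1958, 2013)
T = toy_ebm(years)

def rate(y0, y1):
    """Least-squares warming rate (degC/yr) over [y0, y1)."""
    m = (years >= y0) & (years < y1)
    return np.polyfit(years[m], T[m], 1)[0]

# rapid warming 1970-2000, then a 'hiatus'-like slowdown
print(round(rate(1970, 2000), 4), round(rate(2000, 2013), 4))
```

With the oscillation phased to rise over 1970-2000 and fall afterwards, the fitted rates reproduce the qualitative rapid-warming/hiatus alternation the EBM experiments describe.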
Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun
2016-01-01
We previously presented a group theoretical model that describes psychiatric patient states or clinical data in a graded vector-like format based on modulo groups. Meanwhile, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5, the current version), is frequently used for diagnosis in daily psychiatric treatments and biological research. The diagnostic criteria of DSM-5 contain simple binomial items relating to the presence or absence of specific symptoms. In spite of its simple form, the practical structure of the DSM-5 system is not sufficiently systematized for data to be treated in a more rationally sophisticated way. Viewing disease states in terms of symmetry, in the manner of abstract algebra, is considered important for the future systematization of clinical medicine. We provide a simple idea for the practical treatment of the psychiatric diagnosis/score of DSM-5 using depressive symptoms, in line with our previously proposed method. An expression is given employing modulo-2 and modulo-7 arithmetic (in particular, additive group theory) for Criterion A of a 'major depressive episode', which must be met for the diagnosis of 'major depressive disorder' in DSM-5. For this purpose, the novel concept of an imaginary value 0, which can be recognized as an explicit 0 or an implicit 0, was introduced to compose the model. The zeros allow the incorporation or deletion of an item between any other symptoms if they are ordered appropriately. Optionally, a vector-like expression can be used to rate or select only specific items when modifying the criterion/scale. Simple examples are illustrated concretely. Further development of the proposed method for the criteria/scale of a disease is expected to raise the level of formalism of clinical medicine to that of other fields of natural science.
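Criterion A itself is easy to state over modulo-2 (presence/absence) codes: at least five of the nine symptom items, including depressed mood or anhedonia. A minimal sketch (the item ordering is illustrative):

```python
def criterion_a(symptoms):
    """DSM-5 Criterion A for a major depressive episode over binary
    (modulo-2) symptom codes: at least five of the nine items present,
    including depressed mood (item 0) or anhedonia (item 1).
    The item ordering here is illustrative."""
    assert len(symptoms) == 9 and all(s in (0, 1) for s in symptoms)
    return sum(symptoms) >= 5 and bool(symptoms[0] or symptoms[1])

print(criterion_a([1, 0, 1, 1, 1, 1, 0, 0, 0]))  # True: five items incl. item 0
print(criterion_a([0, 0, 1, 1, 1, 1, 1, 0, 0]))  # False: neither core item
```

The paper's contribution is a richer algebraic encoding of such vectors (with explicit/implicit zeros) rather than this bare decision rule.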
NASA Astrophysics Data System (ADS)
Sawangwit, U.; Shanks, T.; Abdalla, F. B.; Cannon, R. D.; Croom, S. M.; Edge, A. C.; Ross, Nicholas P.; Wake, D. A.
2011-10-01
We present the angular correlation function measured from photometric samples comprising 1,562,800 luminous red galaxies (LRGs). Three LRG samples were extracted from the Sloan Digital Sky Survey (SDSS) imaging data, based on colour-cut selections at redshifts z ≈ 0.35, 0.55 and 0.7, as calibrated by the spectroscopic surveys SDSS-LRG, the 2dF-SDSS LRG and QSO (quasi-stellar object) survey (2SLAQ), and the AAΩ-LRG survey. The galaxy samples cover ≈7600 deg2 of sky, probing a total cosmic volume of ≈5.5 h-3 Gpc3. The small- and intermediate-scale correlation functions generally show significant deviations from a single power-law fit, with a well-detected break at ≈1 h-1 Mpc, consistent with the transition scale between the one- and two-halo terms in halo occupation models. For galaxy separations of 1-20 h-1 Mpc and at fixed luminosity, we see virtually no evolution of the clustering with redshift, and the data are consistent with a simple high-peaks biasing model in which the comoving LRG space density is constant with z. At fixed z, the LRG clustering amplitude increases with luminosity in accordance with the simple high-peaks model, with a typical LRG dark matter halo mass of 1013-1014 h-1 M⊙. For r < 1 h-1 Mpc, the evolution is slightly faster and the clustering decreases towards high redshift, consistent with a virialized clustering model. However, assuming the halo occupation distribution (HOD) and Λ cold dark matter (ΛCDM) halo merger frameworks, ~2-3 per cent/Gyr of the LRGs are required to merge in order to explain the small-scale clustering evolution, consistent with previous results. At large scales, our result shows good agreement with the SDSS-LRG result of Eisenstein et al., but we find an apparent excess clustering signal beyond the baryon acoustic oscillation (BAO) scale. Angular power spectrum analyses of similar LRG samples also detect a similar apparent large-scale clustering excess, but more data are required to check for this feature in independent galaxy data sets. Certainly, if the ΛCDM model were correct then we would have to conclude that this excess was caused by systematics at the level of Δw ≈ 0.001-0.0015 in the photometric AAΩ-LRG sample.
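A sketch of the basic measurement behind such analyses, fitting a single power law w(θ) = A θ^(1-γ) to an angular correlation function in log-log space; the synthetic data here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical w(theta) measurements following A * theta**(1 - gamma)
theta = np.logspace(-2, 0, 20)                 # angular separation, degrees
gamma_true, A_true = 1.8, 0.05
w = A_true * theta**(1 - gamma_true) * rng.lognormal(0.0, 0.05, theta.size)

# least-squares power-law fit in log-log space
slope, logA = np.polyfit(np.log10(theta), np.log10(w), 1)
print(f"gamma = {1 - slope:.2f}, A = {10**logA:.3f}")
```

The break the paper detects at ≈1 h⁻¹ Mpc shows up as a systematic departure of the measured w(θ) from a single fitted line of this kind.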
Titius-Bode laws in the solar system. 2: Build your own law from disk models
NASA Astrophysics Data System (ADS)
Dubrulle, B.; Graner, F.
1994-02-01
By simply respecting both scale and rotational invariance, it is easy to construct an endless collection of theoretical models predicting a Titius-Bode law, irrespective of their physical content. Given the numerous ways to obtain the law and its intrinsic arbitrariness, it is not a useful constraint on theories of solar system formation. To illustrate the simple elegance of scale-invariant methods, we explicitly cook up one of the simplest examples, an infinitely thin cold gaseous disk rotating around a central object. In that academic case, the Titius-Bode law holds during the linear stage of the gravitational instability. The time scale of the instability is of the order of a self-gravitating time scale, (G rhod)^(-1/2), where rhod is the disk density. This model links the separation between different density maxima with the ratio MD/MC of the masses of the disk and the central object; for instance, MD/MC of the order of 0.18 roughly leads to the observed separation between the planets. We discuss the boundary conditions and the limit of the Wentzel-Kramers-Brillouin (WKB) approximation.
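A geometric-progression ("Titius-Bode-like") law a_n = a_0 K^n can indeed be fit to almost any roughly exponentially spaced sequence; a sketch using the planets' semi-major axes (with Ceres included, as such fits traditionally do):

```python
import numpy as np

def titius_bode_fit(a):
    """Fit a geometric-progression law a_n = a_0 * K**n to orbital
    distances by linear regression on log(a)."""
    n = np.arange(len(a))
    slope, intercept = np.polyfit(n, np.log(a), 1)
    return np.exp(intercept), np.exp(slope)   # a_0, K

# semi-major axes in AU: Mercury..Neptune, with Ceres at 2.77 AU
a = [0.39, 0.72, 1.00, 1.52, 2.77, 5.20, 9.58, 19.2, 30.1]
a0, K = titius_bode_fit(a)
print(f"a0 = {a0:.2f} AU, ratio K = {K:.2f}")
```

The ease of getting a decent K from such a fit is precisely the paper's point: the law's existence constrains formation theories very little.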
Simple Scaling of Multi-Stream Jet Plumes for Aeroacoustic Modeling
NASA Technical Reports Server (NTRS)
Bridges, James
2016-01-01
When creating simplified, semi-empirical models for the noise of simple single-stream jets near surfaces it has proven useful to be able to generalize the geometry of the jet plume. Having a model that collapses the mean and turbulent velocity fields for a range of flows allows the problem to become one of relating the normalized jet field and the surface. However, most jet flows of practical interest involve jets of two or more coannular flows for which standard models for the plume geometry do not exist. The present paper describes one attempt to relate the mean and turbulent velocity fields of multi-stream jets to that of an equivalent single-stream jet. The normalization of single-stream jets is briefly reviewed, from the functional form of the flow model to the results of the modeling. Next, PIV data from a number of multi-stream jets is analyzed in a similar fashion. The results of several single-stream approximations of the multi-stream jet plume are demonstrated, with a best approximation determined and the shortcomings of the model highlighted.
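The single-stream normalization referred to above rests on self-similarity: scaling radius by the local half-width and velocity by the centerline value collapses profiles at different stations onto one curve. A sketch using a Gaussian mean-velocity shape, a common single-stream model (station values hypothetical):

```python
import numpy as np

def gaussian_profile(r, r_half):
    """Self-similar round-jet mean-velocity shape: u/Uc falls to 1/2
    at the local half-width r_half."""
    return np.exp(-np.log(2.0) * (r / r_half) ** 2)

# two hypothetical downstream stations with different centerline
# velocities and half-widths
eta = np.linspace(0.0, 2.0, 50)     # similarity variable r / r_half
Uc1, rh1 = 0.9, 0.5
Uc2, rh2 = 0.5, 1.2
u1 = Uc1 * gaussian_profile(eta * rh1, rh1)
u2 = Uc2 * gaussian_profile(eta * rh2, rh2)

# normalizing by centerline velocity collapses both onto one curve
c1, c2 = u1 / Uc1, u2 / Uc2
print(np.allclose(c1, c2))
```

The paper's question is whether an "equivalent single-stream" choice of Uc and r_half can achieve a comparable collapse for multi-stream plumes.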
Simple Scaling of Multi-Stream Jet Plumes for Aeroacoustic Modeling
NASA Technical Reports Server (NTRS)
Bridges, James
2015-01-01
When creating simplified, semi-empirical models for the noise of simple single-stream jets near surfaces it has proven useful to be able to generalize the geometry of the jet plume. Having a model that collapses the mean and turbulent velocity fields for a range of flows allows the problem to become one of relating the normalized jet field and the surface. However, most jet flows of practical interest involve jets of two or more co-annular flows for which standard models for the plume geometry do not exist. The present paper describes one attempt to relate the mean and turbulent velocity fields of multi-stream jets to that of an equivalent single-stream jet. The normalization of single-stream jets is briefly reviewed, from the functional form of the flow model to the results of the modeling. Next, PIV (Particle Image Velocimetry) data from a number of multi-stream jets is analyzed in a similar fashion. The results of several single-stream approximations of the multi-stream jet plume are demonstrated, with a 'best' approximation determined and the shortcomings of the model highlighted.
NASA Astrophysics Data System (ADS)
Minunno, Francesco; Peltoniemi, Mikko; Launiainen, Samuli; Mäkelä, Annikki
2014-05-01
Biogeochemical models quantify the material and energy fluxes exchanged between biosphere, atmosphere and soil; however, there is still considerable uncertainty underpinning model structure and parametrization. The increasing availability of data from multiple sources provides useful information for model calibration and validation at different spatial and temporal scales. We calibrated a simplified ecosystem process model, PRELES, to data from multiple sites. In this work we had the following objective: to compare a multi-site calibration with site-specific calibrations, in order to test whether PRELES is a model of general applicability and how well a single parameterization can predict ecosystem fluxes. Model calibration and evaluation were carried out by means of Bayesian methods; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. Evapotranspiration (ET) and gross primary production (GPP) measurements collected at nine sites in Finland and Sweden were used in the study; half of the dataset was used for model calibration and half for the comparative analyses. Ten BCs were performed; the model was independently calibrated for each of the nine sites (site-specific calibrations), and a multi-site calibration was achieved using the data from all the sites in one BC. Then nine BMCs were carried out, one for each site, using output from the multi-site and site-specific versions of PRELES. Similar estimates were obtained for the parameters to which model outputs are most sensitive. Not surprisingly, the joint posterior distribution achieved through the multi-site calibration was characterized by lower uncertainty, because more data were involved in the calibration process. No significant differences were encountered between the predictions of the multi-site and site-specific versions of PRELES, and after BMC we concluded that the model can be reliably used at regional scale to simulate the carbon and water fluxes of boreal forests. Despite being a simple model, PRELES provided good estimates of GPP and ET; only for one site did the multi-site version of PRELES underestimate water fluxes. Our study implies a convergence of GPP and water processes in the boreal zone to the extent that their plausible prediction is possible with a simple model using a global parameterization.
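A minimal sketch of a multi-site Bayesian calibration of the kind described: one shared parameter, a Gaussian likelihood pooled over synthetic data from three "sites", and a random-walk Metropolis sampler (the linear model and all values are hypothetical stand-ins for PRELES and its flux data):

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic "multi-site" data: three sites share one underlying slope
# linking a driver (e.g. light) to GPP; values are hypothetical
beta_true, sigma = 1.8, 0.3
drivers = [rng.uniform(5.0, 25.0, 40) for _ in range(3)]
gpp = [beta_true * x + rng.normal(0.0, sigma, x.size) for x in drivers]

def log_post(beta):
    """Log-posterior with a flat prior: Gaussian likelihood pooled
    over all sites, i.e. a multi-site calibration."""
    sse = sum(np.sum((g - beta * x) ** 2) for x, g in zip(drivers, gpp))
    return -sse / (2.0 * sigma**2)

# random-walk Metropolis sampler
beta, chain = 1.0, []
for _ in range(4000):
    proposal = beta + rng.normal(0.0, 0.05)
    if np.log(rng.random()) < log_post(proposal) - log_post(beta):
        beta = proposal
    chain.append(beta)

print("posterior mean:", round(float(np.mean(chain[1000:])), 2))
```

A site-specific calibration would run the same sampler on one site's data at a time; the pooled posterior is narrower because it conditions on more data, which is the effect the study reports.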
Patterns and multi-scale drivers of phytoplankton species richness in temperate peri-urban lakes.
Catherine, Arnaud; Selma, Maloufi; Mouillot, David; Troussellier, Marc; Bernard, Cécile
2016-07-15
Local species richness (SR) is a key characteristic affecting ecosystem functioning. Yet the mechanisms regulating phytoplankton diversity in freshwater ecosystems are not fully understood, especially in peri-urban environments where anthropogenic pressures strongly impact the quality of aquatic ecosystems. To address this issue, we sampled the phytoplankton communities of 50 lakes in the Paris area (France) characterized by a large gradient of physico-chemical and catchment-scale characteristics. We used large phytoplankton datasets to describe phytoplankton diversity patterns and applied a machine-learning algorithm to test the degree to which species richness patterns are potentially controlled by environmental factors. Selected environmental factors were studied at two scales: the lake scale (e.g. nutrient concentrations, water temperature, lake depth) and the catchment scale (e.g. catchment, landscape and climate variables). We then used a variance partitioning approach to evaluate the interaction between lake-scale and catchment-scale variables in explaining local species richness. Finally, we analysed the residuals of the predictive models to identify potential avenues for improving phytoplankton species richness models. Lake-scale and catchment-scale drivers provided similar predictive accuracy for local species richness (R² = 0.458 and 0.424, respectively). Both models suggested that seasonal temperature variations and nutrient supply strongly modulate local species richness. Integrating lake-scale and catchment-scale predictors in a single model did not increase predictive accuracy, suggesting that the catchment-scale model probably explains observed species richness variations through the impact of catchment-scale variables on in-lake water quality characteristics. Models based on catchment characteristics, which include simple and easy-to-obtain variables, provide a meaningful way of predicting phytoplankton species richness in temperate lakes. This approach may prove useful and cost-effective for the management and conservation of aquatic ecosystems. Copyright © 2016 Elsevier B.V. All rights reserved.
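The variance-partitioning step can be sketched with ordinary least squares: fit the lake-scale predictors, the catchment-scale predictors, and both together, and read the shared component off the R² values. The synthetic data below are hypothetical; the overlap arises because both predictor sets track a common driver:

```python
import numpy as np

rng = np.random.default_rng(5)

def r2(y, X):
    """R^2 of an OLS fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# hypothetical lake-scale (Lk) and catchment-scale (C) predictors
# sharing a common driver, plus a richness response built from both
common = rng.normal(size=200)
Lk = common + rng.normal(scale=0.5, size=200)
C = common + rng.normal(scale=0.5, size=200)
y = Lk + C + rng.normal(scale=0.5, size=200)

r2_L, r2_C = r2(y, Lk), r2(y, C)
r2_LC = r2(y, np.column_stack([Lk, C]))
shared = r2_L + r2_C - r2_LC      # variance explained jointly by both scales
print(round(r2_LC, 2), round(shared, 2))
```

A large shared component with little gain from the combined model is exactly the pattern the study interprets as catchment variables acting through in-lake water quality.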
NASA Astrophysics Data System (ADS)
Mohan, Nisha
Compliant foams are usually characterized by a wide range of desirable mechanical properties. These properties include viscoelasticity at different temperatures, energy absorption, recoverability under cyclic loading, impact resistance, and thermal, electrical, acoustic and radiation resistance. Some foams contain nano-sized features and are used in small-scale devices. This implies that the characteristic dimensions of foams span multiple length scales, rendering the modeling of their mechanical properties difficult. Continuum mechanics-based models capture some salient experimental features, like the linear elastic regime followed by a nonlinear plateau-stress regime, but they lack mesostructural physical detail. This makes them incapable of accurately predicting local peaks in stress and strain distributions, which significantly affect the deformation paths. Atomistic methods are capable of capturing the physical origins of deformation at smaller scales, but suffer from impractical computational intensity. Capturing deformation at the so-called meso-scale, which describes the phenomenon at a continuum level but retains some physical insight, requires developing new theoretical approaches. A fundamental question that motivates the modeling of foams is 'how can the intrinsic material response be extracted from simple mechanical test data, such as the stress vs. strain response?' A 3D model was developed to simulate the mechanical response of foam-type materials. The novelty of this model includes unique features such as the hardening-softening-hardening material response, strain-rate dependence, and plastically compressible solids with plastic non-normality. Suggestive links from atomistic simulations of foams were borrowed to formulate a physically informed hardening material input function. Motivated by a model that qualitatively captured the response of foam-type vertically aligned carbon nanotube (VACNT) pillars under uniaxial compression [2011, "Analysis of Uniaxial Compression of Vertically Aligned Carbon Nanotubes," J. Mech. Phys. Solids, 59, pp. 2227-2237; Erratum 60, 1753-1756 (2012)], the property-space exploration was advanced to three types of simple mechanical tests: (1) uniaxial compression, (2) uniaxial tension, and (3) nanoindentation with a conical and a flat-punch tip. The simulations attempt to explain some of the salient features in experimental data, such as (1) the initial linear elastic response and (2) one or more nonlinear instabilities, yielding, and hardening. The model-inherent relationships between the material properties and the overall stress-strain behavior were validated against the available experimental data. The material properties include the gradient in stiffness along the height, plastic and elastic compressibility, and hardening. Each of these tests was evaluated in terms of its efficiency in extracting material properties. The uniaxial simulation results proved to be a combination of structural and material influences. Of all the deformation paths, flat-punch indentation proved to be superior, since it is the most sensitive in capturing the material properties.
Accelerating universe with time variation of G and Λ
NASA Astrophysics Data System (ADS)
Darabi, F.
2012-03-01
We study a gravitational model in which scale transformations play the key role in obtaining dynamical G and Λ. We take a non-scale-invariant gravitational action with a cosmological constant and a gravitational coupling constant. Then, by a scale transformation through a dilaton field, we obtain a new action containing cosmological and gravitational coupling terms which are dynamically dependent on the dilaton field with a Higgs-type potential. The vacuum expectation value of this dilaton field, through spontaneous symmetry breaking on the basis of the anthropic principle, determines the time variations of G and Λ. The relevance of these time variations to the current acceleration of the universe, the coincidence problem, Mach's cosmological coincidence, and those problems of standard cosmology addressed by inflationary models is discussed. The current acceleration of the universe is shown to be a result of a phase transition from the radiation- toward the matter-dominated era. No real coincidence problem between matter and vacuum energy densities exists in this model; this apparent coincidence, together with Mach's cosmological coincidence, is shown to be a simple consequence of a new kind of scale-factor dependence of the energy-momentum density, ρ ~ a^(-4). This model also provides the possibility of a super-fast expansion of the scale factor in the very early universe by introducing exotic-type matter such as cosmic strings.
Prognostic accuracy of five simple scales in childhood bacterial meningitis.
Pelkonen, Tuula; Roine, Irmeli; Monteiro, Lurdes; Cruzeiro, Manuel Leite; Pitkäranta, Anne; Kataja, Matti; Peltola, Heikki
2012-08-01
In childhood acute bacterial meningitis, the level of consciousness, measured with the Glasgow coma scale (GCS) or the Blantyre coma scale (BCS), is the most important predictor of outcome. The Herson-Todd scale (HTS) was developed for Haemophilus influenzae meningitis. Our objective was to identify prognostic factors, to form a simple scale, and to compare the predictive accuracy of these scales. Seven hundred and twenty-three children with bacterial meningitis in Luanda were scored by GCS, BCS, and HTS. The simple Luanda scale (SLS), based on our entire database, comprised domestic electricity, days of illness, convulsions, consciousness, and dyspnoea at presentation. The Bayesian Luanda scale (BLS) added blood glucose concentration. The accuracy of the 5 scales was determined for 491 children without an underlying condition, against the outcomes of death, severe neurological sequelae or death, or a poor outcome (severe neurological sequelae, death, or deafness) at hospital discharge. The highest accuracy was achieved with the BLS, whose area under the curve (AUC) was 0.83 for death, 0.84 for severe neurological sequelae or death, and 0.82 for poor outcome. Overall, the AUCs were ≥0.79 for the SLS, ≥0.76 for the GCS, ≥0.74 for the BCS, and ≥0.68 for the HTS. Adding laboratory parameters to a simple scoring system such as the SLS improves the prognostic accuracy only slightly in bacterial meningitis.
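The scale comparison above rests on the area under the ROC curve (AUC). As a minimal illustration of how such an AUC is computed for an integer prognostic score, here is a sketch using the Mann-Whitney formulation; the scores and outcomes below are invented for illustration, not the Luanda cohort data.

```python
# Sketch: AUC of a simple prognostic scale via the Mann-Whitney statistic.
# Illustrative data only, not from the study.

def auc(scores, outcomes):
    """Probability that a randomly chosen case with the outcome (1)
    scores higher than a randomly chosen case without it (0);
    ties count as half."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical scale scores (higher = worse) and outcomes (1 = poor outcome)
scores = [5, 3, 4, 1, 2, 5, 0, 2, 4, 1]
outcomes = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0]
print(auc(scores, outcomes))  # -> 0.875
```

A perfectly discriminating scale gives AUC = 1.0 and a non-informative one 0.5, which is the sense in which the BLS (AUC ≥ 0.82) outperforms the HTS (AUC ≥ 0.68).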
Nonlinear evolution of f(R) cosmologies. II. Power spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oyaizu, Hiroaki; Hu, Wayne; Department of Astronomy and Astrophysics, University of Chicago, Chicago, Illinois 60637
2008-12-15
We carry out a suite of cosmological simulations of modified-action f(R) models where cosmic acceleration arises from an alteration of gravity instead of dark energy. These models introduce an extra scalar degree of freedom which enhances the force of gravity below the inverse mass, or Compton, scale of the scalar. The simulations exhibit the so-called chameleon mechanism, necessary for satisfying local constraints on gravity, whereby this scale depends on environment, in particular the depth of the local gravitational potential. We find that the chameleon mechanism can substantially suppress the enhancement of the power spectrum in the nonlinear regime if the background field value is comparable to or smaller than the depth of the gravitational potentials of typical structures. Nonetheless, power spectrum enhancements at intermediate scales remain at a measurable level even for models whose expansion history is indistinguishable from a cosmological constant, cold dark matter model. Simple scaling relations that take the linear power spectrum into a nonlinear spectrum fail to capture the modifications of f(R) due to the change in collapsed structures, the chameleon mechanism, and the time evolution of the modifications.
Methods of testing parameterizations: Vertical ocean mixing
NASA Technical Reports Server (NTRS)
Tziperman, Eli
1992-01-01
The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for vertical mixing in the ocean occurs at scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In the oceanic general circulation models typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly resolve the small-scale mixing processes and must therefore find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes and is, in fact, one of the least known and least understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section.
We then discuss the role of the vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.
A two-scale model for correlation between B cell VDJ usage in zebrafish
NASA Astrophysics Data System (ADS)
Pan, Keyao; Deem, Michael
2011-03-01
The zebrafish (Danio rerio) is one of the model animals for study of immunology. The dynamics of the adaptive immune system in zebrafish is similar to that in higher animals. In this work, we built a two-scale model to simulate the dynamics of B cells in primary and secondary immune reactions in zebrafish and to explain the reported correlation between VDJ usage of B cell repertoires in distinct zebrafish. The first scale of the model consists of a generalized NK model to simulate the B cell maturation process in the 10-day primary immune response. The second scale uses a delay ordinary differential equation system to model the immune responses in the 6-month lifespan of zebrafish. The generalized NK model shows that mature B cells specific to one antigen mostly possess a single VDJ recombination. The probability that mature B cells in two zebrafish have the same VDJ recombination increases with the B cell population size or the B cell selection intensity and decreases with the B cell hypermutation rate. The ODE model shows a distribution of correlation in the VDJ usage of the B cell repertoires in two six-month-old zebrafish that is highly similar to that from experiment. This work presents a simple theory to explain the experimentally observed correlation in VDJ usage of distinct zebrafish B cell repertoires after an immune response.
Realpe, Alba; Adams, Ann; Wall, Peter; Griffin, Damian; Donovan, Jenny L
2016-08-01
How a randomized controlled trial (RCT) is explained to patients is a key determinant of recruitment to that trial. This study developed and implemented a simple six-step model to fully inform patients and to support them in deciding whether to take part. Ninety-two consultations with 60 new patients were recorded and analyzed during a pilot RCT comparing surgical and nonsurgical interventions for hip impingement. Recordings were analyzed using techniques of thematic analysis and focused conversation analysis. Early findings supported the development of a simple six-step model to provide a framework for good recruitment practice. The model steps are as follows: (1) explain the condition, (2) reassure patients about receiving treatment, (3) establish uncertainty, (4) explain the study purpose, (5) give a balanced view of treatments, and (6) explain study procedures. Two elements also run throughout the consultation: (1) responding to patients' concerns and (2) showing confidence. The pilot study was successful, with 70% (n = 60) of patients approached across nine centers agreeing to take part in the RCT, so that the full-scale trial was funded. The six-step model provides a promising framework for successful recruitment to RCTs. Further testing of the model is now required. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Tarvainen, Lasse; Räntfors, Mats; Wallin, Göran
2015-11-01
Previous leaf-scale studies of carbon assimilation describe short-term resource-use efficiency (RUE) trade-offs, where high use efficiency of one resource requires low RUE of another. However, varying resource availabilities may cause long-term RUE trade-offs to differ from the short-term patterns. This may have important implications for understanding canopy-scale resource use and allocation. We used continuous gas exchange measurements collected at five levels within a Norway spruce, Picea abies (L.) Karst., canopy over 3 years to assess seasonal differences in the interactions between shoot-scale resource availability (light, water and nitrogen), net photosynthesis (A_n) and the use efficiencies of light (LUE), water (WUE) and nitrogen (NUE) for carbon assimilation. The continuous data set was used to develop and evaluate multiple regression models for predicting monthly shoot-scale A_n. These models showed that shoot-scale A_n was strongly dependent on light availability and was generally well described by simple one- or two-parameter models. WUE peaked in spring, NUE in summer and LUE in autumn. However, the relative importance of LUE for carbon assimilation increased with canopy depth at all times. Our results suggest that accounting for seasonal and within-canopy trade-offs may be important for RUE-based modelling of canopy carbon uptake. © 2015 John Wiley & Sons Ltd.
Perel, Pablo; Edwards, Phil; Shakur, Haleema; Roberts, Ian
2008-11-06
Traumatic brain injury (TBI) is an important cause of acquired disability. In evaluating the effectiveness of clinical interventions for TBI, it is important to measure disability accurately. The Glasgow Outcome Scale (GOS) is the most widely used outcome measure in randomised controlled trials (RCTs) in TBI patients. However, the GOS is generally collected at 6 months after discharge, by which time loss to follow-up may have occurred. The objectives of this study were to evaluate the association between a simple disability scale at hospital discharge, the Oxford Handicap Scale (OHS), and the GOS at 6 months among TBI patients, and the predictive validity of the former for the latter. The study was a secondary analysis of a randomised clinical trial among TBI patients (MRC CRASH Trial). A Spearman correlation was estimated to evaluate the association between the OHS and GOS. The validity of different dichotomies of the OHS for predicting GOS at 6 months was assessed by calculating sensitivity, specificity and the C statistic. Uni- and multivariate logistic regression models were fitted including OHS as an explanatory variable. For each model we analysed its discrimination and calibration. We found that the OHS is highly correlated with GOS at 6 months (Spearman correlation 0.75), with evidence of a linear relationship between the two scales. The OHS dichotomy that separates patients with severe dependency or death showed the greatest discrimination (C statistic: 0.843). Among survivors at hospital discharge, the OHS showed very good discrimination (C statistic 0.78) and excellent calibration when used to predict GOS outcome at 6 months. We have shown that the OHS, a simple disability scale available at hospital discharge, can predict disability at 6 months accurately, according to the GOS.
OHS could be used to improve the design and analysis of clinical trials in TBI patients and may also provide a valuable clinical tool for physicians to improve communication with patients and relatives when assessing a patient's prognosis at hospital discharge.
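The dichotomy analysis described above (choosing an OHS cutoff and checking its sensitivity and specificity against the 6-month outcome) can be sketched in a few lines. The scores, outcomes and cutoff below are invented for illustration, not CRASH Trial data.

```python
# Sketch: sensitivity and specificity of a dichotomised discharge scale
# for predicting a poor 6-month outcome. Illustrative data only.

def dichotomy_stats(scale, outcome, cut):
    """Classify score >= cut as 'predicted poor outcome' and compare
    with the observed outcome (1 = poor, 0 = good)."""
    tp = sum(s >= cut and y for s, y in zip(scale, outcome))
    fn = sum(s < cut and y for s, y in zip(scale, outcome))
    tn = sum(s < cut and not y for s, y in zip(scale, outcome))
    fp = sum(s >= cut and not y for s, y in zip(scale, outcome))
    return tp / (tp + fn), tn / (tn + fp)

ohs = [0, 1, 2, 3, 4, 5, 5, 2, 1, 4]      # hypothetical discharge scores
poor6m = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0]   # hypothetical 6-month outcomes
sens, spec = dichotomy_stats(ohs, poor6m, cut=3)
print(sens, spec)  # -> 0.8 0.8
```

Scanning `cut` over all possible thresholds and plotting sensitivity against 1 - specificity traces out the ROC curve whose area is the C statistic reported in the abstract.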
General bounds in Hybrid Natural Inflation
NASA Astrophysics Data System (ADS)
Germán, Gabriel; Herrera-Aguilar, Alfredo; Hidalgo, Juan Carlos; Sussman, Roberto A.; Tapia, José
2017-12-01
Recently we have studied in great detail a model of Hybrid Natural Inflation (HNI) by constructing two simple effective field theories. These two versions of the model allow inflationary energy scales as small as the electroweak scale in one of them, or as large as the Grand Unification scale in the other, therefore covering the whole range of possible energy scales. In any case the inflationary sector of the model is of the form V(φ) = V_0 (1 + a cos(φ/f)), where 0 ≤ a < 1, and the end of inflation is triggered by an independent waterfall field. One interesting characteristic of this model is that the slow-roll parameter ε(φ) is a non-monotonic function of φ, presenting a maximum close to the inflection point of the potential. Because the scalar spectrum P_s(k) of density fluctuations, when written in terms of the potential, is inversely proportional to ε(φ), we find that P_s(k) presents a minimum at φ_min. The origin of the HNI potential can be traced to a symmetry-breaking phenomenon occurring at some energy scale f, which gives rise to a (massless) Goldstone boson. Non-perturbative physics at some temperature T
An Introduction to Magnetospheric Physics by Means of Simple Models
NASA Technical Reports Server (NTRS)
Stern, D. P.
1981-01-01
The large-scale structure and behavior of the Earth's magnetosphere is discussed. The model is suitable for inclusion in courses on space physics, plasmas, astrophysics or the Earth's environment, as well as for self-study. Nine quantitative problems, dealing with properties of linear superpositions of a dipole and a constant field, are presented. Topics covered include: open and closed models of the magnetosphere; field line motion; the role of magnetic merging (reconnection); magnetospheric convection; and the origin of the magnetopause, polar cusps, and high-latitude lobes.
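A classic exercise with the dipole-plus-constant-field superposition is locating the neutral point where the two fields cancel: on the equatorial axis the dipole magnitude falls off as B_0 (R_E/r)^3, so cancellation occurs at r/R_E = (B_0/B_ext)^(1/3). A minimal numerical sketch, with illustrative field values not taken from the text:

```python
# Sketch: neutral-point distance for a dipole superposed on a uniform field.
# On the equatorial axis the dipole magnitude is B0 * (RE / r)**3; a neutral
# point forms where this equals the uniform external field B_ext.

B0 = 31000.0   # equatorial surface field of the dipole, nT (roughly Earth's)
B_ext = 31.0   # hypothetical uniform external field, nT

# cancellation: B0 * (RE/r)**3 = B_ext  ->  r/RE = (B0 / B_ext)**(1/3)
r_neutral = (B0 / B_ext) ** (1.0 / 3.0)
print(round(r_neutral, 1))  # -> 10.0 (distance in Earth radii)
```

The cube-root dependence is why even a weak external field places the neutral point at only a few tens of Earth radii, a scale comparable to the observed magnetopause distance.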
Roles of dark energy perturbations in dynamical dark energy models: can we ignore them?
Park, Chan-Gyung; Hwang, Jai-chan; Lee, Jae-heon; Noh, Hyerim
2009-10-09
We show the importance of properly including the perturbations of the dark energy component in dynamical dark energy models, based on a scalar field and on modified gravity theories, in order to meet present and future observational precision. Based on a simple scaling scalar-field dark energy model, we show that observationally distinguishable, substantial differences appear when the dark energy perturbation is ignored. Ignoring it renders the perturbed system of equations inconsistent, and the resulting deviations in (gauge-invariant) power spectra depend on the gauge choice.
Roll plane analysis of on-aircraft antennas
NASA Technical Reports Server (NTRS)
Burnside, W. D.; Marhefka, R. J.; Byu, C. L.
1974-01-01
Roll plane radiation patterns of on-aircraft antennas are analyzed using high-frequency solutions. Aircraft-antenna pattern performance in which the aircraft is modelled in its most basic form is presented. The fuselage is assumed to be a perfectly conducting elliptic cylinder with the antennas mounted near the top or bottom. The wings are simulated by flat plates with arbitrarily many sides, and the engines by circular cylinders. The patterns in each case are verified by measured results taken on simple models as well as scale models of actual aircraft.
Ida, Masato; Taniguchi, Nobuyuki
2003-09-01
This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of the Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed but also only the applied filtering process can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question whether a numerically stable subgrid-scale model can be ideally accurate.
Assessment of snow-dominated water resources: (Ir-)relevant scales for observation and modelling
NASA Astrophysics Data System (ADS)
Schaefli, Bettina; Ceperley, Natalie; Michelon, Anthony; Larsen, Joshua; Beria, Harsh
2017-04-01
High Alpine catchments play an essential role for many world regions since they 1) provide water resources to low-lying and often relatively dry regions, 2) are important for hydropower production as a result of their high hydraulic heads, 3) offer relatively undisturbed habitat for fauna and flora and 4) provide a source of cold water often late into the summer season (due to snowmelt), which is essential for many downstream river ecosystems. However, the water balance of such high Alpine hydrological systems is often difficult to estimate accurately, in part because of the seasonal to interannual accumulation of precipitation in the form of snow and ice and because of relatively low but highly seasonal evapotranspiration rates. These processes are strongly driven by the topography and related vegetation patterns, by air temperature gradients, solar radiation and wind patterns. Based on selected examples, we will discuss how the spatial scale of these patterns dictates at which scales we can make reliable water balance assessments. Overall, this contribution will provide an overview of some of the key open questions in terms of observing and modelling the dominant hydrological processes in Alpine areas at the right scale. A particular focus will be on the observation and modelling of snow accumulation and melt processes, discussing in particular the usefulness of simple models versus fully physical models at different spatial scales and the role of observed data.
The structure of protoplanetary discs around evolving young stars
NASA Astrophysics Data System (ADS)
Bitsch, Bertram; Johansen, Anders; Lambrechts, Michiel; Morbidelli, Alessandro
2015-03-01
The formation of planets with gaseous envelopes takes place in protoplanetary accretion discs on time scales of several million years. Small dust particles stick to each other to form pebbles, pebbles concentrate in the turbulent flow to form planetesimals and planetary embryos, which grow to planets and undergo substantial radial migration. All these processes are influenced by the underlying structure of the protoplanetary disc, specifically the profiles of temperature, gas scale height, and density. The commonly used disc structure of the minimum mass solar nebula (MMSN) is a simple power law in all these quantities. However, protoplanetary disc models with both viscous and stellar heating show several bumps and dips in temperature, scale height, and density caused by transitions in opacity, which are missing in the MMSN model. These play an important role in the formation of planets, since they can act as sweet spots for forming planetesimals via the streaming instability and affect the direction and magnitude of type-I migration. We present 2D simulations of accretion discs that feature radiative cooling and viscous and stellar heating, and are linked to the observed evolutionary stages of protoplanetary discs and their host stars. These models allow us to identify preferred planetesimal and planet formation regions in the protoplanetary disc as a function of the disc's metallicity, accretion rate, and lifetime. We derive simple fitting formulae that capture all the structural characteristics of protoplanetary discs during their evolution over several Myr. These fits are straightforward to apply when modelling any growth stage of planets for which detailed knowledge of the underlying disc structure is required. Appendix A is available in electronic form at http://www.aanda.org
Rakowski, Andrzej Z; Nakamura, Toshio; Pazdur, Anna
2008-10-01
Radiocarbon concentration in the atmosphere is significantly lower in areas where man-made emissions of carbon dioxide occur. This phenomenon, known as the Suess effect, is caused by the contamination of clean air with non-radioactive carbon from fossil fuel combustion. The effect is observed more strongly in industrial and densely populated urban areas. Carbon isotope concentrations measured in a study area can be compared with those from areas of clean air in order to estimate the amount of carbon dioxide emitted by fossil fuel combustion, using a simple mathematical model. The results of this study suggest that annual tree rings, which record the secular variations of the 14C concentration of atmospheric CO2, can be useful and efficient for environmental monitoring and for modeling the carbon distribution on a local scale.
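The simple mathematical model referred to above is, in essence, a two-component mixing balance: fossil carbon carries no 14C (Δ14C = -1000 permil), so the local depletion relative to clean-air background yields the fossil-derived CO2 fraction. A sketch of that balance, with illustrative numbers rather than the paper's data:

```python
# Sketch: two-component mixing model for the Suess effect.
# Mass balance of CO2 and 14C, with fossil carbon assumed 14C-free
# (Delta14C = -1000 permil). Illustrative values only.

def fossil_co2(c_meas, d14c_meas, d14c_bg):
    """Fossil-derived CO2 (same units as c_meas) inferred from the
    depletion of Delta14C (permil) relative to clean-air background."""
    return c_meas * (d14c_bg - d14c_meas) / (d14c_bg + 1000.0)

# hypothetical urban measurement vs. clean-air reference
print(round(fossil_co2(c_meas=400.0, d14c_meas=30.0, d14c_bg=50.0), 2))  # -> 7.62
```

The formula follows from C_meas = C_bg + C_foss together with the 14C balance C_meas(Δ_meas + 1000) = C_bg(Δ_bg + 1000), since the fossil term contributes zero 14C.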
Cheng, Ryan R; Hawk, Alexander T; Makarov, Dmitrii E
2013-02-21
Recent experiments showed that the reconfiguration dynamics of unfolded proteins are often adequately described by simple polymer models. In particular, the Rouse model with internal friction (RIF) captures internal friction effects as observed in single-molecule fluorescence correlation spectroscopy (FCS) studies of a number of proteins. Here we use RIF, and its non-free draining analog, Zimm model with internal friction, to explore the effect of internal friction on the rate with which intramolecular contacts can be formed within the unfolded chain. Unlike the reconfiguration times inferred from FCS experiments, which depend linearly on the solvent viscosity, the first passage times to form intramolecular contacts are shown to display a more complex viscosity dependence. We further describe scaling relationships obeyed by contact formation times in the limits of high and low internal friction. Our findings provide experimentally testable predictions that can serve as a framework for the analysis of future studies of contact formation in proteins.
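The RIF picture referred to above adds a viscosity-independent internal-friction time uniformly to the relaxation time of every Rouse mode, which is what decouples reconfiguration times from solvent viscosity at high internal friction. A minimal sketch of that mode spectrum (parameter values are illustrative; this is not the authors' code):

```python
# Sketch: Rouse-with-internal-friction (RIF) mode relaxation times.
# Mode p of a plain Rouse chain relaxes on tau_rouse / p**2; internal
# friction adds a solvent-viscosity-independent time tau_internal to
# every mode. Illustrative parameters only.

def rif_mode_times(tau_rouse, tau_internal, n_modes):
    return [tau_rouse / p**2 + tau_internal for p in range(1, n_modes + 1)]

times = rif_mode_times(tau_rouse=100.0, tau_internal=5.0, n_modes=3)
print(times)  # slowest (p = 1) mode dominates chain reconfiguration
```

Since tau_rouse scales linearly with solvent viscosity while tau_internal does not, sweeping the viscosity in this sketch reproduces the linear-with-offset behavior seen in the FCS-inferred reconfiguration times, whereas contact-formation first passage times probe the modes in a more complicated combination.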
Pearse, Aaron T.; Kaminski, Richard M.; Reinecke, Kenneth J.; Dinsmore, Stephen J.
2012-01-01
Landscape features influence distribution of waterbirds throughout their annual cycle. A conceptual model, the wetland habitat complex, may be useful in conservation of wetland habitats for dabbling ducks (Anatini). The foundation of this conceptual model is that ducks seek complexes of wetlands containing diverse resources to meet dynamic physiological needs. We included flooded croplands, wetlands and ponds, public-land waterfowl sanctuary, and diversity of habitats as key components of wetland habitat complexes and compared their relative influence at two spatial scales (i.e., local, 0.25-km radius; landscape, 4-km) on dabbling ducks wintering in western Mississippi, USA during winters 2002–2004. Distribution of mallard (Anas platyrhynchos) groups was positively associated with flooded cropland at local and landscape scales. Models representing flooded croplands at the landscape scale best explained occurrence of other dabbling ducks. Habitat complexity measured at both scales best explained group size of other dabbling ducks. Flooded croplands likely provided food that had decreased in availability due to conversion of wetlands to agriculture. Wetland complexes at landscape scales were more attractive to wintering ducks than single or structurally simple wetlands. Conservation of wetland complexes at large spatial scales (≥5,000 ha) on public and private lands will require coordination among multiple stakeholders.
Economic decision making and the application of nonparametric prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2008-01-01
Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright ?? 2008 Society of Petroleum Engineers.
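The core idea above, predicting recoverable volumes at untested sites with a nonparametric local regression and then using the predictions to order drilling, can be sketched with a simple kernel estimator. The estimator, data and bandwidth below are invented for illustration; the paper's local-regression models and economic coupling are more elaborate.

```python
# Sketch: nonparametric local regression to predict volume at untested
# sites from nearby drilled sites, then rank prospects by prediction.
# Nadaraya-Watson estimator with a Gaussian kernel; invented data.

import math

def local_estimate(x0, sites, h):
    """Kernel-weighted average of volumes from (location, volume) pairs."""
    w = [math.exp(-((x - x0) / h) ** 2 / 2.0) for x, _ in sites]
    return sum(wi * v for wi, (_, v) in zip(w, sites)) / sum(w)

drilled = [(0.0, 10.0), (1.0, 14.0), (2.0, 9.0), (3.0, 4.0)]  # hypothetical
prospects = [0.5, 1.5, 2.5]
ranked = sorted(prospects, key=lambda x: -local_estimate(x, drilled, h=0.5))
print(ranked)  # -> [0.5, 1.5, 2.5]: drill the highest predicted volume first
```

Drilling prospects in this predicted order, rather than randomly, is the strategy to which the abstract attributes the 15-20% gain in recovered gas volume.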
Acoustic Characteristics of a Model Isolated Tiltrotor in DNW
NASA Technical Reports Server (NTRS)
Booth, Earl R., Jr.; McCluer, Megan; Tadghighi, Hormoz
1999-01-01
An aeroacoustic wind tunnel test was conducted using a scaled isolated tiltrotor model. Acoustic data were acquired using an in-flow microphone wing traversed beneath the model to map the directivity of the near-field acoustic radiation of the rotor for a parametric variation of rotor angle of attack, tunnel speed, and rotor thrust. Acoustic metric data were examined to show trends of impulsive noise for the parametric variations. Maximum BVISPL noise levels were found to increase with α for constant μ and C_T, although the maximum BVI levels occurred at much higher α than for a typical helicopter. BVISPL levels were found to increase with μ for constant α and C_T. BVISPL was found to decrease with increasing C_T for constant α and μ, although BVISPL increased with thrust for a constant wake geometry. Metric data were also scaled for M_up to evaluate how well simple power-law scaling could be used to correct metric data for M_up effects.
Consistency among distance measurements: transparency, BAO scale and accelerated expansion
NASA Astrophysics Data System (ADS)
Avgoustidis, Anastasios; Verde, Licia; Jimenez, Raul
2009-06-01
We explore consistency among different distance measures, including Type Ia supernova data, measurements of the Hubble parameter, and determination of the baryon acoustic oscillation (BAO) scale. We present new constraints on cosmic transparency combining H(z) data with the latest Type Ia supernova compilation. This combination, in the context of a flat ΛCDM model, improves current constraints by nearly an order of magnitude, although the constraints presented here are parametric rather than non-parametric. We re-examine the recently reported tension between the BAO scale and supernova data in light of possible deviations from transparency, concluding that the source of the discrepancy is most likely to be found among systematic effects in the modelling of the low-redshift data, or a simple ~2σ statistical fluke, rather than in exotic physics. Finally, we attempt to draw model-independent conclusions about the recent accelerated expansion, determining the acceleration redshift to be z_acc = 0.35 (+0.20/-0.13) (1σ).
Particle dynamics in a viscously decaying cat's eye: The effect of finite Schmidt numbers
NASA Astrophysics Data System (ADS)
Newton, P. K.; Meiburg, Eckart
1991-05-01
The dynamics and mixing of passive marker particles for the model problem of a decaying cat's eye flow is studied. The flow field corresponds to Stuart's one-parameter family of solutions [J. Fluid Mech. 29, 417 (1967)]. It is time dependent as a result of viscosity, which is modeled by allowing the free parameter to depend on time according to the self-similar solution of the Navier-Stokes equations for an isolated point vortex. Particle diffusion is numerically simulated by a random walk model. While earlier work had shown that, for small values of time over Reynolds number, t/Re ≪ 1, the interval length characterizing the formation of lobes of fluid escaping from the cat's eye scales as Re^(-1/2), the present study shows that, for the case of diffusive effects and t/Pe ≪ 1, the scaling follows Pe^(-1/4). A simple argument, taking into account streamline convergence and divergence in different parts of the flow field, explains the Pe^(-1/4) scaling.
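The random-walk model of particle diffusion mentioned above adds, at each time step, a Gaussian step whose variance is set by the Péclet number, so that the diffusive spread after time t is sqrt(2t/Pe). A minimal sketch of that ingredient (generic 1D setup, not the Stuart-vortex flow itself; the advective part of the step is omitted):

```python
# Sketch: random-walk model of particle diffusion at finite Peclet number.
# Each particle takes Gaussian steps of variance 2*dt/Pe, so the ensemble
# variance after time t should approach 2*t/Pe. Illustrative setup only.

import random

random.seed(0)  # reproducible run

def random_walk_variance(pe, t, dt, n_particles):
    sigma = (2.0 * dt / pe) ** 0.5
    steps = int(t / dt)
    final = []
    for _ in range(n_particles):
        x = 0.0
        for _ in range(steps):
            x += random.gauss(0.0, sigma)  # diffusive step only
        final.append(x)
    mean = sum(final) / n_particles
    return sum((x - mean) ** 2 for x in final) / n_particles

# variance should be close to 2*t/Pe = 0.02 for Pe = 100, t = 1
print(random_walk_variance(pe=100.0, t=1.0, dt=0.01, n_particles=2000))
```

Tracking where such walkers exit the cat's eye, rather than just their spread, is what yields the Pe^(-1/4) lobe scaling discussed in the abstract.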
NASA Astrophysics Data System (ADS)
Laiolo, P.; Gabellani, S.; Campo, L.; Silvestro, F.; Delogu, F.; Rudari, R.; Pulvirenti, L.; Boni, G.; Fascetti, F.; Pierdicca, N.; Crapolicchio, R.; Hasenauer, S.; Puca, S.
2016-06-01
The reliable estimation of hydrological variables in space and time is of fundamental importance in operational hydrology for improving flood prediction and the description of the hydrological cycle. Remotely sensed data now offer a chance to improve hydrological models, especially in environments with scarce ground-based data. The aim of this work is to update the state variables of a physically based, distributed and continuous hydrological model using four different satellite-derived data sets (three soil moisture products and a land surface temperature measurement) and one soil moisture analysis to evaluate, even with a non-optimal technique, the impact on the hydrological cycle. The experiments were carried out for a small catchment in the northern part of Italy for the period July 2012-June 2013. The products were pre-processed according to their own characteristics and then assimilated into the model using a simple nudging technique. The benefits to the model's discharge predictions were tested against observations. The analysis showed a general improvement of the model discharge predictions, even with a simple assimilation technique, for all the assimilation experiments; the Nash-Sutcliffe model efficiency coefficient increased from 0.6 (for the model without assimilation) to 0.7, and errors on discharge were reduced by up to 10%. An added value of the assimilation was found in the rainfall season (autumn): all the assimilation experiments reduced the errors by up to 20%. This demonstrates that the discharge prediction of a distributed hydrological model, which works at fine-scale resolution in a small basin, can be improved by the assimilation of coarse-scale satellite-derived data.
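The simple nudging technique mentioned above amounts to relaxing the model state toward each pre-processed observation with a fixed gain at assimilation times. A minimal sketch of one such update (the variable names, values and gain are illustrative, not the study's configuration):

```python
# Sketch: one nudging update of a model state toward an observation.
# state_new = state + gain * (obs - state), with gain in [0, 1]:
# gain = 0 ignores the observation, gain = 1 replaces the state with it.

def nudge(state, obs, gain):
    """Relax a model state toward an observation with a fixed gain."""
    return state + gain * (obs - state)

soil_moisture = 0.30   # hypothetical model state before assimilation
satellite_obs = 0.38   # hypothetical pre-processed satellite estimate
print(round(nudge(soil_moisture, satellite_obs, gain=0.5), 3))  # -> 0.34
```

Unlike a Kalman-type update, the gain here is not derived from error statistics, which is why the abstract describes the technique as simple and non-optimal while still improving the discharge predictions.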
Safak, Ilgar; List, Jeffrey; Warner, John C.; Kumar, Nirnimesh
2017-01-01
Long-term decadal-scale shoreline change is an important parameter for quantifying the stability of coastal systems. Decadal-scale coastal change is controlled both by processes that occur on short time scales (such as storms) and by long-term processes (such as prevailing waves). The ability to predict decadal-scale shoreline change is not well established, and the fundamental physical processes controlling this change are not well understood. Here we investigate the processes that create large-scale, long-term shoreline change along the Outer Banks of North Carolina, an uninterrupted 60 km stretch of coastline, using both observations and a numerical modeling approach. Shoreline positions for a 24-yr period were derived from aerial photographs of the Outer Banks. Analysis of the shoreline position data showed that, although variable, the shoreline eroded an average of 1.5 m/yr throughout this period. The modeling approach uses a three-dimensional hydrodynamics-based numerical model coupled to a spectral wave model and simulates the full 24-yr time period on a spatial grid, running on a short (second-scale) time step to compute the sediment transport patterns. The observations and the model results show similar magnitudes (O(10^5 m^3/yr)) and patterns of alongshore sediment fluxes. Both the observed and the modeled alongshore sediment transport rates change more rapidly in the north of our section, due to the continuously curving coastline and possible effects of alongshore variations in shelf bathymetry. The southern section, with a relatively uniform orientation, shows less rapid changes in transport rate. Alongshore gradients of the modeled sediment fluxes are translated into shoreline change rates that agree with observations in some locations but differ in others. Differences between observations and model results are potentially influenced by geologic framework processes not included in the model.
Both the observations and the model results show higher rates of erosion (∼−1 m/yr) averaged over the northern half of the section as compared to the southern half, where the observed and modeled averaged net shoreline changes are smaller (<0.1 m/yr). The model indicates accretion in some shallow embayments, whereas observations indicate erosion in these locations. Further analysis identifies that the magnitude of net alongshore sediment transport is strongly dominated by events associated with high wave energy. However, both big- and small-wave events cause shoreline change of the same order of magnitude because it is the gradients in transport, not the magnitude, that control shoreline change. Results also indicate that alongshore momentum is not a simple balance between wave breaking and bottom stress, but also includes processes of horizontal vortex force, horizontal advection and pressure gradient that contribute to long-term alongshore sediment transport. For comparison with a simpler approach, an empirical formulation for alongshore sediment transport is used. The empirical estimates capture the effect of the breaking term in the hydrodynamics-based model; however, other processes that are accounted for in the hydrodynamics-based model improve the agreement with the observed alongshore sediment transport.
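The core balance in the modeling approach above, that alongshore transport gradients rather than transport magnitude drive shoreline change, can be sketched with a CERC-type empirical transport formula and one-line sediment conservation. All coefficients and wave values below are illustrative placeholders, not those of the study:

```python
import math

def cerc_transport(hb, alpha_b, k=0.39):
    """CERC-type estimate of alongshore transport Q (m^3/s) from breaking
    wave height hb (m) and breaker angle alpha_b (rad). Coefficient values
    are illustrative, not the ones used in the paper."""
    rho, rho_s, g, gamma, p = 1025.0, 2650.0, 9.81, 0.78, 0.4
    immersed = k * rho * math.sqrt(g / gamma) / (16.0 * (rho_s - rho) * (1.0 - p))
    return immersed * hb ** 2.5 * math.sin(2.0 * alpha_b)

def shoreline_change(q, dx, closure_depth):
    """Shoreline change rate (m/s) from alongshore gradients in transport:
    dy/dt = -(1/closure_depth) * dQ/dx (simple sediment conservation)."""
    return [-(q[i + 1] - q[i]) / (dx * closure_depth) for i in range(len(q) - 1)]

# A coastline with spatially varying breaker angle produces transport
# gradients (erosion); a uniform stretch produces none.
angles = [math.radians(a) for a in (2, 4, 6, 6, 6)]
q = [cerc_transport(1.5, a) for a in angles]
rates = shoreline_change(q, dx=1000.0, closure_depth=8.0)
```

Note that where the breaker angle is uniform the transport is large but its gradient, and hence the shoreline change, is zero, which is exactly the distinction the abstract draws between big- and small-wave events.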
NASA Technical Reports Server (NTRS)
Green, A. E. S.; Singhal, R. P.
1979-01-01
An analytic representation for the spatial (radial and longitudinal) yield spectra is developed in terms of a model containing three simple 'microplumes'. The model is applied to electron energy degradation in molecular nitrogen gas for 0.1 to 5 keV incident electrons. From the nature of the cross section input to this model it is expected that the scaled spatial yield spectra for other gases will be quite similar. The model indicates that each excitation, ionization, etc. plume should have its individual spatial and energy dependence. Extensions and aeronomical and radiological applications of the model are discussed.
Rotstein, Horacio G
2014-01-01
We investigate the dynamic mechanisms of generation of subthreshold and phase resonance in two-dimensional linear and linearized biophysical (conductance-based) models, and we extend our analysis to account for the effect of simple, but not necessarily weak, types of nonlinearities. Subthreshold resonance refers to the ability of neurons to exhibit a peak in their voltage amplitude response to oscillatory input currents at a preferred non-zero (resonant) frequency. Phase-resonance refers to the ability of neurons to exhibit a zero-phase (or zero-phase-shift) response to oscillatory input currents at a non-zero (phase-resonant) frequency. We adapt the classical phase-plane analysis approach to account for the dynamic effects of oscillatory inputs and develop a tool, the envelope-plane diagrams, that captures the role that conductances and time scales play in amplifying the voltage response at the resonant frequency band as compared to smaller and larger frequencies. We use envelope-plane diagrams in our analysis. We explain why the resonance phenomena do not necessarily arise from the presence of imaginary eigenvalues at rest, but rather they emerge from the interplay of the intrinsic and input time scales. We further explain why an increase in the time-scale separation causes an amplification of the voltage response in addition to shifting the resonant and phase-resonant frequencies. This is of fundamental importance for neural models since neurons typically exhibit a strong separation of time scales. We extend this approach to explain the effects of nonlinearities on both resonance and phase-resonance. We demonstrate that nonlinearities in the voltage equation cause amplifications of the voltage response and shifts in the resonant and phase-resonant frequencies that are not predicted by the corresponding linearized model. 
The differences between the nonlinear response and the linear prediction increase with increasing levels of the time scale separation between the voltage and the gating variable, and they almost disappear when both equations evolve at comparable rates. In contrast, voltage responses are almost insensitive to nonlinearities located in the gating variable equation. The method we develop provides a framework for the investigation of the preferred frequency responses in three-dimensional and nonlinear neuronal models as well as simple models of coupled neurons.
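The linear part of this analysis can be reproduced in a few lines: for a 2D linear model C dV/dt = −g_L V − g_1 w + I, τ dw/dt = V − w, the impedance is Z(ω) = 1/(iωC + g_L + g_1/(1 + iωτ)), and a peak appears at a non-zero frequency when the time scales are sufficiently separated. A sketch with illustrative parameters (not taken from the paper):

```python
import cmath

def impedance(freq_hz, c=1.0, g_l=0.5, g_1=10.0, tau=1.0):
    """Impedance amplitude |Z| of the linear 2D model
    C dV/dt = -g_l*V - g_1*w + I,  tau dw/dt = V - w,
    for sinusoidal input at freq_hz. Parameter values are illustrative."""
    w = 2 * cmath.pi * freq_hz
    z = 1.0 / (1j * w * c + g_l + g_1 / (1.0 + 1j * w * tau))
    return abs(z)

# Scan frequencies and locate the resonant (peak-response) frequency.
freqs = [0.01 * k for k in range(1, 1001)]   # 0.01 .. 10 Hz
amps = [impedance(f) for f in freqs]
f_res = freqs[amps.index(max(amps))]
```

With these parameters the voltage response at f_res exceeds both the low- and high-frequency responses, i.e. subthreshold resonance, even though the corresponding eigenvalues need not be complex.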
The nature of the colloidal 'glass' transition.
Dawson, Kenneth A; Lawlor, A; DeGregorio, Paolo; McCullagh, Gavin D; Zaccarelli, Emanuela; Foffi, Giuseppe; Tartaglia, Piero
2003-01-01
The dynamically arrested state of matter is discussed in the context of athermal systems, such as the hard sphere colloidal arrest. We believe that the singular dynamical behaviour near arrest expressed, for example, in how the diffusion constant vanishes may be 'universal', in a sense to be discussed in the paper. Based on this we argue the merits of studying the problem with simple lattice models. This, by analogy with the critical point of the Ising model, should lead us to clarify the questions, and begin the program of establishing the degree of universality to be expected. We deal only with 'ideal' athermal dynamical arrest transitions, such as those found for hard sphere systems. However, it is argued that dynamically available volume (DAV) is the relevant order parameter of the transition, and that universal mechanisms may be well expressed in terms of DAV. For simple lattice models we give examples of simple laws that emerge near the dynamical arrest, emphasising the idea of a near-ideal gas of 'holes', interacting to give the power law diffusion constant scaling near the arrest. We also seek to open the discussion of the possibility of an underlying weak coupling theory of the dynamical arrest transition, based on DAV.
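The power-law scaling of the diffusion constant near arrest, D ∝ (v − v_c)^φ with v the dynamically available volume, can be checked on synthetic data with a log-log fit. The arrest point and exponent below are invented for illustration:

```python
import math, random

random.seed(1)
v_c, phi = 0.25, 2.0   # assumed arrest point and scaling exponent

# Synthetic diffusion constants following D ~ (v - v_c)^phi near arrest,
# with small multiplicative noise standing in for simulation scatter.
v = [v_c + 0.01 * k for k in range(1, 21)]
d = [(vi - v_c) ** phi * math.exp(random.gauss(0, 0.02)) for vi in v]

# A least-squares fit in log-log coordinates recovers the exponent phi.
xs = [math.log(vi - v_c) for vi in v]
ys = [math.log(di) for di in d]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
phi_fit = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
          sum((x - mx) ** 2 for x in xs)
```

In practice the hard part is knowing v_c; fitting with the wrong arrest point bends the log-log plot away from a straight line, which is one diagnostic for locating the transition.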
Simple Model of Macroscopic Instability in XeCl Discharge Pumped Lasers
NASA Astrophysics Data System (ADS)
Ahmed, Belasri; Zoheir, Harrache
2003-10-01
The aim of this work is to study the development of macroscopic non-uniformity of the electron density in high-pressure discharges for excimer lasers, and eventually its propagation due to the kinetic phenomena of the medium. This study is carried out using a transverse one-dimensional model, in which the plasma is represented by a set of resistances in parallel. The model is implemented in a numerical code including three strongly coupled parts: electric circuit equations, the electron Boltzmann equation, and kinetics equations (chemical kinetics model). The time variations of the electron density in each plasma element are obtained by solving a set of ordinary differential equations describing the plasma kinetics and the external circuit. The present model allows a good understanding of the halogen depletion phenomenon, which is the principal cause of laser pulse termination, and allows a simple study of a large-scale non-uniformity in preionization density and its effects on the electrical and chemical plasma properties. The obtained results indicate clearly that about 50% of the halogen is consumed by the end of the pulse. Key words: excimer laser, XeCl, modeling, cold plasma, kinetics, halogen depletion, macroscopic instability.
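The structure of the coupled kinetics/circuit ODE system can be illustrated with a dimensionless two-species toy: a pumping source feeds electrons, which attach to and consume the halogen donor. All rates are invented and stand in for the full Boltzmann/circuit/kinetics coupling of the paper:

```python
def simulate(t_end=10.0, dt=1e-3, s=1.0, k_att=1.0, k_dep=0.5):
    """Toy halogen-depletion sketch (dimensionless units): a source s feeds
    electron density n_e, electrons are lost by attachment to the halogen
    donor n_h, and each attachment consumes donor. Explicit Euler stepping;
    rate values are illustrative, not the paper's kinetics."""
    n_e, n_h = 0.0, 1.0
    for _ in range(int(t_end / dt)):
        dn_e = s - k_att * n_e * n_h
        dn_h = -k_dep * n_e * n_h
        n_e += dt * dn_e
        n_h += dt * dn_h
    return n_e, n_h

n_e_end, n_h_end = simulate()
depleted_fraction = 1.0 - n_h_end   # fraction of donor consumed by pulse end
```

Even this toy reproduces the qualitative point of the abstract: the donor density decays monotonically during the pulse, and its exhaustion is what ultimately ends lasing.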
A Simple Model for the Evolution of Multi-Stranded Coronal Loops
NASA Technical Reports Server (NTRS)
Fuentes, M. C. Lopez; Klimchuk, J. A.
2010-01-01
We develop and analyze a simple cellular automaton (CA) model that reproduces the main properties of the evolution of soft X-ray coronal loops. We are motivated by the observation that these loops evolve in three distinguishable phases that suggest the development, maintenance, and decay of a self-organized system. The model is based on the idea that loops are made of elemental strands that are heated by the relaxation of magnetic stress in the form of nanoflares. In this picture, usually called "the Parker conjecture" (Parker 1988), the origin of stress is the displacement of the strand footpoints due to photospheric convective motions. Modeling the response and evolution of the plasma, we obtain synthetic light curves that have the same characteristic properties (intensity, fluctuations, and timescales) as the observed cases. We study the dependence of these properties on the model parameters and find scaling laws that can be used as observational predictions of the model. We discuss the implications of our results for the interpretation of recent loop observations in different wavelengths. Subject headings: Sun: corona - Sun: flares - Sun: magnetic topology - Sun: X-rays, gamma rays
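A minimal version of such a CA model, with strands accumulating stress from random footpoint motions and releasing it as nanoflares past a threshold, might look like the following. All numbers are illustrative, and the paper's plasma-response modeling is not included:

```python
import random

random.seed(7)

def loop_light_curve(n_strands=50, n_steps=400, drive=0.02, threshold=1.0):
    """Each strand accumulates magnetic stress from random footpoint motions;
    when its stress exceeds a threshold it relaxes (a nanoflare), releasing
    the stored energy as radiation. Parameter values are illustrative."""
    stress = [0.0] * n_strands
    intensity = []
    for _ in range(n_steps):
        released = 0.0
        for i in range(n_strands):
            stress[i] += random.uniform(0.0, 2.0 * drive)
            if stress[i] >= threshold:
                released += stress[i]
                stress[i] = 0.0
        intensity.append(released)
    return intensity

lc = loop_light_curve()
mean_intensity = sum(lc) / len(lc)
```

The summed releases form a fluctuating light curve whose mean level, fluctuation amplitude, and characteristic timescale depend on the number of strands, the driving rate, and the threshold, which are the kinds of scaling relationships the paper studies.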
Internal Fluid Dynamics and Frequency Scaling of Sweeping Jet Fluidic Oscillators
NASA Astrophysics Data System (ADS)
Seo, Jung Hee; Salazar, Erik; Mittal, Rajat
2017-11-01
Sweeping jet fluidic oscillators (SJFOs) are devices that produce a spatially oscillating jet solely based on intrinsic flow instability mechanisms without any moving parts. Recently, SJFOs have emerged as effective actuators for flow control, but the internal fluid dynamics of the device that drives the oscillatory flow mechanism is not yet fully understood. In the current study, the internal fluid dynamics of the fluidic oscillator with feedback channels has been investigated by employing incompressible flow simulations. The study is focused on the oscillation mechanisms and scaling laws that underpin the jet oscillation. Based on the simulation results, simple phenomenological models that connect the jet deflection to the feedback flow are developed. Several geometric modifications are considered in order to explore the characteristic length scales and phase relationships associated with the jet oscillation and to assess the proposed phenomenological model. A scaling law for the jet oscillation frequency is proposed based on the detailed analysis. This research is supported by AFOSR Grant FA9550-14-1-0289 monitored by Dr. Douglas Smith.
Shift scheduling model considering workload and worker’s preference for security department
NASA Astrophysics Data System (ADS)
Herawati, A.; Yuniartha, D. R.; Purnama, I. L. I.; Dewi, LT
2018-04-01
A security department operates 24 hours a day and applies shift scheduling to organize its workers, as is common in the hotel industry. This research develops a shift scheduling model that considers workers' physical workload, measured with the rating of perceived exertion (RPE) Borg scale, and workers' preferences, to accommodate schedule flexibility. The mathematical model is formulated as an integer linear program and yields an optimal solution for a simple problem. The resulting shift schedule distributes shift allocations equally among workers to balance physical workload and gives workers flexibility in arranging their working hours.
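For a toy instance, the trade-off between workload balance and preference satisfaction can be explored by brute force instead of an ILP solver. The loads, preferences, and cost weighting below are invented for illustration:

```python
import itertools

shift_load = [4, 6, 3, 5, 6, 4]   # Borg RPE-like workload rating per shift
pref = {                           # worker -> set of preferred shift indices
    "A": {0, 2},
    "B": {1, 3},
    "C": {4, 5},
}
workers = sorted(pref)

def cost(assign):
    """Workload imbalance (range of per-worker total load) minus a small
    bonus for every shift a worker actually prefers."""
    load = {w: 0 for w in workers}
    bonus = 0
    for shift, w in enumerate(assign):
        load[w] += shift_load[shift]
        if shift in pref[w]:
            bonus += 1
    return (max(load.values()) - min(load.values())) - 0.1 * bonus

# Enumerate all 3^6 assignments of shifts to workers and keep the cheapest.
best = min(itertools.product(workers, repeat=len(shift_load)), key=cost)
best_load = {w: sum(shift_load[s] for s, x in enumerate(best) if x == w)
             for w in workers}
```

With a total load of 28 split among three workers, a perfectly even split is impossible; the optimum balances loads to within one RPE unit while honoring as many preferences as the balance constraint allows, which mirrors the two objectives combined in the paper's model.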
Applying the scientific method to small catchment studies: A review of the Panola Mountain experience
Hooper, R.P.
2001-01-01
A hallmark of the scientific method is its iterative application to a problem to increase and refine the understanding of the underlying processes controlling it. A successful iterative application of the scientific method to catchment science (including the fields of hillslope hydrology and biogeochemistry) has been hindered by two factors. First, the scale at which controlled experiments can be performed is much smaller than the scale of the phenomenon of interest. Second, computer simulation models generally have not been used as hypothesis-testing tools as rigorously as they might have been. Model evaluation often has gone only so far as evaluation of goodness of fit, rather than a full structural analysis, which is more useful when treating the model as a hypothesis. An iterative application of a simple mixing model to the Panola Mountain Research Watershed is reviewed to illustrate the increase in understanding gained by this approach and to discern general principles that may be applicable to other studies. The lessons learned include the need for an explicitly stated conceptual model of the catchment, the definition of objective measures of its applicability, and a clear linkage between the scale of observations and the scale of predictions. Published in 2001 by John Wiley & Sons, Ltd.
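The simple mixing-model idea behind this kind of work, treating stream water as a mixture of end members traced by conservative solutes, reduces to a small linear solve. A two-end-member sketch with hypothetical tracer concentrations (not data from Panola):

```python
def mixing_fractions(c_stream, c_end1, c_end2):
    """Fraction of each end member in a stream sample from one conservative
    tracer, via the mass balance c_stream = f*c_end1 + (1 - f)*c_end2."""
    f = (c_stream - c_end2) / (c_end1 - c_end2)
    return f, 1.0 - f

# Hypothetical tracer concentrations (e.g. silica, mg/l) for a hillslope
# end member, a groundwater end member, and an observed stream sample.
f_hill, f_gw = mixing_fractions(c_stream=4.0, c_end1=2.0, c_end2=6.0)
```

Treated as a hypothesis rather than a curve fit, the model is testable: if computed fractions fall outside [0, 1], or different tracers give inconsistent fractions, the conceptual model of the catchment (the choice of end members) is rejected, which is the iterative use the review advocates.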
Propulsion simulation for magnetically suspended wind tunnel models
NASA Technical Reports Server (NTRS)
Joshi, Prakash B.; Beerman, Henry P.; Chen, James; Krech, Robert H.; Lintz, Andrew L.; Rosen, David I.
1990-01-01
The feasibility of simulating propulsion-induced aerodynamic effects on scaled aircraft models in wind tunnels employing Magnetic Suspension and Balance Systems (MSBS) was investigated. The investigation concerned itself with techniques of generating exhaust jets of appropriate characteristics. The objectives were to: (1) define thrust and mass flow requirements of jets; (2) evaluate techniques for generating propulsive gas within volume limitations imposed by magnetically-suspended models; (3) conduct simple diagnostic experiments for techniques involving new concepts; and (4) recommend experiments for demonstration of propulsion simulation techniques. Various techniques of generating exhaust jets of appropriate characteristics were evaluated on scaled aircraft models in wind tunnels with MSBS. Four concepts of remotely-operated propulsion simulators were examined. Three conceptual designs involving innovative adaptation of convenient technologies (compressed gas cylinders, liquid, and solid propellants) were developed. The fourth innovative concept, namely, the laser-assisted thruster, which can potentially simulate both inlet and exhaust flows, was found to require very high power levels for small thrust levels.
Material Model Evaluation of a Composite Honeycomb Energy Absorber
NASA Technical Reports Server (NTRS)
Jackson, Karen E.; Annett, Martin S.; Fasanella, Edwin L.; Polanco, Michael A.
2012-01-01
A study was conducted to evaluate four different material models in predicting the dynamic crushing response of solid-element-based models of a composite honeycomb energy absorber, designated the Deployable Energy Absorber (DEA). Dynamic crush tests of three DEA components were simulated using the nonlinear, explicit transient dynamic code LS-DYNA. In addition, a full-scale crash test of an MD-500 helicopter, retrofitted with DEA blocks, was simulated. The four material models used to represent the DEA included: *MAT_CRUSHABLE_FOAM (Mat 63), *MAT_HONEYCOMB (Mat 26), *MAT_SIMPLIFIED_RUBBER/FOAM (Mat 181), and *MAT_TRANSVERSELY_ANISOTROPIC_CRUSHABLE_FOAM (Mat 142). Test-analysis calibration metrics included simple percentage error comparisons of initial peak acceleration, sustained crush stress, and peak compaction acceleration of the DEA components. In addition, the Roadside Safety Verification and Validation Program (RSVVP) was used to assess similarities and differences between the experimental and analytical curves for the full-scale crash test.
Critical length scale controls adhesive wear mechanisms
Aghababaei, Ramin; Warner, Derek H.; Molinari, Jean-Francois
2016-01-01
The adhesive wear process remains one of the least understood areas of mechanics. While it has long been established that adhesive wear is a direct result of contacting surface asperities, an agreed upon understanding of how contacting asperities lead to wear debris particles has remained elusive. This has restricted adhesive wear prediction to empirical models with limited transferability. Here we show that discrepant observations and predictions of two distinct adhesive wear mechanisms can be reconciled into a unified framework. Using atomistic simulations with model interatomic potentials, we reveal a transition in the asperity wear mechanism when contact junctions fall below a critical length scale. A simple analytic model is formulated to predict the transition in both the simulation results and experiments. This new understanding may help expand use of computer modelling to explore adhesive wear processes and to advance physics-based wear laws without empirical coefficients. PMID:27264270
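The transition can be phrased as an energy balance: a junction detaches as a debris particle when its stored elastic energy (of order σ_j²/2G per unit volume) exceeds the adhesive energy needed to create new surface. A sketch using an assumed functional form and illustrative numbers, not the paper's calibrated model:

```python
def critical_junction_size(shear_modulus, junction_strength, adhesion_energy,
                           shape_factor=3.0):
    """Critical contact-junction size of the assumed form
    d* = shape_factor * adhesion_energy / (junction_strength^2 / (2 G)).
    Junctions larger than d* store enough elastic energy to detach as debris;
    smaller ones deform plastically. Form and shape factor are assumptions."""
    return shape_factor * adhesion_energy / \
        (junction_strength ** 2 / (2.0 * shear_modulus))

def wear_mechanism(junction_size, d_star):
    return "debris formation" if junction_size >= d_star else "plastic smoothing"

# Metal-like illustrative values: G = 27 GPa, sigma_j = 1 GPa, w = 1 J/m^2.
d_star = critical_junction_size(27e9, 1e9, 1.0)
```

The appeal of such a model is that every input is a measurable material property, so the debris/no-debris transition needs no empirical wear coefficient.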
Quantifying the influence of sediment source area sampling on detrital thermochronometer data
NASA Astrophysics Data System (ADS)
Whipp, D. M., Jr.; Ehlers, T. A.; Coutand, I.; Bookhagen, B.
2014-12-01
Detrital thermochronology offers a unique advantage over traditional bedrock thermochronology because of its sensitivity to sediment production and transportation to sample sites. In mountainous regions, modern fluvial sediment is often collected and dated to determine the past (10⁵ to >10⁷ year) exhumation history of the upstream drainage area. Though potentially powerful, the interpretation of detrital thermochronometer data derived from modern fluvial sediment is challenging because of spatial and temporal variations in sediment production and transport, and target mineral concentrations. Thermochronometer age prediction models provide a quantitative basis for data interpretation, but it can be difficult to separate variations in catchment bedrock ages from the effects of variable basin denudation and sediment transport. We present two examples of quantitative data interpretation using detrital thermochronometer data from the Himalaya, focusing on the influence of spatial and temporal variations in basin denudation on predicted age distributions. We combine age predictions from the 3D thermokinematic numerical model Pecube with simple models for sediment sampling in the upstream drainage basin area to assess the influence of variations in sediment production by different geomorphic processes or scaled by topographic metrics. We first consider a small catchment from the central Himalaya where bedrock landsliding appears to have affected the observed muscovite ⁴⁰Ar/³⁹Ar age distributions. Using a simple model of random landsliding with a power-law landslide frequency-area relationship we find that the sediment residence time in the catchment has a major influence on predicted age distributions. In the second case, we compare observed detrital apatite fission-track age distributions from 16 catchments in the Bhutan Himalaya to ages predicted using Pecube and scaled by various topographic metrics.
Preliminary results suggest that predicted age distributions scaled by the rock uplift rate in Pecube are statistically equivalent to the observed age distributions for ~75% of the catchments, but may improve when scaled by local relief or specific stream power weighted by satellite-derived precipitation. Ongoing work is exploring the effect of scaling by other topographic metrics.
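The random-landsliding ingredient above, sampling landslide areas from a power-law frequency-area distribution, can be sketched with inverse-transform sampling. The exponent and lower cutoff are invented for illustration:

```python
import random

random.seed(3)

def sample_landslide_area(a_min=1e3, exponent=2.4):
    """Draw a landslide area (m^2) from a power-law frequency-area
    distribution p(A) ~ A^-exponent for A >= a_min, by inverse-transform
    sampling. Exponent and cutoff values are illustrative."""
    u = random.random()
    return a_min * (1.0 - u) ** (-1.0 / (exponent - 1.0))

areas = [sample_landslide_area() for _ in range(5000)]
frac_large = sum(a > 1e4 for a in areas) / len(areas)
```

Because the distribution is heavy-tailed, a handful of rare large slides dominates the sediment volume, which is why sediment residence time (how long a landslide's pulse of grains lingers in the catchment) controls the predicted detrital age distribution.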
Adventures in heterotic string phenomenology
NASA Astrophysics Data System (ADS)
Dundee, George Benjamin
In this Dissertation, we consider three topics in the study of effective field theories derived from orbifold compactifications of the heterotic string. In Chapter 2 we provide a primer for those interested in building models based on orbifold compactifications of the heterotic string. In Chapter 3, we analyze gauge coupling unification in the context of heterotic strings on anisotropic orbifolds. This construction is very much analogous to effective five dimensional orbifold GUT field theories. Our analysis assumes three fundamental scales: the string scale, M_S, a compactification scale, M_C, and a mass scale for some of the vector-like exotics, M_EX; the other exotics are assumed to get mass at M_S. In the particular models analyzed, we show that gauge coupling unification is not possible with M_EX = M_C and in fact we require M_EX << M_C ≈ 3 × 10¹⁶ GeV. We find that about 10% of the parameter space has a proton lifetime (from dimension six gauge exchange) 10³³ yr ≲ τ(p → π⁰e⁺) ≲ 10³⁶ yr, which is potentially observable by the next generation of proton decay experiments. 80% of the parameter space gives proton lifetimes below Super-K bounds. In Chapter 4, we examine the relationship between the string coupling constant, g_STRING, and the grand unified gauge coupling constant, α_GUT, in the models of Chapter 3. We find that the requirement that the theory be perturbative provides a non-trivial constraint on these models. Interestingly, there is a correlation between the proton decay rate (due to dimension six operators) and the string coupling constant in this class of models. Finally, we make some comments concerning the extension of these models to the six (and higher) dimensional case. In Chapter 5, we discuss the issues of supersymmetry breaking and moduli stabilization within the context of E8 ⊗ E8 heterotic orbifold constructions and, in particular, we focus on the class of "mini-landscape" models.
These theories contain a non-Abelian hidden gauge sector which generates a non-perturbative superpotential leading to supersymmetry breaking and moduli stabilization. We demonstrate this effect in a simple model which contains many of the features of the more general construction. In addition, we argue that once supersymmetry is broken in a restricted sector of the theory, then all moduli are stabilized by supergravity effects. Finally, we obtain the low energy superparticle spectrum resulting from this simple model.
The evolution of scaling laws in the sea ice floe size distribution
NASA Astrophysics Data System (ADS)
Horvat, Christopher; Tziperman, Eli
2017-09-01
The sub-gridscale floe size and thickness distribution (FSTD) is an emerging climate variable, playing a leading-order role in the coupling between sea ice, the ocean, and the atmosphere. The FSTD, however, is difficult to measure given the vast range of horizontal scales of individual floes, leading to the common use of power-law scaling to describe it. The evolution of a coupled mixed-layer-FSTD model of a typical marginal ice zone is explicitly simulated here, to develop a deeper understanding of how processes active at the floe scale may or may not lead to scaling laws in the floe size distribution. The time evolution of mean quantities obtained from the FSTD (sea ice concentration, mean thickness, volume) is complex even in simple scenarios, suggesting that these quantities, which affect climate feedbacks, should be carefully calculated in climate models. The emergence of FSTDs with multiple separate power-law regimes, as seen in observations, is found to be due to the combination of multiple scale-selective processes. Limitations in assuming a power-law FSTD are carefully analyzed, applying methods used in observations to FSTD model output. Two important sources of error are identified that may lead to model biases: one when observing an insufficiently small range of floe sizes, and one from the fact that floe-scale processes often do not produce power-law behavior. These two sources of error may easily lead to biases in mean quantities derived from the FSTD of greater than 100%, and therefore biases in modeled sea ice evolution.
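One of the error sources identified above, fitting a single power law to a distribution that is not one, is easy to reproduce: fit a log-log slope over a narrow floe-size window and then over the full range of a power law with an exponential cutoff (a common alternative form). All numbers are illustrative:

```python
import math

# Floe sizes (m) and a number distribution that is NOT a pure power law:
# a power law with an exponential cutoff at large floe sizes.
sizes = [10.0 * 1.3 ** k for k in range(25)]          # ~10 m to ~5 km
n = [s ** -2.0 * math.exp(-s / 2000.0) for s in sizes]

def fit_slope(xs, ys):
    """Least-squares slope in log-log coordinates (apparent power-law
    exponent of the data over the given window)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    m = len(lx)
    mx, my = sum(lx) / m, sum(ly) / m
    return sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
           sum((a - mx) ** 2 for a in lx)

slope_narrow = fit_slope(sizes[:10], n[:10])   # small-floe window only
slope_wide = fit_slope(sizes, n)               # full observed range
```

Over the narrow window the fit looks convincingly like the underlying −2 exponent, while the full-range fit is visibly steeper: the apparent exponent depends on the observed size range, which is exactly the observational bias the paper quantifies.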
Converging shock flows for a Mie-Grüneisen equation of state
NASA Astrophysics Data System (ADS)
Ramsey, Scott D.; Schmidt, Emma M.; Boyd, Zachary M.; Lilieholm, Jennifer F.; Baty, Roy S.
2018-04-01
Previous work has shown that the one-dimensional (1D) inviscid compressible flow (Euler) equations admit a wide variety of scale-invariant solutions (including the famous Noh, Sedov, and Guderley shock solutions) when the included equation of state (EOS) closure model assumes a certain scale-invariant form. However, this scale-invariant EOS class does not include even simple models used for shock compression of crystalline solids, including many broadly applicable representations of Mie-Grüneisen EOS. Intuitively, this incompatibility naturally arises from the presence of multiple dimensional scales in the Mie-Grüneisen EOS, which are otherwise absent from scale-invariant models that feature only dimensionless parameters (such as the adiabatic index in the ideal gas EOS). The current work extends previous efforts intended to rectify this inconsistency, by using a scale-invariant EOS model to approximate a Mie-Grüneisen EOS form. To this end, the adiabatic bulk modulus for the Mie-Grüneisen EOS is constructed, and its key features are used to motivate the selection of a scale-invariant approximation form. The remaining surrogate model parameters are selected through enforcement of the Rankine-Hugoniot jump conditions for an infinitely strong shock in a Mie-Grüneisen material. Finally, the approximate EOS is used in conjunction with the 1D inviscid Euler equations to calculate a semi-analytical Guderley-like imploding shock solution in a metal sphere and to determine if and when the solution may be valid for the underlying Mie-Grüneisen EOS.
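The Mie-Grüneisen ingredients above rest on the linear shock-velocity/particle-velocity relation U_s = c_0 + s·u_p, which fixes the Hugoniot pressure as a function of compression. A sketch with copper-like parameters (illustrative only, not the material treated in the paper):

```python
def hugoniot_pressure(rho, rho0=8930.0, c0=3940.0, s=1.49):
    """Shock Hugoniot pressure (Pa) implied by the linear Us-up relation
    Us = c0 + s*up underlying many Mie-Gruneisen fits:
    p_H = rho0 * c0^2 * eta / (1 - s*eta)^2,  eta = 1 - rho0/rho.
    Copper-like parameter values, for illustration."""
    eta = 1.0 - rho0 / rho
    return rho0 * c0 * c0 * eta / (1.0 - s * eta) ** 2

p1 = hugoniot_pressure(1.2 * 8930.0)   # 20% compression
p2 = hugoniot_pressure(1.3 * 8930.0)   # 30% compression
```

The dimensional constants c_0 (a sound speed) and the reference density ρ_0 are precisely the scales that break the scale invariance required by Noh/Sedov/Guderley-type solutions, which is the incompatibility the paper's surrogate EOS is built to work around.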
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majda, Andrew J.; Xing, Yulong; Mohammadian, Majid
Determining the finite-amplitude preconditioned states in the hurricane embryo, which lead to tropical cyclogenesis, is a central issue in contemporary meteorology. In the embryo there is competition between different preconditioning mechanisms involving hydrodynamics and moist thermodynamics, which can lead to cyclogenesis. Here systematic asymptotic methods from applied mathematics are utilized to develop new simplified moist multi-scale models starting from the moist anelastic equations. Three interesting multi-scale models emerge in the analysis. The balanced mesoscale vortex (BMV) dynamics and the microscale balanced hot tower (BHT) dynamics involve simplified balanced equations without gravity waves for vertical vorticity amplification due to moist heat sources and incorporate nonlinear advective fluxes across scales. The BMV model is the central one for tropical cyclogenesis in the embryo. The moist mesoscale wave (MMW) dynamics involves simplified equations for mesoscale moisture fluctuations, as well as linear hydrostatic waves driven by heat sources from moisture and eddy flux divergences. A simplified cloud physics model for deep convection is introduced here and used to study moist axisymmetric plumes in the BHT model. A simple application in periodic geometry involving the effects of mesoscale vertical shear and moist microscale hot towers on vortex amplification is developed here to illustrate features of the coupled multi-scale models. These results illustrate the use of these models in isolating key mechanisms in the embryo in a simplified context.
Inflation, quintessence, and the origin of mass
NASA Astrophysics Data System (ADS)
Wetterich, C.
2015-08-01
In a unified picture both inflation and present dynamical dark energy arise from the same scalar field. The history of the Universe describes a crossover from a scale invariant "past fixed point" where all particles are massless, to a "future fixed point" for which spontaneous breaking of the exact scale symmetry generates the particle masses. The cosmological solution can be extrapolated to the infinite past in physical time - the universe has no beginning. This is seen most easily in a frame where particle masses and the Planck mass are field-dependent and increase with time. In this "freeze frame" the Universe shrinks and heats up during radiation and matter domination. In the equivalent, but singular Einstein frame cosmic history finds the familiar big bang description. The vicinity of the past fixed point corresponds to inflation. It ends at a first stage of the crossover. A simple model with no more free parameters than ΛCDM predicts for the primordial fluctuations a relation between the tensor amplitude r and the spectral index n, r = 8.19 (1 - n) - 0.137. The crossover is completed by a second stage where the beyond-standard-model sector undergoes the transition to the future fixed point. The resulting increase of neutrino masses stops a cosmological scaling solution, relating the present dark energy density to the present neutrino mass. At present our simple model seems compatible with all observational tests. We discuss how the fixed points can be rooted within quantum gravity in a crossover between ultraviolet and infrared fixed points. Then quantum properties of gravity could be tested both by very early and late cosmology.
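The quoted prediction r = 8.19(1 − n) − 0.137 is a one-liner to evaluate; the spectral index value below is an illustrative, Planck-like choice, not one stated in the abstract:

```python
def tensor_amplitude(n_s):
    """Tensor amplitude r from the model's predicted relation to the
    spectral index n: r = 8.19 * (1 - n) - 0.137 (given in the abstract)."""
    return 8.19 * (1.0 - n_s) - 0.137

r = tensor_amplitude(0.965)   # illustrative Planck-like spectral index
```

Because r depends linearly and steeply on (1 − n), even modest improvements in the measured spectral index translate into a sharp test of the model.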
NASA Astrophysics Data System (ADS)
Bordwell, Baylee; Brown, Benjamin P.; Oishi, Jeffrey S.
2018-02-01
Disequilibrium chemical processes significantly affect the spectra of substellar objects. To study these effects, dynamical disequilibrium has been parameterized using the quench and eddy diffusion approximations, but little work has been done to explore how these approximations perform under realistic planetary conditions in different dynamical regimes. As a first step toward addressing this problem, we study the localized, small-scale convective dynamics of planetary atmospheres by direct numerical simulation of fully compressible hydrodynamics with reactive tracers using the Dedalus code. Using polytropically stratified, plane-parallel atmospheres in 2D and 3D, we explore the quenching behavior of different abstract chemical species as a function of the dynamical conditions of the atmosphere as parameterized by the Rayleigh number. We find that in both 2D and 3D, chemical species quench deeper than would be predicted based on simple mixing-length arguments. Instead, it is necessary to employ length scales based on the chemical equilibrium profile of the reacting species in order to predict quench points and perform chemical kinetics modeling in 1D. Based on the results of our simulations, we provide a new length scale, derived from the chemical scale height, that can be used to perform these calculations. This length scale is simple to calculate from known chemical data and makes reasonable predictions for our dynamical simulations.
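The quench approximation discussed above reduces to a timescale comparison: the quench level is where the mixing timescale first drops below the chemical timescale, and above it the abundance is frozen at its deep value. A sketch with invented vertical profiles (not the simulation data of the paper):

```python
import math

def quench_index(tau_mix, tau_chem):
    """Index of the deepest-to-shallowest scan where mixing first outpaces
    chemistry (tau_mix < tau_chem); above this level the abundance is
    'quenched' at its value from depth."""
    for i, (tm, tc) in enumerate(zip(tau_mix, tau_chem)):
        if tm < tc:
            return i
    return None

# Illustrative profiles (index 0 = deep and hot, increasing index = upward):
# chemistry slows dramatically as temperature falls; eddy mixing stays flat.
levels = range(20)
tau_chem = [1e2 * math.exp(0.8 * k) for k in levels]   # seconds, grows upward
tau_mix = [1e5 for _ in levels]                         # eddy mixing timescale

iq = quench_index(tau_mix, tau_chem)
```

The paper's point is that the appropriate mixing length entering τ_mix should come from the chemical scale height rather than simple mixing-length arguments; in this sketch that choice would shift the τ_mix line and hence the crossing index.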
Pattern recognition analysis and classification modeling of selenium-producing areas
Naftz, D.L.
1996-01-01
Established chemometric and geochemical techniques were applied to water quality data from 23 National Irrigation Water Quality Program (NIWQP) study areas in the Western United States. These techniques were applied to the NIWQP data set to identify common geochemical processes responsible for mobilization of selenium and to develop a classification model that uses major-ion concentrations to identify areas that contain elevated selenium concentrations in water that could pose a hazard to waterfowl. Pattern recognition modeling of the simple-salt data computed with the SNORM geochemical program indicates three principal components that explain 95% of the total variance. A three-dimensional plot of PC 1, 2 and 3 scores shows three distinct clusters that correspond to distinct hydrochemical facies denoted as facies 1, 2 and 3. Facies 1 samples are distinguished by water samples without the CaCO3 simple salt and elevated concentrations of NaCl, CaSO4, MgSO4 and Na2SO4 simple salts relative to water samples in facies 2 and 3. Water samples in facies 2 are distinguished from facies 1 by the absence of the MgSO4 simple salt and the presence of the CaCO3 simple salt. Water samples in facies 3 are similar to samples in facies 2, with the absence of both MgSO4 and CaSO4 simple salts. Water samples in facies 1 have the largest selenium concentration (10 μg/l), compared to a median concentration of 2.0 μg/l and less than 1.0 μg/l for samples in facies 2 and 3. A classification model using the soft independent modeling by class analogy (SIMCA) algorithm was constructed with data from the NIWQP study areas. The classification model was successful in identifying water samples with a selenium concentration that is hazardous to some species of waterfowl from a test data set comprised of 2,060 water samples from throughout Utah and Wyoming.
Application of chemometric and geochemical techniques during data synthesis and analysis of multivariate environmental databases from other national-scale environmental programs such as the NIWQP could also provide useful insights for addressing 'real world' environmental problems.
Friction in debris flows: inferences from large-scale flume experiments
Iverson, Richard M.; LaHusen, Richard G.; ,
1993-01-01
A recently constructed flume, 95 m long and 2 m wide, permits systematic experimentation with unsteady, nonuniform flows of poorly sorted geological debris. Preliminary experiments with water-saturated mixtures of sand and gravel show that they flow in a manner consistent with Coulomb frictional behavior. The Coulomb flow model of Savage and Hutter (1989, 1991), modified to include quasi-static pore-pressure effects, predicts flow-front velocities and flow depths reasonably well. Moreover, simple scaling analyses show that grain friction, rather than liquid viscosity or grain collisions, probably dominates shear resistance and momentum transport in the experimental flows. The same scaling indicates that grain friction is also important in many natural debris flows.
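The scaling argument in this abstract can be made concrete with the standard dimensionless numbers used in Iverson-style analyses of debris flows, which compare grain-collision, grain-friction, and viscous shear stresses. The parameter values below are illustrative order-of-magnitude choices for a sand-and-gravel flume flow, not the actual experimental measurements.

```python
# Illustrative parameters for a water-saturated sand/gravel flow
rho_s, rho_f = 2700.0, 1000.0   # grain / pore-fluid densities (kg/m^3)
d, h = 1e-3, 0.1                # grain diameter, flow depth (m)
gamma_dot = 100.0               # characteristic shear rate (1/s)
mu = 1e-3                       # pore-water viscosity (Pa*s)
vs = 0.6                        # solid volume fraction
g = 9.81                        # gravity (m/s^2)

# Savage number: inertial grain-collision stress / frictional stress
N_sav = rho_s * gamma_dot**2 * d**2 / ((rho_s - rho_f) * g * h)

# Bagnold number: grain-collision stress / viscous shear stress
N_bag = vs / (1 - vs) * rho_s * d**2 * gamma_dot / mu

# Friction number: frictional stress / viscous stress
N_fric = N_bag / N_sav

print(f"N_sav  = {N_sav:.3g}  (small -> friction dominates collisions)")
print(f"N_fric = {N_fric:.3g} (large -> friction dominates viscosity)")
```

With these values the Savage number is well below the commonly cited ~0.1 threshold and the friction number is very large, consistent with the abstract's conclusion that grain friction, rather than grain collisions or liquid viscosity, dominates shear resistance.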
Scaled SFS method for Lambertian surface 3D measurement under point source lighting.
Ma, Long; Lyu, Yi; Pei, Xin; Hu, Yan Min; Sun, Feng Ming
2018-05-28
A Lambertian surface is a very important assumption in shape from shading (SFS) and is widely used in many measurement cases. In this paper, a novel scaled SFS method is developed to measure the shape of a Lambertian surface with absolute dimensions. A more accurate light source model is investigated under the illumination of a simple point light source, and the relationship between the surface depth map and the recorded image grayscale is established by introducing the camera matrix into the model. Together with the constraints of brightness, smoothness, and integrability, the surface shape with dimensions can be obtained by analyzing only one image with the scaled SFS method. Simulations show a close match between the simulated structures and the reconstructions; the reconstruction root mean square error (RMSE) is below 0.6 mm. A further experiment measuring the internal surface of a PVC tube gives an overall measurement error below 2%.
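The forward model underlying this kind of SFS method is the Lambertian point-source image equation, E(x, y) ∝ albedo · max(n · l, 0) / r², where n is the surface normal, l the unit direction to the source, and r the source distance (the inverse-square term is what distinguishes a point source from the common distant-source assumption). The scene and source position below are made up for illustration; the paper's camera-matrix coupling and regularized inversion are not reproduced here.

```python
import numpy as np

# Render a hemispherical bump under a nearby point source.
N = 64
xs = np.linspace(-1, 1, N)
X, Y = np.meshgrid(xs, xs)
Z = np.sqrt(np.clip(1 - X**2 - Y**2, 0, None))  # depth map of a unit hemisphere

# Surface normals from depth gradients: n ~ (-dZ/dx, -dZ/dy, 1), normalized
gy, gx = np.gradient(Z, xs, xs)
n = np.dstack([-gx, -gy, np.ones_like(Z)])
n /= np.linalg.norm(n, axis=2, keepdims=True)

src = np.array([0.0, 0.0, 3.0])                 # illustrative point-source position
P = np.dstack([X, Y, Z])
L = src - P                                     # vectors from surface to source
r = np.linalg.norm(L, axis=2)
l = L / r[..., None]

# Lambertian point-source irradiance (unit albedo)
E = np.clip((n * l).sum(axis=2), 0, None) / r**2
print(f"central irradiance: {E[31, 31]:.4f}")
```

SFS inverts this map: given E and the source/camera geometry, recover Z, which is only well-posed once brightness, smoothness, and integrability constraints are imposed as in the abstract.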
Exploring the effect of power law social popularity on language evolution.
Gong, Tao; Shuai, Lan
2014-01-01
We evaluate the effect of a power-law-distributed social popularity on the origin and change of language, based on three artificial life models meticulously tracing the evolution of linguistic conventions including lexical items, categories, and simple syntax. A cross-model analysis reveals an optimal social popularity, in which the λ value of the power law distribution is around 1.0. Under this scaling, linguistic conventions can efficiently emerge and widely diffuse among individuals, thus maintaining a useful level of mutual understandability even in a big population. From an evolutionary perspective, we regard this social optimality as a tradeoff among social scaling, mutual understandability, and population growth. Empirical evidence confirms that such optimal power laws exist in many large-scale social systems that are constructed primarily via language-related interactions. This study contributes to the empirical explorations and theoretical discussions of the evolutionary relations between ubiquitous power laws in social systems and relevant individual behaviors.
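A toy version of the mechanism studied here can be built from the minimal naming game, with speakers sampled under a power-law popularity distribution (probability of agent i speaking proportional to rank^(-λ)). This is a stand-in sketch, not the authors' three artificial life models; the agent count, step count, and λ = 1.0 are illustrative.

```python
import random

def naming_game(n_agents=50, lam=1.0, steps=20000, seed=1):
    """Minimal naming game with power-law-weighted speaker selection."""
    rng = random.Random(seed)
    weights = [(i + 1) ** (-lam) for i in range(n_agents)]  # rank^(-lambda)
    inventories = [set() for _ in range(n_agents)]
    next_word = 0
    for _ in range(steps):
        s = rng.choices(range(n_agents), weights=weights)[0]  # popular agents talk more
        h = rng.randrange(n_agents)
        if h == s:
            continue
        if not inventories[s]:
            inventories[s].add(next_word)   # speaker invents a new word
            next_word += 1
        word = rng.choice(sorted(inventories[s]))
        if word in inventories[h]:          # success: both collapse to this word
            inventories[s] = {word}
            inventories[h] = {word}
        else:                               # failure: hearer learns the word
            inventories[h].add(word)
    return len({w for inv in inventories for w in inv})

n_final = naming_game()
print("distinct conventions remaining:", n_final)
```

Sweeping λ in such a toy model gives one way to probe the tradeoff the abstract describes between social scaling and the speed and breadth of convention diffusion.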
Sequestering the standard model vacuum energy.
Kaloper, Nemanja; Padilla, Antonio
2014-03-07
We propose a very simple reformulation of general relativity, which completely sequesters from gravity all of the vacuum energy from a matter sector, including all loop corrections, and renders all contributions from phase transitions automatically small. The idea is to make the dimensional parameters in the matter sector functionals of the 4-volume element of the Universe. For them to be nonzero, the Universe should be finite in spacetime. If this matter is the standard model of particle physics, our mechanism prevents any of its vacuum energy, classical or quantum, from sourcing the curvature of the Universe. The mechanism is consistent with the large hierarchy between the Planck scale, electroweak scale, and curvature scale, and with early-Universe cosmology, including inflation. Consequences of our proposal are that the vacuum curvature of an old and large universe is not zero, but very small, that w(DE) ≃ -1 is a transient, and that the Universe will collapse in the future.
Perspectives on scaling and multiscaling in passive scalar turbulence
NASA Astrophysics Data System (ADS)
Banerjee, Tirthankar; Basu, Abhik
2018-05-01
We revisit the well-known problem of multiscaling in substances passively advected by homogeneous and isotropic turbulent flows, i.e., passive scalar turbulence. To that end we propose a two-parameter continuum hydrodynamic model for an advected substance concentration θ, parametrized jointly by y and ȳ, which characterize the spatial scaling behavior of the variances of the advecting stochastic velocity and the stochastic additive driving force, respectively. We analyze it within a one-loop dynamic renormalization group method to calculate the multiscaling exponents of the equal-time structure functions of θ. We show how the interplay between the advective velocity and the additive force may lead to simple scaling or multiscaling. In one limit, our results reduce to the well-known results for the Kraichnan model of a passive scalar. Our framework of analysis should be of help for analytical approaches to the still intractable problem of fluid turbulence itself.
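The basic object in this abstract, the equal-time structure function S_p(r) = ⟨|θ(x+r) − θ(x)|^p⟩ ~ r^{ζ_p}, is easy to illustrate numerically. The sketch below generates a synthetic 1D Gaussian field with power spectrum ~ k^(−β), for which theory gives simple scaling ζ₂ = β − 1 (exponents linear in p, no multiscaling); a Gaussian field by construction cannot show the anomalous multiscaling corrections the paper computes. The value β = 5/3 and the fitting range are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, beta = 2**16, 5.0 / 3.0

# Synthesize theta with |theta_k| ~ k^(-beta/2) and random phases
k = np.fft.rfftfreq(N) * N
amp = np.zeros(k.size)
amp[1:] = k[1:] ** (-beta / 2)
phase = rng.uniform(0, 2 * np.pi, k.size)
theta = np.fft.irfft(amp * np.exp(1j * phase), n=N)

def S(p, r):
    """Equal-time structure function of order p at separation r."""
    d = np.abs(theta[r:] - theta[:-r])
    return (d ** p).mean()

# Fit the scaling exponent of S_2 over an inertial range of separations
rs = np.array([8, 16, 32, 64, 128, 256])
zeta2 = np.polyfit(np.log(rs), np.log([S(2, r) for r in rs]), 1)[0]
print(f"fitted zeta_2 = {zeta2:.2f} (simple-scaling prediction: {beta - 1:.2f})")
```

Multiscaling would appear as ζ_p bending below the linear law pζ₂/2 at high p, which is exactly what the one-loop renormalization group calculation in the paper quantifies.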
Simple and Multiple Endmember Mixture Analysis in the Boreal Forest
NASA Technical Reports Server (NTRS)
Roberts, Dar A.; Gamon, John A.; Qiu, Hong-Lie
2000-01-01
A key scientific objective of the original Boreal Ecosystem-Atmosphere Study (BOREAS) field campaign (1993-1996) was to obtain the baseline data required for modeling and predicting fluxes of energy, mass, and trace gases in the boreal forest biome. These data sets are necessary to determine the sensitivity of the boreal forest biome to potential climatic changes and potential biophysical feedbacks on climate. A considerable volume of remotely sensed and supporting field data was acquired by numerous researchers to meet this objective. By design, remote sensing and modeling were considered critical components for scaling efforts, extending point measurements from flux towers and field sites over larger spatial and longer temporal scales. A major focus of the BOREAS Follow-on program was concerned with integrating the diverse remotely sensed and ground-based data sets to address specific questions such as carbon dynamics at local to regional scales.
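The simple (fixed-endmember) mixture analysis named in the title models each pixel spectrum as a fractional combination of endmember spectra plus a residual; the "multiple endmember" variant (MESMA) additionally tries several candidate endmember sets per pixel and keeps the best fit. The sketch below shows the simple case with made-up 4-band spectra; the labels conifer, deciduous, and shade are illustrative, not BOREAS endmembers.

```python
import numpy as np

# Endmember matrix, shape (bands, endmembers); values are illustrative
E = np.array([[0.05, 0.08, 0.30, 0.20],    # "conifer"
              [0.06, 0.12, 0.45, 0.30],    # "deciduous"
              [0.01, 0.01, 0.02, 0.01]]).T # "shade"

# A pixel that is an exact 50/30/20 mixture of the three endmembers
pixel = 0.5 * E[:, 0] + 0.3 * E[:, 1] + 0.2 * E[:, 2]

# Enforce the sum-to-one constraint with an extra row, then least squares
A = np.vstack([E, np.ones(E.shape[1])])
b = np.append(pixel, 1.0)
fractions, *_ = np.linalg.lstsq(A, b, rcond=None)

# Per-pixel model fit, usually reported as an RMSE over bands
rmse = np.sqrt(((E @ fractions - pixel) ** 2).mean())
print("fractions:", np.round(fractions, 3), " RMSE:", round(rmse, 6))
```

In practice fractions are also constrained to be non-negative, and the band-wise RMSE is the criterion MESMA uses to choose among candidate endmember sets.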
On Two-Scale Modelling of Heat and Mass Transfer
NASA Astrophysics Data System (ADS)
Vala, J.; Št'astník, S.
2008-09-01
Modelling the macroscopic behaviour of materials consisting of several layers or components whose microscopic (at least stochastic) analysis is available, as well as the more general simulation of non-local phenomena, complicated coupled processes, etc., requires both a deeper understanding of physical principles and the development of mathematical theories and software algorithms. Starting from the relatively simple example of phase transformation in substitutional alloys, this paper sketches the general formulation of a nonlinear system of partial differential equations of evolution for heat and mass transfer (useful in mechanical and civil engineering, among other fields), corresponding to the conservation principles of thermodynamics at both the micro- and the macroscopic level, and suggests a scale-bridging algorithm based on robust finite element techniques. Some existence and convergence questions, namely those based on the construction of Rothe sequences and on the mathematical theory of two-scale convergence, are discussed, together with references to useful generalizations required by new technologies.
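The simplest closed-form instance of the scale-bridging idea is classical homogenization of a periodically layered material: the microscale layer conductivities collapse to a single effective macroscale conductivity, the harmonic mean for heat flow across the layers and the arithmetic mean along them. The layer values below are illustrative (e.g. an insulation layer against a dense one), not taken from the paper.

```python
# Microscale description: layer conductivities and volume fractions
k_layers = [0.04, 1.0]   # W/(m K), e.g. insulation vs. dense layer
f = [0.5, 0.5]           # volume fractions (equal-thickness layers)

# Effective macroscale conductivity across the layers (series, harmonic mean)
k_series = 1.0 / sum(fi / ki for fi, ki in zip(f, k_layers))

# Effective macroscale conductivity along the layers (parallel, arithmetic mean)
k_parallel = sum(fi * ki for fi, ki in zip(f, k_layers))

print(f"effective k across layers: {k_series:.4f} W/(m K)")
print(f"effective k along layers:  {k_parallel:.4f} W/(m K)")
```

The two-scale convergence theory cited in the abstract generalizes exactly this: it makes rigorous the passage from oscillating microscale coefficients to such effective macroscale ones for much broader classes of geometries and coupled nonlinear problems.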
Optimisation of confinement in a fusion reactor using a nonlinear turbulence model
NASA Astrophysics Data System (ADS)
Highcock, E. G.; Mandell, N. R.; Barnes, M.
2018-04-01
The confinement of heat in the core of a magnetic fusion reactor is optimised using a multidimensional optimisation algorithm. For the first time in such a study, the loss of heat due to turbulence is modelled at every stage using first-principles nonlinear simulations which accurately capture the turbulent cascade and large-scale zonal flows. The simulations utilise a novel approach, with gyrofluid treatment of the small-scale drift waves and gyrokinetic treatment of the large-scale zonal flows. A simple near-circular equilibrium with standard parameters is chosen as the initial condition. The figure of merit, fusion power per unit volume, is calculated, and then two control parameters, the elongation and triangularity of the outer flux surface, are varied, with the algorithm seeking to optimise the chosen figure of merit. A twofold increase in the plasma power per unit volume is achieved by moving to higher elongation and strongly negative triangularity.
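The structure of such an optimisation loop is simple even though each objective evaluation in the study is an expensive nonlinear turbulence simulation. The sketch below replaces that simulation with a made-up smooth surrogate that happens to peak at high elongation and negative triangularity, loosely echoing the reported trend; the surrogate, its peak location, and the search grid are all assumptions, not results from the paper.

```python
def figure_of_merit(kappa, delta):
    """Surrogate fusion-power-per-unit-volume; NOT a gyrokinetic run.

    Peaks at kappa = 2.0, delta = -0.4 by construction.
    """
    return -(kappa - 2.0) ** 2 - (delta + 0.4) ** 2

# Exhaustive search over the two shaping parameters; a real study would
# use a derivative-free optimiser since each evaluation is costly.
best = max(
    ((k, d)
     for k in [1.0 + 0.1 * i for i in range(15)]     # elongation 1.0 .. 2.4
     for d in [-0.6 + 0.1 * i for i in range(13)]),  # triangularity -0.6 .. 0.6
    key=lambda p: figure_of_merit(*p),
)
print("optimal (kappa, delta):", best)
```

Swapping the surrogate for a call that launches and post-processes a turbulence simulation recovers the workflow of the paper, with the optimiser budget then dominated entirely by simulation cost.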
ERIC Educational Resources Information Center
Shen, Linjun
As part of a longitudinal study of the growth of general medical knowledge among osteopathic medical students, a simple, convenient, and accurate vertical equating method was developed for constructing a scale for medical achievement. It was believed that Parts 1, 2, and 3 of the National Board of Osteopathic Medical Examiners' (NBOME) examination…